.. This work is licensed under a Creative Commons Attribution 4.0 International License.
.. http://creativecommons.org/licenses/by/4.0
============================
KVM4NFV Scenario-Description
============================
This document describes the procedure to deploy/test KVM4NFV scenarios in a nested virtualization
environment. This has been verified with the os-nosdn-kvm-ha, os-nosdn-kvm-noha,
os-nosdn-kvm_ovs_dpdk-ha, os-nosdn-kvm_ovs_dpdk-noha and os-nosdn-kvm_ovs_dpdk_bar-ha test scenarios.
21 +-----------------------------+---------------------------------------------+
23 | **Release** | **Features** |
25 +=============================+=============================================+
26 | | - Scenario Testing feature was not part of |
27 | Colorado | the Colorado release of KVM4NFV |
29 +-----------------------------+---------------------------------------------+
30 | | - High Availability/No-High Availability |
31 | | deployment configuration of KVM4NFV |
32 | | software suite using Fuel |
33 | | - Multi-node setup with 3 controller and |
34 | | 2 compute nodes are deployed for HA |
35 | Danube | - Multi-node setup with 1 controller and |
36 | | 3 compute nodes are deployed for NO-HA |
37 | | - Scenarios os-nosdn-kvm_ovs_dpdk-ha, |
38 | | os-nosdn-kvm_ovs_dpdk_bar-ha, |
39 | | os-nosdn-kvm_ovs_dpdk-noha, |
40 | | os-nosdn-kvm_ovs_dpdk_bar-noha |
42 +-----------------------------+---------------------------------------------+
43 | | - High Availability/No-High Availability |
44 | | deployment configuration of KVM4NFV |
45 | | software suite using Apex |
46 | | - Multi-node setup with 3 controller and |
47 | Euphrates | 2 compute nodes are deployed for HA |
48 | | - Multi-node setup with 1 controller and |
49 | | 1 compute node are deployed for NO-HA |
50 | | - Scenarios os-nosdn-kvm_ovs_dpdk-ha, |
51 | | os-nosdn-kvm_ovs_dpdk-noha, |
53 +-----------------------------+---------------------------------------------+
The purpose of testing the os-nosdn-kvm_ovs_dpdk-ha, os-nosdn-kvm_ovs_dpdk_bar-ha,
os-nosdn-kvm_ovs_dpdk-noha and os-nosdn-kvm_ovs_dpdk_bar-noha scenarios is to verify
the High Availability/No-High Availability deployment and configuration of
the OPNFV software suite with OpenStack and without SDN software.

This OPNFV software suite includes the latest OPNFV KVM4NFV software packages
with Linux kernel and QEMU patches for achieving low latency, and also OPNFV Barometer for traffic,
performance and platform monitoring.
When using the Fuel installer, the High Availability feature is achieved by deploying an OpenStack
multi-node setup with 1 Fuel-Master, 3 controller and 2 compute nodes. The No-High Availability
feature is achieved by deploying an OpenStack multi-node setup with 1 Fuel-Master, 1 controller
and 3 compute nodes.
When using the Apex installer, the High Availability feature is achieved by deploying an OpenStack
multi-node setup with 1 undercloud, 3 overcloud controllers and 2 overcloud compute nodes.
The No-High Availability feature is achieved by deploying an OpenStack multi-node setup with
1 undercloud, 1 overcloud controller and 1 overcloud compute node.
KVM4NFV packages will be installed on the compute nodes as part of the deployment.
The scenario testcases deploy a multi-node setup by using the OPNFV Fuel and Apex deployers.
- HARD DISK - minimum 500 GB
- Linux OS installed and running
- Nested virtualization enabled, which can be checked by:

.. code:: bash

   $ cat /sys/module/kvm_intel/parameters/nested

   $ cat /proc/cpuinfo | grep vmx

If nested virtualization is disabled, enable it by:

.. code:: bash

   $ modprobe kvm_intel
   $ echo Y > /sys/module/kvm_intel/parameters/nested
To make these settings persistent across reboots:

.. code:: bash

   $ cat << EOF > /etc/modprobe.d/kvm_intel.conf
   options kvm-intel nested=1
   options kvm-intel enable_shadow_vmcs=1
   options kvm-intel enable_apicv=1
   options kvm-intel ept=1
   EOF

   $ cat << EOF > /etc/sysctl.d/98-rp-filter.conf
   net.ipv4.conf.default.rp_filter = 0
   net.ipv4.conf.all.rp_filter = 0
   EOF

The sysctl settings take effect at the next boot, or immediately after running ``sysctl --system``.
**Enable network access after the installation**
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Login as the "root" user. After the installation completes, the Ethernet interfaces are not enabled
by default in CentOS 7; you need to change the line "ONBOOT=no" to "ONBOOT=yes" in the network
interface configuration file (such as ifcfg-enp6s0f0 or ifcfg-em1, whichever interface you want to
connect) in the /etc/sysconfig/network-scripts sub-directory. The default BOOTPROTO is dhcp in the
network interface configuration file. Then use the following command to enable network access:

.. code:: bash

   systemctl restart network
**Configuring Proxy**
~~~~~~~~~~~~~~~~~~~~~

If working behind a proxy server, create an apt.conf file in /etc/apt if it doesn't exist;
it is used to set the proxy for apt-get:

.. code:: bash

   Acquire::http::proxy "http://<username>:<password>@<proxy>:<port>/";
   Acquire::https::proxy "https://<username>:<password>@<proxy>:<port>/";
   Acquire::ftp::proxy "ftp://<username>:<password>@<proxy>:<port>/";
   Acquire::socks::proxy "socks://<username>:<password>@<proxy>:<port>/";

Edit /etc/yum.conf to work behind a proxy server by adding the line below:

.. code:: bash

   $ echo "proxy=http://<username>:<password>@<proxy>:<port>/" >> /etc/yum.conf
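apt and yum read proxy settings from their own configuration files, while several other tools used
later in this guide (git, curl, wget, pip) honor the standard proxy environment variables instead.
A minimal sketch, reusing the placeholder values from above (substitute your real proxy details):

```shell
# Placeholder proxy values, matching the apt/yum examples above.
export http_proxy="http://<username>:<password>@<proxy>:<port>/"
export https_proxy="$http_proxy"
```

Appending these two lines to ~/.bashrc makes them persistent across shell sessions.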
Since there is no redsocks package for CentOS Linux release 7.2.1511, you need to build redsocks
from source yourself. Use the following command to create a "proxy_redsocks" sub-directory at /root:

.. code:: bash

   $ mkdir ~/proxy_redsocks

Since you can't download files on your CentOS system yet, use another CentOS or Ubuntu system to
download the redsocks source for CentOS into a file "redsocks-src":

.. code:: bash

   wget -O redsocks-src --no-check-certificate https://github.com/darkk/redsocks/zipball/master

Also download libevent-devel-2.0.21-4.el7.x86_64.rpm by:

.. code:: bash

   wget ftp://fr2.rpmfind.net/linux/centos/7.2.1511/os/x86_64/Packages/libevent-devel-2.0.21-4.el7.x86_64.rpm

Copy both the redsocks-src and libevent-devel-2.0.21-4.el7.x86_64.rpm files into ~/proxy_redsocks
on your CentOS system with "scp".

Back on your CentOS system, first install libevent-devel using libevent-devel-2.0.21-4.el7.x86_64.rpm:

.. code:: bash

   yum install -y libevent-devel-2.0.21-4.el7.x86_64.rpm

Then unpack the redsocks source, build it, and copy the resulting binary into ~/proxy_redsocks:

.. code:: bash

   cd ~/proxy_redsocks
   unzip redsocks-src
   cd darkk-redsocks-78a73fc
   make
   cp redsocks ~/proxy_redsocks/.
Create a redsocks.conf in ~/proxy_redsocks with the following contents (the proxy ip and port are
placeholders for your site's socks5 proxy):

.. code:: bash

   base {
           log_debug = off;
           log_info = on;
           log = "file:/root/proxy.log";
           daemon = on;
           redirector = iptables;
   }

   redsocks {
           // socks5 proxy server
           ip = <proxy-ip>;
           port = <proxy-port>;
           type = socks5;
           local_ip = 127.0.0.1;
           local_port = 6666;
   }
Start the redsocks service by:

.. code:: bash

   $ cd ~/proxy_redsocks
   $ ./redsocks -c redsocks.conf

The redsocks service is not persistent; you need to execute the above-mentioned commands after
every reboot.
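If you prefer not to restart redsocks by hand after each reboot, one option (not part of the
original procedure; the unit name and paths are assumptions matching the layout above) is a small
systemd unit:

```ini
# /etc/systemd/system/redsocks.service (hypothetical unit)
[Unit]
Description=redsocks transparent proxy redirector
After=network.target

[Service]
# "forking" assumes daemon = on in redsocks.conf
Type=forking
WorkingDirectory=/root/proxy_redsocks
ExecStart=/root/proxy_redsocks/redsocks -c /root/proxy_redsocks/redsocks.conf

[Install]
WantedBy=multi-user.target
```

After installing the unit, ``systemctl enable redsocks`` starts it on every boot.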
Create intc-proxy.sh in ~/proxy_redsocks with the following contents, and make it executable with
"chmod +x intc-proxy.sh":

.. code:: bash

   iptables -t nat -N REDSOCKS
   iptables -t nat -A REDSOCKS -d 0.0.0.0/8 -j RETURN
   iptables -t nat -A REDSOCKS -d 10.0.0.0/8 -j RETURN
   iptables -t nat -A REDSOCKS -d 127.0.0.0/8 -j RETURN
   iptables -t nat -A REDSOCKS -d 169.254.0.0/16 -j RETURN
   iptables -t nat -A REDSOCKS -d 172.16.0.0/12 -j RETURN
   iptables -t nat -A REDSOCKS -d 192.168.0.0/16 -j RETURN
   iptables -t nat -A REDSOCKS -d 224.0.0.0/4 -j RETURN
   iptables -t nat -A REDSOCKS -d 240.0.0.0/4 -j RETURN
   iptables -t nat -A REDSOCKS -p tcp -j REDIRECT --to-ports 6666
   iptables -t nat -A REDSOCKS -p udp -j REDIRECT --to-ports 8888
   iptables -t nat -A OUTPUT -p tcp -j REDSOCKS
   iptables -t nat -A PREROUTING -p tcp -j REDSOCKS

Enable the REDSOCKS nat chain rules by:

.. code:: bash

   $ ~/proxy_redsocks/intc-proxy.sh
These REDSOCKS nat chain rules are not persistent; you need to execute the above-mentioned
commands after every reboot.
**Network Time Protocol (NTP) setup and configuration**
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Install NTP:

.. code:: bash

   $ sudo apt-get update
   $ sudo apt-get install -y ntp

Insert the following two lines after the "server ntp.ubuntu.com" line and before the "# Access
control configuration; see `link`_ for" line in the /etc/ntp.conf file:

.. _link: /usr/share/doc/ntp-doc/html/accopt.html

.. code:: bash

   server 127.127.1.0
   fudge 127.127.1.0 stratum 10

Restart the ntp server to apply the changes:

.. code:: bash

   $ sudo service ntp restart
There are three ways of performing scenario testing:
**1 Clone the fuel repo:**

.. code:: bash

   $ git clone https://gerrit.opnfv.org/gerrit/fuel.git

**2 Check out the specific branch version to deploy:**

The default branch is master; to use a stable release version, see below.

To check the current branch:

.. code:: bash

   $ git branch

To check out a specific branch:

.. code:: bash

   $ git checkout stable/Colorado
**3 Building the Fuel iso:**

Provide the necessary options that are required to build an iso, and create a ``customized iso``
as per the deployment needs.

Alternatively, download the latest stable Fuel iso from `here`_.

.. _here: http://artifacts.opnfv.org/fuel.html
**4 Creating a new deployment scenario**

``(i). Naming the scenario file``

Include the new deployment scenario yaml file in ~/fuel/deploy/scenario/. The file name should
adhere to the following format:

.. code:: bash

   <ha | no-ha>_<SDN Controller>_<feature-1>_..._<feature-n>.yaml
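For example, the HA KVM/OVS/DPDK scenario file referenced later in this guide follows this scheme:

```
ha_nfv-kvm_nfv-ovs-dpdk_heat_ceilometer_scenario.yaml
```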
``(ii). Scenario metadata``

The deployment configuration file should contain configuration metadata as stated below
(the placeholder values are illustrative):

.. code:: bash

   deployment-scenario-metadata:
     title: <title>
     version: <version>
     created: <date>
     comment: <comment>
``(iii). "stack-extentions" Module``

To include fuel plugins in the deployment configuration file, use the "stack-extentions" key:

.. code:: bash

   stack-extentions:
     - module: fuel-plugin-collectd-ceilometer
       module-config-name: fuel-barometer
       module-config-version: 1.0.0
       module-config-override:
         #module-config overrides

The "module-config-name" and "module-config-version" should be the same as the name and version
of the corresponding plugin config yaml file in ~/fuel/deploy/config/plugins/.

The "module-config-override" key is used to configure the plugin by overriding the corresponding
keys in the plugin config yaml file present in ~/fuel/deploy/config/plugins/.
``(iv). "dea-override-config" Module``

To configure the HA/No-HA mode, the network segmentation type and the role-to-node assignments,
use the "dea-override-config" key.
.. code:: bash

   dea-override-config:
     environment:
       mode: ha
       net_segment_type: tun
     nodes:
     - id: 1
       interfaces: interfaces_1
       role: mongo,controller,opendaylight
     - id: 2
       interfaces: interfaces_1
       role: mongo,controller
     - id: 3
       interfaces: interfaces_1
       role: mongo,controller
     - id: 4
       interfaces: interfaces_1
       role: ceph-osd,compute
     - id: 5
       interfaces: interfaces_1
       role: ceph-osd,compute
     settings:
       editable:
         storage:
           ephemeral_ceph:
             description: Configures Nova to store ephemeral volumes in RBD.
               This works best if Ceph is enabled for volumes and images, too.
               Enables live migration of all types of Ceph backed VMs (without this
               option, live migration will only work with VMs launched from
               Cinder volumes).
             label: Ceph RBD for ephemeral volumes (Nova)
             value: true
           images_ceph:
             description: Configures Glance to use the Ceph RBD backend to store
               images. If enabled, this option will prevent Swift from installing.
             label: Ceph RBD for images (Glance)
             value: true
             restrictions:
             - settings:storage.images_vcenter.value == true: Only one Glance
                 backend could be selected.
Under the "dea-override-config" key you should provide at least {environment:{mode:'value'},
{net_segment_type:'value'} and {nodes:1,2,...}; you can also enable additional stack features such
as ceph and heat, which override the corresponding keys in dea_base.yaml and dea_pod_override.yaml.
``(v). "dha-override-config" Module``

In order to configure the pod dha definition, use the "dha-override-config" key.
This is an optional key, present at the end of the scenario file.
``(vi). Mapping to short scenario name``

The scenario.yaml file is used to map the short names of scenarios to the one or more deployment
scenario configuration yaml files. The short scenario names should follow the scheme below:

.. code:: bash

   [os]-[controller]-[feature]-[mode]-[option]

Please note that this field is needed in order to select parent jobs to list and to set up blocking
relations between them.
[controller]: mandatory
example values: nosdn, ocl, odl, onos

[mode]: mandatory
possible values: ha, noha

[option]: optional
Used for the scenarios that do not fit into the naming scheme.
The optional field should not be included in the short scenario name if there is no optional
scenario.
Example short scenario names:

1. os-nosdn-kvm_ovs_dpdk-ha
2. os-nosdn-kvm_ovs_dpdk_bar-ha
Example of how short scenario names are mapped to configuration yaml files:

.. code:: bash

   os-nosdn-kvm_ovs_dpdk-ha:
      configfile: ha_nfv-kvm_nfv-ovs-dpdk_heat_ceilometer_scenario.yaml
Note:

- ( - ) is used as a separator between fields. [os-nosdn-kvm_ovs_dpdk-ha]

- ( _ ) is used to separate values belonging to the same field. [os-nosdn-kvm_ovs_bar-ha]
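To illustrate the two separator rules, the bash sketch below (a hypothetical helper, not part of
the OPNFV tooling) splits a short scenario name into its fields:

```shell
# '-' separates the fields of a short scenario name; '_' joins
# multiple values inside one field (e.g. the feature kvm_ovs_dpdk).
name="os-nosdn-kvm_ovs_dpdk-ha"
IFS='-' read -r os controller feature mode <<< "$name"
echo "controller=$controller feature=$feature mode=$mode"
# prints: controller=nosdn feature=kvm_ovs_dpdk mode=ha
```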
**5 Deploying the scenario**

Command to deploy the os-nosdn-kvm_ovs_dpdk-ha scenario:

.. code:: bash

   $ sudo ./deploy.sh -f -b file:///tmp/opnfv-fuel/deploy/config -l devel-pipeline -p default \
     -s ha_nfv-kvm_nfv-ovs-dpdk_heat_ceilometer_scenario.yaml -i file:///tmp/opnfv.iso

where,

``-b`` is used to specify the configuration directory

``-f`` is used to re-deploy on the existing deployment

``-i`` is used to specify the image downloaded from artifacts

``-l`` is used to specify the lab name

``-p`` is used to specify the POD name

``-s`` is used to specify the scenario file

Note:

Check ``$ sudo ./deploy.sh -h`` for further information.
Apex installer uses CentOS as the platform.

**1 Install packages:**

Install the necessary packages by following:

.. code:: bash

   yum install -y git rpm-build python-setuptools python-setuptools-devel
   yum install -y epel-release gcc
   curl -O https://bootstrap.pypa.io/get-pip.py
   yum install -y python3 python34
   /usr/bin/python3.4 get-pip.py
   yum install -y python34-devel python34-setuptools
   yum install -y libffi-devel python-devel openssl-devel
   yum -y install libxslt-devel libxml2-devel
Then you can use "dev_deploy_check.sh" in the Apex installer source to install the remaining
necessary packages by following:

.. code:: bash

   git clone https://gerrit.opnfv.org/gerrit/p/apex.git
   export CONFIG=$(pwd)/apex/build
   export LIB=$(pwd)/apex/lib
   export PYTHONPATH=$PYTHONPATH:$(pwd)/apex/lib/python
   cd apex/ci
   ./dev_deploy_check.sh
   yum install -y python2-oslo-config python2-debtcollector
**2 Create ssh key:**

Use the following command to create an ssh key; when asked for a passphrase, just press Enter for
an empty passphrase:

.. code:: bash

   ssh-keygen -t rsa

Then prepare the authorized_keys for Apex scenario deployment:

.. code:: bash

   cat $HOME/.ssh/id_rsa.pub >> $HOME/.ssh/authorized_keys
**3 Create default pool:**

Use the following command to define the default pool device:

.. code:: bash

   virsh pool-define /dev/stdin <<EOF
   <pool type='dir'>
     <name>default</name>
     <target>
       <path>/var/lib/libvirt/images</path>
     </target>
   </pool>
   EOF

Use the following commands to start the default pool device and set it to autostart:

.. code:: bash

   virsh pool-start default
   virsh pool-autostart default

Use the following command to verify that the default pool device was created, started and set to
autostart successfully:

.. code:: bash

   virsh pool-info default
**4 Get Apex source code:**

Get the Apex installer source code:

.. code:: bash

   git clone https://gerrit.opnfv.org/gerrit/p/apex.git
**5 Modify code to work behind a proxy:**

In the "lib" sub-directory of the Apex source, in the "common-functions.sh" file, change line 284
from "if ping -c 2 www.google.com > /dev/null; then" to "if curl www.google.com > /dev/null; then",
since we can't ping www.google.com from behind the Intel proxy.
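The same edit can be scripted with sed; the snippet below demonstrates the substitution on a
scratch copy (the real target is lib/common-functions.sh in the Apex source tree):

```shell
# Write the original line to a scratch file, then apply the
# ping-to-curl substitution described in step 5.
printf 'if ping -c 2 www.google.com > /dev/null; then\n' > /tmp/common-functions.demo
sed -i 's|ping -c 2 www.google.com|curl www.google.com|' /tmp/common-functions.demo
cat /tmp/common-functions.demo
# prints: if curl www.google.com > /dev/null; then
```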
**6 Set up the build environment:**

Set up the build environment by:

.. code:: bash

   export BASE=$(pwd)/apex/build
   export LIB=$(pwd)/apex/lib
   export PYTHONPATH=$PYTHONPATH:$(pwd)/apex/lib/python
   export IMAGES=$(pwd)/apex/.build
**7 Build Apex installer:**

Build the undercloud image by:

.. code:: bash

   cd ~/apex/build
   make undercloud

You can look at the targets in ~/apex/build/Makefile to build an image for a specific feature.
The following shows how to build a vanilla ODL image (this can be used to build the overcloud
image for the basic (nosdn-nofeature) and opendaylight test scenarios):

.. code:: bash

   make overcloud-opendaylight

You can also build the complete set of images (undercloud, overcloud-full,
overcloud-opendaylight, and so on).
**8 Modification of network_settings.yaml:**

Since we are working behind a proxy, we need to modify the network_settings.yaml in
~/apex/config/network to make the deployment work properly. In order to avoid accidentally
checking our modification into the repo, it is recommended that you copy "network_settings.yaml"
to "intc_network_settings.yaml" in ~/apex/config/network and make the following modification in
intc_network_settings.yaml:

Change the dns_nameservers settings from

.. code:: bash

   dns_servers: ["8.8.8.8", "8.8.4.4"]

to

.. code:: bash

   dns_servers: ["<ip-address>"]

Also, you need to modify deploy.sh in apex/ci from "ntp_server="pool.ntp.org"" to
"ntp_server="<ip-address>"" to reflect the fact that we can't reach an outside NTP server; just
use the local NTP server configured earlier.
**9 Commands to deploy the scenario:**

The following shows the commands used to deploy the os-nosdn-kvm_ovs_dpdk-noha scenario behind
the proxy:

.. code:: bash

   ./dev_deploy_check.sh
   ./deploy.sh -v --ping-site <ping_ip-address> --dnslookup-site <dns_ip-address> -n \
   ~/apex/config/network/intc_network_settings.yaml -d \
   ~/apex/config/deploy/os-nosdn-kvm_ovs_dpdk-noha.yaml
**10 Accessing the Overcloud dashboard:**

If the deployment completes successfully, the last few output lines from the deployment will look
like the following:

.. code:: bash

   INFO: Undercloud VM has been setup to NAT Overcloud public network
   Undercloud IP: <ip-address>, please connect by doing 'opnfv-util undercloud'
   Overcloud dashboard available at http://<ip-address>/dashboard
   INFO: Post Install Configuration Complete
**11 Accessing the Undercloud and Overcloud through the command line:**

At the end of the deployment we obtain the Undercloud ip. One can login to the Undercloud and
obtain the Overcloud ip as follows:

.. code:: bash

   opnfv-util undercloud
   source stackrc
   nova list
   ssh heat-admin@<overcloud-ip>
Install OPNFV-Playground (the tool chain to deploy/test CI scenarios in fuel@opnfv):

.. code:: bash

   $ git clone https://github.com/jonasbjurel/OPNFV-Playground.git
   $ cd OPNFV-Playground/ci_fuel_opnfv/

- Follow the README.rst in the ~/OPNFV-Playground/ci_fuel_opnfv sub-folder to complete all
  necessary installation and setup.

- Section "RUNNING THE PIPELINE" in README.rst explains how to use this ci_pipeline to deploy/test
  CI test scenarios; you can also use:

.. code:: bash

   ./ci_pipeline.sh --help   ## to learn more options
``1 Downgrade paramiko package from 2.x.x to 1.10.0``

The paramiko package 2.x.x doesn't work with the OPNFV-playground tool chain now; Jira ticket
FUEL-188 has been raised for the same.

Check the paramiko package version by following the below steps on your system:

.. code:: bash

   $ python
   Python 2.7.6 (default, Jun 22 2015, 17:58:13) [GCC 4.8.2] on linux2
   Type "help", "copyright", "credits" or "license" for more information.
   >>> import paramiko
   >>> print paramiko.__version__
   >>> exit()

You will get the current paramiko package version; if it is 2.x.x, uninstall this version by:

.. code:: bash

   $ sudo pip uninstall paramiko

Ubuntu 14.04 LTS has the python-paramiko package (1.10.0); install it by:

.. code:: bash

   $ sudo apt-get install python-paramiko

Verify it by following:

.. code:: bash

   $ python
   >>> import paramiko
   >>> print paramiko.__version__
   >>> exit()
``2 Clone the fuel@opnfv``

Check out the specific version of the specific branch of fuel@opnfv:

.. code:: bash

   $ git clone https://gerrit.opnfv.org/gerrit/fuel.git

By default it will be the master branch; in order to deploy on the Colorado/Danube branch, do:

.. code:: bash

   $ git checkout stable/Danube
``3 Creating the scenario``

Implement the scenario file as described in 3.1.4.
``4 Deploying the scenario``

You can use the following commands to deploy/test the os-nosdn-kvm_ovs_dpdk-(no)ha and
os-nosdn-kvm_ovs_dpdk_bar-(no)ha scenarios:

.. code:: bash

   $ cd ~/OPNFV-Playground/ci_fuel_opnfv/

For os-nosdn-kvm_ovs_dpdk-ha:

.. code:: bash

   $ ./ci_pipeline.sh -r ~/fuel -i /root/fuel.iso -B -n intel-sc -s os-nosdn-kvm_ovs_dpdk-ha

For os-nosdn-kvm_ovs_dpdk_bar-ha:

.. code:: bash

   $ ./ci_pipeline.sh -r ~/fuel -i /root/fuel.iso -B -n intel-sc -s os-nosdn-kvm_ovs_dpdk_bar-ha

The "ci_pipeline.sh" first clones the local fuel repo, then deploys the
os-nosdn-kvm_ovs_dpdk-ha/os-nosdn-kvm_ovs_dpdk_bar-ha scenario from the given ISO, and runs
Functest and Yardstick tests. The log of the deployment/test (ci.log) can be found in
~/OPNFV-Playground/ci_fuel_opnfv/artifact/master/YYYY-MM-DD—HH.mm, where YYYY-MM-DD—HH.mm is the
date/time you started "ci_pipeline.sh".

Note:

Check ``$ ./ci_pipeline.sh -h`` for further information.
The os-nosdn-kvm_ovs_dpdk-(no)ha and os-nosdn-kvm_ovs_dpdk_bar-(no)ha scenarios can be executed
from the jenkins project:

HA scenarios:

1. "fuel-os-nosdn-kvm_ovs_dpdk-ha-baremetal-daily-master" (os-nosdn-kvm_ovs_dpdk-ha)
2. "fuel-os-nosdn-kvm_ovs_dpdk_bar-ha-baremetal-daily-master" (os-nosdn-kvm_ovs_dpdk_bar-ha)
3. "apex-os-nosdn-kvm_ovs_dpdk-ha-baremetal-master" (os-nosdn-kvm_ovs_dpdk-ha)

NOHA scenarios:

1. "fuel-os-nosdn-kvm_ovs_dpdk-noha-virtual-daily-master" (os-nosdn-kvm_ovs_dpdk-noha)
2. "fuel-os-nosdn-kvm_ovs_dpdk_bar-noha-virtual-daily-master" (os-nosdn-kvm_ovs_dpdk_bar-noha)
3. "apex-os-nosdn-kvm_ovs_dpdk-noha-baremetal-master" (os-nosdn-kvm_ovs_dpdk-noha)