.. This work is licensed under a Creative Commons Attribution 4.0 International
   License.
.. http://creativecommons.org/licenses/by/4.0
.. (c) OPNFV, 2016-2019 Intel Corporation.

.. Convention for heading levels in Yardstick documentation:

   ======= Heading 0 (reserved for the title in a document)
   ------- Heading 1
   ^^^^^^^ Heading 2
   +++++++ Heading 3
   ''''''' Heading 4

   Avoid deeper levels because they do not render well.

.. _OVS-DPDK: http://docs.openvswitch.org/en/latest/intro/install/dpdk/
.. _devstack: https://docs.openstack.org/devstack/pike/
.. _OVS-DPDK-versions: http://docs.openvswitch.org/en/latest/faq/releases/
The steps needed to run Yardstick with NSB testing are:

* Install Yardstick (NSB testing).
* Setup/reference ``pod.yaml`` describing the test topology.
* Create/reference the test configuration yaml file.

Refer to :doc:`04-installation` for more information on Yardstick
installation.

Several prerequisites are needed for Yardstick (VNF testing):

* Python modules: pyzmq, pika.
Hardware & Software Ingredients
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

======= ===================
Item    Description
======= ===================
kernel  4.4.0-34-generic
======= ===================
Boot and BIOS settings:

============= =================================================
Boot settings default_hugepagesz=1G hugepagesz=1G hugepages=16
              hugepagesz=2M hugepages=2048 isolcpus=1-11,22-33
              nohz_full=1-11,22-33 rcu_nocbs=1-11,22-33
              iommu=on iommu=pt intel_iommu=on
              Note: nohz_full and rcu_nocbs disable Linux
              kernel interrupts on the isolated CPUs
BIOS          CPU Power and Performance Policy <Performance>
              Enhanced Intel® SpeedStep® Tech Disabled
              Hyper-Threading Technology (If supported) Enabled
              Virtualization Technology Enabled
              Intel(R) VT for Direct I/O Enabled
============= =================================================
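As a quick sanity check, the hugepage memory reserved by the example boot
settings above can be computed by hand. A minimal sketch (the page counts are
the ones from the table):

```shell
# Sketch: memory reserved by the example boot settings above
# (hugepagesz=1G hugepages=16 plus hugepagesz=2M hugepages=2048)
gb_pages=16
mb_pages=2048
echo "$(( gb_pages * 1024 + mb_pages * 2 )) MiB reserved for hugepages"
```

This prints ``20480 MiB reserved for hugepages``, i.e. 20 GiB of RAM is taken
away from the general allocator at boot, so size the host memory accordingly.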
Install Yardstick (NSB Testing)
-------------------------------

Yardstick with NSB can be installed using ``nsb_setup.sh``.
The ``nsb_setup.sh`` script allows you to:

1. Install Yardstick in the specified mode: bare metal or container.
   Refer to :doc:`04-installation`.
2. Install package dependencies on remote servers used as traffic generator or
   sample VNF. Install DPDK, sample VNFs, TREX and collectd.
   Add such servers to the ``install-inventory.ini`` file, to either the
   ``yardstick-standalone`` or the ``yardstick-baremetal`` server group.
   It configures IOMMU, hugepages, open file limits, CPU isolation, etc.
3. Build the VM image, either nsb or normal. The nsb VM image is used to run
   Yardstick sample VNF tests, like vFW, vACL, vCGNAPT, etc.
   The normal VM image is used to run Yardstick ping tests in the OpenStack
   context.
4. Add the nsb or normal VM image to OpenStack together with OpenStack
   variables.

First, configure the network proxy, either by using environment variables or
by setting the global environment file.

Set the environment in the file::
   http_proxy='http://proxy.company.com:port'
   https_proxy='http://proxy.company.com:port'

Set environment variables:

.. code-block:: console

   export http_proxy='http://proxy.company.com:port'
   export https_proxy='http://proxy.company.com:port'

Download the source code and check out the latest stable branch:

.. code-block:: console

   git clone https://gerrit.opnfv.org/gerrit/yardstick
   cd yardstick
   # Switch to latest stable branch
   git checkout stable/gambia
Modify the Yardstick installation inventory used by Ansible::

   cat ./ansible/install-inventory.ini
   [jumphost]
   localhost ansible_connection=local

   # The section below is only for backward compatibility.
   # It will be removed later.
   [yardstick:children]
   jumphost

   [yardstick-baremetal]
   baremetal ansible_host=192.168.2.51 ansible_connection=ssh

   [yardstick-standalone]
   standalone ansible_host=192.168.2.52 ansible_connection=ssh

   [all:vars]
   # Uncomment credentials below if needed
   ansible_user=root
   ansible_ssh_pass=root
   # ansible_ssh_private_key_file=/root/.ssh/id_rsa
   # When IMG_PROPERTY is neither normal nor nsb, set
   # "path_to_vm=/path/to/image" to add it to OpenStack
   # path_to_img=/tmp/workspace/yardstick-image.img

   # List of CPUs to be isolated (not used by default)
   # Grub line will be extended with:
   # "isolcpus=<ISOL_CPUS> nohz=on nohz_full=<ISOL_CPUS> rcu_nocbs=<ISOL_CPUS>"
   # ISOL_CPUS=2-27,30-55 # physical cpu's for all NUMA nodes, four cpu's reserved
Before running ``nsb_setup.sh``, make sure Python is installed on all servers
added to the ``yardstick-standalone`` and ``yardstick-baremetal`` groups.

.. note:: SSH access without password needs to be configured for all your
   nodes defined in the ``install-inventory.ini`` file.
   If you want to use password authentication you need to install
   ``sshpass``::

      sudo -EH apt-get install sshpass

.. note:: A VM image built by other means than Yardstick can be added to
   OpenStack. Uncomment and set the correct path to the VM image in the
   ``install-inventory.ini`` file::

      path_to_img=/tmp/workspace/yardstick-image.img

.. note:: CPU isolation can be applied to the remote servers, e.g.:
   ``ISOL_CPUS=2-27,30-55``. Uncomment and modify accordingly in the
   ``install-inventory.ini`` file.
By default, ``nsb_setup.sh`` pulls a Yardstick image based on Ubuntu 16.04
from Docker Hub, starts a container, builds the NSB VM image based on Ubuntu
16.04, and installs packages on the servers given in the
``yardstick-standalone`` and ``yardstick-baremetal`` host groups.

To pull a Yardstick image built on Ubuntu 18.04, run::

   ./nsb_setup.sh -i opnfv/yardstick-ubuntu-18.04:latest

To change the default behavior, modify the parameters for ``install.yaml`` in
the ``nsb_setup.sh`` file.

Refer to chapter :doc:`04-installation` for more details on ``install.yaml``
parameters.

To execute an installation for a **BareMetal** or a **Standalone context**::

   ./nsb_setup.sh

To execute an installation for an **OpenStack** context::

   ./nsb_setup.sh <path to admin-openrc.sh>

.. warning:: Yardstick may not be operational after a distribution kernel
   update, if it had been installed before. Run ``nsb_setup.sh`` again to
   resolve this.

.. warning:: The Yardstick VM image (NSB or normal) cannot be built inside a
   VM.

.. warning:: ``nsb_setup.sh`` configures hugepages, CPU isolation and IOMMU
   on the grub command line. A reboot of the servers in the
   ``yardstick-standalone`` or ``yardstick-baremetal`` groups in the file
   ``install-inventory.ini`` is required to apply those changes.
The above commands will set up Docker with the latest Yardstick code. To
execute::

   docker exec -it yardstick bash

.. note:: It may be needed to configure tty in the docker container to extend
   the command line character length, for example::

      stty size rows 58 cols 234

It will also automatically download all the packages needed for the NSB
testing setup. Refer to chapter :doc:`04-installation` for more on Docker:
:ref:`Install Yardstick using Docker`.
Bare Metal context example
^^^^^^^^^^^^^^^^^^^^^^^^^^

Let's assume there are three servers acting as TG, sample VNF DUT and jump
host.

Perform the following steps to install NSB:

1. Clone the Yardstick repo to the jump host.
2. Add the TG and DUT servers to the ``yardstick-baremetal`` group in the
   ``install-inventory.ini`` file to install NSB and dependencies. Install
   Python on the servers.
3. Start the deployment using the docker image based on Ubuntu 16:

   .. code-block:: console

      ./nsb_setup.sh

4. Reboot the bare metal servers.
5. Enter the yardstick container, modify the pod yaml file and run tests.
Standalone context example for Ubuntu 18
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Let's assume there are three servers acting as TG, sample VNF DUT and jump
host. Ubuntu 18 is installed on all servers.

Perform the following steps to install NSB:

1. Clone the Yardstick repo to the jump host.
2. Add the TG server to the ``yardstick-baremetal`` group in the
   ``install-inventory.ini`` file to install NSB and dependencies.
   Add the server where the VM with the sample VNF will be deployed to the
   ``yardstick-standalone`` group in the ``install-inventory.ini`` file.
   The target VM image named ``yardstick-nsb-image.img`` will be placed in
   ``/var/lib/libvirt/images/``.
   Install Python on the servers.
3. Modify ``nsb_setup.sh`` on the jump host:

   .. code-block:: console

      ansible-playbook \
      -e IMAGE_PROPERTY='nsb' \
      -e OS_RELEASE='bionic' \
      -e INSTALLATION_MODE='container_pull' \
      -e YARD_IMAGE_ARCH='amd64' ${extra_args} \
      -i install-inventory.ini install.yaml

4. Start the deployment with the Yardstick docker image based on Ubuntu 18:

   .. code-block:: console

      ./nsb_setup.sh -i opnfv/yardstick-ubuntu-18.04:latest -o <openrc_file>

5. Reboot the servers.
6. Enter the yardstick container, modify the pod yaml file and run tests.
Environment parameters and credentials
--------------------------------------

Configure yardstick.conf
^^^^^^^^^^^^^^^^^^^^^^^^

If you did not run ``yardstick env influxdb`` inside the container to generate
``yardstick.conf``, then create the config file manually (run inside the
container)::

   cp ./etc/yardstick/yardstick.conf.sample /etc/yardstick/yardstick.conf
   vi /etc/yardstick/yardstick.conf

Add ``trex_path``, ``trex_client_lib`` and ``bin_path`` to the ``nsb``
section::

   [DEFAULT]
   debug = True
   dispatcher = influxdb

   [dispatcher_influxdb]
   timeout = 5
   target = http://{YOUR_IP_HERE}:8086
   db_name = yardstick
   username = root
   password = root

   [nsb]
   trex_path=/opt/nsb_bin/trex/scripts
   bin_path=/opt/nsb_bin
   trex_client_lib=/opt/nsb_bin/trex_client/stl
Run Yardstick - Network Service Testcases
-----------------------------------------

NS testing - using yardstick CLI
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

See :doc:`04-installation`.

Connect to the Yardstick container::

   docker exec -it yardstick /bin/bash

If you're running ``heat`` testcases and ``nsb_setup.sh`` was not used::

   source /etc/yardstick/openstack.creds

In addition to the above, you need to set the ``EXTERNAL_NETWORK`` for
OpenStack::

   export EXTERNAL_NETWORK="<openstack public network>"

Finally, you should be able to run the testcase::

   yardstick --debug task start yardstick/samples/vnf_samples/nsut/<vnf>/<test case>
Network Service Benchmarking - Bare-Metal
-----------------------------------------

Bare-Metal Config pod.yaml describing Topology
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Bare-Metal 2-Node setup
+++++++++++++++++++++++

.. code-block:: console

   +----------+              +----------+
   |          |              |          |
   |          | (0)----->(0) |          |
   |    TG1   |              |    DUT   |
   |          |              |          |
   |          | (n)<-----(n) |          |
   +----------+              +----------+
   trafficgen_0                   vnf

Bare-Metal 3-Node setup - Correlated Traffic
++++++++++++++++++++++++++++++++++++++++++++

.. code-block:: console

   +----------+              +----------+            +------------+
   |          | (0)----->(0) |          |            |    UDP     |
   |    TG1   |              |    DUT   |            |   Replay   |
   |          |              |          |(1)<---->(0)|            |
   +----------+              +----------+            +------------+
   trafficgen_0                   vnf                 trafficgen_1

Bare-Metal Config pod.yaml
^^^^^^^^^^^^^^^^^^^^^^^^^^

Before executing Yardstick test cases, make sure that ``pod.yaml`` reflects
the topology and update all the required fields::

   cp <yardstick>/etc/yardstick/nodes/pod.yaml.nsb.sample /etc/yardstick/nodes/pod.yaml
.. code-block:: YAML

   nodes:
   -
       name: trafficgen_0
       role: TrafficGen
       ip: 1.1.1.1
       user: root
       password: r00t
       interfaces:
           xe0:  # logical name from topology.yaml and vnfd.yaml
               driver: i40e # default kernel driver
               local_ip: "152.16.100.20"
               netmask: "255.255.255.0"
               local_mac: "00:00:00:00:00:01"
           xe1:  # logical name from topology.yaml and vnfd.yaml
               driver: i40e # default kernel driver
               local_ip: "152.16.40.20"
               netmask: "255.255.255.0"
               local_mac: "00:00:00:00:00:02"
   -
       name: vnf
       role: vnf
       ip: 1.1.1.2
       user: root
       password: r00t
       host: 1.1.1.2 # BM - host == ip, virtualized env - Host - compute node
       interfaces:
           xe0:  # logical name from topology.yaml and vnfd.yaml
               driver: i40e # default kernel driver
               local_ip: "152.16.100.19"
               netmask: "255.255.255.0"
               local_mac: "00:00:00:00:00:03"
           xe1:  # logical name from topology.yaml and vnfd.yaml
               driver: i40e # default kernel driver
               local_ip: "152.16.40.19"
               netmask: "255.255.255.0"
               local_mac: "00:00:00:00:00:04"
       routing_table:
       - network: "152.16.100.20"
         netmask: "255.255.255.0"
         gateway: "152.16.100.20"
         if: "xe0"
       - network: "152.16.40.20"
         netmask: "255.255.255.0"
         gateway: "152.16.40.20"
         if: "xe1"
       nd_route_tbl:
       - network: "0064:ff9b:0:0:0:0:9810:6414"
         netmask: "112"
         gateway: "0064:ff9b:0:0:0:0:9810:6414"
         if: "xe0"
       - network: "0064:ff9b:0:0:0:0:9810:2814"
         netmask: "112"
         gateway: "0064:ff9b:0:0:0:0:9810:2814"
         if: "xe1"
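The IPv6 networks in ``nd_route_tbl`` are the NAT64 well-known prefix
(``64:ff9b::/96``) embeddings of the IPv4 addresses used in ``routing_table``,
e.g. ``152.16.100.20`` maps to ``...:9810:6414``. A minimal sketch of the
mapping (the address is the one from the sample above):

```shell
# Sketch: embed an IPv4 address into the NAT64 well-known prefix 64:ff9b::/96,
# reproducing the nd_route_tbl form of the sample (152.16.100.20 -> 9810:6414)
ip4="152.16.100.20"
o1=${ip4%%.*}; rest=${ip4#*.}
o2=${rest%%.*}; rest=${rest#*.}
o3=${rest%%.*}; o4=${rest#*.}
# Each pair of octets becomes one 16-bit hex group
printf '0064:ff9b:0:0:0:0:%02x%02x:%02x%02x\n' "$o1" "$o2" "$o3" "$o4"
```

Running it prints ``0064:ff9b:0:0:0:0:9810:6414``, matching the first
``nd_route_tbl`` entry.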
Standalone Virtualization
-------------------------

SR-IOV
^^^^^^

SR-IOV Pre-requisites
+++++++++++++++++++++

On the host, where the VM is created:

1. Create and configure a bridge named ``br-int`` for the VM to connect to
   the external network. Currently this can be done using a VXLAN tunnel.

   Execute the following on the host, where the VM is created::

      ip link add type vxlan remote <Jumphost IP> local <DUT IP> id <ID: 10> dstport 4789
      brctl addbr br-int
      brctl addif br-int vxlan0
      ip link set dev vxlan0 up
      ip addr add <IP#1, like: 172.20.2.1/24> dev br-int
      ip link set dev br-int up

   .. note:: You may need to add extra rules to iptables to forward traffic.

   .. code-block:: console

      iptables -A FORWARD -i br-int -s <network ip address>/<netmask> -j ACCEPT
      iptables -A FORWARD -o br-int -d <network ip address>/<netmask> -j ACCEPT

   Execute the following on the jump host:

   .. code-block:: console

      ip link add type vxlan remote <DUT IP> local <Jumphost IP> id <ID: 10> dstport 4789
      ip addr add <IP#2, like: 172.20.2.2/24> dev vxlan0
      ip link set dev vxlan0 up

   .. note:: The host and the jump host are different bare metal servers.

2. Modify the test case management CIDR.
   IP addresses IP#1, IP#2 and CIDR must be in the same network.

3. Build the guest image for the VNF to run.
   Most of the sample test cases in Yardstick use a guest image called
   ``yardstick-nsb-image`` which deviates from an Ubuntu Cloud Server image.
   Yardstick has a tool for building this custom image with SampleVNF.
   It is necessary to have ``sudo`` rights to use this tool.

   You may also need to install several additional packages to use this tool,
   by following the commands below::

      sudo apt-get update && sudo apt-get install -y qemu-utils kpartx

   This image can be built using the following command in the directory where
   Yardstick is installed::

      export YARD_IMG_ARCH='amd64'
      sudo echo "Defaults env_keep += \'YARD_IMG_ARCH\'" >> /etc/sudoers
      sudo tools/yardstick-img-dpdk-modify tools/ubuntu-server-cloudimg-samplevnf-modify.sh

   For instructions on generating a cloud image using Ansible, refer to
   :doc:`04-installation`.

   .. note:: The VM should be built with a static IP and be accessible from
      the Yardstick host.
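Step 2's requirement (IP#1, IP#2 and the management CIDR all in one network)
can be verified with a short sketch; the addresses and the /24 prefix are the
examples used in step 1, not fixed values:

```shell
# Sketch: check that IP#1 and IP#2 fall in the same network (here a /24)
# Addresses are the examples from above: 172.20.2.1 and 172.20.2.2
ip_to_int() {
    a=${1%%.*}; r=${1#*.}
    b=${r%%.*}; r=${r#*.}
    c=${r%%.*}; d=${r#*.}
    echo $(( (a << 24) | (b << 16) | (c << 8) | d ))
}
prefix=24
mask=$(( (0xFFFFFFFF << (32 - prefix)) & 0xFFFFFFFF ))
if [ $(( $(ip_to_int 172.20.2.1) & mask )) -eq $(( $(ip_to_int 172.20.2.2) & mask )) ]; then
    echo "same network"
fi
```

Both addresses share the network bits under the /24 mask, so the check prints
``same network``; a mismatch here is a common reason the VM management
interface is unreachable.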
SR-IOV Config pod.yaml describing Topology
++++++++++++++++++++++++++++++++++++++++++

SR-IOV 2-Node setup
+++++++++++++++++++

.. code-block:: console

                               +--------------------+
                               |                    |
                               |        DUT         |
                               |       (VNF)        |
                               |                    |
                               +--------------------+
                               | VF NIC |  | VF NIC |
                               +--------+  +--------+
                                     ^          ^
                                     |          |
   +----------+             +-------------------------+
   |          |             |       ^          ^      |
   |          | (0)<----->(0) ------           |      |
   |    TG1   |             |          SUT     |      |
   |          | (n)<----->(n) -----------------       |
   +----------+             +-------------------------+
   trafficgen_0                        host

SR-IOV 3-Node setup - Correlated Traffic
++++++++++++++++++++++++++++++++++++++++

.. code-block:: console

                               +--------------------+
                               |                    |
                               |        DUT         |
                               |       (VNF)        |
                               |                    |
                               +--------------------+
                               | VF NIC |  | VF NIC |
                               +--------+  +--------+
                                     ^          ^
                                     |          |
   +----------+             +---------------------+           +--------------+
   |          |             |      ^          ^   |           |              |
   |          | (0)<----->(0) -----           |   |           |      TG2     |
   |    TG1   |             |         SUT     |   |           | (UDP Replay) |
   |          | (n)<----->(n) ----------------    (n)<-->(n)  |              |
   +----------+             +---------------------+           +--------------+
   trafficgen_0                      host                      trafficgen_1

Before executing Yardstick test cases, make sure that ``pod.yaml`` reflects
the topology and update all the required fields.

.. code-block:: console

   cp <yardstick>/etc/yardstick/nodes/standalone/trex_bm.yaml.sample /etc/yardstick/nodes/standalone/pod_trex.yaml
   cp <yardstick>/etc/yardstick/nodes/standalone/host_sriov.yaml /etc/yardstick/nodes/standalone/host_sriov.yaml

.. note:: Update all the required fields like ip, user, password, pcis, etc.
SR-IOV Config pod_trex.yaml
+++++++++++++++++++++++++++

.. code-block:: YAML

   nodes:
   -
       name: trafficgen_0
       role: TrafficGen
       ip: 1.1.1.1
       user: root
       password: r00t
       key_filename: /root/.ssh/id_rsa
       interfaces:
           xe0:  # logical name from topology.yaml and vnfd.yaml
               driver: i40e # default kernel driver
               local_ip: "152.16.100.20"
               netmask: "255.255.255.0"
               local_mac: "00:00:00:00:00:01"
           xe1:  # logical name from topology.yaml and vnfd.yaml
               driver: i40e # default kernel driver
               local_ip: "152.16.40.20"
               netmask: "255.255.255.0"
               local_mac: "00:00:00:00:00:02"

SR-IOV Config host_sriov.yaml
+++++++++++++++++++++++++++++

.. code-block:: YAML

   nodes:
   -
      name: sriov
      role: Sriov
      ip: 192.168.100.101
      user: ""
      password: ""

SR-IOV testcase update:
``<yardstick>/samples/vnf_samples/nsut/vfw/tc_sriov_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``

Update contexts section
'''''''''''''''''''''''

.. code-block:: YAML

   contexts:
   - name: yardstick
     type: Node
     file: /etc/yardstick/nodes/standalone/pod_trex.yaml
   - type: StandaloneSriov
     file: /etc/yardstick/nodes/standalone/host_sriov.yaml
     name: yardstick
     vm_deploy: True
     flavor:
       images: "/var/lib/libvirt/images/ubuntu.qcow2"
       ram: 4096
       extra_specs:
         hw:cpu_sockets: 1
         hw:cpu_cores: 6
         hw:cpu_threads: 2
       user: "" # update VM username
       password: "" # update password
     servers:
       vfw__0:
         network_ports:
           mgmt:
             cidr: '1.1.1.61/24' # Update VM IP address, if static, <ip>/<mask> or if dynamic, <start of ip>/<mask>
           xe0:
             - uplink_0
           xe1:
             - downlink_0
     networks:
       uplink_0:
         phy_port: "0000:05:00.0"
         vpci: "0000:00:07.0"
         cidr: '152.16.100.10/24'
         gateway_ip: '152.16.100.20'
       downlink_0:
         phy_port: "0000:05:00.1"
         vpci: "0000:00:08.0"
         cidr: '152.16.40.10/24'
         gateway_ip: '152.16.100.20'
OVS-DPDK
^^^^^^^^

OVS-DPDK Pre-requisites
+++++++++++++++++++++++

On the host, where the VM is created:

1. Create and configure a bridge named ``br-int`` for the VM to connect to
   the external network. Currently this can be done using a VXLAN tunnel.

   Execute the following on the host, where the VM is created:

   .. code-block:: console

      ip link add type vxlan remote <Jumphost IP> local <DUT IP> id <ID: 10> dstport 4789
      brctl addbr br-int
      brctl addif br-int vxlan0
      ip link set dev vxlan0 up
      ip addr add <IP#1, like: 172.20.2.1/24> dev br-int
      ip link set dev br-int up

   .. note:: You may need to add extra rules to iptables to forward traffic.

   .. code-block:: console

      iptables -A FORWARD -i br-int -s <network ip address>/<netmask> -j ACCEPT
      iptables -A FORWARD -o br-int -d <network ip address>/<netmask> -j ACCEPT

   Execute the following on the jump host:

   .. code-block:: console

      ip link add type vxlan remote <DUT IP> local <Jumphost IP> id <ID: 10> dstport 4789
      ip addr add <IP#2, like: 172.20.2.2/24> dev vxlan0
      ip link set dev vxlan0 up

   .. note:: The host and the jump host are different bare metal servers.

2. Modify the test case management CIDR.
   IP addresses IP#1, IP#2 and CIDR must be in the same network.

3. Build the guest image for the VNF to run.
   Most of the sample test cases in Yardstick use a guest image called
   ``yardstick-nsb-image`` which deviates from an Ubuntu Cloud Server image.
   Yardstick has a tool for building this custom image with SampleVNF.
   It is necessary to have ``sudo`` rights to use this tool.

   You may need to install several additional packages to use this tool, by
   following the commands below::

      sudo apt-get update && sudo apt-get install -y qemu-utils kpartx

   This image can be built using the following command in the directory where
   Yardstick is installed::

      export YARD_IMG_ARCH='amd64'
      sudo echo "Defaults env_keep += \'YARD_IMG_ARCH\'" >> /etc/sudoers
      sudo tools/yardstick-img-dpdk-modify tools/ubuntu-server-cloudimg-samplevnf-modify.sh

   For more details refer to chapter :doc:`04-installation`.

   .. note:: The VM should be built with a static IP and should be accessible
      from the Yardstick host.

4. OVS & DPDK version:

   * OVS 2.7 and DPDK 16.11.1 or above are supported.

   Refer to the setup instructions at `OVS-DPDK`_ on the host.
OVS-DPDK Config pod.yaml describing Topology
++++++++++++++++++++++++++++++++++++++++++++

OVS-DPDK 2-Node setup
+++++++++++++++++++++

.. code-block:: console

                               +--------------------+
                               |                    |
                               |        DUT         |
                               |       (VNF)        |
                               |                    |
                               +--------------------+
                               | virtio |  | virtio |
                               +--------+  +--------+
                                    ^          ^
                                    |          |
                               +--------+  +--------+
                               | vHOST0 |  | vHOST1 |
   +----------+             +-------------------------+
   |          |             |       ^          ^      |
   |          | (0)<----->(0) ------           |      |
   |    TG1   |             |          SUT     |      |
   |          |             |       (ovs-dpdk) |      |
   |          | (n)<----->(n) -----------------       |
   +----------+             +-------------------------+
   trafficgen_0                        host

OVS-DPDK 3-Node setup - Correlated Traffic
++++++++++++++++++++++++++++++++++++++++++

.. code-block:: console

                               +--------------------+
                               |                    |
                               |        DUT         |
                               |       (VNF)        |
                               |                    |
                               +--------------------+
                               | virtio |  | virtio |
                               +--------+  +--------+
                                    ^          ^
                                    |          |
                               +--------+  +--------+
                               | vHOST0 |  | vHOST1 |
   +----------+             +-------------------------+         +------------+
   |          |             |       ^          ^      |         |            |
   |          | (0)<----->(0) ------           |      |         |    TG2     |
   |    TG1   |             |          SUT     |      |         |(UDP Replay)|
   |          |             |       (ovs-dpdk) |      |         |            |
   |          | (n)<----->(n) -----------------      (n)<-->(n) |            |
   +----------+             +-------------------------+         +------------+
   trafficgen_0                        host                      trafficgen_1

Before executing Yardstick test cases, make sure that the ``pod.yaml``
reflects the topology and update all the required fields::

   cp <yardstick>/etc/yardstick/nodes/standalone/trex_bm.yaml.sample /etc/yardstick/nodes/standalone/pod_trex.yaml
   cp <yardstick>/etc/yardstick/nodes/standalone/host_ovs.yaml /etc/yardstick/nodes/standalone/host_ovs.yaml

.. note:: Update all the required fields like ip, user, password, pcis, etc.
OVS-DPDK Config pod_trex.yaml
+++++++++++++++++++++++++++++

.. code-block:: YAML

   nodes:
   -
       name: trafficgen_0
       role: TrafficGen
       ip: 1.1.1.1
       user: root
       password: r00t
       interfaces:
           xe0:  # logical name from topology.yaml and vnfd.yaml
               driver: i40e # default kernel driver
               local_ip: "152.16.100.20"
               netmask: "255.255.255.0"
               local_mac: "00:00:00:00:00:01"
           xe1:  # logical name from topology.yaml and vnfd.yaml
               driver: i40e # default kernel driver
               local_ip: "152.16.40.20"
               netmask: "255.255.255.0"
               local_mac: "00:00:00:00:00:02"

OVS-DPDK Config host_ovs.yaml
+++++++++++++++++++++++++++++

.. code-block:: YAML

   nodes:
   -
      name: ovs_dpdk
      role: OvsDpdk
      ip: 192.168.100.101
      user: ""
      password: ""

ovs_dpdk testcase update:
``<yardstick>/samples/vnf_samples/nsut/vfw/tc_ovs_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``

Update contexts section
'''''''''''''''''''''''

.. code-block:: YAML

   contexts:
   - name: yardstick
     type: Node
     file: /etc/yardstick/nodes/standalone/pod_trex.yaml
   - type: StandaloneOvsDpdk
     name: yardstick
     file: /etc/yardstick/nodes/standalone/pod_ovs.yaml
     vm_deploy: True
     ovs_properties:
       version:
         ovs: 2.7.0
         dpdk: 16.11.1
       pmd_threads: 2
       ram:
         socket_0: 2048
         socket_1: 2048
       queues: 4
       vpath: "/usr/local"
     flavor:
       images: "/var/lib/libvirt/images/ubuntu.qcow2"
       ram: 4096
       extra_specs:
         hw:cpu_sockets: 1
         hw:cpu_cores: 6
         hw:cpu_threads: 2
       user: "" # update VM username
       password: "" # update password
     servers:
       vfw__0:
         network_ports:
           mgmt:
             cidr: '1.1.1.61/24' # Update VM IP address, if static, <ip>/<mask> or if dynamic, <start of ip>/<mask>
           xe0:
             - uplink_0
           xe1:
             - downlink_0
     networks:
       uplink_0:
         phy_port: "0000:05:00.0"
         vpci: "0000:00:07.0"
         cidr: '152.16.100.10/24'
         gateway_ip: '152.16.100.20'
       downlink_0:
         phy_port: "0000:05:00.1"
         vpci: "0000:00:08.0"
         cidr: '152.16.40.10/24'
         gateway_ip: '152.16.100.20'
OVS-DPDK configuration options
++++++++++++++++++++++++++++++

There are a number of configuration options available for the OVS-DPDK context
in a test case. Mostly they are used for performance tuning.

OVS-DPDK properties example under the *ovs_properties* section:

.. code-block:: console

   ovs_properties:
     version:
       ovs: 2.7.0
       dpdk: 16.11.1
     lcore_mask: 0x02
     pmd_cpu_mask: "0x6"
     pmd_threads: 2
     ram:
       socket_0: 2048
       socket_1: 2048
     queues: 2
     vpath: "/usr/local"
     max_idle: 30000
     dpdk_pmd-rxq-affinity:
       0: "0:2,1:2"
       1: "0:2,1:2"
     vhost_pmd-rxq-affinity:
       0: "0:3,1:3"
       1: "0:3,1:3"

OVS-DPDK properties description:

+-------------------------+-------------------------------------------------+
| Parameters              | Detail                                          |
+=========================+=================================================+
| version                 || Version of OVS and DPDK to be installed        |
|                         || There is a relation between OVS and DPDK       |
|                         |  version which can be found at                  |
|                         |  `OVS-DPDK-versions`_                           |
|                         || By default OVS: 2.6.0, DPDK: 16.07.2           |
+-------------------------+-------------------------------------------------+
| lcore_mask              || Core bitmask used during DPDK initialization   |
|                         |  where the non-datapath OVS-DPDK threads such   |
|                         |  as handler and revalidator threads run         |
+-------------------------+-------------------------------------------------+
| pmd_cpu_mask            || Core bitmask that sets which cores are used by |
|                         || OVS-DPDK for datapath packet processing        |
+-------------------------+-------------------------------------------------+
| pmd_threads             || Number of PMD threads used by OVS-DPDK for     |
|                         |  datapath                                       |
|                         || This core mask is evaluated in Yardstick       |
|                         || It will be used if pmd_cpu_mask is not given   |
+-------------------------+-------------------------------------------------+
| ram                     || Amount of RAM to be used for each socket, MB   |
|                         || Default is 2048 MB                             |
+-------------------------+-------------------------------------------------+
| queues                  || Number of RX queues used for DPDK physical     |
|                         |  interface                                      |
+-------------------------+-------------------------------------------------+
| dpdk_pmd-rxq-affinity   || RX queue assignment to PMD threads for DPDK    |
|                         || e.g.: <port number> : <queue-id>:<core-id>     |
+-------------------------+-------------------------------------------------+
| vhost_pmd-rxq-affinity  || RX queue assignment to PMD threads for vhost   |
|                         || e.g.: <port number> : <queue-id>:<core-id>     |
+-------------------------+-------------------------------------------------+
| vpath                   || User path for openvswitch files                |
|                         || Default is ``/usr/local``                      |
+-------------------------+-------------------------------------------------+
| max_idle                || The maximum time that idle flows will remain   |
|                         |  cached in the datapath, ms                     |
+-------------------------+-------------------------------------------------+
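``lcore_mask`` and ``pmd_cpu_mask`` are plain CPU bitmasks: bit *n* set means
core *n* is used. Building such a mask from a list of core ids can be sketched
as follows (the core list is an arbitrary example, not a recommendation):

```shell
# Sketch: build a hex CPU bitmask (e.g. for pmd_cpu_mask) from core ids
# The core list below is an arbitrary example
cores="2 3 22 23"
mask=0
for c in $cores; do
    mask=$(( mask | (1 << c) ))   # set bit number c
done
printf '0x%x\n' "$mask"
```

For cores 2, 3, 22 and 23 this prints ``0xc0000c``, which could then be placed
in ``pmd_cpu_mask: "0xc0000c"``. Keep the PMD cores off the isolated cores
used by the VM's vcpus to avoid contention.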
VM image properties example under the *flavor* section:

.. code-block:: console

   flavor:
     images: <path to the VM image>
     ram: 8192
     extra_specs:
       machine_type: 'pc-i440fx-xenial'
       hw:cpu_sockets: 1
       hw:cpu_cores: 6
       hw:cpu_threads: 2
       hw_socket: 0
       cputune: |
         <cputune>
           <vcpupin vcpu="0" cpuset="7"/>
           <vcpupin vcpu="1" cpuset="8"/>
           ...
           <vcpupin vcpu="11" cpuset="18"/>
           <emulatorpin cpuset="11"/>
         </cputune>

VM image properties description:

+-------------------------+-------------------------------------------------+
| Parameters              | Detail                                          |
+=========================+=================================================+
| images                  || Path to the VM image generated by              |
|                         |  ``nsb_setup.sh``                               |
|                         || Default path is ``/var/lib/libvirt/images/``   |
|                         || Default file name ``yardstick-nsb-image.img``  |
|                         |  or ``yardstick-image.img``                     |
+-------------------------+-------------------------------------------------+
| ram                     || Amount of RAM to be used for VM                |
|                         || Default is 4096 MB                             |
+-------------------------+-------------------------------------------------+
| hw:cpu_sockets          || Number of sockets provided to the guest VM     |
|                         || Default is 1                                   |
+-------------------------+-------------------------------------------------+
| hw:cpu_cores            || Number of cores provided to the guest VM       |
|                         || Default is 2                                   |
+-------------------------+-------------------------------------------------+
| hw:cpu_threads          || Number of threads provided to the guest VM     |
|                         || Default is 2                                   |
+-------------------------+-------------------------------------------------+
| hw_socket               || Generate vcpu cpuset from given HW socket      |
|                         || Default is 0                                   |
+-------------------------+-------------------------------------------------+
| cputune                 || Maps virtual cpu with logical cpu              |
+-------------------------+-------------------------------------------------+
| machine_type            || Machine type to be emulated in VM              |
|                         || Default is 'pc-i440fx-xenial'                  |
+-------------------------+-------------------------------------------------+
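The ``cputune`` block pins guest vcpus to host cpus one-to-one, so the
``<vcpupin>`` lines follow a simple pattern and can be generated rather than
typed by hand. A sketch reproducing the example above (guest vcpus 0..11
pinned to host cpus 7..18):

```shell
# Sketch: generate the <vcpupin> lines from the cputune example above
# (guest vcpus 0..11 pinned one-to-one to host cpus starting at 7)
first_host_cpu=7
vcpu=0
while [ "$vcpu" -le 11 ]; do
    printf '<vcpupin vcpu="%d" cpuset="%d"/>\n' "$vcpu" $(( first_host_cpu + vcpu ))
    vcpu=$(( vcpu + 1 ))
done
```

The first line produced is ``<vcpupin vcpu="0" cpuset="7"/>`` and the last is
``<vcpupin vcpu="11" cpuset="18"/>``, matching the example; choose host cpus
from the socket given in ``hw_socket`` and keep them inside the isolated CPU
range configured at boot.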
OpenStack with SR-IOV support
-----------------------------

This section describes how to run a Sample VNF test case, using Heat context,
with SR-IOV. It also covers how to install OpenStack in Ubuntu 16.04, using
DevStack, with SR-IOV support.

Single node OpenStack with external TG
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. code-block:: console
                                  +----------------------------+
                                  |OpenStack(DevStack)         |
                                  |                            |
                                  |   +--------------------+   |
                                  |   |sample-VNF VM       |   |
                                  |   |                    |   |
                                  |   |        DUT         |   |
                                  |   |       (VNF)        |   |
                                  |   |                    |   |
                                  |   +--------+  +--------+   |
                                  |   | VF NIC |  | VF NIC |   |
                                  |   +-----+--+--+----+---+   |
                                  |         |          |       |
   +----------+                   +---------+----------+-------+
   |          |                             |          |
   |          |                            VF0        VF1
   |          |                             |          |
   |    TG    | (PF0)<----->(PF0) +---------+          |
   |          |                                        |
   |          | (PF1)<----->(PF1) +--------------------+
   +----------+
   trafficgen_0                               host
Host pre-configuration
++++++++++++++++++++++

.. warning:: The following configuration requires sudo access to the system.
   Make sure that your user has such access.

Enable the Intel VT-d or AMD-Vi extension in the BIOS. Some system
manufacturers disable this extension by default.

Activate the Intel VT-d or AMD-Vi extension in the kernel by modifying the
GRUB config file ``/etc/default/grub``.

For the Intel platform::

   GRUB_CMDLINE_LINUX_DEFAULT="intel_iommu=on"

For the AMD platform::

   GRUB_CMDLINE_LINUX_DEFAULT="amd_iommu=on"

Update the grub configuration file and restart the system:

.. warning:: The following command will reboot the system.

.. code-block:: console

   sudo update-grub
   sudo reboot

Make sure the extension has been enabled::

   sudo journalctl -b 0 | grep -e IOMMU -e DMAR

   Feb 06 14:50:14 hostname kernel: ACPI: DMAR 0x000000006C406000 0001E0 (v01 INTEL S2600WF 00000001 INTL 20091013)
   Feb 06 14:50:14 hostname kernel: DMAR: IOMMU enabled
   Feb 06 14:50:14 hostname kernel: DMAR: Host address width 46
   Feb 06 14:50:14 hostname kernel: DMAR: DRHD base: 0x000000d37fc000 flags: 0x0
   Feb 06 14:50:14 hostname kernel: DMAR: dmar0: reg_base_addr d37fc000 ver 1:0 cap 8d2078c106f0466 ecap f020de
   Feb 06 14:50:14 hostname kernel: DMAR: DRHD base: 0x000000e0ffc000 flags: 0x0
   Feb 06 14:50:14 hostname kernel: DMAR: dmar1: reg_base_addr e0ffc000 ver 1:0 cap 8d2078c106f0466 ecap f020de
   Feb 06 14:50:14 hostname kernel: DMAR: DRHD base: 0x000000ee7fc000 flags: 0x0
.. TODO: Refer to the yardstick installation guide for proxy set up

Setup the system proxy (if needed). Add the following configuration into the
``/etc/environment`` file:

.. note:: The proxy server name/port and IPs should be changed according to
   the actual/current proxy configuration in the lab.

.. code-block:: console

   export http_proxy=http://proxy.company.com:port
   export https_proxy=http://proxy.company.com:port
   export ftp_proxy=http://proxy.company.com:port
   export no_proxy=localhost,127.0.0.1,company.com,<IP-OF-HOST1>,<IP-OF-HOST2>,...
   export NO_PROXY=localhost,127.0.0.1,company.com,<IP-OF-HOST1>,<IP-OF-HOST2>,...

Upgrade the system:

.. code-block:: console

   sudo -EH apt-get update
   sudo -EH apt-get upgrade
   sudo -EH apt-get dist-upgrade

Install the dependencies needed for DevStack:

.. code-block:: console

   sudo -EH apt-get install python python-dev python-pip

Setup SR-IOV ports on the host:

.. note:: ``enp24s0f0`` and ``enp24s0f1`` are physical function (PF)
   interfaces on the host and ``enp24s0f3`` is a public interface used in
   OpenStack, so the interface names should be changed according to the HW
   environment used for testing.

.. code-block:: console

   sudo ip link set dev enp24s0f0 up
   sudo ip link set dev enp24s0f1 up
   sudo ip link set dev enp24s0f3 up

   # Create VFs
   echo 2 | sudo tee /sys/class/net/enp24s0f0/device/sriov_numvfs
   echo 2 | sudo tee /sys/class/net/enp24s0f1/device/sriov_numvfs
1238 DevStack installation
1239 +++++++++++++++++++++
1241 If you want to try out NSB, but don't have OpenStack set-up, you can use
1242 `Devstack`_ to install OpenStack on a host. Please note, that the
1243 ``stable/pike`` branch of devstack repo should be used during the installation.
1244 The required ``local.conf`` configuration file is described below.
1246 DevStack configuration file:
1248 .. note:: Update the devstack configuration file by replacing angluar brackets
1249 with a short description inside.
1251 .. note:: Use ``lspci | grep Ether`` & ``lspci -n | grep <PCI ADDRESS>``
1252 commands to get device and vendor id of the virtual function (VF).
1254 .. literalinclude:: code/single-devstack-local.conf
1257 Start the devstack installation on a host.
1259 TG host configuration
1260 +++++++++++++++++++++
1262 Yardstick automatically installs and configures Trex traffic generator on TG
1263 host based on provided POD file (see below). Anyway, it's recommended to check
1264 the compatibility of the installed NIC on the TG server with software Trex
1265 using the `manual <https://trex-tgn.cisco.com/trex/doc/trex_manual.html>`_.
1267 Run the Sample VNF test case
1268 ++++++++++++++++++++++++++++
1270 There is an example of Sample VNF test case ready to be executed in an
1271 OpenStack environment with SR-IOV support: ``samples/vnf_samples/nsut/vfw/
1272 tc_heat_sriov_external_rfc2544_ipv4_1rule_1flow_trex.yaml``.
1274 Install Yardstick using `Install Yardstick (NSB Testing)`_ steps for OpenStack
1277 Create pod file for TG in the yardstick repo folder located in the yardstick
.. note:: The ``ip``, ``user``, ``password`` and ``vpci`` fields should be
   changed according to the HW environment used for the testing. Use the
   ``lshw -c network -businfo`` command to get the PF PCI address for the
   ``vpci`` field.

.. literalinclude:: code/single-yardstick-pod.conf

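For the ``vpci`` field, the PF PCI address can similarly be extracted from
``lshw -c network -businfo`` output. A small sketch of that idea; the
interface names and PCI addresses below are illustrative only, not taken from
a real host:

```python
# Example (assumed) output of `lshw -c network -businfo`; values are made up.
lshw_output = """\
Bus info          Device      Class          Description
========================================================
pci@0000:18:00.0  enp24s0f0   network        Ethernet Controller X710
pci@0000:18:00.1  enp24s0f1   network        Ethernet Controller X710
"""

def vpci_for(device, output):
    """Return the PCI address (for the vpci field) of a given interface."""
    for line in output.splitlines():
        fields = line.split()
        if len(fields) >= 2 and fields[1] == device and fields[0].startswith("pci@"):
            return fields[0].split("@", 1)[1]
    return None

print(vpci_for("enp24s0f1", lshw_output))
```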
Run the sample vFW RFC2544 SR-IOV TC (``samples/vnf_samples/nsut/vfw/
tc_heat_sriov_external_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``) in the heat
context using the steps described in the `NS testing - using yardstick CLI`_
section.

Multi node OpenStack TG and VNF setup (two nodes)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. code-block:: console

   +----------------------------+          +----------------------------+
   |OpenStack(DevStack)         |          |OpenStack(DevStack)         |
   |                            |          |                            |
   |   +--------------------+   |          |   +--------------------+   |
   |   |sample-VNF VM       |   |          |   |sample-VNF VM       |   |
   |   |                    |   |          |   |                    |   |
   |   |         TG         |   |          |   |        DUT         |   |
   |   |    trafficgen_0    |   |          |   |       (VNF)        |   |
   |   |                    |   |          |   |                    |   |
   |   +--------+  +--------+   |          |   +--------+  +--------+   |
   |   | VF NIC |  | VF NIC |   |          |   | VF NIC |  | VF NIC |   |
   |   +----+---+--+----+---+   |          |   +----+---+--+----+---+   |
   |        |          |        |          |        |          |        |
   +--------+----------+--------+          +--------+----------+--------+
   |      VF0        VF1        |          |      VF0        VF1        |
   |       |   SUT2    |        |          |       |   SUT1   |         |
   |       |   +-------+ (PF0)<--------------->(PF0) +--------+         |
   |       +-------------(PF1)<--------------->(PF1) +------------------+
   +----------------------------+          +----------------------------+
           host2 (compute)                       host1 (controller)

Controller/Compute pre-configuration
++++++++++++++++++++++++++++++++++++

Pre-configuration of the controller and compute hosts is the same as that
described in the `Host pre-configuration`_ section.

DevStack configuration
++++++++++++++++++++++

A reference ``local.conf`` for deploying OpenStack in a multi-host environment
using `Devstack`_ is shown in this section. The ``stable/pike`` branch of the
devstack repo should be used during the installation.

.. note:: Update the devstack configuration files by replacing angular brackets
   with a short description inside.

.. note:: Use the ``lspci | grep Ether`` and ``lspci -n | grep <PCI ADDRESS>``
   commands to get the device and vendor IDs of the virtual function (VF).

DevStack configuration file for the controller host:

.. literalinclude:: code/multi-devstack-controller-local.conf

DevStack configuration file for the compute host:

.. literalinclude:: code/multi-devstack-compute-local.conf

Start the devstack installation on the controller and compute hosts.

Run the sample vFW TC
+++++++++++++++++++++

Install Yardstick using the `Install Yardstick (NSB Testing)`_ steps for the
OpenStack context.

Run the sample vFW RFC2544 SR-IOV test case
(``samples/vnf_samples/nsut/vfw/tc_heat_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``)
in the heat context using the steps described in the
`NS testing - using yardstick CLI`_ section and the following Yardstick command
line arguments:

.. code-block:: console

   yardstick -d task start --task-args='{"provider": "sriov"}' \
   samples/vnf_samples/nsut/vfw/tc_heat_rfc2544_ipv4_1rule_1flow_64B_trex.yaml

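The ``--task-args`` value is parsed as a JSON/YAML mapping and used to fill
template variables in the task file. A rough illustration of that idea, using
a made-up template string rendered with plain ``str.format`` instead of
Yardstick's actual template engine:

```python
import json

# --task-args is a JSON mapping of template variables; Yardstick uses it to
# render the task file. The template below is a hypothetical stand-in.
task_args = json.loads('{"provider": "sriov"}')

task_template = "context:\n  provider: {provider}\n"
rendered = task_template.format(**task_args)
print(rendered)
```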
Enabling other Traffic generators
---------------------------------

IxLoad
^^^^^^

1. Software needed: IxLoadAPI ``<IxLoadTclApi version>Linux64.bin.tgz`` and
   ``<IxOS version>Linux64.bin.tar.gz`` (download from the Ixia support site).
   Install ``<IxLoadTclApi version>Linux64.bin.tgz`` and
   ``<IxOS version>Linux64.bin.tar.gz``.
   If the installation was not done inside the container, then after installing
   the IXIA client, check ``/opt/ixia/ixload/<ver>/bin/ixloadpython`` and make
   sure you can run this command inside the yardstick container. Usually the
   user is required to copy or link ``/opt/ixia/python/<ver>/bin/ixiapython``
   to ``/usr/bin/ixiapython<ver>`` inside the container.

2. Update the ``pod_ixia.yaml`` file with the Ixia details.

   .. code-block:: console

      cp <repo>/etc/yardstick/nodes/pod.yaml.nsb.sample.ixia \
        etc/yardstick/nodes/pod_ixia.yaml

   Configure ``pod_ixia.yaml``:

   .. literalinclude:: code/pod_ixia.yaml

   For sriov/ovs_dpdk pod files, please refer to the
   `Standalone Virtualization`_ section for ovs-dpdk/sriov configuration.

3. Start the IxOS TCL Server (install 'Ixia IxExplorer IxOS <version>').
   You will also need to configure the IxLoad machine to start the IXIA
   IxosTclServer. This can be started like so:

   * Connect to the IxLoad machine using RDP.
   * Go to:
     ``Start->Programs->Ixia->IxOS->IxOS 8.01-GA-Patch1->Ixia Tcl Server IxOS 8.01-GA-Patch1``
     or
     ``C:\Program Files (x86)\Ixia\IxOS\8.01-GA-Patch1\ixTclServer.exe``

4. Create a folder ``Results`` in ``c:\`` and share the folder on the network.

5. Execute the testcase in the samplevnf folder, e.g.
   ``<repo>/samples/vnf_samples/nsut/vfw/tc_baremetal_http_ixload_1b_Requests-65000_Concurrency.yaml``

IxNetwork
^^^^^^^^^

IxNetwork testcases use the IxNetwork API Python Bindings module, which is
installed as part of the requirements of the project.

1. Update the ``pod_ixia.yaml`` file with the Ixia details.

   .. code-block:: console

      cp <repo>/etc/yardstick/nodes/pod.yaml.nsb.sample.ixia \
        etc/yardstick/nodes/pod_ixia.yaml

   Configure ``pod_ixia.yaml``:

   .. literalinclude:: code/pod_ixia.yaml

   For sriov/ovs_dpdk pod files, please refer to the
   `Standalone Virtualization`_ section above for ovs-dpdk/sriov
   configuration.

2. Start the IxNetwork TCL Server.
   You will also need to configure the IxNetwork machine to start the IXIA
   IxNetworkTclServer. This can be started like so:

   * Connect to the IxNetwork machine using RDP.
   * Go to:
     ``Start->Programs->Ixia->IxNetwork->IxNetwork 7.21.893.14 GA->IxNetworkTclServer``
     (or ``IxNetworkApiServer``)

3. Execute the testcase in the samplevnf folder, e.g.
   ``<repo>/samples/vnf_samples/nsut/vfw/tc_baremetal_rfc2544_ipv4_1rule_1flow_64B_ixia.yaml``

Spirent Landslide
^^^^^^^^^^^^^^^^^

In order to use Spirent Landslide for vEPC testcases, some dependencies have
to be preinstalled and properly configured.

- Java

  A 32-bit Java installation is required for the Spirent Landslide TCL API.

  | ``$ sudo apt-get install openjdk-8-jdk:i386``

  Make sure ``LD_LIBRARY_PATH`` points to the 32-bit JRE. For more details
  check the `Linux Troubleshooting
  <http://TAS_HOST_IP/tclapiinstall.html#trouble>`_ section of the
  installation instructions.

- LsApi (Tcl API module)

  Follow the Landslide documentation for detailed instructions on the Linux
  installation of the Tcl API and its dependencies:
  ``http://TAS_HOST_IP/tclapiinstall.html``.
  For working with the LsApi Python wrapper, only steps 1-5 are required.

.. note:: After installation make sure your API home path is included in the
   ``PYTHONPATH`` environment variable.

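The effect of extending ``PYTHONPATH`` can be sketched as follows; the
``/opt/landslide/tclapi`` install location here is hypothetical, and within an
already-running interpreter the equivalent is extending ``sys.path``:

```python
import os
import sys

# Hypothetical LsApi home directory; substitute your actual install path.
api_home = "/opt/landslide/tclapi"

# Extend PYTHONPATH for child Python processes...
os.environ["PYTHONPATH"] = (
    os.environ.get("PYTHONPATH", "") + os.pathsep + api_home
)

# ...and sys.path for the current interpreter, so `import lsapi` resolves.
if api_home not in sys.path:
    sys.path.append(api_home)

print(api_home in sys.path)
```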
The current version of the LsApi module has an issue with reading
``LD_LIBRARY_PATH``. For the LsApi module to initialize correctly, the
following lines (184-186) in the module source:

.. code-block:: python

   ldpath = os.environ.get('LD_LIBRARY_PATH', '')
   environ['LD_LIBRARY_PATH'] = environ['LD_LIBRARY_PATH'] + ':' + ldpath

should be changed to:

.. code-block:: python

   ldpath = os.environ.get('LD_LIBRARY_PATH', '')
   if not ldpath == '':
       environ['LD_LIBRARY_PATH'] = environ['LD_LIBRARY_PATH'] + ':' + ldpath

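Why the guard matters can be reproduced with a plain dict standing in for
``os.environ``; this is only an illustration of the failure mode, not code
from the LsApi module itself:

```python
# A dict stands in for os.environ; LD_LIBRARY_PATH is not set.
environ = {}

ldpath = environ.get('LD_LIBRARY_PATH', '')

# Original lines: indexing environ['LD_LIBRARY_PATH'] raises KeyError
# when the variable is unset.
try:
    environ['LD_LIBRARY_PATH'] = environ['LD_LIBRARY_PATH'] + ':' + ldpath
except KeyError:
    print("KeyError: LD_LIBRARY_PATH is unset")

# Fixed lines: only touch the variable when it was set to begin with.
if not ldpath == '':
    environ['LD_LIBRARY_PATH'] = environ['LD_LIBRARY_PATH'] + ':' + ldpath

print(environ)
```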
.. note:: The Spirent Landslide TCL software package needs to be updated in
   case the user upgrades to a new version of the Spirent Landslide software.
