1 .. This work is licensed under a Creative Commons Attribution 4.0 International
3 .. http://creativecommons.org/licenses/by/4.0
4 .. (c) OPNFV, 2016-2019 Intel Corporation.
7 Convention for heading levels in Yardstick documentation:
9 ======= Heading 0 (reserved for the title in a document)
15 Avoid deeper levels because they do not render well.
22 .. _OVS-DPDK: http://docs.openvswitch.org/en/latest/intro/install/dpdk/
.. _devstack: https://docs.openstack.org/devstack/pike/
24 .. _OVS-DPDK-versions: http://docs.openvswitch.org/en/latest/faq/releases/
29 The steps needed to run Yardstick with NSB testing are:
31 * Install Yardstick (NSB Testing).
* Setup/reference ``pod.yaml`` describing the test topology.
33 * Create/reference the test configuration yaml file.
39 Refer to :doc:`04-installation` for more information on Yardstick
42 Several prerequisites are needed for Yardstick (VNF testing):
44 * Python Modules: pyzmq, pika.
55 Hardware & Software Ingredients
56 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
60 ======= ===================
62 ======= ===================
66 kernel 4.4.0-34-generic
68 ======= ===================
70 Boot and BIOS settings:
72 ============= =================================================
73 Boot settings default_hugepagesz=1G hugepagesz=1G hugepages=16
74 hugepagesz=2M hugepages=2048 isolcpus=1-11,22-33
75 nohz_full=1-11,22-33 rcu_nocbs=1-11,22-33
76 iommu=on iommu=pt intel_iommu=on
Note: nohz_full and rcu_nocbs are used to disable Linux
79 BIOS CPU Power and Performance Policy <Performance>
Enhanced Intel® SpeedStep® Tech Disabled
83 Hyper-Threading Technology (If supported) Enabled
Virtualization Technology Enabled
85 Intel(R) VT for Direct I/O Enabled
88 ============= =================================================
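As a quick sanity check of the boot settings above, the reserved hugepage
memory can be totalled; a small sketch assuming the example pools of 16 1G
pages and 2048 2M pages:

```shell
# Total memory reserved by the example hugepage boot parameters:
# 16 x 1G pages plus 2048 x 2M pages
total_mb=$(( 16 * 1024 + 2048 * 2 ))
echo "${total_mb} MB reserved for hugepages"   # 20480 MB
```

After reboot, ``grep Huge /proc/meminfo`` should report pools of matching
sizes.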
90 Install Yardstick (NSB Testing)
91 -------------------------------
93 Yardstick with NSB can be installed using ``nsb_setup.sh``.
The ``nsb_setup.sh`` script allows you to:
96 1. Install Yardstick in specified mode: bare metal or container.
   Refer to :doc:`04-installation`.
2. Install package dependencies on remote servers used as traffic generators
   or sample VNFs. Install DPDK, sample VNFs, TRex, collectd.
   Add such servers to the ``install-inventory.ini`` file, to either the
   ``yardstick-standalone`` or ``yardstick-baremetal`` server group.
   The script configures IOMMU, hugepages, open file limits, CPU isolation, etc.
3. Build a VM image, either nsb or normal. The nsb VM image is used to run
   Yardstick sample VNF tests, like vFW, vACL, vCGNAPT, etc.
   The normal VM image is used to run Yardstick ping tests in the OpenStack
   context.
4. Add the nsb or normal VM image to OpenStack together with OpenStack
   variables.
First, configure the network proxy, either by setting the environment
variables or the global environment file.
113 http_proxy='http://proxy.company.com:port'
114 https_proxy='http://proxy.company.com:port'
116 .. code-block:: console
118 export http_proxy='http://proxy.company.com:port'
119 export https_proxy='http://proxy.company.com:port'
121 Download the source code and check out the latest stable branch
123 .. code-block:: console
125 git clone https://gerrit.opnfv.org/gerrit/yardstick
127 # Switch to latest stable branch
128 git checkout stable/gambia
130 Modify the Yardstick installation inventory used by Ansible::
132 cat ./ansible/install-inventory.ini
134 localhost ansible_connection=local
# The section below is only for backward compatibility.
# It will be removed later.
141 [yardstick-baremetal]
142 baremetal ansible_host=192.168.2.51 ansible_connection=ssh
144 [yardstick-standalone]
145 standalone ansible_host=192.168.2.52 ansible_connection=ssh
148 # Uncomment credentials below if needed
150 ansible_ssh_pass=root
151 # ansible_ssh_private_key_file=/root/.ssh/id_rsa
# When IMG_PROPERTY is neither 'normal' nor 'nsb', set
# "path_to_img=/path/to/image" to add the image to OpenStack
154 # path_to_img=/tmp/workspace/yardstick-image.img
156 # List of CPUs to be isolated (not used by default)
157 # Grub line will be extended with:
158 # "isolcpus=<ISOL_CPUS> nohz=on nohz_full=<ISOL_CPUS> rcu_nocbs=1<ISOL_CPUS>"
# ISOL_CPUS=2-27,30-55 # physical CPUs for all NUMA nodes, four CPUs reserved
Before running ``nsb_setup.sh``, make sure Python is installed on the servers
added to the ``yardstick-standalone`` or ``yardstick-baremetal`` groups.
168 SSH access without password needs to be configured for all your nodes
169 defined in ``install-inventory.ini`` file.
If you want to use password authentication, you need to install ``sshpass``::
172 sudo -EH apt-get install sshpass
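Alternatively, passwordless access can be set up with SSH keys. This is only a
sketch; the node address in the trailing comment is a placeholder for a host
from ``install-inventory.ini``:

```shell
# Generate a key pair on the jump host once, then copy the public key
# to every node listed in install-inventory.ini
mkdir -p ~/.ssh
key=~/.ssh/id_rsa
if [ ! -f "$key" ]; then
    ssh-keygen -t rsa -N "" -f "$key" -q || echo "ssh-keygen not available"
fi
echo "key ready: $key"
# ssh-copy-id -i "${key}.pub" root@192.168.2.51
```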
177 A VM image built by other means than Yardstick can be added to OpenStack.
178 Uncomment and set correct path to the VM image in the
179 ``install-inventory.ini`` file::
181 path_to_img=/tmp/workspace/yardstick-image.img
CPU isolation can be applied to the remote servers, e.g.
``ISOL_CPUS=2-27,30-55``. Uncomment and modify it accordingly in the
``install-inventory.ini`` file.
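Before rebooting with an isolation list, it can help to double-check how many
CPUs it covers; a sketch assuming the example value shown above:

```shell
# Count the CPUs covered by an ISOL_CPUS range list
ISOL_CPUS="2-27,30-55"   # example value from install-inventory.ini
count=0
for range in $(echo "$ISOL_CPUS" | tr ',' ' '); do
    start=${range%-*}   # first CPU in the range
    end=${range#*-}     # last CPU in the range
    count=$(( count + end - start + 1 ))
done
echo "$count CPUs isolated"   # 52 CPUs isolated
```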
By default, ``nsb_setup.sh`` pulls the Yardstick image based on Ubuntu 16.04
from Docker Hub and starts a container, builds the NSB VM image based on
Ubuntu 16.04, and installs packages on the servers given in the
``yardstick-standalone`` and ``yardstick-baremetal`` host groups.
To pull a Yardstick image based on Ubuntu 18.04, run::
197 ./nsb_setup.sh -i opnfv/yardstick-ubuntu-18.04:latest
To change the default behavior, modify the parameters for ``install.yaml`` in
the ``nsb_setup.sh`` file.
Refer to chapter :doc:`04-installation` for more details on ``install.yaml``
205 To execute an installation for a **BareMetal** or a **Standalone context**::
209 To execute an installation for an **OpenStack** context::
211 ./nsb_setup.sh <path to admin-openrc.sh>
Yardstick may not be operational after a distribution Linux kernel update if
it was installed before the update. Run ``nsb_setup.sh`` again to resolve this.
220 The Yardstick VM image (NSB or normal) cannot be built inside a VM.
The ``nsb_setup.sh`` script configures hugepages, CPU isolation and IOMMU via
the GRUB configuration. A reboot of the servers from the
``yardstick-standalone`` or ``yardstick-baremetal`` groups in the
``install-inventory.ini`` file is required to apply those changes.
The above commands will set up Docker with the latest Yardstick code. To
work inside the container, run::
   docker exec -it yardstick bash
You may need to configure the tty in the Docker container to extend the
command-line character length, for example::
   stty rows 58 cols 234
241 It will also automatically download all the packages needed for NSB Testing
setup. Refer to chapter :doc:`04-installation` for more on Docker.
244 **Install Yardstick using Docker (recommended)**
246 Bare Metal context example
247 ^^^^^^^^^^^^^^^^^^^^^^^^^^
249 Let's assume there are three servers acting as TG, sample VNF DUT and jump host.
Perform the following steps to install NSB:
253 1. Clone Yardstick repo to jump host.
2. Add TG and DUT servers to the ``yardstick-baremetal`` group in the
   ``install-inventory.ini`` file to install NSB and dependencies. Install
   Python on the servers.
3. Start the deployment using the Docker image based on Ubuntu 16.04:
.. code-block:: console

   ./nsb_setup.sh
4. Reboot the bare metal servers.
5. Enter the yardstick container, modify the pod yaml file and run tests.
266 Standalone context example for Ubuntu 18
267 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
269 Let's assume there are three servers acting as TG, sample VNF DUT and jump host.
270 Ubuntu 18 is installed on all servers.
Perform the following steps to install NSB:
274 1. Clone Yardstick repo to jump host.
275 2. Add TG server to ``yardstick-baremetal`` group in
276 ``install-inventory.ini`` file to install NSB and dependencies.
277 Add server where VM with sample VNF will be deployed to
278 ``yardstick-standalone`` group in ``install-inventory.ini`` file.
   The target VM image named ``yardstick-nsb-image.img`` will be placed in
   ``/var/lib/libvirt/images/``.
   Install Python on the servers.
282 3. Modify ``nsb_setup.sh`` on jump host:
284 .. code-block:: console
287 -e IMAGE_PROPERTY='nsb' \
288 -e OS_RELEASE='bionic' \
289 -e INSTALLATION_MODE='container_pull' \
290 -e YARD_IMAGE_ARCH='amd64' ${extra_args} \
291 -i install-inventory.ini install.yaml
4. Start the deployment with the Yardstick Docker image based on Ubuntu 18.04:
295 .. code-block:: console
297 ./nsb_setup.sh -i opnfv/yardstick-ubuntu-18.04:latest -o <openrc_file>
6. Enter the yardstick container, modify the pod yaml file and run tests.
306 .. code-block:: console
308 +----------+ +----------+
314 +----------+ +----------+
318 Environment parameters and credentials
319 --------------------------------------
321 Configure yardstick.conf
322 ^^^^^^^^^^^^^^^^^^^^^^^^
324 If you did not run ``yardstick env influxdb`` inside the container to generate
325 ``yardstick.conf``, then create the config file manually (run inside the
328 cp ./etc/yardstick/yardstick.conf.sample /etc/yardstick/yardstick.conf
329 vi /etc/yardstick/yardstick.conf
331 Add ``trex_path``, ``trex_client_lib`` and ``bin_path`` to the ``nsb``
336 dispatcher = influxdb
338 [dispatcher_influxdb]
340 target = http://{YOUR_IP_HERE}:8086
346 trex_path=/opt/nsb_bin/trex/scripts
347 bin_path=/opt/nsb_bin
348 trex_client_lib=/opt/nsb_bin/trex_client/stl
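Putting the fragments above together, a complete ``yardstick.conf`` may look
like the following sketch; the InfluxDB target address and the credentials are
placeholders to adapt to your setup:

```ini
[DEFAULT]
debug = False
dispatcher = influxdb

[dispatcher_influxdb]
timeout = 5
target = http://192.168.1.10:8086
db_name = yardstick
username = root
password = root

[nsb]
trex_path = /opt/nsb_bin/trex/scripts
bin_path = /opt/nsb_bin
trex_client_lib = /opt/nsb_bin/trex_client/stl
```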
350 Run Yardstick - Network Service Testcases
351 -----------------------------------------
353 NS testing - using yardstick CLI
354 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
356 See :doc:`04-installation`
358 Connect to the Yardstick container::
360 docker exec -it yardstick /bin/bash
If you're running ``heat`` testcases and ``nsb_setup.sh`` was not used::

   source /etc/yardstick/openstack.creds
In addition to the above, you need to set the ``EXTERNAL_NETWORK`` for
368 export EXTERNAL_NETWORK="<openstack public network>"
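If you are unsure which network to use, external networks can be listed with
the ``openstack`` client; a hedged sketch that assumes python-openstackclient
is installed and credentials are sourced:

```shell
# List candidate external networks; falls back to a message when the
# CLI is unavailable or returns nothing
ext_net=""
if command -v openstack >/dev/null 2>&1; then
    ext_net=$(openstack network list --external -f value -c Name 2>/dev/null | head -n 1)
fi
[ -n "$ext_net" ] || ext_net="external network not found"
echo "EXTERNAL_NETWORK candidate: $ext_net"
```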
370 Finally, you should be able to run the testcase::
372 yardstick --debug task start ./yardstick/samples/vnf_samples/nsut/<vnf>/<test case>
374 Network Service Benchmarking - Bare-Metal
375 -----------------------------------------
377 Bare-Metal Config pod.yaml describing Topology
378 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
380 Bare-Metal 2-Node setup
381 +++++++++++++++++++++++
382 .. code-block:: console
384 +----------+ +----------+
390 +----------+ +----------+
393 Bare-Metal 3-Node setup - Correlated Traffic
394 ++++++++++++++++++++++++++++++++++++++++++++
395 .. code-block:: console
397 +----------+ +----------+ +------------+
400 | | (0)----->(0) | | | UDP |
401 | TG1 | | DUT | | Replay |
403 | | | |(1)<---->(0)| |
404 +----------+ +----------+ +------------+
405 trafficgen_0 vnf trafficgen_1
408 Bare-Metal Config pod.yaml
409 ^^^^^^^^^^^^^^^^^^^^^^^^^^
410 Before executing Yardstick test cases, make sure that ``pod.yaml`` reflects the
topology and update all the required fields::
413 cp ./etc/yardstick/nodes/pod.yaml.nsb.sample /etc/yardstick/nodes/pod.yaml
425 xe0: # logical name from topology.yaml and vnfd.yaml
427 driver: i40e # default kernel driver
429 local_ip: "152.16.100.20"
430 netmask: "255.255.255.0"
431 local_mac: "00:00:00:00:00:01"
432 xe1: # logical name from topology.yaml and vnfd.yaml
434 driver: i40e # default kernel driver
436 local_ip: "152.16.40.20"
437 netmask: "255.255.255.0"
438 local_mac: "00:00.00:00:00:02"
446 host: 1.1.1.2 #BM - host == ip, virtualized env - Host - compute node
448 xe0: # logical name from topology.yaml and vnfd.yaml
450 driver: i40e # default kernel driver
452 local_ip: "152.16.100.19"
453 netmask: "255.255.255.0"
454 local_mac: "00:00:00:00:00:03"
456 xe1: # logical name from topology.yaml and vnfd.yaml
458 driver: i40e # default kernel driver
460 local_ip: "152.16.40.19"
461 netmask: "255.255.255.0"
462 local_mac: "00:00:00:00:00:04"
464 - network: "152.16.100.20"
465 netmask: "255.255.255.0"
466 gateway: "152.16.100.20"
468 - network: "152.16.40.20"
469 netmask: "255.255.255.0"
470 gateway: "152.16.40.20"
473 - network: "0064:ff9b:0:0:0:0:9810:6414"
475 gateway: "0064:ff9b:0:0:0:0:9810:6414"
477 - network: "0064:ff9b:0:0:0:0:9810:2814"
479 gateway: "0064:ff9b:0:0:0:0:9810:2814"
483 Standalone Virtualization
484 -------------------------
489 SR-IOV Pre-requisites
490 +++++++++++++++++++++
On the host where the VM is created:

a) Create and configure a bridge named ``br-int`` for the VM to connect to the
   external network. Currently this can be done using a VXLAN tunnel.
Execute the following on the host where the VM is created::
498 ip link add type vxlan remote <Jumphost IP> local <DUT IP> id <ID: 10> dstport 4789
500 brctl addif br-int vxlan0
501 ip link set dev vxlan0 up
502 ip addr add <IP#1, like: 172.20.2.1/24> dev br-int
503 ip link set dev br-int up
.. note:: You may need to add extra rules to iptables to forward traffic.
507 .. code-block:: console
509 iptables -A FORWARD -i br-int -s <network ip address>/<netmask> -j ACCEPT
510 iptables -A FORWARD -o br-int -d <network ip address>/<netmask> -j ACCEPT
512 Execute the following on a jump host:
514 .. code-block:: console
516 ip link add type vxlan remote <DUT IP> local <Jumphost IP> id <ID: 10> dstport 4789
517 ip addr add <IP#2, like: 172.20.2.2/24> dev vxlan0
518 ip link set dev vxlan0 up
520 .. note:: Host and jump host are different baremetal servers.
522 b) Modify test case management CIDR.
523 IP addresses IP#1, IP#2 and CIDR must be in the same network.
c) Build the guest image for the VNF to run.
   Most of the sample test cases in Yardstick use a guest image called
   ``yardstick-nsb-image``, which deviates from an Ubuntu Cloud Server image.
   Yardstick has a tool for building this custom image with SampleVNF.
   It is necessary to have ``sudo`` rights to use this tool.
You may also need to install several additional packages to use this tool, by
running the commands below::
542 sudo apt-get update && sudo apt-get install -y qemu-utils kpartx
544 This image can be built using the following command in the directory where
545 Yardstick is installed::
547 export YARD_IMG_ARCH='amd64'
echo "Defaults env_keep += \"YARD_IMG_ARCH\"" | sudo tee -a /etc/sudoers
550 For instructions on generating a cloud image using Ansible, refer to
551 :doc:`04-installation`.
.. note:: The VM should be built with a static IP and be accessible from the
559 SR-IOV Config pod.yaml describing Topology
560 ++++++++++++++++++++++++++++++++++++++++++
564 .. code-block:: console
566 +--------------------+
572 +--------------------+
573 | VF NIC | | VF NIC |
574 +--------+ +--------+
578 +----------+ +-------------------------+
581 | | (0)<----->(0) | ------ SUT | |
583 | | (n)<----->(n) | ----------------- |
585 +----------+ +-------------------------+
590 SR-IOV 3-Node setup - Correlated Traffic
591 ++++++++++++++++++++++++++++++++++++++++
592 .. code-block:: console
594 +--------------------+
600 +--------------------+
601 | VF NIC | | VF NIC |
602 +--------+ +--------+
606 +----------+ +---------------------+ +--------------+
609 | | (0)<----->(0) |----- | | | TG2 |
610 | TG1 | | SUT | | | (UDP Replay) |
612 | | (n)<----->(n) | -----| (n)<-->(n) | |
613 +----------+ +---------------------+ +--------------+
614 trafficgen_0 host trafficgen_1
616 Before executing Yardstick test cases, make sure that ``pod.yaml`` reflects the
617 topology and update all the required fields.
619 .. code-block:: console
621 cp ./etc/yardstick/nodes/standalone/trex_bm.yaml.sample /etc/yardstick/nodes/standalone/pod_trex.yaml
622 cp ./etc/yardstick/nodes/standalone/host_sriov.yaml /etc/yardstick/nodes/standalone/host_sriov.yaml
.. note:: Update all the required fields like ip, user, password, pcis, etc.
626 SR-IOV Config pod_trex.yaml
627 +++++++++++++++++++++++++++
638 key_filename: /root/.ssh/id_rsa
640 xe0: # logical name from topology.yaml and vnfd.yaml
642 driver: i40e # default kernel driver
644 local_ip: "152.16.100.20"
645 netmask: "255.255.255.0"
646 local_mac: "00:00:00:00:00:01"
647 xe1: # logical name from topology.yaml and vnfd.yaml
649 driver: i40e # default kernel driver
651 local_ip: "152.16.40.20"
652 netmask: "255.255.255.0"
653 local_mac: "00:00.00:00:00:02"
655 SR-IOV Config host_sriov.yaml
656 +++++++++++++++++++++++++++++
668 SR-IOV testcase update:
669 ``./samples/vnf_samples/nsut/vfw/tc_sriov_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``
671 Update contexts section
672 '''''''''''''''''''''''
679 file: /etc/yardstick/nodes/standalone/pod_trex.yaml
680 - type: StandaloneSriov
681 file: /etc/yardstick/nodes/standalone/host_sriov.yaml
685 images: "/var/lib/libvirt/images/ubuntu.qcow2"
691 user: "" # update VM username
692 password: "" # update password
697 cidr: '1.1.1.61/24' # Update VM IP address, if static, <ip>/<mask> or if dynamic, <start of ip>/<mask>
704 phy_port: "0000:05:00.0"
706 cidr: '152.16.100.10/24'
707 gateway_ip: '152.16.100.20'
709 phy_port: "0000:05:00.1"
711 cidr: '152.16.40.10/24'
gateway_ip: '152.16.40.20'
718 OVS-DPDK Pre-requisites
719 +++++++++++++++++++++++
On the host where the VM is created:

a) Create and configure a bridge named ``br-int`` for the VM to connect to the
   external network. Currently this can be done using a VXLAN tunnel.
Execute the following on the host where the VM is created:
727 .. code-block:: console
729 ip link add type vxlan remote <Jumphost IP> local <DUT IP> id <ID: 10> dstport 4789
731 brctl addif br-int vxlan0
732 ip link set dev vxlan0 up
733 ip addr add <IP#1, like: 172.20.2.1/24> dev br-int
734 ip link set dev br-int up
.. note:: You may need to add extra rules to iptables to forward traffic.
738 .. code-block:: console
740 iptables -A FORWARD -i br-int -s <network ip address>/<netmask> -j ACCEPT
741 iptables -A FORWARD -o br-int -d <network ip address>/<netmask> -j ACCEPT
743 Execute the following on a jump host:
745 .. code-block:: console
747 ip link add type vxlan remote <DUT IP> local <Jumphost IP> id <ID: 10> dstport 4789
748 ip addr add <IP#2, like: 172.20.2.2/24> dev vxlan0
749 ip link set dev vxlan0 up
751 .. note:: Host and jump host are different baremetal servers.
753 b) Modify test case management CIDR.
754 IP addresses IP#1, IP#2 and CIDR must be in the same network.
c) Build the guest image for the VNF to run.
   Most of the sample test cases in Yardstick use a guest image called
   ``yardstick-nsb-image``, which deviates from an Ubuntu Cloud Server image.
   Yardstick has a tool for building this custom image with SampleVNF.
   It is necessary to have ``sudo`` rights to use this tool.
You may need to install several additional packages to use this tool, by
running the commands below::
773 sudo apt-get update && sudo apt-get install -y qemu-utils kpartx
775 This image can be built using the following command in the directory where
776 Yardstick is installed::
778 export YARD_IMG_ARCH='amd64'
echo "Defaults env_keep += \"YARD_IMG_ARCH\"" | sudo tee -a /etc/sudoers
780 sudo tools/yardstick-img-dpdk-modify tools/ubuntu-server-cloudimg-samplevnf-modify.sh
For more details, refer to chapter :doc:`04-installation`.
.. note:: The VM should be built with a static IP and should be accessible from
3. OVS & DPDK version:

   * OVS 2.7 or above and DPDK 16.11.1 or above are supported
790 4. Setup `OVS-DPDK`_ on host.
793 OVS-DPDK Config pod.yaml describing Topology
794 ++++++++++++++++++++++++++++++++++++++++++++
796 OVS-DPDK 2-Node setup
797 +++++++++++++++++++++
799 .. code-block:: console
801 +--------------------+
807 +--------------------+
808 | virtio | | virtio |
809 +--------+ +--------+
813 +--------+ +--------+
814 | vHOST0 | | vHOST1 |
815 +----------+ +-------------------------+
818 | | (0)<----->(0) | ------ | |
821 | | (n)<----->(n) |------------------ |
822 +----------+ +-------------------------+
826 OVS-DPDK 3-Node setup - Correlated Traffic
827 ++++++++++++++++++++++++++++++++++++++++++
829 .. code-block:: console
831 +--------------------+
837 +--------------------+
838 | virtio | | virtio |
839 +--------+ +--------+
843 +--------+ +--------+
844 | vHOST0 | | vHOST1 |
845 +----------+ +-------------------------+ +------------+
848 | | (0)<----->(0) | ------ | | | TG2 |
849 | TG1 | | SUT | | |(UDP Replay)|
850 | | | (ovs-dpdk) | | | |
851 | | (n)<----->(n) | ------ |(n)<-->(n)| |
852 +----------+ +-------------------------+ +------------+
853 trafficgen_0 host trafficgen_1
856 Before executing Yardstick test cases, make sure that the ``pod.yaml`` reflects
857 the topology and update all the required fields::
859 cp ./etc/yardstick/nodes/standalone/trex_bm.yaml.sample /etc/yardstick/nodes/standalone/pod_trex.yaml
860 cp ./etc/yardstick/nodes/standalone/host_ovs.yaml /etc/yardstick/nodes/standalone/host_ovs.yaml
.. note:: Update all the required fields like ip, user, password, pcis, etc.
864 OVS-DPDK Config pod_trex.yaml
865 +++++++++++++++++++++++++++++
877 xe0: # logical name from topology.yaml and vnfd.yaml
879 driver: i40e # default kernel driver
881 local_ip: "152.16.100.20"
882 netmask: "255.255.255.0"
883 local_mac: "00:00:00:00:00:01"
884 xe1: # logical name from topology.yaml and vnfd.yaml
886 driver: i40e # default kernel driver
888 local_ip: "152.16.40.20"
889 netmask: "255.255.255.0"
890 local_mac: "00:00.00:00:00:02"
892 OVS-DPDK Config host_ovs.yaml
893 +++++++++++++++++++++++++++++
905 ovs_dpdk testcase update:
906 ``./samples/vnf_samples/nsut/vfw/tc_ovs_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``
908 Update contexts section
909 '''''''''''''''''''''''
916 file: /etc/yardstick/nodes/standalone/pod_trex.yaml
917 - type: StandaloneOvsDpdk
919 file: /etc/yardstick/nodes/standalone/pod_ovs.yaml
933 images: "/var/lib/libvirt/images/ubuntu.qcow2"
939 user: "" # update VM username
940 password: "" # update password
945 cidr: '1.1.1.61/24' # Update VM IP address, if static, <ip>/<mask> or if dynamic, <start of ip>/<mask>
952 phy_port: "0000:05:00.0"
954 cidr: '152.16.100.10/24'
955 gateway_ip: '152.16.100.20'
957 phy_port: "0000:05:00.1"
959 cidr: '152.16.40.10/24'
gateway_ip: '152.16.40.20'
962 OVS-DPDK configuration options
963 ++++++++++++++++++++++++++++++
There are a number of configuration options available for the OVS-DPDK context
in the test case. They are mostly used for performance tuning.
971 OVS-DPDK properties example under *ovs_properties* section:
973 .. code-block:: console
988 dpdk_pmd-rxq-affinity:
993 vhost_pmd-rxq-affinity:
999 OVS-DPDK properties description:
1001 +-------------------------+-------------------------------------------------+
1002 | Parameters | Detail |
1003 +=========================+=================================================+
1004 | version || Version of OVS and DPDK to be installed |
1005 | || There is a relation between OVS and DPDK |
1006 | | version which can be found at |
1007 | | `OVS-DPDK-versions`_ |
1008 | || By default OVS: 2.6.0, DPDK: 16.07.2 |
1009 +-------------------------+-------------------------------------------------+
1010 | lcore_mask || Core bitmask used during DPDK initialization |
1011 | | where the non-datapath OVS-DPDK threads such |
1012 | | as handler and revalidator threads run |
1013 +-------------------------+-------------------------------------------------+
1014 | pmd_cpu_mask || Core bitmask that sets which cores are used by |
1015 | || OVS-DPDK for datapath packet processing |
1016 +-------------------------+-------------------------------------------------+
1017 | pmd_threads || Number of PMD threads used by OVS-DPDK for |
1019 | || This core mask is evaluated in Yardstick |
1020 | || It will be used if pmd_cpu_mask is not given |
1022 +-------------------------+-------------------------------------------------+
1023 | ram || Amount of RAM to be used for each socket, MB |
1024 | || Default is 2048 MB |
1025 +-------------------------+-------------------------------------------------+
1026 | queues || Number of RX queues used for DPDK physical |
1028 +-------------------------+-------------------------------------------------+
1029 | dpdk_pmd-rxq-affinity || RX queue assignment to PMD threads for DPDK |
1030 | || e.g.: <port number> : <queue-id>:<core-id> |
1031 +-------------------------+-------------------------------------------------+
1032 | vhost_pmd-rxq-affinity || RX queue assignment to PMD threads for vhost |
1033 | || e.g.: <port number> : <queue-id>:<core-id> |
1034 +-------------------------+-------------------------------------------------+
1035 | vpath || User path for openvswitch files |
1036 | || Default is ``/usr/local`` |
1037 +-------------------------+-------------------------------------------------+
1038 | max_idle || The maximum time that idle flows will remain |
1039 | | cached in the datapath, ms |
1040 +-------------------------+-------------------------------------------------+
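``lcore_mask`` and ``pmd_cpu_mask`` are hexadecimal bitmasks with one bit per
logical core. The mask for a given list of cores can be derived as below;
cores 2 and 4 are arbitrary example choices:

```shell
# Build a CPU bitmask from a list of core IDs
mask=0
for core in 2 4; do
    mask=$(( mask | (1 << core) ))
done
pmd_cpu_mask=$(printf '0x%x' "$mask")
echo "$pmd_cpu_mask"   # 0x14
```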
1046 VM image properties example under *flavor* section:
1048 .. code-block:: console
1054 machine_type: 'pc-i440fx-xenial'
1061 <vcpupin vcpu="0" cpuset="7"/>
1062 <vcpupin vcpu="1" cpuset="8"/>
1064 <vcpupin vcpu="11" cpuset="18"/>
1065 <emulatorpin cpuset="11"/>
1068 VM image properties description:
1070 +-------------------------+-------------------------------------------------+
1071 | Parameters | Detail |
1072 +=========================+=================================================+
1073 | images || Path to the VM image generated by |
1074 | | ``nsb_setup.sh`` |
1075 | || Default path is ``/var/lib/libvirt/images/`` |
1076 | || Default file name ``yardstick-nsb-image.img`` |
1077 | | or ``yardstick-image.img`` |
1078 +-------------------------+-------------------------------------------------+
1079 | ram || Amount of RAM to be used for VM |
1080 | || Default is 4096 MB |
1081 +-------------------------+-------------------------------------------------+
1082 | hw:cpu_sockets || Number of sockets provided to the guest VM |
1084 +-------------------------+-------------------------------------------------+
1085 | hw:cpu_cores || Number of cores provided to the guest VM |
1087 +-------------------------+-------------------------------------------------+
1088 | hw:cpu_threads || Number of threads provided to the guest VM |
1090 +-------------------------+-------------------------------------------------+
1091 | hw_socket || Generate vcpu cpuset from given HW socket |
1093 +-------------------------+-------------------------------------------------+
1094 | cputune || Maps virtual cpu with logical cpu |
1095 +-------------------------+-------------------------------------------------+
1096 | machine_type || Machine type to be emulated in VM |
1097 | || Default is 'pc-i440fx-xenial' |
1098 +-------------------------+-------------------------------------------------+
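As an illustration of the table, a flavor section under the standalone context
may look like the following sketch; the image path and the CPU counts are
example values to adapt:

```yaml
flavor:
  images: "/var/lib/libvirt/images/yardstick-nsb-image.img"
  ram: 4096
  extra_specs:
    hw:cpu_sockets: 1
    hw:cpu_cores: 6
    hw:cpu_threads: 2
```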
1101 OpenStack with SR-IOV support
1102 -----------------------------
1104 This section describes how to run a Sample VNF test case, using Heat context,
1105 with SR-IOV. It also covers how to install OpenStack in Ubuntu 16.04, using
1106 DevStack, with SR-IOV support.
1109 Single node OpenStack with external TG
1110 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1112 .. code-block:: console
1114 +----------------------------+
1115 |OpenStack(DevStack) |
1117 | +--------------------+ |
1118 | |sample-VNF VM | |
1123 | +--------+ +--------+ |
1124 | | VF NIC | | VF NIC | |
1125 | +-----+--+--+----+---+ |
1128 +----------+ +---------+----------+-------+
1132 | TG | (PF0)<----->(PF0) +---------+ | |
1134 | | (PF1)<----->(PF1) +--------------------+ |
1136 +----------+ +----------------------------+
1140 Host pre-configuration
1141 ++++++++++++++++++++++
1143 .. warning:: The following configuration requires sudo access to the system.
   Make sure that your user has the required access.
1146 Enable the Intel VT-d or AMD-Vi extension in the BIOS. Some system
1147 manufacturers disable this extension by default.
1149 Activate the Intel VT-d or AMD-Vi extension in the kernel by modifying the GRUB
1150 config file ``/etc/default/grub``.
1152 For the Intel platform::
1155 GRUB_CMDLINE_LINUX_DEFAULT="intel_iommu=on"
1158 For the AMD platform::
1161 GRUB_CMDLINE_LINUX_DEFAULT="amd_iommu=on"
1164 Update the grub configuration file and restart the system:
1166 .. warning:: The following command will reboot the system.
1173 Make sure the extension has been enabled::
1175 sudo journalctl -b 0 | grep -e IOMMU -e DMAR
1177 Feb 06 14:50:14 hostname kernel: ACPI: DMAR 0x000000006C406000 0001E0 (v01 INTEL S2600WF 00000001 INTL 20091013)
1178 Feb 06 14:50:14 hostname kernel: DMAR: IOMMU enabled
1179 Feb 06 14:50:14 hostname kernel: DMAR: Host address width 46
1180 Feb 06 14:50:14 hostname kernel: DMAR: DRHD base: 0x000000d37fc000 flags: 0x0
1181 Feb 06 14:50:14 hostname kernel: DMAR: dmar0: reg_base_addr d37fc000 ver 1:0 cap 8d2078c106f0466 ecap f020de
1182 Feb 06 14:50:14 hostname kernel: DMAR: DRHD base: 0x000000e0ffc000 flags: 0x0
1183 Feb 06 14:50:14 hostname kernel: DMAR: dmar1: reg_base_addr e0ffc000 ver 1:0 cap 8d2078c106f0466 ecap f020de
1184 Feb 06 14:50:14 hostname kernel: DMAR: DRHD base: 0x000000ee7fc000 flags: 0x0
1186 .. TODO: Refer to the yardstick installation guide for proxy set up
1188 Setup system proxy (if needed). Add the following configuration into the
1189 ``/etc/environment`` file:
1191 .. note:: The proxy server name/port and IPs should be changed according to
1192 actual/current proxy configuration in the lab.
1196 export http_proxy=http://proxy.company.com:port
1197 export https_proxy=http://proxy.company.com:port
1198 export ftp_proxy=http://proxy.company.com:port
1199 export no_proxy=localhost,127.0.0.1,company.com,<IP-OF-HOST1>,<IP-OF-HOST2>,...
1200 export NO_PROXY=localhost,127.0.0.1,company.com,<IP-OF-HOST1>,<IP-OF-HOST2>,...
1206 sudo -EH apt-get update
1207 sudo -EH apt-get upgrade
1208 sudo -EH apt-get dist-upgrade
1210 Install dependencies needed for DevStack
1214 sudo -EH apt-get install python python-dev python-pip
Set up SR-IOV ports on the host:
.. note:: ``enp24s0f0`` and ``enp24s0f1`` are physical function (PF) interfaces
1219 on a host and ``enp24s0f3`` is a public interface used in OpenStack, so the
1220 interface names should be changed according to the HW environment used for
1225 sudo ip link set dev enp24s0f0 up
1226 sudo ip link set dev enp24s0f1 up
1227 sudo ip link set dev enp24s0f3 up
1230 echo 2 | sudo tee /sys/class/net/enp24s0f0/device/sriov_numvfs
1231 echo 2 | sudo tee /sys/class/net/enp24s0f1/device/sriov_numvfs
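Whether the VFs were actually created can be checked through sysfs; a sketch
where the PF name is the example interface used above:

```shell
# Report how many VFs the PF now exposes (interface name is an example)
pf=enp24s0f0
vf_file="/sys/class/net/$pf/device/sriov_numvfs"
if [ -f "$vf_file" ]; then
    msg="$pf exposes $(cat "$vf_file") VFs"
else
    msg="no SR-IOV capable PF named $pf on this host"
fi
echo "$msg"
```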
1234 DevStack installation
1235 +++++++++++++++++++++
If you want to try out NSB, but don't have OpenStack set up, you can use
`Devstack`_ to install OpenStack on a host. Please note that the
``stable/pike`` branch of the devstack repo should be used during the
installation. The required ``local.conf`` configuration file is described
below.
1242 DevStack configuration file:
.. note:: Update the devstack configuration file by replacing angular
   brackets with a short description inside.
1247 .. note:: Use ``lspci | grep Ether`` & ``lspci -n | grep <PCI ADDRESS>``
1248 commands to get device and vendor id of the virtual function (VF).
1250 .. literalinclude:: code/single-devstack-local.conf
1253 Start the devstack installation on a host.
1255 TG host configuration
1256 +++++++++++++++++++++
Yardstick automatically installs and configures the TRex traffic generator on
the TG host based on the provided POD file (see below). Nevertheless, it is
recommended to check the compatibility of the installed NIC on the TG server
with the TRex software using the
`manual <https://trex-tgn.cisco.com/trex/doc/trex_manual.html>`_.
1263 Run the Sample VNF test case
1264 ++++++++++++++++++++++++++++
1266 There is an example of Sample VNF test case ready to be executed in an
1267 OpenStack environment with SR-IOV support: ``samples/vnf_samples/nsut/vfw/
1268 tc_heat_sriov_external_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``.
1270 Install Yardstick using `Install Yardstick (NSB Testing)`_ steps for OpenStack
1273 Create pod file for TG in the yardstick repo folder located in the yardstick
.. note:: The ``ip``, ``user``, ``password`` and ``vpci`` fields should be changed
1277 according to HW environment used for the testing. Use ``lshw -c network -businfo``
1278 command to get the PF PCI address for ``vpci`` field.
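For illustration only, the PF PCI address lookup from
``lshw -c network -businfo`` output can be sketched as follows. The sample
output, PCI addresses and NIC model are assumptions; only the interface names
match the examples used earlier:

```python
import re

def pf_vpci(lshw_businfo_output, ifname):
    """Find the PCI address (vpci) for an interface in `lshw -c network -businfo` output."""
    for line in lshw_businfo_output.splitlines():
        match = re.match(r'pci@(\S+)\s+(\S+)', line)
        if match and match.group(2) == ifname:
            return match.group(1)
    return None  # interface not found

# Illustrative output (addresses and model are assumed values).
sample = """\
Bus info          Device      Class      Description
pci@0000:18:00.0  enp24s0f0   network    Ethernet Controller X710 for 10GbE SFP+
pci@0000:18:00.1  enp24s0f1   network    Ethernet Controller X710 for 10GbE SFP+
"""
print(pf_vpci(sample, "enp24s0f1"))  # 0000:18:00.1
```

The returned address is the value to place in the ``vpci`` field of the pod
file.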
.. literalinclude:: code/single-yardstick-pod.conf
Run the sample vFW RFC2544 SR-IOV TC (``samples/vnf_samples/nsut/vfw/
tc_heat_sriov_external_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``) in the heat
context using the steps described in the `NS testing - using yardstick CLI`_
section.
Multi node OpenStack TG and VNF setup (two nodes)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. code-block:: console

   +----------------------------+                   +----------------------------+
   |OpenStack(DevStack)         |                   |OpenStack(DevStack)         |
   |                            |                   |                            |
   |   +--------------------+   |                   |   +--------------------+   |
   |   |sample-VNF VM       |   |                   |   |sample-VNF VM       |   |
   |   |                    |   |                   |   |                    |   |
   |   |         TG         |   |                   |   |         DUT        |   |
   |   |    trafficgen_0    |   |                   |   |       (VNF)        |   |
   |   |                    |   |                   |   |                    |   |
   |   +--------+  +--------+   |                   |   +--------+  +--------+   |
   |   | VF NIC |  | VF NIC |   |                   |   | VF NIC |  | VF NIC |   |
   |   +----+---+--+----+---+   |                   |   +-----+--+--+----+---+   |
   |        |           |       |                   |         |          |       |
   |        |           |       |                   |         |          |       |
   +--------+-----------+-------+                   +---------+----------+-------+
   |      VF0         VF1       |                   |       VF0        VF1       |
   |        |           |       |                   |         |          |       |
   |        | SUT2      |       |                   |         | SUT1     |       |
   |        |           +-------+ (PF0)<----->(PF0) +---------+          |       |
   |        |                   |                   |                    |       |
   |        +-------------------+ (PF1)<----->(PF1) +--------------------+       |
   |                            |                   |                            |
   +----------------------------+                   +----------------------------+
   host2 (compute)                                   host1 (controller)
Controller/Compute pre-configuration
++++++++++++++++++++++++++++++++++++
Pre-configuration of the controller and compute hosts is the same as described
in the `Host pre-configuration`_ section.
DevStack configuration
++++++++++++++++++++++
A reference ``local.conf`` for deploying OpenStack in a multi-host environment
using `Devstack`_ is shown in this section. The ``stable/pike`` branch of the
devstack repo should be used during the installation.
.. note:: Update the devstack configuration files by replacing the angle-bracket
   placeholders (each contains a short description) with values appropriate
   for your environment.
.. note:: Use the ``lspci | grep Ether`` and ``lspci -n | grep <PCI ADDRESS>``
   commands to get the device and vendor id of the virtual function (VF).
DevStack configuration file for the controller host:

.. literalinclude:: code/multi-devstack-controller-local.conf

DevStack configuration file for the compute host:

.. literalinclude:: code/multi-devstack-compute-local.conf
Start the devstack installation on the controller and compute hosts.
Run the sample vFW TC
+++++++++++++++++++++
Install Yardstick using the `Install Yardstick (NSB Testing)`_ steps for the
OpenStack context.
Run the sample vFW RFC2544 SR-IOV test case
(``samples/vnf_samples/nsut/vfw/tc_heat_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``)
in the heat context using the steps described in the
`NS testing - using yardstick CLI`_ section and the following Yardstick
command line arguments:
.. code-block:: console

   yardstick -d task start --task-args='{"provider": "sriov"}' \
      samples/vnf_samples/nsut/vfw/tc_heat_rfc2544_ipv4_1rule_1flow_64B_trex.yaml
Enabling other Traffic generators
---------------------------------
IxLoad
^^^^^^

1. Software needed: IxLoadAPI ``<IxLoadTclApi version>Linux64.bin.tgz`` and
   ``<IxOS version>Linux64.bin.tar.gz`` (download from the Ixia support site).
   Install ``<IxLoadTclApi version>Linux64.bin.tgz`` and
   ``<IxOS version>Linux64.bin.tar.gz``.
   If the installation was not done inside the container, then after installing
   the IXIA client, check ``/opt/ixia/ixload/<ver>/bin/ixloadpython`` and make
   sure you can run this command inside the yardstick container. Usually the
   user is required to copy or link ``/opt/ixia/python/<ver>/bin/ixiapython``
   to ``/usr/bin/ixiapython<ver>`` inside the container.
2. Update the ``pod_ixia.yaml`` file with the Ixia details.
   .. code-block:: console

      cp ./etc/yardstick/nodes/pod.yaml.nsb.sample.ixia \
         /etc/yardstick/nodes/pod_ixia.yaml
   Configure ``pod_ixia.yaml``:
   .. literalinclude:: code/pod_ixia.yaml
   For SR-IOV and OVS-DPDK pod files, please refer to the
   `Standalone Virtualization`_ section for the OVS-DPDK/SR-IOV configuration.
3. Start the IxOS TCL Server (install 'Ixia IxExplorer IxOS <version>').
   You will also need to configure the IxLoad machine to start the IXIA
   IxosTclServer. This can be started like so:

   * Connect to the IxLoad machine using RDP.
   * Go to:
     ``Start->Programs->Ixia->IxOS->IxOS 8.01-GA-Patch1->Ixia Tcl Server IxOS 8.01-GA-Patch1``
     or run:
     ``C:\Program Files (x86)\Ixia\IxOS\8.01-GA-Patch1\ixTclServer.exe``
4. Create a folder ``Results`` in ``C:\`` and share the folder on the network.
5. Execute the test case in the samples folder, e.g.
   ``./samples/vnf_samples/nsut/vfw/tc_baremetal_http_ixload_1b_Requests-65000_Concurrency.yaml``
IxNetwork
^^^^^^^^^

IxNetwork testcases use the IxNetwork API Python Bindings module, which is
installed as part of the requirements of the project.
1. Update the ``pod_ixia.yaml`` file with the Ixia details.
   .. code-block:: console

      cp ./etc/yardstick/nodes/pod.yaml.nsb.sample.ixia \
         /etc/yardstick/nodes/pod_ixia.yaml
   Configure ``pod_ixia.yaml``:
   .. literalinclude:: code/pod_ixia.yaml
   For SR-IOV and OVS-DPDK pod files, please refer to the
   `Standalone Virtualization`_ section above for the OVS-DPDK/SR-IOV
   configuration.
2. Start the IxNetwork TCL Server.
   You will also need to configure the IxNetwork machine to start the IXIA
   IxNetworkTclServer. This can be started like so:

   * Connect to the IxNetwork machine using RDP.
   * Go to:
     ``Start->Programs->Ixia->IxNetwork->IxNetwork 7.21.893.14 GA->IxNetworkTclServer``
     (or ``IxNetworkApiServer``)
3. Execute the test case in the samples folder, e.g.
   ``./samples/vnf_samples/nsut/vfw/tc_baremetal_rfc2544_ipv4_1rule_1flow_64B_ixia.yaml``
Spirent Landslide
^^^^^^^^^^^^^^^^^

In order to use Spirent Landslide for vEPC testcases, some dependencies have
to be preinstalled and properly configured.
A 32-bit Java installation is required for the Spirent Landslide TCL API:

.. code-block:: console

   sudo apt-get install openjdk-8-jdk:i386
Make sure ``LD_LIBRARY_PATH`` is pointing to the 32-bit JRE. For more details
check the `Linux Troubleshooting <http://TAS_HOST_IP/tclapiinstall.html#trouble>`_
section of the installation instructions.
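For example, on Ubuntu the 32-bit JRE server libraries installed by
``openjdk-8-jdk:i386`` typically live under the path shown below; the exact
directory is an assumption, so verify it on your system before exporting:

```shell
# Point LD_LIBRARY_PATH at the 32-bit JRE server libs (path assumed for
# Ubuntu's openjdk-8-jdk:i386 package layout; adjust if yours differs).
export LD_LIBRARY_PATH=/usr/lib/jvm/java-8-openjdk-i386/jre/lib/i386/server:$LD_LIBRARY_PATH
echo "$LD_LIBRARY_PATH"
```

Adding the export to your shell profile keeps the setting across sessions.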
- LsApi (Tcl API module)

  Follow the Landslide documentation for detailed instructions on the Linux
  installation of the Tcl API and its dependencies:
  ``http://TAS_HOST_IP/tclapiinstall.html``.
  For working with the LsApi Python wrapper only steps 1-5 are required.
.. note:: After installation make sure your API home path is included in the
   ``PYTHONPATH`` environment variable.
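If the wrapper still cannot be imported, the equivalent of the ``PYTHONPATH``
entry can also be added at run time. A minimal sketch, assuming a hypothetical
``LSAPI_HOME`` variable and default location (both are illustrative, not part
of the Landslide installer):

```python
import os
import sys

# Hypothetical API home; replace with the directory where LsApi was installed.
lsapi_home = os.environ.get('LSAPI_HOME', '/opt/landslide/lsapi')

# Same effect as putting the directory on PYTHONPATH, for this process only.
if lsapi_home not in sys.path:
    sys.path.insert(0, lsapi_home)

print(lsapi_home in sys.path)  # True
```

Setting ``PYTHONPATH`` in the environment remains the persistent option; this
only helps for ad-hoc scripts.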
The current version of the LsApi module has an issue with reading
``LD_LIBRARY_PATH``. For the LsApi module to initialize correctly, the
following lines (184-186) of the LsApi module:
.. code-block:: python

   ldpath = os.environ.get('LD_LIBRARY_PATH', '')
   if ldpath == '':
       environ['LD_LIBRARY_PATH'] = environ['LD_LIBRARY_PATH'] + ':' + ldpath
should be changed to:
.. code-block:: python

   ldpath = os.environ.get('LD_LIBRARY_PATH', '')
   if not ldpath == '':
       environ['LD_LIBRARY_PATH'] = environ['LD_LIBRARY_PATH'] + ':' + ldpath
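To see why the guard matters, here is a toy sketch of the corrected logic; the
plain dict stands in for the module-level ``environ`` that LsApi patches, and
the paths are illustrative only:

```python
def append_ld_library_path(environ, ldpath):
    """Corrected LsApi behaviour: append ldpath only when it is non-empty."""
    if not ldpath == '':
        environ['LD_LIBRARY_PATH'] = environ['LD_LIBRARY_PATH'] + ':' + ldpath
    return environ

# Toy stand-in for the module's environ dictionary (path is illustrative).
env = {'LD_LIBRARY_PATH': '/opt/lsapi/lib'}

# Empty caller path: the guard leaves env untouched instead of appending ':'.
append_ld_library_path(env, '')
print(env['LD_LIBRARY_PATH'])  # /opt/lsapi/lib

# Non-empty caller path: it is appended after the module's own entry.
append_ld_library_path(env, '/usr/lib/jvm/jre/lib')
print(env['LD_LIBRARY_PATH'])  # /opt/lsapi/lib:/usr/lib/jvm/jre/lib
```

Without the ``not``, the original code only modified the path when the
caller's ``LD_LIBRARY_PATH`` was empty, which is the inverse of the intent.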
.. note:: The Spirent Landslide TCL software package needs to be updated in
   case the user upgrades to a new version of the Spirent Landslide software.