1 .. This work is licensed under a Creative Commons Attribution 4.0 International
3 .. http://creativecommons.org/licenses/by/4.0
4 .. (c) OPNFV, 2016-2019 Intel Corporation.
7 Convention for heading levels in Yardstick documentation:
9 ======= Heading 0 (reserved for the title in a document)
15 Avoid deeper levels because they do not render well.
22 .. _OVS-DPDK: http://docs.openvswitch.org/en/latest/intro/install/dpdk/
.. _devstack: https://docs.openstack.org/devstack/pike/
24 .. _OVS-DPDK-versions: http://docs.openvswitch.org/en/latest/faq/releases/
29 The steps needed to run Yardstick with NSB testing are:
31 * Install Yardstick (NSB Testing).
* Setup/reference ``pod.yaml`` describing the test topology.
33 * Create/reference the test configuration yaml file.
Refer to :doc:`04-installation` for more information on Yardstick
installation.
42 Several prerequisites are needed for Yardstick (VNF testing):
44 * Python Modules: pyzmq, pika.
55 Hardware & Software Ingredients
56 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
60 ======= ===================
62 ======= ===================
66 kernel 4.4.0-34-generic
68 ======= ===================
70 Boot and BIOS settings:
72 ============= =================================================
73 Boot settings default_hugepagesz=1G hugepagesz=1G hugepages=16
74 hugepagesz=2M hugepages=2048 isolcpus=1-11,22-33
75 nohz_full=1-11,22-33 rcu_nocbs=1-11,22-33
76 iommu=on iommu=pt intel_iommu=on
              Note: nohz_full and rcu_nocbs are used to disable Linux kernel interrupts
79 BIOS CPU Power and Performance Policy <Performance>
              Enhanced Intel® Speedstep® Tech Disabled
83 Hyper-Threading Technology (If supported) Enabled
              Virtualization Technology Enabled
85 Intel(R) VT for Direct I/O Enabled
88 ============= =================================================
90 Install Yardstick (NSB Testing)
91 -------------------------------
93 Yardstick with NSB can be installed using ``nsb_setup.sh``.
The ``nsb_setup.sh`` script allows you to:
96 1. Install Yardstick in specified mode: bare metal or container.
   Refer to :doc:`04-installation`.
2. Install package dependencies on remote servers used as traffic generator or
   sample VNF. Install DPDK, sample VNFs, TREX, collectd.
   Add such servers to the ``install-inventory.ini`` file, to either the
   ``yardstick-standalone`` or the ``yardstick-baremetal`` server group.
   The script also configures IOMMU, hugepages, open file limits, CPU
   isolation, etc.
3. Build the VM image, either ``nsb`` or ``normal``. The ``nsb`` VM image is
   used to run Yardstick sample VNF tests, like vFW, vACL, vCGNAPT, etc.
   The ``normal`` VM image is used to run Yardstick ping tests in the
   OpenStack context.
4. Add the ``nsb`` or ``normal`` VM image to OpenStack, together with the
   OpenStack variables.
First, configure the network proxy, either by using the environment variables
or by setting the global environment file (``/etc/environment``)::
113 http_proxy='http://proxy.company.com:port'
114 https_proxy='http://proxy.company.com:port'
116 .. code-block:: console
118 export http_proxy='http://proxy.company.com:port'
119 export https_proxy='http://proxy.company.com:port'
121 Download the source code and check out the latest stable branch
123 .. code-block:: console
125 git clone https://gerrit.opnfv.org/gerrit/yardstick
127 # Switch to latest stable branch
128 git checkout stable/gambia
130 Modify the Yardstick installation inventory used by Ansible::
132 cat ./ansible/install-inventory.ini
134 localhost ansible_connection=local
   # The section below is kept only for backward compatibility.
   # It will be removed later.
141 [yardstick-baremetal]
142 baremetal ansible_host=192.168.2.51 ansible_connection=ssh
144 [yardstick-standalone]
145 standalone ansible_host=192.168.2.52 ansible_connection=ssh
148 # Uncomment credentials below if needed
150 ansible_ssh_pass=root
151 # ansible_ssh_private_key_file=/root/.ssh/id_rsa
   # When IMG_PROPERTY is passed as neither 'normal' nor 'nsb', set
   # "path_to_img=/path/to/image" to add the image to OpenStack
154 # path_to_img=/tmp/workspace/yardstick-image.img
156 # List of CPUs to be isolated (not used by default)
157 # Grub line will be extended with:
158 # "isolcpus=<ISOL_CPUS> nohz=on nohz_full=<ISOL_CPUS> rcu_nocbs=1<ISOL_CPUS>"
   # ISOL_CPUS=2-27,30-55 # physical CPUs for all NUMA nodes, four CPUs reserved
Before running ``nsb_setup.sh``, make sure Python is installed on all servers
added to the ``yardstick-standalone`` or ``yardstick-baremetal`` groups.
168 SSH access without password needs to be configured for all your nodes
169 defined in ``install-inventory.ini`` file.
170 If you want to use password authentication you need to install ``sshpass``::
172 sudo -EH apt-get install sshpass
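
A minimal sketch of configuring passwordless SSH from the jump host (the node
IPs follow the inventory example above)::

   ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
   ssh-copy-id root@192.168.2.51
   ssh-copy-id root@192.168.2.52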
A VM image built by means other than Yardstick can be added to OpenStack.
Uncomment and set the correct path to the VM image in the
179 ``install-inventory.ini`` file::
181 path_to_img=/tmp/workspace/yardstick-image.img
CPU isolation can be applied to the remote servers, e.g.:
``ISOL_CPUS=2-27,30-55``. Uncomment and modify accordingly in the
``install-inventory.ini`` file.
By default, ``nsb_setup.sh`` pulls the Yardstick image based on Ubuntu 16.04
from Docker Hub and starts a container, builds the NSB VM image based on
Ubuntu 16.04, and installs packages on the servers given in the
``yardstick-standalone`` and ``yardstick-baremetal`` host groups.
To pull the Yardstick image based on Ubuntu 18.04, run::
197 ./nsb_setup.sh -i opnfv/yardstick-ubuntu-18.04:latest
To change the default behavior, modify the parameters for ``install.yaml``
in the ``nsb_setup.sh`` file.
Refer to chapter :doc:`04-installation` for more details on ``install.yaml``
205 To execute an installation for a **BareMetal** or a **Standalone context**::
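
   ./nsb_setup.sh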
209 To execute an installation for an **OpenStack** context::
211 ./nsb_setup.sh <path to admin-openrc.sh>
   Yardstick may not be operational after a distribution Linux kernel update
   if it has been installed before. Run ``nsb_setup.sh`` again to resolve
   this.
220 The Yardstick VM image (NSB or normal) cannot be built inside a VM.
   The ``nsb_setup.sh`` script configures hugepages, CPU isolation and IOMMU
   via GRUB. A reboot of the servers from the ``yardstick-standalone`` or
   ``yardstick-baremetal`` groups in the ``install-inventory.ini`` file is
   required to apply those changes.
The above commands will set up Docker with the latest Yardstick code. To
execute commands inside the container, run::
232 docker exec -it yardstick bash
   It may be necessary to configure the tty in the Docker container to
   extend the command-line character length, for example::
239 stty size rows 58 cols 234
241 It will also automatically download all the packages needed for NSB Testing
setup. Refer to chapter :doc:`04-installation` for more on Docker.
244 **Install Yardstick using Docker (recommended)**
246 Bare Metal context example
247 ^^^^^^^^^^^^^^^^^^^^^^^^^^
249 Let's assume there are three servers acting as TG, sample VNF DUT and jump host.
Perform the following steps to install NSB:
253 1. Clone Yardstick repo to jump host.
254 2. Add TG and DUT servers to ``yardstick-baremetal`` group in
   ``install-inventory.ini`` file to install NSB and dependencies. Install
   python on servers.
3. Start the deployment using the Docker image based on Ubuntu 16:
259 .. code-block:: console
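
      ./nsb_setup.sh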
263 4. Reboot bare metal servers.
5. Enter the yardstick container, modify the pod yaml file and run tests.
266 Standalone context example for Ubuntu 18
267 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
269 Let's assume there are three servers acting as TG, sample VNF DUT and jump host.
270 Ubuntu 18 is installed on all servers.
Perform the following steps to install NSB:
274 1. Clone Yardstick repo to jump host.
2. Add the TG server to the ``yardstick-baremetal`` group in the
   ``install-inventory.ini`` file to install NSB and dependencies.
   Add the server where the VM with the sample VNF will be deployed to the
   ``yardstick-standalone`` group in the ``install-inventory.ini`` file.
   The target VM image named ``yardstick-nsb-image.img`` will be placed in
   ``/var/lib/libvirt/images/``.
   Install python on servers.
282 3. Modify ``nsb_setup.sh`` on jump host:
284 .. code-block:: console
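
      ansible-playbook \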
287 -e IMAGE_PROPERTY='nsb' \
288 -e OS_RELEASE='bionic' \
289 -e INSTALLATION_MODE='container_pull' \
290 -e YARD_IMAGE_ARCH='amd64' ${extra_args} \
291 -i install-inventory.ini install.yaml
4. Start the deployment with the Yardstick Docker image based on Ubuntu 18:
295 .. code-block:: console
297 ./nsb_setup.sh -i opnfv/yardstick-ubuntu-18.04:latest -o <openrc_file>
6. Enter the yardstick container, modify the pod yaml file and run tests.
306 .. code-block:: console
308 +----------+ +----------+
314 +----------+ +----------+
318 Environment parameters and credentials
319 --------------------------------------
321 Configure yardstick.conf
322 ^^^^^^^^^^^^^^^^^^^^^^^^
324 If you did not run ``yardstick env influxdb`` inside the container to generate
``yardstick.conf``, then create the config file manually (run inside the
container)::
328 cp ./etc/yardstick/yardstick.conf.sample /etc/yardstick/yardstick.conf
329 vi /etc/yardstick/yardstick.conf
Add ``trex_path``, ``trex_client_lib`` and ``bin_path`` to the ``nsb``
section::
336 dispatcher = influxdb
338 [dispatcher_influxdb]
340 target = http://{YOUR_IP_HERE}:8086
346 trex_path=/opt/nsb_bin/trex/scripts
347 bin_path=/opt/nsb_bin
348 trex_client_lib=/opt/nsb_bin/trex_client/stl
350 Run Yardstick - Network Service Testcases
351 -----------------------------------------
353 NS testing - using yardstick CLI
354 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
356 See :doc:`04-installation`
358 Connect to the Yardstick container::
360 docker exec -it yardstick /bin/bash
If you're running ``heat`` testcases and ``nsb_setup.sh`` was not used::

   source /etc/yardstick/openstack.creds
In addition to the above, you need to set the ``EXTERNAL_NETWORK`` for
OpenStack::
368 export EXTERNAL_NETWORK="<openstack public network>"
370 Finally, you should be able to run the testcase::
372 yardstick --debug task start ./yardstick/samples/vnf_samples/nsut/<vnf>/<test case>
374 Network Service Benchmarking - Bare-Metal
375 -----------------------------------------
377 Bare-Metal Config pod.yaml describing Topology
378 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
380 Bare-Metal 2-Node setup
381 +++++++++++++++++++++++
382 .. code-block:: console
384 +----------+ +----------+
390 +----------+ +----------+
393 Bare-Metal 3-Node setup - Correlated Traffic
394 ++++++++++++++++++++++++++++++++++++++++++++
395 .. code-block:: console
397 +----------+ +----------+ +------------+
400 | | (0)----->(0) | | | UDP |
401 | TG1 | | DUT | | Replay |
403 | | | |(1)<---->(0)| |
404 +----------+ +----------+ +------------+
405 trafficgen_0 vnf trafficgen_1
408 Bare-Metal Config pod.yaml
409 ^^^^^^^^^^^^^^^^^^^^^^^^^^
410 Before executing Yardstick test cases, make sure that ``pod.yaml`` reflects the
topology and update all the required fields::
413 cp ./etc/yardstick/nodes/pod.yaml.nsb.sample /etc/yardstick/nodes/pod.yaml
425 xe0: # logical name from topology.yaml and vnfd.yaml
427 driver: i40e # default kernel driver
429 local_ip: "152.16.100.20"
430 netmask: "255.255.255.0"
431 local_mac: "00:00:00:00:00:01"
432 xe1: # logical name from topology.yaml and vnfd.yaml
434 driver: i40e # default kernel driver
436 local_ip: "152.16.40.20"
437 netmask: "255.255.255.0"
438 local_mac: "00:00.00:00:00:02"
    host: 1.1.1.2  # BM: host == ip; in a virtualized env, host is the compute node
448 xe0: # logical name from topology.yaml and vnfd.yaml
450 driver: i40e # default kernel driver
452 local_ip: "152.16.100.19"
453 netmask: "255.255.255.0"
454 local_mac: "00:00:00:00:00:03"
456 xe1: # logical name from topology.yaml and vnfd.yaml
458 driver: i40e # default kernel driver
460 local_ip: "152.16.40.19"
461 netmask: "255.255.255.0"
462 local_mac: "00:00:00:00:00:04"
464 - network: "152.16.100.20"
465 netmask: "255.255.255.0"
466 gateway: "152.16.100.20"
468 - network: "152.16.40.20"
469 netmask: "255.255.255.0"
470 gateway: "152.16.40.20"
473 - network: "0064:ff9b:0:0:0:0:9810:6414"
475 gateway: "0064:ff9b:0:0:0:0:9810:6414"
477 - network: "0064:ff9b:0:0:0:0:9810:2814"
479 gateway: "0064:ff9b:0:0:0:0:9810:2814"
483 Standalone Virtualization
484 -------------------------
The VM can be deployed manually or by Yardstick. If the parameter *vm_deploy*
is set to ``True``, the VM will be deployed by Yardstick; otherwise, the VM
should be deployed manually. Test case example, context section::
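
  contexts:
    - name: yardstick
      type: Node
      file: /etc/yardstick/nodes/standalone/pod_trex.yaml
    - type: StandaloneSriov
      file: /etc/yardstick/nodes/standalone/host_sriov.yaml
      name: yardstick
      vm_deploy: True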
498 SR-IOV Pre-requisites
499 +++++++++++++++++++++
On the host where the VM is created:

a) Create and configure a bridge named ``br-int`` for the VM to connect to
   the external network. Currently this can be done using a VXLAN tunnel.
Execute the following on the host where the VM is created::
507 ip link add type vxlan remote <Jumphost IP> local <DUT IP> id <ID: 10> dstport 4789
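   brctl addbr br-int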
509 brctl addif br-int vxlan0
510 ip link set dev vxlan0 up
511 ip addr add <IP#1, like: 172.20.2.1/24> dev br-int
512 ip link set dev br-int up
.. note:: You may need to add extra rules to iptables to forward traffic.
516 .. code-block:: console
518 iptables -A FORWARD -i br-int -s <network ip address>/<netmask> -j ACCEPT
519 iptables -A FORWARD -o br-int -d <network ip address>/<netmask> -j ACCEPT
521 Execute the following on a jump host:
523 .. code-block:: console
525 ip link add type vxlan remote <DUT IP> local <Jumphost IP> id <ID: 10> dstport 4789
526 ip addr add <IP#2, like: 172.20.2.2/24> dev vxlan0
527 ip link set dev vxlan0 up
529 .. note:: Host and jump host are different baremetal servers.
b) Modify the test case management CIDR.
   IP addresses IP#1, IP#2 and the CIDR must be in the same network, as in the
   sketch below.
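
   For example, in the test case file (the server name and CIDR shown are
   illustrative)::

      servers:
        vnf_0:
          network_ports:
            mgmt:
              cidr: '1.1.1.7/24'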
542 c) Build guest image for VNF to run.
543 Most of the sample test cases in Yardstick are using a guest image called
   ``yardstick-nsb-image`` which deviates from an Ubuntu Cloud Server image.
   Yardstick has a tool for building this custom image with SampleVNF.
546 It is necessary to have ``sudo`` rights to use this tool.
   You may also need to install several additional packages to use this tool, by
549 following the commands below::
551 sudo apt-get update && sudo apt-get install -y qemu-utils kpartx
553 This image can be built using the following command in the directory where
554 Yardstick is installed::
556 export YARD_IMG_ARCH='amd64'
557 sudo echo "Defaults env_keep += \'YARD_IMG_ARCH\'" >> /etc/sudoers
559 For instructions on generating a cloud image using Ansible, refer to
560 :doc:`04-installation`.
.. note:: The VM should be built with a static IP and be accessible from the
   yardstick host.
568 SR-IOV Config pod.yaml describing Topology
569 ++++++++++++++++++++++++++++++++++++++++++
573 .. code-block:: console
575 +--------------------+
581 +--------------------+
582 | VF NIC | | VF NIC |
583 +--------+ +--------+
587 +----------+ +-------------------------+
590 | | (0)<----->(0) | ------ SUT | |
592 | | (n)<----->(n) | ----------------- |
594 +----------+ +-------------------------+
599 SR-IOV 3-Node setup - Correlated Traffic
600 ++++++++++++++++++++++++++++++++++++++++
601 .. code-block:: console
603 +--------------------+
609 +--------------------+
610 | VF NIC | | VF NIC |
611 +--------+ +--------+
615 +----------+ +---------------------+ +--------------+
618 | | (0)<----->(0) |----- | | | TG2 |
619 | TG1 | | SUT | | | (UDP Replay) |
621 | | (n)<----->(n) | -----| (n)<-->(n) | |
622 +----------+ +---------------------+ +--------------+
623 trafficgen_0 host trafficgen_1
625 Before executing Yardstick test cases, make sure that ``pod.yaml`` reflects the
626 topology and update all the required fields.
628 .. code-block:: console
630 cp ./etc/yardstick/nodes/standalone/trex_bm.yaml.sample /etc/yardstick/nodes/standalone/pod_trex.yaml
631 cp ./etc/yardstick/nodes/standalone/host_sriov.yaml /etc/yardstick/nodes/standalone/host_sriov.yaml
633 .. note:: Update all the required fields like ip, user, password, pcis, etc...
635 SR-IOV Config pod_trex.yaml
636 +++++++++++++++++++++++++++
647 key_filename: /root/.ssh/id_rsa
649 xe0: # logical name from topology.yaml and vnfd.yaml
651 driver: i40e # default kernel driver
653 local_ip: "152.16.100.20"
654 netmask: "255.255.255.0"
655 local_mac: "00:00:00:00:00:01"
656 xe1: # logical name from topology.yaml and vnfd.yaml
658 driver: i40e # default kernel driver
660 local_ip: "152.16.40.20"
661 netmask: "255.255.255.0"
662 local_mac: "00:00.00:00:00:02"
664 SR-IOV Config host_sriov.yaml
665 +++++++++++++++++++++++++++++
677 SR-IOV testcase update:
678 ``./samples/vnf_samples/nsut/vfw/tc_sriov_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``
680 Update contexts section
681 '''''''''''''''''''''''
688 file: /etc/yardstick/nodes/standalone/pod_trex.yaml
689 - type: StandaloneSriov
690 file: /etc/yardstick/nodes/standalone/host_sriov.yaml
694 images: "/var/lib/libvirt/images/ubuntu.qcow2"
700 user: "" # update VM username
701 password: "" # update password
706 cidr: '1.1.1.61/24' # Update VM IP address, if static, <ip>/<mask> or if dynamic, <start of ip>/<mask>
713 phy_port: "0000:05:00.0"
715 cidr: '152.16.100.10/24'
716 gateway_ip: '152.16.100.20'
718 phy_port: "0000:05:00.1"
720 cidr: '152.16.40.10/24'
        gateway_ip: '152.16.40.20'
724 SRIOV configuration options
725 +++++++++++++++++++++++++++
The only configuration option available for SR-IOV is *vpci*. It is used as
the base address for the VFs that are created during the SR-IOV test case
execution.
730 .. code-block:: yaml+jinja
734 phy_port: "0000:05:00.0"
736 cidr: '152.16.100.10/24'
737 gateway_ip: '152.16.100.20'
739 phy_port: "0000:05:00.1"
741 cidr: '152.16.40.10/24'
        gateway_ip: '152.16.40.20'
744 .. _`VM image properties label`:
749 VM image properties example under *flavor* section:
751 .. code-block:: console
757 machine_type: 'pc-i440fx-xenial'
764 <vcpupin vcpu="0" cpuset="7"/>
765 <vcpupin vcpu="1" cpuset="8"/>
767 <vcpupin vcpu="11" cpuset="18"/>
768 <emulatorpin cpuset="11"/>
773 VM image properties description:
775 +-------------------------+-------------------------------------------------+
776 | Parameters | Detail |
777 +=========================+=================================================+
778 | images || Path to the VM image generated by |
779 | | ``nsb_setup.sh`` |
780 | || Default path is ``/var/lib/libvirt/images/`` |
781 | || Default file name ``yardstick-nsb-image.img`` |
782 | | or ``yardstick-image.img`` |
783 +-------------------------+-------------------------------------------------+
784 | ram || Amount of RAM to be used for VM |
785 | || Default is 4096 MB |
786 +-------------------------+-------------------------------------------------+
787 | hw:cpu_sockets || Number of sockets provided to the guest VM |
789 +-------------------------+-------------------------------------------------+
790 | hw:cpu_cores || Number of cores provided to the guest VM |
792 +-------------------------+-------------------------------------------------+
793 | hw:cpu_threads || Number of threads provided to the guest VM |
795 +-------------------------+-------------------------------------------------+
796 | hw_socket || Generate vcpu cpuset from given HW socket |
798 +-------------------------+-------------------------------------------------+
799 | cputune || Maps virtual cpu with logical cpu |
800 +-------------------------+-------------------------------------------------+
801 | machine_type || Machine type to be emulated in VM |
802 | || Default is 'pc-i440fx-xenial' |
803 +-------------------------+-------------------------------------------------+
804 | user || User name to access the VM |
805 | || Default value is 'root' |
806 +-------------------------+-------------------------------------------------+
807 | password || Password to access the VM |
808 +-------------------------+-------------------------------------------------+
814 OVS-DPDK Pre-requisites
815 +++++++++++++++++++++++
On the host where the VM is created:

a) Create and configure a bridge named ``br-int`` for the VM to connect to
   the external network. Currently this can be done using a VXLAN tunnel.
Execute the following on the host where the VM is created:
823 .. code-block:: console
825 ip link add type vxlan remote <Jumphost IP> local <DUT IP> id <ID: 10> dstport 4789
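   brctl addbr br-int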
827 brctl addif br-int vxlan0
828 ip link set dev vxlan0 up
829 ip addr add <IP#1, like: 172.20.2.1/24> dev br-int
830 ip link set dev br-int up
.. note:: You may need to add extra rules to iptables to forward traffic.
834 .. code-block:: console
836 iptables -A FORWARD -i br-int -s <network ip address>/<netmask> -j ACCEPT
837 iptables -A FORWARD -o br-int -d <network ip address>/<netmask> -j ACCEPT
839 Execute the following on a jump host:
841 .. code-block:: console
843 ip link add type vxlan remote <DUT IP> local <Jumphost IP> id <ID: 10> dstport 4789
844 ip addr add <IP#2, like: 172.20.2.2/24> dev vxlan0
845 ip link set dev vxlan0 up
847 .. note:: Host and jump host are different baremetal servers.
b) Modify the test case management CIDR.
   IP addresses IP#1, IP#2 and the CIDR must be in the same network.
860 c) Build guest image for VNF to run.
861 Most of the sample test cases in Yardstick are using a guest image called
   ``yardstick-nsb-image`` which deviates from an Ubuntu Cloud Server image.
   Yardstick has a tool for building this custom image with SampleVNF.
864 It is necessary to have ``sudo`` rights to use this tool.
866 You may need to install several additional packages to use this tool, by
867 following the commands below::
869 sudo apt-get update && sudo apt-get install -y qemu-utils kpartx
871 This image can be built using the following command in the directory where
872 Yardstick is installed::
874 export YARD_IMG_ARCH='amd64'
875 sudo echo "Defaults env_keep += \'YARD_IMG_ARCH\'" >> /etc/sudoers
876 sudo tools/yardstick-img-dpdk-modify tools/ubuntu-server-cloudimg-samplevnf-modify.sh
For more details refer to chapter :doc:`04-installation`.
.. note:: The VM should be built with a static IP and should be accessible from
   the yardstick host.
883 3. OVS & DPDK version.
884 * OVS 2.7 and DPDK 16.11.1 above version is supported
4. Set up `OVS-DPDK`_ on the host.
889 OVS-DPDK Config pod.yaml describing Topology
890 ++++++++++++++++++++++++++++++++++++++++++++
892 OVS-DPDK 2-Node setup
893 +++++++++++++++++++++
895 .. code-block:: console
897 +--------------------+
903 +--------------------+
904 | virtio | | virtio |
905 +--------+ +--------+
909 +--------+ +--------+
910 | vHOST0 | | vHOST1 |
911 +----------+ +-------------------------+
914 | | (0)<----->(0) | ------ | |
917 | | (n)<----->(n) |------------------ |
918 +----------+ +-------------------------+
922 OVS-DPDK 3-Node setup - Correlated Traffic
923 ++++++++++++++++++++++++++++++++++++++++++
925 .. code-block:: console
927 +--------------------+
933 +--------------------+
934 | virtio | | virtio |
935 +--------+ +--------+
939 +--------+ +--------+
940 | vHOST0 | | vHOST1 |
941 +----------+ +-------------------------+ +------------+
944 | | (0)<----->(0) | ------ | | | TG2 |
945 | TG1 | | SUT | | |(UDP Replay)|
946 | | | (ovs-dpdk) | | | |
947 | | (n)<----->(n) | ------ |(n)<-->(n)| |
948 +----------+ +-------------------------+ +------------+
949 trafficgen_0 host trafficgen_1
952 Before executing Yardstick test cases, make sure that the ``pod.yaml`` reflects
953 the topology and update all the required fields::
955 cp ./etc/yardstick/nodes/standalone/trex_bm.yaml.sample /etc/yardstick/nodes/standalone/pod_trex.yaml
956 cp ./etc/yardstick/nodes/standalone/host_ovs.yaml /etc/yardstick/nodes/standalone/host_ovs.yaml
958 .. note:: Update all the required fields like ip, user, password, pcis, etc...
960 OVS-DPDK Config pod_trex.yaml
961 +++++++++++++++++++++++++++++
973 xe0: # logical name from topology.yaml and vnfd.yaml
975 driver: i40e # default kernel driver
977 local_ip: "152.16.100.20"
978 netmask: "255.255.255.0"
979 local_mac: "00:00:00:00:00:01"
980 xe1: # logical name from topology.yaml and vnfd.yaml
982 driver: i40e # default kernel driver
984 local_ip: "152.16.40.20"
985 netmask: "255.255.255.0"
986 local_mac: "00:00.00:00:00:02"
988 OVS-DPDK Config host_ovs.yaml
989 +++++++++++++++++++++++++++++
1001 ovs_dpdk testcase update:
1002 ``./samples/vnf_samples/nsut/vfw/tc_ovs_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``
1004 Update contexts section
1005 '''''''''''''''''''''''
1007 .. code-block:: YAML
1012 file: /etc/yardstick/nodes/standalone/pod_trex.yaml
1013 - type: StandaloneOvsDpdk
1015 file: /etc/yardstick/nodes/standalone/pod_ovs.yaml
1029 images: "/var/lib/libvirt/images/ubuntu.qcow2"
1035 user: "" # update VM username
1036 password: "" # update password
1041 cidr: '1.1.1.61/24' # Update VM IP address, if static, <ip>/<mask> or if dynamic, <start of ip>/<mask>
1048 phy_port: "0000:05:00.0"
1049 vpci: "0000:00:07.0"
1050 cidr: '152.16.100.10/24'
1051 gateway_ip: '152.16.100.20'
1053 phy_port: "0000:05:00.1"
1054 vpci: "0000:00:08.0"
1055 cidr: '152.16.40.10/24'
        gateway_ip: '152.16.40.20'
1058 OVS-DPDK configuration options
1059 ++++++++++++++++++++++++++++++
There are a number of configuration options available for the OVS-DPDK
context in the test case. They are mostly used for performance tuning.
1064 OVS-DPDK properties:
1065 ''''''''''''''''''''
1067 OVS-DPDK properties example under *ovs_properties* section:
1069 .. code-block:: console
1076 pmd_cpu_mask: "0x3c"
1084 dpdk_pmd-rxq-affinity:
1089 vhost_pmd-rxq-affinity:
1095 OVS-DPDK properties description:
1097 +-------------------------+-------------------------------------------------+
1098 | Parameters | Detail |
1099 +=========================+=================================================+
1100 | version || Version of OVS and DPDK to be installed |
1101 | || There is a relation between OVS and DPDK |
1102 | | version which can be found at |
1103 | | `OVS-DPDK-versions`_ |
1104 | || By default OVS: 2.6.0, DPDK: 16.07.2 |
1105 +-------------------------+-------------------------------------------------+
1106 | lcore_mask || Core bitmask used during DPDK initialization |
1107 | | where the non-datapath OVS-DPDK threads such |
1108 | | as handler and revalidator threads run |
1109 +-------------------------+-------------------------------------------------+
1110 | pmd_cpu_mask || Core bitmask that sets which cores are used by |
1111 | || OVS-DPDK for datapath packet processing |
1112 +-------------------------+-------------------------------------------------+
1113 | pmd_threads || Number of PMD threads used by OVS-DPDK for |
1115 | || This core mask is evaluated in Yardstick |
1116 | || It will be used if pmd_cpu_mask is not given |
1118 +-------------------------+-------------------------------------------------+
1119 | ram || Amount of RAM to be used for each socket, MB |
1120 | || Default is 2048 MB |
1121 +-------------------------+-------------------------------------------------+
1122 | queues || Number of RX queues used for DPDK physical |
1124 +-------------------------+-------------------------------------------------+
1125 | dpdk_pmd-rxq-affinity || RX queue assignment to PMD threads for DPDK |
1126 | || e.g.: <port number> : <queue-id>:<core-id> |
1127 +-------------------------+-------------------------------------------------+
1128 | vhost_pmd-rxq-affinity || RX queue assignment to PMD threads for vhost |
1129 | || e.g.: <port number> : <queue-id>:<core-id> |
1130 +-------------------------+-------------------------------------------------+
1131 | vpath || User path for openvswitch files |
1132 | || Default is ``/usr/local`` |
1133 +-------------------------+-------------------------------------------------+
1134 | max_idle || The maximum time that idle flows will remain |
1135 | | cached in the datapath, ms |
1136 +-------------------------+-------------------------------------------------+
VM image properties are the same as for SR-IOV, see :ref:`VM image properties label`.
1145 OpenStack with SR-IOV support
1146 -----------------------------
1148 This section describes how to run a Sample VNF test case, using Heat context,
1149 with SR-IOV. It also covers how to install OpenStack in Ubuntu 16.04, using
1150 DevStack, with SR-IOV support.
1153 Single node OpenStack with external TG
1154 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1156 .. code-block:: console
1158 +----------------------------+
1159 |OpenStack(DevStack) |
1161 | +--------------------+ |
1162 | |sample-VNF VM | |
1167 | +--------+ +--------+ |
1168 | | VF NIC | | VF NIC | |
1169 | +-----+--+--+----+---+ |
1172 +----------+ +---------+----------+-------+
1176 | TG | (PF0)<----->(PF0) +---------+ | |
1178 | | (PF1)<----->(PF1) +--------------------+ |
1180 +----------+ +----------------------------+
1184 Host pre-configuration
1185 ++++++++++++++++++++++
1187 .. warning:: The following configuration requires sudo access to the system.
   Make sure that your user has this access.
1190 Enable the Intel VT-d or AMD-Vi extension in the BIOS. Some system
1191 manufacturers disable this extension by default.
1193 Activate the Intel VT-d or AMD-Vi extension in the kernel by modifying the GRUB
1194 config file ``/etc/default/grub``.
1196 For the Intel platform::
1199 GRUB_CMDLINE_LINUX_DEFAULT="intel_iommu=on"
1202 For the AMD platform::
1205 GRUB_CMDLINE_LINUX_DEFAULT="amd_iommu=on"
1208 Update the grub configuration file and restart the system:
1210 .. warning:: The following command will reboot the system.
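
.. code-block:: console

   sudo update-grub
   sudo reboot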
1217 Make sure the extension has been enabled::
1219 sudo journalctl -b 0 | grep -e IOMMU -e DMAR
1221 Feb 06 14:50:14 hostname kernel: ACPI: DMAR 0x000000006C406000 0001E0 (v01 INTEL S2600WF 00000001 INTL 20091013)
1222 Feb 06 14:50:14 hostname kernel: DMAR: IOMMU enabled
1223 Feb 06 14:50:14 hostname kernel: DMAR: Host address width 46
1224 Feb 06 14:50:14 hostname kernel: DMAR: DRHD base: 0x000000d37fc000 flags: 0x0
1225 Feb 06 14:50:14 hostname kernel: DMAR: dmar0: reg_base_addr d37fc000 ver 1:0 cap 8d2078c106f0466 ecap f020de
1226 Feb 06 14:50:14 hostname kernel: DMAR: DRHD base: 0x000000e0ffc000 flags: 0x0
1227 Feb 06 14:50:14 hostname kernel: DMAR: dmar1: reg_base_addr e0ffc000 ver 1:0 cap 8d2078c106f0466 ecap f020de
1228 Feb 06 14:50:14 hostname kernel: DMAR: DRHD base: 0x000000ee7fc000 flags: 0x0
1230 .. TODO: Refer to the yardstick installation guide for proxy set up
1232 Setup system proxy (if needed). Add the following configuration into the
1233 ``/etc/environment`` file:
1235 .. note:: The proxy server name/port and IPs should be changed according to
1236 actual/current proxy configuration in the lab.
1240 export http_proxy=http://proxy.company.com:port
1241 export https_proxy=http://proxy.company.com:port
1242 export ftp_proxy=http://proxy.company.com:port
1243 export no_proxy=localhost,127.0.0.1,company.com,<IP-OF-HOST1>,<IP-OF-HOST2>,...
1244 export NO_PROXY=localhost,127.0.0.1,company.com,<IP-OF-HOST1>,<IP-OF-HOST2>,...
1250 sudo -EH apt-get update
1251 sudo -EH apt-get upgrade
1252 sudo -EH apt-get dist-upgrade
1254 Install dependencies needed for DevStack
1258 sudo -EH apt-get install python python-dev python-pip
1260 Setup SR-IOV ports on the host:
.. note:: ``enp24s0f0`` and ``enp24s0f1`` are physical function (PF) interfaces
1263 on a host and ``enp24s0f3`` is a public interface used in OpenStack, so the
1264 interface names should be changed according to the HW environment used for
1269 sudo ip link set dev enp24s0f0 up
1270 sudo ip link set dev enp24s0f1 up
1271 sudo ip link set dev enp24s0f3 up
1274 echo 2 | sudo tee /sys/class/net/enp24s0f0/device/sriov_numvfs
1275 echo 2 | sudo tee /sys/class/net/enp24s0f1/device/sriov_numvfs
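
To verify that the VFs were created, the PCI device list can be checked (a
quick sanity check; the exact output depends on the NIC)::

   lspci | grep -i "Virtual Function"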
1278 DevStack installation
1279 +++++++++++++++++++++
If you want to try out NSB, but don't have OpenStack set up, you can use
`Devstack`_ to install OpenStack on a host. Please note that the
``stable/pike`` branch of the devstack repo should be used during the
installation. The required ``local.conf`` configuration file is described
below.
1286 DevStack configuration file:
.. note:: Update the devstack configuration file by replacing angular brackets
1289 with a short description inside.
1291 .. note:: Use ``lspci | grep Ether`` & ``lspci -n | grep <PCI ADDRESS>``
1292 commands to get device and vendor id of the virtual function (VF).
1294 .. literalinclude:: code/single-devstack-local.conf
1297 Start the devstack installation on a host.
1299 TG host configuration
1300 +++++++++++++++++++++
Yardstick automatically installs and configures the Trex traffic generator on
the TG host based on the provided POD file (see below). However, it is
recommended to check the compatibility of the installed NIC on the TG server
with software Trex
1305 using the `manual <https://trex-tgn.cisco.com/trex/doc/trex_manual.html>`_.
1307 Run the Sample VNF test case
1308 ++++++++++++++++++++++++++++
1310 There is an example of Sample VNF test case ready to be executed in an
1311 OpenStack environment with SR-IOV support: ``samples/vnf_samples/nsut/vfw/
1312 tc_heat_sriov_external_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``.
Install Yardstick using the `Install Yardstick (NSB Testing)`_ steps for the
OpenStack context.
Create a pod file for the TG in the yardstick repo folder located in the
yardstick container.
.. note:: The ``ip``, ``user``, ``password`` and ``vpci`` fields should be changed
1321 according to HW environment used for the testing. Use ``lshw -c network -businfo``
1322 command to get the PF PCI address for ``vpci`` field.
1324 .. literalinclude:: code/single-yardstick-pod.conf
1327 Run the Sample vFW RFC2544 SR-IOV TC (``samples/vnf_samples/nsut/vfw/
1328 tc_heat_sriov_external_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``) in the heat
1329 context using steps described in `NS testing - using yardstick CLI`_ section.
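
Assuming the container setup described above, the command line may look as
follows::

   yardstick -d task start samples/vnf_samples/nsut/vfw/tc_heat_sriov_external_rfc2544_ipv4_1rule_1flow_64B_trex.yaml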
1332 Multi node OpenStack TG and VNF setup (two nodes)
1333 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1335 .. code-block:: console
1337 +----------------------------+ +----------------------------+
1338 |OpenStack(DevStack) | |OpenStack(DevStack) |
1340 | +--------------------+ | | +--------------------+ |
1341 | |sample-VNF VM | | | |sample-VNF VM | |
1343 | | TG | | | | DUT | |
1344 | | trafficgen_0 | | | | (VNF) | |
1346 | +--------+ +--------+ | | +--------+ +--------+ |
1347 | | VF NIC | | VF NIC | | | | VF NIC | | VF NIC | |
1348 | +----+---+--+----+---+ | | +-----+--+--+----+---+ |
1351 +--------+-----------+-------+ +---------+----------+-------+
1352 | VF0 VF1 | | VF0 VF1 |
1354 | | SUT2 | | | | SUT1 | |
1355 | | +-------+ (PF0)<----->(PF0) +---------+ | |
1357 | +-------------------+ (PF1)<----->(PF1) +--------------------+ |
1359 +----------------------------+ +----------------------------+
1360 host2 (compute) host1 (controller)
1363 Controller/Compute pre-configuration
1364 ++++++++++++++++++++++++++++++++++++
Pre-configuration of the controller and compute hosts is the same as
described in the `Host pre-configuration`_ section.
1369 DevStack configuration
1370 ++++++++++++++++++++++
1372 A reference ``local.conf`` for deploying OpenStack in a multi-host environment
1373 using `Devstack`_ is shown in this section. The ``stable/pike`` branch of
1374 devstack repo should be used during the installation.
.. note:: Update the devstack configuration files by replacing angular brackets
1377 with a short description inside.
1379 .. note:: Use ``lspci | grep Ether`` & ``lspci -n | grep <PCI ADDRESS>``
1380 commands to get device and vendor id of the virtual function (VF).
1382 DevStack configuration file for controller host:
1384 .. literalinclude:: code/multi-devstack-controller-local.conf
1387 DevStack configuration file for compute host:
1389 .. literalinclude:: code/multi-devstack-compute-local.conf
1392 Start the devstack installation on the controller and compute hosts.
1394 Run the sample vFW TC
1395 +++++++++++++++++++++
Install Yardstick using the `Install Yardstick (NSB Testing)`_ steps for the
OpenStack context.
1400 Run the sample vFW RFC2544 SR-IOV test case
1401 (``samples/vnf_samples/nsut/vfw/tc_heat_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``)
1402 in the heat context using steps described in
`NS testing - using yardstick CLI`_ section and the following Yardstick command
line::
1408 yardstick -d task start --task-args='{"provider": "sriov"}' \
1409 samples/vnf_samples/nsut/vfw/tc_heat_rfc2544_ipv4_1rule_1flow_64B_trex.yaml
1412 Enabling other Traffic generators
1413 ---------------------------------
1. Software needed: IxLoadAPI ``<IxLoadTclApi version>Linux64.bin.tgz`` and
   ``<IxOS version>Linux64.bin.tar.gz`` (download from the Ixia support site).
   Install ``<IxLoadTclApi version>Linux64.bin.tgz`` and
   ``<IxOS version>Linux64.bin.tar.gz``.
1422 If the installation was not done inside the container, after installing
1423 the IXIA client, check ``/opt/ixia/ixload/<ver>/bin/ixloadpython`` and make
   sure you can run this command inside the yardstick container. Usually the
   user is required to copy or link ``/opt/ixia/python/<ver>/bin/ixiapython``
   to ``/usr/bin/ixiapython<ver>`` inside the container, as sketched below.
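
   For example (a sketch; the actual version directories depend on the
   installed IXIA software)::

      ln -s /opt/ixia/python/<ver>/bin/ixiapython /usr/bin/ixiapython<ver>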
1428 2. Update ``pod_ixia.yaml`` file with ixia details.
1430 .. code-block:: console
1432 cp ./etc/yardstick/nodes/pod.yaml.nsb.sample.ixia \
1433 /etc/yardstick/nodes/pod_ixia.yaml
   Configure ``pod_ixia.yaml``
1437 .. literalinclude:: code/pod_ixia.yaml
   For sriov/ovs_dpdk pod files, please refer to `Standalone Virtualization`_
   for the ovs-dpdk/sriov configuration.
1443 3. Start IxOS TCL Server (Install 'Ixia IxExplorer IxOS <version>')
1444 You will also need to configure the IxLoad machine to start the IXIA
1445 IxosTclServer. This can be started like so:
1447 * Connect to the IxLoad machine using RDP
1449 ``Start->Programs->Ixia->IxOS->IxOS 8.01-GA-Patch1->Ixia Tcl Server IxOS 8.01-GA-Patch1``
1451 ``C:\Program Files (x86)\Ixia\IxOS\8.01-GA-Patch1\ixTclServer.exe``
4. Create a folder ``Results`` in ``C:\`` and share the folder on the network.
5. Execute the testcase in the samplevnf folder, e.g.
1456 ``./samples/vnf_samples/nsut/vfw/tc_baremetal_http_ixload_1b_Requests-65000_Concurrency.yaml``
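
   The corresponding CLI call follows the pattern used throughout this
   guide::

      yardstick --debug task start ./samples/vnf_samples/nsut/vfw/tc_baremetal_http_ixload_1b_Requests-65000_Concurrency.yaml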
IxNetwork testcases use the IxNetwork API Python Bindings module, which is
installed as part of the requirements of the project.
1464 1. Update ``pod_ixia.yaml`` file with ixia details.
1466 .. code-block:: console
1468 cp ./etc/yardstick/nodes/pod.yaml.nsb.sample.ixia \
1469 /etc/yardstick/nodes/pod_ixia.yaml
1471 Configure ``pod_ixia.yaml``
1473 .. literalinclude:: code/pod_ixia.yaml
   For sriov/ovs_dpdk pod files, please refer to `Standalone Virtualization`_
   above for the ovs-dpdk/sriov configuration.
1479 2. Start IxNetwork TCL Server
1480 You will also need to configure the IxNetwork machine to start the IXIA
1481 IxNetworkTclServer. This can be started like so:
1483 * Connect to the IxNetwork machine using RDP
1485 ``Start->Programs->Ixia->IxNetwork->IxNetwork 7.21.893.14 GA->IxNetworkTclServer``
1486 (or ``IxNetworkApiServer``)
3. Execute the testcase in the samplevnf folder, e.g.
1489 ``./samples/vnf_samples/nsut/vfw/tc_baremetal_rfc2544_ipv4_1rule_1flow_64B_ixia.yaml``
1494 In order to use Spirent Landslide for vEPC testcases, some dependencies have
1495 to be preinstalled and properly configured.
1499 32-bit Java installation is required for the Spirent Landslide TCL API.
1501 | ``$ sudo apt-get install openjdk-8-jdk:i386``
   Make sure ``LD_LIBRARY_PATH`` is pointing to the 32-bit JRE. For more
   details check the `Linux Troubleshooting
   <http://TAS_HOST_IP/tclapiinstall.html#trouble>`_ section of the
   installation instructions.
1508 - LsApi (Tcl API module)
1510 Follow Landslide documentation for detailed instructions on Linux
1511 installation of Tcl API and its dependencies
1512 ``http://TAS_HOST_IP/tclapiinstall.html``.
1513 For working with LsApi Python wrapper only steps 1-5 are required.
1515 .. note:: After installation make sure your API home path is included in
1516 ``PYTHONPATH`` environment variable.
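
For example, assuming a hypothetical API installation path of ``/opt/lsapi``::

   export PYTHONPATH=$PYTHONPATH:/opt/lsapi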
The current version of the LsApi module has an issue with reading
``LD_LIBRARY_PATH``. For the LsApi module to initialize correctly, the
following lines (184-186) in
1523 .. code-block:: python
1525 ldpath = os.environ.get('LD_LIBRARY_PATH', '')
1527 environ['LD_LIBRARY_PATH'] = environ['LD_LIBRARY_PATH'] + ':' + ldpath
1529 should be changed to:
1531 .. code-block:: python
1533 ldpath = os.environ.get('LD_LIBRARY_PATH', '')
1534 if not ldpath == '':
1535 environ['LD_LIBRARY_PATH'] = environ['LD_LIBRARY_PATH'] + ':' + ldpath
.. note:: The Spirent Landslide TCL software package needs to be updated in
   case the user upgrades to a new version of the Spirent Landslide software.