.. This work is licensed under a Creative Commons Attribution 4.0 International
.. License.
.. http://creativecommons.org/licenses/by/4.0
.. (c) OPNFV, 2016-2019 Intel Corporation.
.. Convention for heading levels in Yardstick documentation:

   =======  Heading 0 (reserved for the title in a document)
   -------  Heading 1
   ^^^^^^^  Heading 2
   +++++++  Heading 3
   '''''''  Heading 4

   Avoid deeper levels because they do not render well.
22 .. _OVS-DPDK: http://docs.openvswitch.org/en/latest/intro/install/dpdk/
.. _devstack: https://docs.openstack.org/devstack/pike/
24 .. _OVS-DPDK-versions: http://docs.openvswitch.org/en/latest/faq/releases/
The steps needed to run Yardstick with NSB testing are:

* Install Yardstick (NSB testing).
* Set up or reference a ``pod.yaml`` file describing the test topology.
* Create or reference the test configuration YAML file.
* Run the test case.
Prerequisites
-------------

Refer to :doc:`04-installation` for more information on Yardstick
prerequisites.

Several prerequisites are needed for Yardstick (VNF testing):

* Python modules: pyzmq, pika
55 Hardware & Software Ingredients
56 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
======= ===================
Item    Description
======= ===================
kernel  4.4.0-34-generic
======= ===================
Boot and BIOS settings:

============= =================================================
Boot settings default_hugepagesz=1G hugepagesz=1G hugepages=16
              hugepagesz=2M hugepages=2048 isolcpus=1-11,22-33
              nohz_full=1-11,22-33 rcu_nocbs=1-11,22-33
              iommu=on iommu=pt intel_iommu=on
              Note: nohz_full and rcu_nocbs are used to disable
              Linux kernel interrupts on the isolated cores
BIOS          CPU Power and Performance Policy <Performance>
              Enhanced Intel® SpeedStep® Tech Disabled
              Hyper-Threading Technology (If supported) Enabled
              Virtualization Technology Enabled
              Intel(R) VT for Direct I/O Enabled
============= =================================================
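
After rebooting with these settings it is worth verifying that the kernel
actually picked them up, e.g.::

   cat /proc/cmdline         # hugepages, isolcpus and IOMMU flags
   grep Huge /proc/meminfo   # allocated hugepages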
90 Install Yardstick (NSB Testing)
91 -------------------------------
Yardstick with NSB can be installed using ``nsb_setup.sh``.
The ``nsb_setup.sh`` script allows you to:
1. Install Yardstick in the specified mode: bare metal or container.
   Refer to :doc:`04-installation`.
2. Install package dependencies on remote servers used as traffic generator or
   sample VNF: DPDK, sample VNFs, TRex, collectd.
   Add such servers to the ``install-inventory.ini`` file, to either the
   ``yardstick-standalone`` or the ``yardstick-baremetal`` server group.
   The script configures IOMMU, hugepages, open file limits, CPU isolation,
   etc.
3. Build a VM image, either ``nsb`` or ``normal``. The ``nsb`` VM image is
   used to run Yardstick sample VNF tests, like vFW, vACL, vCGNAPT, etc.
   The ``normal`` VM image is used to run Yardstick ping tests in an
   OpenStack context.
4. Add the ``nsb`` or ``normal`` VM image to OpenStack together with OpenStack
   variables.
First, configure the network proxy, either by using environment variables or
by setting the global environment file.

Set the environment in the file::

    http_proxy='http://proxy.company.com:port'
    https_proxy='http://proxy.company.com:port'
Set environment variables:

.. code-block:: console

    export http_proxy='http://proxy.company.com:port'
    export https_proxy='http://proxy.company.com:port'
Download the source code and check out the latest stable branch:

.. code-block:: console

    git clone https://gerrit.opnfv.org/gerrit/yardstick
    cd yardstick
    # Switch to latest stable branch
    git checkout stable/gambia
Modify the Yardstick installation inventory used by Ansible::

   cat ./ansible/install-inventory.ini
   [jumphost]
   localhost ansible_connection=local

   # Section below is only for backward compatibility.
   # It will be removed later.
   [yardstick:children]
   jumphost

   [yardstick-baremetal]
   baremetal ansible_host=192.168.2.51 ansible_connection=ssh

   [yardstick-standalone]
   standalone ansible_host=192.168.2.52 ansible_connection=ssh

   [all:vars]
   # Uncomment credentials below if needed
   ansible_user=root
   ansible_ssh_pass=root
   # ansible_ssh_private_key_file=/root/.ssh/id_rsa
   # When IMAGE_PROPERTY is passed as neither normal nor nsb, set
   # "path_to_img=/path/to/image" to add it to OpenStack
   # path_to_img=/tmp/workspace/yardstick-image.img

   # List of CPUs to be isolated (not used by default)
   # Grub line will be extended with:
   # "isolcpus=<ISOL_CPUS> nohz=on nohz_full=<ISOL_CPUS> rcu_nocbs=<ISOL_CPUS>"
   # ISOL_CPUS=2-27,30-55 # physical CPUs for all NUMA nodes, four CPUs reserved
.. note:: Before running ``nsb_setup.sh``, make sure Python is installed on
   all servers added to the ``yardstick-standalone`` and
   ``yardstick-baremetal`` groups.
Passwordless SSH access needs to be configured for all your nodes defined in
the ``install-inventory.ini`` file.
If you want to use password authentication instead, you need to install
``sshpass``::

    sudo -EH apt-get install sshpass
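
To confirm that key-based access works before running the installation, a
quick check similar to the following can be used (``192.168.2.51`` is the
sample ``yardstick-baremetal`` host from the inventory above)::

    # Should complete without an interactive password prompt
    ssh -o BatchMode=yes root@192.168.2.51 true && echo "SSH OK"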
A VM image built by means other than Yardstick can be added to OpenStack.
Uncomment and set the correct path to the VM image in the
``install-inventory.ini`` file::

    path_to_img=/tmp/workspace/yardstick-image.img
CPU isolation can be applied to the remote servers, e.g.
``ISOL_CPUS=2-27,30-55``. Uncomment and modify it accordingly in the
``install-inventory.ini`` file.
By default, ``nsb_setup.sh`` pulls the Yardstick image based on Ubuntu 16.04
from Docker Hub and starts a container, builds the NSB VM image based on
Ubuntu 16.04, and installs packages on the servers given in the
``yardstick-standalone`` and ``yardstick-baremetal`` host groups.
To pull a Yardstick image based on Ubuntu 18.04 instead, run::

    ./nsb_setup.sh -i opnfv/yardstick-ubuntu-18.04:latest
To change the default behavior, modify the parameters for ``install.yaml`` in
the ``nsb_setup.sh`` file.

Refer to :doc:`04-installation` for more details on ``install.yaml``
parameters.
To execute an installation for a **BareMetal** or a **Standalone context**::

    ./nsb_setup.sh

To execute an installation for an **OpenStack** context::

    ./nsb_setup.sh <path to admin-openrc.sh>
.. warning:: Yardstick may not be operational after a distribution Linux
   kernel update if it was installed beforehand. Run ``nsb_setup.sh`` again to
   resolve this.
.. warning:: The Yardstick VM image (NSB or normal) cannot be built inside a
   VM.
.. warning:: ``nsb_setup.sh`` configures hugepages, CPU isolation and IOMMU in
   the GRUB configuration. A reboot of the servers in the
   ``yardstick-standalone`` or ``yardstick-baremetal`` groups of the
   ``install-inventory.ini`` file is required to apply those changes.
The above commands will set up Docker with the latest Yardstick code. To
enter the container, execute::

    docker exec -it yardstick bash
.. note:: It may be necessary to configure the tty in the Docker container to
   extend the command line character length, for example::

      stty rows 58 cols 234
It will also automatically download all the packages needed for the NSB
testing setup. Refer to :doc:`04-installation` for more on Docker:
:ref:`Install Yardstick using Docker`.
249 Bare Metal context example
250 ^^^^^^^^^^^^^^^^^^^^^^^^^^
252 Let's assume there are three servers acting as TG, sample VNF DUT and jump host.
Perform the following steps to install NSB:
1. Clone the Yardstick repo to the jump host.
2. Add the TG and DUT servers to the ``yardstick-baremetal`` group in the
   ``install-inventory.ini`` file to install NSB and dependencies. Install
   Python on the servers.
3. Start the deployment using the Docker image based on Ubuntu 16:

   .. code-block:: console

      ./nsb_setup.sh

4. Reboot the bare metal servers.
5. Enter the yardstick container, modify the pod yaml file and run tests, as
   shown below.
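
For example, to verify that the Yardstick container is up and to get a shell
in it (the container name ``yardstick`` is the one created by
``nsb_setup.sh``)::

   docker ps --filter name=yardstick   # the container should be listed as Up
   docker exec -it yardstick bash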
269 Standalone context example for Ubuntu 18
270 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
272 Let's assume there are three servers acting as TG, sample VNF DUT and jump host.
273 Ubuntu 18 is installed on all servers.
Perform the following steps to install NSB:
1. Clone the Yardstick repo to the jump host.
2. Add the TG server to the ``yardstick-baremetal`` group in the
   ``install-inventory.ini`` file to install NSB and dependencies.
   Add the server where the VM with the sample VNF will be deployed to the
   ``yardstick-standalone`` group in the ``install-inventory.ini`` file.
   The target VM image named ``yardstick-nsb-image.img`` will be placed in
   ``/var/lib/libvirt/images/``.
   Install Python on the servers.
3. Modify ``nsb_setup.sh`` on the jump host:

   .. code-block:: console

      ansible-playbook \
        -e IMAGE_PROPERTY='nsb' \
        -e OS_RELEASE='bionic' \
        -e INSTALLATION_MODE='container_pull' \
        -e YARD_IMAGE_ARCH='amd64' ${extra_args} \
        -i install-inventory.ini install.yaml
4. Start the deployment with the Yardstick Docker image based on Ubuntu 18:

   .. code-block:: console

      ./nsb_setup.sh -i opnfv/yardstick-ubuntu-18.04:latest -o <openrc_file>

5. Reboot the servers.
6. Enter the yardstick container, modify the pod yaml file and run tests.
System Topology
---------------

.. code-block:: console

   +----------+              +----------+
   |          |              |          |
   |          | (0)----->(0) |          |
   |    TG1   |              |    DUT   |
   |          |              |          |
   |          | (1)<-----(1) |          |
   +----------+              +----------+
   trafficgen_0                  vnf
321 Environment parameters and credentials
322 --------------------------------------
324 Configure yardstick.conf
325 ^^^^^^^^^^^^^^^^^^^^^^^^
If you did not run ``yardstick env influxdb`` inside the container to generate
``yardstick.conf``, then create the config file manually (run inside the
container)::

    cp ./etc/yardstick/yardstick.conf.sample /etc/yardstick/yardstick.conf
    vi /etc/yardstick/yardstick.conf
Add ``trex_path``, ``trex_client_lib`` and ``bin_path`` to the ``nsb``
section::

   [DEFAULT]
   dispatcher = influxdb

   [dispatcher_influxdb]
   target = http://{YOUR_IP_HERE}:8086

   [nsb]
   trex_path=/opt/nsb_bin/trex/scripts
   bin_path=/opt/nsb_bin
   trex_client_lib=/opt/nsb_bin/trex_client/stl
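
Before running test cases it can be worth checking that the InfluxDB target
configured above is reachable from inside the container; ``/ping`` is the
standard InfluxDB health endpoint and should return ``204``::

   curl -s -o /dev/null -w "%{http_code}\n" http://{YOUR_IP_HERE}:8086/ping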
355 Run Yardstick - Network Service Testcases
356 -----------------------------------------
358 NS testing - using yardstick CLI
359 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
361 See :doc:`04-installation`
Connect to the Yardstick container::

    docker exec -it yardstick /bin/bash
If you're running ``heat`` testcases and ``nsb_setup.sh`` was not used::

    source /etc/yardstick/openstack.creds
In addition to the above, you need to set the ``EXTERNAL_NETWORK`` for
OpenStack::

    export EXTERNAL_NETWORK="<openstack public network>"
Finally, you should be able to run the testcase::

    yardstick --debug task start yardstick/samples/vnf_samples/nsut/<vnf>/<test case>
380 Network Service Benchmarking - Bare-Metal
381 -----------------------------------------
383 Bare-Metal Config pod.yaml describing Topology
384 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
386 Bare-Metal 2-Node setup
387 +++++++++++++++++++++++
.. code-block:: console

   +----------+              +----------+
   |          |              |          |
   |          | (0)----->(0) |          |
   |    TG1   |              |    DUT   |
   |          |              |          |
   |          | (1)<-----(1) |          |
   +----------+              +----------+
   trafficgen_0                  vnf
399 Bare-Metal 3-Node setup - Correlated Traffic
400 ++++++++++++++++++++++++++++++++++++++++++++
.. code-block:: console

   +----------+              +----------+            +------------+
   |          |              |          |            |            |
   |          | (0)----->(0) |          |            |    UDP     |
   |    TG1   |              |   DUT    |            |   Replay   |
   |          |              |          |            |            |
   |          |              |          |(1)<---->(0)|            |
   +----------+              +----------+            +------------+
   trafficgen_0                  vnf                  trafficgen_1
414 Bare-Metal Config pod.yaml
415 ^^^^^^^^^^^^^^^^^^^^^^^^^^
Before executing Yardstick test cases, make sure that ``pod.yaml`` reflects the
topology and update all the required fields::

    cp <yardstick>/etc/yardstick/nodes/pod.yaml.nsb.sample /etc/yardstick/nodes/pod.yaml
    nodes:
    -
        name: trafficgen_0
        role: TrafficGen
        ip: 1.1.1.1
        user: root
        password: r00t
        interfaces:
            xe0:  # logical name from topology.yaml and vnfd.yaml
                driver: i40e # default kernel driver
                local_ip: "152.16.100.20"
                netmask: "255.255.255.0"
                local_mac: "00:00:00:00:00:01"
            xe1:  # logical name from topology.yaml and vnfd.yaml
                driver: i40e # default kernel driver
                local_ip: "152.16.40.20"
                netmask: "255.255.255.0"
                local_mac: "00:00:00:00:00:02"
    -
        name: vnf
        role: vnf
        ip: 1.1.1.2
        user: root
        password: r00t
        host: 1.1.1.2 # BM - host == ip, virtualized env - Host == compute node
        interfaces:
            xe0:  # logical name from topology.yaml and vnfd.yaml
                driver: i40e # default kernel driver
                local_ip: "152.16.100.19"
                netmask: "255.255.255.0"
                local_mac: "00:00:00:00:00:03"
            xe1:  # logical name from topology.yaml and vnfd.yaml
                driver: i40e # default kernel driver
                local_ip: "152.16.40.19"
                netmask: "255.255.255.0"
                local_mac: "00:00:00:00:00:04"
        routing_table:
        - network: "152.16.100.20"
          netmask: "255.255.255.0"
          gateway: "152.16.100.20"
        - network: "152.16.40.20"
          netmask: "255.255.255.0"
          gateway: "152.16.40.20"
        nd_route_tbl:
        - network: "0064:ff9b:0:0:0:0:9810:6414"
          gateway: "0064:ff9b:0:0:0:0:9810:6414"
        - network: "0064:ff9b:0:0:0:0:9810:2814"
          gateway: "0064:ff9b:0:0:0:0:9810:2814"
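
When filling in these fields, the NIC details can be read off the servers
themselves, for example (the interface name ``ens801f0`` is only an
illustration)::

    lshw -c network -businfo   # PCI addresses of the NICs
    ethtool -i ens801f0        # kernel driver, e.g. i40e
    ip link show ens801f0      # MAC address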
489 Standalone Virtualization
490 -------------------------
The VM can be deployed manually or by Yardstick. If the parameter *vm_deploy*
is set to ``True``, the VM will be deployed by Yardstick; otherwise the VM
should be deployed manually. Test case example, context section::

   contexts:
     - name: yardstick
       type: StandaloneSriov
       ...
       vm_deploy: True
SR-IOV
^^^^^^

SR-IOV Pre-requisites
+++++++++++++++++++++
On the host where the VM is created:

1. Create and configure a bridge named ``br-int`` for the VM to connect to the
   external network. Currently this can be done using a VXLAN tunnel.

Execute the following on the host where the VM is created::

   ip link add type vxlan remote <Jumphost IP> local <DUT IP> id <ID: 10> dstport 4789
   brctl addbr br-int
   brctl addif br-int vxlan0
   ip link set dev vxlan0 up
   ip addr add <IP#1, like: 172.20.2.1/24> dev br-int
   ip link set dev br-int up
.. note:: You may need to add extra rules to iptables to forward traffic.

.. code-block:: console

   iptables -A FORWARD -i br-int -s <network ip address>/<netmask> -j ACCEPT
   iptables -A FORWARD -o br-int -d <network ip address>/<netmask> -j ACCEPT
Execute the following on the jump host:

.. code-block:: console

   ip link add type vxlan remote <DUT IP> local <Jumphost IP> id <ID: 10> dstport 4789
   ip addr add <IP#2, like: 172.20.2.2/24> dev vxlan0
   ip link set dev vxlan0 up
.. note:: The host and the jump host are different bare metal servers.
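
Assuming the example addresses above, the tunnel can be verified from the host
where the VM is created as follows::

   brctl show br-int      # vxlan0 should be listed as a bridge port
   ping -c 3 172.20.2.2   # IP#2, the jump host end of the tunnel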
2. Modify the test case management CIDR.
   The IP addresses IP#1, IP#2 and the CIDR must be in the same network.
3. Build the guest image for the VNF to run.
   Most of the sample test cases in Yardstick use a guest image called
   ``yardstick-nsb-image``, which deviates from an Ubuntu Cloud Server image.
   Yardstick has a tool for building this custom image with SampleVNF.
   It is necessary to have ``sudo`` rights to use this tool.

You may also need to install several additional packages to use this tool, by
following the commands below::

   sudo apt-get update && sudo apt-get install -y qemu-utils kpartx
This image can be built using the following commands in the directory where
Yardstick is installed::

   export YARD_IMG_ARCH='amd64'
   echo "Defaults env_keep += 'YARD_IMG_ARCH'" | sudo tee -a /etc/sudoers
   sudo tools/yardstick-img-dpdk-modify tools/ubuntu-server-cloudimg-samplevnf-modify.sh
For instructions on generating a cloud image using Ansible, refer to
:doc:`04-installation`.

.. note:: The VM should be built with a static IP and be accessible from the
   Yardstick host.
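
After the build completes, the image can be inspected with ``qemu-img``; the
path below is the default location mentioned earlier and may differ in your
setup::

   sudo qemu-img info /var/lib/libvirt/images/yardstick-nsb-image.img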
SR-IOV Config pod.yaml describing Topology
++++++++++++++++++++++++++++++++++++++++++

SR-IOV 2-Node setup
+++++++++++++++++++

.. code-block:: console

                                +--------------------+
                                |                    |
                                |                    |
                                |        DUT         |
                                |       (VNF)        |
                                |                    |
                                +--------------------+
                                | VF NIC |  | VF NIC |
                                +--------+  +--------+
                                      ^          ^
                                      |          |
                                      |          |
   +----------+               +-------------------------+
   |          |               |       ^          ^      |
   |          |               |       |          |      |
   |          | (0)<----->(0) | ------  SUT      |      |
   |    TG1   |               |                  |      |
   |          |               |                  |      |
   |          | (n)<----->(n) | -----------------       |
   +----------+               +-------------------------+
   trafficgen_0                          host
SR-IOV 3-Node setup - Correlated Traffic
++++++++++++++++++++++++++++++++++++++++

.. code-block:: console

                                +--------------------+
                                |                    |
                                |                    |
                                |        DUT         |
                                |       (VNF)        |
                                |                    |
                                +--------------------+
                                | VF NIC |  | VF NIC |
                                +--------+  +--------+
                                      ^          ^
                                      |          |
                                      |          |
   +----------+               +---------------------+            +--------------+
   |          |               |     ^          ^    |            |              |
   |          |               |     |          |    |            |              |
   |          | (0)<----->(0) |-----           |    |            |      TG2     |
   |    TG1   |               |        SUT     |    |            | (UDP Replay) |
   |          |               |                |    |            |              |
   |          | (n)<----->(n) |  --------------     | (n)<-->(n) |              |
   +----------+               +---------------------+            +--------------+
   trafficgen_0                        host                       trafficgen_1
Before executing Yardstick test cases, make sure that ``pod.yaml`` reflects the
topology and update all the required fields.

.. code-block:: console

   cp <yardstick>/etc/yardstick/nodes/standalone/trex_bm.yaml.sample /etc/yardstick/nodes/standalone/pod_trex.yaml
   cp <yardstick>/etc/yardstick/nodes/standalone/host_sriov.yaml /etc/yardstick/nodes/standalone/host_sriov.yaml
.. note:: Update all the required fields like ip, user, password, pcis, etc.
SR-IOV Config pod_trex.yaml
+++++++++++++++++++++++++++

.. code-block:: YAML

   nodes:
   -
       name: trafficgen_0
       role: TrafficGen
       ip: 1.1.1.1
       user: root
       password: r00t
       key_filename: /root/.ssh/id_rsa
       interfaces:
           xe0:  # logical name from topology.yaml and vnfd.yaml
               driver: i40e # default kernel driver
               local_ip: "152.16.100.20"
               netmask: "255.255.255.0"
               local_mac: "00:00:00:00:00:01"
           xe1:  # logical name from topology.yaml and vnfd.yaml
               driver: i40e # default kernel driver
               local_ip: "152.16.40.20"
               netmask: "255.255.255.0"
               local_mac: "00:00:00:00:00:02"
SR-IOV Config host_sriov.yaml
+++++++++++++++++++++++++++++

.. code-block:: YAML

   nodes:
   -
      name: sriov
      role: Sriov
      ip: 192.168.100.101
      user: ""
      password: ""
SR-IOV testcase update:
``<yardstick>/samples/vnf_samples/nsut/vfw/tc_sriov_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``

Update contexts section
'''''''''''''''''''''''

.. code-block:: YAML

   contexts:
   - name: yardstick
     type: Node
     file: /etc/yardstick/nodes/standalone/pod_trex.yaml
   - type: StandaloneSriov
     file: /etc/yardstick/nodes/standalone/host_sriov.yaml
     name: yardstick
     vm_deploy: True
     flavor:
       images: "/var/lib/libvirt/images/ubuntu.qcow2"
       ram: 4096
       extra_specs:
         hw:cpu_sockets: 1
         hw:cpu_cores: 6
         hw:cpu_threads: 2
       user: "" # update VM username
       password: "" # update password
     servers:
       vnf_0:
         network_ports:
           mgmt:
             cidr: '1.1.1.61/24' # Update VM IP address, if static, <ip>/<mask> or if dynamic, <start of ip>/<mask>
           xe0:
             - uplink_0
           xe1:
             - downlink_0
     networks:
       uplink_0:
         phy_port: "0000:05:00.0"
         vpci: "0000:00:07.0"
         cidr: '152.16.100.10/24'
         gateway_ip: '152.16.100.20'
       downlink_0:
         phy_port: "0000:05:00.1"
         vpci: "0000:00:08.0"
         cidr: '152.16.40.10/24'
         gateway_ip: '152.16.100.20'
SR-IOV configuration options
++++++++++++++++++++++++++++
The only configuration option available for SR-IOV is *vpci*. It is used as
the base address for the VFs that are created during the SR-IOV test case
execution.
.. code-block:: yaml+jinja

  networks:
    uplink_0:
      phy_port: "0000:05:00.0"
      vpci: "0000:00:07.0"
      cidr: '152.16.100.10/24'
      gateway_ip: '152.16.100.20'
    downlink_0:
      phy_port: "0000:05:00.1"
      vpci: "0000:00:08.0"
      cidr: '152.16.40.10/24'
      gateway_ip: '152.16.100.20'
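
During a test run the VFs created from the ``phy_port`` devices above can be
observed on the standalone host, e.g.::

   lspci -D | grep -i "Virtual Function"   # VFs spawned from 0000:05:00.0/1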
.. _`VM image properties label`:

VM image properties
'''''''''''''''''''

VM image properties example under the *flavor* section:

.. code-block:: console

    flavor:
      images: /var/lib/libvirt/images/yardstick-nsb-image.img
      ram: 4096
      extra_specs:
        machine_type: 'pc-i440fx-xenial'
        hw:cpu_sockets: 1
        hw:cpu_cores: 6
        hw:cpu_threads: 2
        hw_socket: 0
        cputune: |
          <cputune>
            <vcpupin vcpu="0" cpuset="7"/>
            <vcpupin vcpu="1" cpuset="8"/>
            ...
            <vcpupin vcpu="11" cpuset="18"/>
            <emulatorpin cpuset="11"/>
          </cputune>
VM image properties description:

+-------------------------+-------------------------------------------------+
| Parameters              | Detail                                          |
+=========================+=================================================+
| images                  || Path to the VM image generated by              |
|                         |  ``nsb_setup.sh``                               |
|                         || Default path is ``/var/lib/libvirt/images/``   |
|                         || Default file name ``yardstick-nsb-image.img``  |
|                         |  or ``yardstick-image.img``                     |
+-------------------------+-------------------------------------------------+
| ram                     || Amount of RAM to be used for VM                |
|                         || Default is 4096 MB                             |
+-------------------------+-------------------------------------------------+
| hw:cpu_sockets          || Number of sockets provided to the guest VM     |
+-------------------------+-------------------------------------------------+
| hw:cpu_cores            || Number of cores provided to the guest VM       |
+-------------------------+-------------------------------------------------+
| hw:cpu_threads          || Number of threads provided to the guest VM     |
+-------------------------+-------------------------------------------------+
| hw_socket               || Generate vcpu cpuset from given HW socket      |
+-------------------------+-------------------------------------------------+
| cputune                 || Maps virtual CPUs to logical CPUs              |
+-------------------------+-------------------------------------------------+
| machine_type            || Machine type to be emulated in VM              |
|                         || Default is 'pc-i440fx-xenial'                  |
+-------------------------+-------------------------------------------------+
| user                    || User name to access the VM                     |
|                         || Default value is 'root'                        |
+-------------------------+-------------------------------------------------+
| password                || Password to access the VM                      |
+-------------------------+-------------------------------------------------+
OVS-DPDK
^^^^^^^^

OVS-DPDK Pre-requisites
+++++++++++++++++++++++
On the host where the VM is created:

1. Create and configure a bridge named ``br-int`` for the VM to connect to the
   external network. Currently this can be done using a VXLAN tunnel.

Execute the following on the host where the VM is created:

.. code-block:: console

   ip link add type vxlan remote <Jumphost IP> local <DUT IP> id <ID: 10> dstport 4789
   brctl addbr br-int
   brctl addif br-int vxlan0
   ip link set dev vxlan0 up
   ip addr add <IP#1, like: 172.20.2.1/24> dev br-int
   ip link set dev br-int up
.. note:: You may need to add extra rules to iptables to forward traffic.
.. code-block:: console

   iptables -A FORWARD -i br-int -s <network ip address>/<netmask> -j ACCEPT
   iptables -A FORWARD -o br-int -d <network ip address>/<netmask> -j ACCEPT
Execute the following on the jump host:

.. code-block:: console

   ip link add type vxlan remote <DUT IP> local <Jumphost IP> id <ID: 10> dstport 4789
   ip addr add <IP#2, like: 172.20.2.2/24> dev vxlan0
   ip link set dev vxlan0 up
.. note:: The host and the jump host are different bare metal servers.
2. Modify the test case management CIDR.
   The IP addresses IP#1, IP#2 and the CIDR must be in the same network.
3. Build the guest image for the VNF to run.
   Most of the sample test cases in Yardstick use a guest image called
   ``yardstick-nsb-image``, which deviates from an Ubuntu Cloud Server image.
   Yardstick has a tool for building this custom image with SampleVNF.
   It is necessary to have ``sudo`` rights to use this tool.
You may need to install several additional packages to use this tool, by
following the commands below::

   sudo apt-get update && sudo apt-get install -y qemu-utils kpartx
This image can be built using the following commands in the directory where
Yardstick is installed::

   export YARD_IMG_ARCH='amd64'
   echo "Defaults env_keep += 'YARD_IMG_ARCH'" | sudo tee -a /etc/sudoers
   sudo tools/yardstick-img-dpdk-modify tools/ubuntu-server-cloudimg-samplevnf-modify.sh
For more details refer to :doc:`04-installation`.
.. note:: The VM should be built with a static IP and should be accessible
   from the Yardstick host.
4. OVS & DPDK version:

   * OVS 2.7 and DPDK 16.11.1 and above are supported.

Refer to the setup instructions at `OVS-DPDK`_ for the host.
893 OVS-DPDK Config pod.yaml describing Topology
894 ++++++++++++++++++++++++++++++++++++++++++++
896 OVS-DPDK 2-Node setup
897 +++++++++++++++++++++
.. code-block:: console

                                +--------------------+
                                |                    |
                                |                    |
                                |        DUT         |
                                |       (VNF)        |
                                |                    |
                                +--------------------+
                                | virtio |  | virtio |
                                +--------+  +--------+
                                     ^          ^
                                     |          |
                                     |          |
                                +--------+  +--------+
                                | vHOST0 |  | vHOST1 |
   +----------+               +-------------------------+
   |          |               |       ^          ^      |
   |          |               |       |          |      |
   |          | (0)<----->(0) | ------  SUT      |      |
   |    TG1   |               |      (ovs-dpdk)  |      |
   |          |               |                  |      |
   |          | (n)<----->(n) | -----------------       |
   +----------+               +-------------------------+
   trafficgen_0                          host
926 OVS-DPDK 3-Node setup - Correlated Traffic
927 ++++++++++++++++++++++++++++++++++++++++++
.. code-block:: console

                                +--------------------+
                                |                    |
                                |                    |
                                |        DUT         |
                                |       (VNF)        |
                                |                    |
                                +--------------------+
                                | virtio |  | virtio |
                                +--------+  +--------+
                                     ^          ^
                                     |          |
                                     |          |
                                +--------+  +--------+
                                | vHOST0 |  | vHOST1 |
   +----------+               +-------------------------+            +------------+
   |          |               |       ^          ^      |            |            |
   |          |               |       |          |      |            |            |
   |          | (0)<----->(0) | ------           |      |            |    TG2     |
   |    TG1   |               |          SUT     |      |            |(UDP Replay)|
   |          |               |       (ovs-dpdk) |      |            |            |
   |          | (n)<----->(n) |                  ------ | (n)<-->(n) |            |
   +----------+               +-------------------------+            +------------+
   trafficgen_0                          host                         trafficgen_1
Before executing Yardstick test cases, make sure that the ``pod.yaml`` reflects
the topology and update all the required fields::

   cp <yardstick>/etc/yardstick/nodes/standalone/trex_bm.yaml.sample /etc/yardstick/nodes/standalone/pod_trex.yaml
   cp <yardstick>/etc/yardstick/nodes/standalone/host_ovs.yaml /etc/yardstick/nodes/standalone/host_ovs.yaml
.. note:: Update all the required fields like ip, user, password, pcis, etc.
OVS-DPDK Config pod_trex.yaml
+++++++++++++++++++++++++++++

.. code-block:: YAML

   nodes:
   -
       name: trafficgen_0
       role: TrafficGen
       ip: 1.1.1.1
       user: root
       password: r00t
       interfaces:
           xe0:  # logical name from topology.yaml and vnfd.yaml
               driver: i40e # default kernel driver
               local_ip: "152.16.100.20"
               netmask: "255.255.255.0"
               local_mac: "00:00:00:00:00:01"
           xe1:  # logical name from topology.yaml and vnfd.yaml
               driver: i40e # default kernel driver
               local_ip: "152.16.40.20"
               netmask: "255.255.255.0"
               local_mac: "00:00:00:00:00:02"
OVS-DPDK Config host_ovs.yaml
+++++++++++++++++++++++++++++

.. code-block:: YAML

   nodes:
   -
      name: ovs_dpdk
      role: OvsDpdk
      ip: 192.168.100.101
      user: ""
      password: ""
1005 ovs_dpdk testcase update:
1006 ``<yardstick>/samples/vnf_samples/nsut/vfw/tc_ovs_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``
1008 Update contexts section
1009 '''''''''''''''''''''''
.. code-block:: YAML

   contexts:
   - name: yardstick
     type: Node
     file: /etc/yardstick/nodes/standalone/pod_trex.yaml
   - type: StandaloneOvsDpdk
     file: /etc/yardstick/nodes/standalone/host_ovs.yaml
     name: yardstick
     vm_deploy: True
     ovs_properties:
       version:
         ovs: 2.7.0
         dpdk: 16.11.1
       pmd_threads: 2
       ram:
         socket_0: 2048
         socket_1: 2048
       queues: 4
       vpath: "/usr/local"
     flavor:
       images: "/var/lib/libvirt/images/ubuntu.qcow2"
       ram: 4096
       extra_specs:
         hw:cpu_sockets: 1
         hw:cpu_cores: 6
         hw:cpu_threads: 2
       user: "" # update VM username
       password: "" # update password
     servers:
       vnf_0:
         network_ports:
           mgmt:
             cidr: '1.1.1.61/24' # Update VM IP address, if static, <ip>/<mask> or if dynamic, <start of ip>/<mask>
           xe0:
             - uplink_0
           xe1:
             - downlink_0
     networks:
       uplink_0:
         phy_port: "0000:05:00.0"
         vpci: "0000:00:07.0"
         cidr: '152.16.100.10/24'
         gateway_ip: '152.16.100.20'
       downlink_0:
         phy_port: "0000:05:00.1"
         vpci: "0000:00:08.0"
         cidr: '152.16.40.10/24'
         gateway_ip: '152.16.100.20'
1062 OVS-DPDK configuration options
1063 ++++++++++++++++++++++++++++++
There are a number of configuration options available for the OVS-DPDK context
in a test case. They are mostly used for performance tuning.
1068 OVS-DPDK properties:
1069 ''''''''''''''''''''
OVS-DPDK properties example under the *ovs_properties* section:

.. code-block:: console

   ovs_properties:
     version:
       ovs: 2.8.0
       dpdk: 17.05.2
     pmd_threads: 2
     pmd_cpu_mask: "0x3c"
     ram:
       socket_0: 2048
       socket_1: 2048
     queues: 4
     vpath: "/usr/local"
     max_idle: 30000
     lcore_mask: 0x3
     dpdk_pmd-rxq-affinity:
       0: "0:2,1:2"
       1: "0:2,1:2"
     vhost_pmd-rxq-affinity:
       0: "0:3,1:3"
       1: "0:3,1:3"
1099 OVS-DPDK properties description:
+-------------------------+-------------------------------------------------+
| Parameters              | Detail                                          |
+=========================+=================================================+
| version                 || Version of OVS and DPDK to be installed        |
|                         || There is a relation between the OVS and DPDK   |
|                         |  versions which can be found at                 |
|                         |  `OVS-DPDK-versions`_                           |
|                         || By default OVS: 2.6.0, DPDK: 16.07.2           |
+-------------------------+-------------------------------------------------+
| lcore_mask              || Core bitmask used during DPDK initialization   |
|                         |  where the non-datapath OVS-DPDK threads such   |
|                         |  as handler and revalidator threads run         |
+-------------------------+-------------------------------------------------+
| pmd_cpu_mask            || Core bitmask that sets which cores are used by |
|                         || OVS-DPDK for datapath packet processing        |
+-------------------------+-------------------------------------------------+
| pmd_threads             || Number of PMD threads used by OVS-DPDK for     |
|                         |  datapath processing                            |
|                         || This core mask is evaluated in Yardstick       |
|                         || It will be used if pmd_cpu_mask is not given   |
+-------------------------+-------------------------------------------------+
| ram                     || Amount of RAM to be used for each socket, MB   |
|                         || Default is 2048 MB                             |
+-------------------------+-------------------------------------------------+
| queues                  || Number of RX queues used for DPDK physical     |
|                         |  interfaces                                     |
+-------------------------+-------------------------------------------------+
| dpdk_pmd-rxq-affinity   || RX queue assignment to PMD threads for DPDK    |
|                         || e.g.: <port number> : <queue-id>:<core-id>     |
+-------------------------+-------------------------------------------------+
| vhost_pmd-rxq-affinity  || RX queue assignment to PMD threads for vhost   |
|                         || e.g.: <port number> : <queue-id>:<core-id>     |
+-------------------------+-------------------------------------------------+
| vpath                   || User path for openvswitch files                |
|                         || Default is ``/usr/local``                      |
+-------------------------+-------------------------------------------------+
| max_idle                || The maximum time that idle flows will remain   |
|                         |  cached in the datapath, ms                     |
+-------------------------+-------------------------------------------------+
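
Once the context is deployed, the effect of these options can be inspected on
the standalone host with standard OVS tooling, e.g.::

   sudo ovs-vsctl get Open_vSwitch . other_config   # pmd-cpu-mask, lcore mask, etc.
   sudo ovs-appctl dpif-netdev/pmd-rxq-show         # PMD thread to RX queue mapping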
The VM image properties are the same as for SR-IOV, see
:ref:`VM image properties label`.
1149 OpenStack with SR-IOV support
1150 -----------------------------
This section describes how to run a Sample VNF test case, using the Heat
context, with SR-IOV. It also covers how to install OpenStack in Ubuntu 16.04,
using DevStack, with SR-IOV support.
1157 Single node OpenStack with external TG
1158 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. code-block:: console

                                 +----------------------------+
                                 |OpenStack(DevStack)         |
                                 |                            |
                                 |   +--------------------+   |
                                 |   |sample-VNF VM       |   |
                                 |   |                    |   |
                                 |   |        DUT         |   |
                                 |   |       (VNF)        |   |
                                 |   |                    |   |
                                 |   +--------+  +--------+   |
                                 |   | VF NIC |  | VF NIC |   |
                                 |   +-----+--+--+----+---+   |
                                 |         |          |       |
   +----------+                  +---------+----------+-------+
   |          |                            |          |
   |          |                            |          |
   |          | (PF0)<----->(PF0)+---------+          |
   |    TG    |                                       |
   |          | (PF1)<----->(PF1)+--------------------+
   |          |
   +----------+
   trafficgen_0                               host
1188 Host pre-configuration
1189 ++++++++++++++++++++++
.. warning:: The following configuration requires sudo access to the system.
   Make sure that your user has such access.
Enable the Intel VT-d or AMD-Vi extension in the BIOS. Some system
manufacturers disable this extension by default.

Activate the Intel VT-d or AMD-Vi extension in the kernel by modifying the GRUB
config file ``/etc/default/grub``.

For the Intel platform::

   GRUB_CMDLINE_LINUX_DEFAULT="intel_iommu=on"

For the AMD platform::

   GRUB_CMDLINE_LINUX_DEFAULT="amd_iommu=on"
Update the GRUB configuration and restart the system:

.. warning:: The following command will reboot the system.

.. code-block:: console

   sudo update-grub
   sudo reboot
Make sure the extension has been enabled::

   sudo journalctl -b 0 | grep -e IOMMU -e DMAR

   Feb 06 14:50:14 hostname kernel: ACPI: DMAR 0x000000006C406000 0001E0 (v01 INTEL S2600WF 00000001 INTL 20091013)
   Feb 06 14:50:14 hostname kernel: DMAR: IOMMU enabled
   Feb 06 14:50:14 hostname kernel: DMAR: Host address width 46
   Feb 06 14:50:14 hostname kernel: DMAR: DRHD base: 0x000000d37fc000 flags: 0x0
   Feb 06 14:50:14 hostname kernel: DMAR: dmar0: reg_base_addr d37fc000 ver 1:0 cap 8d2078c106f0466 ecap f020de
   Feb 06 14:50:14 hostname kernel: DMAR: DRHD base: 0x000000e0ffc000 flags: 0x0
   Feb 06 14:50:14 hostname kernel: DMAR: dmar1: reg_base_addr e0ffc000 ver 1:0 cap 8d2078c106f0466 ecap f020de
   Feb 06 14:50:14 hostname kernel: DMAR: DRHD base: 0x000000ee7fc000 flags: 0x0
1234 .. TODO: Refer to the yardstick installation guide for proxy set up
Setup the system proxy (if needed). Add the following configuration into the
``/etc/environment`` file:

.. note:: The proxy server name/port and IPs should be changed according to
   the actual/current proxy configuration in the lab.

.. code-block:: console

   export http_proxy=http://proxy.company.com:port
   export https_proxy=http://proxy.company.com:port
   export ftp_proxy=http://proxy.company.com:port
   export no_proxy=localhost,127.0.0.1,company.com,<IP-OF-HOST1>,<IP-OF-HOST2>,...
   export NO_PROXY=localhost,127.0.0.1,company.com,<IP-OF-HOST1>,<IP-OF-HOST2>,...
Upgrade the system:

.. code-block:: console

   sudo -EH apt-get update
   sudo -EH apt-get upgrade
   sudo -EH apt-get dist-upgrade

Install the dependencies needed for DevStack:

.. code-block:: console

   sudo -EH apt-get install python python-dev python-pip
Setup SR-IOV ports on the host:

.. note:: The ``enp24s0f0``, ``enp24s0f1`` are physical function (PF)
   interfaces on the host and ``enp24s0f3`` is a public interface used in
   OpenStack, so the interface names should be changed according to the HW
   environment used for testing.

.. code-block:: console

   sudo ip link set dev enp24s0f0 up
   sudo ip link set dev enp24s0f1 up
   sudo ip link set dev enp24s0f3 up

   # Create VFs on PF
   echo 2 | sudo tee /sys/class/net/enp24s0f0/device/sriov_numvfs
   echo 2 | sudo tee /sys/class/net/enp24s0f1/device/sriov_numvfs
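
To confirm the VFs were created, the following checks can be used (interface
names as in the example above)::

   cat /sys/class/net/enp24s0f0/device/sriov_numvfs   # should print 2
   lspci | grep -i "Virtual Function"                 # lists the new VFs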
1282 DevStack installation
1283 +++++++++++++++++++++
If you want to try out NSB, but don't have OpenStack set up, you can use
`Devstack`_ to install OpenStack on a host. Please note that the
``stable/pike`` branch of the devstack repo should be used during the
installation. The required ``local.conf`` configuration file is described
below.
DevStack configuration file:

.. note:: Update the devstack configuration file by replacing angular brackets
   with a short description inside.

.. note:: Use the ``lspci | grep Ether`` and ``lspci -n | grep <PCI ADDRESS>``
   commands to get the device and vendor id of the virtual function (VF).
.. literalinclude:: code/single-devstack-local.conf

Start the devstack installation on the host.
1303 TG host configuration
1304 +++++++++++++++++++++
Yardstick automatically installs and configures the TRex traffic generator on
the TG host based on the provided POD file (see below). Nevertheless, it is
recommended to check the compatibility of the NIC installed on the TG server
with the TRex software, using the
`manual <https://trex-tgn.cisco.com/trex/doc/trex_manual.html>`_.
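
As a quick compatibility check, TRex ships a port setup helper in its scripts
directory; Yardstick installs TRex under ``/opt/nsb_bin/trex/scripts`` (see
the ``yardstick.conf`` example above), although the exact path may differ in
your setup::

   cd /opt/nsb_bin/trex/scripts
   sudo ./dpdk_setup_ports.py -s   # show DPDK-capable ports and their drivers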
1311 Run the Sample VNF test case
1312 ++++++++++++++++++++++++++++
1314 There is an example of Sample VNF test case ready to be executed in an
1315 OpenStack environment with SR-IOV support: ``samples/vnf_samples/nsut/vfw/
1316 tc_heat_sriov_external_rfc2544_ipv4_1rule_1flow_trex.yaml``.
Install Yardstick using the `Install Yardstick (NSB Testing)`_ steps for the
OpenStack context.
Create the pod file for the TG in the yardstick repo folder located in the
yardstick container:
.. note:: The ``ip``, ``user``, ``password`` and ``vpci`` fields should be
   changed according to the HW environment used for the testing. Use the
   ``lshw -c network -businfo`` command to get the PF PCI address for the
   ``vpci`` field.
1328 .. literalinclude:: code/single-yardstick-pod.conf
Run the Sample vFW RFC2544 SR-IOV TC (``samples/vnf_samples/nsut/vfw/
tc_heat_sriov_external_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``) in the heat
context using the steps described in the `NS testing - using yardstick CLI`_
section.
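
For example, from inside the yardstick container::

   yardstick --debug task start samples/vnf_samples/nsut/vfw/tc_heat_sriov_external_rfc2544_ipv4_1rule_1flow_64B_trex.yaml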
1336 Multi node OpenStack TG and VNF setup (two nodes)
1337 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1339 .. code-block:: console
   +----------------------------+                  +----------------------------+
   |OpenStack(DevStack)         |                  |OpenStack(DevStack)         |
   |                            |                  |                            |
   |   +--------------------+   |                  |   +--------------------+   |
   |   |sample-VNF VM       |   |                  |   |sample-VNF VM       |   |
   |   |                    |   |                  |   |                    |   |
   |   |         TG         |   |                  |   |        DUT         |   |
   |   |    trafficgen_0    |   |                  |   |       (VNF)        |   |
   |   |                    |   |                  |   |                    |   |
   |   +--------+  +--------+   |                  |   +--------+  +--------+   |
   |   | VF NIC |  | VF NIC |   |                  |   | VF NIC |  | VF NIC |   |
   |   +----+---+--+----+---+   |                  |   +-----+--+--+----+---+   |
   |        |          |        |                  |         |          |       |
   +--------+----------+--------+                  +---------+----------+-------+
   |       VF0        VF1       |                  |        VF0        VF1      |
   |        |          |        |                  |         |          |       |
   |        |   SUT2   |        |                  |         |   SUT1   |       |
   |        |    +-----+ (PF0)<--------------------->(PF0)+-----+       |       |
   |        +----------+ (PF1)<--------------------->(PF1)+-------------+       |
   |                            |                  |                            |
   +----------------------------+                  +----------------------------+
           host2 (compute)                              host1 (controller)
1367 Controller/Compute pre-configuration
1368 ++++++++++++++++++++++++++++++++++++
Pre-configuration of the controller and compute hosts is the same as
described in the `Host pre-configuration`_ section.
1373 DevStack configuration
1374 ++++++++++++++++++++++
A reference ``local.conf`` for deploying OpenStack in a multi-host environment
using `Devstack`_ is shown in this section. The ``stable/pike`` branch of the
devstack repo should be used during the installation.

.. note:: Update the devstack configuration files by replacing angular
   brackets with a short description inside.

.. note:: Use the ``lspci | grep Ether`` and ``lspci -n | grep <PCI ADDRESS>``
   commands to get the device and vendor id of the virtual function (VF).
DevStack configuration file for the controller host:

.. literalinclude:: code/multi-devstack-controller-local.conf

DevStack configuration file for the compute host:

.. literalinclude:: code/multi-devstack-compute-local.conf

Start the devstack installation on the controller and compute hosts.
1398 Run the sample vFW TC
1399 +++++++++++++++++++++
Install Yardstick using the `Install Yardstick (NSB Testing)`_ steps for the
OpenStack context.
Run the sample vFW RFC2544 SR-IOV test case
(``samples/vnf_samples/nsut/vfw/tc_heat_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``)
in the heat context using the steps described in the
`NS testing - using yardstick CLI`_ section and the following Yardstick command
line arguments:

.. code-block:: console

   yardstick -d task start --task-args='{"provider": "sriov"}' \
   samples/vnf_samples/nsut/vfw/tc_heat_rfc2544_ipv4_1rule_1flow_64B_trex.yaml
1416 Enabling other Traffic generators
1417 ---------------------------------
IxLoad
^^^^^^

1. Software needed: IxLoadAPI ``<IxLoadTclApi version>Linux64.bin.tgz`` and
   ``<IxOS version>Linux64.bin.tar.gz`` (download from the Ixia support site).
   Install ``<IxLoadTclApi version>Linux64.bin.tgz`` and
   ``<IxOS version>Linux64.bin.tar.gz``.
   If the installation was not done inside the container, then after
   installing the IXIA client, check ``/opt/ixia/ixload/<ver>/bin/ixloadpython``
   and make sure you can run this cmd inside the yardstick container. Usually
   the user is required to copy or link ``/opt/ixia/python/<ver>/bin/ixiapython``
   to ``/usr/bin/ixiapython<ver>`` inside the container, as shown in the
   example after this list.
2. Update the ``pod_ixia.yaml`` file with the Ixia details.

.. code-block:: console

   cp <repo>/etc/yardstick/nodes/pod.yaml.nsb.sample.ixia \
   etc/yardstick/nodes/pod_ixia.yaml

Configure ``pod_ixia.yaml``:

.. literalinclude:: code/pod_ixia.yaml

.. note:: For sriov/ovs_dpdk pod files, please refer to
   `Standalone Virtualization`_ for the ovs-dpdk/sriov configuration.
3. Start the IxOS TCL Server (install 'Ixia IxExplorer IxOS <version>').
   You will also need to configure the IxLoad machine to start the IXIA
   IxosTclServer. This can be started like so:

   * Connect to the IxLoad machine using RDP.
   * Go to:
     ``Start->Programs->Ixia->IxOS->IxOS 8.01-GA-Patch1->Ixia Tcl Server IxOS 8.01-GA-Patch1``
     or
     ``C:\Program Files (x86)\Ixia\IxOS\8.01-GA-Patch1\ixTclServer.exe``
4. Create a folder ``Results`` in ``C:\`` and share the folder on the network.
5. Execute the test case in the samplevnf folder, e.g.
   ``<repo>/samples/vnf_samples/nsut/vfw/tc_baremetal_http_ixload_1b_Requests-65000_Concurrency.yaml``
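
For step 1 above, the link into the container can be created as follows; the
version directories are placeholders that depend on the installed IxLoad
release::

   ln -s /opt/ixia/python/<ver>/bin/ixiapython /usr/bin/ixiapython<ver>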
IxNetwork
^^^^^^^^^

IxNetwork testcases use the IxNetwork API Python Bindings module, which is
installed as part of the requirements of the project.
1. Update the ``pod_ixia.yaml`` file with the Ixia details.

.. code-block:: console

   cp <repo>/etc/yardstick/nodes/pod.yaml.nsb.sample.ixia \
   etc/yardstick/nodes/pod_ixia.yaml

Configure ``pod_ixia.yaml``:

.. literalinclude:: code/pod_ixia.yaml

.. note:: For sriov/ovs_dpdk pod files, please refer to
   `Standalone Virtualization`_ above for the ovs-dpdk/sriov configuration.
2. Start the IxNetwork TCL Server.
   You will also need to configure the IxNetwork machine to start the IXIA
   IxNetworkTclServer. This can be started like so:

   * Connect to the IxNetwork machine using RDP.
   * Go to:
     ``Start->Programs->Ixia->IxNetwork->IxNetwork 7.21.893.14 GA->IxNetworkTclServer``
     (or ``IxNetworkApiServer``)
3. Execute the test case in the samplevnf folder, e.g.
   ``<repo>/samples/vnf_samples/nsut/vfw/tc_baremetal_rfc2544_ipv4_1rule_1flow_64B_ixia.yaml``
Spirent Landslide
^^^^^^^^^^^^^^^^^

In order to use Spirent Landslide for vEPC testcases, some dependencies have
to be preinstalled and properly configured.
- A 32-bit Java installation is required for the Spirent Landslide TCL API.

  | ``$ sudo apt-get install openjdk-8-jdk:i386``

  .. note:: Make sure ``LD_LIBRARY_PATH`` is pointing to the 32-bit JRE. For
     more details check the `Linux Troubleshooting
     <http://TAS_HOST_IP/tclapiinstall.html#trouble>`_ section of the
     installation instructions.
- LsApi (Tcl API module)

  Follow the Landslide documentation for detailed instructions on the Linux
  installation of the Tcl API and its dependencies:
  ``http://TAS_HOST_IP/tclapiinstall.html``.
  For working with the LsApi Python wrapper, only steps 1-5 are required.
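
For the 32-bit JRE requirement above, the environment can be set along these
lines; the JRE path is an assumption based on the default Ubuntu
``openjdk-8-jdk:i386`` install location and should be verified locally::

   # Assumed default install path for the i386 OpenJDK 8 JRE on Ubuntu
   export LD_LIBRARY_PATH=/usr/lib/jvm/java-8-openjdk-i386/jre/lib/i386/server:$LD_LIBRARY_PATH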
.. note:: After installation make sure your API home path is included in the
   ``PYTHONPATH`` environment variable.
.. note:: The current version of the LsApi module has an issue with reading
   ``LD_LIBRARY_PATH``. For the LsApi module to initialize correctly, the
   following lines (184-186) in ``lsapi.py``:

   .. code-block:: python

      ldpath = os.environ.get('LD_LIBRARY_PATH', '')
      if ldpath == '':
          environ['LD_LIBRARY_PATH'] = environ['LD_LIBRARY_PATH'] + ':' + ldpath

   should be changed to:

   .. code-block:: python

      ldpath = os.environ.get('LD_LIBRARY_PATH', '')
      if not ldpath == '':
          environ['LD_LIBRARY_PATH'] = environ['LD_LIBRARY_PATH'] + ':' + ldpath
.. note:: The Spirent Landslide TCL software package needs to be updated in
   case the user upgrades to a new version of the Spirent Landslide software.