1 .. This work is licensed under a Creative Commons Attribution 4.0 International
3 .. http://creativecommons.org/licenses/by/4.0
4 .. (c) OPNFV, 2016-2018 Intel Corporation.
=======================================
Yardstick - NSB Testing - Installation
=======================================
The Network Service Benchmarking (NSB) extends the Yardstick framework to
perform VNF characterization and benchmarking in three different execution
environments: bare metal (i.e. native Linux environment), standalone virtual
environment, and managed virtualized environment (e.g. OpenStack).
It also brings in the capability to interact with external traffic generators,
both hardware- and software-based, for triggering and validating traffic
according to user-defined profiles.
21 The steps needed to run Yardstick with NSB testing are:
* Install Yardstick (NSB testing).
* Set up or reference a ``pod.yaml`` file describing the test topology.
* Create or reference the test configuration yaml file.
Refer to the chapter Yardstick Installation for more information on Yardstick
35 Several prerequisites are needed for Yardstick (VNF testing):
37 * Python Modules: pyzmq, pika.
48 Hardware & Software Ingredients
49 -------------------------------
54 ======= ===================
56 ======= ===================
60 kernel 4.4.0-34-generic
62 ======= ===================
64 Boot and BIOS settings:
67 ============= =================================================
68 Boot settings default_hugepagesz=1G hugepagesz=1G hugepages=16
69 hugepagesz=2M hugepages=2048 isolcpus=1-11,22-33
70 nohz_full=1-11,22-33 rcu_nocbs=1-11,22-33
71 iommu=on iommu=pt intel_iommu=on
Note: nohz_full and rcu_nocbs are used to disable Linux
74 BIOS CPU Power and Performance Policy <Performance>
Enhanced Intel® SpeedStep® Tech Disabled
78 Hyper-Threading Technology (If supported) Enabled
Virtualization Technology Enabled
80 Intel(R) VT for Direct I/O Enabled
83 ============= =================================================
87 Install Yardstick (NSB Testing)
88 ===============================
Download the source code and install Yardstick from it:
92 .. code-block:: console
94 git clone https://gerrit.opnfv.org/gerrit/yardstick
98 # Switch to latest stable branch
99 # git checkout <tag or stable branch>
100 git checkout stable/euphrates
102 Configure the network proxy, either using the environment variables or setting
103 the global environment file:
108 http_proxy='http://proxy.company.com:port'
109 https_proxy='http://proxy.company.com:port'
111 .. code-block:: console
113 export http_proxy='http://proxy.company.com:port'
114 export https_proxy='http://proxy.company.com:port'
116 The last step is to modify the Yardstick installation inventory, used by
121 cat ./ansible/install-inventory.ini
123 localhost ansible_connection=local
125 [yardstick-standalone]
126 yardstick-standalone-node ansible_host=192.168.1.2
127 yardstick-standalone-node-2 ansible_host=192.168.1.3
# The section below is kept only for backward compatibility.
# It will be removed later.
SSH access without a password must be configured for all nodes defined in the
``install-inventory.ini`` file.
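For example, a key-based setup from the Yardstick host could look like this (a
sketch; the node addresses follow the sample inventory above):

.. code-block:: console

   ssh-keygen -t rsa                 # generate a key pair if one does not exist yet
   ssh-copy-id root@192.168.1.2      # repeat for every node in install-inventory.ini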
If you want to use password authentication instead, you need to install ``sshpass``:
144 .. code-block:: console
146 sudo -EH apt-get install sshpass
148 To execute an installation for a Bare-Metal or a Standalone context:
150 .. code-block:: console
155 To execute an installation for an OpenStack context:
157 .. code-block:: console
159 ./nsb_setup.sh <path to admin-openrc.sh>
The above command sets up Docker with the latest Yardstick code. To execute
163 .. code-block:: console
165 docker exec -it yardstick bash
It will also automatically download all the packages needed for the NSB testing
setup. Refer to chapter :doc:`04-installation`, section
**Install Yardstick using Docker (recommended)**, for more on Docker.
Another way to execute an installation for a Bare-Metal or a Standalone context
is to use the Ansible script ``install.yaml``. Refer to chapter :doc:`04-installation`
178 .. code-block:: console
180 +----------+ +----------+
186 +----------+ +----------+
190 Environment parameters and credentials
191 ======================================
193 Config yardstick conf
194 ---------------------
If the user did not run ``yardstick env influxdb`` inside the container (which
generates a correct ``yardstick.conf``), then create the config file manually (run
inside the container):
201 cp ./etc/yardstick/yardstick.conf.sample /etc/yardstick/yardstick.conf
202 vi /etc/yardstick/yardstick.conf
Add ``trex_path``, ``trex_client_lib`` and ``bin_path`` to the ``nsb`` section.
210 dispatcher = file, influxdb
212 [dispatcher_influxdb]
214 target = http://{YOUR_IP_HERE}:8086
220 trex_path=/opt/nsb_bin/trex/scripts
221 bin_path=/opt/nsb_bin
222 trex_client_lib=/opt/nsb_bin/trex_client/stl
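After the installation has completed, these paths can be sanity-checked from
inside the container (a quick check, assuming the default locations above):

.. code-block:: console

   ls /opt/nsb_bin/trex/scripts
   ls /opt/nsb_bin/trex_client/stl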
224 Run Yardstick - Network Service Testcases
225 =========================================
228 NS testing - using yardstick CLI
229 --------------------------------
231 See :doc:`04-installation`
233 .. code-block:: console
236 docker exec -it yardstick /bin/bash
237 source /etc/yardstick/openstack.creds (only for heat TC if nsb_setup.sh was NOT used)
238 export EXTERNAL_NETWORK="<openstack public network>" (only for heat TC)
239 yardstick --debug task start yardstick/samples/vnf_samples/nsut/<vnf>/<test case>
241 Network Service Benchmarking - Bare-Metal
242 =========================================
244 Bare-Metal Config pod.yaml describing Topology
245 ----------------------------------------------
247 Bare-Metal 2-Node setup
248 ^^^^^^^^^^^^^^^^^^^^^^^
249 .. code-block:: console
251 +----------+ +----------+
257 +----------+ +----------+
260 Bare-Metal 3-Node setup - Correlated Traffic
261 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
262 .. code-block:: console
264 +----------+ +----------+ +------------+
267 | | (0)----->(0) | | | UDP |
268 | TG1 | | DUT | | Replay |
270 | | | |(1)<---->(0)| |
271 +----------+ +----------+ +------------+
272 trafficgen_1 vnf trafficgen_2
275 Bare-Metal Config pod.yaml
276 --------------------------
Before executing Yardstick test cases, make sure that ``pod.yaml`` reflects the
topology and update all the required fields::
280 cp /etc/yardstick/nodes/pod.yaml.nsb.sample /etc/yardstick/nodes/pod.yaml
292 xe0: # logical name from topology.yaml and vnfd.yaml
294 driver: i40e # default kernel driver
296 local_ip: "152.16.100.20"
297 netmask: "255.255.255.0"
298 local_mac: "00:00:00:00:00:01"
299 xe1: # logical name from topology.yaml and vnfd.yaml
301 driver: i40e # default kernel driver
303 local_ip: "152.16.40.20"
304 netmask: "255.255.255.0"
local_mac: "00:00:00:00:00:02"
host: 1.1.1.2  # in BM, host == ip; in a virtualized env, host is the compute node
315 xe0: # logical name from topology.yaml and vnfd.yaml
317 driver: i40e # default kernel driver
319 local_ip: "152.16.100.19"
320 netmask: "255.255.255.0"
321 local_mac: "00:00:00:00:00:03"
323 xe1: # logical name from topology.yaml and vnfd.yaml
325 driver: i40e # default kernel driver
327 local_ip: "152.16.40.19"
328 netmask: "255.255.255.0"
329 local_mac: "00:00:00:00:00:04"
331 - network: "152.16.100.20"
332 netmask: "255.255.255.0"
333 gateway: "152.16.100.20"
335 - network: "152.16.40.20"
336 netmask: "255.255.255.0"
337 gateway: "152.16.40.20"
340 - network: "0064:ff9b:0:0:0:0:9810:6414"
342 gateway: "0064:ff9b:0:0:0:0:9810:6414"
344 - network: "0064:ff9b:0:0:0:0:9810:2814"
346 gateway: "0064:ff9b:0:0:0:0:9810:2814"
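The PCI addresses and kernel drivers needed to fill in the pod file can be
collected on each node with standard tools, e.g. (``<interface>`` is a
placeholder for the actual interface name):

.. code-block:: console

   lshw -c network -businfo    # PCI address of each NIC
   ethtool -i <interface>      # kernel driver, e.g. i40e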
350 Network Service Benchmarking - Standalone Virtualization
351 ========================================================
356 SR-IOV Pre-requisites
357 ^^^^^^^^^^^^^^^^^^^^^
On the host where the VM is created:
a) Create and configure a bridge named ``br-int`` for the VM to connect to the external network.
Currently this can be done using a VXLAN tunnel.
Execute the following on the host where the VM is created:
365 .. code-block:: console
367 ip link add type vxlan remote <Jumphost IP> local <DUT IP> id <ID: 10> dstport 4789
369 brctl addif br-int vxlan0
370 ip link set dev vxlan0 up
371 ip addr add <IP#1, like: 172.20.2.1/24> dev br-int
372 ip link set dev br-int up
.. note:: It may be necessary to add extra iptables rules to forward traffic.
376 .. code-block:: console
378 iptables -A FORWARD -i br-int -s <network ip address>/<netmask> -j ACCEPT
379 iptables -A FORWARD -o br-int -d <network ip address>/<netmask> -j ACCEPT
381 Execute the following on a jump host:
383 .. code-block:: console
385 ip link add type vxlan remote <DUT IP> local <Jumphost IP> id <ID: 10> dstport 4789
386 ip addr add <IP#2, like: 172.20.2.2/24> dev vxlan0
387 ip link set dev vxlan0 up
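With the example addresses above, the tunnel can be verified from the jump host
(a quick check, assuming IP#1 is 172.20.2.1 as in the example):

.. code-block:: console

   ping -c 3 172.20.2.1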
389 .. note:: Host and jump host are different baremetal servers.
391 b) Modify test case management CIDR.
392 IP addresses IP#1, IP#2 and CIDR must be in the same network.
402 c) Build guest image for VNF to run.
403 Most of the sample test cases in Yardstick are using a guest image called
``yardstick-nsb-image`` which deviates from an Ubuntu Cloud Server image.
Yardstick has a tool for building this custom image with SampleVNF.
406 It is necessary to have ``sudo`` rights to use this tool.
You may also need to install several additional packages to use this tool, by
following the commands below::
411 sudo apt-get update && sudo apt-get install -y qemu-utils kpartx
This image can be built using the following command in the directory where Yardstick is installed:
415 .. code-block:: console
417 export YARD_IMG_ARCH='amd64'
echo 'Defaults env_keep += "YARD_IMG_ARCH"' | sudo tee -a /etc/sudoers
Please use the Ansible script to generate the cloud image; for more details
refer to chapter :doc:`04-installation`.
.. note:: The VM should be built with a static IP and should be accessible from the Yardstick host.
427 SR-IOV Config pod.yaml describing Topology
428 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
432 .. code-block:: console
434 +--------------------+
440 +--------------------+
441 | VF NIC | | VF NIC |
442 +--------+ +--------+
446 +----------+ +-------------------------+
449 | | (0)<----->(0) | ------ | |
452 | | (n)<----->(n) |------------------ |
453 +----------+ +-------------------------+
458 SR-IOV 3-Node setup - Correlated Traffic
459 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
460 .. code-block:: console
462 +--------------------+
468 +--------------------+
469 | VF NIC | | VF NIC |
470 +--------+ +--------+
474 +----------+ +-------------------------+ +--------------+
477 | | (0)<----->(0) | ------ | | | TG2 |
478 | TG1 | | SUT | | | (UDP Replay) |
480 | | (n)<----->(n) | ------ | (n)<-->(n) | |
481 +----------+ +-------------------------+ +--------------+
482 trafficgen_1 host trafficgen_2
484 Before executing Yardstick test cases, make sure that pod.yaml reflects the
485 topology and update all the required fields.
487 .. code-block:: console
489 cp <yardstick>/etc/yardstick/nodes/standalone/trex_bm.yaml.sample /etc/yardstick/nodes/standalone/pod_trex.yaml
490 cp <yardstick>/etc/yardstick/nodes/standalone/host_sriov.yaml /etc/yardstick/nodes/standalone/host_sriov.yaml
.. note:: Update all the required fields like ip, user, password, pcis, etc.
494 SR-IOV Config pod_trex.yaml
495 ^^^^^^^^^^^^^^^^^^^^^^^^^^^
506 key_filename: /root/.ssh/id_rsa
508 xe0: # logical name from topology.yaml and vnfd.yaml
510 driver: i40e # default kernel driver
512 local_ip: "152.16.100.20"
513 netmask: "255.255.255.0"
514 local_mac: "00:00:00:00:00:01"
515 xe1: # logical name from topology.yaml and vnfd.yaml
517 driver: i40e # default kernel driver
519 local_ip: "152.16.40.20"
520 netmask: "255.255.255.0"
local_mac: "00:00:00:00:00:02"
523 SR-IOV Config host_sriov.yaml
524 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
536 SR-IOV testcase update:
537 ``<yardstick>/samples/vnf_samples/nsut/vfw/tc_sriov_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``
539 Update "contexts" section
540 """""""""""""""""""""""""
547 file: /etc/yardstick/nodes/standalone/pod_trex.yaml
548 - type: StandaloneSriov
549 file: /etc/yardstick/nodes/standalone/host_sriov.yaml
553 images: "/var/lib/libvirt/images/ubuntu.qcow2"
559 user: "" # update VM username
560 password: "" # update password
565 cidr: '1.1.1.61/24' # Update VM IP address, if static, <ip>/<mask> or if dynamic, <start of ip>/<mask>
572 phy_port: "0000:05:00.0"
574 cidr: '152.16.100.10/24'
575 gateway_ip: '152.16.100.20'
577 phy_port: "0000:05:00.1"
579 cidr: '152.16.40.10/24'
gateway_ip: '152.16.40.20'
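With both files updated, the test case can be launched from the Yardstick
container as described in `NS testing - using yardstick CLI`_, e.g.:

.. code-block:: console

   yardstick --debug task start samples/vnf_samples/nsut/vfw/tc_sriov_rfc2544_ipv4_1rule_1flow_64B_trex.yaml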
587 OVS-DPDK Pre-requisites
588 ^^^^^^^^^^^^^^^^^^^^^^^
On the host where the VM is created:
a) Create and configure a bridge named ``br-int`` for the VM to connect to the external network.
Currently this can be done using a VXLAN tunnel.
Execute the following on the host where the VM is created:
596 .. code-block:: console
598 ip link add type vxlan remote <Jumphost IP> local <DUT IP> id <ID: 10> dstport 4789
600 brctl addif br-int vxlan0
601 ip link set dev vxlan0 up
602 ip addr add <IP#1, like: 172.20.2.1/24> dev br-int
603 ip link set dev br-int up
.. note:: It may be necessary to add extra iptables rules to forward traffic.
607 .. code-block:: console
609 iptables -A FORWARD -i br-int -s <network ip address>/<netmask> -j ACCEPT
610 iptables -A FORWARD -o br-int -d <network ip address>/<netmask> -j ACCEPT
612 Execute the following on a jump host:
614 .. code-block:: console
616 ip link add type vxlan remote <DUT IP> local <Jumphost IP> id <ID: 10> dstport 4789
617 ip addr add <IP#2, like: 172.20.2.2/24> dev vxlan0
618 ip link set dev vxlan0 up
620 .. note:: Host and jump host are different baremetal servers.
622 b) Modify test case management CIDR.
623 IP addresses IP#1, IP#2 and CIDR must be in the same network.
633 c) Build guest image for VNF to run.
634 Most of the sample test cases in Yardstick are using a guest image called
``yardstick-nsb-image`` which deviates from an Ubuntu Cloud Server image.
Yardstick has a tool for building this custom image with SampleVNF.
637 It is necessary to have ``sudo`` rights to use this tool.
You may also need to install several additional packages to use this tool, by
following the commands below::
642 sudo apt-get update && sudo apt-get install -y qemu-utils kpartx
644 This image can be built using the following command in the directory where Yardstick is installed::
646 export YARD_IMG_ARCH='amd64'
echo 'Defaults env_keep += "YARD_IMG_ARCH"' | sudo tee -a /etc/sudoers
648 sudo tools/yardstick-img-dpdk-modify tools/ubuntu-server-cloudimg-samplevnf-modify.sh
650 for more details refer to chapter :doc:`04-installation`
.. note:: The VM should be built with a static IP and should be accessible from the Yardstick host.
d) OVS & DPDK version.
- OVS 2.7 or above and DPDK 16.11.1 or above are supported
e) Set up OVS/DPDK on the host.
Please refer to the following link on how to set up `OVS-DPDK <http://docs.openvswitch.org/en/latest/intro/install/dpdk/>`_
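As a minimal sketch of the bridge and port setup following the linked guide
(the bridge and port names ``br0``, ``dpdk0``, ``vhost-user0`` and the PCI
address are examples; adapt them to your environment):

.. code-block:: console

   ovs-vsctl --version    # confirm OVS 2.7 or above
   ovs-vsctl set Open_vSwitch . other_config:dpdk-init=true
   ovs-vsctl add-br br0 -- set bridge br0 datapath_type=netdev
   ovs-vsctl add-port br0 dpdk0 -- set Interface dpdk0 type=dpdk \
       options:dpdk-devargs=0000:05:00.0
   ovs-vsctl add-port br0 vhost-user0 -- set Interface vhost-user0 type=dpdkvhostuser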
661 OVS-DPDK Config pod.yaml describing Topology
662 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
664 OVS-DPDK 2-Node setup
665 ^^^^^^^^^^^^^^^^^^^^^
668 .. code-block:: console
670 +--------------------+
676 +--------------------+
677 | virtio | | virtio |
678 +--------+ +--------+
682 +--------+ +--------+
683 | vHOST0 | | vHOST1 |
684 +----------+ +-------------------------+
687 | | (0)<----->(0) | ------ | |
690 | | (n)<----->(n) |------------------ |
691 +----------+ +-------------------------+
695 OVS-DPDK 3-Node setup - Correlated Traffic
696 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
698 .. code-block:: console
700 +--------------------+
706 +--------------------+
707 | virtio | | virtio |
708 +--------+ +--------+
712 +--------+ +--------+
713 | vHOST0 | | vHOST1 |
714 +----------+ +-------------------------+ +------------+
717 | | (0)<----->(0) | ------ | | | TG2 |
718 | TG1 | | SUT | | |(UDP Replay)|
719 | | | (ovs-dpdk) | | | |
720 | | (n)<----->(n) | ------ |(n)<-->(n)| |
721 +----------+ +-------------------------+ +------------+
722 trafficgen_1 host trafficgen_2
725 Before executing Yardstick test cases, make sure that pod.yaml reflects the
726 topology and update all the required fields.
728 .. code-block:: console
730 cp <yardstick>/etc/yardstick/nodes/standalone/trex_bm.yaml.sample /etc/yardstick/nodes/standalone/pod_trex.yaml
731 cp <yardstick>/etc/yardstick/nodes/standalone/host_ovs.yaml /etc/yardstick/nodes/standalone/host_ovs.yaml
.. note:: Update all the required fields like ip, user, password, pcis, etc.
735 OVS-DPDK Config pod_trex.yaml
736 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
748 xe0: # logical name from topology.yaml and vnfd.yaml
750 driver: i40e # default kernel driver
752 local_ip: "152.16.100.20"
753 netmask: "255.255.255.0"
754 local_mac: "00:00:00:00:00:01"
755 xe1: # logical name from topology.yaml and vnfd.yaml
757 driver: i40e # default kernel driver
759 local_ip: "152.16.40.20"
760 netmask: "255.255.255.0"
local_mac: "00:00:00:00:00:02"
763 OVS-DPDK Config host_ovs.yaml
764 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
776 ovs_dpdk testcase update:
777 ``<yardstick>/samples/vnf_samples/nsut/vfw/tc_ovs_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``
779 Update "contexts" section
780 """""""""""""""""""""""""
787 file: /etc/yardstick/nodes/standalone/pod_trex.yaml
788 - type: StandaloneOvsDpdk
790 file: /etc/yardstick/nodes/standalone/pod_ovs.yaml
804 images: "/var/lib/libvirt/images/ubuntu.qcow2"
810 user: "" # update VM username
811 password: "" # update password
816 cidr: '1.1.1.61/24' # Update VM IP address, if static, <ip>/<mask> or if dynamic, <start of ip>/<mask>
823 phy_port: "0000:05:00.0"
825 cidr: '152.16.100.10/24'
826 gateway_ip: '152.16.100.20'
828 phy_port: "0000:05:00.1"
830 cidr: '152.16.40.10/24'
gateway_ip: '152.16.40.20'
834 Network Service Benchmarking - OpenStack with SR-IOV support
835 ============================================================
837 This section describes how to run a Sample VNF test case, using Heat context,
838 with SR-IOV. It also covers how to install OpenStack in Ubuntu 16.04, using
839 DevStack, with SR-IOV support.
842 Single node OpenStack setup with external TG
843 --------------------------------------------
845 .. code-block:: console
847 +----------------------------+
848 |OpenStack(DevStack) |
850 | +--------------------+ |
856 | +--------+ +--------+ |
857 | | VF NIC | | VF NIC | |
858 | +-----+--+--+----+---+ |
861 +----------+ +---------+----------+-------+
865 | TG | (PF0)<----->(PF0) +---------+ | |
867 | | (PF1)<----->(PF1) +--------------------+ |
869 +----------+ +----------------------------+
873 Host pre-configuration
874 ^^^^^^^^^^^^^^^^^^^^^^
.. warning:: The following configuration requires sudo access to the system. Make
   sure that your user has the access.
879 Enable the Intel VT-d or AMD-Vi extension in the BIOS. Some system manufacturers
880 disable this extension by default.
882 Activate the Intel VT-d or AMD-Vi extension in the kernel by modifying the GRUB
883 config file ``/etc/default/grub``.
885 For the Intel platform:
890 GRUB_CMDLINE_LINUX_DEFAULT="intel_iommu=on"
893 For the AMD platform:
898 GRUB_CMDLINE_LINUX_DEFAULT="amd_iommu=on"
901 Update the grub configuration file and restart the system:
903 .. warning:: The following command will reboot the system.
910 Make sure the extension has been enabled:
914 sudo journalctl -b 0 | grep -e IOMMU -e DMAR
916 Feb 06 14:50:14 hostname kernel: ACPI: DMAR 0x000000006C406000 0001E0 (v01 INTEL S2600WF 00000001 INTL 20091013)
917 Feb 06 14:50:14 hostname kernel: DMAR: IOMMU enabled
918 Feb 06 14:50:14 hostname kernel: DMAR: Host address width 46
919 Feb 06 14:50:14 hostname kernel: DMAR: DRHD base: 0x000000d37fc000 flags: 0x0
920 Feb 06 14:50:14 hostname kernel: DMAR: dmar0: reg_base_addr d37fc000 ver 1:0 cap 8d2078c106f0466 ecap f020de
921 Feb 06 14:50:14 hostname kernel: DMAR: DRHD base: 0x000000e0ffc000 flags: 0x0
922 Feb 06 14:50:14 hostname kernel: DMAR: dmar1: reg_base_addr e0ffc000 ver 1:0 cap 8d2078c106f0466 ecap f020de
923 Feb 06 14:50:14 hostname kernel: DMAR: DRHD base: 0x000000ee7fc000 flags: 0x0
Set up the system proxy (if needed). Add the following configuration into the
926 ``/etc/environment`` file:
928 .. note:: The proxy server name/port and IPs should be changed according to
929 actual/current proxy configuration in the lab.
933 export http_proxy=http://proxy.company.com:port
934 export https_proxy=http://proxy.company.com:port
935 export ftp_proxy=http://proxy.company.com:port
936 export no_proxy=localhost,127.0.0.1,company.com,<IP-OF-HOST1>,<IP-OF-HOST2>,...
937 export NO_PROXY=localhost,127.0.0.1,company.com,<IP-OF-HOST1>,<IP-OF-HOST2>,...
943 sudo -EH apt-get update
944 sudo -EH apt-get upgrade
945 sudo -EH apt-get dist-upgrade
Install the dependencies needed for DevStack
951 sudo -EH apt-get install python
952 sudo -EH apt-get install python-dev
953 sudo -EH apt-get install python-pip
Set up SR-IOV ports on the host:
957 .. note:: The ``enp24s0f0``, ``enp24s0f1`` are physical function (PF) interfaces
958 on a host and ``enp24s0f3`` is a public interface used in OpenStack, so the
959 interface names should be changed according to the HW environment used for
964 sudo ip link set dev enp24s0f0 up
965 sudo ip link set dev enp24s0f1 up
966 sudo ip link set dev enp24s0f3 up
969 echo 2 | sudo tee /sys/class/net/enp24s0f0/device/sriov_numvfs
970 echo 2 | sudo tee /sys/class/net/enp24s0f1/device/sriov_numvfs
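The created VFs can be verified with the following commands (interface name as
in the example above):

.. code-block:: console

   lspci | grep -i "Virtual Function"
   ip link show enp24s0f0    # lists the VFs attached to the PF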
973 DevStack installation
974 ^^^^^^^^^^^^^^^^^^^^^
Use the official `Devstack <https://docs.openstack.org/devstack/pike/>`_
documentation to install OpenStack on a host. Please note that the stable
``pike`` branch of the devstack repo should be used during the installation.
The required ``local.conf`` configuration file is described below.
981 DevStack configuration file:
.. note:: Update the devstack configuration file by replacing angular brackets
   with a short description inside.
986 .. note:: Use ``lspci | grep Ether`` & ``lspci -n | grep <PCI ADDRESS>``
987 commands to get device and vendor id of the virtual function (VF).
989 .. literalinclude:: code/single-devstack-local.conf
992 Start the devstack installation on a host.
995 TG host configuration
996 ^^^^^^^^^^^^^^^^^^^^^
Yardstick automatically installs and configures the TRex traffic generator on the TG
host based on the provided POD file (see below). Nevertheless, it is recommended to
check the compatibility of the installed NIC on the TG server with the TRex software
using the manual at https://trex-tgn.cisco.com/trex/doc/trex_manual.html.
1004 Run the Sample VNF test case
1005 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1007 There is an example of Sample VNF test case ready to be executed in an
1008 OpenStack environment with SR-IOV support: ``samples/vnf_samples/nsut/vfw/
1009 tc_heat_sriov_external_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``.
Install Yardstick using the `Install Yardstick (NSB Testing)`_ steps for OpenStack
Create a pod file for the TG in the yardstick repo folder located in the yardstick
.. note:: The ``ip``, ``user``, ``password`` and ``vpci`` fields should be changed
1018 according to HW environment used for the testing. Use ``lshw -c network -businfo``
1019 command to get the PF PCI address for ``vpci`` field.
1021 .. literalinclude:: code/single-yardstick-pod.conf
1024 Run the Sample vFW RFC2544 SR-IOV TC (``samples/vnf_samples/nsut/vfw/
1025 tc_heat_sriov_external_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``) in the heat
1026 context using steps described in `NS testing - using yardstick CLI`_ section.
1029 Multi node OpenStack TG and VNF setup (two nodes)
1030 -------------------------------------------------
1032 .. code-block:: console
1034 +----------------------------+ +----------------------------+
1035 |OpenStack(DevStack) | |OpenStack(DevStack) |
1037 | +--------------------+ | | +--------------------+ |
1038 | |sample-VNF VM | | | |sample-VNF VM | |
1040 | | TG | | | | DUT | |
1041 | | trafficgen_1 | | | | (VNF) | |
1043 | +--------+ +--------+ | | +--------+ +--------+ |
1044 | | VF NIC | | VF NIC | | | | VF NIC | | VF NIC | |
1045 | +----+---+--+----+---+ | | +-----+--+--+----+---+ |
1048 +--------+-----------+-------+ +---------+----------+-------+
1049 | VF0 VF1 | | VF0 VF1 |
1051 | | SUT2 | | | | SUT1 | |
1052 | | +-------+ (PF0)<----->(PF0) +---------+ | |
1054 | +-------------------+ (PF1)<----->(PF1) +--------------------+ |
1056 +----------------------------+ +----------------------------+
1057 host2 (compute) host1 (controller)
1060 Controller/Compute pre-configuration
1061 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Pre-configuration of the controller and compute hosts is the same as
described in the `Host pre-configuration`_ section. Follow the steps in that section.
1067 DevStack configuration
1068 ^^^^^^^^^^^^^^^^^^^^^^
Use the official `Devstack <https://docs.openstack.org/devstack/pike/>`_
documentation to install OpenStack on a host. Please note that the stable
``pike`` branch of the devstack repo should be used during the installation.
The required ``local.conf`` configuration file is described below.
.. note:: Update the devstack configuration files by replacing angular brackets
   with a short description inside.
1078 .. note:: Use ``lspci | grep Ether`` & ``lspci -n | grep <PCI ADDRESS>``
1079 commands to get device and vendor id of the virtual function (VF).
1081 DevStack configuration file for controller host:
1083 .. literalinclude:: code/multi-devstack-controller-local.conf
1086 DevStack configuration file for compute host:
1088 .. literalinclude:: code/multi-devstack-compute-local.conf
1091 Start the devstack installation on the controller and compute hosts.
1094 Run the sample vFW TC
1095 ^^^^^^^^^^^^^^^^^^^^^
Install Yardstick using the `Install Yardstick (NSB Testing)`_ steps for OpenStack
1100 Run sample vFW RFC2544 SR-IOV TC (``samples/vnf_samples/nsut/vfw/
1101 tc_heat_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``) in the heat
1102 context using steps described in `NS testing - using yardstick CLI`_ section
and the following yardstick command line arguments:
1107 yardstick -d task start --task-args='{"provider": "sriov"}' \
1108 samples/vnf_samples/nsut/vfw/tc_heat_rfc2544_ipv4_1rule_1flow_64B_trex.yaml
Enabling Other Traffic Generators
=================================
1. Software needed: IxLoadAPI ``<IxLoadTclApi version>Linux64.bin.tgz`` and
``<IxOS version>Linux64.bin.tar.gz`` (download from the Ixia support site).
Install ``<IxLoadTclApi version>Linux64.bin.tgz`` and
``<IxOS version>Linux64.bin.tar.gz``.
1121 If the installation was not done inside the container, after installing
1122 the IXIA client, check ``/opt/ixia/ixload/<ver>/bin/ixloadpython`` and make
sure you can run this cmd inside the yardstick container. Usually the user is
required to copy or link ``/opt/ixia/python/<ver>/bin/ixiapython`` to
``/usr/bin/ixiapython<ver>`` inside the container, as sketched below.
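One way to create the link (a hypothetical example; replace ``<ver>`` with the
installed version):

.. code-block:: console

   ln -s /opt/ixia/python/<ver>/bin/ixiapython /usr/bin/ixiapython<ver>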
1127 2. Update ``pod_ixia.yaml`` file with ixia details.
1129 .. code-block:: console
1131 cp <repo>/etc/yardstick/nodes/pod.yaml.nsb.sample.ixia etc/yardstick/nodes/pod_ixia.yaml
1133 Config ``pod_ixia.yaml``
1135 .. literalinclude:: code/pod_ixia.yaml
For sriov/ovs_dpdk pod files, please refer to the Standalone Virtualization section above for the ovs-dpdk/sriov configuration.
1140 3. Start IxOS TCL Server (Install 'Ixia IxExplorer IxOS <version>')
1141 You will also need to configure the IxLoad machine to start the IXIA
1142 IxosTclServer. This can be started like so:
1144 * Connect to the IxLoad machine using RDP
1146 ``Start->Programs->Ixia->IxOS->IxOS 8.01-GA-Patch1->Ixia Tcl Server IxOS 8.01-GA-Patch1``
1148 ``"C:\Program Files (x86)\Ixia\IxOS\8.01-GA-Patch1\ixTclServer.exe"``
4. Create a folder ``Results`` in ``C:\`` and share the folder on the network.
5. Execute the test case in the samplevnf folder, e.g.
``<repo>/samples/vnf_samples/nsut/vfw/tc_baremetal_http_ixload_1b_Requests-65000_Concurrency.yaml``
1158 IxNetwork testcases use IxNetwork API Python Bindings module, which is
1159 installed as part of the requirements of the project.
1161 1. Update ``pod_ixia.yaml`` file with ixia details.
1163 .. code-block:: console
1165 cp <repo>/etc/yardstick/nodes/pod.yaml.nsb.sample.ixia etc/yardstick/nodes/pod_ixia.yaml
Config ``pod_ixia.yaml``
1169 .. literalinclude:: code/pod_ixia.yaml
For sriov/ovs_dpdk pod files, please refer to the Standalone Virtualization section above for the ovs-dpdk/sriov configuration.
1174 2. Start IxNetwork TCL Server
1175 You will also need to configure the IxNetwork machine to start the IXIA
1176 IxNetworkTclServer. This can be started like so:
1178 * Connect to the IxNetwork machine using RDP
1180 ``Start->Programs->Ixia->IxNetwork->IxNetwork 7.21.893.14 GA->IxNetworkTclServer``
1181 (or ``IxNetworkApiServer``)
3. Execute the test case in the samplevnf folder, e.g.
``<repo>/samples/vnf_samples/nsut/vfw/tc_baremetal_rfc2544_ipv4_1rule_1flow_64B_ixia.yaml``
1189 In order to use Spirent Landslide for vEPC testcases, some dependencies have
1190 to be preinstalled and properly configured.
1194 32-bit Java installation is required for the Spirent Landslide TCL API.
1196 | ``$ sudo apt-get install openjdk-8-jdk:i386``
Make sure ``LD_LIBRARY_PATH`` points to the 32-bit JRE (a sketch is shown below). For more details
check the `Linux Troubleshooting <http://TAS_HOST_IP/tclapiinstall.html#trouble>`_
section of the installation instructions.
1203 - LsApi (Tcl API module)
1205 Follow Landslide documentation for detailed instructions on Linux
1206 installation of Tcl API and its dependencies
1207 ``http://TAS_HOST_IP/tclapiinstall.html``.
1208 For working with LsApi Python wrapper only steps 1-5 are required.
.. note:: After installation make sure your API home path is included in the
   ``PYTHONPATH`` environment variable.
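A sketch of the resulting environment setup (both paths are assumptions;
adjust them to the actual JRE and LsApi installation):

.. code-block:: console

   export LD_LIBRARY_PATH=/usr/lib/jvm/java-8-openjdk-i386/jre/lib/i386:$LD_LIBRARY_PATH
   export PYTHONPATH=$PYTHONPATH:/path/to/LsApi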
The current version of the LsApi module has an issue with reading ``LD_LIBRARY_PATH``.
For the LsApi module to initialize correctly, the following lines (184-186) in
1218 .. code-block:: python
1220 ldpath = os.environ.get('LD_LIBRARY_PATH', '')
1222 environ['LD_LIBRARY_PATH'] = environ['LD_LIBRARY_PATH'] + ':' + ldpath
1224 should be changed to:
1226 .. code-block:: python
1228 ldpath = os.environ.get('LD_LIBRARY_PATH', '')
1229 if not ldpath == '':
1230 environ['LD_LIBRARY_PATH'] = environ['LD_LIBRARY_PATH'] + ':' + ldpath
.. note:: The Spirent Landslide TCL software package needs to be updated in case
   the user upgrades to a new version of the Spirent Landslide software.