.. This work is licensed under a Creative Commons Attribution 4.0 International
.. http://creativecommons.org/licenses/by/4.0
.. (c) OPNFV, 2016-2018 Intel Corporation.
Convention for heading levels in Yardstick documentation:

======= Heading 0 (reserved for the title in a document)

Avoid deeper levels because they do not render well.
.. _OVS-DPDK: http://docs.openvswitch.org/en/latest/intro/install/dpdk/
.. _devstack: https://docs.openstack.org/devstack/pike/
The steps needed to run Yardstick with NSB testing are:

* Install Yardstick (NSB testing).
* Set up or reference a ``pod.yaml`` file describing the test topology.
* Create or reference the test configuration YAML file.
Refer to :doc:`04-installation` for more information on Yardstick

Several prerequisites are needed for Yardstick (VNF testing):

* Python Modules: pyzmq, pika.
Hardware & Software Ingredients
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

=======  ===================
=======  ===================
kernel   4.4.0-34-generic
=======  ===================
Boot and BIOS settings:

=============  =================================================
Boot settings  default_hugepagesz=1G hugepagesz=1G hugepages=16
               hugepagesz=2M hugepages=2048 isolcpus=1-11,22-33
               nohz_full=1-11,22-33 rcu_nocbs=1-11,22-33
               iommu=on iommu=pt intel_iommu=on
               Note: nohz_full and rcu_nocbs are used to disable Linux
BIOS           CPU Power and Performance Policy <Performance>
               Enhanced Intel® Speedstep® Tech           Disabled
               Hyper-Threading Technology (If supported) Enabled
               Virtualization Technology                 Enabled
               Intel(R) VT for Direct I/O                Enabled
=============  =================================================
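After rebooting with the settings above, it is worth verifying that the kernel actually picked up the parameters. A quick sanity check (read-only; works on any Linux host):

```shell
# Confirm hugepages were reserved by the kernel.
grep -i huge /proc/meminfo
# Confirm the boot parameters made it onto the kernel command line.
cat /proc/cmdline
```

If `HugePages_Total` is 0 after a reboot, re-check the GRUB configuration.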
Install Yardstick (NSB Testing)
-------------------------------

Download the source code and check out the latest stable branch:

.. code-block:: console

  git clone https://gerrit.opnfv.org/gerrit/yardstick

  # Switch to latest stable branch
  git checkout stable/gambia
Configure the network proxy, either using the environment variables or setting
the global environment file.

  http_proxy='http://proxy.company.com:port'
  https_proxy='http://proxy.company.com:port'

.. code-block:: console

  export http_proxy='http://proxy.company.com:port'
  export https_proxy='http://proxy.company.com:port'
Modify the Yardstick installation inventory, used by Ansible::

  cat ./ansible/install-inventory.ini

  localhost ansible_connection=local

  [yardstick-standalone]
  yardstick-standalone-node ansible_host=192.168.1.2
  yardstick-standalone-node-2 ansible_host=192.168.1.3

  # The section below is only for backward compatibility.
  # It will be removed later.
SSH access without a password needs to be configured for all the nodes
defined in the ``install-inventory.ini`` file.
If you want to use password authentication, you need to install ``sshpass``::

  sudo -EH apt-get install sshpass
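Passwordless access can be set up with standard OpenSSH tooling. A minimal sketch (the key path and the node address below are illustrative, not mandated by this guide):

```shell
# Generate an RSA key pair without a passphrase (skipped if it already exists).
KEY="$HOME/.ssh/id_rsa_nsb_demo"   # demo path; ~/.ssh/id_rsa is the usual choice
mkdir -p "$HOME/.ssh"
[ -f "$KEY" ] || ssh-keygen -t rsa -b 2048 -N "" -q -f "$KEY"
# Then push the public key to every node from install-inventory.ini, e.g.:
#   ssh-copy-id -i "$KEY.pub" root@192.168.1.2
ls "$KEY" "$KEY.pub"
```

Repeat the ``ssh-copy-id`` step for each host listed in the inventory.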
To execute an installation for a BareMetal or a Standalone context::

  ./nsb_setup.sh

To execute an installation for an OpenStack context::

  ./nsb_setup.sh <path to admin-openrc.sh>
The above commands will set up Docker with the latest Yardstick code. To
enter the container::

  docker exec -it yardstick bash

It will also automatically download all the packages needed for NSB testing
setup. Refer to :doc:`04-installation` for more details on Docker usage.
**Install Yardstick using Docker (recommended)**

Another way to execute an installation for a Bare-Metal or a Standalone context
is to use the Ansible script ``install.yaml``. Refer to :doc:`04-installation`
.. code-block:: console

  +----------+              +----------+

  +----------+              +----------+
Environment parameters and credentials
--------------------------------------

Configure yardstick.conf
^^^^^^^^^^^^^^^^^^^^^^^^

If you did not run ``yardstick env influxdb`` inside the container to generate
``yardstick.conf``, then create the config file manually (run inside the
container)::

  cp ./etc/yardstick/yardstick.conf.sample /etc/yardstick/yardstick.conf
  vi /etc/yardstick/yardstick.conf

Add ``trex_path``, ``trex_client_lib`` and ``bin_path`` to the ``nsb``
section::

  dispatcher = influxdb

  [dispatcher_influxdb]
  target = http://{YOUR_IP_HERE}:8086

  trex_path=/opt/nsb_bin/trex/scripts
  bin_path=/opt/nsb_bin
  trex_client_lib=/opt/nsb_bin/trex_client/stl
Run Yardstick - Network Service Testcases
-----------------------------------------

NS testing - using yardstick CLI
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

See :doc:`04-installation`

Connect to the Yardstick container::

  docker exec -it yardstick /bin/bash

If you're running ``heat`` testcases and ``nsb_setup.sh`` was not used::

  source /etc/yardstick/openstack.creds

In addition to the above, you need to set the ``EXTERNAL_NETWORK`` for
OpenStack::

  export EXTERNAL_NETWORK="<openstack public network>"

Finally, you should be able to run the testcase::

  yardstick --debug task start yardstick/samples/vnf_samples/nsut/<vnf>/<test case>
Network Service Benchmarking - Bare-Metal
-----------------------------------------

Bare-Metal Config pod.yaml describing Topology
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Bare-Metal 2-Node setup
+++++++++++++++++++++++

.. code-block:: console

  +----------+              +----------+

  +----------+              +----------+

Bare-Metal 3-Node setup - Correlated Traffic
++++++++++++++++++++++++++++++++++++++++++++

.. code-block:: console

  +----------+              +----------+            +------------+
  |          | (0)----->(0) |          |            |    UDP     |
  |   TG1    |              |   DUT    |            |   Replay   |
  |          |              |          |(1)<---->(0)|            |
  +----------+              +----------+            +------------+
  trafficgen_1                   vnf                 trafficgen_2
Bare-Metal Config pod.yaml
^^^^^^^^^^^^^^^^^^^^^^^^^^

Before executing Yardstick test cases, make sure that ``pod.yaml`` reflects the
topology and update all the required fields::

  cp /etc/yardstick/nodes/pod.yaml.nsb.sample /etc/yardstick/nodes/pod.yaml
    xe0:  # logical name from topology.yaml and vnfd.yaml
      driver: i40e  # default kernel driver
      local_ip: "152.16.100.20"
      netmask: "255.255.255.0"
      local_mac: "00:00:00:00:00:01"
    xe1:  # logical name from topology.yaml and vnfd.yaml
      driver: i40e  # default kernel driver
      local_ip: "152.16.40.20"
      netmask: "255.255.255.0"
      local_mac: "00:00:00:00:00:02"

    host: 1.1.1.2  # BM: host == ip; virtualized env: host == compute node
    xe0:  # logical name from topology.yaml and vnfd.yaml
      driver: i40e  # default kernel driver
      local_ip: "152.16.100.19"
      netmask: "255.255.255.0"
      local_mac: "00:00:00:00:00:03"
    xe1:  # logical name from topology.yaml and vnfd.yaml
      driver: i40e  # default kernel driver
      local_ip: "152.16.40.19"
      netmask: "255.255.255.0"
      local_mac: "00:00:00:00:00:04"

    - network: "152.16.100.20"
      netmask: "255.255.255.0"
      gateway: "152.16.100.20"
    - network: "152.16.40.20"
      netmask: "255.255.255.0"
      gateway: "152.16.40.20"

    - network: "0064:ff9b:0:0:0:0:9810:6414"
      gateway: "0064:ff9b:0:0:0:0:9810:6414"
    - network: "0064:ff9b:0:0:0:0:9810:2814"
      gateway: "0064:ff9b:0:0:0:0:9810:2814"
Standalone Virtualization
-------------------------

SR-IOV Pre-requisites
+++++++++++++++++++++
On the host where the VM is created:

a) Create and configure a bridge named ``br-int`` for the VM to connect to the
   external network. Currently this can be done using a VXLAN tunnel.

   Execute the following on the host where the VM is created::

     ip link add type vxlan remote <Jumphost IP> local <DUT IP> id <ID: 10> dstport 4789
     brctl addif br-int vxlan0
     ip link set dev vxlan0 up
     ip addr add <IP#1, like: 172.20.2.1/24> dev br-int
     ip link set dev br-int up

   .. note:: You may need to add extra rules to iptables to forward traffic.

   .. code-block:: console

     iptables -A FORWARD -i br-int -s <network ip address>/<netmask> -j ACCEPT
     iptables -A FORWARD -o br-int -d <network ip address>/<netmask> -j ACCEPT

   Execute the following on the jump host:

   .. code-block:: console

     ip link add type vxlan remote <DUT IP> local <Jumphost IP> id <ID: 10> dstport 4789
     ip addr add <IP#2, like: 172.20.2.2/24> dev vxlan0
     ip link set dev vxlan0 up

   .. note:: The host and the jump host are different bare-metal servers.

b) Modify the test case management CIDR.
   IP addresses IP#1, IP#2 and the management CIDR must be in the same network.
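As an illustration, if IP#1 and IP#2 were taken from the 172.20.2.0/24 example above, the management entry in the test case file would use the same subnet. This is a hypothetical fragment (the surrounding key layout follows the sample test case files shipped with Yardstick):

```yaml
# Hypothetical fragment: keep the management CIDR in the same /24 as the
# br-int (IP#1) and vxlan0 (IP#2) addresses configured above.
mgmt:
  cidr: '172.20.2.10/24'
```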
c) Build the guest image for the VNF to run.
   Most of the sample test cases in Yardstick use a guest image called
   ``yardstick-nsb-image`` which deviates from an Ubuntu Cloud Server image.
   Yardstick has a tool for building this custom image with SampleVNF.
   It is necessary to have ``sudo`` rights to use this tool.

   You may also need to install several additional packages to use this tool,
   by following the commands below::

     sudo apt-get update && sudo apt-get install -y qemu-utils kpartx

   This image can be built using the following command in the directory where
   Yardstick is installed::

     export YARD_IMG_ARCH='amd64'
     echo "Defaults env_keep += 'YARD_IMG_ARCH'" | sudo tee -a /etc/sudoers

   For instructions on generating a cloud image using Ansible, refer to
   :doc:`04-installation`.

.. note:: The VM should be built with a static IP and be accessible from the
   Yardstick host.
SR-IOV Config pod.yaml describing Topology
++++++++++++++++++++++++++++++++++++++++++

.. code-block:: console

                               +--------------------+

                               +--------------------+
                               | VF NIC |  | VF NIC |
                               +--------+  +--------+

  +----------+               +-------------------------+
  |          | (0)<----->(0) | ------       SUT        |
  |          | (n)<----->(n) | -----------------       |
  +----------+               +-------------------------+

SR-IOV 3-Node setup - Correlated Traffic
++++++++++++++++++++++++++++++++++++++++

.. code-block:: console

                               +--------------------+

                               +--------------------+
                               | VF NIC |  | VF NIC |
                               +--------+  +--------+

  +----------+               +---------------------+            +--------------+
  |          | (0)<----->(0) | -----               |            |     TG2      |
  |   TG1    |               |        SUT          |            | (UDP Replay) |
  |          | (n)<----->(n) |               ----- | (n)<-->(n) |              |
  +----------+               +---------------------+            +--------------+
  trafficgen_1                        host                       trafficgen_2
Before executing Yardstick test cases, make sure that ``pod.yaml`` reflects the
topology and update all the required fields.

.. code-block:: console

  cp <yardstick>/etc/yardstick/nodes/standalone/trex_bm.yaml.sample /etc/yardstick/nodes/standalone/pod_trex.yaml
  cp <yardstick>/etc/yardstick/nodes/standalone/host_sriov.yaml /etc/yardstick/nodes/standalone/host_sriov.yaml

.. note:: Update all the required fields like ip, user, password, pcis, etc.
SR-IOV Config pod_trex.yaml
+++++++++++++++++++++++++++

    key_filename: /root/.ssh/id_rsa
    xe0:  # logical name from topology.yaml and vnfd.yaml
      driver: i40e  # default kernel driver
      local_ip: "152.16.100.20"
      netmask: "255.255.255.0"
      local_mac: "00:00:00:00:00:01"
    xe1:  # logical name from topology.yaml and vnfd.yaml
      driver: i40e  # default kernel driver
      local_ip: "152.16.40.20"
      netmask: "255.255.255.0"
      local_mac: "00:00:00:00:00:02"
SR-IOV Config host_sriov.yaml
+++++++++++++++++++++++++++++

SR-IOV testcase update:
``<yardstick>/samples/vnf_samples/nsut/vfw/tc_sriov_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``

Update contexts section
'''''''''''''''''''''''

      file: /etc/yardstick/nodes/standalone/pod_trex.yaml
  - type: StandaloneSriov
    file: /etc/yardstick/nodes/standalone/host_sriov.yaml
    images: "/var/lib/libvirt/images/ubuntu.qcow2"
    user: ""      # update VM username
    password: ""  # update password
    cidr: '1.1.1.61/24'  # Update VM IP address; if static, <ip>/<mask>; if dynamic, <start of ip>/<mask>
    phy_port: "0000:05:00.0"
      cidr: '152.16.100.10/24'
      gateway_ip: '152.16.100.20'
    phy_port: "0000:05:00.1"
      cidr: '152.16.40.10/24'
      gateway_ip: '152.16.40.20'
OVS-DPDK Pre-requisites
+++++++++++++++++++++++

On the host where the VM is created:

a) Create and configure a bridge named ``br-int`` for the VM to connect to the
   external network. Currently this can be done using a VXLAN tunnel.

   Execute the following on the host where the VM is created:

   .. code-block:: console

     ip link add type vxlan remote <Jumphost IP> local <DUT IP> id <ID: 10> dstport 4789
     brctl addif br-int vxlan0
     ip link set dev vxlan0 up
     ip addr add <IP#1, like: 172.20.2.1/24> dev br-int
     ip link set dev br-int up

   .. note:: You may need to add extra rules to iptables to forward traffic.

   .. code-block:: console

     iptables -A FORWARD -i br-int -s <network ip address>/<netmask> -j ACCEPT
     iptables -A FORWARD -o br-int -d <network ip address>/<netmask> -j ACCEPT

   Execute the following on the jump host:

   .. code-block:: console

     ip link add type vxlan remote <DUT IP> local <Jumphost IP> id <ID: 10> dstport 4789
     ip addr add <IP#2, like: 172.20.2.2/24> dev vxlan0
     ip link set dev vxlan0 up

   .. note:: The host and the jump host are different bare-metal servers.

b) Modify the test case management CIDR.
   IP addresses IP#1, IP#2 and the management CIDR must be in the same network.
c) Build the guest image for the VNF to run.
   Most of the sample test cases in Yardstick use a guest image called
   ``yardstick-nsb-image`` which deviates from an Ubuntu Cloud Server image.
   Yardstick has a tool for building this custom image with SampleVNF.
   It is necessary to have ``sudo`` rights to use this tool.

   You may need to install several additional packages to use this tool, by
   following the commands below::

     sudo apt-get update && sudo apt-get install -y qemu-utils kpartx

   This image can be built using the following command in the directory where
   Yardstick is installed::

     export YARD_IMG_ARCH='amd64'
     echo "Defaults env_keep += 'YARD_IMG_ARCH'" | sudo tee -a /etc/sudoers
     sudo tools/yardstick-img-dpdk-modify tools/ubuntu-server-cloudimg-samplevnf-modify.sh

   For more details refer to :doc:`04-installation`.

.. note:: The VM should be built with a static IP and should be accessible
   from the Yardstick host.
3. OVS & DPDK version.

   * OVS 2.7 and DPDK 16.11.1 or above are supported.

4. Set up `OVS-DPDK`_ on the host.
OVS-DPDK Config pod.yaml describing Topology
++++++++++++++++++++++++++++++++++++++++++++

OVS-DPDK 2-Node setup
+++++++++++++++++++++

.. code-block:: console

                               +--------------------+

                               +--------------------+
                               | virtio |  | virtio |
                               +--------+  +--------+

                               +--------+  +--------+
                               | vHOST0 |  | vHOST1 |
  +----------+               +-------------------------+
  |          | (0)<----->(0) | ------                  |
  |          | (n)<----->(n) | ------------------      |
  +----------+               +-------------------------+

OVS-DPDK 3-Node setup - Correlated Traffic
++++++++++++++++++++++++++++++++++++++++++

.. code-block:: console

                               +--------------------+

                               +--------------------+
                               | virtio |  | virtio |
                               +--------+  +--------+

                               +--------+  +--------+
                               | vHOST0 |  | vHOST1 |
  +----------+               +-------------------------+          +------------+
  |          | (0)<----->(0) | ------                  |          |    TG2     |
  |   TG1    |               |          SUT            |          |(UDP Replay)|
  |          |               |       (ovs-dpdk)        |          |            |
  |          | (n)<----->(n) |                  ------ |(n)<-->(n)|            |
  +----------+               +-------------------------+          +------------+
  trafficgen_1                        host                         trafficgen_2
Before executing Yardstick test cases, make sure that the ``pod.yaml`` reflects
the topology and update all the required fields::

  cp <yardstick>/etc/yardstick/nodes/standalone/trex_bm.yaml.sample /etc/yardstick/nodes/standalone/pod_trex.yaml
  cp <yardstick>/etc/yardstick/nodes/standalone/host_ovs.yaml /etc/yardstick/nodes/standalone/host_ovs.yaml

.. note:: Update all the required fields like ip, user, password, pcis, etc.
OVS-DPDK Config pod_trex.yaml
+++++++++++++++++++++++++++++

    xe0:  # logical name from topology.yaml and vnfd.yaml
      driver: i40e  # default kernel driver
      local_ip: "152.16.100.20"
      netmask: "255.255.255.0"
      local_mac: "00:00:00:00:00:01"
    xe1:  # logical name from topology.yaml and vnfd.yaml
      driver: i40e  # default kernel driver
      local_ip: "152.16.40.20"
      netmask: "255.255.255.0"
      local_mac: "00:00:00:00:00:02"
OVS-DPDK Config host_ovs.yaml
+++++++++++++++++++++++++++++

ovs_dpdk testcase update:
``<yardstick>/samples/vnf_samples/nsut/vfw/tc_ovs_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``

Update contexts section
'''''''''''''''''''''''

      file: /etc/yardstick/nodes/standalone/pod_trex.yaml
  - type: StandaloneOvsDpdk
    file: /etc/yardstick/nodes/standalone/pod_ovs.yaml
    images: "/var/lib/libvirt/images/ubuntu.qcow2"
    user: ""      # update VM username
    password: ""  # update password
    cidr: '1.1.1.61/24'  # Update VM IP address; if static, <ip>/<mask>; if dynamic, <start of ip>/<mask>
    phy_port: "0000:05:00.0"
      cidr: '152.16.100.10/24'
      gateway_ip: '152.16.100.20'
    phy_port: "0000:05:00.1"
      cidr: '152.16.40.10/24'
      gateway_ip: '152.16.40.20'
OpenStack with SR-IOV support
-----------------------------

This section describes how to run a Sample VNF test case, using Heat context,
with SR-IOV. It also covers how to install OpenStack in Ubuntu 16.04, using
DevStack, with SR-IOV support.
Single node OpenStack with external TG
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. code-block:: console

               +----------------------------+
               |OpenStack(DevStack)         |
               |  +--------------------+    |
               |  +--------+  +--------+    |
               |  | VF NIC |  | VF NIC |    |
               |  +-----+--+--+----+---+    |
  +----------+ +---------+----------+-------+
  |    TG    | (PF0)<----->(PF0) +---------+   |
  |          | (PF1)<----->(PF1) +--------------------+ |
  +----------+ +----------------------------+
Host pre-configuration
++++++++++++++++++++++

.. warning:: The following configuration requires sudo access to the system.
   Make sure that your user has the access.

Enable the Intel VT-d or AMD-Vi extension in the BIOS. Some system
manufacturers disable this extension by default.

Activate the Intel VT-d or AMD-Vi extension in the kernel by modifying the GRUB
config file ``/etc/default/grub``.

For the Intel platform::

  GRUB_CMDLINE_LINUX_DEFAULT="intel_iommu=on"

For the AMD platform::

  GRUB_CMDLINE_LINUX_DEFAULT="amd_iommu=on"

Update the grub configuration file and restart the system:

.. warning:: The following command will reboot the system.

Make sure the extension has been enabled::

  sudo journalctl -b 0 | grep -e IOMMU -e DMAR
  Feb 06 14:50:14 hostname kernel: ACPI: DMAR 0x000000006C406000 0001E0 (v01 INTEL  S2600WF  00000001 INTL 20091013)
  Feb 06 14:50:14 hostname kernel: DMAR: IOMMU enabled
  Feb 06 14:50:14 hostname kernel: DMAR: Host address width 46
  Feb 06 14:50:14 hostname kernel: DMAR: DRHD base: 0x000000d37fc000 flags: 0x0
  Feb 06 14:50:14 hostname kernel: DMAR: dmar0: reg_base_addr d37fc000 ver 1:0 cap 8d2078c106f0466 ecap f020de
  Feb 06 14:50:14 hostname kernel: DMAR: DRHD base: 0x000000e0ffc000 flags: 0x0
  Feb 06 14:50:14 hostname kernel: DMAR: dmar1: reg_base_addr e0ffc000 ver 1:0 cap 8d2078c106f0466 ecap f020de
  Feb 06 14:50:14 hostname kernel: DMAR: DRHD base: 0x000000ee7fc000 flags: 0x0
.. TODO: Refer to the yardstick installation guide for proxy set up

Set up the system proxy (if needed). Add the following configuration into the
``/etc/environment`` file:

.. note:: The proxy server name/port and IPs should be changed according to
   actual/current proxy configuration in the lab.

.. code-block:: console

  export http_proxy=http://proxy.company.com:port
  export https_proxy=http://proxy.company.com:port
  export ftp_proxy=http://proxy.company.com:port
  export no_proxy=localhost,127.0.0.1,company.com,<IP-OF-HOST1>,<IP-OF-HOST2>,...
  export NO_PROXY=localhost,127.0.0.1,company.com,<IP-OF-HOST1>,<IP-OF-HOST2>,...
  sudo -EH apt-get update
  sudo -EH apt-get upgrade
  sudo -EH apt-get dist-upgrade

Install dependencies needed for DevStack::

  sudo -EH apt-get install python python-dev python-pip

Set up SR-IOV ports on the host:

.. note:: The ``enp24s0f0`` and ``enp24s0f1`` interfaces are physical function
   (PF) interfaces on the host and ``enp24s0f3`` is a public interface used in
   OpenStack, so the interface names should be changed according to the HW
   environment used for testing.

.. code-block:: console

  sudo ip link set dev enp24s0f0 up
  sudo ip link set dev enp24s0f1 up
  sudo ip link set dev enp24s0f3 up

  echo 2 | sudo tee /sys/class/net/enp24s0f0/device/sriov_numvfs
  echo 2 | sudo tee /sys/class/net/enp24s0f1/device/sriov_numvfs
DevStack installation
+++++++++++++++++++++

If you want to try out NSB, but don't have OpenStack set up, you can use
`Devstack`_ to install OpenStack on a host. Please note that the
``stable/pike`` branch of the devstack repo should be used during the
installation. The required ``local.conf`` configuration file is described
below.

DevStack configuration file:

.. note:: Update the devstack configuration file by replacing angular brackets
   with a short description inside.

.. note:: Use ``lspci | grep Ether`` & ``lspci -n | grep <PCI ADDRESS>``
   commands to get device and vendor id of the virtual function (VF).

.. literalinclude:: code/single-devstack-local.conf

Start the devstack installation on a host.
TG host configuration
+++++++++++++++++++++

Yardstick automatically installs and configures the Trex traffic generator on
the TG host based on the provided POD file (see below). Nevertheless, it is
recommended to check the compatibility of the NIC installed on the TG server
with the Trex software using the
`manual <https://trex-tgn.cisco.com/trex/doc/trex_manual.html>`_.
Run the Sample VNF test case
++++++++++++++++++++++++++++

There is an example of a Sample VNF test case ready to be executed in an
OpenStack environment with SR-IOV support: ``samples/vnf_samples/nsut/vfw/
tc_heat_sriov_external_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``.

Install Yardstick using the `Install Yardstick (NSB Testing)`_ steps for an
OpenStack context.

Create a pod file for the TG in the yardstick repo folder located in the
yardstick container.

.. note:: The ``ip``, ``user``, ``password`` and ``vpci`` fields should be
   changed according to the HW environment used for the testing. Use the
   ``lshw -c network -businfo`` command to get the PF PCI address for the
   ``vpci`` field.

.. literalinclude:: code/single-yardstick-pod.conf

Run the Sample vFW RFC2544 SR-IOV TC (``samples/vnf_samples/nsut/vfw/
tc_heat_sriov_external_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``) in the heat
context using steps described in `NS testing - using yardstick CLI`_ section.
Multi node OpenStack TG and VNF setup (two nodes)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. code-block:: console

  +----------------------------+          +----------------------------+
  |OpenStack(DevStack)         |          |OpenStack(DevStack)         |
  |  +--------------------+    |          |  +--------------------+    |
  |  |sample-VNF VM       |    |          |  |sample-VNF VM       |    |
  |  |        TG          |    |          |  |        DUT         |    |
  |  |   trafficgen_1     |    |          |  |       (VNF)        |    |
  |  +--------+  +--------+    |          |  +--------+  +--------+    |
  |  | VF NIC |  | VF NIC |    |          |  | VF NIC |  | VF NIC |    |
  |  +----+---+--+----+---+    |          |  +-----+--+--+----+---+    |
  +--------+-----------+-------+          +---------+----------+-------+
       | VF0       VF1 |                       | VF0        VF1 |
       |   | SUT2      |                       |    | SUT1      |
       |   +-------+  (PF0)<----->(PF0)  +---------+   |        |
       | +-------------------+ (PF1)<----->(PF1) +--------------------+ |
  +----------------------------+          +----------------------------+
     host2 (compute)                         host1 (controller)
Controller/Compute pre-configuration
++++++++++++++++++++++++++++++++++++

Pre-configuration of the controller and compute hosts is the same as described
in the `Host pre-configuration`_ section.

DevStack configuration
++++++++++++++++++++++

A reference ``local.conf`` for deploying OpenStack in a multi-host environment
using `Devstack`_ is shown in this section. The ``stable/pike`` branch of the
devstack repo should be used during the installation.

.. note:: Update the devstack configuration files by replacing angular
   brackets with a short description inside.

.. note:: Use ``lspci | grep Ether`` & ``lspci -n | grep <PCI ADDRESS>``
   commands to get device and vendor id of the virtual function (VF).

DevStack configuration file for the controller host:

.. literalinclude:: code/multi-devstack-controller-local.conf

DevStack configuration file for the compute host:

.. literalinclude:: code/multi-devstack-compute-local.conf

Start the devstack installation on the controller and compute hosts.

Run the sample vFW TC
+++++++++++++++++++++

Install Yardstick using the `Install Yardstick (NSB Testing)`_ steps for an
OpenStack context.

Run the sample vFW RFC2544 SR-IOV test case
(``samples/vnf_samples/nsut/vfw/tc_heat_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``)
in the heat context using steps described in
`NS testing - using yardstick CLI`_ section and the following Yardstick command
line arguments:

.. code-block:: console

  yardstick -d task start --task-args='{"provider": "sriov"}' \
  samples/vnf_samples/nsut/vfw/tc_heat_rfc2544_ipv4_1rule_1flow_64B_trex.yaml
Enabling other Traffic generators
---------------------------------

1. Software needed: IxLoadAPI ``<IxLoadTclApi version>Linux64.bin.tgz`` and
   ``<IxOS version>Linux64.bin.tar.gz`` (download from the Ixia support site).
   Install ``<IxLoadTclApi version>Linux64.bin.tgz`` and
   ``<IxOS version>Linux64.bin.tar.gz``.
   If the installation was not done inside the container, after installing
   the IXIA client, check ``/opt/ixia/ixload/<ver>/bin/ixloadpython`` and make
   sure you can run this cmd inside the yardstick container. Usually the user
   is required to copy or link ``/opt/ixia/python/<ver>/bin/ixiapython`` to
   ``/usr/bin/ixiapython<ver>`` inside the container.
2. Update the ``pod_ixia.yaml`` file with Ixia details.

   .. code-block:: console

     cp <repo>/etc/yardstick/nodes/pod.yaml.nsb.sample.ixia \
       etc/yardstick/nodes/pod_ixia.yaml

   Configure ``pod_ixia.yaml``

   .. literalinclude:: code/pod_ixia.yaml

   For SR-IOV/OVS-DPDK pod files, please refer to `Standalone Virtualization`_
   for OVS-DPDK/SR-IOV configuration.

3. Start the IxOS TCL Server (install 'Ixia IxExplorer IxOS <version>').
   You will also need to configure the IxLoad machine to start the IXIA
   IxosTclServer. This can be started like so:

   * Connect to the IxLoad machine using RDP.

     ``Start->Programs->Ixia->IxOS->IxOS 8.01-GA-Patch1->Ixia Tcl Server IxOS 8.01-GA-Patch1``
     or
     ``C:\Program Files (x86)\Ixia\IxOS\8.01-GA-Patch1\ixTclServer.exe``

4. Create a folder ``Results`` in ``C:\`` and share the folder on the network.

5. Execute the testcase in the samplevnf folder, e.g.
   ``<repo>/samples/vnf_samples/nsut/vfw/tc_baremetal_http_ixload_1b_Requests-65000_Concurrency.yaml``
IxNetwork testcases use the IxNetwork API Python Bindings module, which is
installed as part of the requirements of the project.

1. Update the ``pod_ixia.yaml`` file with Ixia details.

   .. code-block:: console

     cp <repo>/etc/yardstick/nodes/pod.yaml.nsb.sample.ixia \
       etc/yardstick/nodes/pod_ixia.yaml

   Configure ``pod_ixia.yaml``

   .. literalinclude:: code/pod_ixia.yaml

   For SR-IOV/OVS-DPDK pod files, please refer to the
   `Standalone Virtualization`_ section above for OVS-DPDK/SR-IOV
   configuration.

2. Start the IxNetwork TCL Server.
   You will also need to configure the IxNetwork machine to start the IXIA
   IxNetworkTclServer. This can be started like so:

   * Connect to the IxNetwork machine using RDP.

     ``Start->Programs->Ixia->IxNetwork->IxNetwork 7.21.893.14 GA->IxNetworkTclServer``
     (or ``IxNetworkApiServer``)

3. Execute the testcase in the samplevnf folder, e.g.
   ``<repo>/samples/vnf_samples/nsut/vfw/tc_baremetal_rfc2544_ipv4_1rule_1flow_64B_ixia.yaml``
In order to use Spirent Landslide for vEPC testcases, some dependencies have
to be preinstalled and properly configured.

A 32-bit Java installation is required for the Spirent Landslide TCL API.

| ``$ sudo apt-get install openjdk-8-jdk:i386``

Make sure ``LD_LIBRARY_PATH`` is pointing to the 32-bit JRE. For more details
check the `Linux Troubleshooting <http://TAS_HOST_IP/tclapiinstall.html#trouble>`_
section of the installation instructions.

- LsApi (Tcl API module)

  Follow the Landslide documentation for detailed instructions on the Linux
  installation of the Tcl API and its dependencies:
  ``http://TAS_HOST_IP/tclapiinstall.html``.
  For working with the LsApi Python wrapper only steps 1-5 are required.

.. note:: After installation make sure your API home path is included in the
   ``PYTHONPATH`` environment variable.
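For example (the install location used here is hypothetical; substitute the actual LsApi home directory):

```shell
# Add the (hypothetical) LsApi home directory to PYTHONPATH.
LSAPI_HOME="$HOME/LsApi"
export PYTHONPATH="${PYTHONPATH}:${LSAPI_HOME}"
echo "$PYTHONPATH"
```

To make the setting persistent, add the export line to your shell profile.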
The current version of the LsApi module has an issue with reading
``LD_LIBRARY_PATH``. For the LsApi module to initialize correctly, the
following lines (184-186) in

.. code-block:: python

    ldpath = os.environ.get('LD_LIBRARY_PATH', '')
    environ['LD_LIBRARY_PATH'] = environ['LD_LIBRARY_PATH'] + ':' + ldpath

should be changed to:

.. code-block:: python

    ldpath = os.environ.get('LD_LIBRARY_PATH', '')
    if not ldpath == '':
        environ['LD_LIBRARY_PATH'] = environ['LD_LIBRARY_PATH'] + ':' + ldpath

.. note:: The Spirent Landslide TCL software package needs to be updated in
   case the user upgrades to a new version of the Spirent Landslide software.