.. This work is licensed under a Creative Commons Attribution 4.0 International
   License.
.. http://creativecommons.org/licenses/by/4.0
.. (c) OPNFV, 2016-2018 Intel Corporation.

.. Convention for heading levels in Yardstick documentation:

   ======= Heading 0 (reserved for the title in a document)
   ------- Heading 1
   ~~~~~~~ Heading 2
   +++++++ Heading 3
   ''''''' Heading 4
   Avoid deeper levels because they do not render well.
======================================
Yardstick - NSB Testing - Installation
======================================
The Network Service Benchmarking (NSB) extends the Yardstick framework to
perform VNF characterization and benchmarking in three different execution
environments: bare metal (native Linux), standalone virtualized environment,
and managed virtualized environment (e.g. OpenStack). It also adds the
capability to interact with external traffic generators, both hardware and
software based, to generate and validate traffic according to user-defined
profiles.

The steps needed to run Yardstick with NSB testing are:

* Install Yardstick (NSB testing).
* Set up or reference a ``pod.yaml`` file describing the test topology.
* Create or reference the test configuration YAML file.
* Run the test case.
Prerequisites
-------------

Refer to chapter :doc:`04-installation` for more information on Yardstick
prerequisites.

Several prerequisites are needed for Yardstick (VNF testing):

* Python Modules: pyzmq, pika
Hardware & Software Ingredients
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

SUT requirements:

=======  ===================
Item     Description
=======  ===================
kernel   4.4.0-34-generic
=======  ===================
Boot and BIOS settings:

=============  =================================================
Boot settings  default_hugepagesz=1G hugepagesz=1G hugepages=16
               hugepagesz=2M hugepages=2048 isolcpus=1-11,22-33
               nohz_full=1-11,22-33 rcu_nocbs=1-11,22-33
               iommu=on iommu=pt intel_iommu=on

               Note: nohz_full and rcu_nocbs disable Linux
               kernel interrupts on the isolated cores
BIOS           CPU Power and Performance Policy <Performance>

               Enhanced Intel® Speedstep® Tech Disabled

               Hyper-Threading Technology (If supported) Enabled

               Virtualization Technology Enabled

               Intel(R) VT for Direct I/O Enabled
=============  =================================================
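After rebooting, the effect of these settings can be verified from standard
Linux interfaces; a minimal sketch (assuming stock procfs/sysfs paths):

```shell
# Sketch: confirm the boot settings above took effect (standard Linux paths).
grep -E 'HugePages_Total|Hugepagesize' /proc/meminfo      # hugepage pool state
cat /proc/cmdline                                         # should echo the GRUB options
cat /sys/devices/system/cpu/isolated 2>/dev/null || true  # isolated CPU list, if exposed
```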
Install Yardstick (NSB Testing)
-------------------------------

Download the source code and install Yardstick from it:

.. code-block:: console

  git clone https://gerrit.opnfv.org/gerrit/yardstick

  cd yardstick

  # Switch to latest stable branch
  # git checkout <tag or stable branch>
  git checkout stable/euphrates
Configure the network proxy, either using the environment variables or setting
the global environment file ``/etc/environment``:

.. code-block:: ini

    http_proxy='http://proxy.company.com:port'
    https_proxy='http://proxy.company.com:port'

.. code-block:: console

    export http_proxy='http://proxy.company.com:port'
    export https_proxy='http://proxy.company.com:port'
The last step is to modify the Yardstick installation inventory used by
Ansible:

.. code-block:: ini

  cat ./ansible/install-inventory.ini
  [jumphost]
  localhost ansible_connection=local

  [yardstick-standalone]
  yardstick-standalone-node ansible_host=192.168.1.2
  yardstick-standalone-node-2 ansible_host=192.168.1.3

  # The section below is only for backward compatibility.
  # It will be removed later.

SSH access without a password needs to be configured for all nodes defined in
the ``install-inventory.ini`` file. If you want to use password authentication,
you need to install ``sshpass``:

.. code-block:: console

    sudo -EH apt-get install sshpass
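Password-less access for the inventory nodes can be prepared with standard
OpenSSH tooling; a sketch (the address below is the placeholder value from the
example inventory):

```shell
# Sketch: enable password-less SSH for the nodes in install-inventory.ini.
mkdir -p "$HOME/.ssh"
# Generate a key pair once, without a passphrase, unless one already exists.
[ -f "$HOME/.ssh/id_rsa" ] || ssh-keygen -t rsa -b 2048 -N '' -f "$HOME/.ssh/id_rsa" -q
# Then copy the public key to every node in the inventory, e.g.:
# ssh-copy-id root@192.168.1.2
```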
To execute an installation for a Bare-Metal or a Standalone context:

.. code-block:: console

    ./nsb_setup.sh

To execute an installation for an OpenStack context:

.. code-block:: console

    ./nsb_setup.sh <path to admin-openrc.sh>

The above command sets up a Docker container with the latest Yardstick code. To
execute a shell inside it:

.. code-block:: console

    docker exec -it yardstick bash

It will also automatically download all the packages needed for NSB testing
setup. Refer to chapter :doc:`04-installation` for more on Docker:
**Install Yardstick using Docker (recommended)**

Another way to execute an installation for a Bare-Metal or a Standalone context
is to use the ansible script ``install.yaml``. Refer to chapter
:doc:`04-installation` for more details.
System Topology:
----------------

.. code-block:: console

  +----------+              +----------+
  |          |              |          |
  |          | (0)----->(0) |          |
  |    TG1   |              |    DUT   |
  |          |              |          |
  |          | (1)<-----(1) |          |
  +----------+              +----------+
  trafficgen_1                  vnf
Environment parameters and credentials
--------------------------------------

Config yardstick conf
~~~~~~~~~~~~~~~~~~~~~

If you did not run ``yardstick env influxdb`` inside the container, which
generates a correct ``yardstick.conf``, then create the config file manually
(run inside the container):

.. code-block:: console

    cp ./etc/yardstick/yardstick.conf.sample /etc/yardstick/yardstick.conf
    vi /etc/yardstick/yardstick.conf

Add ``trex_path``, ``trex_client_lib`` and ``bin_path`` to the ``nsb``
section:

.. code-block:: ini

  [DEFAULT]
  dispatcher = file, influxdb

  [dispatcher_influxdb]
  target = http://{YOUR_IP_HERE}:8086

  [nsb]
  trex_path=/opt/nsb_bin/trex/scripts
  bin_path=/opt/nsb_bin
  trex_client_lib=/opt/nsb_bin/trex_client/stl
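The ``nsb`` section can also be appended non-interactively; a sketch writing to
a temporary copy (point ``CONF`` at ``/etc/yardstick/yardstick.conf`` on a real
install):

```shell
# Sketch: add the [nsb] section without an editor. CONF is a temp file here so
# the snippet can run unprivileged; use /etc/yardstick/yardstick.conf for real.
CONF=$(mktemp)
cat >> "$CONF" <<'EOF'
[nsb]
trex_path=/opt/nsb_bin/trex/scripts
bin_path=/opt/nsb_bin
trex_client_lib=/opt/nsb_bin/trex_client/stl
EOF
grep '^trex_path' "$CONF"
```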
Run Yardstick - Network Service Testcases
-----------------------------------------

NS testing - using yardstick CLI
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

See :doc:`04-installation`.

.. code-block:: console

    docker exec -it yardstick /bin/bash
    source /etc/yardstick/openstack.creds (only for heat TC if nsb_setup.sh was NOT used)
    export EXTERNAL_NETWORK="<openstack public network>" (only for heat TC)
    yardstick --debug task start yardstick/samples/vnf_samples/nsut/<vnf>/<test case>
Network Service Benchmarking - Bare-Metal
-----------------------------------------

Bare-Metal Config pod.yaml describing Topology
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Bare-Metal 2-Node setup
+++++++++++++++++++++++

.. code-block:: console

  +----------+              +----------+
  |          |              |          |
  |          | (0)----->(0) |          |
  |    TG1   |              |    DUT   |
  |          |              |          |
  |          | (1)<-----(1) |          |
  +----------+              +----------+
  trafficgen_1                  vnf

Bare-Metal 3-Node setup - Correlated Traffic
++++++++++++++++++++++++++++++++++++++++++++

.. code-block:: console

  +----------+              +----------+            +------------+
  |          |              |          |            |            |
  |          | (0)----->(0) |          |            |    UDP     |
  |    TG1   |              |    DUT   |            |   Replay   |
  |          |              |          |            |            |
  |          |              |          |(1)<---->(0)|            |
  +----------+              +----------+            +------------+
  trafficgen_1                  vnf                  trafficgen_2
Bare-Metal Config pod.yaml
~~~~~~~~~~~~~~~~~~~~~~~~~~

Before executing Yardstick test cases, make sure that ``pod.yaml`` reflects the
topology and update all the required fields::

    cp /etc/yardstick/nodes/pod.yaml.nsb.sample /etc/yardstick/nodes/pod.yaml
.. code-block:: yaml

      interfaces:
        xe0:  # logical name from topology.yaml and vnfd.yaml
          driver: i40e  # default kernel driver
          local_ip: "152.16.100.20"
          netmask: "255.255.255.0"
          local_mac: "00:00:00:00:00:01"
        xe1:  # logical name from topology.yaml and vnfd.yaml
          driver: i40e  # default kernel driver
          local_ip: "152.16.40.20"
          netmask: "255.255.255.0"
          local_mac: "00:00:00:00:00:02"

      host: 1.1.1.2  # BM - host == ip, virtualized env - Host - compute node
      interfaces:
        xe0:  # logical name from topology.yaml and vnfd.yaml
          driver: i40e  # default kernel driver
          local_ip: "152.16.100.19"
          netmask: "255.255.255.0"
          local_mac: "00:00:00:00:00:03"
        xe1:  # logical name from topology.yaml and vnfd.yaml
          driver: i40e  # default kernel driver
          local_ip: "152.16.40.19"
          netmask: "255.255.255.0"
          local_mac: "00:00:00:00:00:04"
      routing_table:
      - network: "152.16.100.20"
        netmask: "255.255.255.0"
        gateway: "152.16.100.20"
      - network: "152.16.40.20"
        netmask: "255.255.255.0"
        gateway: "152.16.40.20"
      nd_route_tbl:
      - network: "0064:ff9b:0:0:0:0:9810:6414"
        gateway: "0064:ff9b:0:0:0:0:9810:6414"
      - network: "0064:ff9b:0:0:0:0:9810:2814"
        gateway: "0064:ff9b:0:0:0:0:9810:2814"
Network Service Benchmarking - Standalone Virtualization
--------------------------------------------------------

SR-IOV
~~~~~~
SR-IOV Pre-requisites
+++++++++++++++++++++

On the host where the VM is created:

a) Create and configure a bridge named ``br-int`` for the VM to connect to the
   external network. Currently this can be done using a VXLAN tunnel.

   Execute the following on the host where the VM is created:

   .. code-block:: console

       ip link add type vxlan remote <Jumphost IP> local <DUT IP> id <ID: 10> dstport 4789
       brctl addbr br-int
       brctl addif br-int vxlan0
       ip link set dev vxlan0 up
       ip addr add <IP#1, like: 172.20.2.1/24> dev br-int
       ip link set dev br-int up

   .. note:: It may be necessary to add extra iptables rules to forward
      traffic:

   .. code-block:: console

       iptables -A FORWARD -i br-int -s <network ip address>/<netmask> -j ACCEPT
       iptables -A FORWARD -o br-int -d <network ip address>/<netmask> -j ACCEPT

   Execute the following on the jump host:

   .. code-block:: console

       ip link add type vxlan remote <DUT IP> local <Jumphost IP> id <ID: 10> dstport 4789
       ip addr add <IP#2, like: 172.20.2.2/24> dev vxlan0
       ip link set dev vxlan0 up

   .. note:: The host and the jump host are different bare-metal servers.

b) Modify the test case management CIDR.
   The IP addresses IP#1, IP#2 and the CIDR must be in the same network.
c) Build the guest image for the VNF to run.
   Most of the sample test cases in Yardstick use a guest image called
   ``yardstick-nsb-image``, which deviates from an Ubuntu Cloud Server image.
   Yardstick has a tool for building this custom image with SampleVNF.
   It is necessary to have ``sudo`` rights to use this tool.

   You may also need to install several additional packages to use this tool,
   by following the commands below::

       sudo apt-get update && sudo apt-get install -y qemu-utils kpartx

   This image can be built using the following commands in the directory where
   Yardstick is installed:

   .. code-block:: console

       export YARD_IMG_ARCH='amd64'
       echo 'Defaults env_keep += "YARD_IMG_ARCH"' | sudo tee -a /etc/sudoers
       sudo tools/yardstick-img-dpdk-modify tools/ubuntu-server-cloudimg-samplevnf-modify.sh

   Alternatively, use the ansible script to generate a cloud image; for more
   details refer to chapter :doc:`04-installation`.

   .. note:: The VM should be built with a static IP and should be accessible
      from the Yardstick host.
SR-IOV Config pod.yaml describing Topology
++++++++++++++++++++++++++++++++++++++++++

SR-IOV 2-Node setup
+++++++++++++++++++

.. code-block:: console

                               +--------------------+
                               |                    |
                               |        DUT         |
                               |       (VNF)        |
                               |                    |
                               +--------------------+
                               | VF NIC |  | VF NIC |
                               +--------+  +--------+
                                     ^          ^
                                     |          |
  +----------+               +-------------------------+
  |          |               |       |          |      |
  |          | (0)<----->(0) | ------           |      |
  |    TG1   |               |           SUT    |      |
  |          |               |                  |      |
  |          | (n)<----->(n) | ------------------     |
  +----------+               +-------------------------+
  trafficgen_1                          host

SR-IOV 3-Node setup - Correlated Traffic
++++++++++++++++++++++++++++++++++++++++

.. code-block:: console

                               +--------------------+
                               |                    |
                               |        DUT         |
                               |       (VNF)        |
                               |                    |
                               +--------------------+
                               | VF NIC |  | VF NIC |
                               +--------+  +--------+
                                     ^          ^
                                     |          |
  +----------+               +-------------------------+            +--------------+
  |          |               |       |          |      |            |              |
  |          | (0)<----->(0) | ------           |      |            |      TG2     |
  |    TG1   |               |           SUT    |      |            | (UDP Replay) |
  |          |               |                  |      |            |              |
  |          | (n)<----->(n) |                  ------ | (n)<-->(n) |              |
  +----------+               +-------------------------+            +--------------+
  trafficgen_1                          host                          trafficgen_2
Before executing Yardstick test cases, make sure that ``pod.yaml`` reflects the
topology and update all the required fields.

.. code-block:: console

    cp <yardstick>/etc/yardstick/nodes/standalone/trex_bm.yaml.sample /etc/yardstick/nodes/standalone/pod_trex.yaml
    cp <yardstick>/etc/yardstick/nodes/standalone/host_sriov.yaml /etc/yardstick/nodes/standalone/host_sriov.yaml

.. note:: Update all the required fields like ip, user, password, pcis, etc.
SR-IOV Config pod_trex.yaml
+++++++++++++++++++++++++++

.. code-block:: yaml

      key_filename: /root/.ssh/id_rsa
      interfaces:
        xe0:  # logical name from topology.yaml and vnfd.yaml
          driver: i40e  # default kernel driver
          local_ip: "152.16.100.20"
          netmask: "255.255.255.0"
          local_mac: "00:00:00:00:00:01"
        xe1:  # logical name from topology.yaml and vnfd.yaml
          driver: i40e  # default kernel driver
          local_ip: "152.16.40.20"
          netmask: "255.255.255.0"
          local_mac: "00:00:00:00:00:02"
SR-IOV Config host_sriov.yaml
+++++++++++++++++++++++++++++
SR-IOV testcase update:
``<yardstick>/samples/vnf_samples/nsut/vfw/tc_sriov_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``

Update "contexts" section
'''''''''''''''''''''''''

.. code-block:: yaml

  contexts:
  - name: yardstick
    type: Node
    file: /etc/yardstick/nodes/standalone/pod_trex.yaml
  - type: StandaloneSriov
    file: /etc/yardstick/nodes/standalone/host_sriov.yaml
    name: yardstick
    vm_deploy: True
    flavor:
      images: "/var/lib/libvirt/images/ubuntu.qcow2"
      user: ""      # update VM username
      password: ""  # update password
    servers:
      vnf:
        network_ports:
          mgmt:
            cidr: '1.1.1.61/24'  # Update VM IP address, if static, <ip>/<mask> or if dynamic, <start of ip>/<mask>
    networks:
      uplink_0:
        phy_port: "0000:05:00.0"
        cidr: '152.16.100.10/24'
        gateway_ip: '152.16.100.20'
      downlink_0:
        phy_port: "0000:05:00.1"
        cidr: '152.16.40.10/24'
        gateway_ip: '152.16.100.20'
OVS-DPDK Pre-requisites
~~~~~~~~~~~~~~~~~~~~~~~

On the host where the VM is created:

a) Create and configure a bridge named ``br-int`` for the VM to connect to the
   external network. Currently this can be done using a VXLAN tunnel.

   Execute the following on the host where the VM is created:

   .. code-block:: console

       ip link add type vxlan remote <Jumphost IP> local <DUT IP> id <ID: 10> dstport 4789
       brctl addbr br-int
       brctl addif br-int vxlan0
       ip link set dev vxlan0 up
       ip addr add <IP#1, like: 172.20.2.1/24> dev br-int
       ip link set dev br-int up

   .. note:: It may be necessary to add extra iptables rules to forward
      traffic:

   .. code-block:: console

       iptables -A FORWARD -i br-int -s <network ip address>/<netmask> -j ACCEPT
       iptables -A FORWARD -o br-int -d <network ip address>/<netmask> -j ACCEPT

   Execute the following on the jump host:

   .. code-block:: console

       ip link add type vxlan remote <DUT IP> local <Jumphost IP> id <ID: 10> dstport 4789
       ip addr add <IP#2, like: 172.20.2.2/24> dev vxlan0
       ip link set dev vxlan0 up

   .. note:: The host and the jump host are different bare-metal servers.

b) Modify the test case management CIDR.
   The IP addresses IP#1, IP#2 and the CIDR must be in the same network.

c) Build the guest image for the VNF to run.
   Most of the sample test cases in Yardstick use a guest image called
   ``yardstick-nsb-image``, which deviates from an Ubuntu Cloud Server image.
   Yardstick has a tool for building this custom image with SampleVNF.
   It is necessary to have ``sudo`` rights to use this tool.

   You may also need to install several additional packages to use this tool,
   by following the commands below::

       sudo apt-get update && sudo apt-get install -y qemu-utils kpartx

   This image can be built using the following commands in the directory where
   Yardstick is installed::

       export YARD_IMG_ARCH='amd64'
       echo 'Defaults env_keep += "YARD_IMG_ARCH"' | sudo tee -a /etc/sudoers
       sudo tools/yardstick-img-dpdk-modify tools/ubuntu-server-cloudimg-samplevnf-modify.sh

   For more details refer to chapter :doc:`04-installation`.

   .. note:: The VM should be built with a static IP and should be accessible
      from the Yardstick host.

d) OVS & DPDK version:

   - OVS 2.7 and DPDK 16.11.1 and above are supported.

e) Set up OVS-DPDK on the host.
   Please refer to the `OVS-DPDK installation guide
   <http://docs.openvswitch.org/en/latest/intro/install/dpdk/>`_ on how to set
   it up.
OVS-DPDK Config pod.yaml describing Topology
++++++++++++++++++++++++++++++++++++++++++++

OVS-DPDK 2-Node setup
+++++++++++++++++++++

.. code-block:: console

                               +--------------------+
                               |                    |
                               |        DUT         |
                               |       (VNF)        |
                               |                    |
                               +--------------------+
                               | virtio |  | virtio |
                               +--------+  +--------+
                                     ^          ^
                                     |          |
                               +--------+  +--------+
                               | vHOST0 |  | vHOST1 |
  +----------+               +-------------------------+
  |          |               |       |          |      |
  |          | (0)<----->(0) | ------           |      |
  |    TG1   |               |          SUT     |      |
  |          |               |       (ovs-dpdk) |      |
  |          | (n)<----->(n) | ------------------     |
  +----------+               +-------------------------+
  trafficgen_1                          host

OVS-DPDK 3-Node setup - Correlated Traffic
++++++++++++++++++++++++++++++++++++++++++

.. code-block:: console

                               +--------------------+
                               |                    |
                               |        DUT         |
                               |       (VNF)        |
                               |                    |
                               +--------------------+
                               | virtio |  | virtio |
                               +--------+  +--------+
                                     ^          ^
                                     |          |
                               +--------+  +--------+
                               | vHOST0 |  | vHOST1 |
  +----------+               +-------------------------+          +------------+
  |          |               |       |          |      |          |            |
  |          | (0)<----->(0) | ------           |      |          |    TG2     |
  |    TG1   |               |          SUT     |      |          |(UDP Replay)|
  |          |               |       (ovs-dpdk) |      |          |            |
  |          | (n)<----->(n) |                  ------ |(n)<-->(n)|            |
  +----------+               +-------------------------+          +------------+
  trafficgen_1                          host                       trafficgen_2
Before executing Yardstick test cases, make sure that ``pod.yaml`` reflects the
topology and update all the required fields.

.. code-block:: console

    cp <yardstick>/etc/yardstick/nodes/standalone/trex_bm.yaml.sample /etc/yardstick/nodes/standalone/pod_trex.yaml
    cp <yardstick>/etc/yardstick/nodes/standalone/host_ovs.yaml /etc/yardstick/nodes/standalone/host_ovs.yaml

.. note:: Update all the required fields like ip, user, password, pcis, etc.
OVS-DPDK Config pod_trex.yaml
+++++++++++++++++++++++++++++

.. code-block:: yaml

      interfaces:
        xe0:  # logical name from topology.yaml and vnfd.yaml
          driver: i40e  # default kernel driver
          local_ip: "152.16.100.20"
          netmask: "255.255.255.0"
          local_mac: "00:00:00:00:00:01"
        xe1:  # logical name from topology.yaml and vnfd.yaml
          driver: i40e  # default kernel driver
          local_ip: "152.16.40.20"
          netmask: "255.255.255.0"
          local_mac: "00:00:00:00:00:02"
OVS-DPDK Config host_ovs.yaml
+++++++++++++++++++++++++++++
ovs_dpdk testcase update:
``<yardstick>/samples/vnf_samples/nsut/vfw/tc_ovs_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``

Update "contexts" section
'''''''''''''''''''''''''

.. code-block:: yaml

  contexts:
  - name: yardstick
    type: Node
    file: /etc/yardstick/nodes/standalone/pod_trex.yaml
  - type: StandaloneOvsDpdk
    name: yardstick
    file: /etc/yardstick/nodes/standalone/pod_ovs.yaml
    vm_deploy: True
    flavor:
      images: "/var/lib/libvirt/images/ubuntu.qcow2"
      user: ""      # update VM username
      password: ""  # update password
    servers:
      vnf:
        network_ports:
          mgmt:
            cidr: '1.1.1.61/24'  # Update VM IP address, if static, <ip>/<mask> or if dynamic, <start of ip>/<mask>
    networks:
      uplink_0:
        phy_port: "0000:05:00.0"
        cidr: '152.16.100.10/24'
        gateway_ip: '152.16.100.20'
      downlink_0:
        phy_port: "0000:05:00.1"
        cidr: '152.16.40.10/24'
        gateway_ip: '152.16.100.20'
Network Service Benchmarking - OpenStack with SR-IOV support
------------------------------------------------------------

This section describes how to run a Sample VNF test case, using Heat context,
with SR-IOV. It also covers how to install OpenStack in Ubuntu 16.04, using
DevStack, with SR-IOV support.

Single node OpenStack setup with external TG
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. code-block:: console

                                 +----------------------------+
                                 |OpenStack(DevStack)         |
                                 |                            |
                                 |   +--------------------+   |
                                 |   |sample-VNF VM       |   |
                                 |   |                    |   |
                                 |   |        DUT         |   |
                                 |   |       (VNF)        |   |
                                 |   |                    |   |
                                 |   +--------+  +--------+   |
                                 |   | VF NIC |  | VF NIC |   |
                                 |   +-----+--+--+----+---+   |
                                 |         |          |       |
  +----------+                   +---------+----------+-------+
  |          |                   |        VF0        VF1      |
  |          |                   |         |          |       |
  |          |                   |         |   SUT    |       |
  |    TG    | (PF0)<----->(PF0) +---------+          |       |
  |          |                   |                    |       |
  |          | (PF1)<----->(PF1) +--------------------+       |
  +----------+                   +----------------------------+
  trafficgen_1                                host
Host pre-configuration
++++++++++++++++++++++

.. warning:: The following configuration requires sudo access to the system.
   Make sure that your user has the access.

Enable the Intel VT-d or AMD-Vi extension in the BIOS. Some system
manufacturers disable this extension by default.

Activate the Intel VT-d or AMD-Vi extension in the kernel by modifying the GRUB
config file ``/etc/default/grub``.

For the Intel platform:

.. code-block:: console

    GRUB_CMDLINE_LINUX_DEFAULT="intel_iommu=on"

For the AMD platform:

.. code-block:: console

    GRUB_CMDLINE_LINUX_DEFAULT="amd_iommu=on"

Update the grub configuration file and restart the system:

.. warning:: The following command will reboot the system.

.. code-block:: console

    sudo update-grub
    sudo reboot
Make sure the extension has been enabled:

.. code-block:: console

    sudo journalctl -b 0 | grep -e IOMMU -e DMAR

    Feb 06 14:50:14 hostname kernel: ACPI: DMAR 0x000000006C406000 0001E0 (v01 INTEL S2600WF 00000001 INTL 20091013)
    Feb 06 14:50:14 hostname kernel: DMAR: IOMMU enabled
    Feb 06 14:50:14 hostname kernel: DMAR: Host address width 46
    Feb 06 14:50:14 hostname kernel: DMAR: DRHD base: 0x000000d37fc000 flags: 0x0
    Feb 06 14:50:14 hostname kernel: DMAR: dmar0: reg_base_addr d37fc000 ver 1:0 cap 8d2078c106f0466 ecap f020de
    Feb 06 14:50:14 hostname kernel: DMAR: DRHD base: 0x000000e0ffc000 flags: 0x0
    Feb 06 14:50:14 hostname kernel: DMAR: dmar1: reg_base_addr e0ffc000 ver 1:0 cap 8d2078c106f0466 ecap f020de
    Feb 06 14:50:14 hostname kernel: DMAR: DRHD base: 0x000000ee7fc000 flags: 0x0
Setup the system proxy (if needed). Add the following configuration into the
``/etc/environment`` file:

.. note:: The proxy server name/port and IPs should be changed according to
   actual/current proxy configuration in the lab.

.. code-block:: console

    export http_proxy=http://proxy.company.com:port
    export https_proxy=http://proxy.company.com:port
    export ftp_proxy=http://proxy.company.com:port
    export no_proxy=localhost,127.0.0.1,company.com,<IP-OF-HOST1>,<IP-OF-HOST2>,...
    export NO_PROXY=localhost,127.0.0.1,company.com,<IP-OF-HOST1>,<IP-OF-HOST2>,...

Update and upgrade the system:

.. code-block:: console

    sudo -EH apt-get update
    sudo -EH apt-get upgrade
    sudo -EH apt-get dist-upgrade

Install the dependencies needed for DevStack:

.. code-block:: console

    sudo -EH apt-get install python
    sudo -EH apt-get install python-dev
    sudo -EH apt-get install python-pip
Setup SR-IOV ports on the host:

.. note:: The ``enp24s0f0``, ``enp24s0f1`` are physical function (PF)
   interfaces on a host and ``enp24s0f3`` is a public interface used in
   OpenStack, so the interface names should be changed according to the HW
   environment used for testing.

.. code-block:: console

    sudo ip link set dev enp24s0f0 up
    sudo ip link set dev enp24s0f1 up
    sudo ip link set dev enp24s0f3 up

    # Create VFs on PF
    echo 2 | sudo tee /sys/class/net/enp24s0f0/device/sriov_numvfs
    echo 2 | sudo tee /sys/class/net/enp24s0f1/device/sriov_numvfs
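The VF creation step can be wrapped in a small helper that verifies the kernel
accepted the requested count; a sketch parameterised on the sysfs device path
(so it can be exercised against a fake sysfs tree without SR-IOV hardware):

```shell
# Sketch: write the VF count to a PF's sriov_numvfs and confirm it stuck.
create_vfs() {
    dev_sysfs=$1   # e.g. /sys/class/net/enp24s0f0
    num_vfs=$2     # e.g. 2
    echo "$num_vfs" > "$dev_sysfs/device/sriov_numvfs"
    [ "$(cat "$dev_sysfs/device/sriov_numvfs")" = "$num_vfs" ]
}
# On a real host (needs root):
# create_vfs /sys/class/net/enp24s0f0 2
# create_vfs /sys/class/net/enp24s0f1 2
```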
DevStack installation
+++++++++++++++++++++

Use the official `Devstack <https://docs.openstack.org/devstack/pike/>`_
documentation to install OpenStack on a host. Please note that the stable
``pike`` branch of the devstack repo should be used during the installation.
The required ``local.conf`` configuration file is described below.

DevStack configuration file:

.. note:: Update the devstack configuration file by replacing angular brackets
   with a short description inside.

.. note:: Use the ``lspci | grep Ether`` & ``lspci -n | grep <PCI ADDRESS>``
   commands to get the device and vendor id of the virtual function (VF).

.. literalinclude:: code/single-devstack-local.conf

Start the devstack installation on the host.

TG host configuration
+++++++++++++++++++++

Yardstick automatically installs and configures the Trex traffic generator on
the TG host based on the provided POD file (see below). It is nevertheless
recommended to check the compatibility of the NIC installed on the TG server
with the Trex software, using the manual at
https://trex-tgn.cisco.com/trex/doc/trex_manual.html.
Run the Sample VNF test case
++++++++++++++++++++++++++++

There is an example of a Sample VNF test case ready to be executed in an
OpenStack environment with SR-IOV support: ``samples/vnf_samples/nsut/vfw/
tc_heat_sriov_external_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``.

Install Yardstick using the `Install Yardstick (NSB Testing)`_ steps for the
OpenStack context.

Create the pod file for the TG in the yardstick repo folder located in the
yardstick container:

.. note:: The ``ip``, ``user``, ``password`` and ``vpci`` fields should be
   changed according to the HW environment used for the testing. Use the
   ``lshw -c network -businfo`` command to get the PF PCI address for the
   ``vpci`` field.

.. literalinclude:: code/single-yardstick-pod.conf

Run the Sample vFW RFC2544 SR-IOV TC (``samples/vnf_samples/nsut/vfw/
tc_heat_sriov_external_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``) in the heat
context using the steps described in the `NS testing - using yardstick CLI`_
section.
Multi node OpenStack TG and VNF setup (two nodes)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. code-block:: console

  +----------------------------+                  +----------------------------+
  |OpenStack(DevStack)         |                  |OpenStack(DevStack)         |
  |                            |                  |                            |
  |   +--------------------+   |                  |   +--------------------+   |
  |   |sample-VNF VM       |   |                  |   |sample-VNF VM       |   |
  |   |                    |   |                  |   |                    |   |
  |   |         TG         |   |                  |   |        DUT         |   |
  |   |    trafficgen_1    |   |                  |   |       (VNF)        |   |
  |   |                    |   |                  |   |                    |   |
  |   +--------+  +--------+   |                  |   +--------+  +--------+   |
  |   | VF NIC |  | VF NIC |   |                  |   | VF NIC |  | VF NIC |   |
  |   +----+---+--+----+---+   |                  |   +-----+--+--+----+---+   |
  |        |          |        |                  |         |         |        |
  +--------+----------+--------+                  +---------+---------+--------+
  |       VF0        VF1       |                  |        VF0       VF1       |
  |        |          |        |                  |         |         |        |
  |        |   SUT2   |        |                  |         |   SUT1  |        |
  |        |          +-------+ (PF0)<----->(PF0) +---------+         |        |
  |        |                   |                  |                   |        |
  |        +-------------------+ (PF1)<----->(PF1) +------------------+        |
  +----------------------------+                  +----------------------------+
  host2 (compute)                                      host1 (controller)
Controller/Compute pre-configuration
++++++++++++++++++++++++++++++++++++

The pre-configuration of the controller and compute hosts is the same as
described in the `Host pre-configuration`_ section. Follow the steps in that
section.

DevStack configuration
++++++++++++++++++++++

Use the official `Devstack <https://docs.openstack.org/devstack/pike/>`_
documentation to install OpenStack on a host. Please note that the stable
``pike`` branch of the devstack repo should be used during the installation.
The required ``local.conf`` configuration files are described below.

.. note:: Update the devstack configuration files by replacing angular
   brackets with a short description inside.

.. note:: Use the ``lspci | grep Ether`` & ``lspci -n | grep <PCI ADDRESS>``
   commands to get the device and vendor id of the virtual function (VF).

DevStack configuration file for the controller host:

.. literalinclude:: code/multi-devstack-controller-local.conf

DevStack configuration file for the compute host:

.. literalinclude:: code/multi-devstack-compute-local.conf

Start the devstack installation on the controller and compute hosts.

Run the sample vFW TC
+++++++++++++++++++++

Install Yardstick using the `Install Yardstick (NSB Testing)`_ steps for the
OpenStack context.

Run the sample vFW RFC2544 SR-IOV TC (``samples/vnf_samples/nsut/vfw/
tc_heat_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``) in the heat context using the
steps described in the `NS testing - using yardstick CLI`_ section and the
following yardstick command line arguments:

.. code-block:: console

    yardstick -d task start --task-args='{"provider": "sriov"}' \
    samples/vnf_samples/nsut/vfw/tc_heat_rfc2544_ipv4_1rule_1flow_64B_trex.yaml
Enabling other Traffic generators
---------------------------------

IxLoad
~~~~~~

1. Software needed: IxLoadAPI ``<IxLoadTclApi version>Linux64.bin.tgz`` and
   ``<IxOS version>Linux64.bin.tar.gz`` (download from the Ixia support site).
   Install ``<IxLoadTclApi version>Linux64.bin.tgz`` and
   ``<IxOS version>Linux64.bin.tar.gz``.
   If the installation was not done inside the container, after installing
   the IXIA client, check ``/opt/ixia/ixload/<ver>/bin/ixloadpython`` and make
   sure you can run this cmd inside the yardstick container. Usually the user
   is required to copy or link ``/opt/ixia/python/<ver>/bin/ixiapython`` to
   ``/usr/bin/ixiapython<ver>`` inside the container.

2. Update the ``pod_ixia.yaml`` file with the ixia details.

   .. code-block:: console

       cp <repo>/etc/yardstick/nodes/pod.yaml.nsb.sample.ixia etc/yardstick/nodes/pod_ixia.yaml

   Config ``pod_ixia.yaml``:

   .. literalinclude:: code/pod_ixia.yaml

   For sriov/ovs_dpdk pod files, please refer to the Standalone Virtualization
   section above for the ovs-dpdk/sriov configuration.

3. Start the IxOS TCL Server (install 'Ixia IxExplorer IxOS <version>').
   You will also need to configure the IxLoad machine to start the IXIA
   IxosTclServer. This can be started like so:

   * Connect to the IxLoad machine using RDP
   * Go to:
     ``Start->Programs->Ixia->IxOS->IxOS 8.01-GA-Patch1->Ixia Tcl Server IxOS 8.01-GA-Patch1``
     or
     ``"C:\Program Files (x86)\Ixia\IxOS\8.01-GA-Patch1\ixTclServer.exe"``

4. Create a folder ``Results`` in c:\ and share the folder on the network.

5. Execute the testcase in the samplevnf folder, e.g.
   ``<repo>/samples/vnf_samples/nsut/vfw/tc_baremetal_http_ixload_1b_Requests-65000_Concurrency.yaml``
IxNetwork
~~~~~~~~~

IxNetwork testcases use the IxNetwork API Python Bindings module, which is
installed as part of the requirements of the project.

1. Update the ``pod_ixia.yaml`` file with the ixia details.

   .. code-block:: console

       cp <repo>/etc/yardstick/nodes/pod.yaml.nsb.sample.ixia etc/yardstick/nodes/pod_ixia.yaml

   Config ``pod_ixia.yaml``:

   .. literalinclude:: code/pod_ixia.yaml

   For sriov/ovs_dpdk pod files, please refer to the Standalone Virtualization
   section above for the ovs-dpdk/sriov configuration.

2. Start the IxNetwork TCL Server.
   You will also need to configure the IxNetwork machine to start the IXIA
   IxNetworkTclServer. This can be started like so:

   * Connect to the IxNetwork machine using RDP
   * Go to:
     ``Start->Programs->Ixia->IxNetwork->IxNetwork 7.21.893.14 GA->IxNetworkTclServer``
     (or ``IxNetworkApiServer``)

3. Execute the testcase in the samplevnf folder, e.g.
   ``<repo>/samples/vnf_samples/nsut/vfw/tc_baremetal_rfc2544_ipv4_1rule_1flow_64B_ixia.yaml``
Spirent Landslide
~~~~~~~~~~~~~~~~~

In order to use Spirent Landslide for vEPC testcases, some dependencies have
to be preinstalled and properly configured.

- Java

  A 32-bit Java installation is required for the Spirent Landslide TCL API:

  ``$ sudo apt-get install openjdk-8-jdk:i386``

  .. note:: Make sure ``LD_LIBRARY_PATH`` is pointing to the 32-bit JRE. For
     more details check the `Linux Troubleshooting
     <http://TAS_HOST_IP/tclapiinstall.html#trouble>`_ section of the
     installation instructions.

- LsApi (Tcl API module)

  Follow the Landslide documentation for detailed instructions on the Linux
  installation of the Tcl API and its dependencies:
  ``http://TAS_HOST_IP/tclapiinstall.html``.
  For working with the LsApi Python wrapper only steps 1-5 are required.

  .. note:: After installation make sure your API home path is included in the
     ``PYTHONPATH`` environment variable.

  .. note:: The current version of the LsApi module has an issue with reading
     ``LD_LIBRARY_PATH``. For the LsApi module to initialize correctly, the
     following lines (184-186) in ``lsapi.py``:

  .. code-block:: python

      ldpath = os.environ.get('LD_LIBRARY_PATH', '')
      if ldpath == '':
          environ['LD_LIBRARY_PATH'] = environ['LD_LIBRARY_PATH'] + ':' + ldpath

  should be changed to:

  .. code-block:: python

      ldpath = os.environ.get('LD_LIBRARY_PATH', '')
      if not ldpath == '':
          environ['LD_LIBRARY_PATH'] = environ['LD_LIBRARY_PATH'] + ':' + ldpath

.. note:: The Spirent Landslide TCL software package needs to be updated in
   case the user upgrades to a new version of the Spirent Landslide software.