.. This work is licensed under a Creative Commons Attribution 4.0 International
.. License.
.. http://creativecommons.org/licenses/by/4.0
.. (c) OPNFV, 2016-2017 Intel Corporation.
=======================================
Yardstick - NSB Testing - Installation
=======================================
The Network Service Benchmarking (NSB) extends the Yardstick framework to
perform VNF characterization and benchmarking in three different execution
environments: bare metal (i.e. native Linux environment), standalone virtual
environment, and managed virtualized environment (e.g. OpenStack).
It also adds the capability to interact with external traffic generators,
both hardware- and software-based, for triggering and validating traffic
according to user-defined profiles.
The steps needed to run Yardstick with NSB testing are:

* Install Yardstick (NSB testing).
* Set up or reference a ``pod.yaml`` file describing the test topology.
* Create or reference the test configuration YAML file.
Refer to the Yardstick Installation chapter for more information on Yardstick.
Several prerequisites are needed for Yardstick (VNF testing):

* Python modules: pyzmq, pika.
Hardware & Software Ingredients
-------------------------------

======= ===================
kernel  4.4.0-34-generic
======= ===================
Boot and BIOS settings:

============= =================================================
Boot settings default_hugepagesz=1G hugepagesz=1G hugepages=16
              hugepagesz=2M hugepages=2048 isolcpus=1-11,22-33
              nohz_full=1-11,22-33 rcu_nocbs=1-11,22-33
              iommu=on iommu=pt intel_iommu=on
              Note: nohz_full and rcu_nocbs are used to disable
              Linux kernel interrupts

BIOS          CPU Power and Performance Policy <Performance>
              Enhanced Intel® SpeedStep® Tech Disabled
              Hyper-Threading Technology (If supported) Enabled
              Virtualization Technology Enabled
              Intel(R) VT for Direct I/O Enabled
============= =================================================
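The hugepage parameters in the sample boot line above reserve memory at boot:
16 pages of 1 GiB plus 2048 pages of 2 MiB. A quick arithmetic check of the
total reservation (shell arithmetic only, no hardware access; a sketch, not
part of the installation itself):

.. code-block:: console

   # total memory reserved by the sample hugepage boot parameters:
   # 16 x 1 GiB pages + 2048 x 2 MiB pages
   echo "$(( 16 * 1024 + 2048 * 2 )) MiB"

On a booted system, the actual reservation can be confirmed with
``grep Huge /proc/meminfo``.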
Install Yardstick (NSB Testing)
===============================

Download the source code and install Yardstick from it:
.. code-block:: console

  git clone https://gerrit.opnfv.org/gerrit/yardstick

  # Switch to latest stable branch
  # git checkout <tag or stable branch>
  git checkout stable/euphrates
Configure the network proxy, either using the environment variables or setting
the global environment file:

.. code-block:: ini

  http_proxy='http://proxy.company.com:port'
  https_proxy='http://proxy.company.com:port'

.. code-block:: console

  export http_proxy='http://proxy.company.com:port'
  export https_proxy='http://proxy.company.com:port'
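To confirm that a proxy variable is actually exported to child processes, it
can be read back through ``env`` (the URL and port below are placeholders, as
above):

.. code-block:: console

   export http_proxy='http://proxy.company.com:8080'
   # child processes (such as env) must see the variable
   env | grep '^http_proxy='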
The last step is to modify the Yardstick installation inventory, used by
Ansible:

.. code-block:: console

  cat ./ansible/install-inventory.ini

  localhost ansible_connection=local

  [yardstick-standalone]
  yardstick-standalone-node ansible_host=192.168.1.2
  yardstick-standalone-node-2 ansible_host=192.168.1.3

  # section below is only due to backward compatibility.
  # it will be removed later
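The hosts Ansible will target can be listed from a given inventory section; a
small sketch against a scratch copy of the fragment above (plain ``awk``, no
Ansible required):

.. code-block:: console

   # scratch inventory mirroring the fragment above
   cat > /tmp/install-inventory.ini <<'EOF'
   localhost ansible_connection=local

   [yardstick-standalone]
   yardstick-standalone-node ansible_host=192.168.1.2
   yardstick-standalone-node-2 ansible_host=192.168.1.3
   EOF

   # print the host names defined in the [yardstick-standalone] section
   awk '/^\[yardstick-standalone\]/{f=1;next} /^\[/{f=0} f&&NF{print $1}' \
       /tmp/install-inventory.ini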
SSH access without password needs to be configured for all your nodes defined
in the ``install-inventory.ini`` file.
If you want to use password authentication, you need to install ``sshpass``:

.. code-block:: console

  sudo -EH apt-get install sshpass
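For the password-less alternative, generate a key pair and copy the public key
to every node in the inventory. A minimal non-interactive sketch (the scratch
directory and the target host ``192.168.1.2`` are examples):

.. code-block:: console

   # generate an RSA key pair non-interactively into a scratch directory
   d=$(mktemp -d)
   ssh-keygen -q -t rsa -N "" -f "$d/id_rsa"
   ls "$d"

   # then push the public key to each node, e.g.:
   # ssh-copy-id -i "$d/id_rsa.pub" root@192.168.1.2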
To execute an installation for a Bare-Metal or a Standalone context:

.. code-block:: console

  ./nsb_setup.sh
To execute an installation for an OpenStack context:

.. code-block:: console

  ./nsb_setup.sh <path to admin-openrc.sh>

The above command sets up a Docker container with the latest Yardstick code.
To execute into the container:

.. code-block:: console

  docker exec -it yardstick bash

It will also automatically download all the packages needed for NSB testing
setup. Refer to the **Install Yardstick using Docker (recommended)** section of
chapter :doc:`04-installation` for more on Docker.
.. code-block:: console

  +----------+ +----------+
  +----------+ +----------+
Environment parameters and credentials
======================================

Config yardstick conf
---------------------
If the user did not run ``yardstick env influxdb`` inside the container, which
generates the correct ``yardstick.conf``, then create the config file manually
(run inside the container):

.. code-block:: console

  cp ./etc/yardstick/yardstick.conf.sample /etc/yardstick/yardstick.conf
  vi /etc/yardstick/yardstick.conf
Add ``trex_path``, ``trex_client_lib`` and ``bin_path`` to the ``nsb`` section:
.. code-block:: ini

  [DEFAULT]
  dispatcher = file, influxdb

  [dispatcher_influxdb]
  target = http://{YOUR_IP_HERE}:8086

  [nsb]
  trex_path=/opt/nsb_bin/trex/scripts
  bin_path=/opt/nsb_bin
  trex_client_lib=/opt/nsb_bin/trex_client/stl
Run Yardstick - Network Service Testcases
=========================================

NS testing - using yardstick CLI
--------------------------------
See :doc:`04-installation`.

.. code-block:: console

  docker exec -it yardstick /bin/bash
  source /etc/yardstick/openstack.creds (only for heat TC if nsb_setup.sh was NOT used)
  export EXTERNAL_NETWORK="<openstack public network>" (only for heat TC)
  yardstick --debug task start yardstick/samples/vnf_samples/nsut/<vnf>/<test case>
Network Service Benchmarking - Bare-Metal
=========================================

Bare-Metal Config pod.yaml describing Topology
----------------------------------------------

Bare-Metal 2-Node setup
^^^^^^^^^^^^^^^^^^^^^^^

.. code-block:: console

  +----------+ +----------+
  +----------+ +----------+
Bare-Metal 3-Node setup - Correlated Traffic
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. code-block:: console

  +----------+              +----------+            +------------+
  |          | (0)----->(0) |          |            |    UDP     |
  |   TG1    |              |   DUT    |            |   Replay   |
  |          |              |          |(1)<---->(0)|            |
  +----------+              +----------+            +------------+
  trafficgen_1                  vnf                  trafficgen_2
Bare-Metal Config pod.yaml
--------------------------

Before executing Yardstick test cases, make sure that ``pod.yaml`` reflects the
topology and update all the required fields::

  cp /etc/yardstick/nodes/pod.yaml.nsb.sample /etc/yardstick/nodes/pod.yaml
  xe0:  # logical name from topology.yaml and vnfd.yaml
    driver: i40e  # default kernel driver
    local_ip: "152.16.100.20"
    netmask: "255.255.255.0"
    local_mac: "00:00:00:00:00:01"
  xe1:  # logical name from topology.yaml and vnfd.yaml
    driver: i40e  # default kernel driver
    local_ip: "152.16.40.20"
    netmask: "255.255.255.0"
    local_mac: "00:00:00:00:00:02"

  host: 1.1.1.2  # BM - host == ip, virtualized env - Host - compute node

  xe0:  # logical name from topology.yaml and vnfd.yaml
    driver: i40e  # default kernel driver
    local_ip: "152.16.100.19"
    netmask: "255.255.255.0"
    local_mac: "00:00:00:00:00:03"
  xe1:  # logical name from topology.yaml and vnfd.yaml
    driver: i40e  # default kernel driver
    local_ip: "152.16.40.19"
    netmask: "255.255.255.0"
    local_mac: "00:00:00:00:00:04"

  - network: "152.16.100.20"
    netmask: "255.255.255.0"
    gateway: "152.16.100.20"
  - network: "152.16.40.20"
    netmask: "255.255.255.0"
    gateway: "152.16.40.20"

  - network: "0064:ff9b:0:0:0:0:9810:6414"
    gateway: "0064:ff9b:0:0:0:0:9810:6414"
  - network: "0064:ff9b:0:0:0:0:9810:2814"
    gateway: "0064:ff9b:0:0:0:0:9810:2814"
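The IPv6 entries above are written in fully expanded form; they embed the IPv4
interface addresses behind the NAT64 well-known prefix ``64:ff9b::/96`` (for
example, ``9810:6414`` hex is ``152.16.100.20``). The canonical compressed form
can be checked with Python's stdlib ``ipaddress`` module (``python3`` assumed
available; a sketch, not an installation step):

.. code-block:: console

   # compress the expanded IPv6 address to its canonical form
   python3 -c "import ipaddress; print(ipaddress.ip_address('0064:ff9b:0:0:0:0:9810:6414'))"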
Network Service Benchmarking - Standalone Virtualization
========================================================

SR-IOV Pre-requisites
^^^^^^^^^^^^^^^^^^^^^
On the host where the VM is created:

a) Create and configure a bridge named ``br-int`` for the VM to connect to the
   external network. Currently this can be done using a VXLAN tunnel.

   Execute the following on the host where the VM is created:

   .. code-block:: console

      ip link add type vxlan remote <Jumphost IP> local <DUT IP> id <ID: 10> dstport 4789
      brctl addif br-int vxlan0
      ip link set dev vxlan0 up
      ip addr add <IP#1, like: 172.20.2.1/24> dev br-int
      ip link set dev br-int up
   .. note:: Extra ``iptables`` rules may need to be added to forward traffic.

   .. code-block:: console

      iptables -A FORWARD -i br-int -s <network ip address>/<netmask> -j ACCEPT
      iptables -A FORWARD -o br-int -d <network ip address>/<netmask> -j ACCEPT
   Execute the following on the jump host:

   .. code-block:: console

      ip link add type vxlan remote <DUT IP> local <Jumphost IP> id <ID: 10> dstport 4789
      ip addr add <IP#2, like: 172.20.2.2/24> dev vxlan0
      ip link set dev vxlan0 up

   .. note:: The host and the jump host are different bare-metal servers.
b) Modify the test case management CIDR.
   IP addresses IP#1, IP#2 and the CIDR must be in the same network.
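   A quick sanity check that two management addresses, such as IP#1 and IP#2
   above, share the same network; a minimal sketch that assumes a /24 mask:

   .. code-block:: console

      # compare everything before the last dot (valid for a /24 mask only)
      same_net24() { [ "${1%.*}" = "${2%.*}" ]; }
      same_net24 172.20.2.1 172.20.2.2 && echo "same /24 network"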
c) Build the guest image for the VNF to run.
   Most of the sample test cases in Yardstick use a guest image called
   ``yardstick-nsb-image``, which deviates from an Ubuntu Cloud Server image.
   Yardstick has a tool for building this custom image with SampleVNF.
   It is necessary to have ``sudo`` rights to use this tool.

   You may also need to install several additional packages to use this tool,
   by following the commands below::

      sudo apt-get update && sudo apt-get install -y qemu-utils kpartx

   This image can be built using the following commands in the directory where
   Yardstick is installed:

   .. code-block:: console

      export YARD_IMG_ARCH='amd64'
      echo "Defaults env_keep += \"YARD_IMG_ARCH\"" | sudo tee -a /etc/sudoers

   Please use the ansible script to generate a cloud image; for more details
   refer to chapter :doc:`04-installation`.

.. note:: The VM should be built with a static IP and should be accessible
   from the yardstick host.
SR-IOV Config pod.yaml describing Topology
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. code-block:: console

  +--------------------+
  +--------------------+
  | VF NIC |  | VF NIC |
  +--------+  +--------+

  +----------+ +-------------------------+
  | | (0)<----->(0) | ------ | |
  | | (n)<----->(n) |------------------ |
  +----------+ +-------------------------+
SR-IOV 3-Node setup - Correlated Traffic
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. code-block:: console

  +--------------------+
  +--------------------+
  | VF NIC |  | VF NIC |
  +--------+  +--------+

  +----------+ +-------------------------+ +--------------+
  | | (0)<----->(0) | ------ | | | TG2 |
  | TG1 | | SUT | | | (UDP Replay) |
  | | (n)<----->(n) | ------ | (n)<-->(n) | |
  +----------+ +-------------------------+ +--------------+
  trafficgen_1 host trafficgen_2
Before executing Yardstick test cases, make sure that ``pod.yaml`` reflects the
topology and update all the required fields.

.. code-block:: console

  cp <yardstick>/etc/yardstick/nodes/standalone/trex_bm.yaml.sample /etc/yardstick/nodes/standalone/pod_trex.yaml
  cp <yardstick>/etc/yardstick/nodes/standalone/host_sriov.yaml /etc/yardstick/nodes/standalone/host_sriov.yaml

.. note:: Update all the required fields like ip, user, password, pcis, etc.
SR-IOV Config pod_trex.yaml
^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. code-block:: yaml

  key_filename: /root/.ssh/id_rsa
  xe0:  # logical name from topology.yaml and vnfd.yaml
    driver: i40e  # default kernel driver
    local_ip: "152.16.100.20"
    netmask: "255.255.255.0"
    local_mac: "00:00:00:00:00:01"
  xe1:  # logical name from topology.yaml and vnfd.yaml
    driver: i40e  # default kernel driver
    local_ip: "152.16.40.20"
    netmask: "255.255.255.0"
    local_mac: "00:00:00:00:00:02"
SR-IOV Config host_sriov.yaml
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

SR-IOV testcase update:
``<yardstick>/samples/vnf_samples/nsut/vfw/tc_sriov_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``

Update "contexts" section
"""""""""""""""""""""""""
.. code-block:: yaml

     file: /etc/yardstick/nodes/standalone/pod_trex.yaml
   - type: StandaloneSriov
     file: /etc/yardstick/nodes/standalone/host_sriov.yaml

       images: "/var/lib/libvirt/images/ubuntu.qcow2"

       user: ""  # update VM username
       password: ""  # update password

       cidr: '1.1.1.61/24'  # Update VM IP address; if static, <ip>/<mask>, if dynamic, <start of ip>/<mask>

       phy_port: "0000:05:00.0"
       cidr: '152.16.100.10/24'
       gateway_ip: '152.16.100.20'

       phy_port: "0000:05:00.1"
       cidr: '152.16.40.10/24'
       gateway_ip: '152.16.100.20'
OVS-DPDK Pre-requisites
^^^^^^^^^^^^^^^^^^^^^^^

On the host where the VM is created:

a) Create and configure a bridge named ``br-int`` for the VM to connect to the
   external network. Currently this can be done using a VXLAN tunnel.

   Execute the following on the host where the VM is created:

   .. code-block:: console

      ip link add type vxlan remote <Jumphost IP> local <DUT IP> id <ID: 10> dstport 4789
      brctl addif br-int vxlan0
      ip link set dev vxlan0 up
      ip addr add <IP#1, like: 172.20.2.1/24> dev br-int
      ip link set dev br-int up
   .. note:: Extra ``iptables`` rules may need to be added to forward traffic.

   .. code-block:: console

      iptables -A FORWARD -i br-int -s <network ip address>/<netmask> -j ACCEPT
      iptables -A FORWARD -o br-int -d <network ip address>/<netmask> -j ACCEPT
   Execute the following on the jump host:

   .. code-block:: console

      ip link add type vxlan remote <DUT IP> local <Jumphost IP> id <ID: 10> dstport 4789
      ip addr add <IP#2, like: 172.20.2.2/24> dev vxlan0
      ip link set dev vxlan0 up

   .. note:: The host and the jump host are different bare-metal servers.
b) Modify the test case management CIDR.
   IP addresses IP#1, IP#2 and the CIDR must be in the same network.
c) Build the guest image for the VNF to run.
   Most of the sample test cases in Yardstick use a guest image called
   ``yardstick-nsb-image``, which deviates from an Ubuntu Cloud Server image.
   Yardstick has a tool for building this custom image with SampleVNF.
   It is necessary to have ``sudo`` rights to use this tool.

   You may also need to install several additional packages to use this tool,
   by following the commands below::

      sudo apt-get update && sudo apt-get install -y qemu-utils kpartx

   This image can be built using the following commands in the directory where
   Yardstick is installed::

      export YARD_IMG_ARCH='amd64'
      echo "Defaults env_keep += \"YARD_IMG_ARCH\"" | sudo tee -a /etc/sudoers
      sudo tools/yardstick-img-dpdk-modify tools/ubuntu-server-cloudimg-samplevnf-modify.sh

   For more details refer to chapter :doc:`04-installation`.

.. note:: The VM should be built with a static IP and should be accessible
   from the yardstick host.
d) OVS & DPDK version.
   - OVS 2.7 or above and DPDK 16.11.1 or above are supported.

e) Setup OVS/DPDK on the host.
   Please refer to the following link on how to set up
   `OVS-DPDK <http://docs.openvswitch.org/en/latest/intro/install/dpdk/>`_.
OVS-DPDK Config pod.yaml describing Topology
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

OVS-DPDK 2-Node setup
^^^^^^^^^^^^^^^^^^^^^

.. code-block:: console

  +--------------------+
  +--------------------+
  | virtio |  | virtio |
  +--------+  +--------+

  +--------+  +--------+
  | vHOST0 |  | vHOST1 |
  +----------+ +-------------------------+
  | | (0)<----->(0) | ------ | |
  | | (n)<----->(n) |------------------ |
  +----------+ +-------------------------+
OVS-DPDK 3-Node setup - Correlated Traffic
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. code-block:: console

  +--------------------+
  +--------------------+
  | virtio |  | virtio |
  +--------+  +--------+

  +--------+  +--------+
  | vHOST0 |  | vHOST1 |
  +----------+ +-------------------------+ +------------+
  | | (0)<----->(0) | ------ | | | TG2 |
  | TG1 | | SUT | | |(UDP Replay)|
  | | | (ovs-dpdk) | | | |
  | | (n)<----->(n) | ------ |(n)<-->(n)| |
  +----------+ +-------------------------+ +------------+
  trafficgen_1 host trafficgen_2
Before executing Yardstick test cases, make sure that ``pod.yaml`` reflects the
topology and update all the required fields.

.. code-block:: console

  cp <yardstick>/etc/yardstick/nodes/standalone/trex_bm.yaml.sample /etc/yardstick/nodes/standalone/pod_trex.yaml
  cp <yardstick>/etc/yardstick/nodes/standalone/host_ovs.yaml /etc/yardstick/nodes/standalone/host_ovs.yaml

.. note:: Update all the required fields like ip, user, password, pcis, etc.
OVS-DPDK Config pod_trex.yaml
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. code-block:: yaml

  xe0:  # logical name from topology.yaml and vnfd.yaml
    driver: i40e  # default kernel driver
    local_ip: "152.16.100.20"
    netmask: "255.255.255.0"
    local_mac: "00:00:00:00:00:01"
  xe1:  # logical name from topology.yaml and vnfd.yaml
    driver: i40e  # default kernel driver
    local_ip: "152.16.40.20"
    netmask: "255.255.255.0"
    local_mac: "00:00:00:00:00:02"
OVS-DPDK Config host_ovs.yaml
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

ovs_dpdk testcase update:
``<yardstick>/samples/vnf_samples/nsut/vfw/tc_ovs_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``

Update "contexts" section
"""""""""""""""""""""""""
.. code-block:: yaml

     file: /etc/yardstick/nodes/standalone/pod_trex.yaml
   - type: StandaloneOvsDpdk
     file: /etc/yardstick/nodes/standalone/pod_ovs.yaml

       images: "/var/lib/libvirt/images/ubuntu.qcow2"

       user: ""  # update VM username
       password: ""  # update password

       cidr: '1.1.1.61/24'  # Update VM IP address; if static, <ip>/<mask>, if dynamic, <start of ip>/<mask>

       phy_port: "0000:05:00.0"
       cidr: '152.16.100.10/24'
       gateway_ip: '152.16.100.20'

       phy_port: "0000:05:00.1"
       cidr: '152.16.40.10/24'
       gateway_ip: '152.16.100.20'
Network Service Benchmarking - OpenStack with SR-IOV support
============================================================

This section describes how to run a Sample VNF test case, using Heat context,
with SR-IOV. It also covers how to install OpenStack in Ubuntu 16.04, using
DevStack, with SR-IOV support.
Single node OpenStack setup with external TG
--------------------------------------------

.. code-block:: console

  +----------------------------+
  |OpenStack(DevStack)         |
  | +--------------------+ |
  | +--------+ +--------+ |
  | | VF NIC | | VF NIC | |
  | +-----+--+--+----+---+ |
  +----------+ +---------+----------+-------+
  | TG | (PF0)<----->(PF0) +---------+ | |
  | | (PF1)<----->(PF1) +--------------------+ |
  +----------+ +----------------------------+
Host pre-configuration
^^^^^^^^^^^^^^^^^^^^^^

.. warning:: The following configuration requires sudo access to the system.
   Make sure that your user has the access.
Enable the Intel VT-d or AMD-Vi extension in the BIOS. Some system
manufacturers disable this extension by default.

Activate the Intel VT-d or AMD-Vi extension in the kernel by modifying the GRUB
config file ``/etc/default/grub``.

For the Intel platform:

.. code-block:: ini

  GRUB_CMDLINE_LINUX_DEFAULT="intel_iommu=on"

For the AMD platform:

.. code-block:: ini

  GRUB_CMDLINE_LINUX_DEFAULT="amd_iommu=on"
Update the grub configuration file and restart the system:

.. warning:: The following command will reboot the system.

.. code-block:: console

  sudo update-grub
  sudo reboot

Make sure the extension has been enabled:

.. code-block:: console

  sudo journalctl -b 0 | grep -e IOMMU -e DMAR

  Feb 06 14:50:14 hostname kernel: ACPI: DMAR 0x000000006C406000 0001E0 (v01 INTEL S2600WF 00000001 INTL 20091013)
  Feb 06 14:50:14 hostname kernel: DMAR: IOMMU enabled
  Feb 06 14:50:14 hostname kernel: DMAR: Host address width 46
  Feb 06 14:50:14 hostname kernel: DMAR: DRHD base: 0x000000d37fc000 flags: 0x0
  Feb 06 14:50:14 hostname kernel: DMAR: dmar0: reg_base_addr d37fc000 ver 1:0 cap 8d2078c106f0466 ecap f020de
  Feb 06 14:50:14 hostname kernel: DMAR: DRHD base: 0x000000e0ffc000 flags: 0x0
  Feb 06 14:50:14 hostname kernel: DMAR: dmar1: reg_base_addr e0ffc000 ver 1:0 cap 8d2078c106f0466 ecap f020de
  Feb 06 14:50:14 hostname kernel: DMAR: DRHD base: 0x000000ee7fc000 flags: 0x0
Setup the system proxy (if needed). Add the following configuration into the
``/etc/environment`` file:

.. note:: The proxy server name/port and IPs should be changed according to
   the actual/current proxy configuration in the lab.

.. code-block:: console

  export http_proxy=http://proxy.company.com:port
  export https_proxy=http://proxy.company.com:port
  export ftp_proxy=http://proxy.company.com:port
  export no_proxy=localhost,127.0.0.1,company.com,<IP-OF-HOST1>,<IP-OF-HOST2>,...
  export NO_PROXY=localhost,127.0.0.1,company.com,<IP-OF-HOST1>,<IP-OF-HOST2>,...
Update the system:

.. code-block:: console

  sudo -EH apt-get update
  sudo -EH apt-get upgrade
  sudo -EH apt-get dist-upgrade

Install dependencies needed for DevStack:

.. code-block:: console

  sudo -EH apt-get install python
  sudo -EH apt-get install python-dev
  sudo -EH apt-get install python-pip
Setup SR-IOV ports on the host:

.. note:: The ``enp24s0f0``, ``enp24s0f1`` are physical function (PF)
   interfaces on a host and ``enp24s0f3`` is a public interface used in
   OpenStack, so the interface names should be changed according to the HW
   environment used for testing.

.. code-block:: console

  sudo ip link set dev enp24s0f0 up
  sudo ip link set dev enp24s0f1 up
  sudo ip link set dev enp24s0f3 up

  # Create VFs on the PFs
  echo 2 | sudo tee /sys/class/net/enp24s0f0/device/sriov_numvfs
  echo 2 | sudo tee /sys/class/net/enp24s0f1/device/sriov_numvfs
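The ``sriov_numvfs`` write can be confirmed by reading the node back; the
kernel reports the number of VFs actually created. The same write-and-readback
pattern is sketched here against a scratch file standing in for the sysfs node
(hardware-independent):

.. code-block:: console

   # scratch stand-in for /sys/class/net/<PF>/device/sriov_numvfs
   f=$(mktemp)
   echo 2 | tee "$f" > /dev/null
   cat "$f"

On real hardware, ``cat /sys/class/net/enp24s0f0/device/sriov_numvfs`` and
``lspci | grep -i 'Virtual Function'`` confirm that the VFs exist.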
DevStack installation
^^^^^^^^^^^^^^^^^^^^^

Use the official `Devstack <https://docs.openstack.org/devstack/pike/>`_
documentation to install OpenStack on a host. Please note that the stable
``pike`` branch of the devstack repo should be used during the installation.
The required ``local.conf`` configuration file is described below.

DevStack configuration file:

.. note:: Update the devstack configuration file by replacing angular brackets
   with a short description inside.

.. note:: Use ``lspci | grep Ether`` & ``lspci -n | grep <PCI ADDRESS>``
   commands to get the device and vendor id of the virtual function (VF).

.. literalinclude:: code/single-devstack-local.conf

Start the devstack installation on the host.
TG host configuration
^^^^^^^^^^^^^^^^^^^^^

Yardstick automatically installs and configures the Trex traffic generator on
the TG host, based on the provided POD file (see below). It is nevertheless
recommended to check the compatibility of the NIC installed on the TG server
with the Trex software, using the manual at
https://trex-tgn.cisco.com/trex/doc/trex_manual.html.
Run the Sample VNF test case
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

There is an example of a Sample VNF test case ready to be executed in an
OpenStack environment with SR-IOV support: ``samples/vnf_samples/nsut/vfw/
tc_heat_sriov_external_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``.

Install Yardstick using the `Install Yardstick (NSB Testing)`_ steps for the
OpenStack context.

Create a pod file for the TG in the yardstick repo folder located in the
yardstick container:

.. note:: The ``ip``, ``user``, ``password`` and ``vpci`` fields should be
   changed according to the HW environment used for the testing. Use the
   ``lshw -c network -businfo`` command to get the PF PCI address for the
   ``vpci`` field.

.. literalinclude:: code/single-yardstick-pod.conf

Run the sample vFW RFC2544 SR-IOV TC (``samples/vnf_samples/nsut/vfw/
tc_heat_sriov_external_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``) in the heat
context using the steps described in the `NS testing - using yardstick CLI`_
section.
Multi node OpenStack TG and VNF setup (two nodes)
-------------------------------------------------

.. code-block:: console

  +----------------------------+ +----------------------------+
  |OpenStack(DevStack)         | |OpenStack(DevStack)         |
  | +--------------------+ | | +--------------------+ |
  | |sample-VNF VM | | | |sample-VNF VM | |
  | | TG | | | | DUT | |
  | | trafficgen_1 | | | | (VNF) | |
  | +--------+ +--------+ | | +--------+ +--------+ |
  | | VF NIC | | VF NIC | | | | VF NIC | | VF NIC | |
  | +----+---+--+----+---+ | | +-----+--+--+----+---+ |
  +--------+-----------+-------+ +---------+----------+-------+
  | VF0 VF1 | | VF0 VF1 |
  | | SUT2 | | | | SUT1 | |
  | | +-------+ (PF0)<----->(PF0) +---------+ | |
  | +-------------------+ (PF1)<----->(PF1) +--------------------+ |
  +----------------------------+ +----------------------------+
  host2 (compute)                host1 (controller)
Controller/Compute pre-configuration
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Pre-configuration of the controller and compute hosts is the same as described
in the `Host pre-configuration`_ section. Follow the steps in that section.
DevStack configuration
^^^^^^^^^^^^^^^^^^^^^^

Use the official `Devstack <https://docs.openstack.org/devstack/pike/>`_
documentation to install OpenStack on the hosts. Please note that the stable
``pike`` branch of the devstack repo should be used during the installation.
The required ``local.conf`` configuration files are described below.

.. note:: Update the devstack configuration files by replacing angular
   brackets with a short description inside.

.. note:: Use ``lspci | grep Ether`` & ``lspci -n | grep <PCI ADDRESS>``
   commands to get the device and vendor id of the virtual function (VF).

DevStack configuration file for the controller host:

.. literalinclude:: code/multi-devstack-controller-local.conf

DevStack configuration file for the compute host:

.. literalinclude:: code/multi-devstack-compute-local.conf

Start the devstack installation on the controller and compute hosts.
Run the sample vFW TC
^^^^^^^^^^^^^^^^^^^^^

Install Yardstick using the `Install Yardstick (NSB Testing)`_ steps for the
OpenStack context.

Run the sample vFW RFC2544 SR-IOV TC (``samples/vnf_samples/nsut/vfw/
tc_heat_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``) in the heat context using
the steps described in the `NS testing - using yardstick CLI`_ section and
the following yardstick command line arguments:

.. code-block:: console

  yardstick -d task start --task-args='{"provider": "sriov"}' \
  samples/vnf_samples/nsut/vfw/tc_heat_rfc2544_ipv4_1rule_1flow_64B_trex.yaml
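The string passed to ``--task-args`` must parse as a JSON mapping; it can be
validated up front with Python's stdlib ``json`` module before handing it to
yardstick (``python3`` assumed available; a sketch, not an installation step):

.. code-block:: console

   # parse the --task-args string and print the provider value
   python3 -c "import json; print(json.loads('{\"provider\": \"sriov\"}')['provider'])"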
Enabling other Traffic generators
=================================

1. Software needed: IxLoadAPI ``<IxLoadTclApi version>Linux64.bin.tgz`` and
   ``<IxOS version>Linux64.bin.tar.gz`` (download from the Ixia support site).
   Install ``<IxLoadTclApi version>Linux64.bin.tgz`` and
   ``<IxOS version>Linux64.bin.tar.gz``.
   If the installation was not done inside the container, after installing
   the IXIA client, check ``/opt/ixia/ixload/<ver>/bin/ixloadpython`` and make
   sure you can run this cmd inside the yardstick container. Usually the user
   is required to copy or link ``/opt/ixia/python/<ver>/bin/ixiapython`` to
   ``/usr/bin/ixiapython<ver>`` inside the container.
2. Update the ``pod_ixia.yaml`` file with the Ixia details.

   .. code-block:: console

      cp <repo>/etc/yardstick/nodes/pod.yaml.nsb.sample.ixia etc/yardstick/nodes/pod_ixia.yaml

   Config ``pod_ixia.yaml``:

   .. literalinclude:: code/pod_ixia.yaml

   For sriov/ovs_dpdk pod files, please refer to the Standalone Virtualization
   section above for the ovs-dpdk/sriov configuration.

3. Start the IxOS TCL Server (install 'Ixia IxExplorer IxOS <version>').
   You will also need to configure the IxLoad machine to start the IXIA
   IxosTclServer. This can be started like so:

   * Connect to the IxLoad machine using RDP.
   * Go to:
     ``Start->Programs->Ixia->IxOS->IxOS 8.01-GA-Patch1->Ixia Tcl Server IxOS 8.01-GA-Patch1``
     or
     ``"C:\Program Files (x86)\Ixia\IxOS\8.01-GA-Patch1\ixTclServer.exe"``

4. Create a folder ``Results`` in ``c:\`` and share the folder on the network.

5. Execute the testcase in the samplevnf folder, e.g.
   ``<repo>/samples/vnf_samples/nsut/vfw/tc_baremetal_http_ixload_1b_Requests-65000_Concurrency.yaml``
IxNetwork testcases use the IxNetwork API Python Bindings module, which is
installed as part of the requirements of the project.

1. Update the ``pod_ixia.yaml`` file with the Ixia details.

   .. code-block:: console

      cp <repo>/etc/yardstick/nodes/pod.yaml.nsb.sample.ixia etc/yardstick/nodes/pod_ixia.yaml

   Config ``pod_ixia.yaml``:

   .. literalinclude:: code/pod_ixia.yaml

   For sriov/ovs_dpdk pod files, please refer to the Standalone Virtualization
   section above for the ovs-dpdk/sriov configuration.

2. Start the IxNetwork TCL Server.
   You will also need to configure the IxNetwork machine to start the IXIA
   IxNetworkTclServer. This can be started like so:

   * Connect to the IxNetwork machine using RDP.
   * Go to:
     ``Start->Programs->Ixia->IxNetwork->IxNetwork 7.21.893.14 GA->IxNetworkTclServer``
     (or ``IxNetworkApiServer``)

3. Execute the testcase in the samplevnf folder, e.g.
   ``<repo>/samples/vnf_samples/nsut/vfw/tc_baremetal_rfc2544_ipv4_1rule_1flow_64B_ixia.yaml``