.. This work is licensed under a Creative Commons Attribution 4.0 International
.. License.
.. http://creativecommons.org/licenses/by/4.0
.. (c) OPNFV, 2016-2017 Intel Corporation.
======================================
Yardstick - NSB Testing - Installation
======================================
The Network Service Benchmarking (NSB) extends the Yardstick framework to do
VNF characterization and benchmarking in three different execution
environments: bare metal (i.e. native Linux environment), standalone virtual
environment and managed virtualized environment (e.g. OpenStack).
It also brings in the capability to interact with external traffic generators,
both hardware and software based, for triggering and validating the traffic
according to user defined profiles.
The steps needed to run Yardstick with NSB testing are:

* Install Yardstick (NSB Testing).
* Setup/reference ``pod.yaml`` describing the test topology.
* Create/reference the test configuration yaml file.
* Run the test case.
Prerequisites
=============

Refer chapter :doc:`04-installation` for more information on Yardstick
installation.

Several prerequisites are needed for Yardstick (VNF testing):

* Python Modules: pyzmq, pika.
Hardware & Software Ingredients
-------------------------------

SUT requirements:

======= ===================
Item    Description
======= ===================
kernel  4.4.0-34-generic
======= ===================
Boot and BIOS settings:

============= =================================================
Boot settings default_hugepagesz=1G hugepagesz=1G hugepages=16
              hugepagesz=2M hugepages=2048 isolcpus=1-11,22-33
              nohz_full=1-11,22-33 rcu_nocbs=1-11,22-33
              iommu=on iommu=pt intel_iommu=on
              Note: nohz_full and rcu_nocbs are used to disable
              Linux kernel interrupts
BIOS          CPU Power and Performance Policy <Performance>
              Enhanced Intel(R) Speedstep(R) Tech Disabled
              Hyper-Threading Technology (If supported) Enabled
              Virtualization Technology Enabled
              Intel(R) VT for Direct I/O Enabled
============= =================================================
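The boot settings above are kernel command line parameters applied through
GRUB. As a sketch, assuming an Ubuntu host (the ``isolcpus``/``nohz_full``/
``rcu_nocbs`` core lists are examples and must match the core layout of your
own machine):

.. code-block:: console

    # Append the NSB kernel parameters to the GRUB command line, then
    # regenerate the GRUB configuration; a reboot is needed to take effect.
    sudo sed -i 's|^GRUB_CMDLINE_LINUX_DEFAULT=.*|GRUB_CMDLINE_LINUX_DEFAULT="default_hugepagesz=1G hugepagesz=1G hugepages=16 hugepagesz=2M hugepages=2048 isolcpus=1-11,22-33 nohz_full=1-11,22-33 rcu_nocbs=1-11,22-33 iommu=on iommu=pt intel_iommu=on"|' /etc/default/grub
    sudo update-grub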
Install Yardstick (NSB Testing)
===============================

Download the source code and install Yardstick from it:

.. code-block:: console

  git clone https://gerrit.opnfv.org/gerrit/yardstick

  cd yardstick

  # Switch to latest stable branch
  # git checkout <tag or stable branch>
  git checkout stable/euphrates
Configure the network proxy, either using the environment variables or setting
the global environment file:

.. code-block:: ini

    cat /etc/environment
    http_proxy='http://proxy.company.com:port'
    https_proxy='http://proxy.company.com:port'

.. code-block:: console

    export http_proxy='http://proxy.company.com:port'
    export https_proxy='http://proxy.company.com:port'
The last step is to modify the Yardstick installation inventory, used by
Ansible:

.. code-block:: ini

  cat ./ansible/yardstick-install-inventory.ini
  [jumphost]
  localhost  ansible_connection=local

  [yardstick-standalone]
  yardstick-standalone-node ansible_host=192.168.1.2
  yardstick-standalone-node-2 ansible_host=192.168.1.3

  # section below is only due to backward compatibility.
  # It will be removed later.
  [yardstick:children]
  jumphost
To execute an installation for a Bare-Metal or a Standalone context:

.. code-block:: console

    ./nsb_setup.sh


To execute an installation for an OpenStack context:

.. code-block:: console

    ./nsb_setup.sh <path to admin-openrc.sh>

The above commands set up a Docker container with the latest Yardstick code.
To execute inside the container:

.. code-block:: console

    docker exec -it yardstick bash

It will also automatically download all the packages needed for NSB Testing
setup. Refer chapter :doc:`04-installation` for more on Docker:
**Install Yardstick using Docker (recommended)**
System Topology:
================

.. code-block:: console

  +----------+              +----------+
  |          |              |          |
  |          | (0)----->(0) |          |
  |    TG1   |              |    DUT   |
  |          |              |          |
  |          | (1)<-----(1) |          |
  +----------+              +----------+
  trafficgen_1                   vnf
Environment parameters and credentials
======================================

Config yardstick conf
---------------------

If the user did not run 'yardstick env influxdb' inside the container, which
generates the correct ``yardstick.conf``, then create the config file manually
(run inside the container):

.. code-block:: console

    cp ./etc/yardstick/yardstick.conf.sample /etc/yardstick/yardstick.conf
    vi /etc/yardstick/yardstick.conf
Add ``trex_path``, ``trex_client_lib`` and ``bin_path`` to the ``nsb`` section.

.. code-block:: ini

  [DEFAULT]
  dispatcher = file, influxdb

  [dispatcher_influxdb]
  target = http://{YOUR_IP_HERE}:8086

  [nsb]
  trex_path=/opt/nsb_bin/trex/scripts
  bin_path=/opt/nsb_bin
  trex_client_lib=/opt/nsb_bin/trex_client/stl
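To catch configuration mistakes early, the directories named in the ``nsb``
section can be verified before a test run. A small sketch (POSIX shell, run
inside the container; the helper name ``check_dirs`` is illustrative):

.. code-block:: console

    # check_dirs: print a status line per directory, return 1 if any is missing
    check_dirs() {
        rc=0
        for p in "$@"; do
            if [ -d "$p" ]; then
                echo "OK       $p"
            else
                echo "MISSING  $p"
                rc=1
            fi
        done
        return $rc
    }
    check_dirs /opt/nsb_bin /opt/nsb_bin/trex/scripts /opt/nsb_bin/trex_client/stl \
        || echo "update the [nsb] paths in /etc/yardstick/yardstick.conf"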
Run Yardstick - Network Service Testcases
=========================================

NS testing - using yardstick CLI
--------------------------------

See :doc:`04-installation`

.. code-block:: console

    docker exec -it yardstick /bin/bash
    source /etc/yardstick/openstack.creds (only for heat TC if nsb_setup.sh was NOT used)
    export EXTERNAL_NETWORK="<openstack public network>" (only for heat TC)
    yardstick --debug task start yardstick/samples/vnf_samples/nsut/<vnf>/<test case>
Network Service Benchmarking - Bare-Metal
=========================================

Bare-Metal Config pod.yaml describing Topology
----------------------------------------------

Bare-Metal 2-Node setup
^^^^^^^^^^^^^^^^^^^^^^^

.. code-block:: console

  +----------+              +----------+
  |          |              |          |
  |          | (0)----->(0) |          |
  |    TG1   |              |    DUT   |
  |          |              |          |
  |          | (1)<-----(1) |          |
  +----------+              +----------+
  trafficgen_1                   vnf
Bare-Metal 3-Node setup - Correlated Traffic
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. code-block:: console

  +----------+              +----------+            +------------+
  |          |              |          |            |            |
  |          | (0)----->(0) |          |            |    UDP     |
  |    TG1   |              |    DUT   |            |   Replay   |
  |          |              |          |            |            |
  |          |              |          |(1)<---->(0)|            |
  +----------+              +----------+            +------------+
  trafficgen_1                  vnf                 trafficgen_2
Bare-Metal Config pod.yaml
--------------------------

Before executing Yardstick test cases, make sure that ``pod.yaml`` reflects the
topology and update all the required fields::

    cp /etc/yardstick/nodes/pod.yaml.nsb.sample /etc/yardstick/nodes/pod.yaml

.. code-block:: YAML

    nodes:
    -
        name: trafficgen_1
        role: TrafficGen
        ip: 1.1.1.1
        user: root
        password: r00t
        interfaces:
            xe0:  # logical name from topology.yaml and vnfd.yaml
                vpci:      "0000:07:00.0"
                driver:    i40e # default kernel driver
                dpdk_port_num: 0
                local_ip: "152.16.100.20"
                netmask:   "255.255.255.0"
                local_mac: "00:00:00:00:00:01"
            xe1:  # logical name from topology.yaml and vnfd.yaml
                vpci:      "0000:07:00.1"
                driver:    i40e # default kernel driver
                dpdk_port_num: 1
                local_ip: "152.16.40.20"
                netmask:   "255.255.255.0"
                local_mac: "00:00:00:00:00:02"
    -
        name: vnf
        role: vnf
        ip: 1.1.1.2
        user: root
        password: r00t
        host: 1.1.1.2 # BM - host == ip, virtualized env - Host - compute node
        interfaces:
            xe0:  # logical name from topology.yaml and vnfd.yaml
                vpci:      "0000:07:00.0"
                driver:    i40e # default kernel driver
                dpdk_port_num: 0
                local_ip: "152.16.100.19"
                netmask:   "255.255.255.0"
                local_mac: "00:00:00:00:00:03"
            xe1:  # logical name from topology.yaml and vnfd.yaml
                vpci:      "0000:07:00.1"
                driver:    i40e # default kernel driver
                dpdk_port_num: 1
                local_ip: "152.16.40.19"
                netmask:   "255.255.255.0"
                local_mac: "00:00:00:00:00:04"
        routing_table:
        - network: "152.16.100.20"
          netmask: "255.255.255.0"
          gateway: "152.16.100.20"
          if: "xe0"
        - network: "152.16.40.20"
          netmask: "255.255.255.0"
          gateway: "152.16.40.20"
          if: "xe1"
        nd_route_tbl:
        - network: "0064:ff9b:0:0:0:0:9810:6414"
          netmask: "112"
          gateway: "0064:ff9b:0:0:0:0:9810:6414"
          if: "xe0"
        - network: "0064:ff9b:0:0:0:0:9810:2814"
          netmask: "112"
          gateway: "0064:ff9b:0:0:0:0:9810:2814"
          if: "xe1"
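The addressing in the pod file above can be sanity-checked before a run: each
interface's ``local_ip`` should fall inside the corresponding route network. A
small sketch using POSIX shell and awk (the helper ``in_subnet`` and the sample
values are illustrative; assumes contiguous netmasks):

.. code-block:: console

    # in_subnet IP NETWORK NETMASK -> succeeds if IP is inside NETWORK/NETMASK
    in_subnet() {
        awk -v ip="$1" -v net="$2" -v mask="$3" 'BEGIN {
            split(ip, a, "."); split(net, b, "."); split(mask, m, ".")
            for (i = 1; i <= 4; i++) {
                blk = 256 - m[i]              # block size for this mask octet
                if (a[i] - a[i] % blk != b[i] + 0) exit 1
            }
            exit 0
        }'
    }
    in_subnet 152.16.100.20 152.16.100.0 255.255.255.0 && echo "xe0 uplink route OK"
    in_subnet 152.16.40.20  152.16.40.0  255.255.255.0 && echo "xe1 downlink route OK"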
Network Service Benchmarking - Standalone Virtualization
========================================================

SR-IOV
------

SR-IOV Pre-requisites
^^^^^^^^^^^^^^^^^^^^^

On Host:

a) Create a bridge for the VM to connect to the external network:

   .. code-block:: console

       brctl addbr br-int
       brctl addif br-int <interface_name>    # This interface is connected to internet
b) Build the guest image for the VNF to run.
   Most of the sample test cases in Yardstick are using a guest image called
   ``yardstick-image``, which deviates from an Ubuntu Cloud Server image.
   Yardstick has a tool for building this custom image with SampleVNF.
   It is necessary to have ``sudo`` rights to use this tool.

   You may also need to install several additional packages to use this tool,
   by following the commands below::

      sudo apt-get update && sudo apt-get install -y qemu-utils kpartx

   This image can be built using the following commands in the directory where
   Yardstick is installed:

   .. code-block:: console

      export YARD_IMG_ARCH='amd64'
      echo "Defaults env_keep += \"YARD_IMG_ARCH\"" | sudo tee -a /etc/sudoers

   Please use the ansible script to generate a cloud image; refer to
   :doc:`04-installation` for more details.

.. note:: The VM should be built with a static IP and should be accessible
   from the yardstick host.
SR-IOV Config pod.yaml describing Topology
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

SR-IOV 2-Node setup
^^^^^^^^^^^^^^^^^^^

.. code-block:: console

                               +--------------------+
                               |                    |
                               |                    |
                               |        DUT         |
                               |       (VNF)        |
                               |                    |
                               +--------------------+
                               | VF NIC |  | VF NIC |
                               +--------+  +--------+
                                     ^          ^
                                     |          |
                                     |          |
  +----------+               +-------------------------+
  |          |               |       ^          ^      |
  |          |               |       |          |      |
  |          | (0)<----->(0) | ------           |      |
  |    TG1   |               |          SUT     |      |
  |          |               |                  |      |
  |          | (n)<----->(n) | -----------------       |
  +----------+               +-------------------------+
  trafficgen_1                        host
SR-IOV 3-Node setup - Correlated Traffic
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. code-block:: console

                               +--------------------+
                               |                    |
                               |                    |
                               |        DUT         |
                               |       (VNF)        |
                               |                    |
                               +--------------------+
                               | VF NIC |  | VF NIC |
                               +--------+  +--------+
                                     ^          ^
                                     |          |
                                     |          |
  +----------+               +-------------------------+            +--------------+
  |          |               |       ^          ^      |            |              |
  |          |               |       |          |      |            |              |
  |          | (0)<----->(0) | ------           |      |            |     TG2      |
  |    TG1   |               |          SUT     |      |            | (UDP Replay) |
  |          |               |                  |      |            |              |
  |          | (n)<----->(n) |            ------       | (n)<-->(n) |              |
  +----------+               +-------------------------+            +--------------+
  trafficgen_1                        host                           trafficgen_2
Before executing Yardstick test cases, make sure that pod.yaml reflects the
topology and update all the required fields.

.. code-block:: console

    cp <yardstick>/etc/yardstick/nodes/standalone/trex_bm.yaml.sample /etc/yardstick/nodes/standalone/pod_trex.yaml
    cp <yardstick>/etc/yardstick/nodes/standalone/host_sriov.yaml /etc/yardstick/nodes/standalone/host_sriov.yaml

.. note:: Update all the required fields like ip, user, password, pcis, etc.
SR-IOV Config pod_trex.yaml
^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. code-block:: YAML

    nodes:
    -
        name: trafficgen_1
        role: TrafficGen
        ip: 1.1.1.1
        user: root
        password: r00t
        key_filename: /root/.ssh/id_rsa
        interfaces:
            xe0:  # logical name from topology.yaml and vnfd.yaml
                vpci:      "0000:07:00.0"
                driver:    i40e # default kernel driver
                dpdk_port_num: 0
                local_ip: "152.16.100.20"
                netmask:   "255.255.255.0"
                local_mac: "00:00:00:00:00:01"
            xe1:  # logical name from topology.yaml and vnfd.yaml
                vpci:      "0000:07:00.1"
                driver:    i40e # default kernel driver
                dpdk_port_num: 1
                local_ip: "152.16.40.20"
                netmask:   "255.255.255.0"
                local_mac: "00:00:00:00:00:02"
SR-IOV Config host_sriov.yaml
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. code-block:: YAML

    nodes:
    -
       name: sriov
       role: Sriov
       ip: 192.168.100.101
       user: ""
       password: ""

SR-IOV testcase update:
``<yardstick>/samples/vnf_samples/nsut/vfw/tc_sriov_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``

Update "contexts" section
"""""""""""""""""""""""""

.. code-block:: YAML

  contexts:
   - name: yardstick
     type: Node
     file: /etc/yardstick/nodes/standalone/pod_trex.yaml
   - type: StandaloneSriov
     file: /etc/yardstick/nodes/standalone/host_sriov.yaml
     name: yardstick
     vm_deploy: True
     flavor:
       images: "/var/lib/libvirt/images/ubuntu.qcow2"
       ram: 4096
       extra_specs:
         hw:cpu_sockets: 1
         hw:cpu_cores: 6
         hw:cpu_threads: 3
       user: "" # update VM username
       password: "" # update password
     servers:
       vnf:
         network_ports:
           mgmt:
             cidr: '1.1.1.61/24' # Update VM IP address, if static, <ip>/<mask> or if dynamic, <start of ip>/<mask>
           xe0:
             - uplink_0
           xe1:
             - downlink_0
     networks:
       uplink_0:
         phy_port: "0000:05:00.0"
         vpci: "0000:00:07.0"
         cidr: '152.16.100.10/24'
         gateway_ip: '152.16.100.20'
       downlink_0:
         phy_port: "0000:05:00.1"
         vpci: "0000:00:08.0"
         cidr: '152.16.40.10/24'
         gateway_ip: '152.16.100.20'
OVS-DPDK
--------

OVS-DPDK Pre-requisites
^^^^^^^^^^^^^^^^^^^^^^^

On Host:

a) Create a bridge for the VM to connect to the external network:

   .. code-block:: console

       brctl addbr br-int
       brctl addif br-int <interface_name>    # This interface is connected to internet
b) Build the guest image for the VNF to run.
   Most of the sample test cases in Yardstick are using a guest image called
   ``yardstick-image``, which deviates from an Ubuntu Cloud Server image.
   Yardstick has a tool for building this custom image with SampleVNF.
   It is necessary to have ``sudo`` rights to use this tool.

   You may also need to install several additional packages to use this tool,
   by following the commands below::

      sudo apt-get update && sudo apt-get install -y qemu-utils kpartx

   This image can be built using the following commands in the directory where
   Yardstick is installed::

      export YARD_IMG_ARCH='amd64'
      echo "Defaults env_keep += \"YARD_IMG_ARCH\"" | sudo tee -a /etc/sudoers
      sudo tools/yardstick-img-dpdk-modify tools/ubuntu-server-cloudimg-samplevnf-modify.sh

   For more details refer to chapter :doc:`04-installation`.

.. note:: The VM should be built with a static IP and should be accessible
   from the yardstick host.
c) OVS & DPDK version:
   - OVS 2.7 and DPDK 16.11.1 and above versions are supported.

d) Setup OVS/DPDK on the host.
   Please refer to the following link on how to setup
   `OVS-DPDK <http://docs.openvswitch.org/en/latest/intro/install/dpdk/>`_.
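   Once OVS is built with DPDK support, the typical steps look like the sketch
   below; the bridge/port names and the PCI address are examples, and the exact
   flow depends on the OVS/DPDK versions used (see the guide linked above).

   .. code-block:: console

       # Enable DPDK support in OVS, create a netdev bridge, then add a
       # physical DPDK port and a vhost-user port for the VM.
       ovs-vsctl set Open_vSwitch . other_config:dpdk-init=true
       ovs-vsctl add-br br0 -- set bridge br0 datapath_type=netdev
       ovs-vsctl add-port br0 dpdk0 -- set Interface dpdk0 type=dpdk \
           options:dpdk-devargs=0000:05:00.0
       ovs-vsctl add-port br0 vhost-user0 -- set Interface vhost-user0 type=dpdkvhostuser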
OVS-DPDK Config pod.yaml describing Topology
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

OVS-DPDK 2-Node setup
^^^^^^^^^^^^^^^^^^^^^

.. code-block:: console

                               +--------------------+
                               |                    |
                               |                    |
                               |        DUT         |
                               |       (VNF)        |
                               |                    |
                               +--------------------+
                               | virtio |  | virtio |
                               +--------+  +--------+
                                    ^          ^
                                    |          |
                                    |          |
                               +--------+  +--------+
                               | vHOST0 |  | vHOST1 |
  +----------+               +-------------------------+
  |          |               |       ^          ^      |
  |          |               |       |          |      |
  |          | (0)<----->(0) | ------           |      |
  |    TG1   |               |          SUT     |      |
  |          |               |       (ovs-dpdk) |      |
  |          | (n)<----->(n) | -----------------       |
  +----------+               +-------------------------+
  trafficgen_1                        host
OVS-DPDK 3-Node setup - Correlated Traffic
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. code-block:: console

                               +--------------------+
                               |                    |
                               |                    |
                               |        DUT         |
                               |       (VNF)        |
                               |                    |
                               +--------------------+
                               | virtio |  | virtio |
                               +--------+  +--------+
                                    ^          ^
                                    |          |
                                    |          |
                               +--------+  +--------+
                               | vHOST0 |  | vHOST1 |
  +----------+               +-------------------------+          +------------+
  |          |               |       ^          ^      |          |            |
  |          |               |       |          |      |          |    TG2     |
  |          | (0)<----->(0) | ------           |      |          |(UDP Replay)|
  |    TG1   |               |          SUT     |      |          |            |
  |          |               |       (ovs-dpdk) |      |          |            |
  |          | (n)<----->(n) |            ------       |(n)<-->(n)|            |
  +----------+               +-------------------------+          +------------+
  trafficgen_1                        host                         trafficgen_2
Before executing Yardstick test cases, make sure that pod.yaml reflects the
topology and update all the required fields.

.. code-block:: console

    cp <yardstick>/etc/yardstick/nodes/standalone/trex_bm.yaml.sample /etc/yardstick/nodes/standalone/pod_trex.yaml
    cp <yardstick>/etc/yardstick/nodes/standalone/host_ovs.yaml /etc/yardstick/nodes/standalone/host_ovs.yaml

.. note:: Update all the required fields like ip, user, password, pcis, etc.
OVS-DPDK Config pod_trex.yaml
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. code-block:: YAML

    nodes:
    -
        name: trafficgen_1
        role: TrafficGen
        ip: 1.1.1.1
        user: root
        password: r00t
        interfaces:
            xe0:  # logical name from topology.yaml and vnfd.yaml
                vpci:      "0000:07:00.0"
                driver:    i40e # default kernel driver
                dpdk_port_num: 0
                local_ip: "152.16.100.20"
                netmask:   "255.255.255.0"
                local_mac: "00:00:00:00:00:01"
            xe1:  # logical name from topology.yaml and vnfd.yaml
                vpci:      "0000:07:00.1"
                driver:    i40e # default kernel driver
                dpdk_port_num: 1
                local_ip: "152.16.40.20"
                netmask:   "255.255.255.0"
                local_mac: "00:00:00:00:00:02"
OVS-DPDK Config host_ovs.yaml
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. code-block:: YAML

    nodes:
    -
       name: ovs_dpdk
       role: OvsDpdk
       ip: 192.168.100.101
       user: ""
       password: ""

ovs_dpdk testcase update:
``<yardstick>/samples/vnf_samples/nsut/vfw/tc_ovs_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``

Update "contexts" section
"""""""""""""""""""""""""

.. code-block:: YAML

  contexts:
   - name: yardstick
     type: Node
     file: /etc/yardstick/nodes/standalone/pod_trex.yaml
   - type: StandaloneOvsDpdk
     name: yardstick
     file: /etc/yardstick/nodes/standalone/pod_ovs.yaml
     vm_deploy: True
     flavor:
       images: "/var/lib/libvirt/images/ubuntu.qcow2"
       ram: 4096
       extra_specs:
         hw:cpu_sockets: 1
         hw:cpu_cores: 6
         hw:cpu_threads: 3
       user: "" # update VM username
       password: "" # update password
     servers:
       vnf:
         network_ports:
           mgmt:
             cidr: '1.1.1.61/24' # Update VM IP address, if static, <ip>/<mask> or if dynamic, <start of ip>/<mask>
           xe0:
             - uplink_0
           xe1:
             - downlink_0
     networks:
       uplink_0:
         phy_port: "0000:05:00.0"
         vpci: "0000:00:07.0"
         cidr: '152.16.100.10/24'
         gateway_ip: '152.16.100.20'
       downlink_0:
         phy_port: "0000:05:00.1"
         vpci: "0000:00:08.0"
         cidr: '152.16.40.10/24'
         gateway_ip: '152.16.100.20'
Network Service Benchmarking - OpenStack with SR-IOV support
============================================================

This section describes how to run a Sample VNF test case, using Heat context,
with SR-IOV. It also covers how to install OpenStack in Ubuntu 16.04, using
DevStack, with SR-IOV support.

Single node OpenStack setup with external TG
--------------------------------------------
.. code-block:: console

                                 +----------------------------+
                                 |OpenStack(DevStack)         |
                                 |                            |
                                 |   +--------------------+   |
                                 |   |sample-VNF VM       |   |
                                 |   |                    |   |
                                 |   |        DUT         |   |
                                 |   |       (VNF)        |   |
                                 |   |                    |   |
                                 |   +--------+  +--------+   |
                                 |   | VF NIC |  | VF NIC |   |
                                 |   +-----+--+--+----+---+   |
                                 |         ^        ^         |
                                 |         |        |         |
  +----------+                   +---------+--------+---------+
  |          |                   |        VF0      VF1        |
  |          |                   |         ^        ^         |
  |          |                   |         |   SUT  |         |
  |    TG    | (PF0)<----->(PF0) +---------+        |         |
  |          |                   |                  |         |
  |          | (PF1)<----->(PF1) +------------------+         |
  |          |                   |                            |
  +----------+                   +----------------------------+
  trafficgen_1                                host
Host pre-configuration
^^^^^^^^^^^^^^^^^^^^^^

.. warning:: The following configuration requires sudo access to the system.
   Make sure that your user has the access.
Enable the Intel VT-d or AMD-Vi extension in the BIOS. Some system
manufacturers disable this extension by default.

Activate the Intel VT-d or AMD-Vi extension in the kernel by modifying the GRUB
config file ``/etc/default/grub``.

For the Intel platform:

.. code-block:: bash

    GRUB_CMDLINE_LINUX_DEFAULT="intel_iommu=on"

For the AMD platform:

.. code-block:: bash

    GRUB_CMDLINE_LINUX_DEFAULT="amd_iommu=on"

Update the grub configuration file and restart the system:

.. warning:: The following command will reboot the system.

.. code-block:: console

    sudo update-grub
    sudo reboot

Make sure the extension has been enabled:

.. code-block:: console

    sudo journalctl -b 0 | grep -e IOMMU -e DMAR

    Feb 06 14:50:14 hostname kernel: ACPI: DMAR 0x000000006C406000 0001E0 (v01 INTEL S2600WF 00000001 INTL 20091013)
    Feb 06 14:50:14 hostname kernel: DMAR: IOMMU enabled
    Feb 06 14:50:14 hostname kernel: DMAR: Host address width 46
    Feb 06 14:50:14 hostname kernel: DMAR: DRHD base: 0x000000d37fc000 flags: 0x0
    Feb 06 14:50:14 hostname kernel: DMAR: dmar0: reg_base_addr d37fc000 ver 1:0 cap 8d2078c106f0466 ecap f020de
    Feb 06 14:50:14 hostname kernel: DMAR: DRHD base: 0x000000e0ffc000 flags: 0x0
    Feb 06 14:50:14 hostname kernel: DMAR: dmar1: reg_base_addr e0ffc000 ver 1:0 cap 8d2078c106f0466 ecap f020de
    Feb 06 14:50:14 hostname kernel: DMAR: DRHD base: 0x000000ee7fc000 flags: 0x0
Setup system proxy (if needed). Add the following configuration into the
``/etc/environment`` file:

.. note:: The proxy server name/port and IPs should be changed according to
   the actual/current proxy configuration in the lab.

.. code-block:: console

    export http_proxy=http://proxy.company.com:port
    export https_proxy=http://proxy.company.com:port
    export ftp_proxy=http://proxy.company.com:port
    export no_proxy=localhost,127.0.0.1,company.com,<IP-OF-HOST1>,<IP-OF-HOST2>,...
    export NO_PROXY=localhost,127.0.0.1,company.com,<IP-OF-HOST1>,<IP-OF-HOST2>,...
Upgrade the system:

.. code-block:: console

    sudo -EH apt-get update
    sudo -EH apt-get upgrade
    sudo -EH apt-get dist-upgrade

Install dependencies needed for DevStack:

.. code-block:: console

    sudo -EH apt-get install python
    sudo -EH apt-get install python-dev
    sudo -EH apt-get install python-pip
Setup SR-IOV ports on the host:

.. note:: The ``enp24s0f0``, ``enp24s0f1`` are physical function (PF)
   interfaces on a host and ``enp24s0f3`` is a public interface used in
   OpenStack, so the interface names should be changed according to the HW
   environment used for testing.

.. code-block:: console

    sudo ip link set dev enp24s0f0 up
    sudo ip link set dev enp24s0f1 up
    sudo ip link set dev enp24s0f3 up

    # Create VFs on PF
    echo 2 | sudo tee /sys/class/net/enp24s0f0/device/sriov_numvfs
    echo 2 | sudo tee /sys/class/net/enp24s0f1/device/sriov_numvfs
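After requesting the VFs it is worth checking that the kernel actually created
them; a quick verification sketch (interface names as in the note above):

.. code-block:: console

    # Each PF should now report the requested number of VFs (2 here),
    # and the VFs also show up as PCI devices.
    cat /sys/class/net/enp24s0f0/device/sriov_numvfs
    cat /sys/class/net/enp24s0f1/device/sriov_numvfs
    lspci | grep -i "Virtual Function"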
DevStack installation
^^^^^^^^^^^^^^^^^^^^^

Use the official `Devstack <https://docs.openstack.org/devstack/pike/>`_
documentation to install OpenStack on a host. Please note that the stable
``pike`` branch of the devstack repo should be used during the installation.
The required ``local.conf`` configuration file is described below.

DevStack configuration file:

.. note:: Update the devstack configuration file by replacing angular brackets
   with a short description inside.

.. note:: Use ``lspci | grep Ether`` & ``lspci -n | grep <PCI ADDRESS>``
   commands to get the device and vendor id of the virtual function (VF).

.. literalinclude:: code/single-devstack-local.conf

Start the devstack installation on the host.
TG host configuration
^^^^^^^^^^^^^^^^^^^^^

Yardstick automatically installs and configures the Trex traffic generator on
the TG host based on the provided POD file (see below). It is nevertheless
recommended to check the compatibility of the installed NIC on the TG server
with software Trex using the manual at
https://trex-tgn.cisco.com/trex/doc/trex_manual.html.
Run the Sample VNF test case
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

There is an example of a Sample VNF test case ready to be executed in an
OpenStack environment with SR-IOV support: ``samples/vnf_samples/nsut/vfw/
tc_heat_sriov_external_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``.

Install yardstick using `Install Yardstick (NSB Testing)`_ steps for the
OpenStack context.

Create the pod file for the TG in the yardstick repo folder located in the
yardstick container:

.. note:: The ``ip``, ``user``, ``password`` and ``vpci`` fields should be
   changed according to the HW environment used for the testing. Use the
   ``lshw -c network -businfo`` command to get the PF PCI address for the
   ``vpci`` field.

.. literalinclude:: code/single-yardstick-pod.conf

Run the sample vFW RFC2544 SR-IOV TC (``samples/vnf_samples/nsut/vfw/
tc_heat_sriov_external_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``) in the heat
context using the steps described in the `NS testing - using yardstick CLI`_
section.
Multi node OpenStack TG and VNF setup (two nodes)
-------------------------------------------------

.. code-block:: console

  +----------------------------+                  +----------------------------+
  |OpenStack(DevStack)         |                  |OpenStack(DevStack)         |
  |                            |                  |                            |
  |   +--------------------+   |                  |   +--------------------+   |
  |   |sample-VNF VM       |   |                  |   |sample-VNF VM       |   |
  |   |                    |   |                  |   |                    |   |
  |   |         TG         |   |                  |   |        DUT         |   |
  |   |    trafficgen_1    |   |                  |   |       (VNF)        |   |
  |   |                    |   |                  |   |                    |   |
  |   +--------+  +--------+   |                  |   +--------+  +--------+   |
  |   | VF NIC |  | VF NIC |   |                  |   | VF NIC |  | VF NIC |   |
  |   +----+---+--+----+---+   |                  |   +-----+--+--+----+---+   |
  |        ^          ^        |                  |         ^         ^        |
  |        |          |        |                  |         |         |        |
  +--------+----------+--------+                  +---------+---------+--------+
  |       VF0        VF1       |                  |        VF0       VF1       |
  |        ^          ^        |                  |         ^         ^        |
  |        |   SUT2   |        |                  |         |   SUT1  |        |
  |        |          +-------+ (PF0)<----->(PF0) +---------+         |        |
  |        |                  |                   |                   |        |
  |        +------------------+ (PF1)<----->(PF1) +-------------------+        |
  |                           |                   |                            |
  +----------------------------+                  +----------------------------+
          host2 (compute)                               host1 (controller)
Controller/Compute pre-configuration
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Pre-configuration of the controller and compute hosts is the same as
described in the `Host pre-configuration`_ section. Follow the steps in that
section.
DevStack configuration
^^^^^^^^^^^^^^^^^^^^^^

Use the official `Devstack <https://docs.openstack.org/devstack/pike/>`_
documentation to install OpenStack on the hosts. Please note that the stable
``pike`` branch of the devstack repo should be used during the installation.
The required ``local.conf`` configuration files are described below.

.. note:: Update the devstack configuration files by replacing angular
   brackets with a short description inside.

.. note:: Use ``lspci | grep Ether`` & ``lspci -n | grep <PCI ADDRESS>``
   commands to get the device and vendor id of the virtual function (VF).

DevStack configuration file for the controller host:

.. literalinclude:: code/multi-devstack-controller-local.conf

DevStack configuration file for the compute host:

.. literalinclude:: code/multi-devstack-compute-local.conf

Start the devstack installation on the controller and compute hosts.
Run the sample vFW TC
^^^^^^^^^^^^^^^^^^^^^

Install yardstick using `Install Yardstick (NSB Testing)`_ steps for the
OpenStack context.

Run the sample vFW RFC2544 SR-IOV TC (``samples/vnf_samples/nsut/vfw/
tc_heat_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``) in the heat
context using the steps described in the `NS testing - using yardstick CLI`_
section and the following yardstick command line arguments:

.. code-block:: console

    yardstick -d task start --task-args='{"provider": "sriov"}' \
    samples/vnf_samples/nsut/vfw/tc_heat_rfc2544_ipv4_1rule_1flow_64B_trex.yaml
Enabling other Traffic generators
=================================

IxLoad
~~~~~~

1. Software needed: IxLoadAPI ``<IxLoadTclApi version>Linux64.bin.tgz`` and
   ``<IxOS version>Linux64.bin.tar.gz`` (download from the ixia support site).
   Install ``<IxLoadTclApi version>Linux64.bin.tgz`` and
   ``<IxOS version>Linux64.bin.tar.gz``.
   If the installation was not done inside the container, after installing
   the IXIA client, check ``/opt/ixia/ixload/<ver>/bin/ixloadpython`` and make
   sure you can run this cmd inside the yardstick container. Usually the user
   is required to copy or link ``/opt/ixia/python/<ver>/bin/ixiapython`` to
   ``/usr/bin/ixiapython<ver>`` inside the container.
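   For example, the link step could look like the sketch below (keep the
   ``<ver>`` of your installed IXIA client; shown as a placeholder, not a
   literal path):

   .. code-block:: console

       ln -s /opt/ixia/python/<ver>/bin/ixiapython /usr/bin/ixiapython<ver>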
2. Update the ``pod_ixia.yaml`` file with the ixia details.

   .. code-block:: console

       cp <repo>/etc/yardstick/nodes/pod.yaml.nsb.sample.ixia etc/yardstick/nodes/pod_ixia.yaml

   Config ``pod_ixia.yaml``:

   .. code-block:: yaml

       nodes:
       -
           name: trafficgen_1
           role: IxNet
           ip: 1.2.1.1 # ixia machine ip
           user: user
           password: r00t
           key_filename: /root/.ssh/id_rsa
           tg_config:
               ixchassis: "1.2.1.7" # ixia chassis ip
               tcl_port: "8009" # tcl server port
               lib_path: "/opt/ixia/ixos-api/8.01.0.2/lib/ixTcl1.0"
               root_dir: "/opt/ixia/ixos-api/8.01.0.2/"
               py_bin_path: "/opt/ixia/ixload/8.01.106.3/bin/"
               py_lib_path: "/opt/ixia/ixnetwork/8.01.1029.14/lib/PythonApi"
               dut_result_dir: "/mnt/ixia"
           interfaces:
               xe0:  # logical name from topology.yaml and vnfd.yaml
                   vpci: "2:5" # Card:port
                   driver: "none"
                   dpdk_port_num: 0
                   local_ip: "152.16.100.20"
                   netmask: "255.255.0.0"
                   local_mac: "00:98:10:64:14:00"
               xe1:  # logical name from topology.yaml and vnfd.yaml
                   vpci: "2:6" # [(Card, port)]
                   driver: "none"
                   dpdk_port_num: 1
                   local_ip: "152.40.40.20"
                   netmask: "255.255.0.0"
                   local_mac: "00:98:28:28:14:00"
   For sriov/ovs_dpdk pod files, please refer to the Standalone Virtualization
   section above for the ovs-dpdk/sriov configuration.

3. Start the IxOS TCL Server (install 'Ixia IxExplorer IxOS <version>').
   You will also need to configure the IxLoad machine to start the IXIA
   IxosTclServer. This can be started like so:

   * Connect to the IxLoad machine using RDP.
   * Go to:
     ``Start->Programs->Ixia->IxOS->IxOS 8.01-GA-Patch1->Ixia Tcl Server IxOS 8.01-GA-Patch1``
     or
     ``"C:\Program Files (x86)\Ixia\IxOS\8.01-GA-Patch1\ixTclServer.exe"``

4. Create a folder ``Results`` in c:\ and share the folder on the network.

5. Execute the testcase in the samplevnf folder, e.g.
   ``<repo>/samples/vnf_samples/nsut/vfw/tc_baremetal_http_ixload_1b_Requests-65000_Concurrency.yaml``
IxNetwork
~~~~~~~~~

1. Software needed: ``IxNetworkAPI<ixnetwork version>Linux64.bin.tgz``
   (download from the ixia support site).
   Install ``IxNetworkAPI<ixnetwork version>Linux64.bin.tgz``.
2. Update the ``pod_ixia.yaml`` file with the ixia details.

   .. code-block:: console

       cp <repo>/etc/yardstick/nodes/pod.yaml.nsb.sample.ixia etc/yardstick/nodes/pod_ixia.yaml

   Config ``pod_ixia.yaml``:

   .. code-block:: yaml

       nodes:
       -
           name: trafficgen_1
           role: IxNet
           ip: 1.2.1.1 # ixia machine ip
           user: user
           password: r00t
           key_filename: /root/.ssh/id_rsa
           tg_config:
               ixchassis: "1.2.1.7" # ixia chassis ip
               tcl_port: "8009" # tcl server port
               lib_path: "/opt/ixia/ixos-api/8.01.0.2/lib/ixTcl1.0"
               root_dir: "/opt/ixia/ixos-api/8.01.0.2/"
               py_bin_path: "/opt/ixia/ixload/8.01.106.3/bin/"
               py_lib_path: "/opt/ixia/ixnetwork/8.01.1029.14/lib/PythonApi"
               dut_result_dir: "/mnt/ixia"
           interfaces:
               xe0:  # logical name from topology.yaml and vnfd.yaml
                   vpci: "2:5" # Card:port
                   driver: "none"
                   dpdk_port_num: 0
                   local_ip: "152.16.100.20"
                   netmask: "255.255.0.0"
                   local_mac: "00:98:10:64:14:00"
               xe1:  # logical name from topology.yaml and vnfd.yaml
                   vpci: "2:6" # [(Card, port)]
                   driver: "none"
                   dpdk_port_num: 1
                   local_ip: "152.40.40.20"
                   netmask: "255.255.0.0"
                   local_mac: "00:98:28:28:14:00"
   For sriov/ovs_dpdk pod files, please refer to the Standalone Virtualization
   section above for the ovs-dpdk/sriov configuration.

3. Start the IxNetwork TCL Server.
   You will also need to configure the IxNetwork machine to start the IXIA
   IxNetworkTclServer. This can be started like so:

   * Connect to the IxNetwork machine using RDP.
   * Go to:
     ``Start->Programs->Ixia->IxNetwork->IxNetwork 7.21.893.14 GA->IxNetworkTclServer``
     (or ``IxNetworkApiServer``)

4. Execute the testcase in the samplevnf folder, e.g.
   ``<repo>/samples/vnf_samples/nsut/vfw/tc_baremetal_rfc2544_ipv4_1rule_1flow_64B_ixia.yaml``