.. This work is licensed under a Creative Commons Attribution 4.0 International
.. http://creativecommons.org/licenses/by/4.0
.. (c) OPNFV, 2016-2017 Intel Corporation.
Yardstick - NSB Testing - Installation
======================================
The Network Service Benchmarking (NSB) extends the Yardstick framework to
perform VNF characterization and benchmarking in three different execution
environments: bare metal (i.e. a native Linux environment), a standalone
virtualized environment, and a managed virtualized environment (e.g.
OpenStack). It also adds the capability to interact with external traffic
generators, both hardware and software based, to generate and validate traffic
according to user-defined profiles.
The steps needed to run Yardstick with NSB testing are:

* Install Yardstick (NSB testing).
* Set up or reference a pod.yaml file describing the test topology.
* Create or reference the test configuration YAML file.
Refer to the chapter Yardstick Installation for more information on Yardstick
prerequisites.

Several prerequisites are needed for Yardstick (VNF testing):

- Python modules: pyzmq, pika.
Hardware & Software Ingredients
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

+-----------+--------------------+
| Item      | Description        |
+-----------+--------------------+
| OS        | Ubuntu 16.04.3 LTS |
+-----------+--------------------+
| kernel    | 4.4.0-34-generic   |
+-----------+--------------------+
Boot and BIOS settings:

+------------------+---------------------------------------------------+
| Boot settings    | default_hugepagesz=1G hugepagesz=1G hugepages=16  |
|                  | hugepagesz=2M hugepages=2048 isolcpus=1-11,22-33  |
|                  | nohz_full=1-11,22-33 rcu_nocbs=1-11,22-33         |
|                  | iommu=on iommu=pt intel_iommu=on                  |
|                  | Note: nohz_full and rcu_nocbs are used to disable |
|                  | Linux kernel interrupts                           |
+------------------+---------------------------------------------------+
| BIOS             | CPU Power and Performance Policy <Performance>    |
|                  | CPU C-state Disabled                              |
|                  | CPU P-state Disabled                              |
|                  | Enhanced Intel® Speedstep® Tech Disabled          |
|                  | Hyper-Threading Technology (If supported) Enabled |
|                  | Virtualization Technology Enabled                 |
|                  | Intel(R) VT for Direct I/O Enabled                |
|                  | Coherency Enabled                                 |
|                  | Turbo Boost Disabled                              |
+------------------+---------------------------------------------------+
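As a quick sanity check, the memory implied by the hugepage parameters in the
example boot line above can be computed and compared against the running host
(a sketch; the page counts are the ones from the table):

```shell
# Hugepage memory implied by the example kernel boot line:
#   16 x 1G pages + 2048 x 2M pages
gib_pages=16
two_mb_pages=2048
total_mb=$(( gib_pages * 1024 + two_mb_pages * 2 ))
echo "hugepages reserved: ${total_mb} MB"
# Compare on a running host with: grep Huge /proc/meminfo
```

For the values in the table this reserves 20480 MB in total, so the host needs
well over 20 GB of RAM for the settings above to make sense.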
Install Yardstick (NSB Testing)
-------------------------------

Download the source code and install Yardstick from it:

.. code-block:: console

  git clone https://gerrit.opnfv.org/gerrit/yardstick
  cd yardstick

  # Switch to latest stable branch
  # git checkout <tag or stable branch>
  git checkout stable/euphrates
Configure the network proxy, either by using the environment variables or by
setting the global environment file ``/etc/environment``:

.. code-block:: ini

  http_proxy='http://proxy.company.com:port'
  https_proxy='http://proxy.company.com:port'

.. code-block:: console

  export http_proxy='http://proxy.company.com:port'
  export https_proxy='http://proxy.company.com:port'
The last step is to modify the Yardstick installation inventory used by
Ansible:

.. code-block:: ini

  localhost ansible_connection=local

  [yardstick-standalone]
  yardstick-standalone-node ansible_host=192.168.1.2
  yardstick-standalone-node-2 ansible_host=192.168.1.3

  # The section below is kept only for backward compatibility.
  # It will be removed later.
To execute an installation for a Bare-Metal or a Standalone context:

.. code-block:: console

  ./nsb_setup.sh

To execute an installation for an OpenStack context:

.. code-block:: console

  ./nsb_setup.sh <path to admin-openrc.sh>

The above commands set up a Docker container with the latest Yardstick code.
To enter it, execute:

.. code-block:: console

  docker exec -it yardstick bash

The setup also automatically downloads all the packages needed for NSB
testing. Refer to the chapter :doc:`04-installation` for more on Docker:
**Install Yardstick using Docker (recommended)**
System Topology:

.. code-block:: console

  +----------+              +----------+
  |          |              |          |
  |          | (0)----->(0) |          |
  |   TG1    |              |   DUT    |
  |          |              |          |
  |          | (n)<-----(n) |          |
  +----------+              +----------+
  trafficgen_1                  vnf
Environment parameters and credentials
--------------------------------------

Config yardstick conf
^^^^^^^^^^^^^^^^^^^^^

If the user did not run 'yardstick env influxdb' inside the container (which
generates a correct yardstick.conf), then create the config file manually
(run inside the container)::

  cp ./etc/yardstick/yardstick.conf.sample /etc/yardstick/yardstick.conf
  vi /etc/yardstick/yardstick.conf

Add trex_path, trex_client_lib and bin_path to the 'nsb' section:

.. code-block:: ini

  [DEFAULT]
  dispatcher = file, influxdb

  [dispatcher_influxdb]
  target = http://{YOUR_IP_HERE}:8086

  [nsb]
  trex_path=/opt/nsb_bin/trex/scripts
  bin_path=/opt/nsb_bin
  trex_client_lib=/opt/nsb_bin/trex_client/stl
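After editing the ``nsb`` section, a small helper can confirm that the
configured paths actually exist on disk (a sketch; ``check_nsb_paths`` and its
prefix argument are illustrative helpers, not part of Yardstick):

```shell
# Verify that the nsb paths configured in yardstick.conf exist on disk.
# The prefix argument (default /opt/nsb_bin) keeps the helper reusable.
check_nsb_paths() {
  local prefix="${1:-/opt/nsb_bin}"
  local missing=0
  for p in "$prefix/trex/scripts" "$prefix/trex_client/stl"; do
    if [ ! -d "$p" ]; then
      echo "missing: $p"
      missing=1
    fi
  done
  [ "$missing" -eq 0 ] && echo "nsb paths OK under $prefix"
}
```

Inside the container, ``check_nsb_paths /opt/nsb_bin`` should report the paths
as OK once the NSB setup has downloaded TRex.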
Run Yardstick - Network Service Testcases
-----------------------------------------

NS testing - using yardstick CLI
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

See :doc:`04-installation`

.. code-block:: console

  docker exec -it yardstick /bin/bash
  source /etc/yardstick/openstack.creds (only for heat TC if nsb_setup.sh was NOT used)
  export EXTERNAL_NETWORK="<openstack public network>" (only for heat TC)
  yardstick --debug task start yardstick/samples/vnf_samples/nsut/<vnf>/<test case>
Network Service Benchmarking - Bare-Metal
-----------------------------------------

Bare-Metal Config pod.yaml describing Topology
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Bare-Metal 2-Node setup:
########################

.. code-block:: console

  +----------+              +----------+
  |          |              |          |
  |          | (0)----->(0) |          |
  |   TG1    |              |   DUT    |
  |          |              |          |
  |          | (n)<-----(n) |          |
  +----------+              +----------+
  trafficgen_1                  vnf

Bare-Metal 3-Node setup - Correlated Traffic:
#############################################

.. code-block:: console

  +----------+              +----------+            +------------+
  |          |              |          |            |            |
  |          | (0)----->(0) |          |            |    UDP     |
  |   TG1    |              |   DUT    |            |   Replay   |
  |          |              |          |(1)<---->(0)|            |
  +----------+              +----------+            +------------+
  trafficgen_1                  vnf                  trafficgen_2
Bare-Metal Config pod.yaml
^^^^^^^^^^^^^^^^^^^^^^^^^^

Before executing Yardstick test cases, make sure that pod.yaml reflects the
topology and update all the required fields::

  cp /etc/yardstick/nodes/pod.yaml.nsb.sample /etc/yardstick/nodes/pod.yaml

.. code-block:: YAML

  nodes:
  -
      interfaces:
          xe0:  # logical name from topology.yaml and vnfd.yaml
              driver: i40e  # default kernel driver
              local_ip: "152.16.100.20"
              netmask: "255.255.255.0"
              local_mac: "00:00:00:00:00:01"
          xe1:  # logical name from topology.yaml and vnfd.yaml
              driver: i40e  # default kernel driver
              local_ip: "152.16.40.20"
              netmask: "255.255.255.0"
              local_mac: "00:00:00:00:00:02"
  -
      host: 1.1.1.2  # BM: host == ip; virtualized env: host == compute node
      interfaces:
          xe0:  # logical name from topology.yaml and vnfd.yaml
              driver: i40e  # default kernel driver
              local_ip: "152.16.100.19"
              netmask: "255.255.255.0"
              local_mac: "00:00:00:00:00:03"
          xe1:  # logical name from topology.yaml and vnfd.yaml
              driver: i40e  # default kernel driver
              local_ip: "152.16.40.19"
              netmask: "255.255.255.0"
              local_mac: "00:00:00:00:00:04"
      routing_table:
      - network: "152.16.100.20"
        netmask: "255.255.255.0"
        gateway: "152.16.100.20"
      - network: "152.16.40.20"
        netmask: "255.255.255.0"
        gateway: "152.16.40.20"
      nd_route_tbl:
      - network: "0064:ff9b:0:0:0:0:9810:6414"
        gateway: "0064:ff9b:0:0:0:0:9810:6414"
      - network: "0064:ff9b:0:0:0:0:9810:2814"
        gateway: "0064:ff9b:0:0:0:0:9810:2814"
Network Service Benchmarking - Standalone Virtualization
--------------------------------------------------------

SR-IOV
^^^^^^

SR-IOV Pre-requisites
#####################

a) Create a bridge for the VM to connect to the external network:

   .. code-block:: console

     brctl addbr br-int
     brctl addif br-int <interface_name>  # This interface is connected to internet

b) Build the guest image for the VNF to run.
   Most of the sample test cases in Yardstick are using a guest image called
   ``yardstick-image``, which deviates from an Ubuntu Cloud Server image.
   Yardstick has a tool for building this custom image with samplevnf.
   It is necessary to have ``sudo`` rights to use this tool.

   You may also need to install several additional packages to use this tool,
   by following the commands below::

     sudo apt-get update && sudo apt-get install -y qemu-utils kpartx

   This image can be built using the following commands in the directory where
   Yardstick is installed:

   .. code-block:: console

     export YARD_IMG_ARCH='amd64'
     echo "Defaults env_keep += \"YARD_IMG_ARCH\"" | sudo tee -a /etc/sudoers

   Please use the ansible script to generate the cloud image; for more details
   refer to chapter :doc:`04-installation`.

.. note:: The VM should be built with a static IP and should be accessible
   from the yardstick host.
SR-IOV Config pod.yaml describing Topology
##########################################

SR-IOV 2-Node setup:
####################

.. code-block:: console

                               +--------------------+
                               |                    |
                               |                    |
                               |        DUT         |
                               |       (VNF)        |
                               |                    |
                               +--------------------+
                               | VF NIC |  | VF NIC |
                               +--------+  +--------+
                                     ^          ^
                                     |          |
                                     |          |
  +----------+               +-------------------------+
  |          |               |       ^          ^      |
  |          |               |       |          |      |
  |          | (0)<----->(0) | ------           |      |
  |    TG1   |               |           SUT    |      |
  |          |               |                  |      |
  |          | (n)<----->(n) | -----------------       |
  +----------+               +-------------------------+
  trafficgen_1                          host
SR-IOV 3-Node setup - Correlated Traffic
########################################

.. code-block:: console

                               +--------------------+
                               |                    |
                               |                    |
                               |        DUT         |
                               |       (VNF)        |
                               |                    |
                               +--------------------+
                               | VF NIC |  | VF NIC |
                               +--------+  +--------+
                                     ^          ^
                                     |          |
                                     |          |
  +----------+               +-------------------------+            +--------------+
  |          |               |       ^          ^      |            |              |
  |          |               |       |          |      |            |              |
  |          | (0)<----->(0) | ------           |      |            |     TG2      |
  |    TG1   |               |           SUT    |      |            | (UDP Replay) |
  |          |               |                  |      |            |              |
  |          | (n)<----->(n) | ------                  | (n)<-->(n) |              |
  +----------+               +-------------------------+            +--------------+
  trafficgen_1                          host                          trafficgen_2
Before executing Yardstick test cases, make sure that pod.yaml reflects the
topology and update all the required fields.

.. code-block:: console

  cp <yardstick>/etc/yardstick/nodes/standalone/trex_bm.yaml.sample /etc/yardstick/nodes/standalone/pod_trex.yaml
  cp <yardstick>/etc/yardstick/nodes/standalone/host_sriov.yaml /etc/yardstick/nodes/standalone/host_sriov.yaml

.. note:: Update all the required fields like ip, user, password, pcis, etc.
SR-IOV Config pod_trex.yaml
###########################

.. code-block:: YAML

  nodes:
  -
      key_filename: /root/.ssh/id_rsa
      interfaces:
          xe0:  # logical name from topology.yaml and vnfd.yaml
              driver: i40e  # default kernel driver
              local_ip: "152.16.100.20"
              netmask: "255.255.255.0"
              local_mac: "00:00:00:00:00:01"
          xe1:  # logical name from topology.yaml and vnfd.yaml
              driver: i40e  # default kernel driver
              local_ip: "152.16.40.20"
              netmask: "255.255.255.0"
              local_mac: "00:00:00:00:00:02"

SR-IOV Config host_sriov.yaml
#############################
SR-IOV testcase update: ``<yardstick>/samples/vnf_samples/nsut/vfw/tc_sriov_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``

Update "contexts" section
"""""""""""""""""""""""""

.. code-block:: YAML

  contexts:
  - type: Node
    file: /etc/yardstick/nodes/standalone/pod_trex.yaml
  - type: StandaloneSriov
    file: /etc/yardstick/nodes/standalone/host_sriov.yaml
    flavor:
      images: "/var/lib/libvirt/images/ubuntu.qcow2"
      user: ""      # update VM username
      password: ""  # update password
    networks:
      mgmt:
        cidr: '1.1.1.61/24'  # Update VM IP address: <ip>/<mask> if static, <start of ip>/<mask> if dynamic
      xe0:
        phy_port: "0000:05:00.0"
        cidr: '152.16.100.10/24'
        gateway_ip: '152.16.100.20'
      xe1:
        phy_port: "0000:05:00.1"
        cidr: '152.16.40.10/24'
        gateway_ip: '152.16.40.20'
OVS-DPDK
^^^^^^^^

OVS-DPDK Pre-requisites
#######################

a) Create a bridge for the VM to connect to the external network:

   .. code-block:: console

     brctl addbr br-int
     brctl addif br-int <interface_name>  # This interface is connected to internet

b) Build the guest image for the VNF to run.
   Most of the sample test cases in Yardstick are using a guest image called
   ``yardstick-image``, which deviates from an Ubuntu Cloud Server image.
   Yardstick has a tool for building this custom image with samplevnf.
   It is necessary to have ``sudo`` rights to use this tool.

   You may also need to install several additional packages to use this tool,
   by following the commands below::

     sudo apt-get update && sudo apt-get install -y qemu-utils kpartx

   This image can be built using the following commands in the directory where
   Yardstick is installed::

     export YARD_IMG_ARCH='amd64'
     echo "Defaults env_keep += \"YARD_IMG_ARCH\"" | sudo tee -a /etc/sudoers
     sudo tools/yardstick-img-dpdk-modify tools/ubuntu-server-cloudimg-samplevnf-modify.sh

   For more details refer to chapter :doc:`04-installation`.

.. note:: The VM should be built with a static IP and should be accessible
   from the yardstick host.

c) OVS & DPDK version:

   - OVS 2.7 and DPDK 16.11.1 and above versions are supported.

d) Setup OVS/DPDK on the host.
   Please refer to the following link on how to set up `OVS-DPDK
   <http://docs.openvswitch.org/en/latest/intro/install/dpdk/>`_.
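The minimum-version requirement in c) can be scripted as a quick check using
``sort -V`` (a sketch; ``version_ge`` is an illustrative helper, and the
installed versions must be read on the host, e.g. from ``ovs-vsctl --version``
and the DPDK build used for OVS):

```shell
# True when $1 >= $2 under version ordering (requires sort -V, GNU coreutils).
version_ge() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# Substitute the versions actually installed on your host:
version_ge "2.7.0" "2.7"       && echo "OVS version OK"
version_ge "16.11.1" "16.11.1" && echo "DPDK version OK"
```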
OVS-DPDK Config pod.yaml describing Topology
############################################

OVS-DPDK 2-Node setup:
######################

.. code-block:: console

                               +--------------------+
                               |                    |
                               |                    |
                               |        DUT         |
                               |       (VNF)        |
                               |                    |
                               +--------------------+
                               | virtio |  | virtio |
                               +--------+  +--------+
                                     ^          ^
                                     |          |
                                     |          |
                               +--------+  +--------+
                               | vHOST0 |  | vHOST1 |
  +----------+               +-------------------------+
  |          |               |       ^          ^      |
  |          |               |       |          |      |
  |          | (0)<----->(0) | ------           |      |
  |    TG1   |               |          SUT     |      |
  |          |               |       (ovs-dpdk) |      |
  |          | (n)<----->(n) | -----------------       |
  +----------+               +-------------------------+
  trafficgen_1                          host

OVS-DPDK 3-Node setup - Correlated Traffic
##########################################

.. code-block:: console

                               +--------------------+
                               |                    |
                               |                    |
                               |        DUT         |
                               |       (VNF)        |
                               |                    |
                               +--------------------+
                               | virtio |  | virtio |
                               +--------+  +--------+
                                     ^          ^
                                     |          |
                                     |          |
                               +--------+  +--------+
                               | vHOST0 |  | vHOST1 |
  +----------+               +-------------------------+            +------------+
  |          |               |       ^          ^      |            |            |
  |          |               |       |          |      |            |            |
  |          | (0)<----->(0) | ------           |      |            |    TG2     |
  |    TG1   |               |          SUT     |      |            |(UDP Replay)|
  |          |               |       (ovs-dpdk) |      |            |            |
  |          | (n)<----->(n) | ------                  | (n)<-->(n) |            |
  +----------+               +-------------------------+            +------------+
  trafficgen_1                          host                         trafficgen_2
Before executing Yardstick test cases, make sure that pod.yaml reflects the
topology and update all the required fields.

.. code-block:: console

  cp <yardstick>/etc/yardstick/nodes/standalone/trex_bm.yaml.sample /etc/yardstick/nodes/standalone/pod_trex.yaml
  cp <yardstick>/etc/yardstick/nodes/standalone/host_ovs.yaml /etc/yardstick/nodes/standalone/host_ovs.yaml

.. note:: Update all the required fields like ip, user, password, pcis, etc.
OVS-DPDK Config pod_trex.yaml
#############################

.. code-block:: YAML

  nodes:
  -
      interfaces:
          xe0:  # logical name from topology.yaml and vnfd.yaml
              driver: i40e  # default kernel driver
              local_ip: "152.16.100.20"
              netmask: "255.255.255.0"
              local_mac: "00:00:00:00:00:01"
          xe1:  # logical name from topology.yaml and vnfd.yaml
              driver: i40e  # default kernel driver
              local_ip: "152.16.40.20"
              netmask: "255.255.255.0"
              local_mac: "00:00:00:00:00:02"

OVS-DPDK Config host_ovs.yaml
#############################
ovs_dpdk testcase update: ``<yardstick>/samples/vnf_samples/nsut/vfw/tc_ovs_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``

Update "contexts" section
"""""""""""""""""""""""""

.. code-block:: YAML

  contexts:
  - type: Node
    file: /etc/yardstick/nodes/standalone/pod_trex.yaml
  - type: StandaloneOvsDpdk
    file: /etc/yardstick/nodes/standalone/host_ovs.yaml
    flavor:
      images: "/var/lib/libvirt/images/ubuntu.qcow2"
      user: ""      # update VM username
      password: ""  # update password
    networks:
      mgmt:
        cidr: '1.1.1.61/24'  # Update VM IP address: <ip>/<mask> if static, <start of ip>/<mask> if dynamic
      xe0:
        phy_port: "0000:05:00.0"
        cidr: '152.16.100.10/24'
        gateway_ip: '152.16.100.20'
      xe1:
        phy_port: "0000:05:00.1"
        cidr: '152.16.40.10/24'
        gateway_ip: '152.16.40.20'
Network Service Benchmarking - OpenStack with SR-IOV support
------------------------------------------------------------

This section describes how to run a Sample VNF test case, using Heat context,
with SR-IOV. It also covers how to install OpenStack in Ubuntu 16.04, using
DevStack, with SR-IOV support.

Single node OpenStack setup with external TG
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. code-block:: console

                                 +----------------------------+
                                 |OpenStack(DevStack)         |
                                 |                            |
                                 |   +--------------------+   |
                                 |   |sample-VNF VM       |   |
                                 |   |                    |   |
                                 |   |        DUT         |   |
                                 |   |       (VNF)        |   |
                                 |   |                    |   |
                                 |   +--------+  +--------+   |
                                 |   | VF NIC |  | VF NIC |   |
                                 |   +-----+--+--+----+---+   |
                                 |         |          |       |
  +----------+                   +---------+----------+-------+
  |          |                             |          |
  |          |                             |          |
  |          |                             |          |
  |    TG    | (PF0)<----->(PF0) +---------+          |
  |          |                                        |
  |          | (PF1)<----->(PF1) +--------------------+
  |          |
  +----------+                   +----------------------------+
  trafficgen_1                                host
Host pre-configuration
######################

.. warning:: The following configuration requires sudo access to the system.
   Make sure that your user has the access.

Enable the Intel VT-d or AMD-Vi extension in the BIOS. Some system
manufacturers disable this extension by default.

Activate the Intel VT-d or AMD-Vi extension in the kernel by modifying the
GRUB config file ``/etc/default/grub``.

For the Intel platform:

.. code-block:: ini

  GRUB_CMDLINE_LINUX_DEFAULT="intel_iommu=on"

For the AMD platform:

.. code-block:: ini

  GRUB_CMDLINE_LINUX_DEFAULT="amd_iommu=on"

Update the grub configuration file and restart the system:

.. warning:: The following command will reboot the system.
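On Ubuntu this is typically done with the standard commands below (a sketch;
RHEL-based systems use ``grub2-mkconfig -o /boot/grub2/grub.cfg`` instead of
``update-grub``):

```shell
sudo update-grub   # regenerate grub.cfg from /etc/default/grub
sudo reboot        # reboot so the new kernel command line takes effect
```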
Make sure the extension has been enabled:

.. code-block:: console

  sudo journalctl -b 0 | grep -e IOMMU -e DMAR

  Feb 06 14:50:14 hostname kernel: ACPI: DMAR 0x000000006C406000 0001E0 (v01 INTEL  S2600WF  00000001 INTL 20091013)
  Feb 06 14:50:14 hostname kernel: DMAR: IOMMU enabled
  Feb 06 14:50:14 hostname kernel: DMAR: Host address width 46
  Feb 06 14:50:14 hostname kernel: DMAR: DRHD base: 0x000000d37fc000 flags: 0x0
  Feb 06 14:50:14 hostname kernel: DMAR: dmar0: reg_base_addr d37fc000 ver 1:0 cap 8d2078c106f0466 ecap f020de
  Feb 06 14:50:14 hostname kernel: DMAR: DRHD base: 0x000000e0ffc000 flags: 0x0
  Feb 06 14:50:14 hostname kernel: DMAR: dmar1: reg_base_addr e0ffc000 ver 1:0 cap 8d2078c106f0466 ecap f020de
  Feb 06 14:50:14 hostname kernel: DMAR: DRHD base: 0x000000ee7fc000 flags: 0x0
Setup the system proxy (if needed). Add the following configuration into the
``/etc/environment`` file:

.. note:: The proxy server name/port and IPs should be changed according to
   the actual/current proxy configuration in the lab.

.. code-block:: console

  export http_proxy=http://proxy.company.com:port
  export https_proxy=http://proxy.company.com:port
  export ftp_proxy=http://proxy.company.com:port
  export no_proxy=localhost,127.0.0.1,company.com,<IP-OF-HOST1>,<IP-OF-HOST2>,...
  export NO_PROXY=localhost,127.0.0.1,company.com,<IP-OF-HOST1>,<IP-OF-HOST2>,...
Upgrade the system:

.. code-block:: console

  sudo -EH apt-get update
  sudo -EH apt-get upgrade
  sudo -EH apt-get dist-upgrade

Install the dependencies needed for DevStack:

.. code-block:: console

  sudo -EH apt-get install python
  sudo -EH apt-get install python-dev
  sudo -EH apt-get install python-pip
Setup SR-IOV ports on the host:

.. note:: ``enp24s0f0`` and ``enp24s0f1`` are physical function (PF)
   interfaces on the host and ``enp24s0f3`` is a public interface used in
   OpenStack, so the interface names should be changed according to the HW
   environment used for the testing.

.. code-block:: console

  sudo ip link set dev enp24s0f0 up
  sudo ip link set dev enp24s0f1 up
  sudo ip link set dev enp24s0f3 up

  # Create VFs on the PFs
  echo 2 | sudo tee /sys/class/net/enp24s0f0/device/sriov_numvfs
  echo 2 | sudo tee /sys/class/net/enp24s0f1/device/sriov_numvfs
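To confirm that the VFs were actually created, the ``sriov_numvfs`` value can
be read back (a sketch; ``check_vfs`` and its optional sysfs-root argument are
illustrative, not part of this guide's tooling):

```shell
# Check that a physical function exposes the expected number of VFs.
# The optional third argument overrides the sysfs root (useful for testing).
check_vfs() {
  local pf="$1" expected="$2" sysroot="${3:-/sys/class/net}"
  local path="$sysroot/$pf/device/sriov_numvfs"
  if [ -r "$path" ] && [ "$(cat "$path")" -eq "$expected" ]; then
    echo "OK: $pf exposes $expected VFs"
  else
    echo "FAIL: $pf VF count mismatch or SR-IOV not enabled"
  fi
}
```

On the host configured above, ``check_vfs enp24s0f0 2`` should report OK, and
``lspci | grep -i "Virtual Function"`` should list the new VFs.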
DevStack installation
#####################

Use the official `DevStack <https://docs.openstack.org/devstack/pike/>`_
documentation to install OpenStack on the host. Please note that the stable
``pike`` branch of the devstack repo should be used during the installation.
The required ``local.conf`` configuration file is described below.

DevStack configuration file:

.. note:: Update the devstack configuration file by replacing angle brackets
   with a short description inside.

.. note:: Use the ``lspci | grep Ether`` and ``lspci -n | grep <PCI ADDRESS>``
   commands to get the device and vendor id of the virtual function (VF).

.. literalinclude:: code/single-devstack-local.conf

Start the devstack installation on the host.
TG host configuration
#####################

Yardstick automatically installs and configures the TRex traffic generator on
the TG host based on the provided POD file (see below). It is nevertheless
recommended to check the compatibility of the NIC installed in the TG server
with the TRex software, using the manual at
https://trex-tgn.cisco.com/trex/doc/trex_manual.html.
Run the Sample VNF test case
############################

There is an example of a Sample VNF test case ready to be executed in an
OpenStack environment with SR-IOV support: ``samples/vnf_samples/nsut/vfw/
tc_heat_sriov_external_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``.

Install yardstick using the `Install Yardstick (NSB Testing)`_ steps for the
OpenStack context.

Create the pod file for the TG in the yardstick repo folder located in the
yardstick container:

.. note:: The ``ip``, ``user``, ``password`` and ``vpci`` fields should be
   changed according to the HW environment used for the testing. Use the
   ``lshw -c network -businfo`` command to get the PF PCI address for the
   ``vpci`` field.

.. literalinclude:: code/single-yardstick-pod.conf

Run the sample vFW RFC2544 SR-IOV TC (``samples/vnf_samples/nsut/vfw/
tc_heat_sriov_external_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``) in the heat
context using the steps described in the `NS testing - using yardstick CLI`_
section.
Multi node OpenStack TG and VNF setup (two nodes)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. code-block:: console

  +----------------------------+                   +----------------------------+
  |OpenStack(DevStack)         |                   |OpenStack(DevStack)         |
  |                            |                   |                            |
  |   +--------------------+   |                   |   +--------------------+   |
  |   |sample-VNF VM       |   |                   |   |sample-VNF VM       |   |
  |   |                    |   |                   |   |                    |   |
  |   |         TG         |   |                   |   |        DUT         |   |
  |   |    trafficgen_1    |   |                   |   |       (VNF)        |   |
  |   |                    |   |                   |   |                    |   |
  |   +--------+  +--------+   |                   |   +--------+  +--------+   |
  |   | VF NIC |  | VF NIC |   |                   |   | VF NIC |  | VF NIC |   |
  |   +----+---+--+----+---+   |                   |   +-----+--+--+----+---+   |
  |        |          |        |                   |         |          |       |
  +--------+----------+--------+                   +---------+----------+-------+
  |       VF0        VF1       |                   |        VF0        VF1      |
  |        |          |        |                   |         |          |       |
  |        |   SUT2   |        |                   |         |   SUT1   |       |
  |        |          +--------+ (PF0)<----->(PF0) +---------+          |       |
  |        +--------------------+ (PF1)<---->(PF1) +--------------------+      |
  |                            |                   |                            |
  +----------------------------+                   +----------------------------+
  host2 (compute)                                    host1 (controller)
Controller/Compute pre-configuration
####################################

Pre-configuration of the controller and compute hosts is the same as described
in the `Host pre-configuration`_ section. Follow the steps in that section.
DevStack configuration
######################

Use the official `DevStack <https://docs.openstack.org/devstack/pike/>`_
documentation to install OpenStack on the hosts. Please note that the stable
``pike`` branch of the devstack repo should be used during the installation.
The required ``local.conf`` configuration files are described below.

.. note:: Update the devstack configuration files by replacing angle brackets
   with a short description inside.

.. note:: Use the ``lspci | grep Ether`` and ``lspci -n | grep <PCI ADDRESS>``
   commands to get the device and vendor id of the virtual function (VF).

DevStack configuration file for the controller host:

.. literalinclude:: code/multi-devstack-controller-local.conf

DevStack configuration file for the compute host:

.. literalinclude:: code/multi-devstack-compute-local.conf

Start the devstack installation on the controller and compute hosts.
Run the sample vFW TC
#####################

Install yardstick using the `Install Yardstick (NSB Testing)`_ steps for the
OpenStack context.

Run the sample vFW RFC2544 SR-IOV TC (``samples/vnf_samples/nsut/vfw/
tc_heat_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``) in the heat context using
the steps described in the `NS testing - using yardstick CLI`_ section and the
following yardstick command line arguments:

.. code-block:: console

  yardstick -d task start --task-args='{"provider": "sriov"}' \
  samples/vnf_samples/nsut/vfw/tc_heat_rfc2544_ipv4_1rule_1flow_64B_trex.yaml
Enabling other Traffic generators
---------------------------------

IxLoad
^^^^^^

1. Software needed: IxLoadAPI ``<IxLoadTclApi version>Linux64.bin.tgz`` and
   ``<IxOS version>Linux64.bin.tar.gz`` (download from the Ixia support site).
   Install - ``<IxLoadTclApi version>Linux64.bin.tgz`` and
   ``<IxOS version>Linux64.bin.tar.gz``.
   If the installation was not done inside the container, after installing the
   IXIA client check ``/opt/ixia/ixload/<ver>/bin/ixloadpython`` and make sure
   you can run this command inside the yardstick container. Usually the user
   is required to copy or link ``/opt/ixia/python/<ver>/bin/ixiapython`` to
   ``/usr/bin/ixiapython<ver>`` inside the container.

2. Update the pod_ixia.yaml file with the Ixia details.

   .. code-block:: console

     cp <repo>/etc/yardstick/nodes/pod.yaml.nsb.sample.ixia etc/yardstick/nodes/pod_ixia.yaml

   Configure pod_ixia.yaml:
   .. code-block:: yaml

     nodes:
     -
         ip: 1.2.1.1  # ixia machine ip
         key_filename: /root/.ssh/id_rsa
         tg_config:
             ixchassis: "1.2.1.7"  # ixia chassis ip
             tcl_port: "8009"  # tcl server port
             lib_path: "/opt/ixia/ixos-api/8.01.0.2/lib/ixTcl1.0"
             root_dir: "/opt/ixia/ixos-api/8.01.0.2/"
             py_bin_path: "/opt/ixia/ixload/8.01.106.3/bin/"
             py_lib_path: "/opt/ixia/ixnetwork/8.01.1029.14/lib/PythonApi"
             dut_result_dir: "/mnt/ixia"
         interfaces:
             xe0:  # logical name from topology.yaml and vnfd.yaml
                 vpci: "2:5"  # Card:port
                 local_ip: "152.16.100.20"
                 netmask: "255.255.0.0"
                 local_mac: "00:98:10:64:14:00"
             xe1:  # logical name from topology.yaml and vnfd.yaml
                 vpci: "2:6"  # Card:port
                 local_ip: "152.40.40.20"
                 netmask: "255.255.0.0"
                 local_mac: "00:98:28:28:14:00"
   For sriov/ovs_dpdk pod files, please refer to the Standalone Virtualization
   section above for the ovs-dpdk/sriov configuration.

3. Start the IxOS TCL Server (install 'Ixia IxExplorer IxOS <version>').
   You will also need to configure the IxLoad machine to start the IXIA
   IxosTclServer. This can be started like so:

   - Connect to the IxLoad machine using RDP.
   - Go to:
     ``Start->Programs->Ixia->IxOS->IxOS 8.01-GA-Patch1->Ixia Tcl Server IxOS 8.01-GA-Patch1``
     or
     ``"C:\Program Files (x86)\Ixia\IxOS\8.01-GA-Patch1\ixTclServer.exe"``

4. Create a folder "Results" in c:\ and share the folder on the network.

5. Execute the test case in the samplevnf folder, e.g.
   ``<repo>/samples/vnf_samples/nsut/vfw/tc_baremetal_http_ixload_1b_Requests-65000_Concurrency.yaml``
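The copy-or-link step from item 1 above can be sketched as a small helper
(illustrative only; the real source path depends on the installed IxLoad
version, and the ``<ver>`` components must be filled in by the user):

```shell
# Link the IXIA python interpreter into the container PATH.
# Both paths are arguments so the helper works for any installed version.
link_ixiapython() {
  local src="$1" dst="$2"
  if [ -x "$src" ]; then
    ln -sf "$src" "$dst" && echo "linked: $dst -> $src"
  else
    echo "not found: $src (is the IXIA client installed?)"
  fi
}

# Example inside the container (fill in the real version for <ver>):
# link_ixiapython /opt/ixia/python/<ver>/bin/ixiapython /usr/bin/ixiapython<ver>
```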
IxNetwork
^^^^^^^^^

1. Software needed: ``IxNetworkAPI<ixnetwork version>Linux64.bin.tgz``
   (download from the Ixia support site).
   Install - ``IxNetworkAPI<ixnetwork version>Linux64.bin.tgz``.

2. Update the pod_ixia.yaml file with the Ixia details.

   .. code-block:: console

     cp <repo>/etc/yardstick/nodes/pod.yaml.nsb.sample.ixia etc/yardstick/nodes/pod_ixia.yaml

   Configure pod_ixia.yaml:
   .. code-block:: yaml

     nodes:
     -
         ip: 1.2.1.1  # ixia machine ip
         key_filename: /root/.ssh/id_rsa
         tg_config:
             ixchassis: "1.2.1.7"  # ixia chassis ip
             tcl_port: "8009"  # tcl server port
             lib_path: "/opt/ixia/ixos-api/8.01.0.2/lib/ixTcl1.0"
             root_dir: "/opt/ixia/ixos-api/8.01.0.2/"
             py_bin_path: "/opt/ixia/ixload/8.01.106.3/bin/"
             py_lib_path: "/opt/ixia/ixnetwork/8.01.1029.14/lib/PythonApi"
             dut_result_dir: "/mnt/ixia"
         interfaces:
             xe0:  # logical name from topology.yaml and vnfd.yaml
                 vpci: "2:5"  # Card:port
                 local_ip: "152.16.100.20"
                 netmask: "255.255.0.0"
                 local_mac: "00:98:10:64:14:00"
             xe1:  # logical name from topology.yaml and vnfd.yaml
                 vpci: "2:6"  # Card:port
                 local_ip: "152.40.40.20"
                 netmask: "255.255.0.0"
                 local_mac: "00:98:28:28:14:00"
   For sriov/ovs_dpdk pod files, please refer to the Standalone Virtualization
   section above for the ovs-dpdk/sriov configuration.

3. Start the IxNetwork TCL Server.
   You will also need to configure the IxNetwork machine to start the IXIA
   IxNetworkTclServer. This can be started like so:

   - Connect to the IxNetwork machine using RDP.
   - Go to:
     ``Start->Programs->Ixia->IxNetwork->IxNetwork 7.21.893.14 GA->IxNetworkTclServer``
     (or ``IxNetworkApiServer``)

4. Execute the test case in the samplevnf folder, e.g.
   ``<repo>/samples/vnf_samples/nsut/vfw/tc_baremetal_rfc2544_ipv4_1rule_1flow_64B_ixia.yaml``