.. This work is licensed under a Creative Commons Attribution 4.0 International
.. License.
.. http://creativecommons.org/licenses/by/4.0
.. (c) OPNFV, 2016-2017 Intel Corporation.

======================================
Yardstick - NSB Testing - Installation
======================================
Abstract
========

The Network Service Benchmarking (NSB) extends the Yardstick framework to do
VNF characterization and benchmarking in three different execution
environments: bare metal (i.e. native Linux environment), standalone virtual
environment and managed virtualized environment (e.g. OpenStack).
It also brings in the capability to interact with external traffic generators,
both hardware and software based, for triggering and validating the traffic
according to user defined profiles.
The steps needed to run Yardstick with NSB testing are:

* Install Yardstick (NSB Testing).
* Setup/reference ``pod.yaml`` describing the test topology.
* Create/reference the test configuration yaml file.
* Run the test case.
Prerequisites
=============

Refer to chapter :doc:`04-installation` for more information on Yardstick
prerequisites.

Several prerequisites are needed for Yardstick (VNF testing):

* Python Modules: pyzmq, pika.
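A quick way to confirm the modules are importable (a sketch; note that ``zmq``
is the import name of the pyzmq package):

```shell
# Check that the required Python modules can be imported;
# prints one line per module ("ok" or "MISSING").
py_has() { python3 -c "import $1" 2>/dev/null; }

for mod in zmq pika; do
    if py_has "$mod"; then echo "$mod ok"; else echo "$mod MISSING"; fi
done
```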
Hardware & Software Ingredients
-------------------------------

SUT requirements:

=======  ===================
kernel   4.4.0-34-generic
=======  ===================
Boot and BIOS settings:

=============  =================================================
Boot settings  default_hugepagesz=1G hugepagesz=1G hugepages=16
               hugepagesz=2M hugepages=2048 isolcpus=1-11,22-33
               nohz_full=1-11,22-33 rcu_nocbs=1-11,22-33
               iommu=on iommu=pt intel_iommu=on

               Note: nohz_full and rcu_nocbs disable Linux
               kernel interrupts on the isolated cores
BIOS           CPU Power and Performance Policy <Performance>
               Enhanced Intel® Speedstep® Tech Disabled
               Hyper-Threading Technology (If supported) Enabled
               Virtualization Technology Enabled
               Intel(R) VT for Direct I/O Enabled
=============  =================================================
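After rebooting with these parameters, the settings can be verified on the
running system. A small sketch (the hugepage value is the one from the table
above; the check is factored into a function so it works on any command-line
string):

```shell
# Verify that the hugepage boot parameters took effect.
check_hugepages() {  # usage: check_hugepages <cmdline-string>
    case "$1" in
        *default_hugepagesz=1G*) echo ok ;;
        *) echo missing ;;
    esac
}

check_hugepages "$(cat /proc/cmdline)"
grep -i HugePages_Total /proc/meminfo
```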
Install Yardstick (NSB Testing)
===============================

Download the source code and install Yardstick from it:

.. code-block:: console

    git clone https://gerrit.opnfv.org/gerrit/yardstick
    cd yardstick

    # Switch to latest stable branch
    # git checkout <tag or stable branch>
    git checkout stable/euphrates
Configure the network proxy, either by setting the environment variables or by
editing the global environment file:

.. code-block:: ini

    cat /etc/environment
    http_proxy='http://proxy.company.com:port'
    https_proxy='http://proxy.company.com:port'

.. code-block:: console

    export http_proxy='http://proxy.company.com:port'
    export https_proxy='http://proxy.company.com:port'
The last step is to modify the Yardstick installation inventory, used by
Ansible:

.. code-block:: ini

    cat ./ansible/yardstick-install-inventory.ini
    localhost ansible_connection=local

    [yardstick-standalone]
    yardstick-standalone-node ansible_host=192.168.1.2
    yardstick-standalone-node-2 ansible_host=192.168.1.3

    # The section below exists only for backward compatibility;
    # it will be removed later.
SSH access without a password needs to be configured for all the nodes defined
in the ``yardstick-install-inventory.ini`` file.
If you want to use password authentication, you need to install ``sshpass``:

.. code-block:: console

    sudo -EH apt-get install sshpass
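Passwordless access can be set up with an SSH key pair; a sketch (the host IPs
are the sample addresses from the inventory above, and ``gen_key`` is a helper
introduced here for illustration):

```shell
# Generate a key pair if one does not exist yet, then push the public
# key to each inventory node (ssh-copy-id lines shown as a usage hint).
gen_key() {  # usage: gen_key <keyfile>
    [ -f "$1" ] || ssh-keygen -q -t rsa -N "" -f "$1"
}

# gen_key "$HOME/.ssh/id_rsa"
# for host in 192.168.1.2 192.168.1.3; do
#     ssh-copy-id root@"$host"
# done
```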
To execute an installation for a Bare-Metal or a Standalone context:

.. code-block:: console

    ./nsb_setup.sh

To execute an installation for an OpenStack context:

.. code-block:: console

    ./nsb_setup.sh <path to admin-openrc.sh>

The above commands set up a Docker container with the latest Yardstick code.
To enter the container, run:

.. code-block:: console

    docker exec -it yardstick bash

It will also automatically download all the packages needed for the NSB
testing setup. Refer to the **Install Yardstick using Docker (recommended)**
section of chapter :doc:`04-installation` for more on Docker.
.. code-block:: console

    +----------+              +----------+
    |          |              |          |
    |          | (0)----->(0) |          |
    |    TG1   |              |    DUT   |
    |          |              |          |
    |          | (n)<-----(n) |          |
    +----------+              +----------+
    trafficgen_1                   vnf
Environment parameters and credentials
======================================

Config yardstick conf
---------------------

If the user did not run ``yardstick env influxdb`` inside the container (which
generates a correct ``yardstick.conf``), then create the config file manually
(run inside the container):

.. code-block:: console

    cp ./etc/yardstick/yardstick.conf.sample /etc/yardstick/yardstick.conf
    vi /etc/yardstick/yardstick.conf

Add ``trex_path``, ``trex_client_lib`` and ``bin_path`` to the ``nsb`` section.
.. code-block:: ini

    [DEFAULT]
    dispatcher = file, influxdb

    [dispatcher_influxdb]
    target = http://{YOUR_IP_HERE}:8086

    [nsb]
    trex_path=/opt/nsb_bin/trex/scripts
    bin_path=/opt/nsb_bin
    trex_client_lib=/opt/nsb_bin/trex_client/stl
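A quick sanity check that the three ``nsb`` keys made it into the file (a
sketch; it greps a sample config written to a temp file, the real file being
``/etc/yardstick/yardstick.conf``):

```shell
# Report any of the three required 'nsb' keys missing from a config file.
check_nsb_keys() {  # usage: check_nsb_keys <conf-file>
    for key in trex_path bin_path trex_client_lib; do
        grep -q "^${key}=" "$1" || echo "missing: $key"
    done
}

conf=$(mktemp)
cat > "$conf" <<'EOF'
[nsb]
trex_path=/opt/nsb_bin/trex/scripts
bin_path=/opt/nsb_bin
trex_client_lib=/opt/nsb_bin/trex_client/stl
EOF
check_nsb_keys "$conf"   # prints nothing: all keys are present
```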
Run Yardstick - Network Service Testcases
=========================================

NS testing - using yardstick CLI
--------------------------------

See :doc:`04-installation`

.. code-block:: console

    docker exec -it yardstick /bin/bash
    source /etc/yardstick/openstack.creds (only for heat TC if nsb_setup.sh was NOT used)
    export EXTERNAL_NETWORK="<openstack public network>" (only for heat TC)
    yardstick --debug task start yardstick/samples/vnf_samples/nsut/<vnf>/<test case>
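For heat test cases the run fails late if ``EXTERNAL_NETWORK`` was forgotten;
a small wrapper (a sketch; ``run_tc`` is a hypothetical helper, not part of
Yardstick) can fail fast instead:

```shell
# Refuse to start a heat test case when EXTERNAL_NETWORK is unset.
run_tc() {  # usage: run_tc <testcase-yaml>
    if [ -z "$EXTERNAL_NETWORK" ]; then
        echo "EXTERNAL_NETWORK is not set" >&2
        return 1
    fi
    yardstick --debug task start "$1"
}
```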
Network Service Benchmarking - Bare-Metal
=========================================

Bare-Metal Config pod.yaml describing Topology
----------------------------------------------

Bare-Metal 2-Node setup
^^^^^^^^^^^^^^^^^^^^^^^

.. code-block:: console

    +----------+              +----------+
    |          |              |          |
    |          | (0)----->(0) |          |
    |    TG1   |              |    DUT   |
    |          |              |          |
    |          | (n)<-----(n) |          |
    +----------+              +----------+
    trafficgen_1                   vnf
Bare-Metal 3-Node setup - Correlated Traffic
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. code-block:: console

    +----------+              +----------+            +------------+
    |          |              |          |            |            |
    |          | (0)----->(0) |          |            |    UDP     |
    |    TG1   |              |    DUT   |            |   Replay   |
    |          |              |          |(1)<---->(0)|            |
    +----------+              +----------+            +------------+
    trafficgen_1                  vnf                  trafficgen_2
Bare-Metal Config pod.yaml
--------------------------

Before executing Yardstick test cases, make sure that ``pod.yaml`` reflects
the topology and update all the required fields::

    cp /etc/yardstick/nodes/pod.yaml.nsb.sample /etc/yardstick/nodes/pod.yaml
    xe0:  # logical name from topology.yaml and vnfd.yaml
        driver: i40e  # default kernel driver
        local_ip: "152.16.100.20"
        netmask: "255.255.255.0"
        local_mac: "00:00:00:00:00:01"
    xe1:  # logical name from topology.yaml and vnfd.yaml
        driver: i40e  # default kernel driver
        local_ip: "152.16.40.20"
        netmask: "255.255.255.0"
        local_mac: "00:00:00:00:00:02"

    host: 1.1.1.2  # BM: host == ip; virtualized env: host == compute node
    xe0:  # logical name from topology.yaml and vnfd.yaml
        driver: i40e  # default kernel driver
        local_ip: "152.16.100.19"
        netmask: "255.255.255.0"
        local_mac: "00:00:00:00:00:03"
    xe1:  # logical name from topology.yaml and vnfd.yaml
        driver: i40e  # default kernel driver
        local_ip: "152.16.40.19"
        netmask: "255.255.255.0"
        local_mac: "00:00:00:00:00:04"
    routing_table:
    - network: "152.16.100.20"
      netmask: "255.255.255.0"
      gateway: "152.16.100.20"
    - network: "152.16.40.20"
      netmask: "255.255.255.0"
      gateway: "152.16.40.20"
    nd_route_tbl:
    - network: "0064:ff9b:0:0:0:0:9810:6414"
      gateway: "0064:ff9b:0:0:0:0:9810:6414"
    - network: "0064:ff9b:0:0:0:0:9810:2814"
      gateway: "0064:ff9b:0:0:0:0:9810:2814"
Network Service Benchmarking - Standalone Virtualization
========================================================

SR-IOV
------

SR-IOV Pre-requisites
^^^^^^^^^^^^^^^^^^^^^

On Host:

a) Create a bridge for the VM to connect to the external network:

.. code-block:: console

    sudo brctl addbr br-int
    sudo brctl addif br-int <interface_name>  # This interface is connected to internet

b) Build the guest image for the VNF to run.
   Most of the sample test cases in Yardstick use a guest image called
   ``yardstick-image``, which deviates from an Ubuntu Cloud Server image.
   Yardstick has a tool for building this custom image with samplevnf.
   It is necessary to have ``sudo`` rights to use this tool.

   You may also need to install several additional packages to use this tool,
   by following the commands below::

       sudo apt-get update && sudo apt-get install -y qemu-utils kpartx

   This image can be built using the following commands in the directory where
   Yardstick is installed:

   .. code-block:: console

       export YARD_IMG_ARCH='amd64'
       echo "Defaults env_keep += \"YARD_IMG_ARCH\"" | sudo tee -a /etc/sudoers

   Please use the ansible script to generate the cloud image; refer to
   :doc:`04-installation` for more details.

.. note:: The VM should be built with a static IP and should be accessible
   from the Yardstick host.
SR-IOV Config pod.yaml describing Topology
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

SR-IOV 2-Node setup
^^^^^^^^^^^^^^^^^^^

.. code-block:: console

                                  +--------------------+
                                  |                    |
                                  |        DUT         |
                                  |       (VNF)        |
                                  |                    |
                                  +--------------------+
                                  | VF NIC |  | VF NIC |
                                  +--------+  +--------+
                                       ^          ^
                                       |          |
                                       |          |
    +----------+               +-------------------------+
    |          |               |       ^          ^      |
    |          | (0)<----->(0) | ------           |      |
    |    TG1   |               |          SUT     |      |
    |          | (n)<----->(n) | -----------------       |
    +----------+               +-------------------------+
    trafficgen_1                          host
SR-IOV 3-Node setup - Correlated Traffic
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. code-block:: console

                                  +--------------------+
                                  |                    |
                                  |        DUT         |
                                  |       (VNF)        |
                                  |                    |
                                  +--------------------+
                                  | VF NIC |  | VF NIC |
                                  +--------+  +--------+
                                       ^          ^
                                       |          |
                                       |          |
    +----------+               +-------------------------+            +--------------+
    |          |               |       ^          ^      |            |              |
    |          | (0)<----->(0) | ------           |      |            |     TG2      |
    |    TG1   |               |          SUT     |      |            | (UDP Replay) |
    |          | (n)<----->(n) | -----------------       | (n)<-->(n) |              |
    +----------+               +-------------------------+            +--------------+
    trafficgen_1                          host                         trafficgen_2

Before executing Yardstick test cases, make sure that ``pod.yaml`` reflects
the topology and update all the required fields.

.. code-block:: console

    cp <yardstick>/etc/yardstick/nodes/standalone/trex_bm.yaml.sample /etc/yardstick/nodes/standalone/pod_trex.yaml
    cp <yardstick>/etc/yardstick/nodes/standalone/host_sriov.yaml /etc/yardstick/nodes/standalone/host_sriov.yaml

.. note:: Update all the required fields like ip, user, password, pcis, etc.
SR-IOV Config pod_trex.yaml
^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. code-block:: YAML

    key_filename: /root/.ssh/id_rsa
    xe0:  # logical name from topology.yaml and vnfd.yaml
        driver: i40e  # default kernel driver
        local_ip: "152.16.100.20"
        netmask: "255.255.255.0"
        local_mac: "00:00:00:00:00:01"
    xe1:  # logical name from topology.yaml and vnfd.yaml
        driver: i40e  # default kernel driver
        local_ip: "152.16.40.20"
        netmask: "255.255.255.0"
        local_mac: "00:00:00:00:00:02"
SR-IOV Config host_sriov.yaml
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

SR-IOV testcase update:
``<yardstick>/samples/vnf_samples/nsut/vfw/tc_sriov_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``

Update "contexts" section
"""""""""""""""""""""""""

.. code-block:: YAML

    contexts:
    - name: yardstick
      type: Node
      file: /etc/yardstick/nodes/standalone/pod_trex.yaml
    - type: StandaloneSriov
      file: /etc/yardstick/nodes/standalone/host_sriov.yaml
      flavor:
        images: "/var/lib/libvirt/images/ubuntu.qcow2"
        user: ""       # update VM username
        password: ""   # update password
      servers:
        vnf:
          network_ports:
            mgmt:
              cidr: '1.1.1.61/24'  # Update VM IP address: if static, <ip>/<mask>; if dynamic, <start of ip>/<mask>
      networks:
        uplink_0:
          phy_port: "0000:05:00.0"
          cidr: '152.16.100.10/24'
          gateway_ip: '152.16.100.20'
        downlink_0:
          phy_port: "0000:05:00.1"
          cidr: '152.16.40.10/24'
          gateway_ip: '152.16.40.20'
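The ``phy_port`` PCI addresses must exist on the standalone host; a quick
check (a sketch using the sample addresses above):

```shell
# Check whether a PCI device is present via sysfs.
pci_present() {  # usage: pci_present <domain:bus:dev.fn>
    [ -e "/sys/bus/pci/devices/$1" ]
}

for pci in 0000:05:00.0 0000:05:00.1; do
    if pci_present "$pci"; then echo "$pci present"; else echo "$pci absent"; fi
done
```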
OVS-DPDK
--------

OVS-DPDK Pre-requisites
^^^^^^^^^^^^^^^^^^^^^^^

On Host:

a) Create a bridge for the VM to connect to the external network:

.. code-block:: console

    sudo brctl addbr br-int
    sudo brctl addif br-int <interface_name>  # This interface is connected to internet

b) Build the guest image for the VNF to run.
   Most of the sample test cases in Yardstick use a guest image called
   ``yardstick-image``, which deviates from an Ubuntu Cloud Server image.
   Yardstick has a tool for building this custom image with samplevnf.
   It is necessary to have ``sudo`` rights to use this tool.

   You may also need to install several additional packages to use this tool,
   by following the commands below::

       sudo apt-get update && sudo apt-get install -y qemu-utils kpartx

   This image can be built using the following commands in the directory where
   Yardstick is installed::

       export YARD_IMG_ARCH='amd64'
       echo "Defaults env_keep += \"YARD_IMG_ARCH\"" | sudo tee -a /etc/sudoers
       sudo tools/yardstick-img-dpdk-modify tools/ubuntu-server-cloudimg-samplevnf-modify.sh

   For more details refer to chapter :doc:`04-installation`.

.. note:: The VM should be built with a static IP and should be accessible
   from the Yardstick host.

c) OVS & DPDK version:

   * OVS 2.7 and DPDK 16.11.1 or above are supported.

d) Setup OVS-DPDK on the host.
   Please refer to the `OVS-DPDK <http://docs.openvswitch.org/en/latest/intro/install/dpdk/>`_
   guide on how to set it up.
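The minimum versions can be checked on the host with a small version
comparison (a sketch; ``sort -V`` does the natural-version ordering, and the
``ovs-vsctl`` step only runs when OVS is installed):

```shell
# version_ge <have> <want>: succeeds when have >= want (version order).
version_ge() {
    [ "$(printf '%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

if command -v ovs-vsctl >/dev/null 2>&1; then
    have=$(ovs-vsctl --version | awk 'NR==1 {print $NF}')
    if version_ge "$have" 2.7; then echo "OVS $have OK"; else echo "OVS $have too old"; fi
fi
```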
OVS-DPDK Config pod.yaml describing Topology
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

OVS-DPDK 2-Node setup
^^^^^^^^^^^^^^^^^^^^^

.. code-block:: console

                                  +--------------------+
                                  |                    |
                                  |        DUT         |
                                  |       (VNF)        |
                                  |                    |
                                  +--------------------+
                                  | virtio |  | virtio |
                                  +--------+  +--------+
                                       ^          ^
                                       |          |
                                       |          |
                                  +--------+  +--------+
                                  | vHOST0 |  | vHOST1 |
    +----------+               +-------------------------+
    |          |               |       ^          ^      |
    |          | (0)<----->(0) | ------           |      |
    |    TG1   |               |          SUT     |      |
    |          |               |      (ovs-dpdk)  |      |
    |          | (n)<----->(n) | -----------------       |
    +----------+               +-------------------------+
    trafficgen_1                          host
OVS-DPDK 3-Node setup - Correlated Traffic
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. code-block:: console

                                  +--------------------+
                                  |                    |
                                  |        DUT         |
                                  |       (VNF)        |
                                  |                    |
                                  +--------------------+
                                  | virtio |  | virtio |
                                  +--------+  +--------+
                                       ^          ^
                                       |          |
                                       |          |
                                  +--------+  +--------+
                                  | vHOST0 |  | vHOST1 |
    +----------+               +-------------------------+          +------------+
    |          |               |       ^          ^      |          |            |
    |          | (0)<----->(0) | ------           |      |          |    TG2     |
    |    TG1   |               |          SUT     |      |          |(UDP Replay)|
    |          |               |      (ovs-dpdk)  |      |          |            |
    |          | (n)<----->(n) | -----------------       |(n)<-->(n)|            |
    +----------+               +-------------------------+          +------------+
    trafficgen_1                          host                       trafficgen_2

Before executing Yardstick test cases, make sure that ``pod.yaml`` reflects
the topology and update all the required fields.

.. code-block:: console

    cp <yardstick>/etc/yardstick/nodes/standalone/trex_bm.yaml.sample /etc/yardstick/nodes/standalone/pod_trex.yaml
    cp <yardstick>/etc/yardstick/nodes/standalone/host_ovs.yaml /etc/yardstick/nodes/standalone/host_ovs.yaml

.. note:: Update all the required fields like ip, user, password, pcis, etc.
OVS-DPDK Config pod_trex.yaml
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. code-block:: YAML

    xe0:  # logical name from topology.yaml and vnfd.yaml
        driver: i40e  # default kernel driver
        local_ip: "152.16.100.20"
        netmask: "255.255.255.0"
        local_mac: "00:00:00:00:00:01"
    xe1:  # logical name from topology.yaml and vnfd.yaml
        driver: i40e  # default kernel driver
        local_ip: "152.16.40.20"
        netmask: "255.255.255.0"
        local_mac: "00:00:00:00:00:02"
OVS-DPDK Config host_ovs.yaml
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

ovs_dpdk testcase update:
``<yardstick>/samples/vnf_samples/nsut/vfw/tc_ovs_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``

Update "contexts" section
"""""""""""""""""""""""""

.. code-block:: YAML

    contexts:
    - name: yardstick
      type: Node
      file: /etc/yardstick/nodes/standalone/pod_trex.yaml
    - type: StandaloneOvsDpdk
      file: /etc/yardstick/nodes/standalone/pod_ovs.yaml
      flavor:
        images: "/var/lib/libvirt/images/ubuntu.qcow2"
        user: ""       # update VM username
        password: ""   # update password
      servers:
        vnf:
          network_ports:
            mgmt:
              cidr: '1.1.1.61/24'  # Update VM IP address: if static, <ip>/<mask>; if dynamic, <start of ip>/<mask>
      networks:
        uplink_0:
          phy_port: "0000:05:00.0"
          cidr: '152.16.100.10/24'
          gateway_ip: '152.16.100.20'
        downlink_0:
          phy_port: "0000:05:00.1"
          cidr: '152.16.40.10/24'
          gateway_ip: '152.16.40.20'
Network Service Benchmarking - OpenStack with SR-IOV support
============================================================

This section describes how to run a Sample VNF test case, using Heat context,
with SR-IOV. It also covers how to install OpenStack in Ubuntu 16.04, using
DevStack, with SR-IOV support.

Single node OpenStack setup with external TG
--------------------------------------------
.. code-block:: console

                  +----------------------------+
                  |OpenStack(DevStack)         |
                  |                            |
                  |   +--------------------+   |
                  |   |sample-VNF VM       |   |
                  |   |                    |   |
                  |   |        DUT         |   |
                  |   |       (VNF)        |   |
                  |   |                    |   |
                  |   +--------+  +--------+   |
                  |   | VF NIC |  | VF NIC |   |
                  |   +-----+--+--+----+---+   |
                  |         ^          ^       |
                  |         |          |       |
    +----------+  +---------+----------+-------+
    |          |  |         |          |       |
    |          |  |        VF0        VF1      |
    |          |  |         ^          ^       |
    |          |  |         |          |       |
    |    TG    |(PF0)<-->(PF0)         |       |
    |          |  |                    |       |
    |          |(PF1)<-->(PF1)---------+       |
    +----------+  +----------------------------+
    trafficgen_1                 host
Host pre-configuration
^^^^^^^^^^^^^^^^^^^^^^

.. warning:: The following configuration requires sudo access to the system.
   Make sure that your user has that access.

Enable the Intel VT-d or AMD-Vi extension in the BIOS. Some system
manufacturers disable this extension by default.
Activate the Intel VT-d or AMD-Vi extension in the kernel by modifying the
GRUB config file ``/etc/default/grub``.

For the Intel platform:

.. code-block:: console

    GRUB_CMDLINE_LINUX_DEFAULT="intel_iommu=on"

For the AMD platform:

.. code-block:: console

    GRUB_CMDLINE_LINUX_DEFAULT="amd_iommu=on"

Update the grub configuration file and restart the system:

.. warning:: The following commands will reboot the system.

.. code-block:: console

    sudo update-grub
    sudo reboot
Make sure the extension has been enabled:

.. code-block:: console

    sudo journalctl -b 0 | grep -e IOMMU -e DMAR

    Feb 06 14:50:14 hostname kernel: ACPI: DMAR 0x000000006C406000 0001E0 (v01 INTEL S2600WF 00000001 INTL 20091013)
    Feb 06 14:50:14 hostname kernel: DMAR: IOMMU enabled
    Feb 06 14:50:14 hostname kernel: DMAR: Host address width 46
    Feb 06 14:50:14 hostname kernel: DMAR: DRHD base: 0x000000d37fc000 flags: 0x0
    Feb 06 14:50:14 hostname kernel: DMAR: dmar0: reg_base_addr d37fc000 ver 1:0 cap 8d2078c106f0466 ecap f020de
    Feb 06 14:50:14 hostname kernel: DMAR: DRHD base: 0x000000e0ffc000 flags: 0x0
    Feb 06 14:50:14 hostname kernel: DMAR: dmar1: reg_base_addr e0ffc000 ver 1:0 cap 8d2078c106f0466 ecap f020de
    Feb 06 14:50:14 hostname kernel: DMAR: DRHD base: 0x000000ee7fc000 flags: 0x0
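The flag can also be confirmed on the running kernel's command line (a sketch;
the function takes the cmdline as a string, so it covers both the Intel and
AMD variants):

```shell
# Report whether an IOMMU flag is present on a kernel command line.
iommu_enabled() {  # usage: iommu_enabled <cmdline-string>
    case "$1" in
        *intel_iommu=on*|*amd_iommu=on*) echo enabled ;;
        *) echo disabled ;;
    esac
}

iommu_enabled "$(cat /proc/cmdline)"
```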
Setup the system proxy (if needed). Add the following configuration into the
``/etc/environment`` file:

.. note:: The proxy server name/port and IPs should be changed according to
   the actual/current proxy configuration in the lab.

.. code-block:: console

    export http_proxy=http://proxy.company.com:port
    export https_proxy=http://proxy.company.com:port
    export ftp_proxy=http://proxy.company.com:port
    export no_proxy=localhost,127.0.0.1,company.com,<IP-OF-HOST1>,<IP-OF-HOST2>,...
    export NO_PROXY=localhost,127.0.0.1,company.com,<IP-OF-HOST1>,<IP-OF-HOST2>,...
Upgrade the system:

.. code-block:: console

    sudo -EH apt-get update
    sudo -EH apt-get upgrade
    sudo -EH apt-get dist-upgrade

Install the dependencies needed for DevStack:

.. code-block:: console

    sudo -EH apt-get install python python-dev python-pip
Setup SR-IOV ports on the host:

.. note:: The ``enp24s0f0`` and ``enp24s0f1`` are physical function (PF)
   interfaces on the host and ``enp24s0f3`` is a public interface used in
   OpenStack, so the interface names should be changed according to the HW
   environment used for testing.

.. code-block:: console

    sudo ip link set dev enp24s0f0 up
    sudo ip link set dev enp24s0f1 up
    sudo ip link set dev enp24s0f3 up

    # Create VFs
    echo 2 | sudo tee /sys/class/net/enp24s0f0/device/sriov_numvfs
    echo 2 | sudo tee /sys/class/net/enp24s0f1/device/sriov_numvfs
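Whether the NIC supports the requested VF count can be checked before writing
``sriov_numvfs`` (a sketch; the interface name is the sample one from above,
and ``vfs_supported`` is a helper introduced here for illustration):

```shell
# vfs_supported <requested> <totalvfs>: succeeds when the NIC supports
# at least the requested number of VFs.
vfs_supported() {
    [ "$1" -le "$2" ]
}

IFACE=enp24s0f0
TOTAL=$(cat "/sys/class/net/$IFACE/device/sriov_totalvfs" 2>/dev/null || echo 0)
if vfs_supported 2 "$TOTAL"; then
    echo 2 | sudo tee "/sys/class/net/$IFACE/device/sriov_numvfs"
else
    echo "$IFACE supports only $TOTAL VFs"
fi
```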
DevStack installation
^^^^^^^^^^^^^^^^^^^^^

Use the official `Devstack <https://docs.openstack.org/devstack/pike/>`_
documentation to install OpenStack on a host. Please note that the stable
``pike`` branch of the devstack repo should be used during the installation.
The required ``local.conf`` configuration file is described below.

DevStack configuration file:

.. note:: Update the devstack configuration file by replacing angular
   brackets with a short description inside.

.. note:: Use ``lspci | grep Ether`` & ``lspci -n | grep <PCI ADDRESS>``
   commands to get the device and vendor id of the virtual function (VF).

.. literalinclude:: code/single-devstack-local.conf

Start the devstack installation on the host.
TG host configuration
^^^^^^^^^^^^^^^^^^^^^

Yardstick automatically installs and configures the Trex traffic generator on
the TG host based on the provided POD file (see below). It is nevertheless
recommended to check the compatibility of the installed NIC on the TG server
with the Trex software using the manual at
https://trex-tgn.cisco.com/trex/doc/trex_manual.html.
Run the Sample VNF test case
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

There is an example of a Sample VNF test case ready to be executed in an
OpenStack environment with SR-IOV support: ``samples/vnf_samples/nsut/vfw/
tc_heat_sriov_external_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``.

Install Yardstick using the `Install Yardstick (NSB Testing)`_ steps for the
OpenStack context.

Create the pod file for the TG in the yardstick repo folder located in the
yardstick directory.

.. note:: The ``ip``, ``user``, ``password`` and ``vpci`` fields should be
   changed according to the HW environment used for the testing. Use the
   ``lshw -c network -businfo`` command to get the PF PCI address for the
   ``vpci`` field.

.. literalinclude:: code/single-yardstick-pod.conf

Run the Sample vFW RFC2544 SR-IOV TC (``samples/vnf_samples/nsut/vfw/
tc_heat_sriov_external_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``) in the heat
context using the steps described in the `NS testing - using yardstick CLI`_
section.
Multi node OpenStack TG and VNF setup (two nodes)
-------------------------------------------------

.. code-block:: console

    +----------------------------+          +----------------------------+
    |OpenStack(DevStack)         |          |OpenStack(DevStack)         |
    |                            |          |                            |
    |   +--------------------+   |          |   +--------------------+   |
    |   |sample-VNF VM       |   |          |   |sample-VNF VM       |   |
    |   |                    |   |          |   |                    |   |
    |   |         TG         |   |          |   |        DUT         |   |
    |   |    trafficgen_1    |   |          |   |       (VNF)        |   |
    |   |                    |   |          |   |                    |   |
    |   +--------+  +--------+   |          |   +--------+  +--------+   |
    |   | VF NIC |  | VF NIC |   |          |   | VF NIC |  | VF NIC |   |
    |   +----+---+--+----+---+   |          |   +-----+--+--+----+---+   |
    |        ^          ^        |          |         ^          ^       |
    |        |          |        |          |         |          |       |
    +--------+-----------+-------+          +---------+----------+-------+
    |       VF0         VF1      |          |        VF0        VF1      |
    |        ^           ^       |          |         ^          ^       |
    |        |    SUT2   |       |          |         |   SUT1   |       |
    |        |  +-------+ (PF0)<------------->(PF0) +---------+  |       |
    |        |                   |          |                 |  |       |
    |        +------------------+ (PF1)<----->(PF1) +--------------------+
    +----------------------------+          +----------------------------+
    host2 (compute)                          host1 (controller)
Controller/Compute pre-configuration
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Pre-configuration of the controller and compute hosts is the same as described
in the `Host pre-configuration`_ section. Follow the steps in that section.
DevStack configuration
^^^^^^^^^^^^^^^^^^^^^^

Use the official `Devstack <https://docs.openstack.org/devstack/pike/>`_
documentation to install OpenStack on the hosts. Please note that the stable
``pike`` branch of the devstack repo should be used during the installation.
The required ``local.conf`` configuration files are described below.

.. note:: Update the devstack configuration files by replacing angular
   brackets with a short description inside.

.. note:: Use ``lspci | grep Ether`` & ``lspci -n | grep <PCI ADDRESS>``
   commands to get the device and vendor id of the virtual function (VF).

DevStack configuration file for the controller host:

.. literalinclude:: code/multi-devstack-controller-local.conf

DevStack configuration file for the compute host:

.. literalinclude:: code/multi-devstack-compute-local.conf

Start the devstack installation on the controller and compute hosts.
Run the sample vFW TC
^^^^^^^^^^^^^^^^^^^^^

Install Yardstick using the `Install Yardstick (NSB Testing)`_ steps for the
OpenStack context.

Run the sample vFW RFC2544 SR-IOV TC (``samples/vnf_samples/nsut/vfw/
tc_heat_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``) in the heat context using
the steps described in the `NS testing - using yardstick CLI`_ section and
the following yardstick command line arguments:

.. code-block:: console

    yardstick -d task start --task-args='{"provider": "sriov"}' \
    samples/vnf_samples/nsut/vfw/tc_heat_rfc2544_ipv4_1rule_1flow_64B_trex.yaml
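The ``--task-args`` value must be valid JSON; it can be checked before
launching the task (a sketch using ``python3 -m json.tool`` as the validator):

```shell
# Validate the task-args string before passing it to yardstick.
args='{"provider": "sriov"}'
if echo "$args" | python3 -m json.tool >/dev/null 2>&1; then
    echo "task-args OK"
else
    echo "task-args invalid JSON"
fi
```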
Enabling other Traffic generators
=================================

IxLoad
------

1. Software needed: IxLoadAPI ``<IxLoadTclApi version>Linux64.bin.tgz`` and
   ``<IxOS version>Linux64.bin.tar.gz`` (download from the Ixia support site).
   Install ``<IxLoadTclApi version>Linux64.bin.tgz`` and
   ``<IxOS version>Linux64.bin.tar.gz``.
   If the installation was not done inside the container, after installing
   the IXIA client, check ``/opt/ixia/ixload/<ver>/bin/ixloadpython`` and make
   sure you can run this cmd inside the yardstick container. Usually the user
   is required to copy or link ``/opt/ixia/python/<ver>/bin/ixiapython`` to
   ``/usr/bin/ixiapython<ver>`` inside the container.
2. Update the ``pod_ixia.yaml`` file with the Ixia details.

   .. code-block:: console

       cp <repo>/etc/yardstick/nodes/pod.yaml.nsb.sample.ixia etc/yardstick/nodes/pod_ixia.yaml

   Config ``pod_ixia.yaml``:

   .. code-block:: yaml

       ip: 1.2.1.1  # ixia machine ip
       key_filename: /root/.ssh/id_rsa
       ixchassis: "1.2.1.7"  # ixia chassis ip
       tcl_port: "8009"  # tcl server port
       lib_path: "/opt/ixia/ixos-api/8.01.0.2/lib/ixTcl1.0"
       root_dir: "/opt/ixia/ixos-api/8.01.0.2/"
       py_bin_path: "/opt/ixia/ixload/8.01.106.3/bin/"
       py_lib_path: "/opt/ixia/ixnetwork/8.01.1029.14/lib/PythonApi"
       dut_result_dir: "/mnt/ixia"

       xe0:  # logical name from topology.yaml and vnfd.yaml
           vpci: "2:5"  # Card:port
           local_ip: "152.16.100.20"
           netmask: "255.255.0.0"
           local_mac: "00:98:10:64:14:00"
       xe1:  # logical name from topology.yaml and vnfd.yaml
           vpci: "2:6"  # Card:port
           local_ip: "152.40.40.20"
           netmask: "255.255.0.0"
           local_mac: "00:98:28:28:14:00"
   For sriov/ovs_dpdk pod files, please refer to the Standalone Virtualization
   sections above for the ovs-dpdk/sriov configuration.

3. Start the IxOS TCL Server (install 'Ixia IxExplorer IxOS <version>').
   You will also need to configure the IxLoad machine to start the IXIA
   IxosTclServer. This can be started like so:

   * Connect to the IxLoad machine using RDP.
   * Go to:
     ``Start->Programs->Ixia->IxOS->IxOS 8.01-GA-Patch1->Ixia Tcl Server IxOS 8.01-GA-Patch1``
     or
     ``"C:\Program Files (x86)\Ixia\IxOS\8.01-GA-Patch1\ixTclServer.exe"``

4. Create a folder ``Results`` in c:\ and share the folder on the network.

5. Execute the testcase in the samplevnf folder, e.g.
   ``<repo>/samples/vnf_samples/nsut/vfw/tc_baremetal_http_ixload_1b_Requests-65000_Concurrency.yaml``
IxNetwork
---------

1. Software needed: ``IxNetworkAPI<ixnetwork version>Linux64.bin.tgz``
   (download from the Ixia support site).
   Install ``IxNetworkAPI<ixnetwork version>Linux64.bin.tgz``.
2. Update the ``pod_ixia.yaml`` file with the Ixia details.

   .. code-block:: console

       cp <repo>/etc/yardstick/nodes/pod.yaml.nsb.sample.ixia etc/yardstick/nodes/pod_ixia.yaml

   Config ``pod_ixia.yaml``:
   .. code-block:: yaml

       ip: 1.2.1.1  # ixia machine ip
       key_filename: /root/.ssh/id_rsa
       ixchassis: "1.2.1.7"  # ixia chassis ip
       tcl_port: "8009"  # tcl server port
       lib_path: "/opt/ixia/ixos-api/8.01.0.2/lib/ixTcl1.0"
       root_dir: "/opt/ixia/ixos-api/8.01.0.2/"
       py_bin_path: "/opt/ixia/ixload/8.01.106.3/bin/"
       py_lib_path: "/opt/ixia/ixnetwork/8.01.1029.14/lib/PythonApi"
       dut_result_dir: "/mnt/ixia"

       xe0:  # logical name from topology.yaml and vnfd.yaml
           vpci: "2:5"  # Card:port
           local_ip: "152.16.100.20"
           netmask: "255.255.0.0"
           local_mac: "00:98:10:64:14:00"
       xe1:  # logical name from topology.yaml and vnfd.yaml
           vpci: "2:6"  # Card:port
           local_ip: "152.40.40.20"
           netmask: "255.255.0.0"
           local_mac: "00:98:28:28:14:00"
   For sriov/ovs_dpdk pod files, please refer to the Standalone Virtualization
   sections above for the ovs-dpdk/sriov configuration.

3. Start the IxNetwork TCL Server.
   You will also need to configure the IxNetwork machine to start the IXIA
   IxNetworkTclServer. This can be started like so:

   * Connect to the IxNetwork machine using RDP.
   * Go to:
     ``Start->Programs->Ixia->IxNetwork->IxNetwork 7.21.893.14 GA->IxNetworkTclServer``
     (or ``IxNetworkApiServer``)

4. Execute the testcase in the samplevnf folder, e.g.
   ``<repo>/samples/vnf_samples/nsut/vfw/tc_baremetal_rfc2544_ipv4_1rule_1flow_64B_ixia.yaml``