.. This work is licensed under a Creative Commons Attribution 4.0 International
.. http://creativecommons.org/licenses/by/4.0
.. (c) OPNFV, 2016-2017 Intel Corporation.

Yardstick - NSB Testing - Installation
======================================
The Network Service Benchmarking (NSB) extends the Yardstick framework to
perform VNF characterization and benchmarking in three different execution
environments: bare metal (i.e. native Linux environment), standalone virtual
environment, and managed virtualized environment (e.g. OpenStack). It also
adds the capability to interact with external traffic generators, both
hardware- and software-based, for triggering and validating traffic according
to user-defined profiles.
The steps needed to run Yardstick with NSB testing are:

* Install Yardstick (NSB testing).
* Set up or reference a pod.yaml file describing the test topology.
* Create or reference the test configuration YAML file.
Refer to chapter Yardstick Installation for more information on Yardstick
prerequisites.

Several prerequisites are needed for Yardstick (VNF testing):

- Python modules: pyzmq, pika.
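As a quick sanity check (a minimal sketch, not part of the official installer), the presence of the required Python modules can be probed with the standard library; note that the pyzmq package is imported under the name ``zmq``:

```python
import importlib.util

def missing_modules(required):
    """Return the names from `required` that the import system cannot find."""
    return [name for name in required if importlib.util.find_spec(name) is None]

# pyzmq is imported as 'zmq'; pika keeps its own name
print(missing_modules(["zmq", "pika"]))  # an empty list means both are present
```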
Hardware & Software Ingredients
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

+-----------+--------------------+
| Item      | Description        |
+-----------+--------------------+
| OS        | Ubuntu 16.04.3 LTS |
+-----------+--------------------+
| kernel    | 4.4.0-34-generic   |
+-----------+--------------------+
Boot and BIOS settings:

+------------------+---------------------------------------------------+
| Boot settings    | default_hugepagesz=1G hugepagesz=1G hugepages=16  |
|                  | hugepagesz=2M hugepages=2048 isolcpus=1-11,22-33  |
|                  | nohz_full=1-11,22-33 rcu_nocbs=1-11,22-33         |
|                  | iommu=on iommu=pt intel_iommu=on                  |
|                  | Note: nohz_full and rcu_nocbs are used to disable |
|                  | Linux kernel interrupts on the isolated cores     |
+------------------+---------------------------------------------------+
| BIOS             | CPU Power and Performance Policy <Performance>    |
|                  | CPU C-state Disabled                              |
|                  | CPU P-state Disabled                              |
|                  | Enhanced Intel® Speedstep® Tech Disabled          |
|                  | Hyper-Threading Technology (If supported) Enabled |
|                  | Virtualization Technology Enabled                 |
|                  | Intel(R) VT for Direct I/O Enabled                |
|                  | Coherency Enabled                                 |
|                  | Turbo Boost Disabled                              |
+------------------+---------------------------------------------------+
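The boot settings above reserve 16 x 1 GiB plus 2048 x 2 MiB of hugepage memory. A small illustrative sketch (not part of Yardstick) that parses such a kernel command line and totals the reservation:

```python
import re

def hugepage_total_bytes(cmdline):
    """Total memory reserved by hugepagesz=<size> hugepages=<count> pairs."""
    unit = {"2M": 2 * 1024**2, "1G": 1024**3}
    total = 0
    size = None
    # each hugepages=<n> applies to the most recent hugepagesz=<size>
    for key, value in re.findall(r"(hugepagesz|hugepages)=(\S+)", cmdline):
        if key == "hugepagesz":
            size = unit[value]
        elif size is not None:
            total += size * int(value)
    return total

boot = ("default_hugepagesz=1G hugepagesz=1G hugepages=16 "
        "hugepagesz=2M hugepages=2048")
print(hugepage_total_bytes(boot) / 1024**3)  # 20.0 (GiB reserved)
```

Make sure the host actually has this much memory to spare on top of what the OS and VNFs need.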
Install Yardstick (NSB Testing)
-------------------------------

Download the source code and install Yardstick from it:

.. code-block:: console

    git clone https://gerrit.opnfv.org/gerrit/yardstick

    # Switch to latest stable branch
    # git checkout <tag or stable branch>
    git checkout stable/euphrates
Configure the network proxy, either using the environment variables or setting
the global environment file (``/etc/environment``)::

    http_proxy='http://proxy.company.com:port'
    https_proxy='http://proxy.company.com:port'

.. code-block:: console

    export http_proxy='http://proxy.company.com:port'
    export https_proxy='http://proxy.company.com:port'
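Tools that honour these variables (including Python's ``urllib``) pick the proxy up from the environment. A quick illustrative check, using a placeholder proxy URL:

```python
import os
import urllib.request

# placeholder values; substitute your real proxy host and port
os.environ["http_proxy"] = "http://proxy.company.com:8080"
os.environ["https_proxy"] = "http://proxy.company.com:8080"

proxies = urllib.request.getproxies()  # reads the *_proxy environment variables
print(proxies["http"], proxies["https"])
```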
The last step is to modify the Yardstick installation inventory, used by
Ansible::

    cat ./ansible/yardstick-install-inventory.ini

    localhost ansible_connection=local

    [yardstick-standalone]
    yardstick-standalone-node ansible_host=192.168.1.2
    yardstick-standalone-node-2 ansible_host=192.168.1.3

    # section below is only due to backward compatibility.
    # it will be removed later
To execute an installation for a Bare-Metal or a Standalone context:

.. code-block:: console

    ./nsb_setup.sh

To execute an installation for an OpenStack context:

.. code-block:: console

    ./nsb_setup.sh <path to admin-openrc.sh>

The above commands set up a Docker container with the latest Yardstick code.
To execute commands inside the container:

.. code-block:: console

    docker exec -it yardstick bash

The setup also automatically downloads all the packages needed for NSB
testing. Refer to chapter :doc:`04-installation` for more on Docker:
**Install Yardstick using Docker (recommended)**.
.. code-block:: console

    +----------+              +----------+
    |          |              |          |
    |          | (0)----->(0) |          |
    |    TG1   |              |    DUT   |
    |          |              |          |
    |          | (1)<-----(1) |          |
    +----------+              +----------+
    trafficgen_1                   vnf
Environment parameters and credentials
--------------------------------------

Config yardstick conf
^^^^^^^^^^^^^^^^^^^^^

If the user did not run ``yardstick env influxdb`` inside the container (which
generates a correct yardstick.conf), then create the config file manually (run
inside the container)::

    cp ./etc/yardstick/yardstick.conf.sample /etc/yardstick/yardstick.conf
    vi /etc/yardstick/yardstick.conf

Add ``trex_path``, ``trex_client_lib`` and ``bin_path`` to the ``nsb``
section::

    dispatcher = file, influxdb

    [dispatcher_influxdb]
    target = http://{YOUR_IP_HERE}:8086

    [nsb]
    trex_path=/opt/nsb_bin/trex/scripts
    bin_path=/opt/nsb_bin
    trex_client_lib=/opt/nsb_bin/trex_client/stl
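Since yardstick.conf is a standard INI file, the edit can be sanity-checked with Python's ``configparser``. A sketch using the values above (the ``[DEFAULT]`` placement of ``dispatcher`` and the ``10.0.0.1`` target are illustrative assumptions, not prescribed by this guide):

```python
import configparser

SAMPLE = """
[DEFAULT]
dispatcher = file, influxdb

[dispatcher_influxdb]
target = http://10.0.0.1:8086

[nsb]
trex_path = /opt/nsb_bin/trex/scripts
bin_path = /opt/nsb_bin
trex_client_lib = /opt/nsb_bin/trex_client/stl
"""

config = configparser.ConfigParser()
config.read_string(SAMPLE)

# every path needed by NSB should be present in the [nsb] section
for key in ("trex_path", "bin_path", "trex_client_lib"):
    assert config.get("nsb", key), f"missing {key} in [nsb]"
print(config.get("nsb", "trex_path"))
```

To check a real installation, replace ``read_string(SAMPLE)`` with ``config.read("/etc/yardstick/yardstick.conf")``.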
Run Yardstick - Network Service Testcases
-----------------------------------------

NS testing - using yardstick CLI
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

See :doc:`04-installation`.

.. code-block:: console

    docker exec -it yardstick /bin/bash
    source /etc/yardstick/openstack.creds (only for heat TC if nsb_setup.sh was NOT used)
    export EXTERNAL_NETWORK="<openstack public network>" (only for heat TC)
    yardstick --debug task start yardstick/samples/vnf_samples/nsut/<vnf>/<test case>
Network Service Benchmarking - Bare-Metal
-----------------------------------------

Bare-Metal Config pod.yaml describing Topology
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Bare-Metal 2-Node setup:
########################

.. code-block:: console

    +----------+              +----------+
    |          |              |          |
    |          | (0)----->(0) |          |
    |    TG1   |              |    DUT   |
    |          |              |          |
    |          | (1)<-----(1) |          |
    +----------+              +----------+
    trafficgen_1                   vnf

Bare-Metal 3-Node setup - Correlated Traffic:
#############################################

.. code-block:: console

    +----------+              +----------+            +------------+
    |          |              |          |            |            |
    |          | (0)----->(0) |          |            |    UDP     |
    |    TG1   |              |    DUT   |            |   Replay   |
    |          |              |          |(1)<---->(0)|            |
    +----------+              +----------+            +------------+
    trafficgen_1                   vnf                 trafficgen_2
Bare-Metal Config pod.yaml
^^^^^^^^^^^^^^^^^^^^^^^^^^

Before executing Yardstick test cases, make sure that pod.yaml reflects the
topology and update all the required fields::

    cp /etc/yardstick/nodes/pod.yaml.nsb.sample /etc/yardstick/nodes/pod.yaml

.. code-block:: YAML

        xe0:  # logical name from topology.yaml and vnfd.yaml
          driver: i40e  # default kernel driver
          local_ip: "152.16.100.20"
          netmask: "255.255.255.0"
          local_mac: "00:00:00:00:00:01"
        xe1:  # logical name from topology.yaml and vnfd.yaml
          driver: i40e  # default kernel driver
          local_ip: "152.16.40.20"
          netmask: "255.255.255.0"
          local_mac: "00:00:00:00:00:02"

      host: 1.1.1.2  # BM - host == ip, virtualized env - Host - compute node
        xe0:  # logical name from topology.yaml and vnfd.yaml
          driver: i40e  # default kernel driver
          local_ip: "152.16.100.19"
          netmask: "255.255.255.0"
          local_mac: "00:00:00:00:00:03"
        xe1:  # logical name from topology.yaml and vnfd.yaml
          driver: i40e  # default kernel driver
          local_ip: "152.16.40.19"
          netmask: "255.255.255.0"
          local_mac: "00:00:00:00:00:04"
      - network: "152.16.100.20"
        netmask: "255.255.255.0"
        gateway: "152.16.100.20"
      - network: "152.16.40.20"
        netmask: "255.255.255.0"
        gateway: "152.16.40.20"
      - network: "0064:ff9b:0:0:0:0:9810:6414"
        gateway: "0064:ff9b:0:0:0:0:9810:6414"
      - network: "0064:ff9b:0:0:0:0:9810:2814"
        gateway: "0064:ff9b:0:0:0:0:9810:2814"
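Mistyped addresses in pod.yaml are a common cause of failed runs. A small illustrative check (not part of Yardstick) that an interface address and its gateway fall in the same subnet, using values from the sample above:

```python
import ipaddress

def same_subnet(ip, netmask, gateway):
    """True if `ip` and `gateway` fall in the same network for `netmask`."""
    net = ipaddress.ip_network(f"{ip}/{netmask}", strict=False)
    return ipaddress.ip_address(gateway) in net

# values taken from the pod.yaml sample above
print(same_subnet("152.16.100.19", "255.255.255.0", "152.16.100.20"))  # True
print(same_subnet("152.16.40.19", "255.255.255.0", "152.16.40.20"))    # True
```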
Network Service Benchmarking - Standalone Virtualization
--------------------------------------------------------

SR-IOV
^^^^^^

SR-IOV Pre-requisites
#####################

a) Create a bridge for the VM to connect to the external network:

   .. code-block:: console

       brctl addbr br-int
       brctl addif br-int <interface_name>  # This interface is connected to the internet

b) Build the guest image for the VNF to run.
   Most of the sample test cases in Yardstick use a guest image called
   ``yardstick-image``, which deviates from an Ubuntu Cloud Server image.
   Yardstick has a tool for building this custom image with samplevnf.
   It is necessary to have ``sudo`` rights to use this tool.

   You may also need to install several additional packages to use this tool;
   follow the commands below::

       sudo apt-get update && sudo apt-get install -y qemu-utils kpartx

   This image can be built using the following commands in the directory where
   Yardstick is installed:

   .. code-block:: console

       export YARD_IMG_ARCH='amd64'
       sudo echo "Defaults env_keep += \'YARD_IMG_ARCH\'" >> /etc/sudoers

   Please use the ansible script to generate a cloud image; refer to
   :doc:`04-installation` for more details.

   .. note:: The VM should be built with a static IP and should be accessible
      from the yardstick host.
SR-IOV Config pod.yaml describing Topology
##########################################

SR-IOV 2-Node setup:
####################

.. code-block:: console

                                 +--------------------+
                                 |                    |
                                 |        DUT         |
                                 |       (VNF)        |
                                 |                    |
                                 +--------------------+
                                 | VF NIC |  | VF NIC |
                                 +--------+  +--------+
                                       ^          ^
                                       |          |
                                       |          |
    +----------+               +-------------------------+
    |          |               |       ^          ^      |
    |          | (0)<----->(0) | ------           |      |
    |    TG1   |               |          SUT     |      |
    |          | (n)<----->(n) | -----------------       |
    +----------+               +-------------------------+
    trafficgen_1                          host

SR-IOV 3-Node setup - Correlated Traffic
########################################

.. code-block:: console

                                 +--------------------+
                                 |                    |
                                 |        DUT         |
                                 |       (VNF)        |
                                 |                    |
                                 +--------------------+
                                 | VF NIC |  | VF NIC |
                                 +--------+  +--------+
                                       ^          ^
                                       |          |
                                       |          |
    +----------+               +-------------------------+            +--------------+
    |          |               |       ^          ^      |            |              |
    |          | (0)<----->(0) | ------           |      |            |     TG2      |
    |    TG1   |               |          SUT     |      |            | (UDP Replay) |
    |          | (n)<----->(n) | -----------------       | (n)<-->(n) |              |
    +----------+               +-------------------------+            +--------------+
    trafficgen_1                          host                          trafficgen_2

Before executing Yardstick test cases, make sure that pod.yaml reflects the
topology and update all the required fields.

.. code-block:: console

    cp <yardstick>/etc/yardstick/nodes/standalone/trex_bm.yaml.sample /etc/yardstick/nodes/standalone/pod_trex.yaml
    cp <yardstick>/etc/yardstick/nodes/standalone/host_sriov.yaml /etc/yardstick/nodes/standalone/host_sriov.yaml

.. note:: Update all the required fields like ip, user, password, pcis, etc...
SR-IOV Config pod_trex.yaml
###########################

.. code-block:: YAML

    key_filename: /root/.ssh/id_rsa
      xe0:  # logical name from topology.yaml and vnfd.yaml
        driver: i40e  # default kernel driver
        local_ip: "152.16.100.20"
        netmask: "255.255.255.0"
        local_mac: "00:00:00:00:00:01"
      xe1:  # logical name from topology.yaml and vnfd.yaml
        driver: i40e  # default kernel driver
        local_ip: "152.16.40.20"
        netmask: "255.255.255.0"
        local_mac: "00:00:00:00:00:02"

SR-IOV Config host_sriov.yaml
#############################
SR-IOV testcase update:
``<yardstick>/samples/vnf_samples/nsut/vfw/tc_sriov_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``

Update "contexts" section
"""""""""""""""""""""""""

.. code-block:: YAML

      file: /etc/yardstick/nodes/standalone/pod_trex.yaml
    - type: StandaloneSriov
      file: /etc/yardstick/nodes/standalone/host_sriov.yaml
        images: "/var/lib/libvirt/images/ubuntu.qcow2"
        user: ""  # update VM username
        password: ""  # update password
        cidr: '1.1.1.61/24'  # Update VM IP address, if static, <ip>/<mask> or if dynamic, <start of ip>/<mask>
        phy_port: "0000:05:00.0"
          cidr: '152.16.100.10/24'
          gateway_ip: '152.16.100.20'
        phy_port: "0000:05:00.1"
          cidr: '152.16.40.10/24'
          gateway_ip: '152.16.100.20'
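The ``cidr`` fields above combine an address and a prefix length. How such a value splits can be illustrated with Python's ``ipaddress.ip_interface`` (a sketch for illustration only, using the sample values from this section):

```python
import ipaddress

# management CIDR from the contexts sample above
mgmt = ipaddress.ip_interface("1.1.1.61/24")
print(mgmt.ip)       # 1.1.1.61   -> address configured on the VM
print(mgmt.network)  # 1.1.1.0/24 -> subnet the VM lives in

uplink = ipaddress.ip_interface("152.16.100.10/24")
print(uplink.network.broadcast_address)  # 152.16.100.255
```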
OVS-DPDK
^^^^^^^^

OVS-DPDK Pre-requisites
#######################

a) Create a bridge for the VM to connect to the external network:

   .. code-block:: console

       brctl addbr br-int
       brctl addif br-int <interface_name>  # This interface is connected to the internet

b) Build the guest image for the VNF to run.
   Most of the sample test cases in Yardstick use a guest image called
   ``yardstick-image``, which deviates from an Ubuntu Cloud Server image.
   Yardstick has a tool for building this custom image with samplevnf.
   It is necessary to have ``sudo`` rights to use this tool.

   You may also need to install several additional packages to use this tool;
   follow the commands below::

       sudo apt-get update && sudo apt-get install -y qemu-utils kpartx

   This image can be built using the following commands in the directory where
   Yardstick is installed::

       export YARD_IMG_ARCH='amd64'
       sudo echo "Defaults env_keep += \'YARD_IMG_ARCH\'" >> /etc/sudoers
       sudo tools/yardstick-img-dpdk-modify tools/ubuntu-server-cloudimg-samplevnf-modify.sh

   For more details refer to chapter :doc:`04-installation`.

   .. note:: The VM should be built with a static IP and should be accessible
      from the yardstick host.

c) OVS & DPDK version: OVS 2.7 and DPDK 16.11.1 or later are supported.

d) Set up OVS/DPDK on the host. Please refer to
   `OVS-DPDK <http://docs.openvswitch.org/en/latest/intro/install/dpdk/>`_
   for how to set it up.
OVS-DPDK Config pod.yaml describing Topology
############################################

OVS-DPDK 2-Node setup:
######################

.. code-block:: console

                                 +--------------------+
                                 |                    |
                                 |        DUT         |
                                 |       (VNF)        |
                                 |                    |
                                 +--------------------+
                                 | virtio |  | virtio |
                                 +--------+  +--------+
                                      ^          ^
                                      |          |
                                      |          |
                                 +--------+  +--------+
                                 | vHOST0 |  | vHOST1 |
    +----------+               +-------------------------+
    |          |               |       ^          ^      |
    |          | (0)<----->(0) | ------           |      |
    |    TG1   |               |          SUT     |      |
    |          |               |       (ovs-dpdk) |      |
    |          | (n)<----->(n) | -----------------       |
    +----------+               +-------------------------+
    trafficgen_1                          host

OVS-DPDK 3-Node setup - Correlated Traffic
##########################################

.. code-block:: console

                                 +--------------------+
                                 |                    |
                                 |        DUT         |
                                 |       (VNF)        |
                                 |                    |
                                 +--------------------+
                                 | virtio |  | virtio |
                                 +--------+  +--------+
                                      ^          ^
                                      |          |
                                      |          |
                                 +--------+  +--------+
                                 | vHOST0 |  | vHOST1 |
    +----------+               +-------------------------+          +------------+
    |          |               |       ^          ^      |          |            |
    |          | (0)<----->(0) | ------           |      |          |    TG2     |
    |    TG1   |               |          SUT     |      |          |(UDP Replay)|
    |          |               |       (ovs-dpdk) |      |          |            |
    |          | (n)<----->(n) | -----------------       |(n)<-->(n)|            |
    +----------+               +-------------------------+          +------------+
    trafficgen_1                          host                       trafficgen_2

Before executing Yardstick test cases, make sure that pod.yaml reflects the
topology and update all the required fields.

.. code-block:: console

    cp <yardstick>/etc/yardstick/nodes/standalone/trex_bm.yaml.sample /etc/yardstick/nodes/standalone/pod_trex.yaml
    cp <yardstick>/etc/yardstick/nodes/standalone/host_ovs.yaml /etc/yardstick/nodes/standalone/host_ovs.yaml

.. note:: Update all the required fields like ip, user, password, pcis, etc...
OVS-DPDK Config pod_trex.yaml
#############################

.. code-block:: YAML

      xe0:  # logical name from topology.yaml and vnfd.yaml
        driver: i40e  # default kernel driver
        local_ip: "152.16.100.20"
        netmask: "255.255.255.0"
        local_mac: "00:00:00:00:00:01"
      xe1:  # logical name from topology.yaml and vnfd.yaml
        driver: i40e  # default kernel driver
        local_ip: "152.16.40.20"
        netmask: "255.255.255.0"
        local_mac: "00:00:00:00:00:02"

OVS-DPDK Config host_ovs.yaml
#############################

ovs_dpdk testcase update:
``<yardstick>/samples/vnf_samples/nsut/vfw/tc_ovs_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``

Update "contexts" section
"""""""""""""""""""""""""

.. code-block:: YAML

      file: /etc/yardstick/nodes/standalone/pod_trex.yaml
    - type: StandaloneOvsDpdk
      file: /etc/yardstick/nodes/standalone/pod_ovs.yaml
        images: "/var/lib/libvirt/images/ubuntu.qcow2"
        user: ""  # update VM username
        password: ""  # update password
        cidr: '1.1.1.61/24'  # Update VM IP address, if static, <ip>/<mask> or if dynamic, <start of ip>/<mask>
        phy_port: "0000:05:00.0"
          cidr: '152.16.100.10/24'
          gateway_ip: '152.16.100.20'
        phy_port: "0000:05:00.1"
          cidr: '152.16.40.10/24'
          gateway_ip: '152.16.100.20'
Enabling other Traffic generators
---------------------------------

IxLoad
^^^^^^

1. Software needed: IxLoadAPI ``<IxLoadTclApi version>Linux64.bin.tgz`` and
   ``<IxOS version>Linux64.bin.tar.gz`` (download from the Ixia support site).
   Install ``<IxLoadTclApi version>Linux64.bin.tgz`` and
   ``<IxOS version>Linux64.bin.tar.gz``.
   If the installation was not done inside the container, then after
   installing the IXIA client check ``/opt/ixia/ixload/<ver>/bin/ixloadpython``
   and make sure you can run this command inside the yardstick container.
   Usually the user is required to copy or link
   ``/opt/ixia/python/<ver>/bin/ixiapython`` to ``/usr/bin/ixiapython<ver>``
   inside the container.

2. Update the pod_ixia.yaml file with the Ixia details.

   .. code-block:: console

       cp <repo>/etc/yardstick/nodes/pod.yaml.nsb.sample.ixia etc/yardstick/nodes/pod_ixia.yaml

   .. code-block:: YAML

       ip: 1.2.1.1  # ixia machine ip
       key_filename: /root/.ssh/id_rsa
       ixchassis: "1.2.1.7"  # ixia chassis ip
       tcl_port: "8009"  # tcl server port
       lib_path: "/opt/ixia/ixos-api/8.01.0.2/lib/ixTcl1.0"
       root_dir: "/opt/ixia/ixos-api/8.01.0.2/"
       py_bin_path: "/opt/ixia/ixload/8.01.106.3/bin/"
       py_lib_path: "/opt/ixia/ixnetwork/8.01.1029.14/lib/PythonApi"
       dut_result_dir: "/mnt/ixia"
         xe0:  # logical name from topology.yaml and vnfd.yaml
           vpci: "2:5"  # Card:port
           local_ip: "152.16.100.20"
           netmask: "255.255.0.0"
           local_mac: "00:98:10:64:14:00"
         xe1:  # logical name from topology.yaml and vnfd.yaml
           vpci: "2:6"  # Card:port
           local_ip: "152.40.40.20"
           netmask: "255.255.0.0"
           local_mac: "00:98:28:28:14:00"

   For SR-IOV/OVS-DPDK pod files, please refer to the Standalone
   Virtualization section above for the corresponding configuration.

3. Start the IxOS TCL Server (install 'Ixia IxExplorer IxOS <version>').
   You will also need to configure the IxLoad machine to start the IXIA
   IxosTclServer. This can be started like so:

   - Connect to the IxLoad machine using RDP.
   - Go to:
     ``Start->Programs->Ixia->IxOS->IxOS 8.01-GA-Patch1->Ixia Tcl Server IxOS 8.01-GA-Patch1``
     or
     ``"C:\Program Files (x86)\Ixia\IxOS\8.01-GA-Patch1\ixTclServer.exe"``

4. Create a folder ``Results`` in ``c:\`` and share the folder on the network.

5. Execute the test case in the samplevnf folder, e.g.
   ``<repo>/samples/vnf_samples/nsut/vfw/tc_baremetal_http_ixload_1b_Requests-65000_Concurrency.yaml``
IxNetwork
^^^^^^^^^

1. Software needed: ``IxNetworkAPI<ixnetwork version>Linux64.bin.tgz``
   (download from the Ixia support site).
   Install ``IxNetworkAPI<ixnetwork version>Linux64.bin.tgz``.

2. Update the pod_ixia.yaml file with the Ixia details.

   .. code-block:: console

       cp <repo>/etc/yardstick/nodes/pod.yaml.nsb.sample.ixia etc/yardstick/nodes/pod_ixia.yaml

   .. code-block:: YAML

       ip: 1.2.1.1  # ixia machine ip
       key_filename: /root/.ssh/id_rsa
       ixchassis: "1.2.1.7"  # ixia chassis ip
       tcl_port: "8009"  # tcl server port
       lib_path: "/opt/ixia/ixos-api/8.01.0.2/lib/ixTcl1.0"
       root_dir: "/opt/ixia/ixos-api/8.01.0.2/"
       py_bin_path: "/opt/ixia/ixload/8.01.106.3/bin/"
       py_lib_path: "/opt/ixia/ixnetwork/8.01.1029.14/lib/PythonApi"
       dut_result_dir: "/mnt/ixia"
         xe0:  # logical name from topology.yaml and vnfd.yaml
           vpci: "2:5"  # Card:port
           local_ip: "152.16.100.20"
           netmask: "255.255.0.0"
           local_mac: "00:98:10:64:14:00"
         xe1:  # logical name from topology.yaml and vnfd.yaml
           vpci: "2:6"  # Card:port
           local_ip: "152.40.40.20"
           netmask: "255.255.0.0"
           local_mac: "00:98:28:28:14:00"

   For SR-IOV/OVS-DPDK pod files, please refer to the Standalone
   Virtualization section above for the corresponding configuration.

3. Start the IxNetwork TCL Server.
   You will also need to configure the IxNetwork machine to start the IXIA
   IxNetworkTclServer. This can be started like so:

   - Connect to the IxNetwork machine using RDP.
   - Go to:
     ``Start->Programs->Ixia->IxNetwork->IxNetwork 7.21.893.14 GA->IxNetworkTclServer``
     (or ``IxNetworkApiServer``)

4. Execute the test case in the samplevnf folder, e.g.
   ``<repo>/samples/vnf_samples/nsut/vfw/tc_baremetal_rfc2544_ipv4_1rule_1flow_64B_ixia.yaml``