.. This work is licensed under a Creative Commons Attribution 4.0 International
.. License.
.. http://creativecommons.org/licenses/by/4.0
.. (c) OPNFV, 2016-2017 Intel Corporation.
Yardstick - NSB Testing - Installation
======================================
The Network Service Benchmarking (NSB) extends the Yardstick framework to do
VNF characterization and benchmarking in three different execution
environments: bare metal (native Linux environment), standalone virtual
environment, and managed virtualized environment (e.g. OpenStack).
It also brings in the capability to interact with external traffic generators,
both hardware- and software-based, for triggering and validating traffic
according to user-defined profiles.
The steps needed to run Yardstick with NSB testing are:

* Install Yardstick (NSB testing).
* Set up/reference a pod.yaml describing the test topology.
* Create/reference the test configuration yaml file.
* Run the test case.
Prerequisites
-------------

Refer to the chapter *Yardstick Installation* for more information on
Yardstick prerequisites.

Several prerequisites are needed for Yardstick (VNF testing):

  - Python Modules: pyzmq, pika.
Hardware & Software Ingredients
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

SUT requirements:

+-----------+--------------------+
| Item      | Description        |
+-----------+--------------------+
| Memory    | Min 20GB           |
+-----------+--------------------+
| NICs      | 2 x 10G            |
+-----------+--------------------+
| OS        | Ubuntu 16.04.3 LTS |
+-----------+--------------------+
| kernel    | 4.4.0-34-generic   |
+-----------+--------------------+
| DPDK      | 17.02              |
+-----------+--------------------+
Boot and BIOS settings:

+------------------+---------------------------------------------------+
| Boot settings    | default_hugepagesz=1G hugepagesz=1G hugepages=16  |
|                  | hugepagesz=2M hugepages=2048 isolcpus=1-11,22-33  |
|                  | nohz_full=1-11,22-33 rcu_nocbs=1-11,22-33         |
|                  | iommu=on iommu=pt intel_iommu=on                  |
|                  |                                                   |
|                  | Note: nohz_full and rcu_nocbs are used to         |
|                  | disable Linux kernel interrupts                   |
+------------------+---------------------------------------------------+
| BIOS             | CPU Power and Performance Policy <Performance>    |
|                  |                                                   |
|                  | CPU C-state Disabled                              |
|                  |                                                   |
|                  | CPU P-state Disabled                              |
|                  |                                                   |
|                  | Enhanced Intel® Speedstep® Tech Disabled          |
|                  |                                                   |
|                  | Hyper-Threading Technology (If supported) Enabled |
|                  |                                                   |
|                  | Virtualization Technology Enabled                 |
|                  |                                                   |
|                  | Intel(R) VT for Direct I/O Enabled                |
|                  |                                                   |
|                  | Coherency Enabled                                 |
|                  |                                                   |
|                  | Turbo Boost Disabled                              |
+------------------+---------------------------------------------------+
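
The boot settings above can be verified on a running host; a quick sketch
using the standard Linux procfs entries (the parameter names match the table,
adjust the list to your own grub configuration):

```shell
# Check which of the boot parameters from the table are active on this host.
for param in default_hugepagesz hugepagesz hugepages isolcpus nohz_full rcu_nocbs; do
  grep -o "${param}=[^ ]*" /proc/cmdline || echo "${param} not set"
done

# Confirm the hugepage pools the kernel actually reserved.
grep -i huge /proc/meminfo
```

If a parameter is reported as not set, update the grub configuration and
reboot before running any NSB test case.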
Install Yardstick (NSB Testing)
-------------------------------

Download the source code and install Yardstick from it:

.. code-block:: console

  git clone https://gerrit.opnfv.org/gerrit/yardstick
  cd yardstick

  # Switch to latest stable branch
  # git checkout <tag or stable branch>
  git checkout stable/euphrates

  # For Bare-Metal or Standalone Virtualization
  ./nsb_setup.sh

  # For OpenStack
  ./nsb_setup.sh <path to admin-openrc.sh>
The above command sets up a Docker container with the latest Yardstick code.
To execute into the container:

.. code-block:: console

  docker exec -it yardstick bash

It will also automatically download all the packages needed for the NSB
testing setup. Refer to chapter :doc:`04-installation` for more on Docker:
**Install Yardstick using Docker (recommended)**.
System Topology:
----------------

.. code-block:: console

  +----------+              +----------+
  |          |              |          |
  |          | (0)----->(0) |          |
  |    TG1   |              |    DUT   |
  |          |              |          |
  |          | (n)<-----(n) |          |
  +----------+              +----------+
  trafficgen_1                   vnf

Environment parameters and credentials
--------------------------------------

Config yardstick conf
^^^^^^^^^^^^^^^^^^^^^

If you did not run 'yardstick env influxdb' inside the container (which
generates a correct yardstick.conf), then create the config file manually
(run inside the container)::

  cp ./etc/yardstick/yardstick.conf.sample /etc/yardstick/yardstick.conf
  vi /etc/yardstick/yardstick.conf

Add trex_path, trex_client_lib and bin_path to the 'nsb' section.

::

  [DEFAULT]
  dispatcher = file, influxdb

  [dispatcher_influxdb]
  timeout = 5
  target = http://{YOUR_IP_HERE}:8086
  db_name = yardstick
  username = root
  password = root

  [nsb]
  trex_path=/opt/nsb_bin/trex/scripts
  bin_path=/opt/nsb_bin
  trex_client_lib=/opt/nsb_bin/trex_client/stl
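
Before running tests it is worth confirming that the edited file is still
valid INI; a self-contained sketch that round-trips a fragment like the one
above through Python's configparser (the influxdb target below is an example
value standing in for {YOUR_IP_HERE}):

```shell
# Write a sample of the config shown above to a scratch file.
cat > /tmp/yardstick_conf_check.ini <<'EOF'
[DEFAULT]
dispatcher = file, influxdb

[dispatcher_influxdb]
timeout = 5
target = http://127.0.0.1:8086

[nsb]
trex_path=/opt/nsb_bin/trex/scripts
bin_path=/opt/nsb_bin
trex_client_lib=/opt/nsb_bin/trex_client/stl
EOF

# Parse it back; a malformed file raises an error here instead of at run time.
python3 - <<'EOF'
import configparser
cfg = configparser.ConfigParser()
assert cfg.read('/tmp/yardstick_conf_check.ini')
print(cfg.get('nsb', 'trex_path'))
print(cfg.get('dispatcher_influxdb', 'target'))
EOF
```

The same check can be pointed at the real /etc/yardstick/yardstick.conf inside
the container.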

Run Yardstick - Network Service Testcases
-----------------------------------------

NS testing - using yardstick CLI
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

See :doc:`04-installation`.

.. code-block:: console

  docker exec -it yardstick /bin/bash
  source /etc/yardstick/openstack.creds (only for heat TC if nsb_setup.sh was NOT used)
  export EXTERNAL_NETWORK="<openstack public network>" (only for heat TC)
  yardstick --debug task start yardstick/samples/vnf_samples/nsut/<vnf>/<test case>

Network Service Benchmarking - Bare-Metal
-----------------------------------------

Bare-Metal Config pod.yaml describing Topology
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Bare-Metal 2-Node setup:
########################

.. code-block:: console

  +----------+              +----------+
  |          |              |          |
  |          | (0)----->(0) |          |
  |    TG1   |              |    DUT   |
  |          |              |          |
  |          | (n)<-----(n) |          |
  +----------+              +----------+
  trafficgen_1                   vnf

Bare-Metal 3-Node setup - Correlated Traffic:
#############################################

.. code-block:: console

  +----------+              +----------+            +------------+
  |          |              |          |            |            |
  |          | (0)----->(0) |          |            |    UDP     |
  |    TG1   |              |    DUT   |            |   Replay   |
  |          |              |          |            |            |
  |          |              |          |(1)<---->(0)|            |
  +----------+              +----------+            +------------+
  trafficgen_1                  vnf                 trafficgen_2

Bare-Metal Config pod.yaml
^^^^^^^^^^^^^^^^^^^^^^^^^^

Before executing Yardstick test cases, make sure that pod.yaml reflects the
topology and update all the required fields::

  cp /etc/yardstick/nodes/pod.yaml.nsb.sample /etc/yardstick/nodes/pod.yaml

.. code-block:: yaml

  nodes:
  -
      name: trafficgen_1
      role: TrafficGen
      ip: 1.1.1.1
      user: root
      password: r00t
      interfaces:
          xe0:  # logical name from topology.yaml and vnfd.yaml
              driver: i40e  # default kernel driver
              local_ip: "152.16.100.20"
              netmask: "255.255.255.0"
              local_mac: "00:00:00:00:00:01"
          xe1:  # logical name from topology.yaml and vnfd.yaml
              driver: i40e  # default kernel driver
              local_ip: "152.16.40.20"
              netmask: "255.255.255.0"
              local_mac: "00:00:00:00:00:02"
  -
      name: vnf
      role: vnf
      ip: 1.1.1.2
      user: root
      password: r00t
      host: 1.1.1.2  # BM - host == ip; virtualized env - host == compute node
      interfaces:
          xe0:  # logical name from topology.yaml and vnfd.yaml
              driver: i40e  # default kernel driver
              local_ip: "152.16.100.19"
              netmask: "255.255.255.0"
              local_mac: "00:00:00:00:00:03"
          xe1:  # logical name from topology.yaml and vnfd.yaml
              driver: i40e  # default kernel driver
              local_ip: "152.16.40.19"
              netmask: "255.255.255.0"
              local_mac: "00:00:00:00:00:04"
      routing_table:
      - network: "152.16.100.20"
        netmask: "255.255.255.0"
        gateway: "152.16.100.20"
      - network: "152.16.40.20"
        netmask: "255.255.255.0"
        gateway: "152.16.40.20"
      nd_route_tbl:
      - network: "0064:ff9b:0:0:0:0:9810:6414"
        gateway: "0064:ff9b:0:0:0:0:9810:6414"
      - network: "0064:ff9b:0:0:0:0:9810:2814"
        gateway: "0064:ff9b:0:0:0:0:9810:2814"
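
The address fields in pod.yaml are easy to mistype; a self-contained sketch
(stdlib only, with sample values taken from the file above) that validates IP
and MAC formats before a test run:

```shell
python3 - <<'EOF'
import ipaddress
import re

# Sample values from the pod.yaml above; extend the dict with your own.
interfaces = {
    "xe0": {"local_ip": "152.16.100.20", "netmask": "255.255.255.0",
            "local_mac": "00:00:00:00:00:01"},
    "xe1": {"local_ip": "152.16.40.20", "netmask": "255.255.255.0",
            "local_mac": "00:00:00:00:00:02"},
}

mac_re = re.compile(r"([0-9A-Fa-f]{2}:){5}[0-9A-Fa-f]{2}$")
for name, iface in interfaces.items():
    ipaddress.IPv4Address(iface["local_ip"])   # raises ValueError if malformed
    ipaddress.IPv4Address(iface["netmask"])
    assert mac_re.match(iface["local_mac"]), name + ": bad MAC"
print("all interface address fields are well-formed")
EOF
```

A MAC typed with a stray dot (e.g. "00:00.00:00:00:02") fails this check,
which is cheaper to find here than in a failed traffic run.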

Network Service Benchmarking - Standalone Virtualization
--------------------------------------------------------

SR-IOV
^^^^^^

SR-IOV Pre-requisites
#####################

On Host:
 a) Create a bridge for VM to connect to external network

    .. code-block:: console

      brctl addbr br-int
      brctl addif br-int <interface_name>  # This interface is connected to internet

 b) Build guest image for VNF to run.
    Most of the sample test cases in Yardstick are using a guest image called
    ``yardstick-image`` which deviates from an Ubuntu Cloud Server image.
    Yardstick has a tool for building this custom image with samplevnf.
    It is necessary to have ``sudo`` rights to use this tool.

    You may also need to install several additional packages to use this
    tool, by following the commands below::

      sudo apt-get update && sudo apt-get install -y qemu-utils kpartx

    This image can be built using the following command in the directory
    where Yardstick is installed::

      export YARD_IMG_ARCH='amd64'
      echo "Defaults env_keep += \"YARD_IMG_ARCH\"" | sudo tee -a /etc/sudoers

    Please use the ansible script to generate a cloud image; refer to
    :doc:`04-installation` for more details.

.. note:: The VM should be built with a static IP and should be accessible
   from the yardstick host.

SR-IOV Config pod.yaml describing Topology
##########################################

SR-IOV 2-Node setup:
####################

.. code-block:: console

                                 +--------------------+
                                 |                    |
                                 |                    |
                                 |        DUT         |
                                 |       (VNF)        |
                                 |                    |
                                 +--------------------+
                                 | VF NIC |  | VF NIC |
                                 +--------+  +--------+
                                       ^          ^
                                       |          |
                                       |          |
  +----------+               +-------------------------+
  |          |               |       ^          ^      |
  |          |               |       |          |      |
  |          | (0)<----->(0) | ------           |      |
  |    TG1   |               |           SUT    |      |
  |          |               |                  |      |
  |          | (n)<----->(n) |------------------       |
  +----------+               +-------------------------+
  trafficgen_1                          host

SR-IOV 3-Node setup - Correlated Traffic
########################################

.. code-block:: console

                                 +--------------------+
                                 |                    |
                                 |                    |
                                 |        DUT         |
                                 |       (VNF)        |
                                 |                    |
                                 +--------------------+
                                 | VF NIC |  | VF NIC |
                                 +--------+  +--------+
                                       ^          ^
                                       |          |
                                       |          |
  +----------+               +-------------------------+            +--------------+
  |          |               |       ^          ^      |            |              |
  |          |               |       |          |      |            |     TG2      |
  |          | (0)<----->(0) | ------           |      |            | (UDP Replay) |
  |    TG1   |               |           SUT    |      |            |              |
  |          |               |                  |      | (n)<-->(n) |              |
  |          | (n)<----->(n) |------------------       |            |              |
  +----------+               +-------------------------+            +--------------+
  trafficgen_1                          host                        trafficgen_2

Before executing Yardstick test cases, make sure that pod.yaml reflects the
topology and update all the required fields.

.. code-block:: console

  cp <yardstick>/etc/yardstick/nodes/standalone/trex_bm.yaml.sample /etc/yardstick/nodes/standalone/pod_trex.yaml
  cp <yardstick>/etc/yardstick/nodes/standalone/host_sriov.yaml /etc/yardstick/nodes/standalone/host_sriov.yaml

.. note:: Update all the required fields like ip, user, password, pcis, etc...

SR-IOV Config pod_trex.yaml
###########################

.. code-block:: yaml

  nodes:
  -
      name: trafficgen_1
      role: TrafficGen
      ip: 1.1.1.1
      user: root
      password: r00t
      key_filename: /root/.ssh/id_rsa
      interfaces:
          xe0:  # logical name from topology.yaml and vnfd.yaml
              driver: i40e  # default kernel driver
              local_ip: "152.16.100.20"
              netmask: "255.255.255.0"
              local_mac: "00:00:00:00:00:01"
          xe1:  # logical name from topology.yaml and vnfd.yaml
              driver: i40e  # default kernel driver
              local_ip: "152.16.40.20"
              netmask: "255.255.255.0"
              local_mac: "00:00:00:00:00:02"

SR-IOV Config host_sriov.yaml
#############################

.. code-block:: yaml

  nodes:
  -
      name: sriov
      role: Sriov
      ip: 192.168.100.101
      user: ""
      password: ""

SR-IOV testcase update: ``<yardstick>/samples/vnf_samples/nsut/vfw/tc_sriov_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``

Update "contexts" section
"""""""""""""""""""""""""

.. code-block:: yaml

  contexts:
   - name: yardstick
     type: Node
     file: /etc/yardstick/nodes/standalone/pod_trex.yaml
   - type: StandaloneSriov
     file: /etc/yardstick/nodes/standalone/host_sriov.yaml
     name: yardstick
     vm_deploy: True
     flavor:
       images: "/var/lib/libvirt/images/ubuntu.qcow2"
       ram: 4096
       user: ""      # update VM username
       password: ""  # update password
     servers:
       vnf:
         network_ports:
           mgmt:
             cidr: '1.1.1.61/24'  # Update VM IP address, if static, <ip>/<mask> or if dynamic, <start of ip>/<mask>
           xe0:
             phy_port: "0000:05:00.0"
             cidr: '152.16.100.10/24'
             gateway_ip: '152.16.100.20'
           xe1:
             phy_port: "0000:05:00.1"
             cidr: '152.16.40.10/24'
             gateway_ip: '152.16.100.20'
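
For SR-IOV the virtual functions behind the ``phy_port`` PFs must exist before
the test runs. A host-specific setup fragment, shown for reference rather than
verbatim use (the PCI address matches the sample above; the VF count is an
example, run as root):

```shell
# Create two VFs on the PF at the sample phy_port address.
echo 2 > /sys/bus/pci/devices/0000:05:00.0/sriov_numvfs

# Confirm the VFs appeared.
lspci | grep -i "Virtual Function"
```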

OVS-DPDK
^^^^^^^^

OVS-DPDK Pre-requisites
#######################

On Host:
 a) Create a bridge for VM to connect to external network

    .. code-block:: console

      brctl addbr br-int
      brctl addif br-int <interface_name>  # This interface is connected to internet

 b) Build guest image for VNF to run.
    Most of the sample test cases in Yardstick are using a guest image called
    ``yardstick-image`` which deviates from an Ubuntu Cloud Server image.
    Yardstick has a tool for building this custom image with samplevnf.
    It is necessary to have ``sudo`` rights to use this tool.

    You may also need to install several additional packages to use this
    tool, by following the commands below::

      sudo apt-get update && sudo apt-get install -y qemu-utils kpartx

    This image can be built using the following command in the directory
    where Yardstick is installed::

      export YARD_IMG_ARCH='amd64'
      echo "Defaults env_keep += \"YARD_IMG_ARCH\"" | sudo tee -a /etc/sudoers
      sudo tools/yardstick-img-dpdk-modify tools/ubuntu-server-cloudimg-samplevnf-modify.sh

    For more details refer to chapter :doc:`04-installation`.

.. note:: The VM should be built with a static IP and should be accessible
   from the yardstick host.

 c) OVS & DPDK version:
    - OVS 2.7 and DPDK 16.11.1 and above are supported

 d) Setup OVS/DPDK on host.
    Please refer to the `OVS-DPDK
    <http://docs.openvswitch.org/en/latest/intro/install/dpdk/>`_ guide on
    how to set up OVS with DPDK.
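
As a minimal sketch of the linked guide (assuming OVS was built with DPDK
support and hugepages are mounted; the bridge name ``br0``, port name
``dpdk0`` and PCI address are examples):

```shell
# Enable DPDK support in ovsdb (one-time).
ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true

# Create a userspace (netdev) bridge and attach a physical DPDK port to it.
ovs-vsctl add-br br0 -- set bridge br0 datapath_type=netdev
ovs-vsctl add-port br0 dpdk0 -- set Interface dpdk0 type=dpdk \
    options:dpdk-devargs=0000:05:00.0
```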

OVS-DPDK Config pod.yaml describing Topology
############################################

OVS-DPDK 2-Node setup:
######################

.. code-block:: console

                                 +--------------------+
                                 |                    |
                                 |                    |
                                 |        DUT         |
                                 |       (VNF)        |
                                 |                    |
                                 +--------------------+
                                 | virtio |  | virtio |
                                 +--------+  +--------+
                                       ^          ^
                                       |          |
                                       |          |
                                 +--------+  +--------+
                                 | vHOST0 |  | vHOST1 |
  +----------+               +-------------------------+
  |          |               |       ^          ^      |
  |          |               |       |          |      |
  |          | (0)<----->(0) | ------           |      |
  |    TG1   |               |          SUT     |      |
  |          |               |       (ovs-dpdk) |      |
  |          | (n)<----->(n) |------------------       |
  +----------+               +-------------------------+
  trafficgen_1                          host

OVS-DPDK 3-Node setup - Correlated Traffic
##########################################

.. code-block:: console

                                 +--------------------+
                                 |                    |
                                 |                    |
                                 |        DUT         |
                                 |       (VNF)        |
                                 |                    |
                                 +--------------------+
                                 | virtio |  | virtio |
                                 +--------+  +--------+
                                       ^          ^
                                       |          |
                                       |          |
                                 +--------+  +--------+
                                 | vHOST0 |  | vHOST1 |
  +----------+               +-------------------------+            +------------+
  |          |               |       ^          ^      |            |            |
  |          |               |       |          |      |            |    TG2     |
  |          | (0)<----->(0) | ------           |      |            |(UDP Replay)|
  |    TG1   |               |          SUT     |      |            |            |
  |          |               |       (ovs-dpdk) |      | (n)<-->(n) |            |
  |          | (n)<----->(n) |------------------       |            |            |
  +----------+               +-------------------------+            +------------+
  trafficgen_1                          host                        trafficgen_2

Before executing Yardstick test cases, make sure that pod.yaml reflects the
topology and update all the required fields.

.. code-block:: console

  cp <yardstick>/etc/yardstick/nodes/standalone/trex_bm.yaml.sample /etc/yardstick/nodes/standalone/pod_trex.yaml
  cp <yardstick>/etc/yardstick/nodes/standalone/host_ovs.yaml /etc/yardstick/nodes/standalone/host_ovs.yaml

.. note:: Update all the required fields like ip, user, password, pcis, etc...

OVS-DPDK Config pod_trex.yaml
#############################

.. code-block:: yaml

  nodes:
  -
      name: trafficgen_1
      role: TrafficGen
      ip: 1.1.1.1
      user: root
      password: r00t
      interfaces:
          xe0:  # logical name from topology.yaml and vnfd.yaml
              driver: i40e  # default kernel driver
              local_ip: "152.16.100.20"
              netmask: "255.255.255.0"
              local_mac: "00:00:00:00:00:01"
          xe1:  # logical name from topology.yaml and vnfd.yaml
              driver: i40e  # default kernel driver
              local_ip: "152.16.40.20"
              netmask: "255.255.255.0"
              local_mac: "00:00:00:00:00:02"

OVS-DPDK Config host_ovs.yaml
#############################

.. code-block:: yaml

  nodes:
  -
      name: ovs_dpdk
      role: OvsDpdk
      ip: 192.168.100.101
      user: ""
      password: ""

ovs_dpdk testcase update: ``<yardstick>/samples/vnf_samples/nsut/vfw/tc_ovs_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``

Update "contexts" section
"""""""""""""""""""""""""

.. code-block:: yaml

  contexts:
   - name: yardstick
     type: Node
     file: /etc/yardstick/nodes/standalone/pod_trex.yaml
   - type: StandaloneOvsDpdk
     name: yardstick
     file: /etc/yardstick/nodes/standalone/pod_ovs.yaml
     vm_deploy: True
     flavor:
       images: "/var/lib/libvirt/images/ubuntu.qcow2"
       ram: 4096
       user: ""      # update VM username
       password: ""  # update password
     servers:
       vnf:
         network_ports:
           mgmt:
             cidr: '1.1.1.61/24'  # Update VM IP address, if static, <ip>/<mask> or if dynamic, <start of ip>/<mask>
           xe0:
             phy_port: "0000:05:00.0"
             cidr: '152.16.100.10/24'
             gateway_ip: '152.16.100.20'
           xe1:
             phy_port: "0000:05:00.1"
             cidr: '152.16.40.10/24'
             gateway_ip: '152.16.100.20'

Enabling other Traffic generators
---------------------------------

IxLoad
^^^^^^

1. Software needed: IxLoadAPI ``<IxLoadTclApi version>Linux64.bin.tgz`` and
   ``<IxOS version>Linux64.bin.tar.gz`` (download from ixia support site).
   Install ``<IxLoadTclApi version>Linux64.bin.tgz`` and
   ``<IxOS version>Linux64.bin.tar.gz``.
   If the installation was not done inside the container, after installing
   the IXIA client check /opt/ixia/ixload/<ver>/bin/ixloadpython and make
   sure you can run this cmd inside the yardstick container. Usually the
   user is required to copy or link /opt/ixia/python/<ver>/bin/ixiapython
   to /usr/bin/ixiapython<ver> inside the container.
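
The copy/link step can be sketched as below; the versioned paths are examples
(substitute your installed ``<ver>``), demonstrated here under /tmp with
stand-in files so it can be tried without an IXIA install:

```shell
# Stand-in for /opt/ixia/python/<ver>/bin/ixiapython (example version 8.01).
mkdir -p /tmp/ixia-demo/opt/python/8.01/bin /tmp/ixia-demo/usr/bin
touch /tmp/ixia-demo/opt/python/8.01/bin/ixiapython

# Link it into a bin directory under a versioned name, as the text describes.
ln -sf /tmp/ixia-demo/opt/python/8.01/bin/ixiapython \
    /tmp/ixia-demo/usr/bin/ixiapython8.01
ls -l /tmp/ixia-demo/usr/bin/ixiapython8.01
```

Inside the container the source would be the real /opt/ixia path and the link
target /usr/bin/ixiapython<ver>.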

2. Update pod_ixia.yaml file with ixia details.

   .. code-block:: console

     cp <repo>/etc/yardstick/nodes/pod.yaml.nsb.sample.ixia etc/yardstick/nodes/pod_ixia.yaml

   .. code-block:: yaml

     nodes:
     -
       ip: 1.2.1.1  # ixia machine ip
       key_filename: /root/.ssh/id_rsa
       tg_config:
         ixchassis: "1.2.1.7"  # ixia chassis ip
         tcl_port: "8009"  # tcl server port
         lib_path: "/opt/ixia/ixos-api/8.01.0.2/lib/ixTcl1.0"
         root_dir: "/opt/ixia/ixos-api/8.01.0.2/"
         py_bin_path: "/opt/ixia/ixload/8.01.106.3/bin/"
         py_lib_path: "/opt/ixia/ixnetwork/8.01.1029.14/lib/PythonApi"
         dut_result_dir: "/mnt/ixia"
       interfaces:
         xe0:  # logical name from topology.yaml and vnfd.yaml
           vpci: "2:5"  # Card:port
           local_ip: "152.16.100.20"
           netmask: "255.255.0.0"
           local_mac: "00:98:10:64:14:00"
         xe1:  # logical name from topology.yaml and vnfd.yaml
           vpci: "2:6"  # Card:port
           local_ip: "152.16.40.20"
           netmask: "255.255.0.0"
           local_mac: "00:98:28:28:14:00"

   For sriov/ovs_dpdk pod files, please refer to the Standalone
   Virtualization section above for ovs-dpdk/sriov configuration.

3. Start IxOS TCL Server (Install 'Ixia IxExplorer IxOS <version>').
   You will also need to configure the IxLoad machine to start the IXIA
   IxosTclServer. This can be started like so:

   - Connect to the IxLoad machine using RDP
   - Go to:
     ``Start->Programs->Ixia->IxOS->IxOS 8.01-GA-Patch1->Ixia Tcl Server IxOS 8.01-GA-Patch1``
     or
     ``"C:\Program Files (x86)\Ixia\IxOS\8.01-GA-Patch1\ixTclServer.exe"``

4. Create a folder ``Results`` in c:\ and share the folder on the network.

5. Execute testcase in samplevnf folder, e.g.
   ``<repo>/samples/vnf_samples/nsut/vfw/tc_baremetal_http_ixload_1b_Requests-65000_Concurrency.yaml``

IxNetwork
^^^^^^^^^

1. Software needed: ``IxNetworkAPI<ixnetwork version>Linux64.bin.tgz``
   (download from ixia support site).
   Install ``IxNetworkAPI<ixnetwork version>Linux64.bin.tgz``.

2. Update pod_ixia.yaml file with ixia details.

   .. code-block:: console

     cp <repo>/etc/yardstick/nodes/pod.yaml.nsb.sample.ixia etc/yardstick/nodes/pod_ixia.yaml

   .. code-block:: yaml

     nodes:
     -
       ip: 1.2.1.1  # ixia machine ip
       key_filename: /root/.ssh/id_rsa
       tg_config:
         ixchassis: "1.2.1.7"  # ixia chassis ip
         tcl_port: "8009"  # tcl server port
         lib_path: "/opt/ixia/ixos-api/8.01.0.2/lib/ixTcl1.0"
         root_dir: "/opt/ixia/ixos-api/8.01.0.2/"
         py_bin_path: "/opt/ixia/ixload/8.01.106.3/bin/"
         py_lib_path: "/opt/ixia/ixnetwork/8.01.1029.14/lib/PythonApi"
         dut_result_dir: "/mnt/ixia"
       interfaces:
         xe0:  # logical name from topology.yaml and vnfd.yaml
           vpci: "2:5"  # Card:port
           local_ip: "152.16.100.20"
           netmask: "255.255.0.0"
           local_mac: "00:98:10:64:14:00"
         xe1:  # logical name from topology.yaml and vnfd.yaml
           vpci: "2:6"  # Card:port
           local_ip: "152.16.40.20"
           netmask: "255.255.0.0"
           local_mac: "00:98:28:28:14:00"

   For sriov/ovs_dpdk pod files, please refer to the Standalone
   Virtualization section above for ovs-dpdk/sriov configuration.

3. Start IxNetwork TCL Server.
   You will also need to configure the IxNetwork machine to start the IXIA
   IxNetworkTclServer. This can be started like so:

   - Connect to the IxNetwork machine using RDP
   - Go to:
     ``Start->Programs->Ixia->IxNetwork->IxNetwork 7.21.893.14 GA->IxNetworkTclServer``
     (or ``IxNetworkApiServer``)

4. Execute testcase in samplevnf folder, e.g.
   ``<repo>/samples/vnf_samples/nsut/vfw/tc_baremetal_rfc2544_ipv4_1rule_1flow_64B_ixia.yaml``