.. This work is licensed under a Creative Commons Attribution 4.0 International
.. http://creativecommons.org/licenses/by/4.0
.. (c) OPNFV, 2016-2018 Intel Corporation.

.. Convention for heading levels in Yardstick documentation:

   ======= Heading 0 (reserved for the title in a document)

   Avoid deeper levels because they do not render well.

.. _OVS-DPDK: http://docs.openvswitch.org/en/latest/intro/install/dpdk/
.. _devstack: https://docs.openstack.org/devstack/pike/
The steps needed to run Yardstick with NSB testing are:

* Install Yardstick (NSB Testing).
* Setup/reference ``pod.yaml`` describing the test topology.
* Create/reference the test configuration YAML file.

Refer to :doc:`04-installation` for more information on Yardstick
installation.

Several prerequisites are needed for Yardstick (VNF testing):

* Python modules: pyzmq, pika.
Hardware & Software Ingredients
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

======= ===================
kernel  4.4.0-34-generic
======= ===================
Boot and BIOS settings:

============= =================================================
Boot settings default_hugepagesz=1G hugepagesz=1G hugepages=16
              hugepagesz=2M hugepages=2048 isolcpus=1-11,22-33
              nohz_full=1-11,22-33 rcu_nocbs=1-11,22-33
              iommu=on iommu=pt intel_iommu=on
              Note: nohz_full and rcu_nocbs are used to disable
              Linux kernel interrupts
BIOS          CPU Power and Performance Policy <Performance>
              Enhanced Intel® Speedstep® Tech Disabled
              Hyper-Threading Technology (If supported) Enabled
              Virtualization Technology Enabled
              Intel(R) VT for Direct I/O Enabled
============= =================================================
Install Yardstick (NSB Testing)
-------------------------------

Yardstick with NSB can be installed using ``nsb_setup.sh``.
The ``nsb_setup.sh`` script allows you to:

1. Install Yardstick in the specified mode: bare metal or container.
   Refer to :doc:`04-installation`.
2. Install package dependencies on remote servers used as traffic generators
   or sample VNFs. Add such servers to the ``install-inventory.ini`` file, in
   either the ``yardstick-standalone`` or ``yardstick-baremetal`` server
   group. The script configures IOMMU, hugepages, open file limits, CPU
   isolation, etc.
3. Build the VM image, either NSB or normal. The NSB VM image is used to run
   Yardstick sample VNF tests, e.g. vFW, vACL, vCGNAPT.
   The normal VM image is used to run Yardstick ping tests in an OpenStack
   context.
4. Add the NSB or normal VM image to OpenStack, together with OpenStack
   variables.
First, configure the network proxy, either by exporting the environment
variables or by setting the global environment file ``/etc/environment``::

    http_proxy='http://proxy.company.com:port'
    https_proxy='http://proxy.company.com:port'

.. code-block:: console

    export http_proxy='http://proxy.company.com:port'
    export https_proxy='http://proxy.company.com:port'

Download the source code and check out the latest stable branch:

.. code-block:: console

    git clone https://gerrit.opnfv.org/gerrit/yardstick
    cd yardstick

    # Switch to latest stable branch
    git checkout stable/gambia
Modify the Yardstick installation inventory used by Ansible::

    cat ./ansible/install-inventory.ini
    [jumphost]
    localhost ansible_connection=local

    # The section below exists only for backward compatibility;
    # it will be removed later.

    [yardstick-standalone]
    standalone ansible_host=192.168.2.51 ansible_connection=ssh

    [yardstick-baremetal]
    baremetal ansible_host=192.168.2.52 ansible_connection=ssh

    [all:vars]
    inst_mode_baremetal=baremetal
    inst_mode_container=container
    inst_mode_container_pull=container_pull
    ubuntu_archive={"amd64": "http://archive.ubuntu.com/ubuntu/", "arm64": "http://ports.ubuntu.com/ubuntu-ports/"}
    ansible_ssh_pass=root # OR ansible_ssh_private_key_file=/root/.ssh/id_rsa
.. note::

   Before running ``nsb_setup.sh``, make sure Python is installed on the
   servers added to the ``yardstick-standalone`` or ``yardstick-baremetal``
   groups.

.. note::

   SSH access without a password needs to be configured for all the nodes
   defined in the ``install-inventory.ini`` file.
   If you want to use password authentication, you need to install
   ``sshpass``::

     sudo -EH apt-get install sshpass

.. note::

   A VM image built by means other than Yardstick can be added to OpenStack.
   Uncomment and set the correct path to the VM image in the
   ``install-inventory.ini`` file::

     path_to_img=/tmp/workspace/yardstick-image.img
.. note::

   CPU isolation can be applied to the remote servers.
   Uncomment and modify the relevant parameters in the
   ``install-inventory.ini`` file.

By default, ``nsb_setup.sh`` pulls the Yardstick image based on Ubuntu 16.04
from Docker Hub and starts a container, builds the NSB VM image based on
Ubuntu 16.04, and installs packages on the servers given in the
``yardstick-standalone`` and ``yardstick-baremetal`` host groups.

To change the default behavior, modify the parameters for ``install.yaml``
in the ``nsb_setup.sh`` file.
Refer to :doc:`04-installation` for more details on ``install.yaml``
parameters.

To execute an installation for a **BareMetal** or a **Standalone context**::

    ./nsb_setup.sh

To execute an installation for an **OpenStack** context::

    ./nsb_setup.sh <path to admin-openrc.sh>

.. note::

   The Yardstick VM image (NSB or normal) cannot be built inside a VM.

.. note::

   ``nsb_setup.sh`` configures hugepages, CPU isolation and IOMMU on the
   grub command line. A reboot of the servers in the ``yardstick-standalone``
   or ``yardstick-baremetal`` groups in the ``install-inventory.ini`` file is
   required to apply those changes.

The above commands will set up Docker with the latest Yardstick code. To
execute::

    docker exec -it yardstick bash

It will also automatically download all the packages needed for the NSB
testing setup. Refer to :doc:`04-installation` for more on Docker.
**Install Yardstick using Docker (recommended)**
Environment parameters and credentials
--------------------------------------

Configure yardstick.conf
^^^^^^^^^^^^^^^^^^^^^^^^

If you did not run ``yardstick env influxdb`` inside the container to generate
``yardstick.conf``, then create the config file manually (run inside the
container)::

    cp ./etc/yardstick/yardstick.conf.sample /etc/yardstick/yardstick.conf
    vi /etc/yardstick/yardstick.conf

Add ``trex_path``, ``trex_client_lib`` and ``bin_path`` to the ``nsb``
section::

    [DEFAULT]
    dispatcher = influxdb

    [dispatcher_influxdb]
    target = http://{YOUR_IP_HERE}:8086

    [nsb]
    trex_path=/opt/nsb_bin/trex/scripts
    bin_path=/opt/nsb_bin
    trex_client_lib=/opt/nsb_bin/trex_client/stl
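As a quick sanity check (a throwaway sketch, not part of Yardstick itself),
the required ``[nsb]`` keys can be verified with Python's ``configparser``;
the key names are the ones listed above:

```python
import configparser

# Keys the [nsb] section above is expected to provide.
REQUIRED_NSB_KEYS = ("trex_path", "trex_client_lib", "bin_path")

def missing_nsb_keys(conf_text):
    """Return the [nsb] keys absent from yardstick.conf-style text."""
    cfg = configparser.ConfigParser()
    cfg.read_string(conf_text)
    if not cfg.has_section("nsb"):
        return list(REQUIRED_NSB_KEYS)
    return [k for k in REQUIRED_NSB_KEYS if not cfg.has_option("nsb", k)]
```

Feed it the contents of ``/etc/yardstick/yardstick.conf``; an empty list
means the section is complete.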
Run Yardstick - Network Service Testcases
-----------------------------------------

NS testing - using yardstick CLI
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

See :doc:`04-installation`.

Connect to the Yardstick container::

    docker exec -it yardstick /bin/bash

If you're running ``heat`` testcases and ``nsb_setup.sh`` was not used::

    source /etc/yardstick/openstack.creds

In addition to the above, you need to set the ``EXTERNAL_NETWORK`` for
OpenStack::

    export EXTERNAL_NETWORK="<openstack public network>"

Finally, you should be able to run the testcase::

    yardstick --debug task start yardstick/samples/vnf_samples/nsut/<vnf>/<test case>
Network Service Benchmarking - Bare-Metal
-----------------------------------------

Bare-Metal Config pod.yaml describing Topology
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Bare-Metal 2-Node setup
+++++++++++++++++++++++

.. code-block:: console

    +----------+              +----------+
    |          |              |          |
    |          | (0)----->(0) |          |
    |    TG1   |              |    DUT   |
    |          |              |          |
    |          | (n)<-----(n) |          |
    +----------+              +----------+
    trafficgen_0                  vnf

Bare-Metal 3-Node setup - Correlated Traffic
++++++++++++++++++++++++++++++++++++++++++++

.. code-block:: console

    +----------+              +----------+            +------------+
    |          |              |          |            |            |
    |          | (0)----->(0) |          |            |    UDP     |
    |   TG1    |              |   DUT    |            |   Replay   |
    |          |              |          |            |            |
    |          |              |          |(1)<---->(0)|            |
    +----------+              +----------+            +------------+
    trafficgen_0                  vnf                  trafficgen_1
Bare-Metal Config pod.yaml
^^^^^^^^^^^^^^^^^^^^^^^^^^

Before executing Yardstick test cases, make sure that ``pod.yaml`` reflects
the topology and update all the required fields::

    cp /etc/yardstick/nodes/pod.yaml.nsb.sample /etc/yardstick/nodes/pod.yaml

.. code-block:: YAML

    nodes:
    -
        name: trafficgen_0
        ...
        interfaces:
            xe0:  # logical name from topology.yaml and vnfd.yaml
                ...
                driver: i40e # default kernel driver
                local_ip: "152.16.100.20"
                netmask: "255.255.255.0"
                local_mac: "00:00:00:00:00:01"
            xe1:  # logical name from topology.yaml and vnfd.yaml
                ...
                driver: i40e # default kernel driver
                local_ip: "152.16.40.20"
                netmask: "255.255.255.0"
                local_mac: "00:00:00:00:00:02"
    -
        name: vnf
        ...
        host: 1.1.1.2 # BM - host == ip; virtualized env - host == compute node
        interfaces:
            xe0:  # logical name from topology.yaml and vnfd.yaml
                ...
                driver: i40e # default kernel driver
                local_ip: "152.16.100.19"
                netmask: "255.255.255.0"
                local_mac: "00:00:00:00:00:03"
            xe1:  # logical name from topology.yaml and vnfd.yaml
                ...
                driver: i40e # default kernel driver
                local_ip: "152.16.40.19"
                netmask: "255.255.255.0"
                local_mac: "00:00:00:00:00:04"
        routing_table:
        - network: "152.16.100.20"
          netmask: "255.255.255.0"
          gateway: "152.16.100.20"
        - network: "152.16.40.20"
          netmask: "255.255.255.0"
          gateway: "152.16.40.20"
        nd_route_tbl:
        - network: "0064:ff9b:0:0:0:0:9810:6414"
          ...
          gateway: "0064:ff9b:0:0:0:0:9810:6414"
        - network: "0064:ff9b:0:0:0:0:9810:2814"
          ...
          gateway: "0064:ff9b:0:0:0:0:9810:2814"
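A common ``pod.yaml`` mistake is a gateway that is outside the interface
subnet. A small sanity check with Python's ``ipaddress`` module (values taken
from the sample ``routing_table`` above; this is an illustration only, not
part of Yardstick):

```python
import ipaddress

# (network, netmask, gateway) triples from the sample routing_table above.
routes = [
    ("152.16.100.20", "255.255.255.0", "152.16.100.20"),
    ("152.16.40.20", "255.255.255.0", "152.16.40.20"),
]

for network, netmask, gateway in routes:
    # strict=False masks host bits, e.g. 152.16.100.20/24 -> 152.16.100.0/24
    subnet = ipaddress.ip_network("{}/{}".format(network, netmask), strict=False)
    assert ipaddress.ip_address(gateway) in subnet
```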
Standalone Virtualization
-------------------------

SR-IOV
^^^^^^

SR-IOV Pre-requisites
+++++++++++++++++++++

On the host, where the VM is created:

a) Create and configure a bridge named ``br-int`` for the VM to connect to
   the external network. Currently this can be done using a VXLAN tunnel.

   Execute the following on the host where the VM is created::

      ip link add type vxlan remote <Jumphost IP> local <DUT IP> id <ID: 10> dstport 4789
      brctl addif br-int vxlan0
      ip link set dev vxlan0 up
      ip addr add <IP#1, like: 172.20.2.1/24> dev br-int
      ip link set dev br-int up

   .. note:: You may need to add extra rules to iptables to forward traffic.

   .. code-block:: console

      iptables -A FORWARD -i br-int -s <network ip address>/<netmask> -j ACCEPT
      iptables -A FORWARD -o br-int -d <network ip address>/<netmask> -j ACCEPT

   Execute the following on the jump host:

   .. code-block:: console

      ip link add type vxlan remote <DUT IP> local <Jumphost IP> id <ID: 10> dstport 4789
      ip addr add <IP#2, like: 172.20.2.2/24> dev vxlan0
      ip link set dev vxlan0 up

   .. note:: The host and the jump host are different bare-metal servers.

b) Modify the test case management CIDR.
   IP addresses IP#1, IP#2 and the CIDR must be in the same network.
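The same-network requirement can be checked programmatically. With the example
addresses used above (``172.20.2.1`` on ``br-int`` and ``172.20.2.2`` on the
jump host), a sketch using Python's ``ipaddress`` module:

```python
import ipaddress

# Management network and the two tunnel endpoints from the example above.
mgmt_cidr = ipaddress.ip_network("172.20.2.0/24")
ip1 = ipaddress.ip_address("172.20.2.1")  # br-int on the host (IP#1)
ip2 = ipaddress.ip_address("172.20.2.2")  # vxlan0 on the jump host (IP#2)

# IP#1, IP#2 and the CIDR must all be in the same network.
assert ip1 in mgmt_cidr and ip2 in mgmt_cidr
```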
c) Build the guest image for the VNF to run.
   Most of the sample test cases in Yardstick use a guest image called
   ``yardstick-nsb-image``, which deviates from an Ubuntu Cloud Server image.
   Yardstick has a tool for building this custom image with SampleVNF.
   It is necessary to have ``sudo`` rights to use this tool.

   You may also need to install several additional packages to use this tool,
   by following the commands below::

      sudo apt-get update && sudo apt-get install -y qemu-utils kpartx

   This image can be built using the following command in the directory where
   Yardstick is installed::

      export YARD_IMG_ARCH='amd64'
      echo "Defaults env_keep += \"YARD_IMG_ARCH\"" | sudo tee -a /etc/sudoers

   For instructions on generating a cloud image using Ansible, and for more
   details, refer to :doc:`04-installation`.

   .. note:: The VM should be built with a static IP and should be accessible
      from the Yardstick host.
SR-IOV Config pod.yaml describing Topology
++++++++++++++++++++++++++++++++++++++++++

SR-IOV 2-Node setup
+++++++++++++++++++

.. code-block:: console

                                 +--------------------+
                                 |                    |
                                 |                    |
                                 |        DUT         |
                                 |       (VNF)        |
                                 |                    |
                                 +--------------------+
                                 | VF NIC |  | VF NIC |
                                 +--------+  +--------+
                                       ^          ^
                                       |          |
    +----------+               +-------------------------+
    |          |               |       ^          ^      |
    |          |               |       |          |      |
    |          | (0)<----->(0) | ------ SUT       |      |
    |    TG1   |               |                  |      |
    |          | (n)<----->(n) | -----------------       |
    |          |               |                         |
    +----------+               +-------------------------+
    trafficgen_0                          host

SR-IOV 3-Node setup - Correlated Traffic
++++++++++++++++++++++++++++++++++++++++

.. code-block:: console

                                 +--------------------+
                                 |                    |
                                 |                    |
                                 |        DUT         |
                                 |       (VNF)        |
                                 |                    |
                                 +--------------------+
                                 | VF NIC |  | VF NIC |
                                 +--------+  +--------+
                                       ^          ^
                                       |          |
    +----------+               +---------------------+            +--------------+
    |          |               |     ^          ^    |            |              |
    |          |               |     |          |    |            |     TG2      |
    |          | (0)<----->(0) |-----           |    |            | (UDP Replay) |
    |   TG1    |               |         SUT    |    |            |              |
    |          | (n)<----->(n) | ---------------     | (n)<-->(n) |              |
    +----------+               +---------------------+            +--------------+
    trafficgen_0                        host                        trafficgen_1
Before executing Yardstick test cases, make sure that ``pod.yaml`` reflects
the topology and update all the required fields.

.. code-block:: console

    cp <yardstick>/etc/yardstick/nodes/standalone/trex_bm.yaml.sample /etc/yardstick/nodes/standalone/pod_trex.yaml
    cp <yardstick>/etc/yardstick/nodes/standalone/host_sriov.yaml /etc/yardstick/nodes/standalone/host_sriov.yaml

.. note:: Update all the required fields like ip, user, password, pcis, etc.

SR-IOV Config pod_trex.yaml
+++++++++++++++++++++++++++
.. code-block:: YAML

    nodes:
    -
        name: trafficgen_0
        ...
        key_filename: /root/.ssh/id_rsa
        interfaces:
            xe0:  # logical name from topology.yaml and vnfd.yaml
                ...
                driver: i40e # default kernel driver
                local_ip: "152.16.100.20"
                netmask: "255.255.255.0"
                local_mac: "00:00:00:00:00:01"
            xe1:  # logical name from topology.yaml and vnfd.yaml
                ...
                driver: i40e # default kernel driver
                local_ip: "152.16.40.20"
                netmask: "255.255.255.0"
                local_mac: "00:00:00:00:00:02"
SR-IOV Config host_sriov.yaml
+++++++++++++++++++++++++++++

SR-IOV testcase update:
``<yardstick>/samples/vnf_samples/nsut/vfw/tc_sriov_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``

Update contexts section
'''''''''''''''''''''''

.. code-block:: YAML

    contexts:
    - ...
      file: /etc/yardstick/nodes/standalone/pod_trex.yaml
    - type: StandaloneSriov
      file: /etc/yardstick/nodes/standalone/host_sriov.yaml
      ...
      flavor:
        images: "/var/lib/libvirt/images/ubuntu.qcow2"
        ...
      servers:
        vnf_0:
          ...
          user: "" # update VM username
          password: "" # update password
          ...
          network_ports:
            mgmt:
              cidr: '1.1.1.61/24' # Update VM IP address; if static, <ip>/<mask>; if dynamic, <start of ip>/<mask>
          ...
      networks:
        uplink_0:
          phy_port: "0000:05:00.0"
          ...
          cidr: '152.16.100.10/24'
          gateway_ip: '152.16.100.20'
        downlink_0:
          phy_port: "0000:05:00.1"
          ...
          cidr: '152.16.40.10/24'
          gateway_ip: '152.16.100.20'
OVS-DPDK
^^^^^^^^

OVS-DPDK Pre-requisites
+++++++++++++++++++++++

On the host, where the VM is created:

a) Create and configure a bridge named ``br-int`` for the VM to connect to
   the external network. Currently this can be done using a VXLAN tunnel.

   Execute the following on the host where the VM is created:

   .. code-block:: console

      ip link add type vxlan remote <Jumphost IP> local <DUT IP> id <ID: 10> dstport 4789
      brctl addif br-int vxlan0
      ip link set dev vxlan0 up
      ip addr add <IP#1, like: 172.20.2.1/24> dev br-int
      ip link set dev br-int up

   .. note:: You may need to add extra rules to iptables to forward traffic.

   .. code-block:: console

      iptables -A FORWARD -i br-int -s <network ip address>/<netmask> -j ACCEPT
      iptables -A FORWARD -o br-int -d <network ip address>/<netmask> -j ACCEPT

   Execute the following on the jump host:

   .. code-block:: console

      ip link add type vxlan remote <DUT IP> local <Jumphost IP> id <ID: 10> dstport 4789
      ip addr add <IP#2, like: 172.20.2.2/24> dev vxlan0
      ip link set dev vxlan0 up

   .. note:: The host and the jump host are different bare-metal servers.

b) Modify the test case management CIDR.
   IP addresses IP#1, IP#2 and the CIDR must be in the same network.
c) Build the guest image for the VNF to run.
   Most of the sample test cases in Yardstick use a guest image called
   ``yardstick-nsb-image``, which deviates from an Ubuntu Cloud Server image.
   Yardstick has a tool for building this custom image with SampleVNF.
   It is necessary to have ``sudo`` rights to use this tool.

   You may need to install several additional packages to use this tool, by
   following the commands below::

      sudo apt-get update && sudo apt-get install -y qemu-utils kpartx

   This image can be built using the following command in the directory where
   Yardstick is installed::

      export YARD_IMG_ARCH='amd64'
      echo "Defaults env_keep += \"YARD_IMG_ARCH\"" | sudo tee -a /etc/sudoers
      sudo tools/yardstick-img-dpdk-modify tools/ubuntu-server-cloudimg-samplevnf-modify.sh

   For more details refer to :doc:`04-installation`.

   .. note:: The VM should be built with a static IP and should be accessible
      from the Yardstick host.

3. OVS & DPDK version:

   * OVS 2.7 and DPDK 16.11.1 and above are supported.

4. Set up `OVS-DPDK`_ on the host.
OVS-DPDK Config pod.yaml describing Topology
++++++++++++++++++++++++++++++++++++++++++++

OVS-DPDK 2-Node setup
+++++++++++++++++++++

.. code-block:: console

                                 +--------------------+
                                 |                    |
                                 |                    |
                                 |        DUT         |
                                 |       (VNF)        |
                                 |                    |
                                 +--------------------+
                                 | virtio |  | virtio |
                                 +--------+  +--------+
                                      ^          ^
                                      |          |
                                 +--------+  +--------+
                                 | vHOST0 |  | vHOST1 |
    +----------+               +-------------------------+
    |          |               |       ^          ^      |
    |          |               |       |          |      |
    |          | (0)<----->(0) | ------           |      |
    |    TG1   |               |          SUT     |      |
    |          |               |       (ovs-dpdk) |      |
    |          | (n)<----->(n) |------------------       |
    +----------+               +-------------------------+
    trafficgen_0                          host

OVS-DPDK 3-Node setup - Correlated Traffic
++++++++++++++++++++++++++++++++++++++++++

.. code-block:: console

                                 +--------------------+
                                 |                    |
                                 |                    |
                                 |        DUT         |
                                 |       (VNF)        |
                                 |                    |
                                 +--------------------+
                                 | virtio |  | virtio |
                                 +--------+  +--------+
                                      ^          ^
                                      |          |
                                 +--------+  +--------+
                                 | vHOST0 |  | vHOST1 |
    +----------+               +-------------------------+          +------------+
    |          |               |       ^          ^      |          |            |
    |          |               |       |          |      |          |    TG2     |
    |          | (0)<----->(0) | ------           |      |          |(UDP Replay)|
    |   TG1    |               |          SUT     |      |          |            |
    |          |               |       (ovs-dpdk) |      |          |            |
    |          | (n)<----->(n) | ------           |(n)<-->(n)|      |            |
    +----------+               +-------------------------+          +------------+
    trafficgen_0                          host                       trafficgen_1
Before executing Yardstick test cases, make sure that the ``pod.yaml``
reflects the topology and update all the required fields::

    cp <yardstick>/etc/yardstick/nodes/standalone/trex_bm.yaml.sample /etc/yardstick/nodes/standalone/pod_trex.yaml
    cp <yardstick>/etc/yardstick/nodes/standalone/host_ovs.yaml /etc/yardstick/nodes/standalone/host_ovs.yaml

.. note:: Update all the required fields like ip, user, password, pcis, etc.

OVS-DPDK Config pod_trex.yaml
+++++++++++++++++++++++++++++
.. code-block:: YAML

    nodes:
    -
        name: trafficgen_0
        ...
        interfaces:
            xe0:  # logical name from topology.yaml and vnfd.yaml
                ...
                driver: i40e # default kernel driver
                local_ip: "152.16.100.20"
                netmask: "255.255.255.0"
                local_mac: "00:00:00:00:00:01"
            xe1:  # logical name from topology.yaml and vnfd.yaml
                ...
                driver: i40e # default kernel driver
                local_ip: "152.16.40.20"
                netmask: "255.255.255.0"
                local_mac: "00:00:00:00:00:02"

OVS-DPDK Config host_ovs.yaml
+++++++++++++++++++++++++++++
ovs_dpdk testcase update:
``<yardstick>/samples/vnf_samples/nsut/vfw/tc_ovs_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``

Update contexts section
'''''''''''''''''''''''

.. code-block:: YAML

    contexts:
    - ...
      file: /etc/yardstick/nodes/standalone/pod_trex.yaml
    - type: StandaloneOvsDpdk
      ...
      file: /etc/yardstick/nodes/standalone/pod_ovs.yaml
      ...
      flavor:
        images: "/var/lib/libvirt/images/ubuntu.qcow2"
        ...
      servers:
        vnf_0:
          ...
          user: "" # update VM username
          password: "" # update password
          ...
          network_ports:
            mgmt:
              cidr: '1.1.1.61/24' # Update VM IP address; if static, <ip>/<mask>; if dynamic, <start of ip>/<mask>
          ...
      networks:
        uplink_0:
          phy_port: "0000:05:00.0"
          ...
          cidr: '152.16.100.10/24'
          gateway_ip: '152.16.100.20'
        downlink_0:
          phy_port: "0000:05:00.1"
          ...
          cidr: '152.16.40.10/24'
          gateway_ip: '152.16.100.20'
OpenStack with SR-IOV support
-----------------------------

This section describes how to run a Sample VNF test case, using Heat context,
with SR-IOV. It also covers how to install OpenStack in Ubuntu 16.04, using
DevStack, with SR-IOV support.

Single node OpenStack with external TG
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. code-block:: console

                                 +----------------------------+
                                 |OpenStack(DevStack)         |
                                 |                            |
                                 |   +--------------------+   |
                                 |   |sample-VNF VM       |   |
                                 |   |                    |   |
                                 |   |        DUT         |   |
                                 |   |       (VNF)        |   |
                                 |   |                    |   |
                                 |   +--------+  +--------+   |
                                 |   | VF NIC |  | VF NIC |   |
                                 |   +-----+--+--+----+---+   |
                                 |         |          |       |
    +----------+                 +---------+----------+-------+
    |          |                           |          |
    |          |                          VF0        VF1
    |          |                           |          |
    |    TG    | (PF0)<----->(PF0) +---------+        |
    |          |                                      |
    |          | (PF1)<----->(PF1) +------------------+
    |          |
    +----------+                 +----------------------------+
    trafficgen_0                             host
Host pre-configuration
++++++++++++++++++++++

.. warning:: The following configuration requires sudo access to the system.
   Make sure that your user has the access.

Enable the Intel VT-d or AMD-Vi extension in the BIOS. Some system
manufacturers disable this extension by default.

Activate the Intel VT-d or AMD-Vi extension in the kernel by modifying the
GRUB config file ``/etc/default/grub``.

For the Intel platform::

    GRUB_CMDLINE_LINUX_DEFAULT="intel_iommu=on"

For the AMD platform::

    GRUB_CMDLINE_LINUX_DEFAULT="amd_iommu=on"

Update the grub configuration file and restart the system:

.. warning:: The following command will reboot the system.

.. code-block:: console

    sudo update-grub
    sudo reboot

Make sure the extension has been enabled::

    sudo journalctl -b 0 | grep -e IOMMU -e DMAR

    Feb 06 14:50:14 hostname kernel: ACPI: DMAR 0x000000006C406000 0001E0 (v01 INTEL S2600WF 00000001 INTL 20091013)
    Feb 06 14:50:14 hostname kernel: DMAR: IOMMU enabled
    Feb 06 14:50:14 hostname kernel: DMAR: Host address width 46
    Feb 06 14:50:14 hostname kernel: DMAR: DRHD base: 0x000000d37fc000 flags: 0x0
    Feb 06 14:50:14 hostname kernel: DMAR: dmar0: reg_base_addr d37fc000 ver 1:0 cap 8d2078c106f0466 ecap f020de
    Feb 06 14:50:14 hostname kernel: DMAR: DRHD base: 0x000000e0ffc000 flags: 0x0
    Feb 06 14:50:14 hostname kernel: DMAR: dmar1: reg_base_addr e0ffc000 ver 1:0 cap 8d2078c106f0466 ecap f020de
    Feb 06 14:50:14 hostname kernel: DMAR: DRHD base: 0x000000ee7fc000 flags: 0x0
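When checking many hosts, the journal output can be scanned automatically. A
small helper (an illustration only; the marker strings are the Intel ``DMAR``
line shown above and its AMD-Vi counterpart):

```python
def iommu_enabled(journal_text):
    """Return True if `journalctl -b 0` output shows the IOMMU was enabled."""
    markers = ("DMAR: IOMMU enabled", "AMD-Vi")
    return any(m in line for line in journal_text.splitlines() for m in markers)

sample = "Feb 06 14:50:14 hostname kernel: DMAR: IOMMU enabled"
assert iommu_enabled(sample)
```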
.. TODO: Refer to the yardstick installation guide for proxy set up

Set up the system proxy (if needed). Add the following configuration into the
``/etc/environment`` file:

.. note:: The proxy server name/port and IPs should be changed according to
   the actual/current proxy configuration in the lab.

.. code-block:: console

    export http_proxy=http://proxy.company.com:port
    export https_proxy=http://proxy.company.com:port
    export ftp_proxy=http://proxy.company.com:port
    export no_proxy=localhost,127.0.0.1,company.com,<IP-OF-HOST1>,<IP-OF-HOST2>,...
    export NO_PROXY=localhost,127.0.0.1,company.com,<IP-OF-HOST1>,<IP-OF-HOST2>,...

Upgrade the system:

.. code-block:: console

    sudo -EH apt-get update
    sudo -EH apt-get upgrade
    sudo -EH apt-get dist-upgrade

Install the dependencies needed for DevStack:

.. code-block:: console

    sudo -EH apt-get install python python-dev python-pip

Set up the SR-IOV ports on the host:

.. note:: ``enp24s0f0`` and ``enp24s0f1`` are physical function (PF)
   interfaces on the host and ``enp24s0f3`` is a public interface used in
   OpenStack, so the interface names should be changed according to the HW
   environment used for testing.

.. code-block:: console

    sudo ip link set dev enp24s0f0 up
    sudo ip link set dev enp24s0f1 up
    sudo ip link set dev enp24s0f3 up

    # Create VFs on PF
    echo 2 | sudo tee /sys/class/net/enp24s0f0/device/sriov_numvfs
    echo 2 | sudo tee /sys/class/net/enp24s0f1/device/sriov_numvfs
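If several PFs need VFs, the two ``tee`` commands above can be generated
instead of typed by hand. A throwaway helper (the interface names are
environment-specific, as noted above):

```python
def sriov_numvfs_cmds(pf_interfaces, num_vfs=2):
    """Generate the shell commands that create num_vfs VFs on each PF."""
    template = "echo {n} | sudo tee /sys/class/net/{pf}/device/sriov_numvfs"
    return [template.format(n=num_vfs, pf=pf) for pf in pf_interfaces]
```

``sriov_numvfs_cmds(["enp24s0f0", "enp24s0f1"])`` reproduces the two commands
shown above.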
DevStack installation
+++++++++++++++++++++

If you want to try out NSB, but don't have OpenStack set up, you can use
`devstack`_ to install OpenStack on a host. Please note that the
``stable/pike`` branch of the devstack repo should be used during the
installation. The required ``local.conf`` configuration file is described
below.

DevStack configuration file:

.. note:: Update the devstack configuration file by replacing angle brackets
   with a short description inside.

.. note:: Use the ``lspci | grep Ether`` and ``lspci -n | grep <PCI ADDRESS>``
   commands to get the device and vendor id of the virtual function (VF).

.. literalinclude:: code/single-devstack-local.conf

Start the devstack installation on the host.

TG host configuration
+++++++++++++++++++++

Yardstick automatically installs and configures the TRex traffic generator on
the TG host based on the provided POD file (see below). However, it is
recommended to check the compatibility of the NIC installed on the TG server
with the TRex software using the
`manual <https://trex-tgn.cisco.com/trex/doc/trex_manual.html>`_.

Run the Sample VNF test case
++++++++++++++++++++++++++++

There is an example of a Sample VNF test case ready to be executed in an
OpenStack environment with SR-IOV support: ``samples/vnf_samples/nsut/vfw/
tc_heat_sriov_external_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``.

Install Yardstick using the `Install Yardstick (NSB Testing)`_ steps for an
OpenStack context.

Create the pod file for the TG in the yardstick repo folder located in the
yardstick container.

.. note:: The ``ip``, ``user``, ``password`` and ``vpci`` fields should be
   changed according to the HW environment used for the testing. Use the
   ``lshw -c network -businfo`` command to get the PF PCI address for the
   ``vpci`` field.

.. literalinclude:: code/single-yardstick-pod.conf

Run the sample vFW RFC2544 SR-IOV TC (``samples/vnf_samples/nsut/vfw/
tc_heat_sriov_external_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``) in the heat
context using the steps described in the
`NS testing - using yardstick CLI`_ section.
Multi node OpenStack TG and VNF setup (two nodes)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. code-block:: console

    +----------------------------+                  +----------------------------+
    |OpenStack(DevStack)         |                  |OpenStack(DevStack)         |
    |                            |                  |                            |
    |   +--------------------+   |                  |   +--------------------+   |
    |   |sample-VNF VM       |   |                  |   |sample-VNF VM       |   |
    |   |                    |   |                  |   |                    |   |
    |   |         TG         |   |                  |   |        DUT         |   |
    |   |    trafficgen_0    |   |                  |   |       (VNF)        |   |
    |   |                    |   |                  |   |                    |   |
    |   +--------+  +--------+   |                  |   +--------+  +--------+   |
    |   | VF NIC |  | VF NIC |   |                  |   | VF NIC |  | VF NIC |   |
    |   +----+---+--+----+---+   |                  |   +-----+--+--+----+---+   |
    |        |          |        |                  |         |          |       |
    +--------+----------+--------+                  +---------+----------+-------+
    |       VF0        VF1       |                  |        VF0        VF1      |
    |        |          |        |                  |         |          |       |
    |        |  SUT2    |        |                  |         |  SUT1    |       |
    |        |  +-------+ (PF0)<-------------------------->(PF0) +-------+       |
    |        +-------------------+ (PF1)<------------------------>(PF1) +--------+
    |                            |                  |                            |
    +----------------------------+                  +----------------------------+
           host2 (compute)                                 host1 (controller)
Controller/Compute pre-configuration
++++++++++++++++++++++++++++++++++++

Pre-configuration of the controller and compute hosts is the same as
described in the `Host pre-configuration`_ section.

DevStack configuration
++++++++++++++++++++++

A reference ``local.conf`` for deploying OpenStack in a multi-host environment
using `devstack`_ is shown in this section. The ``stable/pike`` branch of the
devstack repo should be used during the installation.

.. note:: Update the devstack configuration files by replacing angle brackets
   with a short description inside.

.. note:: Use the ``lspci | grep Ether`` and ``lspci -n | grep <PCI ADDRESS>``
   commands to get the device and vendor id of the virtual function (VF).

DevStack configuration file for the controller host:

.. literalinclude:: code/multi-devstack-controller-local.conf

DevStack configuration file for the compute host:

.. literalinclude:: code/multi-devstack-compute-local.conf

Start the devstack installation on the controller and compute hosts.

Run the sample vFW TC
+++++++++++++++++++++

Install Yardstick using the `Install Yardstick (NSB Testing)`_ steps for an
OpenStack context.

Run the sample vFW RFC2544 SR-IOV test case
(``samples/vnf_samples/nsut/vfw/tc_heat_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``)
in the heat context using the steps described in the
`NS testing - using yardstick CLI`_ section and the following Yardstick
command line arguments:

.. code-block:: console

    yardstick -d task start --task-args='{"provider": "sriov"}' \
    samples/vnf_samples/nsut/vfw/tc_heat_rfc2544_ipv4_1rule_1flow_64B_trex.yaml
Enabling other Traffic generators
---------------------------------

IxLoad
^^^^^^

1. Software needed: IxLoadAPI ``<IxLoadTclApi version>Linux64.bin.tgz`` and
   ``<IxOS version>Linux64.bin.tar.gz`` (download from the Ixia support site).
   Install ``<IxLoadTclApi version>Linux64.bin.tgz`` and
   ``<IxOS version>Linux64.bin.tar.gz``.
   If the installation was not done inside the container, then after
   installing the Ixia client, check ``/opt/ixia/ixload/<ver>/bin/ixloadpython``
   and make sure you can run this cmd inside the yardstick container. Usually
   the user is required to copy or link ``/opt/ixia/python/<ver>/bin/ixiapython``
   to ``/usr/bin/ixiapython<ver>`` inside the container.

2. Update the ``pod_ixia.yaml`` file with the Ixia details.

   .. code-block:: console

       cp <repo>/etc/yardstick/nodes/pod.yaml.nsb.sample.ixia \
       etc/yardstick/nodes/pod_ixia.yaml

   Configure ``pod_ixia.yaml``:

   .. literalinclude:: code/pod_ixia.yaml

   For sriov/ovs_dpdk pod files, please refer to `Standalone Virtualization`_
   for the ovs-dpdk/sriov configuration.

3. Start the IxOS TCL Server (install 'Ixia IxExplorer IxOS <version>').
   You will also need to configure the IxLoad machine to start the IXIA
   IxosTclServer. This can be started like so:

   * Connect to the IxLoad machine using RDP.
   * Go to:
     ``Start->Programs->Ixia->IxOS->IxOS 8.01-GA-Patch1->Ixia Tcl Server IxOS 8.01-GA-Patch1``
     or
     ``C:\Program Files (x86)\Ixia\IxOS\8.01-GA-Patch1\ixTclServer.exe``

4. Create a folder ``Results`` in c:\ and share the folder on the network.

5. Execute the testcase in the samplevnf folder, e.g.
   ``<repo>/samples/vnf_samples/nsut/vfw/tc_baremetal_http_ixload_1b_Requests-65000_Concurrency.yaml``
IxNetwork
^^^^^^^^^

IxNetwork testcases use the IxNetwork API Python Bindings module, which is
installed as part of the requirements of the project.

1. Update the ``pod_ixia.yaml`` file with the Ixia details.

   .. code-block:: console

       cp <repo>/etc/yardstick/nodes/pod.yaml.nsb.sample.ixia \
       etc/yardstick/nodes/pod_ixia.yaml

   Configure ``pod_ixia.yaml``:

   .. literalinclude:: code/pod_ixia.yaml

   For sriov/ovs_dpdk pod files, please refer to the above
   `Standalone Virtualization`_ section for the ovs-dpdk/sriov configuration.

2. Start the IxNetwork TCL Server.
   You will also need to configure the IxNetwork machine to start the IXIA
   IxNetworkTclServer. This can be started like so:

   * Connect to the IxNetwork machine using RDP.
   * Go to:
     ``Start->Programs->Ixia->IxNetwork->IxNetwork 7.21.893.14 GA->IxNetworkTclServer``
     (or ``IxNetworkApiServer``)

3. Execute the testcase in the samplevnf folder, e.g.
   ``<repo>/samples/vnf_samples/nsut/vfw/tc_baremetal_rfc2544_ipv4_1rule_1flow_64B_ixia.yaml``
Spirent Landslide
^^^^^^^^^^^^^^^^^

In order to use Spirent Landslide for vEPC testcases, some dependencies have
to be preinstalled and properly configured.

- Java

  A 32-bit Java installation is required for the Spirent Landslide TCL API.

  | ``$ sudo apt-get install openjdk-8-jdk:i386``

  .. important::
     Make sure ``LD_LIBRARY_PATH`` is pointing to the 32-bit JRE. For more
     details check the
     `Linux Troubleshooting <http://TAS_HOST_IP/tclapiinstall.html#trouble>`_
     section of the installation instructions.

- LsApi (Tcl API module)

  Follow the Landslide documentation for detailed instructions on the Linux
  installation of the Tcl API and its dependencies:
  ``http://TAS_HOST_IP/tclapiinstall.html``.
  For working with the LsApi Python wrapper, only steps 1-5 are required.

  .. note:: After installation, make sure your API home path is included in
     the ``PYTHONPATH`` environment variable.

  .. note::
     The current version of the LsApi module has an issue with reading
     ``LD_LIBRARY_PATH``. For the LsApi module to initialize correctly, the
     following lines (184-186):

     .. code-block:: python

         ldpath = os.environ.get('LD_LIBRARY_PATH', '')
         environ['LD_LIBRARY_PATH'] = environ['LD_LIBRARY_PATH'] + ':' + ldpath

     should be changed to:

     .. code-block:: python

         ldpath = os.environ.get('LD_LIBRARY_PATH', '')
         if not ldpath == '':
             environ['LD_LIBRARY_PATH'] = environ['LD_LIBRARY_PATH'] + ':' + ldpath
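To see what the guard changes, the patched logic can be exercised against a
plain dict standing in for ``os.environ`` (a sketch mirroring the patched
lines above; ``patched_append`` is a hypothetical name for illustration):

```python
def patched_append(environ):
    """Mirror the patched lines: only touch LD_LIBRARY_PATH when it is
    already set, so an unset variable no longer raises KeyError."""
    ldpath = environ.get('LD_LIBRARY_PATH', '')
    if not ldpath == '':
        environ['LD_LIBRARY_PATH'] = environ['LD_LIBRARY_PATH'] + ':' + ldpath
    return environ

# With the variable unset, the unpatched code would raise KeyError.
assert patched_append({}) == {}
```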
.. note:: The Spirent Landslide TCL software package needs to be updated in
   case the user upgrades to a new version of the Spirent Landslide software.