add yardstick iruya 9.0.0 release notes
[yardstick.git] / docs / testing / user / userguide / 13-nsb-installation.rst
index 69f6a5a..35f67b9 100644 (file)
@@ -1,42 +1,38 @@
 .. This work is licensed under a Creative Commons Attribution 4.0 International
 .. License.
 .. http://creativecommons.org/licenses/by/4.0
-.. (c) OPNFV, 2016-2018 Intel Corporation.
+.. (c) OPNFV, 2016-2019 Intel Corporation.
 
 ..
    Convention for heading levels in Yardstick documentation:
 
    =======  Heading 0 (reserved for the title in a document)
    -------  Heading 1
-   ~~~~~~~  Heading 2
+   ^^^^^^^  Heading 2
    +++++++  Heading 3
    '''''''  Heading 4
 
    Avoid deeper levels because they do not render well.
 
+
 ================
 NSB Installation
 ================
 
 .. _OVS-DPDK: http://docs.openvswitch.org/en/latest/intro/install/dpdk/
 .. _devstack: https://docs.openstack.org/devstack/pike/
+.. _OVS-DPDK-versions: http://docs.openvswitch.org/en/latest/faq/releases/
 
-The Network Service Benchmarking (NSB) extends the yardstick framework to do
-VNF characterization and benchmarking in three different execution
-environments viz., bare metal i.e. native Linux environment, standalone virtual
-environment and managed virtualized environment (e.g. Open stack etc.).
-It also brings in the capability to interact with external traffic generators
-both hardware & software based for triggering and validating the traffic
-according to user defined profiles.
+Abstract
+--------
 
 The steps needed to run Yardstick with NSB testing are:
 
 * Install Yardstick (NSB Testing).
-* Setup/reference ``pod.yaml`` describing Test topology
+* Setup/reference ``pod.yaml`` describing Test topology.
 * Create/reference the test configuration yaml file.
 * Run the test case.
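+
+Assuming default settings, the flow above can be condensed as follows
+(details for each step are given in the sections below)::
+
+  git clone https://gerrit.opnfv.org/gerrit/yardstick
+  cd yardstick
+  # edit ./ansible/install-inventory.ini, then:
+  ./nsb_setup.sh
+  docker exec -it yardstick bash
+  # inside the container: prepare pod.yaml, then run a test case
+  yardstick task start <test case yaml>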
 
-
 Prerequisites
 -------------
 
@@ -57,7 +53,7 @@ Several prerequisites are needed for Yardstick (VNF testing):
   * intel-cmt-cat
 
 Hardware & Software Ingredients
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
 SUT requirements:
 
@@ -73,7 +69,6 @@ SUT requirements:
 
 Boot and BIOS settings:
 
-
    ============= =================================================
    Boot settings default_hugepagesz=1G hugepagesz=1G hugepages=16
                  hugepagesz=2M hugepages=2048 isolcpus=1-11,22-33
@@ -95,79 +90,218 @@ Boot and BIOS settings:
 Install Yardstick (NSB Testing)
 -------------------------------
 
-Download the source code and check out the latest stable branch::
-
-.. code-block:: console
-
-  git clone https://gerrit.opnfv.org/gerrit/yardstick
-  cd yardstick
-  # Switch to latest stable branch
-  git checkout stable/gambia
+Yardstick with NSB can be installed using ``nsb_setup.sh``.
+The ``nsb_setup.sh`` script allows you to:
 
-Configure the network proxy, either using the environment variables or setting
-the global environment file.
+1. Install Yardstick in the specified mode: bare metal or container.
+   Refer to :doc:`04-installation`.
+2. Install package dependencies on the remote servers used as traffic
+   generators or sample VNFs: DPDK, sample VNFs, TRex and collectd.
+   Add such servers to the ``install-inventory.ini`` file, in either the
+   ``yardstick-standalone`` or the ``yardstick-baremetal`` server group.
+   The script also configures IOMMU, hugepages, open file limits, CPU
+   isolation, etc.
+3. Build a VM image, either nsb or normal. The nsb VM image is used to run
+   Yardstick sample VNF tests, like vFW, vACL, vCGNAPT, etc.
+   The normal VM image is used to run Yardstick ping tests in the OpenStack
+   context.
+4. Add the nsb or normal VM image to OpenStack, together with the OpenStack
+   variables.
 
-* Set environment
+First, configure the network proxy, either by using the environment variables
+or by setting the global environment file.
 
-.. code-block::
+Set the proxy in the global environment file::
 
     http_proxy='http://proxy.company.com:port'
     https_proxy='http://proxy.company.com:port'
 
+Or set the environment variables:
+
 .. code-block:: console
 
     export http_proxy='http://proxy.company.com:port'
     export https_proxy='http://proxy.company.com:port'
 
-Modify the Yardstick installation inventory, used by Ansible::
+Download the source code and check out the latest stable branch:
+
+.. code-block:: console
+
+  git clone https://gerrit.opnfv.org/gerrit/yardstick
+  cd yardstick
+  # Switch to latest stable branch
+  git checkout stable/gambia
+
+Modify the Yardstick installation inventory used by Ansible:
+
+.. code-block:: ini
 
   cat ./ansible/install-inventory.ini
   [jumphost]
   localhost ansible_connection=local
 
-  [yardstick-standalone]
-  yardstick-standalone-node ansible_host=192.168.1.2
-  yardstick-standalone-node-2 ansible_host=192.168.1.3
-
   # The section below is only for backward compatibility.
   # It will be removed later.
   [yardstick:children]
   jumphost
 
+  [yardstick-baremetal]
+  baremetal ansible_host=192.168.2.51 ansible_connection=ssh
+
+  [yardstick-standalone]
+  standalone ansible_host=192.168.2.52 ansible_connection=ssh
+
   [all:vars]
-  ansible_user=root
-  ansible_pass=root
+  # Uncomment credentials below if needed
+  ansible_user=root
+  ansible_ssh_pass=root
+  # ansible_ssh_private_key_file=/root/.ssh/id_rsa
+  # When IMG_PROPERTY is set to neither "normal" nor "nsb", set
+  # "path_to_img=/path/to/image" to add the image to OpenStack
+  # path_to_img=/tmp/workspace/yardstick-image.img
+
+  # List of CPUs to be isolated (not used by default)
+  # Grub line will be extended with:
+  # "isolcpus=<ISOL_CPUS> nohz=on nohz_full=<ISOL_CPUS> rcu_nocbs=<ISOL_CPUS>"
+  # ISOL_CPUS=2-27,30-55 # physical CPUs for all NUMA nodes, four CPUs reserved
+
+.. warning::
+
+   Before running ``nsb_setup.sh``, make sure Python is installed on the
+   servers added to the ``yardstick-standalone`` and ``yardstick-baremetal``
+   groups.
 
 .. note::
 
    SSH access without password needs to be configured for all your nodes
-   defined in ``yardstick-install-inventory.ini`` file.
+   defined in the ``install-inventory.ini`` file.
    If you want to use password authentication, you need to install ``sshpass``::
 
      sudo -EH apt-get install sshpass
 
-To execute an installation for a BareMetal or a Standalone context::
 
-    ./nsb_setup.sh
+.. note::
+
+   A VM image built by other means than Yardstick can be added to OpenStack.
+   Uncomment and set the correct path to the VM image in the
+   ``install-inventory.ini`` file::
+
+     path_to_img=/tmp/workspace/yardstick-image.img
+
+
+.. note::
+
+   CPU isolation can be applied to the remote servers, e.g.
+   ``ISOL_CPUS=2-27,30-55``. Uncomment and modify the value accordingly in the
+   ``install-inventory.ini`` file.
+
+By default, ``nsb_setup.sh`` pulls a Yardstick image based on Ubuntu 16.04 from
+Docker Hub, starts a container, builds the NSB VM image based on Ubuntu 16.04
+and installs packages on the servers listed in the ``yardstick-standalone`` and
+``yardstick-baremetal`` host groups.
+
+To pull a Yardstick image based on Ubuntu 18.04, run::
+
+    ./nsb_setup.sh -i opnfv/yardstick-ubuntu-18.04:latest
+
+To change the default behavior, modify the parameters for ``install.yaml`` in
+the ``nsb_setup.sh`` file.
 
+Refer to chapter :doc:`04-installation` for more details on ``install.yaml``
+parameters.
 
-To execute an installation for an OpenStack context::
+To execute an installation for a **BareMetal** or a **Standalone context**::
+
+    ./nsb_setup.sh
+
+To execute an installation for an **OpenStack** context::
 
     ./nsb_setup.sh <path to admin-openrc.sh>
 
+.. note::
+
+   Yardstick may not be operational after a Linux distribution kernel update
+   if it was installed before the update. Run ``nsb_setup.sh`` again to
+   resolve this.
+
+.. warning::
+
+   The Yardstick VM image (NSB or normal) cannot be built inside a VM.
+
+.. warning::
+
+   The ``nsb_setup.sh`` script configures hugepages, CPU isolation and IOMMU
+   in the GRUB configuration. A reboot of the servers in the
+   ``yardstick-standalone`` or ``yardstick-baremetal`` groups of the
+   ``install-inventory.ini`` file is required to apply those changes.
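+
+After the reboot, the applied settings can be verified on the rebooted
+servers, e.g. (the exact values depend on the configuration)::
+
+    cat /proc/cmdline        # check isolcpus, hugepages and IOMMU parameters
+    grep Huge /proc/meminfo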
+
 The above commands will set up Docker with the latest Yardstick code. To
 execute::
 
   docker exec -it yardstick bash
 
+.. note::
+
+   It may be necessary to configure the tty in the docker container to extend
+   the command line character length, for example::
+
+      stty rows 58 cols 234
+
 It will also automatically download all the packages needed for NSB Testing
-setup. Refer chapter :doc:`04-installation` for more on Docker
+setup. Refer to chapter :doc:`04-installation` for more on Docker:
+:ref:`Install Yardstick using Docker`.
+
+Bare Metal context example
+^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Let's assume there are three servers acting as TG, sample VNF DUT and jump host.
+
+Perform the following steps to install NSB:
+
+1. Clone the Yardstick repo to the jump host.
+2. Add the TG and DUT servers to the ``yardstick-baremetal`` group in the
+   ``install-inventory.ini`` file to install NSB and its dependencies. Install
+   Python on the servers.
+3. Start the deployment using the docker image based on Ubuntu 16.04:
+
+.. code-block:: console
+
+   ./nsb_setup.sh
+
+4. Reboot the bare metal servers.
+5. Enter the yardstick container, modify the pod YAML file and run tests.
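+
+Step 5 can look as follows (the test case path is only an example)::
+
+   docker exec -it yardstick bash
+   cp <yardstick>/etc/yardstick/nodes/pod.yaml.nsb.sample /etc/yardstick/nodes/pod.yaml
+   # update IP addresses, MAC addresses and PCI addresses in pod.yaml, then:
+   yardstick task start samples/vnf_samples/nsut/vfw/tc_baremetal_rfc2544_ipv4_1rule_1flow_64B_trex.yaml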
+
+Standalone context example for Ubuntu 18
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Let's assume there are three servers acting as TG, sample VNF DUT and jump host.
+Ubuntu 18 is installed on all servers.
+
+Perform the following steps to install NSB:
 
-**Install Yardstick using Docker (recommended)**
+1. Clone the Yardstick repo to the jump host.
+2. Add the TG server to the ``yardstick-baremetal`` group in the
+   ``install-inventory.ini`` file to install NSB and its dependencies.
+   Add the server where the VM with the sample VNF will be deployed to the
+   ``yardstick-standalone`` group in the ``install-inventory.ini`` file.
+   The target VM image named ``yardstick-nsb-image.img`` will be placed in
+   ``/var/lib/libvirt/images/``.
+   Install Python on the servers.
+3. Modify ``nsb_setup.sh`` on the jump host:
+
+.. code-block:: console
+
+   ansible-playbook \
+   -e IMAGE_PROPERTY='nsb' \
+   -e OS_RELEASE='bionic' \
+   -e INSTALLATION_MODE='container_pull' \
+   -e YARD_IMAGE_ARCH='amd64' ${extra_args} \
+   -i install-inventory.ini install.yaml
+
+4. Start the deployment with the Yardstick docker image based on Ubuntu 18.04:
+
+.. code-block:: console
+
+   ./nsb_setup.sh -i opnfv/yardstick-ubuntu-18.04:latest -o <openrc_file>
+
+5. Reboot the servers.
+6. Enter the yardstick container, modify the pod YAML file and run tests.
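+
+As a sanity check after the deployment, the generated VM image can be verified
+on the standalone server::
+
+   ls -lh /var/lib/libvirt/images/yardstick-nsb-image.img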
 
-Another way to execute an installation for a Bare-Metal or a Standalone context
-is to use ansible script ``install.yaml``. Refer chapter :doc:`04-installation`
-for more details.
 
 System Topology
 ---------------
@@ -181,7 +315,7 @@ System Topology
   |          |              |          |
   |          | (1)<-----(1) |          |
   +----------+              +----------+
-  trafficgen_1                   vnf
+  trafficgen_0                   vnf
 
 
 Environment parameters and credentials
@@ -191,14 +325,16 @@ Configure yardstick.conf
 ^^^^^^^^^^^^^^^^^^^^^^^^
 
 If you did not run ``yardstick env influxdb`` inside the container to generate
- ``yardstick.conf``, then create the config file manually (run inside the
+``yardstick.conf``, then create the config file manually (run inside the
 container)::
 
     cp ./etc/yardstick/yardstick.conf.sample /etc/yardstick/yardstick.conf
     vi /etc/yardstick/yardstick.conf
 
 Add ``trex_path``, ``trex_client_lib`` and ``bin_path`` to the ``nsb``
-section::
+section:
+
+.. code-block:: ini
 
   [DEFAULT]
   debug = True
@@ -220,19 +356,19 @@ Run Yardstick - Network Service Testcases
 -----------------------------------------
 
 NS testing - using yardstick CLI
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
   See :doc:`04-installation`
 
 Connect to the Yardstick container::
 
-
   docker exec -it yardstick /bin/bash
 
 If you're running ``heat`` testcases and ``nsb_setup.sh`` was not used::
+
   source /etc/yardstick/openstack.creds
 
-In addition to the above, you need to se the ``EXTERNAL_NETWORK`` for
+In addition to the above, you need to set the ``EXTERNAL_NETWORK`` for
 OpenStack::
 
   export EXTERNAL_NETWORK="<openstack public network>"
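+
+With the credentials and ``EXTERNAL_NETWORK`` in place, a test case can be
+started from inside the container, e.g. (the path is an example)::
+
+  yardstick task start samples/vnf_samples/nsut/vfw/tc_heat_rfc2544_ipv4_1rule_1flow_64B_trex.yaml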
@@ -245,7 +381,7 @@ Network Service Benchmarking - Bare-Metal
 -----------------------------------------
 
 Bare-Metal Config pod.yaml describing Topology
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
 Bare-Metal 2-Node setup
 +++++++++++++++++++++++
@@ -258,7 +394,7 @@ Bare-Metal 2-Node setup
   |          |              |          |
   |          | (n)<-----(n) |          |
   +----------+              +----------+
-  trafficgen_1                   vnf
+  trafficgen_0                   vnf
 
 Bare-Metal 3-Node setup - Correlated Traffic
 ++++++++++++++++++++++++++++++++++++++++++++
@@ -272,7 +408,7 @@ Bare-Metal 3-Node setup - Correlated Traffic
   |          |              |          |            |            |
   |          |              |          |(1)<---->(0)|            |
   +----------+              +----------+            +------------+
-  trafficgen_1                   vnf                 trafficgen_2
+  trafficgen_0                   vnf                 trafficgen_1
 
 
 Bare-Metal Config pod.yaml
@@ -280,13 +416,13 @@ Bare-Metal Config pod.yaml
 Before executing Yardstick test cases, make sure that ``pod.yaml`` reflects the
 topology and update all the required fields::
 
-    cp /etc/yardstick/nodes/pod.yaml.nsb.sample /etc/yardstick/nodes/pod.yaml
+    cp <yardstick>/etc/yardstick/nodes/pod.yaml.nsb.sample /etc/yardstick/nodes/pod.yaml
 
 .. code-block:: YAML
 
     nodes:
     -
-        name: trafficgen_1
+        name: trafficgen_0
         role: TrafficGen
         ip: 1.1.1.1
         user: root
@@ -305,7 +441,7 @@ topology and update all the required fields.::
                 dpdk_port_num: 1
                 local_ip: "152.16.40.20"
                 netmask:   "255.255.255.0"
-                local_mac: "00:00.00:00:00:02"
+                local_mac: "00:00:00:00:00:02"
 
     -
         name: vnf
@@ -350,17 +486,26 @@ topology and update all the required fields.::
           if: "xe1"
 
 
-Network Service Benchmarking - Standalone Virtualization
---------------------------------------------------------
+Standalone Virtualization
+-------------------------
+
+The VM can be deployed manually or by Yardstick. If the parameter *vm_deploy*
+is set to ``True``, the VM will be deployed by Yardstick; otherwise the VM
+should be deployed manually. Test case example, context section::
+
+    contexts:
+     ...
+     vm_deploy: True
+
 
 SR-IOV
-~~~~~~
+^^^^^^
 
 SR-IOV Pre-requisites
 +++++++++++++++++++++
 
 On the host where the VM is created:
-a) Create and configure a bridge named ``br-int`` for VM to connect to
+1. Create and configure a bridge named ``br-int`` for VM to connect to
     external network. Currently this can be done using VXLAN tunnel.
 
     Execute the following on host, where VM is created::
@@ -389,18 +534,18 @@ On Host, where VM is created:
 
   .. note:: Host and jump host are different baremetal servers.
 
-b) Modify test case management CIDR.
+2. Modify test case management CIDR.
     IP addresses IP#1, IP#2 and CIDR must be in the same network.
 
   .. code-block:: YAML
 
     servers:
-      vnf:
+      vnf_0:
         network_ports:
           mgmt:
             cidr: '1.1.1.7/24'
 
-c) Build guest image for VNF to run.
+3. Build guest image for VNF to run.
     Most of the sample test cases in Yardstick are using a guest image called
     ``yardstick-nsb-image`` which deviates from an Ubuntu Cloud Server image.
     Yardstick has a tool for building this custom image with SampleVNF.
@@ -420,8 +565,6 @@ On Host, where VM is created:
    For instructions on generating a cloud image using Ansible, refer to
    :doc:`04-installation`.
 
-   for more details refer to chapter :doc:`04-installation`
-
    .. note:: VM should be built with a static IP and be accessible from the
       Yardstick host.
 
@@ -453,7 +596,7 @@ SR-IOV 2-Node setup
   |          | (n)<----->(n) | -----------------       |
   |          |               |                         |
   +----------+               +-------------------------+
-  trafficgen_1                          host
+  trafficgen_0                          host
 
 
 
@@ -481,7 +624,7 @@ SR-IOV 3-Node setup - Correlated Traffic
   |          |               |                |    |            |              |
   |          | (n)<----->(n) |                -----| (n)<-->(n) |              |
   +----------+               +---------------------+            +--------------+
-  trafficgen_1                          host                      trafficgen_2
+  trafficgen_0                          host                      trafficgen_1
 
 Before executing Yardstick test cases, make sure that ``pod.yaml`` reflects the
 topology and update all the required fields.
@@ -500,7 +643,7 @@ SR-IOV Config pod_trex.yaml
 
     nodes:
     -
-        name: trafficgen_1
+        name: trafficgen_0
         role: TrafficGen
         ip: 1.1.1.1
         user: root
@@ -520,7 +663,7 @@ SR-IOV Config pod_trex.yaml
                 dpdk_port_num: 1
                 local_ip: "152.16.40.20"
                 netmask:   "255.255.255.0"
-                local_mac: "00:00.00:00:00:02"
+                local_mac: "00:00:00:00:00:02"
 
 SR-IOV Config host_sriov.yaml
 +++++++++++++++++++++++++++++
@@ -538,8 +681,8 @@ SR-IOV Config host_sriov.yaml
 SR-IOV testcase update:
 ``<yardstick>/samples/vnf_samples/nsut/vfw/tc_sriov_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``
 
-Update "contexts" section
-'''''''''''''''''''''''''
+Update contexts section
+'''''''''''''''''''''''
 
 .. code-block:: YAML
 
@@ -561,7 +704,7 @@ Update "contexts" section
        user: "" # update VM username
        password: "" # update password
      servers:
-       vnf:
+       vnf_0:
          network_ports:
            mgmt:
              cidr: '1.1.1.61/24'  # Update VM IP address, if static, <ip>/<mask> or if dynamic, <start of ip>/<mask>
@@ -582,15 +725,101 @@ Update "contexts" section
          gateway_ip: '152.16.100.20'
 
 
+SRIOV configuration options
++++++++++++++++++++++++++++
+
+The only configuration option available for SR-IOV is *vpci*. It is used as the
+base address for the VFs that are created during the SR-IOV test case
+execution.
+
+  .. code-block:: yaml+jinja
+
+    networks:
+      uplink_0:
+        phy_port: "0000:05:00.0"
+        vpci: "0000:00:07.0"
+        cidr: '152.16.100.10/24'
+        gateway_ip: '152.16.100.20'
+      downlink_0:
+        phy_port: "0000:05:00.1"
+        vpci: "0000:00:08.0"
+        cidr: '152.16.40.10/24'
+        gateway_ip: '152.16.100.20'
+
+.. _`VM image properties label`:
+
+VM image properties
+'''''''''''''''''''
+
+An example of VM image properties under the *flavor* section:
+
+  .. code-block:: yaml
+
+      flavor:
+        images: <path>
+        ram: 8192
+        extra_specs:
+           machine_type: 'pc-i440fx-xenial'
+           hw:cpu_sockets: 1
+           hw:cpu_cores: 6
+           hw:cpu_threads: 2
+           hw_socket: 0
+           cputune: |
+             <cputune>
+               <vcpupin vcpu="0" cpuset="7"/>
+               <vcpupin vcpu="1" cpuset="8"/>
+               ...
+               <vcpupin vcpu="11" cpuset="18"/>
+               <emulatorpin cpuset="11"/>
+             </cputune>
+        user: ""
+        password: ""
+
+VM image properties description:
+
+  +-------------------------+-------------------------------------------------+
+  | Parameters              | Detail                                          |
+  +=========================+=================================================+
+  | images                  || Path to the VM image generated by              |
+  |                         |  ``nsb_setup.sh``                               |
+  |                         || Default path is ``/var/lib/libvirt/images/``   |
+  |                         || Default file name ``yardstick-nsb-image.img``  |
+  |                         |  or ``yardstick-image.img``                     |
+  +-------------------------+-------------------------------------------------+
+  | ram                     || Amount of RAM to be used for VM                |
+  |                         || Default is 4096 MB                             |
+  +-------------------------+-------------------------------------------------+
+  | hw:cpu_sockets          || Number of sockets provided to the guest VM     |
+  |                         || Default is 1                                   |
+  +-------------------------+-------------------------------------------------+
+  | hw:cpu_cores            || Number of cores provided to the guest VM       |
+  |                         || Default is 2                                   |
+  +-------------------------+-------------------------------------------------+
+  | hw:cpu_threads          || Number of threads provided to the guest VM     |
+  |                         || Default is 2                                   |
+  +-------------------------+-------------------------------------------------+
+  | hw_socket               || Generate vcpu cpuset from given HW socket      |
+  |                         || Default is 0                                   |
+  +-------------------------+-------------------------------------------------+
+  | cputune                 || Maps virtual cpu with logical cpu              |
+  +-------------------------+-------------------------------------------------+
+  | machine_type            || Machine type to be emulated in VM              |
+  |                         || Default is 'pc-i440fx-xenial'                  |
+  +-------------------------+-------------------------------------------------+
+  | user                    || User name to access the VM                     |
+  |                         || Default value is 'root'                        |
+  +-------------------------+-------------------------------------------------+
+  | password                || Password to access the VM                      |
+  +-------------------------+-------------------------------------------------+
+
 
 OVS-DPDK
-~~~~~~~~
+^^^^^^^^
 
 OVS-DPDK Pre-requisites
-~~~~~~~~~~~~~~~~~~~~~~~
++++++++++++++++++++++++
 
 On the host where the VM is created:
-a) Create and configure a bridge named ``br-int`` for VM to connect to
+1. Create and configure a bridge named ``br-int`` for VM to connect to
     external network. Currently this can be done using VXLAN tunnel.
 
     Execute the following on host, where VM is created:
@@ -621,18 +850,18 @@ On Host, where VM is created:
 
   .. note:: Host and jump host are different baremetal servers.
 
-b) Modify test case management CIDR.
+2. Modify test case management CIDR.
     IP addresses IP#1, IP#2 and CIDR must be in the same network.
 
   .. code-block:: YAML
 
     servers:
-      vnf:
+      vnf_0:
         network_ports:
           mgmt:
             cidr: '1.1.1.7/24'
 
-c) Build guest image for VNF to run.
+3. Build guest image for VNF to run.
     Most of the sample test cases in Yardstick are using a guest image called
     ``yardstick-nsb-image`` which deviates from an Ubuntu Cloud Server image.
     Yardstick has a tool for building this custom image with SampleVNF.
@@ -655,11 +884,11 @@ On Host, where VM is created:
    .. note::  VM should be built with a static IP and should be accessible
       from the yardstick host.
 
-3. OVS & DPDK version.
-   * OVS 2.7 and DPDK 16.11.1 above version is supported
+4. OVS & DPDK version:
 
-4. Setup `OVS-DPDK`_ on host.
+  * OVS 2.7 or higher with DPDK 16.11.1 or higher is supported
 
+Refer to the `OVS-DPDK`_ setup instructions to install it on the host.
 
 OVS-DPDK Config pod.yaml describing Topology
 ++++++++++++++++++++++++++++++++++++++++++++
@@ -691,7 +920,7 @@ OVS-DPDK 2-Node setup
   |          |               |       (ovs-dpdk) |      |
   |          | (n)<----->(n) |------------------       |
   +----------+               +-------------------------+
-  trafficgen_1                          host
+  trafficgen_0                          host
 
 
 OVS-DPDK 3-Node setup - Correlated Traffic
@@ -721,7 +950,7 @@ OVS-DPDK 3-Node setup - Correlated Traffic
   |          |               |      (ovs-dpdk)  |      |          |            |
   |          | (n)<----->(n) |                  ------ |(n)<-->(n)|            |
   +----------+               +-------------------------+          +------------+
-  trafficgen_1                          host                       trafficgen_2
+  trafficgen_0                          host                       trafficgen_1
 
 
 Before executing Yardstick test cases, make sure that the ``pod.yaml`` reflects
@@ -739,7 +968,7 @@ OVS-DPDK Config pod_trex.yaml
 
     nodes:
     -
-      name: trafficgen_1
+      name: trafficgen_0
       role: TrafficGen
       ip: 1.1.1.1
       user: root
@@ -758,7 +987,7 @@ OVS-DPDK Config pod_trex.yaml
               dpdk_port_num: 1
               local_ip: "152.16.40.20"
               netmask:   "255.255.255.0"
-              local_mac: "00:00.00:00:00:02"
+              local_mac: "00:00:00:00:00:02"
 
 OVS-DPDK Config host_ovs.yaml
 +++++++++++++++++++++++++++++
@@ -776,8 +1005,8 @@ OVS-DPDK Config host_ovs.yaml
 ovs_dpdk testcase update:
 ``<yardstick>/samples/vnf_samples/nsut/vfw/tc_ovs_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``
 
-Update "contexts" section
-'''''''''''''''''''''''''
+Update contexts section
+'''''''''''''''''''''''
 
 .. code-block:: YAML
 
@@ -810,7 +1039,7 @@ Update "contexts" section
        user: "" # update VM username
        password: "" # update password
      servers:
-       vnf:
+       vnf_0:
          network_ports:
            mgmt:
              cidr: '1.1.1.61/24'  # Update VM IP address, if static, <ip>/<mask> or if dynamic, <start of ip>/<mask>
@@ -830,17 +1059,103 @@ Update "contexts" section
          cidr: '152.16.40.10/24'
          gateway_ip: '152.16.100.20'
 
+OVS-DPDK configuration options
+++++++++++++++++++++++++++++++
+
+There are a number of configuration options available for the OVS-DPDK context
+in a test case. They are mostly used for performance tuning.
 
-Network Service Benchmarking - OpenStack with SR-IOV support
-------------------------------------------------------------
+OVS-DPDK properties
+'''''''''''''''''''
+
+An example of OVS-DPDK properties under the *ovs_properties* section:
+
+  .. code-block:: yaml
+
+      ovs_properties:
+        version:
+          ovs: 2.8.1
+          dpdk: 17.05.2
+        pmd_threads: 4
+        pmd_cpu_mask: "0x3c"
+        ram:
+         socket_0: 2048
+         socket_1: 2048
+        queues: 2
+        vpath: "/usr/local"
+        max_idle: 30000
+        lcore_mask: 0x02
+        dpdk_pmd-rxq-affinity:
+          0: "0:2,1:2"
+          1: "0:2,1:2"
+          2: "0:3,1:3"
+          3: "0:3,1:3"
+        vhost_pmd-rxq-affinity:
+          0: "0:3,1:3"
+          1: "0:3,1:3"
+          2: "0:4,1:4"
+          3: "0:4,1:4"
+
+OVS-DPDK properties description:
+
+  +-------------------------+-------------------------------------------------+
+  | Parameters              | Detail                                          |
+  +=========================+=================================================+
+  | version                 || Version of OVS and DPDK to be installed        |
+  |                         || There is a relation between OVS and DPDK       |
+  |                         |  version which can be found at                  |
+  |                         | `OVS-DPDK-versions`_                            |
+  |                         || By default OVS: 2.6.0, DPDK: 16.07.2           |
+  +-------------------------+-------------------------------------------------+
+  | lcore_mask              || Core bitmask used during DPDK initialization   |
+  |                         |  where the non-datapath OVS-DPDK threads such   |
+  |                         |  as handler and revalidator threads run         |
+  +-------------------------+-------------------------------------------------+
+  | pmd_cpu_mask            || Core bitmask that sets which cores are used by |
+  |                         || OVS-DPDK for datapath packet processing        |
+  +-------------------------+-------------------------------------------------+
+  | pmd_threads             || Number of PMD threads used by OVS-DPDK for     |
+  |                         |  datapath                                       |
+  |                         || This core mask is evaluated in Yardstick       |
+  |                         || It will be used if pmd_cpu_mask is not given   |
+  |                         || Default is 2                                   |
+  +-------------------------+-------------------------------------------------+
+  | ram                     || Amount of RAM to be used for each socket, MB   |
+  |                         || Default is 2048 MB                             |
+  +-------------------------+-------------------------------------------------+
+  | queues                  || Number of RX queues used for DPDK physical     |
+  |                         |  interface                                      |
+  +-------------------------+-------------------------------------------------+
+  | dpdk_pmd-rxq-affinity   || RX queue assignment to PMD threads for DPDK    |
+  |                         || e.g.: <port number> : <queue-id>:<core-id>     |
+  +-------------------------+-------------------------------------------------+
+  | vhost_pmd-rxq-affinity  || RX queue assignment to PMD threads for vhost   |
+  |                         || e.g.: <port number> : <queue-id>:<core-id>     |
+  +-------------------------+-------------------------------------------------+
+  | vpath                   || User path for openvswitch files                |
+  |                         || Default is ``/usr/local``                      |
+  +-------------------------+-------------------------------------------------+
+  | max_idle                || The maximum time that idle flows will remain   |
+  |                         |  cached in the datapath, ms                     |
+  +-------------------------+-------------------------------------------------+
+
+
+VM image properties
+'''''''''''''''''''
+
+VM image properties are the same as for SR-IOV, see
+:ref:`VM image properties label`.
+
+
+OpenStack with SR-IOV support
+-----------------------------
 
 This section describes how to run a Sample VNF test case, using Heat context,
 with SR-IOV. It also covers how to install OpenStack in Ubuntu 16.04, using
 DevStack, with SR-IOV support.
 
 
-Single node OpenStack setup with external TG
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Single node OpenStack with external TG
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
 .. code-block:: console
 
@@ -867,7 +1182,7 @@ Single node OpenStack setup with external TG
   |          | (PF1)<----->(PF1) +--------------------+       |
   |          |                   |                            |
   +----------+                   +----------------------------+
-  trafficgen_1                                 host
+  trafficgen_0                                 host
 
 
 Host pre-configuration
@@ -970,7 +1285,7 @@ DevStack installation
 If you want to try out NSB, but don't have OpenStack set-up, you can use
 `Devstack`_ to install OpenStack on a host. Please note, that the
 ``stable/pike`` branch of devstack repo should be used during the installation.
-The required ``local.conf`` configuration file are described below.
+The required ``local.conf`` configuration file is described below.
 
 DevStack configuration file:
 
@@ -981,11 +1296,10 @@ DevStack configuration file:
   commands to get device and vendor id of the virtual function (VF).
 
 .. literalinclude:: code/single-devstack-local.conf
-   :language: console
+   :language: ini
 
 Start the devstack installation on a host.
 
-
 TG host configuration
 +++++++++++++++++++++
 
@@ -999,7 +1313,7 @@ Run the Sample VNF test case
 
 There is an example of Sample VNF test case ready to be executed in an
 OpenStack environment with SR-IOV support: ``samples/vnf_samples/nsut/vfw/
-tc_heat_sriov_external_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``.
+tc_heat_sriov_external_rfc2544_ipv4_1rule_1flow_trex.yaml``.
 
 Install Yardstick using `Install Yardstick (NSB Testing)`_ steps for OpenStack
 context.
@@ -1012,7 +1326,7 @@ container:
   command to get the PF PCI address for ``vpci`` field.
 
 .. literalinclude:: code/single-yardstick-pod.conf
-   :language: console
+   :language: ini
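 
 As the note above suggests, the PF PCI address for the ``vpci`` field can be
 found with ``lspci``:
 
 .. code-block:: console
 
    # List Ethernet PFs and their PCI addresses (use the address as ``vpci``)
    lspci | grep Ether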
 
 Run the Sample vFW RFC2544 SR-IOV TC (``samples/vnf_samples/nsut/vfw/
 tc_heat_sriov_external_rfc2544_ipv4_1rule_1flow_trex.yaml``) in the heat
@@ -1020,7 +1334,7 @@ context using steps described in `NS testing - using yardstick CLI`_ section.
 
 
 Multi node OpenStack TG and VNF setup (two nodes)
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
 .. code-block:: console
 
@@ -1031,7 +1345,7 @@ Multi node OpenStack TG and VNF setup (two nodes)
   |   |sample-VNF VM       |   |                   |   |sample-VNF VM       |   |
   |   |                    |   |                   |   |                    |   |
   |   |         TG         |   |                   |   |        DUT         |   |
-  |   |    trafficgen_1    |   |                   |   |       (VNF)        |   |
+  |   |    trafficgen_0    |   |                   |   |       (VNF)        |   |
   |   |                    |   |                   |   |                    |   |
   |   +--------+  +--------+   |                   |   +--------+  +--------+   |
   |   | VF NIC |  | VF NIC |   |                   |   | VF NIC |  | VF NIC |   |
@@ -1056,7 +1370,6 @@ Controller/Compute pre-configuration
 Pre-configuration of the controller and compute hosts is the same as
 described in the `Host pre-configuration`_ section.
 
-
 DevStack configuration
 ++++++++++++++++++++++
 
@@ -1073,16 +1386,15 @@ devstack repo should be used during the installation.
 DevStack configuration file for controller host:
 
 .. literalinclude:: code/multi-devstack-controller-local.conf
-   :language: console
+   :language: ini
 
 DevStack configuration file for compute host:
 
 .. literalinclude:: code/multi-devstack-compute-local.conf
-   :language: console
+   :language: ini
 
 Start the devstack installation on the controller and compute hosts.
 
-
 Run the sample vFW TC
 +++++++++++++++++++++
 
@@ -1105,7 +1417,7 @@ Enabling other Traffic generators
 ---------------------------------
 
 IxLoad
-~~~~~~
+^^^^^^
 
 1. Software needed: IxLoadAPI ``<IxLoadTclApi version>Linux64.bin.tgz`` and
    ``<IxOS version>Linux64.bin.tar.gz`` (download from the Ixia support site)
@@ -1127,7 +1439,7 @@ IxLoad
   Configure ``pod_ixia.yaml``
 
   .. literalinclude:: code/pod_ixia.yaml
-     :language: console
+     :language: yaml
 
   For SR-IOV / OVS-DPDK pod files, please refer to `Standalone Virtualization`_
   for the OVS-DPDK / SR-IOV configuration.
@@ -1148,7 +1460,7 @@ IxLoad
    ``<repo>/samples/vnf_samples/nsut/vfw/tc_baremetal_http_ixload_1b_Requests-65000_Concurrency.yaml``
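 
 Once the Ixia ports and ``pod_ixia.yaml`` are configured, the test case above
 can be started with the yardstick CLI, e.g.:
 
 .. code-block:: console
 
    yardstick task start \
      samples/vnf_samples/nsut/vfw/tc_baremetal_http_ixload_1b_Requests-65000_Concurrency.yaml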
 
 IxNetwork
-~~~~~~~~~
+^^^^^^^^^
 
 IxNetwork test cases use the IxNetwork API Python Bindings module, which is
 installed as part of the project's requirements.
@@ -1163,7 +1475,7 @@ installed as part of the requirements of the project.
   Configure ``pod_ixia.yaml``
 
   .. literalinclude:: code/pod_ixia.yaml
-     :language: console
+     :language: yaml
 
   For SR-IOV / OVS-DPDK pod files, please refer to the above
   `Standalone Virtualization`_ section for the OVS-DPDK / SR-IOV configuration.
@@ -1208,9 +1520,9 @@ to be preinstalled and properly configured.
       ``PYTHONPATH`` environment variable.
 
     .. important::
-    The current version of LsApi module has an issue with reading LD_LIBRARY_PATH.
-    For LsApi module to initialize correctly following lines (184-186) in
-    lsapi.py
+      The current version of the LsApi module has an issue with reading
+      ``LD_LIBRARY_PATH``. For the LsApi module to initialize correctly, the
+      following lines (184-186) in ``lsapi.py`` should be modified as follows:
 
     .. code-block:: python