Merge "Rework TestSampleVnf"
[yardstick.git] / docs / testing / user / userguide / 13-nsb-installation.rst
index fb68fbf..3a06be6 100644 (file)
@@ -1,36 +1,43 @@
 .. This work is licensed under a Creative Commons Attribution 4.0 International
 .. License.
 .. http://creativecommons.org/licenses/by/4.0
-.. (c) OPNFV, 2016-2017 Intel Corporation.
+.. (c) OPNFV, 2016-2019 Intel Corporation.
 
 
-=====================================
-Yardstick - NSB Testing -Installation
-=====================================
+..
+   Convention for heading levels in Yardstick documentation:
 
 
-Abstract
-========
+   =======  Heading 0 (reserved for the title in a document)
+   -------  Heading 1
+   ^^^^^^^  Heading 2
+   +++++++  Heading 3
+   '''''''  Heading 4
+
+   Avoid deeper levels because they do not render well.
+
+
+================
+NSB Installation
+================
 
 
-The Network Service Benchmarking (NSB) extends the yardstick framework to do
-VNF characterization and benchmarking in three different execution
-environments viz., bare metal i.e. native Linux environment, standalone virtual
-environment and managed virtualized environment (e.g. Open stack etc.).
-It also brings in the capability to interact with external traffic generators
-both hardware & software based for triggering and validating the traffic
-according to user defined profiles.
+.. _OVS-DPDK: http://docs.openvswitch.org/en/latest/intro/install/dpdk/
+.. _devstack: https://docs.openstack.org/devstack/pike/
+.. _OVS-DPDK-versions: http://docs.openvswitch.org/en/latest/faq/releases/
+
+Abstract
+--------
 
 The steps needed to run Yardstick with NSB testing are:
 
 * Install Yardstick (NSB Testing).
 
-* Setup/Reference pod.yaml describing Test topology
-* Create/Reference the test configuration yaml file.
+* Setup/reference ``pod.yaml`` describing the test topology.
+* Create/reference the test configuration YAML file.
 * Run the test case.
 
-
 Prerequisites
-=============
+-------------
 
 
-Refer chapter Yardstick Installation for more information on yardstick
-prerequisites
+Refer to :doc:`04-installation` for more information on Yardstick
+prerequisites.
 
 Several prerequisites are needed for Yardstick (VNF testing):
 
 
@@ -46,11 +53,10 @@ Several prerequisites are needed for Yardstick (VNF testing):
   * intel-cmt-cat
 
 Hardware & Software Ingredients
--------------------------------
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
 SUT requirements:
 
 
-
    ======= ===================
    Item    Description
    ======= ===================
@@ -63,7 +69,6 @@ SUT requirements:
 
 Boot and BIOS settings:
 
 
-
    ============= =================================================
    Boot settings default_hugepagesz=1G hugepagesz=1G hugepages=16
                  hugepagesz=2M hugepages=2048 isolcpus=1-11,22-33
@@ -82,29 +87,29 @@ Boot and BIOS settings:
                  Turbo Boost Disabled
    ============= =================================================
 
-
-
 Install Yardstick (NSB Testing)
-===============================
-
-Download the source code and install Yardstick from it
+-------------------------------
 
 
-.. code-block:: console
+Yardstick with NSB can be installed using ``nsb_setup.sh``.
+The ``nsb_setup.sh`` script allows you to:
 
 
-  git clone https://gerrit.opnfv.org/gerrit/yardstick
+1. Install Yardstick in the specified mode: bare metal or container.
+   Refer to :doc:`04-installation`.
+2. Install package dependencies on the remote servers used as traffic
+   generator or sample VNF: DPDK, sample VNFs, TREX and collectd.
+   Add such servers to the ``install-inventory.ini`` file, to either the
+   ``yardstick-standalone`` or the ``yardstick-baremetal`` server group.
+   The script also configures IOMMU, hugepages, open file limits, CPU
+   isolation, etc.
+3. Build a VM image, either ``nsb`` or normal. The ``nsb`` VM image is used to
+   run Yardstick sample VNF tests such as vFW, vACL and vCGNAPT.
+   The normal VM image is used to run Yardstick ping tests in the OpenStack
+   context.
+4. Add the ``nsb`` or normal VM image to OpenStack, together with the
+   OpenStack variables.
 
 
-  cd yardstick
+First, configure the network proxy, either using environment variables or by
+setting the global environment file.
 
 
-  # Switch to latest stable branch
-  # git checkout <tag or stable branch>
-  git checkout stable/euphrates
+Set environment::
 
 
-Configure the network proxy, either using the environment variables or setting
-the global environment file:
-
-.. code-block:: ini
-
-    cat /etc/environment
     http_proxy='http://proxy.company.com:port'
     https_proxy='http://proxy.company.com:port'
 
@@ -113,63 +118,190 @@ the global environment file:
     export http_proxy='http://proxy.company.com:port'
     export https_proxy='http://proxy.company.com:port'
 
-The last step is to modify the Yardstick installation inventory, used by
-Ansible:
+Download the source code and check out the latest stable branch.
+
+.. code-block:: console
+
+  git clone https://gerrit.opnfv.org/gerrit/yardstick
+  cd yardstick
+  # Switch to latest stable branch
+  git checkout stable/gambia
 
 
-.. code-block:: ini
+Modify the Yardstick installation inventory used by Ansible::
 
   cat ./ansible/install-inventory.ini
   [jumphost]
 
-  localhost  ansible_connection=local
-
-  [yardstick-standalone]
-  yardstick-standalone-node ansible_host=192.168.1.2
-  yardstick-standalone-node-2 ansible_host=192.168.1.3
+  localhost ansible_connection=local
 
   # section below is only due to backward compatibility.
   # it will be removed later
   [yardstick:children]
   jumphost
 
 
+  [yardstick-baremetal]
+  baremetal ansible_host=192.168.2.51 ansible_connection=ssh
+
+  [yardstick-standalone]
+  standalone ansible_host=192.168.2.52 ansible_connection=ssh
+
   [all:vars]
-  ansible_user=root
-  ansible_pass=root
+  # Uncomment credentials below if needed
+    ansible_user=root
+    ansible_ssh_pass=root
+  # ansible_ssh_private_key_file=/root/.ssh/id_rsa
+  # When the IMG_PROPERTY passed is neither normal nor nsb, set
+  # "path_to_vm=/path/to/image" to add it to OpenStack
+  # path_to_img=/tmp/workspace/yardstick-image.img
+
+  # List of CPUs to be isolated (not used by default)
+  # Grub line will be extended with:
+  # "isolcpus=<ISOL_CPUS> nohz=on nohz_full=<ISOL_CPUS> rcu_nocbs=1<ISOL_CPUS>"
+  # ISOL_CPUS=2-27,30-55 # physical CPUs for all NUMA nodes, four CPUs reserved
+
+.. warning::
+
+   Before running ``nsb_setup.sh`` make sure Python is installed on the servers
+   added to ``yardstick-standalone`` or ``yardstick-baremetal`` groups.
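+
+   For example, on Ubuntu this could be done with (a sketch; adapt the
+   package manager to your distribution)::
+
+     sudo apt-get install -y python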
 
 .. note::
 
 
-   SSH access without password needs to be configured for all your nodes defined in
+   SSH access without password needs to be configured for all your nodes
+   defined in the ``install-inventory.ini`` file.
+   If you want to use password authentication you need to install ``sshpass``::
+
+     sudo -EH apt-get install sshpass
+
+
+.. note::
+
+   A VM image built by other means than Yardstick can be added to OpenStack.
+   Uncomment and set the correct path to the VM image in the
+   ``install-inventory.ini`` file::
+
+     path_to_img=/tmp/workspace/yardstick-image.img
+
+
+.. note::
+
+   CPU isolation can be applied to the remote servers, e.g.
+   ``ISOL_CPUS=2-27,30-55``. Uncomment and modify it accordingly in the
    ``install-inventory.ini`` file.
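+
+   With the example value above, the GRUB command line on those servers would
+   be extended with::
+
+     isolcpus=2-27,30-55 nohz=on nohz_full=2-27,30-55 rcu_nocbs=2-27,30-55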
-   If you want to use password authentication you need to install sshpass
 
 
-   .. code-block:: console
+By default ``nsb_setup.sh`` pulls the Yardstick image based on Ubuntu 16.04
+from Docker Hub and starts the container, builds the NSB VM image based on
+Ubuntu 16.04, and installs packages on the servers given in the
+``yardstick-standalone`` and ``yardstick-baremetal`` host groups.
 
 
-     sudo -EH apt-get install sshpass
+To pull the Yardstick image based on Ubuntu 18.04 instead, run::
 
 
-To execute an installation for a Bare-Metal or a Standalone context:
+    ./nsb_setup.sh -i opnfv/yardstick-ubuntu-18.04:latest
 
 
-.. code-block:: console
+To change the default behavior, modify the parameters passed to
+``install.yaml`` in the ``nsb_setup.sh`` file.
 
 
-    ./nsb_setup.sh
+Refer to :doc:`04-installation` for more details on ``install.yaml``
+parameters.
 
 
+To execute an installation for a **Bare-Metal** or a **Standalone** context::
 
 
-To execute an installation for an OpenStack context:
+    ./nsb_setup.sh
 
 
-.. code-block:: console
+To execute an installation for an **OpenStack** context::
 
     ./nsb_setup.sh <path to admin-openrc.sh>
 
 
-Above command setup docker with latest yardstick code. To execute
+.. note::
 
 
-.. code-block:: console
+   Yardstick may not be operational after a Linux distribution kernel update
+   if it was installed before. Run ``nsb_setup.sh`` again to resolve this.
+
+.. warning::
+
+   The Yardstick VM image (NSB or normal) cannot be built inside a VM.
+
+.. warning::
+
+   The ``nsb_setup.sh`` script configures hugepages, CPU isolation and IOMMU
+   via GRUB. A reboot of the servers from the ``yardstick-standalone`` or
+   ``yardstick-baremetal`` groups in the ``install-inventory.ini`` file is
+   required to apply those changes.
+
+The above commands will set up Docker with the latest Yardstick code. To
+execute::
 
   docker exec -it yardstick bash
 
 
+.. note::
+
+   You may need to configure the tty in the Docker container to extend the
+   command line character length, for example::
+
+     stty rows 58 cols 234
+
 It will also automatically download all the packages needed for NSB Testing
-setup. Refer chapter :doc:`04-installation` for more on docker
+setup. Refer to :doc:`04-installation` for more on Docker.
+
 **Install Yardstick using Docker (recommended)**
 
-System Topology:
-================
+Bare Metal context example
+^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Let's assume there are three servers acting as TG, sample VNF DUT and jump host.
+
+Perform the following steps to install NSB:
+
+1. Clone the Yardstick repo to the jump host.
+2. Add the TG and DUT servers to the ``yardstick-baremetal`` group in the
+   ``install-inventory.ini`` file to install NSB and dependencies. Install
+   Python on the servers.
+3. Start the deployment using the Docker image based on Ubuntu 16.04:
+
+.. code-block:: console
+
+   ./nsb_setup.sh
+
+4. Reboot the bare metal servers.
+5. Enter the Yardstick container, modify the pod YAML file and run tests, for
+   example as sketched below.
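+
+   A sketch based on commands shown elsewhere in this guide (the test case
+   path is a placeholder)::
+
+     docker exec -it yardstick bash
+     cp ./etc/yardstick/nodes/pod.yaml.nsb.sample /etc/yardstick/nodes/pod.yaml
+     vi /etc/yardstick/nodes/pod.yaml
+     yardstick --debug task start ./yardstick/samples/vnf_samples/nsut/<vnf>/<test case>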
+
+Standalone context example for Ubuntu 18
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Let's assume there are three servers acting as TG, sample VNF DUT and jump host.
+Ubuntu 18 is installed on all servers.
+
+Perform the following steps to install NSB:
+
+1. Clone the Yardstick repo to the jump host.
+2. Add the TG server to the ``yardstick-baremetal`` group in the
+   ``install-inventory.ini`` file to install NSB and dependencies.
+   Add the server where the VM with the sample VNF will be deployed to the
+   ``yardstick-standalone`` group in the ``install-inventory.ini`` file.
+   The target VM image named ``yardstick-nsb-image.img`` will be placed in
+   ``/var/lib/libvirt/images/``.
+   Install Python on the servers.
+3. Modify ``nsb_setup.sh`` on the jump host:
+
+.. code-block:: console
+
+   ansible-playbook \
+   -e IMAGE_PROPERTY='nsb' \
+   -e OS_RELEASE='bionic' \
+   -e INSTALLATION_MODE='container_pull' \
+   -e YARD_IMAGE_ARCH='amd64' ${extra_args} \
+   -i install-inventory.ini install.yaml
+
+4. Start the deployment with the Yardstick Docker image based on Ubuntu 18.04:
+
+.. code-block:: console
+
+   ./nsb_setup.sh -i opnfv/yardstick-ubuntu-18.04:latest -o <openrc_file>
+
+5. Reboot the servers.
+6. Enter the Yardstick container, modify the pod YAML file and run tests.
+
+
+System Topology
+---------------
 
 .. code-block:: console
 
 
@@ -180,30 +312,28 @@ System Topology:
   |          |              |          |
   |          | (1)<-----(1) |          |
   +----------+              +----------+
-  trafficgen_1                   vnf
+  trafficgen_0                   vnf
 
 
 Environment parameters and credentials
-======================================
+--------------------------------------
 
 
-Config yardstick conf
----------------------
+Configure yardstick.conf
+^^^^^^^^^^^^^^^^^^^^^^^^
 
 
-If user did not run 'yardstick env influxdb' inside the container, which will
-generate correct ``yardstick.conf``, then create the config file manually (run
-inside the container):
-::
+If you did not run ``yardstick env influxdb`` inside the container to generate
+``yardstick.conf``, then create the config file manually (run inside the
+container)::
 
     cp ./etc/yardstick/yardstick.conf.sample /etc/yardstick/yardstick.conf
     vi /etc/yardstick/yardstick.conf
 
 
-Add trex_path, trex_client_lib and bin_path in 'nsb' section.
-
-::
+Add ``trex_path``, ``trex_client_lib`` and ``bin_path`` to the ``nsb``
+section::
 
   [DEFAULT]
   debug = True
 
-  dispatcher = file, influxdb
+  dispatcher = influxdb
 
   [dispatcher_influxdb]
   timeout = 5
 
@@ -218,30 +348,37 @@ Add trex_path, trex_client_lib and bin_path in 'nsb' section.
   trex_client_lib=/opt/nsb_bin/trex_client/stl
 
 Run Yardstick - Network Service Testcases
-=========================================
-
+-----------------------------------------
 
 NS testing - using yardstick CLI
---------------------------------
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
   See :doc:`04-installation`
 
 
-.. code-block:: console
-
+Connect to the Yardstick container::
 
   docker exec -it yardstick /bin/bash
 
-  source /etc/yardstick/openstack.creds (only for heat TC if nsb_setup.sh was NOT used)
-  export EXTERNAL_NETWORK="<openstack public network>" (only for heat TC)
-  yardstick --debug task start yardstick/samples/vnf_samples/nsut/<vnf>/<test case>
+
+If you're running ``heat`` testcases and ``nsb_setup.sh`` was not used::
+
+  source /etc/yardstick/openstack.creds
+
+In addition to the above, you need to set the ``EXTERNAL_NETWORK`` for
+OpenStack::
+
+  export EXTERNAL_NETWORK="<openstack public network>"
+
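+The public network name can be listed, for instance, with the OpenStack CLI
+(assuming the credentials have been sourced)::
+
+  openstack network list --external
+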
+Finally, you should be able to run the testcase::
+
+  yardstick --debug task start ./yardstick/samples/vnf_samples/nsut/<vnf>/<test case>
 
 Network Service Benchmarking - Bare-Metal
-=========================================
+-----------------------------------------
 
 Bare-Metal Config pod.yaml describing Topology
-----------------------------------------------
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
 Bare-Metal 2-Node setup
-^^^^^^^^^^^^^^^^^^^^^^^
++++++++++++++++++++++++
 .. code-block:: console
 
   +----------+              +----------+
@@ -251,10 +388,10 @@ Bare-Metal 2-Node setup
   |          |              |          |
   |          | (n)<-----(n) |          |
   +----------+              +----------+
-  trafficgen_1                   vnf
+  trafficgen_0                   vnf
 
 Bare-Metal 3-Node setup - Correlated Traffic
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+++++++++++++++++++++++++++++++++++++++++++++
 .. code-block:: console
 
   +----------+              +----------+            +------------+
@@ -265,21 +402,21 @@ Bare-Metal 3-Node setup - Correlated Traffic
   |          |              |          |            |            |
   |          |              |          |(1)<---->(0)|            |
   +----------+              +----------+            +------------+
-  trafficgen_1                   vnf                 trafficgen_2
+  trafficgen_0                   vnf                 trafficgen_1
 
 
 Bare-Metal Config pod.yaml
---------------------------
-Before executing Yardstick test cases, make sure that pod.yaml reflects the
+^^^^^^^^^^^^^^^^^^^^^^^^^^
+Before executing Yardstick test cases, make sure that ``pod.yaml`` reflects the
 topology and update all the required fields::
 
-    cp /etc/yardstick/nodes/pod.yaml.nsb.sample /etc/yardstick/nodes/pod.yaml
+    cp ./etc/yardstick/nodes/pod.yaml.nsb.sample /etc/yardstick/nodes/pod.yaml
 
 .. code-block:: YAML
 
     nodes:
     -
-        name: trafficgen_1
+        name: trafficgen_0
         role: TrafficGen
         ip: 1.1.1.1
         user: root
         role: TrafficGen
         ip: 1.1.1.1
         user: root
@@ -343,22 +480,20 @@ topology and update all the required fields.::
           if: "xe1"
 
 
           if: "xe1"
 
 
-Network Service Benchmarking - Standalone Virtualization
-========================================================
+Standalone Virtualization
+-------------------------
 
 SR-IOV
 
 SR-IOV
-------
+^^^^^^
 
 SR-IOV Pre-requisites
 
 SR-IOV Pre-requisites
-^^^^^^^^^^^^^^^^^^^^^
++++++++++++++++++++++
 
 On Host, where VM is created:
 
 On Host, where VM is created:
- a) Create and configure a bridge named ``br-int`` for VM to connect to external network.
-    Currently this can be done using VXLAN tunnel.
+ a) Create and configure a bridge named ``br-int`` for the VM to connect to
+    the external network. Currently this can be done using a VXLAN tunnel.
 
 
-    Execute the following on host, where VM is created:
-
-  .. code-block:: console
+    Execute the following on host, where VM is created::
 
       ip link add type vxlan remote <Jumphost IP> local <DUT IP> id <ID: 10> dstport 4789
       brctl addbr br-int
 
@@ -367,7 +502,7 @@ On Host, where VM is created:
       ip addr add <IP#1, like: 172.20.2.1/24> dev br-int
       ip link set dev br-int up
 
-  .. note:: May be needed to add extra rules to iptable to forward traffic.
+  .. note:: You may need to add extra rules to iptables to forward traffic.
 
   .. code-block:: console
 
 
@@ -390,7 +525,7 @@ On Host, where VM is created:
   .. code-block:: YAML
 
     servers:
-      vnf:
+      vnf_0:
         network_ports:
           mgmt:
             cidr: '1.1.1.7/24'
@@ -401,30 +536,31 @@ On Host, where VM is created:
     Yardstick has a tool for building this custom image with SampleVNF.
     It is necessary to have ``sudo`` rights to use this tool.
 
-    Also you may need to install several additional packages to use this tool, by
-    following the commands below::
-
-       sudo apt-get update && sudo apt-get install -y qemu-utils kpartx
+   You may also need to install several additional packages to use this tool,
+   by following the commands below::
 
 
-    This image can be built using the following command in the directory where Yardstick is installed
+      sudo apt-get update && sudo apt-get install -y qemu-utils kpartx
 
 
-    .. code-block:: console
+   This image can be built using the following command in the directory where
+   Yardstick is installed::
 
 
-       export YARD_IMG_ARCH='amd64'
-       sudo echo "Defaults env_keep += \'YARD_IMG_ARCH\'" >> /etc/sudoers
+      export YARD_IMG_ARCH='amd64'
+      echo 'Defaults env_keep += "YARD_IMG_ARCH"' | sudo tee -a /etc/sudoers
 
 
-    Please use ansible script to generate a cloud image refer to :doc:`04-installation`
+   For instructions on generating a cloud image using Ansible, refer to
+   :doc:`04-installation`.
 
 
-    for more details refer to chapter :doc:`04-installation`
 
 
-    .. note:: VM should be build with static IP and should be accessible from yardstick host.
+   .. note:: The VM should be built with a static IP and be accessible from
+      the Yardstick host.
 
 
 SR-IOV Config pod.yaml describing Topology
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+++++++++++++++++++++++++++++++++++++++++++
 
 
-SR-IOV 2-Node setup:
-^^^^^^^^^^^^^^^^^^^^
+SR-IOV 2-Node setup
++++++++++++++++++++
 .. code-block:: console
 
                                +--------------------+
@@ -442,59 +578,59 @@ SR-IOV 2-Node setup:
   +----------+               +-------------------------+
   |          |               |       ^          ^      |
   |          |               |       |          |      |
-  |          | (0)<----->(0) | ------           |      |
-  |    TG1   |               |           SUT    |      |
-  |          |               |                  |      |
-  |          | (n)<----->(n) |------------------       |
+  |          | (0)<----->(0) | ------    SUT    |      |
+  |    TG1   |               |                  |      |
+  |          | (n)<----->(n) | -----------------       |
+  |          |               |                         |
   +----------+               +-------------------------+
   +----------+               +-------------------------+
-  trafficgen_1                          host
+  trafficgen_0                          host
 
 
 
 SR-IOV 3-Node setup - Correlated Traffic
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+++++++++++++++++++++++++++++++++++++++++
 .. code-block:: console
 
-                               +--------------------+
-                               |                    |
-                               |                    |
-                               |        DUT         |
-                               |       (VNF)        |
-                               |                    |
-                               +--------------------+
-                               | VF NIC |  | VF NIC |
-                               +--------+  +--------+
-                                     ^          ^
-                                     |          |
-                                     |          |
-  +----------+               +-------------------------+            +--------------+
-  |          |               |       ^          ^      |            |              |
-  |          |               |       |          |      |            |              |
-  |          | (0)<----->(0) | ------           |      |            |     TG2      |
-  |    TG1   |               |           SUT    |      |            | (UDP Replay) |
-  |          |               |                  |      |            |              |
-  |          | (n)<----->(n) |                  ------ | (n)<-->(n) |              |
-  +----------+               +-------------------------+            +--------------+
-  trafficgen_1                          host                       trafficgen_2
-
-Before executing Yardstick test cases, make sure that pod.yaml reflects the
+                             +--------------------+
+                             |                    |
+                             |                    |
+                             |        DUT         |
+                             |       (VNF)        |
+                             |                    |
+                             +--------------------+
+                             | VF NIC |  | VF NIC |
+                             +--------+  +--------+
+                                   ^          ^
+                                   |          |
+                                   |          |
+  +----------+               +---------------------+            +--------------+
+  |          |               |     ^          ^    |            |              |
+  |          |               |     |          |    |            |              |
+  |          | (0)<----->(0) |-----           |    |            |     TG2      |
+  |    TG1   |               |         SUT    |    |            | (UDP Replay) |
+  |          |               |                |    |            |              |
+  |          | (n)<----->(n) |                -----| (n)<-->(n) |              |
+  +----------+               +---------------------+            +--------------+
+  trafficgen_0                          host                      trafficgen_1
+
+Before executing Yardstick test cases, make sure that ``pod.yaml`` reflects the
 topology and update all the required fields.
 
 .. code-block:: console
 
-    cp <yardstick>/etc/yardstick/nodes/standalone/trex_bm.yaml.sample /etc/yardstick/nodes/standalone/pod_trex.yaml
-    cp <yardstick>/etc/yardstick/nodes/standalone/host_sriov.yaml /etc/yardstick/nodes/standalone/host_sriov.yaml
+    cp ./etc/yardstick/nodes/standalone/trex_bm.yaml.sample /etc/yardstick/nodes/standalone/pod_trex.yaml
+    cp ./etc/yardstick/nodes/standalone/host_sriov.yaml /etc/yardstick/nodes/standalone/host_sriov.yaml
 
 .. note:: Update all the required fields like ip, user, password, pcis, etc...
 
 SR-IOV Config pod_trex.yaml
 
 .. note:: Update all the required fields like ip, user, password, pcis, etc...
 
 SR-IOV Config pod_trex.yaml
-^^^^^^^^^^^^^^^^^^^^^^^^^^^
++++++++++++++++++++++++++++
 
 .. code-block:: YAML
 
     nodes:
     -
 
 .. code-block:: YAML
 
     nodes:
     -
-        name: trafficgen_1
+        name: trafficgen_0
         role: TrafficGen
         ip: 1.1.1.1
         user: root
         role: TrafficGen
         ip: 1.1.1.1
         user: root
@@ -517,7 +653,7 @@ SR-IOV Config pod_trex.yaml
                 local_mac: "00:00.00:00:00:02"
 
 SR-IOV Config host_sriov.yaml
                 local_mac: "00:00.00:00:00:02"
 
 SR-IOV Config host_sriov.yaml
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
++++++++++++++++++++++++++++++
 
 .. code-block:: YAML
 
 
 .. code-block:: YAML
 
@@ -530,10 +666,10 @@ SR-IOV Config host_sriov.yaml
        password: ""
 
 SR-IOV testcase update:
-``<yardstick>/samples/vnf_samples/nsut/vfw/tc_sriov_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``
+``./samples/vnf_samples/nsut/vfw/tc_sriov_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``
 
 
-Update "contexts" section
-"""""""""""""""""""""""""
+Update contexts section
+'''''''''''''''''''''''
 
 .. code-block:: YAML
 
 
@@ -555,7 +691,7 @@ Update "contexts" section
        user: "" # update VM username
        password: "" # update password
      servers:
-       vnf:
+       vnf_0:
          network_ports:
            mgmt:
              cidr: '1.1.1.61/24'  # Update VM IP address, if static, <ip>/<mask> or if dynamic, <start of ip>/<mask>
@@ -576,16 +712,15 @@ Update "contexts" section
          gateway_ip: '152.16.100.20'
 
 
-
 OVS-DPDK
---------
+^^^^^^^^
 
 OVS-DPDK Pre-requisites
-^^^^^^^^^^^^^^^^^^^^^^^
++++++++++++++++++++++++
 
 On Host, where VM is created:
- a) Create and configure a bridge named ``br-int`` for VM to connect to external network.
-    Currently this can be done using VXLAN tunnel.
+ a) Create and configure a bridge named ``br-int`` for the VM to connect to
+    the external network. Currently this can be done using a VXLAN tunnel.
 
     Execute the following on host, where VM is created:
 
 
@@ -621,7 +756,7 @@ On Host, where VM is created:
   .. code-block:: YAML
 
     servers:
-      vnf:
+      vnf_0:
         network_ports:
           mgmt:
             cidr: '1.1.1.7/24'
@@ -632,34 +767,34 @@ On Host, where VM is created:
     Yardstick has a tool for building this custom image with SampleVNF.
     It is necessary to have ``sudo`` rights to use this tool.
 
-    Also you may need to install several additional packages to use this tool, by
-    following the commands below::
+   You may need to install several additional packages to use this tool, by
+   following the commands below::
 
 
-       sudo apt-get update && sudo apt-get install -y qemu-utils kpartx
+      sudo apt-get update && sudo apt-get install -y qemu-utils kpartx
 
 
-    This image can be built using the following command in the directory where Yardstick is installed::
+   This image can be built using the following command in the directory where
+   Yardstick is installed::
 
 
-       export YARD_IMG_ARCH='amd64'
-       sudo echo "Defaults env_keep += \'YARD_IMG_ARCH\'" >> /etc/sudoers
-       sudo tools/yardstick-img-dpdk-modify tools/ubuntu-server-cloudimg-samplevnf-modify.sh
+      export YARD_IMG_ARCH='amd64'
+      echo 'Defaults env_keep += "YARD_IMG_ARCH"' | sudo tee -a /etc/sudoers
+      sudo tools/yardstick-img-dpdk-modify tools/ubuntu-server-cloudimg-samplevnf-modify.sh
 
 
-    for more details refer to chapter :doc:`04-installation`
+   For more details, refer to :doc:`04-installation`.
 
 
-    .. note::  VM should be build with static IP and should be accessible from yardstick host.
+   .. note:: The VM should be built with a static IP and should be accessible
+      from the Yardstick host.
 
 
- c) OVS & DPDK version.
-     - OVS 2.7 and DPDK 16.11.1 above version is supported
+3. OVS & DPDK version.
+   * OVS 2.7 and DPDK 16.11.1 or above are supported
 
 
- d) Setup OVS/DPDK on host.
-     Please refer to below link on how to setup `OVS-DPDK <http://docs.openvswitch.org/en/latest/intro/install/dpdk/>`_
+4. Setup `OVS-DPDK`_ on the host.
 
 
 OVS-DPDK Config pod.yaml describing Topology
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+++++++++++++++++++++++++++++++++++++++++++++
 
 OVS-DPDK 2-Node setup
-^^^^^^^^^^^^^^^^^^^^^
-
++++++++++++++++++++++
 
 .. code-block:: console
 
 
@@ -685,11 +820,11 @@ OVS-DPDK 2-Node setup
   |          |               |       (ovs-dpdk) |      |
   |          | (n)<----->(n) |------------------       |
   +----------+               +-------------------------+
-  trafficgen_1                          host
+  trafficgen_0                          host
 
 
 OVS-DPDK 3-Node setup - Correlated Traffic
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+++++++++++++++++++++++++++++++++++++++++++
 
 .. code-block:: console
 
 
@@ -715,27 +850,25 @@ OVS-DPDK 3-Node setup - Correlated Traffic
   |          |               |      (ovs-dpdk)  |      |          |            |
   |          | (n)<----->(n) |                  ------ |(n)<-->(n)|            |
   +----------+               +-------------------------+          +------------+
-  trafficgen_1                          host                       trafficgen_2
+  trafficgen_0                          host                       trafficgen_1
 
 
 
 
-Before executing Yardstick test cases, make sure that pod.yaml reflects the
-topology and update all the required fields.
-
-.. code-block:: console
+Before executing Yardstick test cases, make sure that the ``pod.yaml`` reflects
+the topology and update all the required fields::
 
 
-  cp <yardstick>/etc/yardstick/nodes/standalone/trex_bm.yaml.sample /etc/yardstick/nodes/standalone/pod_trex.yaml
-  cp <yardstick>/etc/yardstick/nodes/standalone/host_ovs.yaml /etc/yardstick/nodes/standalone/host_ovs.yaml
+  cp ./etc/yardstick/nodes/standalone/trex_bm.yaml.sample /etc/yardstick/nodes/standalone/pod_trex.yaml
+  cp ./etc/yardstick/nodes/standalone/host_ovs.yaml /etc/yardstick/nodes/standalone/host_ovs.yaml
 
 .. note:: Update all the required fields like ip, user, password, pcis, etc...
 
 OVS-DPDK Config pod_trex.yaml
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
++++++++++++++++++++++++++++++
 
 .. code-block:: YAML
 
     nodes:
     -
-      name: trafficgen_1
+      name: trafficgen_0
       role: TrafficGen
       ip: 1.1.1.1
       user: root
@@ -757,7 +890,7 @@ OVS-DPDK Config pod_trex.yaml
               local_mac: "00:00:00:00:00:02"
 
 OVS-DPDK Config host_ovs.yaml
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
++++++++++++++++++++++++++++++
 
 .. code-block:: YAML
 
 
@@ -770,10 +903,10 @@ OVS-DPDK Config host_ovs.yaml
        password: ""
 
 ovs_dpdk testcase update:
-``<yardstick>/samples/vnf_samples/nsut/vfw/tc_ovs_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``
+``./samples/vnf_samples/nsut/vfw/tc_ovs_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``
 
 
-Update "contexts" section
-"""""""""""""""""""""""""
+Update contexts section
+'''''''''''''''''''''''
 
 .. code-block:: YAML
 
 
@@ -806,7 +939,7 @@ Update "contexts" section
        user: "" # update VM username
        password: "" # update password
      servers:
-       vnf:
+       vnf_0:
          network_ports:
            mgmt:
              cidr: '1.1.1.61/24'  # Update VM IP address, if static, <ip>/<mask> or if dynamic, <start of ip>/<mask>
@@ -826,17 +959,155 @@ Update "contexts" section
          cidr: '152.16.40.10/24'
          gateway_ip: '152.16.100.20'
 
+OVS-DPDK configuration options
+++++++++++++++++++++++++++++++
+
+There are a number of configuration options available for the OVS-DPDK context
+in the test case. They are mostly used for performance tuning.
+
+OVS-DPDK properties:
+''''''''''''''''''''
+
+An example of OVS-DPDK properties under the *ovs_properties* section:
+
+  .. code-block:: console
+
+      ovs_properties:
+        version:
+          ovs: 2.8.1
+          dpdk: 17.05.2
+        pmd_threads: 4
+        pmd_cpu_mask: "0x3c"
+        ram:
+         socket_0: 2048
+         socket_1: 2048
+        queues: 2
+        vpath: "/usr/local"
+        max_idle: 30000
+        lcore_mask: 0x02
+        dpdk_pmd-rxq-affinity:
+          0: "0:2,1:2"
+          1: "0:2,1:2"
+          2: "0:3,1:3"
+          3: "0:3,1:3"
+        vhost_pmd-rxq-affinity:
+          0: "0:3,1:3"
+          1: "0:3,1:3"
+          2: "0:4,1:4"
+          3: "0:4,1:4"
+
+OVS-DPDK properties description:
+
+  +-------------------------+-------------------------------------------------+
+  | Parameters              | Detail                                          |
+  +=========================+=================================================+
+  | version                 || Version of OVS and DPDK to be installed        |
+  |                         || There is a relation between OVS and DPDK       |
+  |                         |  version which can be found at                  |
+  |                         | `OVS-DPDK-versions`_                            |
+  |                         || By default OVS: 2.6.0, DPDK: 16.07.2           |
+  +-------------------------+-------------------------------------------------+
+  | lcore_mask              || Core bitmask used during DPDK initialization   |
+  |                         |  where the non-datapath OVS-DPDK threads such   |
+  |                         |  as handler and revalidator threads run         |
+  +-------------------------+-------------------------------------------------+
+  | pmd_cpu_mask            || Core bitmask that sets which cores are used by |
+  |                         || OVS-DPDK for datapath packet processing        |
+  +-------------------------+-------------------------------------------------+
+  | pmd_threads             || Number of PMD threads used by OVS-DPDK for     |
+  |                         |  datapath                                       |
+  |                         || This core mask is evaluated in Yardstick       |
+  |                         || It will be used if pmd_cpu_mask is not given   |
+  |                         || Default is 2                                   |
+  +-------------------------+-------------------------------------------------+
+  | ram                     || Amount of RAM to be used for each socket, MB   |
+  |                         || Default is 2048 MB                             |
+  +-------------------------+-------------------------------------------------+
+  | queues                  || Number of RX queues used for DPDK physical     |
+  |                         |  interface                                      |
+  +-------------------------+-------------------------------------------------+
+  | dpdk_pmd-rxq-affinity   || RX queue assignment to PMD threads for DPDK    |
+  |                         || e.g.: <port number> : <queue-id>:<core-id>     |
+  +-------------------------+-------------------------------------------------+
+  | vhost_pmd-rxq-affinity  || RX queue assignment to PMD threads for vhost   |
+  |                         || e.g.: <port number> : <queue-id>:<core-id>     |
+  +-------------------------+-------------------------------------------------+
+  | vpath                   || User path for openvswitch files                |
+  |                         || Default is ``/usr/local``                      |
+  +-------------------------+-------------------------------------------------+
+  | max_idle                || The maximum time that idle flows will remain   |
+  |                         |  cached in the datapath, ms                     |
+  +-------------------------+-------------------------------------------------+
+
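+As a worked example of the masks above: ``pmd_cpu_mask: "0x3c"`` is binary
+``111100``, i.e. the PMD threads are pinned to CPU cores 2-5, while
+``lcore_mask: 0x02`` places the non-datapath OVS-DPDK threads on core 1.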
+
+VM image properties
+'''''''''''''''''''
+
+An example of VM image properties under the *flavor* section:
+
+  .. code-block:: console
 
 
-Network Service Benchmarking - OpenStack with SR-IOV support
-============================================================
+      flavor:
+        images: <path>
+        ram: 8192
+        extra_specs:
+           machine_type: 'pc-i440fx-xenial'
+           hw:cpu_sockets: 1
+           hw:cpu_cores: 6
+           hw:cpu_threads: 2
+           hw_socket: 0
+           cputune: |
+             <cputune>
+               <vcpupin vcpu="0" cpuset="7"/>
+               <vcpupin vcpu="1" cpuset="8"/>
+               ...
+               <vcpupin vcpu="11" cpuset="18"/>
+               <emulatorpin cpuset="11"/>
+             </cputune>
+
+VM image properties description:
+
+  +-------------------------+-------------------------------------------------+
+  | Parameters              | Detail                                          |
+  +=========================+=================================================+
+  | images                  || Path to the VM image generated by              |
+  |                         |  ``nsb_setup.sh``                               |
+  |                         || Default path is ``/var/lib/libvirt/images/``   |
+  |                         || Default file name ``yardstick-nsb-image.img``  |
+  |                         |  or ``yardstick-image.img``                     |
+  +-------------------------+-------------------------------------------------+
+  | ram                     || Amount of RAM to be used for VM                |
+  |                         || Default is 4096 MB                             |
+  +-------------------------+-------------------------------------------------+
+  | hw:cpu_sockets          || Number of sockets provided to the guest VM     |
+  |                         || Default is 1                                   |
+  +-------------------------+-------------------------------------------------+
+  | hw:cpu_cores            || Number of cores provided to the guest VM       |
+  |                         || Default is 2                                   |
+  +-------------------------+-------------------------------------------------+
+  | hw:cpu_threads          || Number of threads provided to the guest VM     |
+  |                         || Default is 2                                   |
+  +-------------------------+-------------------------------------------------+
+  | hw_socket               || Generate vcpu cpuset from given HW socket      |
+  |                         || Default is 0                                   |
+  +-------------------------+-------------------------------------------------+
+  | cputune                 || Maps virtual cpu with logical cpu              |
+  +-------------------------+-------------------------------------------------+
+  | machine_type            || Machine type to be emulated in VM              |
+  |                         || Default is 'pc-i440fx-xenial'                  |
+  +-------------------------+-------------------------------------------------+
+
+
+OpenStack with SR-IOV support
+-----------------------------
 
 This section describes how to run a Sample VNF test case, using Heat context,
 with SR-IOV. It also covers how to install OpenStack in Ubuntu 16.04, using
 DevStack, with SR-IOV support.
 
 
 
-Single node OpenStack setup with external TG
---------------------------------------------
+Single node OpenStack with external TG
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
 .. code-block:: console
 
 
@@ -863,32 +1134,28 @@ Single node OpenStack setup with external TG
   |          | (PF1)<----->(PF1) +--------------------+       |
   |          |                   |                            |
   +----------+                   +----------------------------+
-  trafficgen_1                                 host
+  trafficgen_0                                 host
 
 
 Host pre-configuration
-^^^^^^^^^^^^^^^^^^^^^^
+++++++++++++++++++++++
 
 
-.. warning:: The following configuration requires sudo access to the system. Make
-  sure that your user have the access.
+.. warning:: The following configuration requires sudo access to the system.
+   Make sure that your user has access.
 
 
-Enable the Intel VT-d or AMD-Vi extension in the BIOS. Some system manufacturers
-disable this extension by default.
+Enable the Intel VT-d or AMD-Vi extension in the BIOS. Some system
+manufacturers disable this extension by default.
 
 Activate the Intel VT-d or AMD-Vi extension in the kernel by modifying the GRUB
 config file ``/etc/default/grub``.
 
 
-For the Intel platform:
-
-.. code:: bash
+For the Intel platform::
 
   ...
   GRUB_CMDLINE_LINUX_DEFAULT="intel_iommu=on"
   ...
 
 
-For the AMD platform:
-
-.. code:: bash
+For the AMD platform::
 
   ...
   GRUB_CMDLINE_LINUX_DEFAULT="amd_iommu=on"
 
@@ -903,9 +1170,7 @@ Update the grub configuration file and restart the system:
   sudo update-grub
   sudo reboot
 
-Make sure the extension has been enabled:
-
-.. code:: bash
+Make sure the extension has been enabled::
 
   sudo journalctl -b 0 | grep -e IOMMU -e DMAR
 
 
@@ -918,11 +1183,13 @@ Make sure the extension has been enabled:
   Feb 06 14:50:14 hostname kernel: DMAR: dmar1: reg_base_addr e0ffc000 ver 1:0 cap 8d2078c106f0466 ecap f020de
   Feb 06 14:50:14 hostname kernel: DMAR: DRHD base: 0x000000ee7fc000 flags: 0x0
 
+.. TODO: Refer to the yardstick installation guide for proxy set up
+
 Setup system proxy (if needed). Add the following configuration into the
 ``/etc/environment`` file:
 
 .. note:: The proxy server name/port and IPs should be changed according to
-  actuall/current proxy configuration in the lab.
+  actual/current proxy configuration in the lab.
 
 .. code:: bash
 
 
@@ -940,13 +1207,11 @@ Upgrade the system:
   sudo -EH apt-get upgrade
   sudo -EH apt-get dist-upgrade
 
-Install dependencies needed for the DevStack
+Install dependencies needed for DevStack
 
 .. code:: bash
 
 
-  sudo -EH apt-get install python
-  sudo -EH apt-get install python-dev
-  sudo -EH apt-get install python-pip
+  sudo -EH apt-get install python python-dev python-pip
 
 Setup SR-IOV ports on the host:
 
 
@@ -967,12 +1232,12 @@ Setup SR-IOV ports on the host:
 
 
 DevStack installation
-^^^^^^^^^^^^^^^^^^^^^
++++++++++++++++++++++
 
 
-Use official `Devstack <https://docs.openstack.org/devstack/pike/>`_
-documentation to install OpenStack on a host. Please note, that stable
-``pike`` branch of devstack repo should be used during the installation.
-The required `local.conf`` configuration file are described below.
+If you want to try out NSB, but don't have OpenStack set up, you can use
+`Devstack`_ to install OpenStack on a host. Please note that the
+``stable/pike`` branch of the devstack repo should be used during the
+installation. The required ``local.conf`` configuration file is described
+below.
 
 DevStack configuration file:
 
 
@@ -987,24 +1252,22 @@ DevStack configuration file:
 
 Start the devstack installation on a host.
 
 
-
 TG host configuration
-^^^^^^^^^^^^^^^^^^^^^
++++++++++++++++++++++
 
 
-Yardstick automatically install and configure Trex traffic generator on TG
+Yardstick automatically installs and configures Trex traffic generator on TG
 host based on the provided POD file (see below). However, it is recommended to check
-the compatibility of the installed NIC on the TG server with software Trex using
-the manual at https://trex-tgn.cisco.com/trex/doc/trex_manual.html.
-
+the compatibility of the installed NIC on the TG server with software Trex
+using the `manual <https://trex-tgn.cisco.com/trex/doc/trex_manual.html>`_.
 
 Run the Sample VNF test case
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+++++++++++++++++++++++++++++
 
 There is an example of Sample VNF test case ready to be executed in an
 OpenStack environment with SR-IOV support: ``samples/vnf_samples/nsut/vfw/
 tc_heat_sriov_external_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``.
 
 
-Install yardstick using `Install Yardstick (NSB Testing)`_ steps for OpenStack
+Install Yardstick using `Install Yardstick (NSB Testing)`_ steps for OpenStack
 context.
 
 Create pod file for TG in the yardstick repo folder located in the yardstick
@@ -1023,7 +1286,7 @@ context using steps described in `NS testing - using yardstick CLI`_ section.
 
 
 Multi node OpenStack TG and VNF setup (two nodes)
--------------------------------------------------
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
 .. code-block:: console
 
 
@@ -1034,7 +1297,7 @@ Multi node OpenStack TG and VNF setup (two nodes)
   |   |sample-VNF VM       |   |                   |   |sample-VNF VM       |   |
   |   |                    |   |                   |   |                    |   |
   |   |         TG         |   |                   |   |        DUT         |   |
-  |   |    trafficgen_1    |   |                   |   |       (VNF)        |   |
+  |   |    trafficgen_0    |   |                   |   |       (VNF)        |   |
   |   |                    |   |                   |   |                    |   |
   |   +--------+  +--------+   |                   |   +--------+  +--------+   |
   |   | VF NIC |  | VF NIC |   |                   |   | VF NIC |  | VF NIC |   |
@@ -1054,19 +1317,17 @@ Multi node OpenStack TG and VNF setup (two nodes)
 
 
 Controller/Compute pre-configuration
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+++++++++++++++++++++++++++++++++++++
 
 Pre-configuration of the controller and compute hosts is the same as
-described in `Host pre-configuration`_ section. Follow the steps in the section.
-
+described in the `Host pre-configuration`_ section.
 
 DevStack configuration
-^^^^^^^^^^^^^^^^^^^^^^
+++++++++++++++++++++++
 
 
-Use official `Devstack <https://docs.openstack.org/devstack/pike/>`_
-documentation to install OpenStack on a host. Please note, that stable
-``pike`` branch of devstack repo should be used during the installation.
-The required `local.conf`` configuration file are described below.
+A reference ``local.conf`` for deploying OpenStack in a multi-host environment
+using `Devstack`_ is shown in this section. The ``stable/pike`` branch of
+devstack repo should be used during the installation.
 
 .. note:: Update the devstack configuration files by replacing angular brackets
   with a short description inside.
 
@@ -1086,17 +1347,17 @@ DevStack configuration file for compute host:
 
 Start the devstack installation on the controller and compute hosts.
 
 
-
 Run the sample vFW TC
-^^^^^^^^^^^^^^^^^^^^^
++++++++++++++++++++++
 
 
-Install yardstick using `Install Yardstick (NSB Testing)`_ steps for OpenStack
+Install Yardstick using `Install Yardstick (NSB Testing)`_ steps for OpenStack
 context.
 
-Run sample vFW RFC2544 SR-IOV TC (``samples/vnf_samples/nsut/vfw/
-tc_heat_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``) in the heat
-context using steps described in `NS testing - using yardstick CLI`_ section
-and the following yardtick command line arguments:
+Run the sample vFW RFC2544 SR-IOV test case
+(``samples/vnf_samples/nsut/vfw/tc_heat_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``)
+in the heat context using steps described in
+`NS testing - using yardstick CLI`_ section and the following Yardstick command
+line arguments:
 
 .. code:: bash
 
 
@@ -1104,8 +1365,8 @@ and the following yardtick command line arguments:
   samples/vnf_samples/nsut/vfw/tc_heat_rfc2544_ipv4_1rule_1flow_64B_trex.yaml
 
 
-Enabling other Traffic generator
-================================
+Enabling other Traffic generators
+---------------------------------
 
 IxLoad
 ^^^^^^
 
@@ -1124,14 +1385,16 @@ IxLoad
 
   .. code-block:: console
 
 
-    cp <repo>/etc/yardstick/nodes/pod.yaml.nsb.sample.ixia etc/yardstick/nodes/pod_ixia.yaml
+    cp ./etc/yardstick/nodes/pod.yaml.nsb.sample.ixia \
+      /etc/yardstick/nodes/pod_ixia.yaml
 
   Config ``pod_ixia.yaml``
 
   .. literalinclude:: code/pod_ixia.yaml
      :language: console
 
 
-  for sriov/ovs_dpdk pod files, please refer to above Standalone Virtualization for ovs-dpdk/sriov configuration
+  For sriov/ovs_dpdk pod files, please refer to `Standalone Virtualization`_
+  for the ovs-dpdk/sriov configuration.
 
 3. Start IxOS TCL Server (Install 'Ixia IxExplorer IxOS <version>')
    You will also need to configure the IxLoad machine to start the IXIA
 
@@ -1141,15 +1404,15 @@ IxLoad
    * Go to:
      ``Start->Programs->Ixia->IxOS->IxOS 8.01-GA-Patch1->Ixia Tcl Server IxOS 8.01-GA-Patch1``
      or
-     ``"C:\Program Files (x86)\Ixia\IxOS\8.01-GA-Patch1\ixTclServer.exe"``
+     ``C:\Program Files (x86)\Ixia\IxOS\8.01-GA-Patch1\ixTclServer.exe``
 
 4. Create a folder ``Results`` in c:\ and share the folder on the network.
 
 5. Execute testcase in samplevnf folder e.g.
-   ``<repo>/samples/vnf_samples/nsut/vfw/tc_baremetal_http_ixload_1b_Requests-65000_Concurrency.yaml``
+   ``./samples/vnf_samples/nsut/vfw/tc_baremetal_http_ixload_1b_Requests-65000_Concurrency.yaml``
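+
+   For instance, following the CLI pattern shown earlier (a sketch)::
+
+     yardstick --debug task start ./samples/vnf_samples/nsut/vfw/tc_baremetal_http_ixload_1b_Requests-65000_Concurrency.yaml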
 
 IxNetwork
----------
+^^^^^^^^^
 
 IxNetwork testcases use IxNetwork API Python Bindings module, which is
 installed as part of the requirements of the project.
 
@@ -1158,14 +1421,16 @@ installed as part of the requirements of the project.
 
   .. code-block:: console
 
 
-    cp <repo>/etc/yardstick/nodes/pod.yaml.nsb.sample.ixia etc/yardstick/nodes/pod_ixia.yaml
+    cp ./etc/yardstick/nodes/pod.yaml.nsb.sample.ixia \
+    /etc/yardstick/nodes/pod_ixia.yaml
 
 
-  Config pod_ixia.yaml
+  Configure ``pod_ixia.yaml``
 
   .. literalinclude:: code/pod_ixia.yaml
      :language: console
 
 
-  for sriov/ovs_dpdk pod files, please refer to above Standalone Virtualization for ovs-dpdk/sriov configuration
+  For sriov/ovs_dpdk pod files, please refer to `Standalone Virtualization`_
+  above for the ovs-dpdk/sriov configuration.
 
 2. Start IxNetwork TCL Server
    You will also need to configure the IxNetwork machine to start the IXIA
 
@@ -1177,4 +1442,53 @@ installed as part of the requirements of the project.
       (or ``IxNetworkApiServer``)
 
 3. Execute testcase in samplevnf folder e.g.
-   ``<repo>/samples/vnf_samples/nsut/vfw/tc_baremetal_rfc2544_ipv4_1rule_1flow_64B_ixia.yaml``
+   ``./samples/vnf_samples/nsut/vfw/tc_baremetal_rfc2544_ipv4_1rule_1flow_64B_ixia.yaml``
+
+Spirent Landslide
+-----------------
+
+In order to use Spirent Landslide for vEPC testcases, some dependencies have
+to be preinstalled and properly configured.
+
+- Java
+
+    32-bit Java installation is required for the Spirent Landslide TCL API.
+
+    | ``$ sudo apt-get install openjdk-8-jdk:i386``
+
+    .. important::
+      Make sure ``LD_LIBRARY_PATH`` is pointing to 32-bit JRE. For more details
+      check the `Linux Troubleshooting <http://TAS_HOST_IP/tclapiinstall.html#trouble>`_
+      section of the installation instructions.
+
+- LsApi (Tcl API module)
+
+    Follow Landslide documentation for detailed instructions on Linux
+    installation of Tcl API and its dependencies
+    ``http://TAS_HOST_IP/tclapiinstall.html``.
+    For working with the LsApi Python wrapper, only steps 1-5 are required.
+
+    .. note:: After installation make sure your API home path is included in
+      the ``PYTHONPATH`` environment variable.
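+
+    For example (the install path below is an assumption; use your actual API
+    home path)::
+
+      export PYTHONPATH=$PYTHONPATH:/usr/local/share/LsApi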
+
+    .. important::
+      The current version of the LsApi module has an issue with reading
+      ``LD_LIBRARY_PATH``. For the LsApi module to initialize correctly, the
+      following lines (184-186) in ``lsapi.py``
+
+    .. code-block:: python
+
+        ldpath = os.environ.get('LD_LIBRARY_PATH', '')
+        if ldpath == '':
+         environ['LD_LIBRARY_PATH'] = environ['LD_LIBRARY_PATH'] + ':' + ldpath
+
+    should be changed to:
+
+    .. code-block:: python
+
+        ldpath = os.environ.get('LD_LIBRARY_PATH', '')
+        if not ldpath == '':
+            environ['LD_LIBRARY_PATH'] = environ['LD_LIBRARY_PATH'] + ':' + ldpath
+
+.. note:: The Spirent Landslide TCL software package needs to be updated in
+  case the user upgrades to a new version of the Spirent Landslide software.