1 .. This work is licensed under a Creative Commons Attribution 4.0 International
2 .. License.
3 .. http://creativecommons.org/licenses/by/4.0
4 .. (c) OPNFV, 2016-2019 Intel Corporation.
5
6 ..
7    Convention for heading levels in Yardstick documentation:
8
9    =======  Heading 0 (reserved for the title in a document)
10    -------  Heading 1
11    ^^^^^^^  Heading 2
12    +++++++  Heading 3
13    '''''''  Heading 4
14
15    Avoid deeper levels because they do not render well.
16
17
18 ================
19 NSB Installation
20 ================
21
22 .. _OVS-DPDK: http://docs.openvswitch.org/en/latest/intro/install/dpdk/
23 .. _devstack: https://docs.openstack.org/devstack/pike/
24 .. _OVS-DPDK-versions: http://docs.openvswitch.org/en/latest/faq/releases/
25
26 Abstract
27 --------
28
29 The steps needed to run Yardstick with NSB testing are:
30
31 * Install Yardstick (NSB Testing).
32 * Setup/reference ``pod.yaml`` describing Test topology.
33 * Create/reference the test configuration yaml file.
34 * Run the test case.
35
36 Prerequisites
37 -------------
38
39 Refer to :doc:`04-installation` for more information on Yardstick
40 prerequisites.
41
42 Several prerequisites are needed for Yardstick (VNF testing):
43
44   * Python Modules: pyzmq, pika.
45   * flex
46   * bison
47   * build-essential
48   * automake
49   * libtool
50   * librabbitmq-dev
51   * rabbitmq-server
52   * collectd
53   * intel-cmt-cat
54
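As an illustration, on an Ubuntu host the system packages listed above could be
installed as follows (the package names are assumptions derived from the list
above and may differ per distribution; the Python modules are installed with
``pip``)::

  sudo apt-get update
  sudo apt-get install -y flex bison build-essential automake libtool \
       librabbitmq-dev rabbitmq-server collectd intel-cmt-cat
  pip install pyzmq pika
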
55 Hardware & Software Ingredients
56 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
57
58 SUT requirements:
59
60    ======= ===================
61    Item    Description
62    ======= ===================
63    Memory  Min 20GB
64    NICs    2 x 10G
65    OS      Ubuntu 16.04.3 LTS
66    kernel  4.4.0-34-generic
67    DPDK    17.02
68    ======= ===================
69
70 Boot and BIOS settings:
71
72    ============= =================================================
73    Boot settings default_hugepagesz=1G hugepagesz=1G hugepages=16
74                  hugepagesz=2M hugepages=2048 isolcpus=1-11,22-33
75                  nohz_full=1-11,22-33 rcu_nocbs=1-11,22-33
76                  iommu=on iommu=pt intel_iommu=on
77                  Note: nohz_full and rcu_nocbs are used to disable
78                  Linux kernel interrupts on the isolated CPUs
79    BIOS          CPU Power and Performance Policy <Performance>
80                  CPU C-state Disabled
81                  CPU P-state Disabled
82                  Enhanced Intel® SpeedStep® Tech Disabled
83                  Hyper-Threading Technology (If supported) Enabled
84                  Virtualization Technology Enabled
85                  Intel(R) VT for Direct I/O Enabled
86                  Coherency Enabled
87                  Turbo Boost Disabled
88    ============= =================================================
89
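As a sketch only, the boot settings above are typically applied through the
kernel command line in GRUB followed by a reboot (the exact file and variable
names are assumptions and may differ per distribution)::

  # /etc/default/grub
  GRUB_CMDLINE_LINUX="default_hugepagesz=1G hugepagesz=1G hugepages=16 hugepagesz=2M hugepages=2048 isolcpus=1-11,22-33 nohz_full=1-11,22-33 rcu_nocbs=1-11,22-33 iommu=on iommu=pt intel_iommu=on"

  sudo update-grub
  sudo reboot
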
90 Install Yardstick (NSB Testing)
91 -------------------------------
92
93 Yardstick with NSB can be installed using ``nsb_setup.sh``.
94 The ``nsb_setup.sh`` script allows you to:
95
96 1. Install Yardstick in the specified mode: bare metal or container.
97    Refer to :doc:`04-installation`.
98 2. Install package dependencies on remote servers used as traffic generator or
99    sample VNF. Install DPDK, sample VNFs, TREX, collectd.
100    Add such servers to ``install-inventory.ini`` file to either
101    ``yardstick-standalone`` or ``yardstick-baremetal`` server groups.
102    It configures IOMMU, hugepages, open file limits, CPU isolation, etc.
103 3. Build a VM image, either nsb or normal. The nsb VM image is used to run
104    Yardstick sample VNF tests, like vFW, vACL, vCGNAPT, etc. The normal VM
105    image is used to run Yardstick ping tests in an OpenStack context.
106 4. Add the nsb or normal VM image to OpenStack together with the OpenStack variables.
107
108 First, configure the network proxy, either by setting the global environment
109 file or by exporting the environment variables.
110
111 Set the proxy in the global environment file (e.g. ``/etc/environment``)::
112
113     http_proxy='http://proxy.company.com:port'
114     https_proxy='http://proxy.company.com:port'
115
116 Alternatively, export the proxy in the current shell:

.. code-block:: console
117
118     export http_proxy='http://proxy.company.com:port'
119     export https_proxy='http://proxy.company.com:port'
120
121 Download the source code and check out the latest stable branch:
122
123 .. code-block:: console
124
125   git clone https://gerrit.opnfv.org/gerrit/yardstick
126   cd yardstick
127   # Switch to latest stable branch
128   git checkout stable/gambia
129
130 Modify the Yardstick installation inventory used by Ansible::
131
132   cat ./ansible/install-inventory.ini
133   [jumphost]
134   localhost ansible_connection=local
135
136   # section below is only due backward compatibility.
137   # it will be removed later
138   [yardstick:children]
139   jumphost
140
141   [yardstick-baremetal]
142   baremetal ansible_host=192.168.2.51 ansible_connection=ssh
143
144   [yardstick-standalone]
145   standalone ansible_host=192.168.2.52 ansible_connection=ssh
146
147   [all:vars]
148   # Uncomment credentials below if needed
149     ansible_user=root
150     ansible_ssh_pass=root
151   # ansible_ssh_private_key_file=/root/.ssh/id_rsa
152   # When IMG_PROPERTY is passed neither normal nor nsb set
153   # "path_to_vm=/path/to/image" to add it to OpenStack
154   # path_to_img=/tmp/workspace/yardstick-image.img
155
156   # List of CPUs to be isolated (not used by default)
157   # Grub line will be extended with:
158   # "isolcpus=<ISOL_CPUS> nohz=on nohz_full=<ISOL_CPUS> rcu_nocbs=1<ISOL_CPUS>"
159   # ISOL_CPUS=2-27,30-55 # physical cpu's for all NUMA nodes, four cpu's reserved
160
161 .. warning::
162
163    Before running ``nsb_setup.sh`` make sure python is installed on servers
164    added to ``yardstick-standalone`` or ``yardstick-baremetal`` groups.
165
166 .. note::
167
168    SSH access without password needs to be configured for all your nodes
169    defined in ``install-inventory.ini`` file.
170    If you want to use password authentication you need to install ``sshpass``::
171
172      sudo -EH apt-get install sshpass
173
174
175 .. note::
176
177    A VM image built by other means than Yardstick can be added to OpenStack.
178    Uncomment and set correct path to the VM image in the
179    ``install-inventory.ini`` file::
180
181      path_to_img=/tmp/workspace/yardstick-image.img
182
183
184 .. note::
185
186    CPU isolation can be applied to the remote servers, like:
187    ISOL_CPUS=2-27,30-55. Uncomment and modify accordingly in
188    ``install-inventory.ini`` file.
189
190 By default ``nsb_setup.sh`` pulls the Yardstick image based on Ubuntu 16.04
191 from Docker Hub, starts the container, builds the NSB VM image based on
192 Ubuntu 16.04 and installs packages on the servers given in the
193 ``yardstick-standalone`` and ``yardstick-baremetal`` host groups.
194
195 To pull the Yardstick image based on Ubuntu 18.04, run::
196
197     ./nsb_setup.sh -i opnfv/yardstick-ubuntu-18.04:latest
198
199 To change the default behavior, modify the parameters passed to ``install.yaml``
200 in the ``nsb_setup.sh`` file.
201
202 Refer to chapter :doc:`04-installation` for more details on ``install.yaml``
203 parameters.
204
205 To execute an installation for a **BareMetal** or a **Standalone context**::
206
207     ./nsb_setup.sh
208
209 To execute an installation for an **OpenStack** context::
210
211     ./nsb_setup.sh <path to admin-openrc.sh>
212
213 .. note::
214
215    Yardstick may not be operational after a distribution kernel update if it
216    was installed before the update. Run ``nsb_setup.sh`` again to resolve this.
217
218 .. warning::
219
220    The Yardstick VM image (NSB or normal) cannot be built inside a VM.
221
222 .. warning::
223
224    The ``nsb_setup.sh`` script configures hugepages, CPU isolation and IOMMU
225    via the GRUB configuration. A reboot of the servers in the
226    ``yardstick-standalone`` or ``yardstick-baremetal`` groups of the
227    ``install-inventory.ini`` file is required to apply those changes.
228
229 The above commands will set up Docker with the latest Yardstick code. To
230 enter the container, execute::
231
232   docker exec -it yardstick bash
233
234 .. note::
235
236    It may be necessary to configure the tty in the Docker container to extend
237    the command line character length, for example::
238
239      stty rows 58 cols 234
240
241 The ``nsb_setup.sh`` script also automatically downloads all the packages needed
242 for the NSB testing setup. Refer to chapter :doc:`04-installation` for more on Docker.
243
244 **Install Yardstick using Docker (recommended)**
245
246 Bare Metal context example
247 ^^^^^^^^^^^^^^^^^^^^^^^^^^
248
249 Let's assume there are three servers acting as TG, sample VNF DUT and jump host.
250
251 Perform the following steps to install NSB:
252
253 1. Clone Yardstick repo to jump host.
254 2. Add the TG and DUT servers to the ``yardstick-baremetal`` group in the
255    ``install-inventory.ini`` file to install NSB and dependencies (see the
256    example snippet after this list). Install python on the servers.
257 3. Start the deployment using the Docker image based on Ubuntu 16.04:
258
259 .. code-block:: console
260
261    ./nsb_setup.sh
262
263 4. Reboot the bare metal servers.
264 5. Enter the yardstick container, modify the pod YAML file and run tests.
265
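As an illustration of step 2, the relevant part of ``install-inventory.ini``
could look like the following (host names and IP addresses are placeholders and
must be adapted to your environment)::

  [yardstick-baremetal]
  trafficgen ansible_host=<TG server IP> ansible_connection=ssh
  vnf ansible_host=<DUT server IP> ansible_connection=ssh
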
266 Standalone context example for Ubuntu 18
267 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
268
269 Let's assume there are three servers acting as TG, sample VNF DUT and jump host.
270 Ubuntu 18 is installed on all servers.
271
272 Perform the following steps to install NSB:
273
274 1. Clone Yardstick repo to jump host.
275 2. Add the TG server to the ``yardstick-baremetal`` group in the
276    ``install-inventory.ini`` file to install NSB and dependencies.
277    Add the server where the VM with the sample VNF will be deployed to the
278    ``yardstick-standalone`` group in the ``install-inventory.ini`` file
279    (see the example snippet after this list). The target VM image named
280    ``yardstick-nsb-image.img`` will be placed in ``/var/lib/libvirt/images/``.
281    Install python on the servers.
282 3. Modify ``nsb_setup.sh`` on the jump host:
283
284 .. code-block:: console
285
286    ansible-playbook \
287    -e IMAGE_PROPERTY='nsb' \
288    -e OS_RELEASE='bionic' \
289    -e INSTALLATION_MODE='container_pull' \
290    -e YARD_IMAGE_ARCH='amd64' ${extra_args} \
291    -i install-inventory.ini install.yaml
292
293 4. Start deployment with Yardstick docker images based on Ubuntu 18:
294
295 .. code-block:: console
296
297    ./nsb_setup.sh -i opnfv/yardstick-ubuntu-18.04:latest -o <openrc_file>
298
299 5. Reboot the servers.
300 6. Enter the yardstick container, modify the pod YAML file and run tests.
301
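As an illustration of step 2, the relevant part of ``install-inventory.ini``
could look like the following (host names and IP addresses are placeholders and
must be adapted to your environment)::

  [yardstick-baremetal]
  trafficgen ansible_host=<TG server IP> ansible_connection=ssh

  [yardstick-standalone]
  standalone ansible_host=<VNF host server IP> ansible_connection=ssh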
302
303 System Topology
304 ---------------
305
306 .. code-block:: console
307
308   +----------+              +----------+
309   |          |              |          |
310   |          | (0)----->(0) |          |
311   |    TG1   |              |    DUT   |
312   |          |              |          |
313   |          | (1)<-----(1) |          |
314   +----------+              +----------+
315   trafficgen_0                   vnf
316
317
318 Environment parameters and credentials
319 --------------------------------------
320
321 Configure yardstick.conf
322 ^^^^^^^^^^^^^^^^^^^^^^^^
323
324 If you did not run ``yardstick env influxdb`` inside the container to generate
325 ``yardstick.conf``, then create the config file manually (run inside the
326 container)::
327
328     cp ./etc/yardstick/yardstick.conf.sample /etc/yardstick/yardstick.conf
329     vi /etc/yardstick/yardstick.conf
330
331 Add ``trex_path``, ``trex_client_lib`` and ``bin_path`` to the ``nsb``
332 section::
333
334   [DEFAULT]
335   debug = True
336   dispatcher = influxdb
337
338   [dispatcher_influxdb]
339   timeout = 5
340   target = http://{YOUR_IP_HERE}:8086
341   db_name = yardstick
342   username = root
343   password = root
344
345   [nsb]
346   trex_path=/opt/nsb_bin/trex/scripts
347   bin_path=/opt/nsb_bin
348   trex_client_lib=/opt/nsb_bin/trex_client/stl
349
350 Run Yardstick - Network Service Testcases
351 -----------------------------------------
352
353 NS testing - using yardstick CLI
354 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
355
356   See :doc:`04-installation`
357
358 Connect to the Yardstick container::
359
360   docker exec -it yardstick /bin/bash
361
362 If you're running ``heat`` testcases and ``nsb_setup.sh`` was not used::

363   source /etc/yardstick/openstack.creds
364
365 In addition to the above, you need to set the ``EXTERNAL_NETWORK`` for
366 OpenStack::
367
368   export EXTERNAL_NETWORK="<openstack public network>"
369
370 Finally, you should be able to run the testcase::
371
372   yardstick --debug task start ./yardstick/samples/vnf_samples/nsut/<vnf>/<test case>
373
374 Network Service Benchmarking - Bare-Metal
375 -----------------------------------------
376
377 Bare-Metal Config pod.yaml describing Topology
378 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
379
380 Bare-Metal 2-Node setup
381 +++++++++++++++++++++++
382 .. code-block:: console
383
384   +----------+              +----------+
385   |          |              |          |
386   |          | (0)----->(0) |          |
387   |    TG1   |              |    DUT   |
388   |          |              |          |
389   |          | (n)<-----(n) |          |
390   +----------+              +----------+
391   trafficgen_0                   vnf
392
393 Bare-Metal 3-Node setup - Correlated Traffic
394 ++++++++++++++++++++++++++++++++++++++++++++
395 .. code-block:: console
396
397   +----------+              +----------+            +------------+
398   |          |              |          |            |            |
399   |          |              |          |            |            |
400   |          | (0)----->(0) |          |            |    UDP     |
401   |    TG1   |              |    DUT   |            |   Replay   |
402   |          |              |          |            |            |
403   |          |              |          |(1)<---->(0)|            |
404   +----------+              +----------+            +------------+
405   trafficgen_0                   vnf                 trafficgen_1
406
407
408 Bare-Metal Config pod.yaml
409 ^^^^^^^^^^^^^^^^^^^^^^^^^^
410 Before executing Yardstick test cases, make sure that ``pod.yaml`` reflects the
411 topology and update all the required fields::
412
413     cp ./etc/yardstick/nodes/pod.yaml.nsb.sample /etc/yardstick/nodes/pod.yaml
414
415 .. code-block:: YAML
416
417     nodes:
418     -
419         name: trafficgen_0
420         role: TrafficGen
421         ip: 1.1.1.1
422         user: root
423         password: r00t
424         interfaces:
425             xe0:  # logical name from topology.yaml and vnfd.yaml
426                 vpci:      "0000:07:00.0"
427                 driver:    i40e # default kernel driver
428                 dpdk_port_num: 0
429                 local_ip: "152.16.100.20"
430                 netmask:   "255.255.255.0"
431                 local_mac: "00:00:00:00:00:01"
432             xe1:  # logical name from topology.yaml and vnfd.yaml
433                 vpci:      "0000:07:00.1"
434                 driver:    i40e # default kernel driver
435                 dpdk_port_num: 1
436                 local_ip: "152.16.40.20"
437                 netmask:   "255.255.255.0"
438                 local_mac: "00:00:00:00:00:02"
439
440     -
441         name: vnf
442         role: vnf
443         ip: 1.1.1.2
444         user: root
445         password: r00t
446         host: 1.1.1.2 #BM - host == ip, virtualized env - Host - compute node
447         interfaces:
448             xe0:  # logical name from topology.yaml and vnfd.yaml
449                 vpci:      "0000:07:00.0"
450                 driver:    i40e # default kernel driver
451                 dpdk_port_num: 0
452                 local_ip: "152.16.100.19"
453                 netmask:   "255.255.255.0"
454                 local_mac: "00:00:00:00:00:03"
455
456             xe1:  # logical name from topology.yaml and vnfd.yaml
457                 vpci:      "0000:07:00.1"
458                 driver:    i40e # default kernel driver
459                 dpdk_port_num: 1
460                 local_ip: "152.16.40.19"
461                 netmask:   "255.255.255.0"
462                 local_mac: "00:00:00:00:00:04"
463         routing_table:
464         - network: "152.16.100.20"
465           netmask: "255.255.255.0"
466           gateway: "152.16.100.20"
467           if: "xe0"
468         - network: "152.16.40.20"
469           netmask: "255.255.255.0"
470           gateway: "152.16.40.20"
471           if: "xe1"
472         nd_route_tbl:
473         - network: "0064:ff9b:0:0:0:0:9810:6414"
474           netmask: "112"
475           gateway: "0064:ff9b:0:0:0:0:9810:6414"
476           if: "xe0"
477         - network: "0064:ff9b:0:0:0:0:9810:2814"
478           netmask: "112"
479           gateway: "0064:ff9b:0:0:0:0:9810:2814"
480           if: "xe1"
481
482
483 Standalone Virtualization
484 -------------------------
485
486 The VM can be deployed manually or by Yardstick. If the parameter *vm_deploy*
487 is set to ``True``, the VM will be deployed by Yardstick; otherwise it should
488 be deployed manually. Test case example, context section::
489
490     contexts:
491      ...
492      vm_deploy: True
493
494
495 SR-IOV
496 ^^^^^^
497
498 SR-IOV Pre-requisites
499 +++++++++++++++++++++
500
501 On the host where the VM is created:
502  a) Create and configure a bridge named ``br-int`` for the VM to connect to
503     the external network. Currently this can be done using a VXLAN tunnel.
504
505     Execute the following on the host where the VM is created::
506
507       ip link add type vxlan remote <Jumphost IP> local <DUT IP> id <ID: 10> dstport 4789
508       brctl addbr br-int
509       brctl addif br-int vxlan0
510       ip link set dev vxlan0 up
511       ip addr add <IP#1, like: 172.20.2.1/24> dev br-int
512       ip link set dev br-int up
513
514   .. note:: You may need to add extra rules to iptables to forward traffic.
515
516   .. code-block:: console
517
518     iptables -A FORWARD -i br-int -s <network ip address>/<netmask> -j ACCEPT
519     iptables -A FORWARD -o br-int -d <network ip address>/<netmask> -j ACCEPT
520
521   Execute the following on a jump host:
522
523   .. code-block:: console
524
525       ip link add type vxlan remote <DUT IP> local <Jumphost IP> id <ID: 10> dstport 4789
526       ip addr add <IP#2, like: 172.20.2.2/24> dev vxlan0
527       ip link set dev vxlan0 up
528
529   .. note:: The host and the jump host are different bare metal servers.
530
531  b) Modify test case management CIDR.
532     IP addresses IP#1, IP#2 and CIDR must be in the same network.
533
534   .. code-block:: YAML
535
536     servers:
537       vnf_0:
538         network_ports:
539           mgmt:
540             cidr: '1.1.1.7/24'
541
542  c) Build the guest image for the VNF to run.
543     Most of the sample test cases in Yardstick use a guest image called
544     ``yardstick-nsb-image`` which deviates from an Ubuntu Cloud Server image.
545     Yardstick has a tool for building this custom image with SampleVNF.
546     It is necessary to have ``sudo`` rights to use this tool.
547
548    You may also need to install several additional packages to use this tool,
549    by following the commands below::
550
551       sudo apt-get update && sudo apt-get install -y qemu-utils kpartx
552
553    This image can be built using the following command in the directory where
554    Yardstick is installed::
555
556       export YARD_IMG_ARCH='amd64'
557       sudo echo "Defaults env_keep += \'YARD_IMG_ARCH\'" >> /etc/sudoers
558
559    For instructions on generating a cloud image using Ansible, and for more
560    details, refer to chapter :doc:`04-installation`.
563
564    .. note:: The VM should be built with a static IP and be accessible from
565       the Yardstick host.
566
567
568 SR-IOV Config pod.yaml describing Topology
569 ++++++++++++++++++++++++++++++++++++++++++
570
571 SR-IOV 2-Node setup
572 +++++++++++++++++++
573 .. code-block:: console
574
575                                +--------------------+
576                                |                    |
577                                |                    |
578                                |        DUT         |
579                                |       (VNF)        |
580                                |                    |
581                                +--------------------+
582                                | VF NIC |  | VF NIC |
583                                +--------+  +--------+
584                                      ^          ^
585                                      |          |
586                                      |          |
587   +----------+               +-------------------------+
588   |          |               |       ^          ^      |
589   |          |               |       |          |      |
590   |          | (0)<----->(0) | ------    SUT    |      |
591   |    TG1   |               |                  |      |
592   |          | (n)<----->(n) | -----------------       |
593   |          |               |                         |
594   +----------+               +-------------------------+
595   trafficgen_0                          host
596
597
598
599 SR-IOV 3-Node setup - Correlated Traffic
600 ++++++++++++++++++++++++++++++++++++++++
601 .. code-block:: console
602
603                              +--------------------+
604                              |                    |
605                              |                    |
606                              |        DUT         |
607                              |       (VNF)        |
608                              |                    |
609                              +--------------------+
610                              | VF NIC |  | VF NIC |
611                              +--------+  +--------+
612                                    ^          ^
613                                    |          |
614                                    |          |
615   +----------+               +---------------------+            +--------------+
616   |          |               |     ^          ^    |            |              |
617   |          |               |     |          |    |            |              |
618   |          | (0)<----->(0) |-----           |    |            |     TG2      |
619   |    TG1   |               |         SUT    |    |            | (UDP Replay) |
620   |          |               |                |    |            |              |
621   |          | (n)<----->(n) |                -----| (n)<-->(n) |              |
622   +----------+               +---------------------+            +--------------+
623   trafficgen_0                          host                      trafficgen_1
624
625 Before executing Yardstick test cases, make sure that ``pod.yaml`` reflects the
626 topology and update all the required fields.
627
628 .. code-block:: console
629
630     cp ./etc/yardstick/nodes/standalone/trex_bm.yaml.sample /etc/yardstick/nodes/standalone/pod_trex.yaml
631     cp ./etc/yardstick/nodes/standalone/host_sriov.yaml /etc/yardstick/nodes/standalone/host_sriov.yaml
632
633 .. note:: Update all the required fields like ip, user, password, pcis, etc...
634
635 SR-IOV Config pod_trex.yaml
636 +++++++++++++++++++++++++++
637
638 .. code-block:: YAML
639
640     nodes:
641     -
642         name: trafficgen_0
643         role: TrafficGen
644         ip: 1.1.1.1
645         user: root
646         password: r00t
647         key_filename: /root/.ssh/id_rsa
648         interfaces:
649             xe0:  # logical name from topology.yaml and vnfd.yaml
650                 vpci:      "0000:07:00.0"
651                 driver:    i40e # default kernel driver
652                 dpdk_port_num: 0
653                 local_ip: "152.16.100.20"
654                 netmask:   "255.255.255.0"
655                 local_mac: "00:00:00:00:00:01"
656             xe1:  # logical name from topology.yaml and vnfd.yaml
657                 vpci:      "0000:07:00.1"
658                 driver:    i40e # default kernel driver
659                 dpdk_port_num: 1
660                 local_ip: "152.16.40.20"
661                 netmask:   "255.255.255.0"
662                 local_mac: "00:00:00:00:00:02"
663
664 SR-IOV Config host_sriov.yaml
665 +++++++++++++++++++++++++++++
666
667 .. code-block:: YAML
668
669     nodes:
670     -
671        name: sriov
672        role: Sriov
673        ip: 192.168.100.101
674        user: ""
675        password: ""
676
677 SR-IOV testcase update:
678 ``./samples/vnf_samples/nsut/vfw/tc_sriov_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``
679
680 Update contexts section
681 '''''''''''''''''''''''
682
683 .. code-block:: YAML
684
685   contexts:
686    - name: yardstick
687      type: Node
688      file: /etc/yardstick/nodes/standalone/pod_trex.yaml
689    - type: StandaloneSriov
690      file: /etc/yardstick/nodes/standalone/host_sriov.yaml
691      name: yardstick
692      vm_deploy: True
693      flavor:
694        images: "/var/lib/libvirt/images/ubuntu.qcow2"
695        ram: 4096
696        extra_specs:
697          hw:cpu_sockets: 1
698          hw:cpu_cores: 6
699          hw:cpu_threads: 2
700        user: "" # update VM username
701        password: "" # update password
702      servers:
703        vnf_0:
704          network_ports:
705            mgmt:
706              cidr: '1.1.1.61/24'  # Update VM IP address, if static, <ip>/<mask> or if dynamic, <start of ip>/<mask>
707            xe0:
708              - uplink_0
709            xe1:
710              - downlink_0
711      networks:
712        uplink_0:
713          phy_port: "0000:05:00.0"
714          vpci: "0000:00:07.0"
715          cidr: '152.16.100.10/24'
716          gateway_ip: '152.16.100.20'
717        downlink_0:
718          phy_port: "0000:05:00.1"
719          vpci: "0000:00:08.0"
720          cidr: '152.16.40.10/24'
721          gateway_ip: '152.16.100.20'
722
723
724 SRIOV configuration options
725 +++++++++++++++++++++++++++
726
727 The only configuration option available for SR-IOV is *vpci*. It is used as the
728 base address for the VFs that are created during the SR-IOV test case execution.
729
730   .. code-block:: yaml+jinja
731
732     networks:
733       uplink_0:
734         phy_port: "0000:05:00.0"
735         vpci: "0000:00:07.0"
736         cidr: '152.16.100.10/24'
737         gateway_ip: '152.16.100.20'
738       downlink_0:
739         phy_port: "0000:05:00.1"
740         vpci: "0000:00:08.0"
741         cidr: '152.16.40.10/24'
742         gateway_ip: '152.16.100.20'
743
744 .. _`VM image properties label`:
745
746 VM image properties
747 '''''''''''''''''''
748
749 VM image properties example under *flavor* section:
750
751   .. code-block:: console
752
753       flavor:
754         images: <path>
755         ram: 8192
756         extra_specs:
757            machine_type: 'pc-i440fx-xenial'
758            hw:cpu_sockets: 1
759            hw:cpu_cores: 6
760            hw:cpu_threads: 2
761            hw_socket: 0
762            cputune: |
763              <cputune>
764                <vcpupin vcpu="0" cpuset="7"/>
765                <vcpupin vcpu="1" cpuset="8"/>
766                ...
767                <vcpupin vcpu="11" cpuset="18"/>
768                <emulatorpin cpuset="11"/>
769              </cputune>
770         user: ""
771         password: ""
772
773 VM image properties description:
774
775   +-------------------------+-------------------------------------------------+
776   | Parameters              | Detail                                          |
777   +=========================+=================================================+
778   | images                  || Path to the VM image generated by              |
779   |                         |  ``nsb_setup.sh``                               |
780   |                         || Default path is ``/var/lib/libvirt/images/``   |
781   |                         || Default file name ``yardstick-nsb-image.img``  |
782   |                         |  or ``yardstick-image.img``                     |
783   +-------------------------+-------------------------------------------------+
784   | ram                     || Amount of RAM to be used for VM                |
785   |                         || Default is 4096 MB                             |
786   +-------------------------+-------------------------------------------------+
787   | hw:cpu_sockets          || Number of sockets provided to the guest VM     |
788   |                         || Default is 1                                   |
789   +-------------------------+-------------------------------------------------+
790   | hw:cpu_cores            || Number of cores provided to the guest VM       |
791   |                         || Default is 2                                   |
792   +-------------------------+-------------------------------------------------+
793   | hw:cpu_threads          || Number of threads provided to the guest VM     |
794   |                         || Default is 2                                   |
795   +-------------------------+-------------------------------------------------+
796   | hw_socket               || Generate vcpu cpuset from given HW socket      |
797   |                         || Default is 0                                   |
798   +-------------------------+-------------------------------------------------+
799   | cputune                 || Maps virtual cpu with logical cpu              |
800   +-------------------------+-------------------------------------------------+
801   | machine_type            || Machine type to be emulated in VM              |
802   |                         || Default is 'pc-i440fx-xenial'                  |
803   +-------------------------+-------------------------------------------------+
804   | user                    || User name to access the VM                     |
805   |                         || Default value is 'root'                        |
806   +-------------------------+-------------------------------------------------+
807   | password                || Password to access the VM                      |
808   +-------------------------+-------------------------------------------------+
809
810
811 OVS-DPDK
812 ^^^^^^^^
813
814 OVS-DPDK Pre-requisites
815 +++++++++++++++++++++++
816
817 On Host, where VM is created:
818  a) Create and configure a bridge named ``br-int`` for VM to connect to
819     external network. Currently this can be done using VXLAN tunnel.
820
821     Execute the following on host, where VM is created:
822
823   .. code-block:: console
824
825       ip link add type vxlan remote <Jumphost IP> local <DUT IP> id <ID: 10> dstport 4789
826       brctl addbr br-int
827       brctl addif br-int vxlan0
828       ip link set dev vxlan0 up
829       ip addr add <IP#1, like: 172.20.2.1/24> dev br-int
830       ip link set dev br-int up
831
832   .. note:: You may need to add extra rules to iptables to forward traffic.
833
834   .. code-block:: console
835
836     iptables -A FORWARD -i br-int -s <network ip address>/<netmask> -j ACCEPT
837     iptables -A FORWARD -o br-int -d <network ip address>/<netmask> -j ACCEPT
838
839   Execute the following on a jump host:
840
841   .. code-block:: console
842
843       ip link add type vxlan remote <DUT IP> local <Jumphost IP> id <ID: 10> dstport 4789
844       ip addr add <IP#2, like: 172.20.2.2/24> dev vxlan0
845       ip link set dev vxlan0 up
846
847   .. note:: The host and the jump host are different bare metal servers.
848
849  b) Modify test case management CIDR.
850     IP addresses IP#1, IP#2 and CIDR must be in the same network.
851
852   .. code-block:: YAML
853
854     servers:
855       vnf_0:
856         network_ports:
857           mgmt:
858             cidr: '1.1.1.7/24'
859
860  c) Build the guest image for the VNF to run.
861     Most of the sample test cases in Yardstick use a guest image called
862     ``yardstick-nsb-image`` which deviates from an Ubuntu Cloud Server image.
863     Yardstick has a tool for building this custom image with SampleVNF.
864     It is necessary to have ``sudo`` rights to use this tool.
865
866    You may need to install several additional packages to use this tool, by
867    following the commands below::
868
869       sudo apt-get update && sudo apt-get install -y qemu-utils kpartx
870
871    This image can be built using the following command in the directory where
872    Yardstick is installed::
873
874       export YARD_IMG_ARCH='amd64'
875       sudo echo "Defaults env_keep += \'YARD_IMG_ARCH\'" >> /etc/sudoers
876       sudo tools/yardstick-img-dpdk-modify tools/ubuntu-server-cloudimg-samplevnf-modify.sh
877
878    For more details refer to chapter :doc:`04-installation`.
879
880    .. note:: The VM should be built with a static IP and should be accessible
881       from the Yardstick host.
882
883 3. OVS & DPDK version.
884    * OVS 2.7 and DPDK 16.11.1 and above versions are supported
885
886 4. Setup `OVS-DPDK`_ on host.
887
888
889 OVS-DPDK Config pod.yaml describing Topology
890 ++++++++++++++++++++++++++++++++++++++++++++
891
892 OVS-DPDK 2-Node setup
893 +++++++++++++++++++++
894
895 .. code-block:: console
896
897                                +--------------------+
898                                |                    |
899                                |                    |
900                                |        DUT         |
901                                |       (VNF)        |
902                                |                    |
903                                +--------------------+
904                                | virtio |  | virtio |
905                                +--------+  +--------+
906                                     ^          ^
907                                     |          |
908                                     |          |
909                                +--------+  +--------+
910                                | vHOST0 |  | vHOST1 |
911   +----------+               +-------------------------+
912   |          |               |       ^          ^      |
913   |          |               |       |          |      |
914   |          | (0)<----->(0) | ------           |      |
915   |    TG1   |               |          SUT     |      |
916   |          |               |       (ovs-dpdk) |      |
917   |          | (n)<----->(n) |------------------       |
918   +----------+               +-------------------------+
919   trafficgen_0                          host
920
921
922 OVS-DPDK 3-Node setup - Correlated Traffic
923 ++++++++++++++++++++++++++++++++++++++++++
924
925 .. code-block:: console
926
927                                +--------------------+
928                                |                    |
929                                |                    |
930                                |        DUT         |
931                                |       (VNF)        |
932                                |                    |
933                                +--------------------+
934                                | virtio |  | virtio |
935                                +--------+  +--------+
936                                     ^          ^
937                                     |          |
938                                     |          |
939                                +--------+  +--------+
940                                | vHOST0 |  | vHOST1 |
941   +----------+               +-------------------------+          +------------+
942   |          |               |       ^          ^      |          |            |
943   |          |               |       |          |      |          |            |
944   |          | (0)<----->(0) | ------           |      |          |    TG2     |
945   |    TG1   |               |          SUT     |      |          |(UDP Replay)|
946   |          |               |      (ovs-dpdk)  |      |          |            |
947   |          | (n)<----->(n) |                  ------ |(n)<-->(n)|            |
948   +----------+               +-------------------------+          +------------+
949   trafficgen_0                          host                       trafficgen_1
950
951
952 Before executing Yardstick test cases, make sure that the ``pod.yaml`` reflects
953 the topology and update all the required fields::
954
955   cp ./etc/yardstick/nodes/standalone/trex_bm.yaml.sample /etc/yardstick/nodes/standalone/pod_trex.yaml
956   cp ./etc/yardstick/nodes/standalone/host_ovs.yaml /etc/yardstick/nodes/standalone/host_ovs.yaml
957
958 .. note:: Update all the required fields like ip, user, password, pcis, etc...
959
960 OVS-DPDK Config pod_trex.yaml
961 +++++++++++++++++++++++++++++
962
963 .. code-block:: YAML
964
965     nodes:
966     -
967       name: trafficgen_0
968       role: TrafficGen
969       ip: 1.1.1.1
970       user: root
971       password: r00t
972       interfaces:
973           xe0:  # logical name from topology.yaml and vnfd.yaml
974               vpci:      "0000:07:00.0"
975               driver:    i40e # default kernel driver
976               dpdk_port_num: 0
977               local_ip: "152.16.100.20"
978               netmask:   "255.255.255.0"
979               local_mac: "00:00:00:00:00:01"
980           xe1:  # logical name from topology.yaml and vnfd.yaml
981               vpci:      "0000:07:00.1"
982               driver:    i40e # default kernel driver
983               dpdk_port_num: 1
984               local_ip: "152.16.40.20"
985               netmask:   "255.255.255.0"
986               local_mac: "00:00:00:00:00:02"
987
988 OVS-DPDK Config host_ovs.yaml
989 +++++++++++++++++++++++++++++
990
991 .. code-block:: YAML
992
993     nodes:
994     -
995        name: ovs_dpdk
996        role: OvsDpdk
997        ip: 192.168.100.101
998        user: ""
999        password: ""
1000
1001 ovs_dpdk testcase update:
1002 ``./samples/vnf_samples/nsut/vfw/tc_ovs_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``
1003
1004 Update contexts section
1005 '''''''''''''''''''''''
1006
1007 .. code-block:: YAML
1008
1009   contexts:
1010    - name: yardstick
1011      type: Node
1012      file: /etc/yardstick/nodes/standalone/pod_trex.yaml
1013    - type: StandaloneOvsDpdk
1014      name: yardstick
1015      file: /etc/yardstick/nodes/standalone/pod_ovs.yaml
1016      vm_deploy: True
1017      ovs_properties:
1018        version:
1019          ovs: 2.7.0
1020          dpdk: 16.11.1
1021        pmd_threads: 2
1022        ram:
1023          socket_0: 2048
1024          socket_1: 2048
1025        queues: 4
1026        vpath: "/usr/local"
1027
1028      flavor:
1029        images: "/var/lib/libvirt/images/ubuntu.qcow2"
1030        ram: 4096
1031        extra_specs:
1032          hw:cpu_sockets: 1
1033          hw:cpu_cores: 6
1034          hw:cpu_threads: 2
1035        user: "" # update VM username
1036        password: "" # update password
1037      servers:
1038        vnf_0:
1039          network_ports:
1040            mgmt:
1041              cidr: '1.1.1.61/24'  # Update VM IP address, if static, <ip>/<mask> or if dynamic, <start of ip>/<mask>
1042            xe0:
1043              - uplink_0
1044            xe1:
1045              - downlink_0
1046      networks:
1047        uplink_0:
1048          phy_port: "0000:05:00.0"
1049          vpci: "0000:00:07.0"
1050          cidr: '152.16.100.10/24'
1051          gateway_ip: '152.16.100.20'
1052        downlink_0:
1053          phy_port: "0000:05:00.1"
1054          vpci: "0000:00:08.0"
1055          cidr: '152.16.40.10/24'
1056          gateway_ip: '152.16.100.20'
1057
1058 OVS-DPDK configuration options
1059 ++++++++++++++++++++++++++++++
1060
1061 There are a number of configuration options available for the OVS-DPDK context
1062 in the test case. They are mostly used for performance tuning.
1063
1064 OVS-DPDK properties:
1065 ''''''''''''''''''''
1066
1067 OVS-DPDK properties example under *ovs_properties* section:
1068
1069   .. code-block:: console
1070
1071       ovs_properties:
1072         version:
1073           ovs: 2.8.1
1074           dpdk: 17.05.2
1075         pmd_threads: 4
1076         pmd_cpu_mask: "0x3c"
1077         ram:
1078          socket_0: 2048
1079          socket_1: 2048
1080         queues: 2
1081         vpath: "/usr/local"
1082         max_idle: 30000
1083         lcore_mask: 0x02
1084         dpdk_pmd-rxq-affinity:
1085           0: "0:2,1:2"
1086           1: "0:2,1:2"
1087           2: "0:3,1:3"
1088           3: "0:3,1:3"
1089         vhost_pmd-rxq-affinity:
1090           0: "0:3,1:3"
1091           1: "0:3,1:3"
1092           2: "0:4,1:4"
1093           3: "0:4,1:4"
1094
1095 OVS-DPDK properties description:
1096
1097   +-------------------------+-------------------------------------------------+
1098   | Parameters              | Detail                                          |
1099   +=========================+=================================================+
1100   | version                 || Version of OVS and DPDK to be installed        |
1101   |                         || There is a relation between OVS and DPDK       |
1102   |                         |  version which can be found at                  |
1103   |                         | `OVS-DPDK-versions`_                            |
1104   |                         || By default OVS: 2.6.0, DPDK: 16.07.2           |
1105   +-------------------------+-------------------------------------------------+
1106   | lcore_mask              || Core bitmask used during DPDK initialization   |
1107   |                         |  where the non-datapath OVS-DPDK threads such   |
1108   |                         |  as handler and revalidator threads run         |
1109   +-------------------------+-------------------------------------------------+
1110   | pmd_cpu_mask            || Core bitmask that sets which cores are used by |
1111   |                         || OVS-DPDK for datapath packet processing        |
1112   +-------------------------+-------------------------------------------------+
1113   | pmd_threads             || Number of PMD threads used by OVS-DPDK for     |
1114   |                         |  datapath                                       |
1115   |                         || This core mask is evaluated in Yardstick       |
1116   |                         || It will be used if pmd_cpu_mask is not given   |
1117   |                         || Default is 2                                   |
1118   +-------------------------+-------------------------------------------------+
1119   | ram                     || Amount of RAM to be used for each socket, MB   |
1120   |                         || Default is 2048 MB                             |
1121   +-------------------------+-------------------------------------------------+
1122   | queues                  || Number of RX queues used for DPDK physical     |
1123   |                         |  interface                                      |
1124   +-------------------------+-------------------------------------------------+
1125   | dpdk_pmd-rxq-affinity   || RX queue assignment to PMD threads for DPDK    |
1126   |                         || e.g.: <port number> : <queue-id>:<core-id>     |
1127   +-------------------------+-------------------------------------------------+
1128   | vhost_pmd-rxq-affinity  || RX queue assignment to PMD threads for vhost   |
1129   |                         || e.g.: <port number> : <queue-id>:<core-id>     |
1130   +-------------------------+-------------------------------------------------+
1131   | vpath                   || User path for openvswitch files                |
1132   |                         || Default is ``/usr/local``                      |
1133   +-------------------------+-------------------------------------------------+
1134   | max_idle                || The maximum time that idle flows will remain   |
1135   |                         |  cached in the datapath, ms                     |
1136   +-------------------------+-------------------------------------------------+
1137
1138
1139 VM image properties
1140 '''''''''''''''''''
1141
1142 VM image properties are same as for SRIOV :ref:`VM image properties label`.
1143
1144
1145 OpenStack with SR-IOV support
1146 -----------------------------
1147
1148 This section describes how to run a Sample VNF test case, using Heat context,
1149 with SR-IOV. It also covers how to install OpenStack in Ubuntu 16.04, using
1150 DevStack, with SR-IOV support.
1151
1152
1153 Single node OpenStack with external TG
1154 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1155
1156 .. code-block:: console
1157
1158                                  +----------------------------+
1159                                  |OpenStack(DevStack)         |
1160                                  |                            |
1161                                  |   +--------------------+   |
1162                                  |   |sample-VNF VM       |   |
1163                                  |   |                    |   |
1164                                  |   |        DUT         |   |
1165                                  |   |       (VNF)        |   |
1166                                  |   |                    |   |
1167                                  |   +--------+  +--------+   |
1168                                  |   | VF NIC |  | VF NIC |   |
1169                                  |   +-----+--+--+----+---+   |
1170                                  |         ^          ^       |
1171                                  |         |          |       |
1172   +----------+                   +---------+----------+-------+
1173   |          |                   |        VF0        VF1      |
1174   |          |                   |         ^          ^       |
1175   |          |                   |         |   SUT    |       |
1176   |    TG    | (PF0)<----->(PF0) +---------+          |       |
1177   |          |                   |                    |       |
1178   |          | (PF1)<----->(PF1) +--------------------+       |
1179   |          |                   |                            |
1180   +----------+                   +----------------------------+
1181   trafficgen_0                                 host
1182
1183
1184 Host pre-configuration
1185 ++++++++++++++++++++++
1186
1187 .. warning:: The following configuration requires sudo access to the system.
1188    Make sure that your user has this access.
1189
1190 Enable the Intel VT-d or AMD-Vi extension in the BIOS. Some system
1191 manufacturers disable this extension by default.
1192
1193 Activate the Intel VT-d or AMD-Vi extension in the kernel by modifying the GRUB
1194 config file ``/etc/default/grub``.
1195
1196 For the Intel platform::
1197
1198   ...
1199   GRUB_CMDLINE_LINUX_DEFAULT="intel_iommu=on"
1200   ...
1201
1202 For the AMD platform::
1203
1204   ...
1205   GRUB_CMDLINE_LINUX_DEFAULT="amd_iommu=on"
1206   ...
1207
1208 Update the grub configuration file and restart the system:
1209
1210 .. warning:: The following command will reboot the system.
1211
1212 .. code:: bash
1213
1214   sudo update-grub
1215   sudo reboot
1216
1217 Make sure the extension has been enabled::
1218
1219   sudo journalctl -b 0 | grep -e IOMMU -e DMAR
1220
1221   Feb 06 14:50:14 hostname kernel: ACPI: DMAR 0x000000006C406000 0001E0 (v01 INTEL  S2600WF  00000001 INTL 20091013)
1222   Feb 06 14:50:14 hostname kernel: DMAR: IOMMU enabled
1223   Feb 06 14:50:14 hostname kernel: DMAR: Host address width 46
1224   Feb 06 14:50:14 hostname kernel: DMAR: DRHD base: 0x000000d37fc000 flags: 0x0
1225   Feb 06 14:50:14 hostname kernel: DMAR: dmar0: reg_base_addr d37fc000 ver 1:0 cap 8d2078c106f0466 ecap f020de
1226   Feb 06 14:50:14 hostname kernel: DMAR: DRHD base: 0x000000e0ffc000 flags: 0x0
1227   Feb 06 14:50:14 hostname kernel: DMAR: dmar1: reg_base_addr e0ffc000 ver 1:0 cap 8d2078c106f0466 ecap f020de
1228   Feb 06 14:50:14 hostname kernel: DMAR: DRHD base: 0x000000ee7fc000 flags: 0x0
1229
1230 .. TODO: Refer to the yardstick installation guide for proxy set up
1231
1232 Setup system proxy (if needed). Add the following configuration into the
1233 ``/etc/environment`` file:
1234
1235 .. note:: The proxy server name/port and IPs should be changed according to
1236   actual/current proxy configuration in the lab.
1237
1238 .. code:: bash
1239
1240   export http_proxy=http://proxy.company.com:port
1241   export https_proxy=http://proxy.company.com:port
1242   export ftp_proxy=http://proxy.company.com:port
1243   export no_proxy=localhost,127.0.0.1,company.com,<IP-OF-HOST1>,<IP-OF-HOST2>,...
1244   export NO_PROXY=localhost,127.0.0.1,company.com,<IP-OF-HOST1>,<IP-OF-HOST2>,...
1245
1246 Upgrade the system:
1247
1248 .. code:: bash
1249
1250   sudo -EH apt-get update
1251   sudo -EH apt-get upgrade
1252   sudo -EH apt-get dist-upgrade
1253
1254 Install dependencies needed for DevStack
1255
1256 .. code:: bash
1257
1258   sudo -EH apt-get install python python-dev python-pip
1259
1260 Setup SR-IOV ports on the host:
1261
1262 .. note:: The ``enp24s0f0``, ``enp24s0f1`` are physical function (PF) interfaces
1263   on a host and ``enp24s0f3`` is a public interface used in OpenStack, so the
1264   interface names should be changed according to the HW environment used for
1265   testing.
1266
1267 .. code:: bash
1268
1269   sudo ip link set dev enp24s0f0 up
1270   sudo ip link set dev enp24s0f1 up
1271   sudo ip link set dev enp24s0f3 up
1272
1273   # Create VFs on PF
1274   echo 2 | sudo tee /sys/class/net/enp24s0f0/device/sriov_numvfs
1275   echo 2 | sudo tee /sys/class/net/enp24s0f1/device/sriov_numvfs
1276
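As an optional sanity check (not part of the original procedure), the created
VFs can be verified before proceeding:

.. code:: bash

  lspci | grep -i "Virtual Function"
  ip link show enp24s0f0
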
1277
1278 DevStack installation
1279 +++++++++++++++++++++
1280
1281 If you want to try out NSB, but don't have OpenStack set up, you can use
1282 `Devstack`_ to install OpenStack on a host. Please note that the
1283 ``stable/pike`` branch of the devstack repo should be used during the
1284 installation. The required ``local.conf`` configuration file is described below.
1285
1286 DevStack configuration file:
1287
1288 .. note:: Update the devstack configuration file by replacing angle brackets
1289   with a short description inside.
1290
1291 .. note:: Use ``lspci | grep Ether`` & ``lspci -n | grep <PCI ADDRESS>``
1292   commands to get device and vendor id of the virtual function (VF).
1293
1294 .. literalinclude:: code/single-devstack-local.conf
1295    :language: console
1296
1297 Start the devstack installation on a host.
1298
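A minimal sketch of the installation itself, assuming the ``stable/pike``
devstack branch and the ``local.conf`` described above (the clone URL is an
example)::

  git clone https://opendev.org/openstack/devstack -b stable/pike
  cd devstack
  # place the prepared local.conf in this directory, then:
  ./stack.sh
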
1299 TG host configuration
1300 +++++++++++++++++++++
1301
1302 Yardstick automatically installs and configures the TRex traffic generator on
1303 the TG host based on the provided POD file (see below). Nevertheless, it is
1304 recommended to check the compatibility of the NIC installed on the TG server
1305 with the TRex software using the `manual <https://trex-tgn.cisco.com/trex/doc/trex_manual.html>`_.
1306
1307 Run the Sample VNF test case
1308 ++++++++++++++++++++++++++++
1309
1310 There is an example of Sample VNF test case ready to be executed in an
1311 OpenStack environment with SR-IOV support: ``samples/vnf_samples/nsut/vfw/
1312 tc_heat_sriov_external_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``.
1313
1314 Install Yardstick using `Install Yardstick (NSB Testing)`_ steps for OpenStack
1315 context.
1316
1317 Create pod file for TG in the yardstick repo folder located in the yardstick
1318 container:
1319
1320 .. note:: The ``ip``, ``user``, ``password`` and ``vpci`` fields should be changed
1321   according to the HW environment used for the testing. Use the ``lshw -c network -businfo``
1322   command to get the PF PCI address for the ``vpci`` field.
1323
1324 .. literalinclude:: code/single-yardstick-pod.conf
1325    :language: console
1326
1327 Run the Sample vFW RFC2544 SR-IOV TC (``samples/vnf_samples/nsut/vfw/
1328 tc_heat_sriov_external_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``) in the heat
1329 context using steps described in `NS testing - using yardstick CLI`_ section.
1330
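For example, from inside the Yardstick container the test case could be started
with the usual CLI invocation (a sketch; additional ``--task-args`` may be
required depending on the deployment)::

  yardstick -d task start \
    samples/vnf_samples/nsut/vfw/tc_heat_sriov_external_rfc2544_ipv4_1rule_1flow_64B_trex.yaml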
1331
1332 Multi node OpenStack TG and VNF setup (two nodes)
1333 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1334
1335 .. code-block:: console
1336
1337   +----------------------------+                   +----------------------------+
1338   |OpenStack(DevStack)         |                   |OpenStack(DevStack)         |
1339   |                            |                   |                            |
1340   |   +--------------------+   |                   |   +--------------------+   |
1341   |   |sample-VNF VM       |   |                   |   |sample-VNF VM       |   |
1342   |   |                    |   |                   |   |                    |   |
1343   |   |         TG         |   |                   |   |        DUT         |   |
1344   |   |    trafficgen_0    |   |                   |   |       (VNF)        |   |
1345   |   |                    |   |                   |   |                    |   |
1346   |   +--------+  +--------+   |                   |   +--------+  +--------+   |
1347   |   | VF NIC |  | VF NIC |   |                   |   | VF NIC |  | VF NIC |   |
1348   |   +----+---+--+----+---+   |                   |   +-----+--+--+----+---+   |
1349   |        ^           ^       |                   |         ^          ^       |
1350   |        |           |       |                   |         |          |       |
1351   +--------+-----------+-------+                   +---------+----------+-------+
1352   |       VF0         VF1      |                   |        VF0        VF1      |
1353   |        ^           ^       |                   |         ^          ^       |
1354   |        |    SUT2   |       |                   |         |   SUT1   |       |
1355   |        |           +-------+ (PF0)<----->(PF0) +---------+          |       |
1356   |        |                   |                   |                    |       |
1357   |        +-------------------+ (PF1)<----->(PF1) +--------------------+       |
1358   |                            |                   |                            |
1359   +----------------------------+                   +----------------------------+
1360            host2 (compute)                               host1 (controller)
1361
1362
1363 Controller/Compute pre-configuration
1364 ++++++++++++++++++++++++++++++++++++
1365
1366 Pre-configuration of the controller and compute hosts are the same as
1367 described in `Host pre-configuration`_ section.
1368
1369 DevStack configuration
1370 ++++++++++++++++++++++
1371
1372 A reference ``local.conf`` for deploying OpenStack in a multi-host environment
1373 using `Devstack`_ is shown in this section. The ``stable/pike`` branch of
1374 devstack repo should be used during the installation.
1375
1376 .. note:: Update the devstack configuration files by replacing angle brackets
1377   with a short description inside.
1378
1379 .. note:: Use ``lspci | grep Ether`` & ``lspci -n | grep <PCI ADDRESS>``
1380   commands to get device and vendor id of the virtual function (VF).
1381
1382 DevStack configuration file for controller host:
1383
1384 .. literalinclude:: code/multi-devstack-controller-local.conf
1385    :language: console
1386
1387 DevStack configuration file for compute host:
1388
1389 .. literalinclude:: code/multi-devstack-compute-local.conf
1390    :language: console
1391
1392 Start the devstack installation on the controller and compute hosts.
1393
1394 Run the sample vFW TC
1395 +++++++++++++++++++++
1396
1397 Install Yardstick using `Install Yardstick (NSB Testing)`_ steps for OpenStack
1398 context.
1399
1400 Run the sample vFW RFC2544 SR-IOV test case
1401 (``samples/vnf_samples/nsut/vfw/tc_heat_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``)
1402 in the heat context using steps described in
1403 `NS testing - using yardstick CLI`_ section and the following Yardstick command
1404 line arguments:
1405
1406 .. code:: bash
1407
1408   yardstick -d task start --task-args='{"provider": "sriov"}' \
1409   samples/vnf_samples/nsut/vfw/tc_heat_rfc2544_ipv4_1rule_1flow_64B_trex.yaml
1410
1411
1412 Enabling other Traffic generators
1413 ---------------------------------
1414
1415 IxLoad
1416 ^^^^^^
1417
1418 1. Software needed: IxLoadAPI ``<IxLoadTclApi version>Linux64.bin.tgz`` and
1419    ``<IxOS version>Linux64.bin.tar.gz`` (download from the Ixia support site).
1420    Install ``<IxLoadTclApi version>Linux64.bin.tgz`` and
1421    ``<IxOS version>Linux64.bin.tar.gz``.
1422    If the installation was not done inside the container, after installing
1423    the IXIA client, check ``/opt/ixia/ixload/<ver>/bin/ixloadpython`` and make
1424    sure you can run this command inside the yardstick container. Usually the
1425    user is required to copy or link ``/opt/ixia/python/<ver>/bin/ixiapython``
1426    to ``/usr/bin/ixiapython<ver>`` inside the container.
1427
1428 2. Update ``pod_ixia.yaml`` file with ixia details.
1429
1430   .. code-block:: console
1431
1432     cp ./etc/yardstick/nodes/pod.yaml.nsb.sample.ixia \
1433       /etc/yardstick/nodes/pod_ixia.yaml
1434
1435   Config ``pod_ixia.yaml``
1436
1437   .. literalinclude:: code/pod_ixia.yaml
1438      :language: console
1439
1440   For sriov/ovs_dpdk pod files, please refer to `Standalone Virtualization`_
1441   for ovs-dpdk/sriov configuration.
1442
1443 3. Start IxOS TCL Server (Install 'Ixia IxExplorer IxOS <version>')
1444    You will also need to configure the IxLoad machine to start the IXIA
1445    IxosTclServer. This can be started like so:
1446
1447    * Connect to the IxLoad machine using RDP
1448    * Go to:
1449      ``Start->Programs->Ixia->IxOS->IxOS 8.01-GA-Patch1->Ixia Tcl Server IxOS 8.01-GA-Patch1``
1450      or
1451      ``C:\Program Files (x86)\Ixia\IxOS\8.01-GA-Patch1\ixTclServer.exe``
1452
1453 4. Create a folder ``Results`` in c:\ and share the folder on the network.
1454
1455 5. Execute testcase in samplevnf folder e.g.
1456    ``./samples/vnf_samples/nsut/vfw/tc_baremetal_http_ixload_1b_Requests-65000_Concurrency.yaml``
1457
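For example, the test case from step 5 could be started from inside the
Yardstick container with the usual CLI invocation (a sketch, not an exhaustive
command line)::

  yardstick --debug task start \
    ./samples/vnf_samples/nsut/vfw/tc_baremetal_http_ixload_1b_Requests-65000_Concurrency.yaml
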
1458 IxNetwork
1459 ^^^^^^^^^
1460
1461 IxNetwork testcases use the IxNetwork API Python Bindings module, which is
1462 installed as part of the requirements of the project.
1463
1464 1. Update ``pod_ixia.yaml`` file with ixia details.
1465
1466   .. code-block:: console
1467
1468     cp ./etc/yardstick/nodes/pod.yaml.nsb.sample.ixia \
1469     /etc/yardstick/nodes/pod_ixia.yaml
1470
1471   Configure ``pod_ixia.yaml``
1472
1473   .. literalinclude:: code/pod_ixia.yaml
1474      :language: console
1475
1476   For sriov/ovs_dpdk pod files, please refer to the above
1477   `Standalone Virtualization`_ section for ovs-dpdk/sriov configuration.
1478
1479 2. Start IxNetwork TCL Server
1480    You will also need to configure the IxNetwork machine to start the IXIA
1481    IxNetworkTclServer. This can be started like so:
1482
1483     * Connect to the IxNetwork machine using RDP
1484     * Go to:
1485       ``Start->Programs->Ixia->IxNetwork->IxNetwork 7.21.893.14 GA->IxNetworkTclServer``
1486       (or ``IxNetworkApiServer``)
1487
1488 3. Execute testcase in samplevnf folder e.g.
1489    ``./samples/vnf_samples/nsut/vfw/tc_baremetal_rfc2544_ipv4_1rule_1flow_64B_ixia.yaml``
1490
1491 Spirent Landslide
1492 ^^^^^^^^^^^^^^^^^
1493
1494 In order to use Spirent Landslide for vEPC testcases, some dependencies have
1495 to be preinstalled and properly configured.
1496
1497 - Java
1498
1499     32-bit Java installation is required for the Spirent Landslide TCL API.
1500
1501     | ``$ sudo apt-get install openjdk-8-jdk:i386``
1502
1503     .. important::
1504       Make sure ``LD_LIBRARY_PATH`` is pointing to 32-bit JRE. For more details
1505       check the `Linux Troubleshooting <http://TAS_HOST_IP/tclapiinstall.html#trouble>`_
1506       section of installation instructions.
1507
1508 - LsApi (Tcl API module)
1509
1510     Follow Landslide documentation for detailed instructions on Linux
1511     installation of Tcl API and its dependencies
1512     ``http://TAS_HOST_IP/tclapiinstall.html``.
1513     For working with LsApi Python wrapper only steps 1-5 are required.
1514
1515     .. note:: After installation make sure your API home path is included in
1516       ``PYTHONPATH`` environment variable.
1517
1518     .. important::
1519       The current version of the LsApi module has an issue with reading
1520       ``LD_LIBRARY_PATH``. For the LsApi module to initialize correctly, the
1521       following lines (184-186) in lsapi.py
1522
1523     .. code-block:: python
1524
1525         ldpath = os.environ.get('LD_LIBRARY_PATH', '')
1526         if ldpath == '':
1527          environ['LD_LIBRARY_PATH'] = environ['LD_LIBRARY_PATH'] + ':' + ldpath
1528
1529     should be changed to:
1530
1531     .. code-block:: python
1532
1533         ldpath = os.environ.get('LD_LIBRARY_PATH', '')
1534         if not ldpath == '':
1535                environ['LD_LIBRARY_PATH'] = environ['LD_LIBRARY_PATH'] + ':' + ldpath
1536
1537 .. note:: The Spirent Landslide TCL software package needs to be updated in case
1538   the user upgrades to a new version of the Spirent Landslide software.