Merge "Update to support using external heat template"
[yardstick.git] / docs / testing / user / userguide / 13-nsb-installation.rst
1 .. This work is licensed under a Creative Commons Attribution 4.0 International
2 .. License.
3 .. http://creativecommons.org/licenses/by/4.0
4 .. (c) OPNFV, 2016-2018 Intel Corporation.
5
6 ..
7    Convention for heading levels in Yardstick documentation:
8
9    =======  Heading 0 (reserved for the title in a document)
10    -------  Heading 1
11    ^^^^^^^  Heading 2
12    +++++++  Heading 3
13    '''''''  Heading 4
14
15    Avoid deeper levels because they do not render well.
16
17
18 ================
19 NSB Installation
20 ================
21
22 .. _OVS-DPDK: http://docs.openvswitch.org/en/latest/intro/install/dpdk/
23 .. _devstack: https://docs.openstack.org/devstack/pike/
24 .. _OVS-DPDK-versions: http://docs.openvswitch.org/en/latest/faq/releases/
25
26 Abstract
27 --------
28
29 The steps needed to run Yardstick with NSB testing are:
30
31 * Install Yardstick (NSB Testing).
32 * Setup/reference ``pod.yaml`` describing Test topology.
33 * Create/reference the test configuration yaml file.
34 * Run the test case.
35
36 Prerequisites
37 -------------
38
39 Refer to :doc:`04-installation` for more information on Yardstick
40 prerequisites.
41
42 Several prerequisites are needed for Yardstick (VNF testing):
43
44   * Python Modules: pyzmq, pika.
45   * flex
46   * bison
47   * build-essential
48   * automake
49   * libtool
50   * librabbitmq-dev
51   * rabbitmq-server
52   * collectd
53   * intel-cmt-cat
54
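A quick sketch of installing these prerequisites manually on Ubuntu 16.04 is
shown below (package names are assumptions and may differ per distribution;
``nsb_setup.sh`` normally installs them for you)::

  # Python modules used by NSB
  pip install pyzmq pika
  # System packages
  sudo apt-get update
  sudo apt-get install -y flex bison build-essential automake libtool \
       librabbitmq-dev rabbitmq-server collectd intel-cmt-cat
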
55 Hardware & Software Ingredients
56 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
57
58 SUT requirements:
59
60    ======= ===================
61    Item    Description
62    ======= ===================
63    Memory  Min 20GB
64    NICs    2 x 10G
65    OS      Ubuntu 16.04.3 LTS
66    kernel  4.4.0-34-generic
67    DPDK    17.02
68    ======= ===================
69
70 Boot and BIOS settings:
71
72    ============= =================================================
73    Boot settings default_hugepagesz=1G hugepagesz=1G hugepages=16
74                  hugepagesz=2M hugepages=2048 isolcpus=1-11,22-33
75                  nohz_full=1-11,22-33 rcu_nocbs=1-11,22-33
76                  iommu=on iommu=pt intel_iommu=on
77                  Note: nohz_full and rcu_nocbs are used to disable
78                  Linux kernel interrupts on the isolated CPUs
79    BIOS          CPU Power and Performance Policy <Performance>
80                  CPU C-state Disabled
81                  CPU P-state Disabled
82                  Enhanced Intel(R) Speedstep(R) Tech Disabled
83                  Hyper-Threading Technology (If supported) Enabled
84                  Virtualization Technology Enabled
85                  Intel(R) VT for Direct I/O Enabled
86                  Coherency Enabled
87                  Turbo Boost Disabled
88    ============= =================================================
89
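For reference, the boot settings above are usually applied via the kernel
command line in ``/etc/default/grub``; a sketch (the CPU lists must be adapted
to your own core topology) could look like::

  GRUB_CMDLINE_LINUX_DEFAULT="default_hugepagesz=1G hugepagesz=1G hugepages=16 hugepagesz=2M hugepages=2048 isolcpus=1-11,22-33 nohz_full=1-11,22-33 rcu_nocbs=1-11,22-33 iommu=on iommu=pt intel_iommu=on"

followed by ``sudo update-grub`` and a reboot. Note that ``nsb_setup.sh`` can
apply equivalent settings on the remote servers for you.
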
90 Install Yardstick (NSB Testing)
91 -------------------------------
92
93 Yardstick with NSB can be installed using ``nsb_setup.sh``.
94 The ``nsb_setup.sh`` script allows you to:
95
96 1. Install Yardstick in specified mode: bare metal or container.
97    Refer to :doc:`04-installation`.
98 2. Install package dependencies on remote servers used as traffic generator or
99    sample VNF. Install DPDK, sample VNFs, TREX, collectd.
100    Add such servers to the ``install-inventory.ini`` file, to either the
101    ``yardstick-standalone`` or ``yardstick-baremetal`` server group. The
102    setup configures IOMMU, hugepages, open file limits, CPU isolation, etc.
103 3. Build the VM image, either nsb or normal. The nsb VM image is used to run
104    Yardstick sample VNF tests, like vFW, vACL, vCGNAPT, etc.
105    The normal VM image is used to run Yardstick ping tests in the OpenStack context.
106 4. Add the nsb or normal VM image to OpenStack together with OpenStack variables.
107
108 First, configure the network proxy, either by using the environment variables
109 or by setting the global environment file.
110
111 Set environment::
112
113     http_proxy='http://proxy.company.com:port'
114     https_proxy='http://proxy.company.com:port'
115
116 .. code-block:: console
117
118     export http_proxy='http://proxy.company.com:port'
119     export https_proxy='http://proxy.company.com:port'
120
121 Download the source code and check out the latest stable branch
122
123 .. code-block:: console
124
125   git clone https://gerrit.opnfv.org/gerrit/yardstick
126   cd yardstick
127   # Switch to latest stable branch
128   git checkout stable/gambia
129
130 Modify the Yardstick installation inventory used by Ansible::
131
132   cat ./ansible/install-inventory.ini
133   [jumphost]
134   localhost ansible_connection=local
135
136   # The section below is only for backward compatibility.
137   # It will be removed later.
138   [yardstick:children]
139   jumphost
140
141   [yardstick-baremetal]
142   baremetal ansible_host=192.168.2.51 ansible_connection=ssh
143
144   [yardstick-standalone]
145   standalone ansible_host=192.168.2.52 ansible_connection=ssh
146
147   [all:vars]
148   # Uncomment credentials below if needed
149     ansible_user=root
150     ansible_ssh_pass=root
151   # ansible_ssh_private_key_file=/root/.ssh/id_rsa
152   # When IMG_PROPERTY is set to neither normal nor nsb, set
153   # "path_to_img=/path/to/image" below to add that image to OpenStack
154   # path_to_img=/tmp/workspace/yardstick-image.img
155
156   # List of CPUs to be isolated (not used by default)
157   # Grub line will be extended with:
158   # "isolcpus=<ISOL_CPUS> nohz=on nohz_full=<ISOL_CPUS> rcu_nocbs=<ISOL_CPUS>"
159   # ISOL_CPUS=2-27,30-55 # physical CPUs for all NUMA nodes, four CPUs reserved
160
161 .. warning::
162
163    Before running ``nsb_setup.sh`` make sure python is installed on servers
164    added to ``yardstick-standalone`` or ``yardstick-baremetal`` groups.
165
166 .. note::
167
168    SSH access without password needs to be configured for all your nodes
169    defined in ``install-inventory.ini`` file.
170    If you want to use password authentication you need to install ``sshpass``::
171
172      sudo -EH apt-get install sshpass
173
174
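Optionally, verify that Ansible can reach every host in the inventory before
starting the installation (run on the jump host)::

  cd ansible
  ansible -i install-inventory.ini all -m ping
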
175 .. note::
176
177    A VM image built by other means than Yardstick can be added to OpenStack.
178    Uncomment and set correct path to the VM image in the
179    ``install-inventory.ini`` file::
180
181      path_to_img=/tmp/workspace/yardstick-image.img
182
183
184 .. note::
185
186    CPU isolation can be applied to the remote servers, like:
187    ISOL_CPUS=2-27,30-55. Uncomment and modify accordingly in
188    ``install-inventory.ini`` file.
189
190 By default, ``nsb_setup.sh`` pulls the Yardstick image based on Ubuntu 16.04
191 from Docker Hub, starts a container, builds the NSB VM image based on Ubuntu
192 16.04, and installs packages on the servers given in the
193 ``yardstick-standalone`` and ``yardstick-baremetal`` host groups.
194
195 To pull a Yardstick image based on Ubuntu 18.04 instead, run::
196
197     ./nsb_setup.sh -i opnfv/yardstick-ubuntu-18.04:latest
198
199 To change the default behavior, modify the parameters passed to
200 ``install.yaml`` in the ``nsb_setup.sh`` file.
201
202 Refer to chapter :doc:`04-installation` for more details on ``install.yaml``
203 parameters.
204
205 To execute an installation for a **BareMetal** or a **Standalone context**::
206
207     ./nsb_setup.sh
208
209 To execute an installation for an **OpenStack** context::
210
211     ./nsb_setup.sh <path to admin-openrc.sh>
212
213 .. note::
214
215    Yardstick may not be operational after a distribution kernel update if it
216    was installed before the update. Run ``nsb_setup.sh`` again to resolve this.
217
218 .. warning::
219
220    The Yardstick VM image (NSB or normal) cannot be built inside a VM.
221
222 .. warning::
223
224    ``nsb_setup.sh`` configures hugepages, CPU isolation and IOMMU on the GRUB
225    command line. A reboot of the servers in the ``yardstick-standalone`` or
226    ``yardstick-baremetal`` groups of the ``install-inventory.ini`` file is
227    required to apply those changes.
228
229 The above commands will set up Docker with the latest Yardstick code. To
230 execute::
231
232   docker exec -it yardstick bash
233
234 .. note::
235
236    You may need to configure the tty in the Docker container to extend the
237    command line character length, for example::
238
239      stty rows 58 cols 234
240
241 The setup also automatically downloads all the packages needed for NSB testing.
242 Refer to chapter :doc:`04-installation` for more on Docker.
243
244 The examples below install Yardstick using Docker (recommended).
245
246 Bare Metal context example
247 ^^^^^^^^^^^^^^^^^^^^^^^^^^
248
249 Let's assume there are three servers acting as TG, sample VNF DUT and jump host.
250
251 Perform following steps to install NSB:
252
253 1. Clone Yardstick repo to jump host.
254 2. Add the TG and DUT servers to the ``yardstick-baremetal`` group in the
255    ``install-inventory.ini`` file to install NSB and dependencies (see the
256    example inventory after this list). Install Python on the servers.
257 3. Start the deployment using the Docker image based on Ubuntu 16.04:
258
259 .. code-block:: console
260
261    ./nsb_setup.sh
262
263 4. Reboot the bare metal servers.
264 5. Enter the Yardstick container, modify the pod yaml file and run tests.
265
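For this three-server setup the relevant part of ``install-inventory.ini``
could look like the sketch below (host names and IP addresses are
placeholders)::

  [jumphost]
  localhost ansible_connection=local

  [yardstick-baremetal]
  trafficgen ansible_host=192.168.2.51 ansible_connection=ssh
  vnf        ansible_host=192.168.2.52 ansible_connection=ssh
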
266 Standalone context example for Ubuntu 18
267 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
268
269 Let's assume there are three servers acting as TG, sample VNF DUT and jump host.
270 Ubuntu 18 is installed on all servers.
271
272 Perform following steps to install NSB:
273
274 1. Clone Yardstick repo to jump host.
275 2. Add the TG server to the ``yardstick-baremetal`` group in the
276    ``install-inventory.ini`` file to install NSB and dependencies.
277    Add the server where the VM with the sample VNF will be deployed to the
278    ``yardstick-standalone`` group in the ``install-inventory.ini`` file.
279    The target VM image named ``yardstick-nsb-image.img`` will be placed in
280    ``/var/lib/libvirt/images/`` (this can be checked after deployment, see below).
281    Install Python on the servers.
282 3. Modify the ``ansible-playbook`` command in ``nsb_setup.sh`` on the jump host:
283
284 .. code-block:: console
285
286    ansible-playbook \
287    -e IMAGE_PROPERTY='nsb' \
288    -e OS_RELEASE='bionic' \
289    -e INSTALLATION_MODE='container_pull' \
290    -e YARD_IMAGE_ARCH='amd64' ${extra_args} \
291    -i install-inventory.ini install.yaml
292
293 4. Start the deployment with the Yardstick Docker image based on Ubuntu 18.04:
294
295 .. code-block:: console
296
297    ./nsb_setup.sh -i opnfv/yardstick-ubuntu-18.04:latest -o <openrc_file>
298
299 5. Reboot the servers.
300 6. Enter the Yardstick container, modify the pod yaml file and run tests.
301
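After the deployment the generated VM image can be verified on the standalone
server, e.g.::

  ls -l /var/lib/libvirt/images/yardstick-nsb-image.img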
302
303 System Topology
304 ---------------
305
306 .. code-block:: console
307
308   +----------+              +----------+
309   |          |              |          |
310   |          | (0)----->(0) |          |
311   |    TG1   |              |    DUT   |
312   |          |              |          |
313   |          | (1)<-----(1) |          |
314   +----------+              +----------+
315   trafficgen_0                   vnf
316
317
318 Environment parameters and credentials
319 --------------------------------------
320
321 Configure yardstick.conf
322 ^^^^^^^^^^^^^^^^^^^^^^^^
323
324 If you did not run ``yardstick env influxdb`` inside the container to generate
325 ``yardstick.conf``, then create the config file manually (run inside the
326 container)::
327
328     cp ./etc/yardstick/yardstick.conf.sample /etc/yardstick/yardstick.conf
329     vi /etc/yardstick/yardstick.conf
330
331 Add ``trex_path``, ``trex_client_lib`` and ``bin_path`` to the ``nsb``
332 section::
333
334   [DEFAULT]
335   debug = True
336   dispatcher = influxdb
337
338   [dispatcher_influxdb]
339   timeout = 5
340   target = http://{YOUR_IP_HERE}:8086
341   db_name = yardstick
342   username = root
343   password = root
344
345   [nsb]
346   trex_path=/opt/nsb_bin/trex/scripts
347   bin_path=/opt/nsb_bin
348   trex_client_lib=/opt/nsb_bin/trex_client/stl
349
350 Run Yardstick - Network Service Testcases
351 -----------------------------------------
352
353 NS testing - using yardstick CLI
354 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
355
356   See :doc:`04-installation`
357
358 Connect to the Yardstick container::
359
360   docker exec -it yardstick /bin/bash
361
362 If you're running ``heat`` testcases and ``nsb_setup.sh`` was not used::

363   source /etc/yardstick/openstack.creds
364
365 In addition to the above, you need to set the ``EXTERNAL_NETWORK`` for
366 OpenStack::
367
368   export EXTERNAL_NETWORK="<openstack public network>"
369
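The name of the public network can be found with the OpenStack client, for
example::

  openstack network list --external
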
370 Finally, you should be able to run the testcase::
371
372   yardstick --debug task start yardstick/samples/vnf_samples/nsut/<vnf>/<test case>
373
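For example, a vFW RFC2544 bare metal test case (the file name is only an
illustration; pick the test case you need)::

  yardstick --debug task start yardstick/samples/vnf_samples/nsut/vfw/tc_baremetal_rfc2544_ipv4_1rule_1flow_64B_trex.yaml
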
374 Network Service Benchmarking - Bare-Metal
375 -----------------------------------------
376
377 Bare-Metal Config pod.yaml describing Topology
378 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
379
380 Bare-Metal 2-Node setup
381 +++++++++++++++++++++++
382 .. code-block:: console
383
384   +----------+              +----------+
385   |          |              |          |
386   |          | (0)----->(0) |          |
387   |    TG1   |              |    DUT   |
388   |          |              |          |
389   |          | (n)<-----(n) |          |
390   +----------+              +----------+
391   trafficgen_0                   vnf
392
393 Bare-Metal 3-Node setup - Correlated Traffic
394 ++++++++++++++++++++++++++++++++++++++++++++
395 .. code-block:: console
396
397   +----------+              +----------+            +------------+
398   |          |              |          |            |            |
399   |          |              |          |            |            |
400   |          | (0)----->(0) |          |            |    UDP     |
401   |    TG1   |              |    DUT   |            |   Replay   |
402   |          |              |          |            |            |
403   |          |              |          |(1)<---->(0)|            |
404   +----------+              +----------+            +------------+
405   trafficgen_0                   vnf                 trafficgen_1
406
407
408 Bare-Metal Config pod.yaml
409 ^^^^^^^^^^^^^^^^^^^^^^^^^^
410 Before executing Yardstick test cases, make sure that ``pod.yaml`` reflects the
411 topology and update all the required fields::
412
413     cp /etc/yardstick/nodes/pod.yaml.nsb.sample /etc/yardstick/nodes/pod.yaml
414
415 .. code-block:: YAML
416
417     nodes:
418     -
419         name: trafficgen_0
420         role: TrafficGen
421         ip: 1.1.1.1
422         user: root
423         password: r00t
424         interfaces:
425             xe0:  # logical name from topology.yaml and vnfd.yaml
426                 vpci:      "0000:07:00.0"
427                 driver:    i40e # default kernel driver
428                 dpdk_port_num: 0
429                 local_ip: "152.16.100.20"
430                 netmask:   "255.255.255.0"
431                 local_mac: "00:00:00:00:00:01"
432             xe1:  # logical name from topology.yaml and vnfd.yaml
433                 vpci:      "0000:07:00.1"
434                 driver:    i40e # default kernel driver
435                 dpdk_port_num: 1
436                 local_ip: "152.16.40.20"
437                 netmask:   "255.255.255.0"
438                 local_mac: "00:00:00:00:00:02"
439
440     -
441         name: vnf
442         role: vnf
443         ip: 1.1.1.2
444         user: root
445         password: r00t
446         host: 1.1.1.2 #BM - host == ip, virtualized env - Host - compute node
447         interfaces:
448             xe0:  # logical name from topology.yaml and vnfd.yaml
449                 vpci:      "0000:07:00.0"
450                 driver:    i40e # default kernel driver
451                 dpdk_port_num: 0
452                 local_ip: "152.16.100.19"
453                 netmask:   "255.255.255.0"
454                 local_mac: "00:00:00:00:00:03"
455
456             xe1:  # logical name from topology.yaml and vnfd.yaml
457                 vpci:      "0000:07:00.1"
458                 driver:    i40e # default kernel driver
459                 dpdk_port_num: 1
460                 local_ip: "152.16.40.19"
461                 netmask:   "255.255.255.0"
462                 local_mac: "00:00:00:00:00:04"
463         routing_table:
464         - network: "152.16.100.20"
465           netmask: "255.255.255.0"
466           gateway: "152.16.100.20"
467           if: "xe0"
468         - network: "152.16.40.20"
469           netmask: "255.255.255.0"
470           gateway: "152.16.40.20"
471           if: "xe1"
472         nd_route_tbl:
473         - network: "0064:ff9b:0:0:0:0:9810:6414"
474           netmask: "112"
475           gateway: "0064:ff9b:0:0:0:0:9810:6414"
476           if: "xe0"
477         - network: "0064:ff9b:0:0:0:0:9810:2814"
478           netmask: "112"
479           gateway: "0064:ff9b:0:0:0:0:9810:2814"
480           if: "xe1"
481
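The ``vpci`` and ``driver`` values above can be cross-checked on each node,
for instance with the commands below (assuming the DPDK tools installed by
``nsb_setup.sh`` are in the path)::

  lshw -c network -businfo
  dpdk-devbind.py --status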
482
483 Standalone Virtualization
484 -------------------------
485
486 SR-IOV
487 ^^^^^^
488
489 SR-IOV Pre-requisites
490 +++++++++++++++++++++
491
492 On the host where the VM is created:
493  a) Create and configure a bridge named ``br-int`` for the VM to connect to
494     the external network. Currently this can be done using a VXLAN tunnel.
495
496     Execute the following on the host where the VM is created::
497
498       ip link add type vxlan remote <Jumphost IP> local <DUT IP> id <ID: 10> dstport 4789
499       brctl addbr br-int
500       brctl addif br-int vxlan0
501       ip link set dev vxlan0 up
502       ip addr add <IP#1, like: 172.20.2.1/24> dev br-int
503       ip link set dev br-int up
504
505   .. note:: You may need to add extra rules to iptables to forward traffic.
506
507   .. code-block:: console
508
509     iptables -A FORWARD -i br-int -s <network ip address>/<netmask> -j ACCEPT
510     iptables -A FORWARD -o br-int -d <network ip address>/<netmask> -j ACCEPT
511
512   Execute the following on a jump host:
513
514   .. code-block:: console
515
516       ip link add type vxlan remote <DUT IP> local <Jumphost IP> id <ID: 10> dstport 4789
517       ip addr add <IP#2, like: 172.20.2.2/24> dev vxlan0
518       ip link set dev vxlan0 up
519
520   .. note:: Host and jump host are different baremetal servers.
521
522  b) Modify test case management CIDR.
523     IP addresses IP#1, IP#2 and CIDR must be in the same network.
524
525   .. code-block:: YAML
526
527     servers:
528       vnf_0:
529         network_ports:
530           mgmt:
531             cidr: '1.1.1.7/24'
532
533  c) Build the guest image for the VNF to run.
534     Most of the sample test cases in Yardstick are using a guest image called
535     ``yardstick-nsb-image``, which deviates from an Ubuntu Cloud Server image.
536     Yardstick has a tool for building this custom image with SampleVNF.
537     It is necessary to have ``sudo`` rights to use this tool.
538
539    Also you may need to install several additional packages to use this tool, by
540    following the commands below::
541
542       sudo apt-get update && sudo apt-get install -y qemu-utils kpartx
543
544    This image can be built using the following command in the directory where
545    Yardstick is installed::
546
547       export YARD_IMG_ARCH='amd64'
548       echo 'Defaults env_keep += "YARD_IMG_ARCH"' | sudo tee -a /etc/sudoers
          sudo tools/yardstick-img-dpdk-modify tools/ubuntu-server-cloudimg-samplevnf-modify.sh
549
550    For instructions on generating a cloud image using Ansible and for more
551    details, refer to chapter :doc:`04-installation`.
554
555    .. note:: The VM should be built with a static IP and be accessible from
556       the Yardstick host.
557
558
559 SR-IOV Config pod.yaml describing Topology
560 ++++++++++++++++++++++++++++++++++++++++++
561
562 SR-IOV 2-Node setup
563 +++++++++++++++++++
564 .. code-block:: console
565
566                                +--------------------+
567                                |                    |
568                                |                    |
569                                |        DUT         |
570                                |       (VNF)        |
571                                |                    |
572                                +--------------------+
573                                | VF NIC |  | VF NIC |
574                                +--------+  +--------+
575                                      ^          ^
576                                      |          |
577                                      |          |
578   +----------+               +-------------------------+
579   |          |               |       ^          ^      |
580   |          |               |       |          |      |
581   |          | (0)<----->(0) | ------    SUT    |      |
582   |    TG1   |               |                  |      |
583   |          | (n)<----->(n) | -----------------       |
584   |          |               |                         |
585   +----------+               +-------------------------+
586   trafficgen_0                          host
587
588
589
590 SR-IOV 3-Node setup - Correlated Traffic
591 ++++++++++++++++++++++++++++++++++++++++
592 .. code-block:: console
593
594                              +--------------------+
595                              |                    |
596                              |                    |
597                              |        DUT         |
598                              |       (VNF)        |
599                              |                    |
600                              +--------------------+
601                              | VF NIC |  | VF NIC |
602                              +--------+  +--------+
603                                    ^          ^
604                                    |          |
605                                    |          |
606   +----------+               +---------------------+            +--------------+
607   |          |               |     ^          ^    |            |              |
608   |          |               |     |          |    |            |              |
609   |          | (0)<----->(0) |-----           |    |            |     TG2      |
610   |    TG1   |               |         SUT    |    |            | (UDP Replay) |
611   |          |               |                |    |            |              |
612   |          | (n)<----->(n) |                -----| (n)<-->(n) |              |
613   +----------+               +---------------------+            +--------------+
614   trafficgen_0                          host                      trafficgen_1
615
616 Before executing Yardstick test cases, make sure that ``pod.yaml`` reflects the
617 topology and update all the required fields.
618
619 .. code-block:: console
620
621     cp <yardstick>/etc/yardstick/nodes/standalone/trex_bm.yaml.sample /etc/yardstick/nodes/standalone/pod_trex.yaml
622     cp <yardstick>/etc/yardstick/nodes/standalone/host_sriov.yaml /etc/yardstick/nodes/standalone/host_sriov.yaml
623
624 .. note:: Update all the required fields like ip, user, password, pcis, etc...
625
626 SR-IOV Config pod_trex.yaml
627 +++++++++++++++++++++++++++
628
629 .. code-block:: YAML
630
631     nodes:
632     -
633         name: trafficgen_0
634         role: TrafficGen
635         ip: 1.1.1.1
636         user: root
637         password: r00t
638         key_filename: /root/.ssh/id_rsa
639         interfaces:
640             xe0:  # logical name from topology.yaml and vnfd.yaml
641                 vpci:      "0000:07:00.0"
642                 driver:    i40e # default kernel driver
643                 dpdk_port_num: 0
644                 local_ip: "152.16.100.20"
645                 netmask:   "255.255.255.0"
646                 local_mac: "00:00:00:00:00:01"
647             xe1:  # logical name from topology.yaml and vnfd.yaml
648                 vpci:      "0000:07:00.1"
649                 driver:    i40e # default kernel driver
650                 dpdk_port_num: 1
651                 local_ip: "152.16.40.20"
652                 netmask:   "255.255.255.0"
653                 local_mac: "00:00:00:00:00:02"
654
655 SR-IOV Config host_sriov.yaml
656 +++++++++++++++++++++++++++++
657
658 .. code-block:: YAML
659
660     nodes:
661     -
662        name: sriov
663        role: Sriov
664        ip: 192.168.100.101
665        user: ""
666        password: ""
667
668 SR-IOV testcase update:
669 ``<yardstick>/samples/vnf_samples/nsut/vfw/tc_sriov_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``
670
671 Update contexts section
672 '''''''''''''''''''''''
673
674 .. code-block:: YAML
675
676   contexts:
677    - name: yardstick
678      type: Node
679      file: /etc/yardstick/nodes/standalone/pod_trex.yaml
680    - type: StandaloneSriov
681      file: /etc/yardstick/nodes/standalone/host_sriov.yaml
682      name: yardstick
683      vm_deploy: True
684      flavor:
685        images: "/var/lib/libvirt/images/ubuntu.qcow2"
686        ram: 4096
687        extra_specs:
688          hw:cpu_sockets: 1
689          hw:cpu_cores: 6
690          hw:cpu_threads: 2
691        user: "" # update VM username
692        password: "" # update password
693      servers:
694        vnf_0:
695          network_ports:
696            mgmt:
697              cidr: '1.1.1.61/24'  # Update VM IP address, if static, <ip>/<mask> or if dynamic, <start of ip>/<mask>
698            xe0:
699              - uplink_0
700            xe1:
701              - downlink_0
702      networks:
703        uplink_0:
704          phy_port: "0000:05:00.0"
705          vpci: "0000:00:07.0"
706          cidr: '152.16.100.10/24'
707          gateway_ip: '152.16.100.20'
708        downlink_0:
709          phy_port: "0000:05:00.1"
710          vpci: "0000:00:08.0"
711          cidr: '152.16.40.10/24'
712          gateway_ip: '152.16.100.20'
713
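With the pod files and the contexts section updated, the SR-IOV test case can
be run from inside the Yardstick container, e.g.::

  yardstick task start samples/vnf_samples/nsut/vfw/tc_sriov_rfc2544_ipv4_1rule_1flow_64B_trex.yaml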
714
715 OVS-DPDK
716 ^^^^^^^^
717
718 OVS-DPDK Pre-requisites
719 +++++++++++++++++++++++
720
721 On the host where the VM is created:
722  a) Create and configure a bridge named ``br-int`` for the VM to connect to
723     the external network. Currently this can be done using a VXLAN tunnel.
724
725     Execute the following on the host where the VM is created:
726
727   .. code-block:: console
728
729       ip link add type vxlan remote <Jumphost IP> local <DUT IP> id <ID: 10> dstport 4789
730       brctl addbr br-int
731       brctl addif br-int vxlan0
732       ip link set dev vxlan0 up
733       ip addr add <IP#1, like: 172.20.2.1/24> dev br-int
734       ip link set dev br-int up
735
736   .. note:: You may need to add extra rules to iptables to forward traffic.
737
738   .. code-block:: console
739
740     iptables -A FORWARD -i br-int -s <network ip address>/<netmask> -j ACCEPT
741     iptables -A FORWARD -o br-int -d <network ip address>/<netmask> -j ACCEPT
742
743   Execute the following on a jump host:
744
745   .. code-block:: console
746
747       ip link add type vxlan remote <DUT IP> local <Jumphost IP> id <ID: 10> dstport 4789
748       ip addr add <IP#2, like: 172.20.2.2/24> dev vxlan0
749       ip link set dev vxlan0 up
750
751   .. note:: Host and jump host are different baremetal servers.
752
753  b) Modify test case management CIDR.
754     IP addresses IP#1, IP#2 and CIDR must be in the same network.
755
756   .. code-block:: YAML
757
758     servers:
759       vnf_0:
760         network_ports:
761           mgmt:
762             cidr: '1.1.1.7/24'
763
764  c) Build the guest image for the VNF to run.
765     Most of the sample test cases in Yardstick are using a guest image called
766     ``yardstick-nsb-image``, which deviates from an Ubuntu Cloud Server image.
767     Yardstick has a tool for building this custom image with SampleVNF.
768     It is necessary to have ``sudo`` rights to use this tool.
769
770    You may need to install several additional packages to use this tool, by
771    following the commands below::
772
773       sudo apt-get update && sudo apt-get install -y qemu-utils kpartx
774
775    This image can be built using the following command in the directory where
776    Yardstick is installed::
777
778       export YARD_IMG_ARCH='amd64'
779       echo 'Defaults env_keep += "YARD_IMG_ARCH"' | sudo tee -a /etc/sudoers
780       sudo tools/yardstick-img-dpdk-modify tools/ubuntu-server-cloudimg-samplevnf-modify.sh
781
782    For more details refer to chapter :doc:`04-installation`.
783
784    .. note:: The VM should be built with a static IP and should be accessible
785       from the Yardstick host.
786
787  d) OVS & DPDK version:
788     OVS 2.7 or above with DPDK 16.11.1 or above is supported.
789
790  e) Set up `OVS-DPDK`_ on the host.
791
792
793 OVS-DPDK Config pod.yaml describing Topology
794 ++++++++++++++++++++++++++++++++++++++++++++
795
796 OVS-DPDK 2-Node setup
797 +++++++++++++++++++++
798
799 .. code-block:: console
800
801                                +--------------------+
802                                |                    |
803                                |                    |
804                                |        DUT         |
805                                |       (VNF)        |
806                                |                    |
807                                +--------------------+
808                                | virtio |  | virtio |
809                                +--------+  +--------+
810                                     ^          ^
811                                     |          |
812                                     |          |
813                                +--------+  +--------+
814                                | vHOST0 |  | vHOST1 |
815   +----------+               +-------------------------+
816   |          |               |       ^          ^      |
817   |          |               |       |          |      |
818   |          | (0)<----->(0) | ------           |      |
819   |    TG1   |               |          SUT     |      |
820   |          |               |       (ovs-dpdk) |      |
821   |          | (n)<----->(n) |------------------       |
822   +----------+               +-------------------------+
823   trafficgen_0                          host
824
825
826 OVS-DPDK 3-Node setup - Correlated Traffic
827 ++++++++++++++++++++++++++++++++++++++++++
828
829 .. code-block:: console
830
831                                +--------------------+
832                                |                    |
833                                |                    |
834                                |        DUT         |
835                                |       (VNF)        |
836                                |                    |
837                                +--------------------+
838                                | virtio |  | virtio |
839                                +--------+  +--------+
840                                     ^          ^
841                                     |          |
842                                     |          |
843                                +--------+  +--------+
844                                | vHOST0 |  | vHOST1 |
845   +----------+               +-------------------------+          +------------+
846   |          |               |       ^          ^      |          |            |
847   |          |               |       |          |      |          |            |
848   |          | (0)<----->(0) | ------           |      |          |    TG2     |
849   |    TG1   |               |          SUT     |      |          |(UDP Replay)|
850   |          |               |      (ovs-dpdk)  |      |          |            |
851   |          | (n)<----->(n) |                  ------ |(n)<-->(n)|            |
852   +----------+               +-------------------------+          +------------+
853   trafficgen_0                          host                       trafficgen_1
854
855
856 Before executing Yardstick test cases, make sure that the ``pod.yaml`` reflects
857 the topology and update all the required fields::
858
859   cp <yardstick>/etc/yardstick/nodes/standalone/trex_bm.yaml.sample /etc/yardstick/nodes/standalone/pod_trex.yaml
860   cp <yardstick>/etc/yardstick/nodes/standalone/host_ovs.yaml /etc/yardstick/nodes/standalone/host_ovs.yaml
861
862 .. note:: Update all the required fields like ip, user, password, pcis, etc...
863
864 OVS-DPDK Config pod_trex.yaml
865 +++++++++++++++++++++++++++++
866
867 .. code-block:: YAML
868
869     nodes:
870     -
871       name: trafficgen_0
872       role: TrafficGen
873       ip: 1.1.1.1
874       user: root
875       password: r00t
876       interfaces:
877           xe0:  # logical name from topology.yaml and vnfd.yaml
878               vpci:      "0000:07:00.0"
879               driver:    i40e # default kernel driver
880               dpdk_port_num: 0
881               local_ip: "152.16.100.20"
882               netmask:   "255.255.255.0"
883               local_mac: "00:00:00:00:00:01"
884           xe1:  # logical name from topology.yaml and vnfd.yaml
885               vpci:      "0000:07:00.1"
886               driver:    i40e # default kernel driver
887               dpdk_port_num: 1
888               local_ip: "152.16.40.20"
889               netmask:   "255.255.255.0"
890               local_mac: "00:00:00:00:00:02"
891
892 OVS-DPDK Config host_ovs.yaml
893 +++++++++++++++++++++++++++++
894
895 .. code-block:: YAML
896
897     nodes:
898     -
899        name: ovs_dpdk
900        role: OvsDpdk
901        ip: 192.168.100.101
902        user: ""
903        password: ""
904
905 ovs_dpdk testcase update:
906 ``<yardstick>/samples/vnf_samples/nsut/vfw/tc_ovs_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``
907
908 Update contexts section
909 '''''''''''''''''''''''
910
911 .. code-block:: YAML
912
913   contexts:
914    - name: yardstick
915      type: Node
916      file: /etc/yardstick/nodes/standalone/pod_trex.yaml
917    - type: StandaloneOvsDpdk
918      name: yardstick
919      file: /etc/yardstick/nodes/standalone/pod_ovs.yaml
920      vm_deploy: True
921      ovs_properties:
922        version:
923          ovs: 2.7.0
924          dpdk: 16.11.1
925        pmd_threads: 2
926        ram:
927          socket_0: 2048
928          socket_1: 2048
929        queues: 4
930        vpath: "/usr/local"
931
932      flavor:
933        images: "/var/lib/libvirt/images/ubuntu.qcow2"
934        ram: 4096
935        extra_specs:
936          hw:cpu_sockets: 1
937          hw:cpu_cores: 6
938          hw:cpu_threads: 2
939        user: "" # update VM username
940        password: "" # update password
941      servers:
942        vnf_0:
943          network_ports:
944            mgmt:
945              cidr: '1.1.1.61/24'  # Update VM IP address, if static, <ip>/<mask> or if dynamic, <start of ip>/<mask>
946            xe0:
947              - uplink_0
948            xe1:
949              - downlink_0
950      networks:
951        uplink_0:
952          phy_port: "0000:05:00.0"
953          vpci: "0000:00:07.0"
954          cidr: '152.16.100.10/24'
955          gateway_ip: '152.16.100.20'
956        downlink_0:
957          phy_port: "0000:05:00.1"
958          vpci: "0000:00:08.0"
959          cidr: '152.16.40.10/24'
960          gateway_ip: '152.16.100.20'
961
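With the pod files and the contexts section updated, the OVS-DPDK test case
can be run from inside the Yardstick container, e.g.::

  yardstick task start samples/vnf_samples/nsut/vfw/tc_ovs_rfc2544_ipv4_1rule_1flow_64B_trex.yaml
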
962 OVS-DPDK configuration options
963 ++++++++++++++++++++++++++++++
964
965 There are number of configuration options available for OVS-DPDK context in
966 test case. Mostly they are used for performance tuning.
967
968 OVS-DPDK properties:
969 ''''''''''''''''''''
970
971 OVS-DPDK properties example under *ovs_properties* section:
972
973   .. code-block:: console
974
975       ovs_properties:
976         version:
977           ovs: 2.8.1
978           dpdk: 17.05.2
979         pmd_threads: 4
980         pmd_cpu_mask: "0x3c"
981         ram:
982          socket_0: 2048
983          socket_1: 2048
984         queues: 2
985         vpath: "/usr/local"
986         max_idle: 30000
987         lcore_mask: 0x02
988         dpdk_pmd-rxq-affinity:
989           0: "0:2,1:2"
990           1: "0:2,1:2"
991           2: "0:3,1:3"
992           3: "0:3,1:3"
993         vhost_pmd-rxq-affinity:
994           0: "0:3,1:3"
995           1: "0:3,1:3"
996           2: "0:4,1:4"
997           3: "0:4,1:4"
998
999 OVS-DPDK properties description:
1000
1001   +-------------------------+-------------------------------------------------+
1002   | Parameters              | Detail                                          |
1003   +=========================+=================================================+
1004   | version                 || Version of OVS and DPDK to be installed        |
1005   |                         || There is a relation between OVS and DPDK       |
1006   |                         |  version which can be found at                  |
1007   |                         | `OVS-DPDK-versions`_                            |
1008   |                         || By default OVS: 2.6.0, DPDK: 16.07.2           |
1009   +-------------------------+-------------------------------------------------+
1010   | lcore_mask              || Core bitmask used during DPDK initialization   |
1011   |                         |  where the non-datapath OVS-DPDK threads such   |
1012   |                         |  as handler and revalidator threads run         |
1013   +-------------------------+-------------------------------------------------+
1014   | pmd_cpu_mask            || Core bitmask that sets which cores are used by |
1015   |                         || OVS-DPDK for datapath packet processing        |
1016   +-------------------------+-------------------------------------------------+
1017   | pmd_threads             || Number of PMD threads used by OVS-DPDK for     |
1018   |                         |  datapath                                       |
1019   |                         || This core mask is evaluated in Yardstick       |
1020   |                         || It will be used if pmd_cpu_mask is not given   |
1021   |                         || Default is 2                                   |
1022   +-------------------------+-------------------------------------------------+
1023   | ram                     || Amount of RAM to be used for each socket, MB   |
1024   |                         || Default is 2048 MB                             |
1025   +-------------------------+-------------------------------------------------+
1026   | queues                  || Number of RX queues used for DPDK physical     |
1027   |                         |  interface                                      |
1028   +-------------------------+-------------------------------------------------+
1029   | dpdk_pmd-rxq-affinity   || RX queue assignment to PMD threads for DPDK    |
1030   |                         || e.g.: <port number> : <queue-id>:<core-id>     |
1031   +-------------------------+-------------------------------------------------+
1032   | vhost_pmd-rxq-affinity  || RX queue assignment to PMD threads for vhost   |
1033   |                         || e.g.: <port number> : <queue-id>:<core-id>     |
1034   +-------------------------+-------------------------------------------------+
1035   | vpath                   || User path for openvswitch files                |
1036   |                         || Default is ``/usr/local``                      |
1037   +-------------------------+-------------------------------------------------+
1038   | max_idle                || The maximum time that idle flows will remain   |
1039   |                         |  cached in the datapath, ms                     |
1040   +-------------------------+-------------------------------------------------+
1041
1042
1043 VM image properties
1044 '''''''''''''''''''
1045
1046 VM image properties example under *flavor* section:
1047
1048   .. code-block:: console
1049
1050       flavor:
1051         images: <path>
1052         ram: 8192
1053         extra_specs:
1054            machine_type: 'pc-i440fx-xenial'
1055            hw:cpu_sockets: 1
1056            hw:cpu_cores: 6
1057            hw:cpu_threads: 2
1058            hw_socket: 0
1059            cputune: |
1060              <cputune>
1061                <vcpupin vcpu="0" cpuset="7"/>
1062                <vcpupin vcpu="1" cpuset="8"/>
1063                ...
1064                <vcpupin vcpu="11" cpuset="18"/>
1065                <emulatorpin cpuset="11"/>
1066              </cputune>
1067
1068 VM image properties description:
1069
1070   +-------------------------+-------------------------------------------------+
1071   | Parameters              | Detail                                          |
1072   +=========================+=================================================+
1073   | images                  || Path to the VM image generated by              |
1074   |                         |  ``nsb_setup.sh``                               |
1075   |                         || Default path is ``/var/lib/libvirt/images/``   |
1076   |                         || Default file name ``yardstick-nsb-image.img``  |
1077   |                         |  or ``yardstick-image.img``                     |
1078   +-------------------------+-------------------------------------------------+
1079   | ram                     || Amount of RAM to be used for VM                |
1080   |                         || Default is 4096 MB                             |
1081   +-------------------------+-------------------------------------------------+
1082   | hw:cpu_sockets          || Number of sockets provided to the guest VM     |
1083   |                         || Default is 1                                   |
1084   +-------------------------+-------------------------------------------------+
1085   | hw:cpu_cores            || Number of cores provided to the guest VM       |
1086   |                         || Default is 2                                   |
1087   +-------------------------+-------------------------------------------------+
1088   | hw:cpu_threads          || Number of threads provided to the guest VM     |
1089   |                         || Default is 2                                   |
1090   +-------------------------+-------------------------------------------------+
1091   | hw_socket               || Generate vcpu cpuset from given HW socket      |
1092   |                         || Default is 0                                   |
1093   +-------------------------+-------------------------------------------------+
1094   | cputune                 || Maps virtual cpu with logical cpu              |
1095   +-------------------------+-------------------------------------------------+
1096   | machine_type            || Machine type to be emulated in VM              |
1097   |                         || Default is 'pc-i440fx-xenial'                  |
1098   +-------------------------+-------------------------------------------------+
1099
1100
1101 OpenStack with SR-IOV support
1102 -----------------------------
1103
1104 This section describes how to run a Sample VNF test case, using Heat context,
1105 with SR-IOV. It also covers how to install OpenStack in Ubuntu 16.04, using
1106 DevStack, with SR-IOV support.
1107
1108
1109 Single node OpenStack with external TG
1110 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1111
1112 .. code-block:: console
1113
1114                                  +----------------------------+
1115                                  |OpenStack(DevStack)         |
1116                                  |                            |
1117                                  |   +--------------------+   |
1118                                  |   |sample-VNF VM       |   |
1119                                  |   |                    |   |
1120                                  |   |        DUT         |   |
1121                                  |   |       (VNF)        |   |
1122                                  |   |                    |   |
1123                                  |   +--------+  +--------+   |
1124                                  |   | VF NIC |  | VF NIC |   |
1125                                  |   +-----+--+--+----+---+   |
1126                                  |         ^          ^       |
1127                                  |         |          |       |
1128   +----------+                   +---------+----------+-------+
1129   |          |                   |        VF0        VF1      |
1130   |          |                   |         ^          ^       |
1131   |          |                   |         |   SUT    |       |
1132   |    TG    | (PF0)<----->(PF0) +---------+          |       |
1133   |          |                   |                    |       |
1134   |          | (PF1)<----->(PF1) +--------------------+       |
1135   |          |                   |                            |
1136   +----------+                   +----------------------------+
1137   trafficgen_0                                 host
1138
1139
1140 Host pre-configuration
1141 ++++++++++++++++++++++
1142
1143 .. warning:: The following configuration requires sudo access to the system.
1144    Make sure that your user has this access.
1145
1146 Enable the Intel VT-d or AMD-Vi extension in the BIOS. Some system
1147 manufacturers disable this extension by default.
1148
1149 Activate the Intel VT-d or AMD-Vi extension in the kernel by modifying the GRUB
1150 config file ``/etc/default/grub``.
1151
1152 For the Intel platform::
1153
1154   ...
1155   GRUB_CMDLINE_LINUX_DEFAULT="intel_iommu=on"
1156   ...
1157
1158 For the AMD platform::
1159
1160   ...
1161   GRUB_CMDLINE_LINUX_DEFAULT="amd_iommu=on"
1162   ...
1163
1164 Update the grub configuration file and restart the system:
1165
1166 .. warning:: The following command will reboot the system.
1167
1168 .. code:: bash
1169
1170   sudo update-grub
1171   sudo reboot
1172
1173 Make sure the extension has been enabled::
1174
1175   sudo journalctl -b 0 | grep -e IOMMU -e DMAR
1176
1177   Feb 06 14:50:14 hostname kernel: ACPI: DMAR 0x000000006C406000 0001E0 (v01 INTEL  S2600WF  00000001 INTL 20091013)
1178   Feb 06 14:50:14 hostname kernel: DMAR: IOMMU enabled
1179   Feb 06 14:50:14 hostname kernel: DMAR: Host address width 46
1180   Feb 06 14:50:14 hostname kernel: DMAR: DRHD base: 0x000000d37fc000 flags: 0x0
1181   Feb 06 14:50:14 hostname kernel: DMAR: dmar0: reg_base_addr d37fc000 ver 1:0 cap 8d2078c106f0466 ecap f020de
1182   Feb 06 14:50:14 hostname kernel: DMAR: DRHD base: 0x000000e0ffc000 flags: 0x0
1183   Feb 06 14:50:14 hostname kernel: DMAR: dmar1: reg_base_addr e0ffc000 ver 1:0 cap 8d2078c106f0466 ecap f020de
1184   Feb 06 14:50:14 hostname kernel: DMAR: DRHD base: 0x000000ee7fc000 flags: 0x0
1185
1186 .. TODO: Refer to the yardstick installation guide for proxy set up
1187
1188 Set up the system proxy (if needed). Add the following configuration into the
1189 ``/etc/environment`` file:
1190
1191 .. note:: The proxy server name/port and IPs should be changed according to
1192   actual/current proxy configuration in the lab.
1193
1194 .. code:: bash
1195
1196   export http_proxy=http://proxy.company.com:port
1197   export https_proxy=http://proxy.company.com:port
1198   export ftp_proxy=http://proxy.company.com:port
1199   export no_proxy=localhost,127.0.0.1,company.com,<IP-OF-HOST1>,<IP-OF-HOST2>,...
1200   export NO_PROXY=localhost,127.0.0.1,company.com,<IP-OF-HOST1>,<IP-OF-HOST2>,...
1201
1202 Upgrade the system:
1203
1204 .. code:: bash
1205
1206   sudo -EH apt-get update
1207   sudo -EH apt-get upgrade
1208   sudo -EH apt-get dist-upgrade
1209
1210 Install the dependencies needed for DevStack:
1211
1212 .. code:: bash
1213
1214   sudo -EH apt-get install python python-dev python-pip
1215
1216 Setup SR-IOV ports on the host:
1217
1218 .. note:: ``enp24s0f0`` and ``enp24s0f1`` are physical function (PF) interfaces
1219   on a host and ``enp24s0f3`` is a public interface used in OpenStack, so the
1220   interface names should be changed according to the HW environment used for
1221   testing.
1222
1223 .. code:: bash
1224
1225   sudo ip link set dev enp24s0f0 up
1226   sudo ip link set dev enp24s0f1 up
1227   sudo ip link set dev enp24s0f3 up
1228
1229   # Create VFs on PF
1230   echo 2 | sudo tee /sys/class/net/enp24s0f0/device/sriov_numvfs
1231   echo 2 | sudo tee /sys/class/net/enp24s0f1/device/sriov_numvfs
1232
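The created VFs can then be verified, e.g.::

  ip link show enp24s0f0
  lspci | grep Ether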
1233
1234 DevStack installation
1235 +++++++++++++++++++++
1236
1237 If you want to try out NSB, but don't have OpenStack set up, you can use
1238 `Devstack`_ to install OpenStack on a host. Please note that the
1239 ``stable/pike`` branch of the devstack repo should be used during the
1240 installation. The required ``local.conf`` configuration file is described below.
1241
1242 DevStack configuration file:
1243
1244 .. note:: Update the devstack configuration file by replacing angular brackets
1245   with a short description inside.
1246
1247 .. note:: Use ``lspci | grep Ether`` & ``lspci -n | grep <PCI ADDRESS>``
1248   commands to get device and vendor id of the virtual function (VF).
1249
1250 .. literalinclude:: code/single-devstack-local.conf
1251    :language: console
1252
1253 Start the devstack installation on a host.
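
A minimal sequence, assuming the upstream devstack repository (URL and paths
shown only for illustration), is::

  git clone https://opendev.org/openstack/devstack -b stable/pike
  cd devstack
  # create local.conf from the sample above and edit the placeholders
  ./stack.sh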
1254
1255 TG host configuration
1256 +++++++++++++++++++++
1257
1258 Yardstick automatically installs and configures the Trex traffic generator on
1259 the TG host based on the provided POD file (see below). Nevertheless, it is
1260 recommended to check the compatibility of the installed NIC on the TG server
1261 with software Trex using the `manual <https://trex-tgn.cisco.com/trex/doc/trex_manual.html>`_.
1262
1263 Run the Sample VNF test case
1264 ++++++++++++++++++++++++++++
1265
1266 There is an example of Sample VNF test case ready to be executed in an
1267 OpenStack environment with SR-IOV support: ``samples/vnf_samples/nsut/vfw/
1268 tc_heat_sriov_external_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``.
1269
1270 Install Yardstick using `Install Yardstick (NSB Testing)`_ steps for OpenStack
1271 context.
1272
1273 Create pod file for TG in the yardstick repo folder located in the yardstick
1274 container:
1275
1276 .. note:: The ``ip``, ``user``, ``password`` and ``vpci`` fields should be
1277   changed according to the HW environment used for the testing. Use the
1278   ``lshw -c network -businfo`` command to get the PF PCI address for the ``vpci`` field.
1279
1280 .. literalinclude:: code/single-yardstick-pod.conf
1281    :language: console
1282
1283 Run the Sample vFW RFC2544 SR-IOV TC (``samples/vnf_samples/nsut/vfw/
1284 tc_heat_sriov_external_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``) in the heat
1285 context using steps described in `NS testing - using yardstick CLI`_ section.
1286
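Inside the Yardstick container this corresponds to, for example::

  yardstick -d task start samples/vnf_samples/nsut/vfw/tc_heat_sriov_external_rfc2544_ipv4_1rule_1flow_64B_trex.yaml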
1287
1288 Multi node OpenStack TG and VNF setup (two nodes)
1289 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1290
1291 .. code-block:: console
1292
1293   +----------------------------+                   +----------------------------+
1294   |OpenStack(DevStack)         |                   |OpenStack(DevStack)         |
1295   |                            |                   |                            |
1296   |   +--------------------+   |                   |   +--------------------+   |
1297   |   |sample-VNF VM       |   |                   |   |sample-VNF VM       |   |
1298   |   |                    |   |                   |   |                    |   |
1299   |   |         TG         |   |                   |   |        DUT         |   |
1300   |   |    trafficgen_0    |   |                   |   |       (VNF)        |   |
1301   |   |                    |   |                   |   |                    |   |
1302   |   +--------+  +--------+   |                   |   +--------+  +--------+   |
1303   |   | VF NIC |  | VF NIC |   |                   |   | VF NIC |  | VF NIC |   |
1304   |   +----+---+--+----+---+   |                   |   +-----+--+--+----+---+   |
1305   |        ^           ^       |                   |         ^          ^       |
1306   |        |           |       |                   |         |          |       |
1307   +--------+-----------+-------+                   +---------+----------+-------+
1308   |       VF0         VF1      |                   |        VF0        VF1      |
1309   |        ^           ^       |                   |         ^          ^       |
1310   |        |    SUT2   |       |                   |         |   SUT1   |       |
1311   |        |           +-------+ (PF0)<----->(PF0) +---------+          |       |
1312   |        |                   |                   |                    |       |
1313   |        +-------------------+ (PF1)<----->(PF1) +--------------------+       |
1314   |                            |                   |                            |
1315   +----------------------------+                   +----------------------------+
1316            host2 (compute)                               host1 (controller)
1317
1318
1319 Controller/Compute pre-configuration
1320 ++++++++++++++++++++++++++++++++++++
1321
1322 Pre-configuration of the controller and compute hosts are the same as
1323 described in `Host pre-configuration`_ section.
1324
1325 DevStack configuration
1326 ++++++++++++++++++++++
1327
1328 A reference ``local.conf`` for deploying OpenStack in a multi-host environment
1329 using `Devstack`_ is shown in this section. The ``stable/pike`` branch of
1330 devstack repo should be used during the installation.
1331
1332 .. note:: Update the devstack configuration files by replacing angular brackets
1333   with a short description inside.
1334
1335 .. note:: Use ``lspci | grep Ether`` & ``lspci -n | grep <PCI ADDRESS>``
1336   commands to get device and vendor id of the virtual function (VF).
1337
1338 DevStack configuration file for controller host:
1339
1340 .. literalinclude:: code/multi-devstack-controller-local.conf
1341    :language: console
1342
1343 DevStack configuration file for compute host:
1344
1345 .. literalinclude:: code/multi-devstack-compute-local.conf
1346    :language: console
1347
1348 Start the devstack installation on the controller and compute hosts.
1349
1350 Run the sample vFW TC
1351 +++++++++++++++++++++
1352
1353 Install Yardstick using `Install Yardstick (NSB Testing)`_ steps for OpenStack
1354 context.
1355
1356 Run the sample vFW RFC2544 SR-IOV test case
1357 (``samples/vnf_samples/nsut/vfw/tc_heat_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``)
1358 in the heat context using steps described in
1359 `NS testing - using yardstick CLI`_ section and the following Yardstick command
1360 line arguments:
1361
1362 .. code:: bash
1363
1364   yardstick -d task start --task-args='{"provider": "sriov"}' \
1365   samples/vnf_samples/nsut/vfw/tc_heat_rfc2544_ipv4_1rule_1flow_64B_trex.yaml
1366
1367
1368 Enabling other Traffic generators
1369 ---------------------------------
1370
1371 IxLoad
1372 ^^^^^^
1373
1374 1. Software needed: IxLoadAPI ``<IxLoadTclApi version>Linux64.bin.tgz`` and
1375    ``<IxOS version>Linux64.bin.tar.gz`` (download from the Ixia support site).
1376    Install ``<IxLoadTclApi version>Linux64.bin.tgz`` and
1377    ``<IxOS version>Linux64.bin.tar.gz``.
1378    If the installation was not done inside the container, after installing
1379    the IXIA client, check ``/opt/ixia/ixload/<ver>/bin/ixloadpython`` and make
1380    sure you can run this command inside the Yardstick container. Usually the
1381    user is required to copy or link ``/opt/ixia/python/<ver>/bin/ixiapython``
1382    to ``/usr/bin/ixiapython<ver>`` inside the container.
1383
1384 2. Update ``pod_ixia.yaml`` file with ixia details.
1385
1386   .. code-block:: console
1387
1388     cp <repo>/etc/yardstick/nodes/pod.yaml.nsb.sample.ixia \
1389       etc/yardstick/nodes/pod_ixia.yaml
1390
1391   Config ``pod_ixia.yaml``
1392
1393   .. literalinclude:: code/pod_ixia.yaml
1394      :language: console
1395
1396   For SR-IOV/OVS-DPDK pod files, please refer to `Standalone Virtualization`_
1397   for the OVS-DPDK/SR-IOV configuration.
1398
1399 3. Start IxOS TCL Server (Install 'Ixia IxExplorer IxOS <version>')
1400    You will also need to configure the IxLoad machine to start the IXIA
1401    IxosTclServer. This can be started like so:
1402
1403    * Connect to the IxLoad machine using RDP
1404    * Go to:
1405      ``Start->Programs->Ixia->IxOS->IxOS 8.01-GA-Patch1->Ixia Tcl Server IxOS 8.01-GA-Patch1``
1406      or
1407      ``C:\Program Files (x86)\Ixia\IxOS\8.01-GA-Patch1\ixTclServer.exe``
1408
1409 4. Create a folder ``Results`` in c:\ and share the folder on the network.
1410
1411 5. Execute testcase in samplevnf folder e.g.
1412    ``<repo>/samples/vnf_samples/nsut/vfw/tc_baremetal_http_ixload_1b_Requests-65000_Concurrency.yaml``
1413
1414 IxNetwork
1415 ^^^^^^^^^
1416
1417 IxNetwork testcases use IxNetwork API Python Bindings module, which is
1418 installed as part of the requirements of the project.
1419
1420 1. Update ``pod_ixia.yaml`` file with ixia details.
1421
1422   .. code-block:: console
1423
1424     cp <repo>/etc/yardstick/nodes/pod.yaml.nsb.sample.ixia \
1425     etc/yardstick/nodes/pod_ixia.yaml
1426
1427   Configure ``pod_ixia.yaml``
1428
1429   .. literalinclude:: code/pod_ixia.yaml
1430      :language: console
1431
1432   For SR-IOV/OVS-DPDK pod files, please refer to the above
1433   `Standalone Virtualization`_ section for the OVS-DPDK/SR-IOV configuration.
1434
1435 2. Start IxNetwork TCL Server
1436    You will also need to configure the IxNetwork machine to start the IXIA
1437    IxNetworkTclServer. This can be started like so:
1438
1439     * Connect to the IxNetwork machine using RDP
1440     * Go to:
1441       ``Start->Programs->Ixia->IxNetwork->IxNetwork 7.21.893.14 GA->IxNetworkTclServer``
1442       (or ``IxNetworkApiServer``)
1443
1444 3. Execute testcase in samplevnf folder e.g.
1445    ``<repo>/samples/vnf_samples/nsut/vfw/tc_baremetal_rfc2544_ipv4_1rule_1flow_64B_ixia.yaml``
1446
1447 Spirent Landslide
1448 ^^^^^^^^^^^^^^^^^
1449
1450 In order to use Spirent Landslide for vEPC testcases, some dependencies have
1451 to be preinstalled and properly configured.
1452
1453 - Java
1454
1455     32-bit Java installation is required for the Spirent Landslide TCL API.
1456
1457     | ``$ sudo apt-get install openjdk-8-jdk:i386``
1458
1459     .. important::
1460       Make sure ``LD_LIBRARY_PATH`` is pointing to the 32-bit JRE. For more
1461       details check the `Linux Troubleshooting <http://TAS_HOST_IP/tclapiinstall.html#trouble>`_
1462       section of the installation instructions.
1463
1464 - LsApi (Tcl API module)
1465
1466     Follow Landslide documentation for detailed instructions on Linux
1467     installation of Tcl API and its dependencies
1468     ``http://TAS_HOST_IP/tclapiinstall.html``.
1469     For working with LsApi Python wrapper only steps 1-5 are required.
1470
1471     .. note:: After installation make sure your API home path is included in
1472       ``PYTHONPATH`` environment variable.
1473
1474     .. important::
1475       The current version of the LsApi module has an issue with reading
1476       ``LD_LIBRARY_PATH``. For the LsApi module to initialize correctly, the
1477       following lines (184-186) in ``lsapi.py``:
1478
1479     .. code-block:: python
1480
1481         ldpath = os.environ.get('LD_LIBRARY_PATH', '')
1482         if ldpath == '':
1483          environ['LD_LIBRARY_PATH'] = environ['LD_LIBRARY_PATH'] + ':' + ldpath
1484
1485     should be changed to:
1486
1487     .. code-block:: python
1488
1489         ldpath = os.environ.get('LD_LIBRARY_PATH', '')
1490         if not ldpath == '':
1491                environ['LD_LIBRARY_PATH'] = environ['LD_LIBRARY_PATH'] + ':' + ldpath
1492
1493 .. note:: The Spirent landslide TCL software package needs to be updated in case
1494   the user upgrades to a new version of Spirent landslide software.