1 .. This work is licensed under a Creative Commons Attribution 4.0 International
2 .. License.
3 .. http://creativecommons.org/licenses/by/4.0
4 .. (c) OPNFV, 2016-2019 Intel Corporation.
5
6 ..
7    Convention for heading levels in Yardstick documentation:
8
9    =======  Heading 0 (reserved for the title in a document)
10    -------  Heading 1
11    ^^^^^^^  Heading 2
12    +++++++  Heading 3
13    '''''''  Heading 4
14
15    Avoid deeper levels because they do not render well.
16
17
18 ================
19 NSB Installation
20 ================
21
22 .. _OVS-DPDK: http://docs.openvswitch.org/en/latest/intro/install/dpdk/
.. _devstack: https://docs.openstack.org/devstack/pike/
24 .. _OVS-DPDK-versions: http://docs.openvswitch.org/en/latest/faq/releases/
25
26 Abstract
27 --------
28
29 The steps needed to run Yardstick with NSB testing are:
30
31 * Install Yardstick (NSB Testing).
32 * Setup/reference ``pod.yaml`` describing Test topology.
33 * Create/reference the test configuration yaml file.
34 * Run the test case.
35
36 Prerequisites
37 -------------
38
39 Refer to :doc:`04-installation` for more information on Yardstick
40 prerequisites.
41
42 Several prerequisites are needed for Yardstick (VNF testing):
43
44   * Python Modules: pyzmq, pika.
45   * flex
46   * bison
47   * build-essential
48   * automake
49   * libtool
50   * librabbitmq-dev
51   * rabbitmq-server
52   * collectd
53   * intel-cmt-cat
54
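These can be installed manually if needed; the sketch below shows one possible
way on Ubuntu (package names are indicative, ``nsb_setup.sh`` normally installs
everything for you, and ``collectd`` and ``intel-cmt-cat`` are typically
handled separately by the installer)::

  sudo apt-get install flex bison build-essential automake libtool \
       librabbitmq-dev rabbitmq-server
  pip install pyzmq pika
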
55 Hardware & Software Ingredients
56 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
57
58 SUT requirements:
59
60    ======= ===================
61    Item    Description
62    ======= ===================
63    Memory  Min 20GB
64    NICs    2 x 10G
65    OS      Ubuntu 16.04.3 LTS
66    kernel  4.4.0-34-generic
67    DPDK    17.02
68    ======= ===================
69
70 Boot and BIOS settings:
71
   ============= =================================================
   Boot settings default_hugepagesz=1G hugepagesz=1G hugepages=16
                 hugepagesz=2M hugepages=2048 isolcpus=1-11,22-33
                 nohz_full=1-11,22-33 rcu_nocbs=1-11,22-33
                 iommu=on iommu=pt intel_iommu=on
                 Note: nohz_full and rcu_nocbs are used to disable
                 Linux kernel interrupts on the isolated CPUs
   BIOS          CPU Power and Performance Policy <Performance>
                 CPU C-state Disabled
                 CPU P-state Disabled
                 Enhanced Intel® Speedstep® Tech Disabled
                 Hyper-Threading Technology (If supported) Enabled
                 Virtualization Technology Enabled
                 Intel(R) VT for Direct I/O Enabled
                 Coherency Enabled
                 Turbo Boost Disabled
   ============= =================================================
89
90 Install Yardstick (NSB Testing)
91 -------------------------------
92
93 Yardstick with NSB can be installed using ``nsb_setup.sh``.
The ``nsb_setup.sh`` script allows you to:

1. Install Yardstick in a specified mode: bare metal or container.
   Refer to :doc:`04-installation`.
2. Install package dependencies on remote servers used as traffic generator or
   sample VNF, i.e. install DPDK, sample VNFs, TRex and collectd.
   Add such servers to the ``install-inventory.ini`` file, under either the
   ``yardstick-standalone`` or ``yardstick-baremetal`` server group.
   The script also configures IOMMU, hugepages, open file limits, CPU
   isolation, etc.
3. Build a VM image, either nsb or normal. The nsb VM image is used to run
   Yardstick sample VNF tests, like vFW, vACL, vCGNAPT, etc.
   The normal VM image is used to run Yardstick ping tests in the OpenStack
   context.
4. Add the nsb or normal VM image to OpenStack together with OpenStack
   variables.
107
First, configure the network proxy, either by using the environment variables
or by setting the global environment file.

Set the proxy in the global environment file (``/etc/environment``)::
112
113     http_proxy='http://proxy.company.com:port'
114     https_proxy='http://proxy.company.com:port'
115
116 Set environment variables:
117
118 .. code-block:: console
119
120     export http_proxy='http://proxy.company.com:port'
121     export https_proxy='http://proxy.company.com:port'
122
123 Download the source code and check out the latest stable branch:
124
125 .. code-block:: console
126
127   git clone https://gerrit.opnfv.org/gerrit/yardstick
128   cd yardstick
129   # Switch to latest stable branch
130   git checkout stable/gambia
131
132 Modify the Yardstick installation inventory used by Ansible:
133
.. code-block:: ini

  cat ./ansible/install-inventory.ini
  [jumphost]
  localhost ansible_connection=local

  # The section below is kept only for backward compatibility.
  # It will be removed later.
  [yardstick:children]
  jumphost

  [yardstick-baremetal]
  baremetal ansible_host=192.168.2.51 ansible_connection=ssh

  [yardstick-standalone]
  standalone ansible_host=192.168.2.52 ansible_connection=ssh

  [all:vars]
  # Uncomment credentials below if needed
    ansible_user=root
    ansible_ssh_pass=root
  # ansible_ssh_private_key_file=/root/.ssh/id_rsa
  # When IMAGE_PROPERTY is passed as neither normal nor nsb, set
  # "path_to_img=/path/to/image" to add it to OpenStack
  # path_to_img=/tmp/workspace/yardstick-image.img

  # List of CPUs to be isolated (not used by default)
  # Grub line will be extended with:
  # "isolcpus=<ISOL_CPUS> nohz=on nohz_full=<ISOL_CPUS> rcu_nocbs=<ISOL_CPUS>"
  # ISOL_CPUS=2-27,30-55 # physical CPUs for all NUMA nodes, four CPUs reserved
164
165 .. warning::
166
167    Before running ``nsb_setup.sh`` make sure python is installed on servers
168    added to ``yardstick-standalone`` and ``yardstick-baremetal`` groups.
169
170 .. note::
171
172    SSH access without password needs to be configured for all your nodes
173    defined in ``install-inventory.ini`` file.
174    If you want to use password authentication you need to install ``sshpass``::
175
176      sudo -EH apt-get install sshpass
177
178
179 .. note::
180
181    A VM image built by other means than Yardstick can be added to OpenStack.
182    Uncomment and set correct path to the VM image in the
183    ``install-inventory.ini`` file::
184
185      path_to_img=/tmp/workspace/yardstick-image.img
186
187
188 .. note::
189
190    CPU isolation can be applied to the remote servers, like:
191    ISOL_CPUS=2-27,30-55. Uncomment and modify accordingly in
192    ``install-inventory.ini`` file.
193
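For instance, with isolation enabled the end of ``install-inventory.ini`` might
look like this (the CPU list is only an example, adjust it to your topology):

.. code-block:: ini

  [all:vars]
  # ...
  ISOL_CPUS=2-27,30-55
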
By default, ``nsb_setup.sh`` pulls the Yardstick image based on Ubuntu 16.04
from Docker Hub and starts a container, builds the NSB VM image based on
Ubuntu 16.04, and installs packages on the servers given in the
``yardstick-standalone`` and ``yardstick-baremetal`` host groups.
198
To pull the Yardstick image based on Ubuntu 18.04 instead, run::
200
201     ./nsb_setup.sh -i opnfv/yardstick-ubuntu-18.04:latest
202
To change the default behavior, modify the parameters for ``install.yaml`` in
the ``nsb_setup.sh`` file.

Refer to chapter :doc:`04-installation` for more details on ``install.yaml``
parameters.
208
209 To execute an installation for a **BareMetal** or a **Standalone context**::
210
211     ./nsb_setup.sh
212
213 To execute an installation for an **OpenStack** context::
214
215     ./nsb_setup.sh <path to admin-openrc.sh>
216
217 .. note::
218
   Yardstick may not be operational after a distribution kernel update on a
   host where it was installed earlier. Run ``nsb_setup.sh`` again to resolve
   this.
221
222 .. warning::
223
224    The Yardstick VM image (NSB or normal) cannot be built inside a VM.
225
226 .. warning::
227
   The ``nsb_setup.sh`` script configures hugepages, CPU isolation and IOMMU
   via the GRUB configuration. The servers in the ``yardstick-standalone`` and
   ``yardstick-baremetal`` groups of the ``install-inventory.ini`` file must be
   rebooted to apply those changes.
232
The above commands set up a Docker container with the latest Yardstick code.
To enter the container, execute::
235
236   docker exec -it yardstick bash
237
238 .. note::
239
   You may need to configure the tty in the Docker container to extend the
   command line character length, for example::

     stty rows 58 cols 234
244
The ``nsb_setup.sh`` script will also automatically download all the packages
needed for the NSB testing setup. Refer to chapter :doc:`04-installation` for
more on Docker: :ref:`Install Yardstick using Docker`.
248
249 Bare Metal context example
250 ^^^^^^^^^^^^^^^^^^^^^^^^^^
251
252 Let's assume there are three servers acting as TG, sample VNF DUT and jump host.
253
254 Perform following steps to install NSB:
255
256 1. Clone Yardstick repo to jump host.
2. Add the TG and DUT servers to the ``yardstick-baremetal`` group in the
   ``install-inventory.ini`` file to install NSB and dependencies. Install
   Python on the servers.
3. Start the deployment using the Docker image based on Ubuntu 16.04:
261
262 .. code-block:: console
263
264    ./nsb_setup.sh
265
4. Reboot the bare metal servers.
5. Enter the Yardstick container, modify the pod YAML file and run tests (see
   the example below).
268
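For illustration, a possible session after the reboot (the vFW test case path
is only an example; pick the bare metal test case you actually want to run):

.. code-block:: console

   docker exec -it yardstick bash
   # inside the container, after updating /etc/yardstick/nodes/pod.yaml
   yardstick --debug task start samples/vnf_samples/nsut/vfw/tc_baremetal_rfc2544_ipv4_1rule_1flow_64B_trex.yaml
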
269 Standalone context example for Ubuntu 18
270 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
271
272 Let's assume there are three servers acting as TG, sample VNF DUT and jump host.
273 Ubuntu 18 is installed on all servers.
274
275 Perform following steps to install NSB:
276
277 1. Clone Yardstick repo to jump host.
2. Add the TG server to the ``yardstick-baremetal`` group in the
   ``install-inventory.ini`` file to install NSB and dependencies.
   Add the server where the VM with the sample VNF will be deployed to the
   ``yardstick-standalone`` group in the ``install-inventory.ini`` file.
   The target VM image named ``yardstick-nsb-image.img`` will be placed in
   ``/var/lib/libvirt/images/``.
   Install Python on the servers.
285 3. Modify ``nsb_setup.sh`` on jump host:
286
287 .. code-block:: console
288
289    ansible-playbook \
290    -e IMAGE_PROPERTY='nsb' \
291    -e OS_RELEASE='bionic' \
292    -e INSTALLATION_MODE='container_pull' \
293    -e YARD_IMAGE_ARCH='amd64' ${extra_args} \
294    -i install-inventory.ini install.yaml
295
4. Start the deployment with the Yardstick Docker image based on Ubuntu 18.04:
297
298 .. code-block:: console
299
300    ./nsb_setup.sh -i opnfv/yardstick-ubuntu-18.04:latest -o <openrc_file>
301
5. Reboot the servers.
6. Enter the Yardstick container, modify the pod YAML file and run tests (see
   the example below).
304
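As in the bare metal example, a possible session after the reboot (the SR-IOV
vFW test case shown here is the one referenced later in this guide):

.. code-block:: console

   docker exec -it yardstick bash
   yardstick --debug task start samples/vnf_samples/nsut/vfw/tc_sriov_rfc2544_ipv4_1rule_1flow_64B_trex.yaml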
305
306 System Topology
307 ---------------
308
309 .. code-block:: console
310
311   +----------+              +----------+
312   |          |              |          |
313   |          | (0)----->(0) |          |
314   |    TG1   |              |    DUT   |
315   |          |              |          |
316   |          | (1)<-----(1) |          |
317   +----------+              +----------+
318   trafficgen_0                   vnf
319
320
321 Environment parameters and credentials
322 --------------------------------------
323
324 Configure yardstick.conf
325 ^^^^^^^^^^^^^^^^^^^^^^^^
326
327 If you did not run ``yardstick env influxdb`` inside the container to generate
328 ``yardstick.conf``, then create the config file manually (run inside the
329 container)::
330
331     cp ./etc/yardstick/yardstick.conf.sample /etc/yardstick/yardstick.conf
332     vi /etc/yardstick/yardstick.conf
333
334 Add ``trex_path``, ``trex_client_lib`` and ``bin_path`` to the ``nsb``
335 section:
336
337 .. code-block:: ini
338
339   [DEFAULT]
340   debug = True
341   dispatcher = influxdb
342
343   [dispatcher_influxdb]
344   timeout = 5
345   target = http://{YOUR_IP_HERE}:8086
346   db_name = yardstick
347   username = root
348   password = root
349
350   [nsb]
351   trex_path=/opt/nsb_bin/trex/scripts
352   bin_path=/opt/nsb_bin
353   trex_client_lib=/opt/nsb_bin/trex_client/stl
354
355 Run Yardstick - Network Service Testcases
356 -----------------------------------------
357
358 NS testing - using yardstick CLI
359 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
360
361   See :doc:`04-installation`
362
363 Connect to the Yardstick container::
364
365   docker exec -it yardstick /bin/bash
366
367 If you're running ``heat`` testcases and ``nsb_setup.sh`` was not used::
368
369   source /etc/yardstick/openstack.creds
370
371 In addition to the above, you need to set the ``EXTERNAL_NETWORK`` for
372 OpenStack::
373
374   export EXTERNAL_NETWORK="<openstack public network>"
375
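If you are unsure which network to use, the external networks can usually be
listed with the OpenStack client (assuming the client is installed and the
credentials above have been sourced)::

  openstack network list --external
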
376 Finally, you should be able to run the testcase::
377
378   yardstick --debug task start yardstick/samples/vnf_samples/nsut/<vnf>/<test case>
379
380 Network Service Benchmarking - Bare-Metal
381 -----------------------------------------
382
383 Bare-Metal Config pod.yaml describing Topology
384 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
385
386 Bare-Metal 2-Node setup
387 +++++++++++++++++++++++
388 .. code-block:: console
389
390   +----------+              +----------+
391   |          |              |          |
392   |          | (0)----->(0) |          |
393   |    TG1   |              |    DUT   |
394   |          |              |          |
395   |          | (n)<-----(n) |          |
396   +----------+              +----------+
397   trafficgen_0                   vnf
398
399 Bare-Metal 3-Node setup - Correlated Traffic
400 ++++++++++++++++++++++++++++++++++++++++++++
401 .. code-block:: console
402
403   +----------+              +----------+            +------------+
404   |          |              |          |            |            |
405   |          |              |          |            |            |
406   |          | (0)----->(0) |          |            |    UDP     |
407   |    TG1   |              |    DUT   |            |   Replay   |
408   |          |              |          |            |            |
409   |          |              |          |(1)<---->(0)|            |
410   +----------+              +----------+            +------------+
411   trafficgen_0                   vnf                 trafficgen_1
412
413
414 Bare-Metal Config pod.yaml
415 ^^^^^^^^^^^^^^^^^^^^^^^^^^
416 Before executing Yardstick test cases, make sure that ``pod.yaml`` reflects the
topology and update all the required fields::
418
419     cp <yardstick>/etc/yardstick/nodes/pod.yaml.nsb.sample /etc/yardstick/nodes/pod.yaml
420
421 .. code-block:: YAML
422
423     nodes:
424     -
425         name: trafficgen_0
426         role: TrafficGen
427         ip: 1.1.1.1
428         user: root
429         password: r00t
430         interfaces:
431             xe0:  # logical name from topology.yaml and vnfd.yaml
432                 vpci:      "0000:07:00.0"
433                 driver:    i40e # default kernel driver
434                 dpdk_port_num: 0
435                 local_ip: "152.16.100.20"
436                 netmask:   "255.255.255.0"
437                 local_mac: "00:00:00:00:00:01"
438             xe1:  # logical name from topology.yaml and vnfd.yaml
439                 vpci:      "0000:07:00.1"
440                 driver:    i40e # default kernel driver
441                 dpdk_port_num: 1
442                 local_ip: "152.16.40.20"
443                 netmask:   "255.255.255.0"
444                 local_mac: "00:00:00:00:00:02"
445
446     -
447         name: vnf
448         role: vnf
449         ip: 1.1.1.2
450         user: root
451         password: r00t
452         host: 1.1.1.2 #BM - host == ip, virtualized env - Host - compute node
453         interfaces:
454             xe0:  # logical name from topology.yaml and vnfd.yaml
455                 vpci:      "0000:07:00.0"
456                 driver:    i40e # default kernel driver
457                 dpdk_port_num: 0
458                 local_ip: "152.16.100.19"
459                 netmask:   "255.255.255.0"
460                 local_mac: "00:00:00:00:00:03"
461
462             xe1:  # logical name from topology.yaml and vnfd.yaml
463                 vpci:      "0000:07:00.1"
464                 driver:    i40e # default kernel driver
465                 dpdk_port_num: 1
466                 local_ip: "152.16.40.19"
467                 netmask:   "255.255.255.0"
468                 local_mac: "00:00:00:00:00:04"
469         routing_table:
470         - network: "152.16.100.20"
471           netmask: "255.255.255.0"
472           gateway: "152.16.100.20"
473           if: "xe0"
474         - network: "152.16.40.20"
475           netmask: "255.255.255.0"
476           gateway: "152.16.40.20"
477           if: "xe1"
478         nd_route_tbl:
479         - network: "0064:ff9b:0:0:0:0:9810:6414"
480           netmask: "112"
481           gateway: "0064:ff9b:0:0:0:0:9810:6414"
482           if: "xe0"
483         - network: "0064:ff9b:0:0:0:0:9810:2814"
484           netmask: "112"
485           gateway: "0064:ff9b:0:0:0:0:9810:2814"
486           if: "xe1"
487
488
489 Standalone Virtualization
490 -------------------------
491
The VM can be deployed manually or by Yardstick. If the parameter *vm_deploy*
is set to ``True``, the VM will be deployed by Yardstick. Otherwise the VM
should be deployed manually. Test case example, contexts section::
495
496     contexts:
497      ...
498      vm_deploy: True
499
500
501 SR-IOV
502 ^^^^^^
503
504 SR-IOV Pre-requisites
505 +++++++++++++++++++++
506
On the host where the VM is created:
 1. Create and configure a bridge named ``br-int`` for the VM to connect to
    the external network. Currently this can be done using a VXLAN tunnel.

    Execute the following on the host where the VM is created::
512
513       ip link add type vxlan remote <Jumphost IP> local <DUT IP> id <ID: 10> dstport 4789
514       brctl addbr br-int
515       brctl addif br-int vxlan0
516       ip link set dev vxlan0 up
517       ip addr add <IP#1, like: 172.20.2.1/24> dev br-int
518       ip link set dev br-int up
519
  .. note:: You may need to add extra rules to iptables to forward traffic.
521
522   .. code-block:: console
523
524     iptables -A FORWARD -i br-int -s <network ip address>/<netmask> -j ACCEPT
525     iptables -A FORWARD -o br-int -d <network ip address>/<netmask> -j ACCEPT
526
527   Execute the following on a jump host:
528
529   .. code-block:: console
530
531       ip link add type vxlan remote <DUT IP> local <Jumphost IP> id <ID: 10> dstport 4789
532       ip addr add <IP#2, like: 172.20.2.2/24> dev vxlan0
533       ip link set dev vxlan0 up
534
535   .. note:: Host and jump host are different baremetal servers.
536
 2. Modify the test case management CIDR.
    The IP addresses IP#1, IP#2 and the CIDR must be in the same network.
539
540   .. code-block:: YAML
541
542     servers:
543       vnf_0:
544         network_ports:
545           mgmt:
546             cidr: '1.1.1.7/24'
547
 3. Build the guest image for the VNF to run.
    Most of the sample test cases in Yardstick use a guest image called
    ``yardstick-nsb-image``, which deviates from an Ubuntu Cloud Server image.
    Yardstick has a tool for building this custom image with SampleVNF.
    It is necessary to have ``sudo`` rights to use this tool.

   You may also need to install several additional packages to use this tool,
   by following the commands below::
556
557       sudo apt-get update && sudo apt-get install -y qemu-utils kpartx
558
   This image can be built using the following commands in the directory where
   Yardstick is installed::

      export YARD_IMG_ARCH='amd64'
      echo 'Defaults env_keep += "YARD_IMG_ARCH"' | sudo tee -a /etc/sudoers
      sudo tools/yardstick-img-dpdk-modify tools/ubuntu-server-cloudimg-samplevnf-modify.sh
564
565    For instructions on generating a cloud image using Ansible, refer to
566    :doc:`04-installation`.
567
   .. note:: The VM should be built with a static IP and be accessible from the
      Yardstick host.
570
571
572 SR-IOV Config pod.yaml describing Topology
573 ++++++++++++++++++++++++++++++++++++++++++
574
575 SR-IOV 2-Node setup
576 +++++++++++++++++++
577 .. code-block:: console
578
579                                +--------------------+
580                                |                    |
581                                |                    |
582                                |        DUT         |
583                                |       (VNF)        |
584                                |                    |
585                                +--------------------+
586                                | VF NIC |  | VF NIC |
587                                +--------+  +--------+
588                                      ^          ^
589                                      |          |
590                                      |          |
591   +----------+               +-------------------------+
592   |          |               |       ^          ^      |
593   |          |               |       |          |      |
594   |          | (0)<----->(0) | ------    SUT    |      |
595   |    TG1   |               |                  |      |
596   |          | (n)<----->(n) | -----------------       |
597   |          |               |                         |
598   +----------+               +-------------------------+
599   trafficgen_0                          host
600
601
602
603 SR-IOV 3-Node setup - Correlated Traffic
604 ++++++++++++++++++++++++++++++++++++++++
605 .. code-block:: console
606
607                              +--------------------+
608                              |                    |
609                              |                    |
610                              |        DUT         |
611                              |       (VNF)        |
612                              |                    |
613                              +--------------------+
614                              | VF NIC |  | VF NIC |
615                              +--------+  +--------+
616                                    ^          ^
617                                    |          |
618                                    |          |
619   +----------+               +---------------------+            +--------------+
620   |          |               |     ^          ^    |            |              |
621   |          |               |     |          |    |            |              |
622   |          | (0)<----->(0) |-----           |    |            |     TG2      |
623   |    TG1   |               |         SUT    |    |            | (UDP Replay) |
624   |          |               |                |    |            |              |
625   |          | (n)<----->(n) |                -----| (n)<-->(n) |              |
626   +----------+               +---------------------+            +--------------+
627   trafficgen_0                          host                      trafficgen_1
628
629 Before executing Yardstick test cases, make sure that ``pod.yaml`` reflects the
630 topology and update all the required fields.
631
632 .. code-block:: console
633
634     cp <yardstick>/etc/yardstick/nodes/standalone/trex_bm.yaml.sample /etc/yardstick/nodes/standalone/pod_trex.yaml
635     cp <yardstick>/etc/yardstick/nodes/standalone/host_sriov.yaml /etc/yardstick/nodes/standalone/host_sriov.yaml
636
637 .. note:: Update all the required fields like ip, user, password, pcis, etc...
638
639 SR-IOV Config pod_trex.yaml
640 +++++++++++++++++++++++++++
641
642 .. code-block:: YAML
643
644     nodes:
645     -
646         name: trafficgen_0
647         role: TrafficGen
648         ip: 1.1.1.1
649         user: root
650         password: r00t
651         key_filename: /root/.ssh/id_rsa
652         interfaces:
653             xe0:  # logical name from topology.yaml and vnfd.yaml
654                 vpci:      "0000:07:00.0"
655                 driver:    i40e # default kernel driver
656                 dpdk_port_num: 0
657                 local_ip: "152.16.100.20"
658                 netmask:   "255.255.255.0"
659                 local_mac: "00:00:00:00:00:01"
660             xe1:  # logical name from topology.yaml and vnfd.yaml
661                 vpci:      "0000:07:00.1"
662                 driver:    i40e # default kernel driver
663                 dpdk_port_num: 1
664                 local_ip: "152.16.40.20"
665                 netmask:   "255.255.255.0"
666                 local_mac: "00:00:00:00:00:02"
667
668 SR-IOV Config host_sriov.yaml
669 +++++++++++++++++++++++++++++
670
671 .. code-block:: YAML
672
673     nodes:
674     -
675        name: sriov
676        role: Sriov
677        ip: 192.168.100.101
678        user: ""
679        password: ""
680
681 SR-IOV testcase update:
682 ``<yardstick>/samples/vnf_samples/nsut/vfw/tc_sriov_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``
683
684 Update contexts section
685 '''''''''''''''''''''''
686
687 .. code-block:: YAML
688
689   contexts:
690    - name: yardstick
691      type: Node
692      file: /etc/yardstick/nodes/standalone/pod_trex.yaml
693    - type: StandaloneSriov
694      file: /etc/yardstick/nodes/standalone/host_sriov.yaml
695      name: yardstick
696      vm_deploy: True
697      flavor:
698        images: "/var/lib/libvirt/images/ubuntu.qcow2"
699        ram: 4096
700        extra_specs:
701          hw:cpu_sockets: 1
702          hw:cpu_cores: 6
703          hw:cpu_threads: 2
704        user: "" # update VM username
705        password: "" # update password
706      servers:
707        vnf_0:
708          network_ports:
709            mgmt:
710              cidr: '1.1.1.61/24'  # Update VM IP address, if static, <ip>/<mask> or if dynamic, <start of ip>/<mask>
711            xe0:
712              - uplink_0
713            xe1:
714              - downlink_0
715      networks:
716        uplink_0:
717          phy_port: "0000:05:00.0"
718          vpci: "0000:00:07.0"
719          cidr: '152.16.100.10/24'
720          gateway_ip: '152.16.100.20'
721        downlink_0:
722          phy_port: "0000:05:00.1"
723          vpci: "0000:00:08.0"
724          cidr: '152.16.40.10/24'
725          gateway_ip: '152.16.100.20'
726
727
728 SRIOV configuration options
729 +++++++++++++++++++++++++++
730
The only configuration option available for SR-IOV is *vpci*. It is used as
the base address for the VFs that are created during the SR-IOV test case
execution.
733
734   .. code-block:: yaml+jinja
735
736     networks:
737       uplink_0:
738         phy_port: "0000:05:00.0"
739         vpci: "0000:00:07.0"
740         cidr: '152.16.100.10/24'
741         gateway_ip: '152.16.100.20'
742       downlink_0:
743         phy_port: "0000:05:00.1"
744         vpci: "0000:00:08.0"
745         cidr: '152.16.40.10/24'
746         gateway_ip: '152.16.100.20'
747
748 .. _`VM image properties label`:
749
750 VM image properties
751 '''''''''''''''''''
752
753 VM image properties example under *flavor* section:
754
755   .. code-block:: console
756
757       flavor:
758         images: <path>
759         ram: 8192
760         extra_specs:
761            machine_type: 'pc-i440fx-xenial'
762            hw:cpu_sockets: 1
763            hw:cpu_cores: 6
764            hw:cpu_threads: 2
765            hw_socket: 0
766            cputune: |
767              <cputune>
768                <vcpupin vcpu="0" cpuset="7"/>
769                <vcpupin vcpu="1" cpuset="8"/>
770                ...
771                <vcpupin vcpu="11" cpuset="18"/>
772                <emulatorpin cpuset="11"/>
773              </cputune>
774         user: ""
775         password: ""
776
777 VM image properties description:
778
779   +-------------------------+-------------------------------------------------+
780   | Parameters              | Detail                                          |
781   +=========================+=================================================+
782   | images                  || Path to the VM image generated by              |
783   |                         |  ``nsb_setup.sh``                               |
784   |                         || Default path is ``/var/lib/libvirt/images/``   |
785   |                         || Default file name ``yardstick-nsb-image.img``  |
786   |                         |  or ``yardstick-image.img``                     |
787   +-------------------------+-------------------------------------------------+
788   | ram                     || Amount of RAM to be used for VM                |
789   |                         || Default is 4096 MB                             |
790   +-------------------------+-------------------------------------------------+
791   | hw:cpu_sockets          || Number of sockets provided to the guest VM     |
792   |                         || Default is 1                                   |
793   +-------------------------+-------------------------------------------------+
794   | hw:cpu_cores            || Number of cores provided to the guest VM       |
795   |                         || Default is 2                                   |
796   +-------------------------+-------------------------------------------------+
797   | hw:cpu_threads          || Number of threads provided to the guest VM     |
798   |                         || Default is 2                                   |
799   +-------------------------+-------------------------------------------------+
800   | hw_socket               || Generate vcpu cpuset from given HW socket      |
801   |                         || Default is 0                                   |
802   +-------------------------+-------------------------------------------------+
803   | cputune                 || Maps virtual cpu with logical cpu              |
804   +-------------------------+-------------------------------------------------+
805   | machine_type            || Machine type to be emulated in VM              |
806   |                         || Default is 'pc-i440fx-xenial'                  |
807   +-------------------------+-------------------------------------------------+
808   | user                    || User name to access the VM                     |
809   |                         || Default value is 'root'                        |
810   +-------------------------+-------------------------------------------------+
811   | password                || Password to access the VM                      |
812   +-------------------------+-------------------------------------------------+
813
814
815 OVS-DPDK
816 ^^^^^^^^
817
818 OVS-DPDK Pre-requisites
819 +++++++++++++++++++++++
820
On the host where the VM is created:
 1. Create and configure a bridge named ``br-int`` for the VM to connect to
    the external network. Currently this can be done using a VXLAN tunnel.

    Execute the following on the host where the VM is created:
826
827   .. code-block:: console
828
829       ip link add type vxlan remote <Jumphost IP> local <DUT IP> id <ID: 10> dstport 4789
830       brctl addbr br-int
831       brctl addif br-int vxlan0
832       ip link set dev vxlan0 up
833       ip addr add <IP#1, like: 172.20.2.1/24> dev br-int
834       ip link set dev br-int up
835
  .. note:: You may need to add extra rules to iptables to forward traffic.
837
838   .. code-block:: console
839
840     iptables -A FORWARD -i br-int -s <network ip address>/<netmask> -j ACCEPT
841     iptables -A FORWARD -o br-int -d <network ip address>/<netmask> -j ACCEPT
842
843   Execute the following on a jump host:
844
845   .. code-block:: console
846
847       ip link add type vxlan remote <DUT IP> local <Jumphost IP> id <ID: 10> dstport 4789
848       ip addr add <IP#2, like: 172.20.2.2/24> dev vxlan0
849       ip link set dev vxlan0 up
850
851   .. note:: Host and jump host are different baremetal servers.
852
853  2. Modify test case management CIDR.
854     IP addresses IP#1, IP#2 and CIDR must be in the same network.
855
856   .. code-block:: YAML
857
858     servers:
859       vnf_0:
860         network_ports:
861           mgmt:
862             cidr: '1.1.1.7/24'
863
 3. Build the guest image for the VNF to run.
    Most of the sample test cases in Yardstick use a guest image called
    ``yardstick-nsb-image``, which deviates from an Ubuntu Cloud Server image.
    Yardstick has a tool for building this custom image with SampleVNF.
    It is necessary to have ``sudo`` rights to use this tool.
869
870    You may need to install several additional packages to use this tool, by
871    following the commands below::
872
873       sudo apt-get update && sudo apt-get install -y qemu-utils kpartx
874
   This image can be built using the following commands in the directory where
   Yardstick is installed::

      export YARD_IMG_ARCH='amd64'
      echo 'Defaults env_keep += "YARD_IMG_ARCH"' | sudo tee -a /etc/sudoers
      sudo tools/yardstick-img-dpdk-modify tools/ubuntu-server-cloudimg-samplevnf-modify.sh

   For more details refer to chapter :doc:`04-installation`.

   .. note:: The VM should be built with a static IP and should be accessible
      from the Yardstick host.
886
 4. OVS & DPDK version:

    * OVS 2.7 and DPDK 16.11.1 or above are supported

Refer to the setup instructions at `OVS-DPDK`_ for installation on the host.
892
893 OVS-DPDK Config pod.yaml describing Topology
894 ++++++++++++++++++++++++++++++++++++++++++++
895
896 OVS-DPDK 2-Node setup
897 +++++++++++++++++++++
898
899 .. code-block:: console
900
901                                +--------------------+
902                                |                    |
903                                |                    |
904                                |        DUT         |
905                                |       (VNF)        |
906                                |                    |
907                                +--------------------+
908                                | virtio |  | virtio |
909                                +--------+  +--------+
910                                     ^          ^
911                                     |          |
912                                     |          |
913                                +--------+  +--------+
914                                | vHOST0 |  | vHOST1 |
915   +----------+               +-------------------------+
916   |          |               |       ^          ^      |
917   |          |               |       |          |      |
918   |          | (0)<----->(0) | ------           |      |
919   |    TG1   |               |          SUT     |      |
920   |          |               |       (ovs-dpdk) |      |
921   |          | (n)<----->(n) |------------------       |
922   +----------+               +-------------------------+
923   trafficgen_0                          host
924
925
926 OVS-DPDK 3-Node setup - Correlated Traffic
927 ++++++++++++++++++++++++++++++++++++++++++
928
929 .. code-block:: console
930
931                                +--------------------+
932                                |                    |
933                                |                    |
934                                |        DUT         |
935                                |       (VNF)        |
936                                |                    |
937                                +--------------------+
938                                | virtio |  | virtio |
939                                +--------+  +--------+
940                                     ^          ^
941                                     |          |
942                                     |          |
943                                +--------+  +--------+
944                                | vHOST0 |  | vHOST1 |
945   +----------+               +-------------------------+          +------------+
946   |          |               |       ^          ^      |          |            |
947   |          |               |       |          |      |          |            |
948   |          | (0)<----->(0) | ------           |      |          |    TG2     |
949   |    TG1   |               |          SUT     |      |          |(UDP Replay)|
950   |          |               |      (ovs-dpdk)  |      |          |            |
951   |          | (n)<----->(n) |                  ------ |(n)<-->(n)|            |
952   +----------+               +-------------------------+          +------------+
953   trafficgen_0                          host                       trafficgen_1
954
955
956 Before executing Yardstick test cases, make sure that the ``pod.yaml`` reflects
957 the topology and update all the required fields::
958
959   cp <yardstick>/etc/yardstick/nodes/standalone/trex_bm.yaml.sample /etc/yardstick/nodes/standalone/pod_trex.yaml
960   cp <yardstick>/etc/yardstick/nodes/standalone/host_ovs.yaml /etc/yardstick/nodes/standalone/host_ovs.yaml
961
962 .. note:: Update all the required fields like ip, user, password, pcis, etc...
963
964 OVS-DPDK Config pod_trex.yaml
965 +++++++++++++++++++++++++++++
966
967 .. code-block:: YAML
968
969     nodes:
970     -
971       name: trafficgen_0
972       role: TrafficGen
973       ip: 1.1.1.1
974       user: root
975       password: r00t
976       interfaces:
977           xe0:  # logical name from topology.yaml and vnfd.yaml
978               vpci:      "0000:07:00.0"
979               driver:    i40e # default kernel driver
980               dpdk_port_num: 0
981               local_ip: "152.16.100.20"
982               netmask:   "255.255.255.0"
983               local_mac: "00:00:00:00:00:01"
984           xe1:  # logical name from topology.yaml and vnfd.yaml
985               vpci:      "0000:07:00.1"
986               driver:    i40e # default kernel driver
987               dpdk_port_num: 1
988               local_ip: "152.16.40.20"
989               netmask:   "255.255.255.0"
990               local_mac: "00:00:00:00:00:02"
991
992 OVS-DPDK Config host_ovs.yaml
993 +++++++++++++++++++++++++++++
994
995 .. code-block:: YAML
996
997     nodes:
998     -
999        name: ovs_dpdk
1000        role: OvsDpdk
1001        ip: 192.168.100.101
1002        user: ""
1003        password: ""
1004
1005 ovs_dpdk testcase update:
1006 ``<yardstick>/samples/vnf_samples/nsut/vfw/tc_ovs_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``
1007
1008 Update contexts section
1009 '''''''''''''''''''''''
1010
1011 .. code-block:: YAML
1012
1013   contexts:
1014    - name: yardstick
1015      type: Node
1016      file: /etc/yardstick/nodes/standalone/pod_trex.yaml
1017    - type: StandaloneOvsDpdk
1018      name: yardstick
1019      file: /etc/yardstick/nodes/standalone/pod_ovs.yaml
1020      vm_deploy: True
1021      ovs_properties:
1022        version:
1023          ovs: 2.7.0
1024          dpdk: 16.11.1
1025        pmd_threads: 2
1026        ram:
1027          socket_0: 2048
1028          socket_1: 2048
1029        queues: 4
1030        vpath: "/usr/local"
1031
1032      flavor:
1033        images: "/var/lib/libvirt/images/ubuntu.qcow2"
1034        ram: 4096
1035        extra_specs:
1036          hw:cpu_sockets: 1
1037          hw:cpu_cores: 6
1038          hw:cpu_threads: 2
1039        user: "" # update VM username
1040        password: "" # update password
1041      servers:
1042        vnf_0:
1043          network_ports:
1044            mgmt:
1045              cidr: '1.1.1.61/24'  # Update VM IP address, if static, <ip>/<mask> or if dynamic, <start of ip>/<mask>
1046            xe0:
1047              - uplink_0
1048            xe1:
1049              - downlink_0
1050      networks:
1051        uplink_0:
1052          phy_port: "0000:05:00.0"
1053          vpci: "0000:00:07.0"
1054          cidr: '152.16.100.10/24'
1055          gateway_ip: '152.16.100.20'
1056        downlink_0:
1057          phy_port: "0000:05:00.1"
1058          vpci: "0000:00:08.0"
1059          cidr: '152.16.40.10/24'
1060          gateway_ip: '152.16.100.20'
1061
1062 OVS-DPDK configuration options
1063 ++++++++++++++++++++++++++++++
1064
There are a number of configuration options available for the OVS-DPDK context
in the test case. Mostly they are used for performance tuning.
1067
1068 OVS-DPDK properties:
1069 ''''''''''''''''''''
1070
1071 OVS-DPDK properties example under *ovs_properties* section:
1072
1073   .. code-block:: console
1074
1075       ovs_properties:
1076         version:
1077           ovs: 2.8.1
1078           dpdk: 17.05.2
1079         pmd_threads: 4
1080         pmd_cpu_mask: "0x3c"
1081         ram:
1082          socket_0: 2048
1083          socket_1: 2048
1084         queues: 2
1085         vpath: "/usr/local"
1086         max_idle: 30000
1087         lcore_mask: 0x02
1088         dpdk_pmd-rxq-affinity:
1089           0: "0:2,1:2"
1090           1: "0:2,1:2"
1091           2: "0:3,1:3"
1092           3: "0:3,1:3"
1093         vhost_pmd-rxq-affinity:
1094           0: "0:3,1:3"
1095           1: "0:3,1:3"
1096           2: "0:4,1:4"
1097           3: "0:4,1:4"
1098
1099 OVS-DPDK properties description:
1100
1101   +-------------------------+-------------------------------------------------+
1102   | Parameters              | Detail                                          |
1103   +=========================+=================================================+
1104   | version                 || Version of OVS and DPDK to be installed        |
1105   |                         || There is a relation between OVS and DPDK       |
1106   |                         |  version which can be found at                  |
1107   |                         | `OVS-DPDK-versions`_                            |
1108   |                         || By default OVS: 2.6.0, DPDK: 16.07.2           |
1109   +-------------------------+-------------------------------------------------+
1110   | lcore_mask              || Core bitmask used during DPDK initialization   |
1111   |                         |  where the non-datapath OVS-DPDK threads such   |
1112   |                         |  as handler and revalidator threads run         |
1113   +-------------------------+-------------------------------------------------+
1114   | pmd_cpu_mask            || Core bitmask that sets which cores are used by |
1115   |                         || OVS-DPDK for datapath packet processing        |
1116   +-------------------------+-------------------------------------------------+
1117   | pmd_threads             || Number of PMD threads used by OVS-DPDK for     |
1118   |                         |  datapath                                       |
1119   |                         || This core mask is evaluated in Yardstick       |
1120   |                         || It will be used if pmd_cpu_mask is not given   |
1121   |                         || Default is 2                                   |
1122   +-------------------------+-------------------------------------------------+
1123   | ram                     || Amount of RAM to be used for each socket, MB   |
1124   |                         || Default is 2048 MB                             |
1125   +-------------------------+-------------------------------------------------+
1126   | queues                  || Number of RX queues used for DPDK physical     |
1127   |                         |  interface                                      |
1128   +-------------------------+-------------------------------------------------+
1129   | dpdk_pmd-rxq-affinity   || RX queue assignment to PMD threads for DPDK    |
1130   |                         || e.g.: <port number> : <queue-id>:<core-id>     |
1131   +-------------------------+-------------------------------------------------+
1132   | vhost_pmd-rxq-affinity  || RX queue assignment to PMD threads for vhost   |
1133   |                         || e.g.: <port number> : <queue-id>:<core-id>     |
1134   +-------------------------+-------------------------------------------------+
1135   | vpath                   || User path for openvswitch files                |
1136   |                         || Default is ``/usr/local``                      |
1137   +-------------------------+-------------------------------------------------+
1138   | max_idle                || The maximum time that idle flows will remain   |
1139   |                         |  cached in the datapath, ms                     |
1140   +-------------------------+-------------------------------------------------+
1141
1142
1143 VM image properties
1144 '''''''''''''''''''
1145
The VM image properties are the same as for SR-IOV, see
:ref:`VM image properties label`.
1147
1148
1149 OpenStack with SR-IOV support
1150 -----------------------------
1151
1152 This section describes how to run a Sample VNF test case, using Heat context,
1153 with SR-IOV. It also covers how to install OpenStack in Ubuntu 16.04, using
1154 DevStack, with SR-IOV support.
1155
1156
1157 Single node OpenStack with external TG
1158 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1159
1160 .. code-block:: console
1161
1162                                  +----------------------------+
1163                                  |OpenStack(DevStack)         |
1164                                  |                            |
1165                                  |   +--------------------+   |
1166                                  |   |sample-VNF VM       |   |
1167                                  |   |                    |   |
1168                                  |   |        DUT         |   |
1169                                  |   |       (VNF)        |   |
1170                                  |   |                    |   |
1171                                  |   +--------+  +--------+   |
1172                                  |   | VF NIC |  | VF NIC |   |
1173                                  |   +-----+--+--+----+---+   |
1174                                  |         ^          ^       |
1175                                  |         |          |       |
1176   +----------+                   +---------+----------+-------+
1177   |          |                   |        VF0        VF1      |
1178   |          |                   |         ^          ^       |
1179   |          |                   |         |   SUT    |       |
1180   |    TG    | (PF0)<----->(PF0) +---------+          |       |
1181   |          |                   |                    |       |
1182   |          | (PF1)<----->(PF1) +--------------------+       |
1183   |          |                   |                            |
1184   +----------+                   +----------------------------+
1185   trafficgen_0                                 host
1186
1187
1188 Host pre-configuration
1189 ++++++++++++++++++++++
1190
1191 .. warning:: The following configuration requires sudo access to the system.
   Make sure that your user has this access.
1193
1194 Enable the Intel VT-d or AMD-Vi extension in the BIOS. Some system
1195 manufacturers disable this extension by default.
1196
1197 Activate the Intel VT-d or AMD-Vi extension in the kernel by modifying the GRUB
1198 config file ``/etc/default/grub``.
1199
1200 For the Intel platform::
1201
1202   ...
1203   GRUB_CMDLINE_LINUX_DEFAULT="intel_iommu=on"
1204   ...
1205
1206 For the AMD platform::
1207
1208   ...
1209   GRUB_CMDLINE_LINUX_DEFAULT="amd_iommu=on"
1210   ...
1211
1212 Update the grub configuration file and restart the system:
1213
1214 .. warning:: The following command will reboot the system.
1215
1216 .. code:: bash
1217
1218   sudo update-grub
1219   sudo reboot
1220
1221 Make sure the extension has been enabled::
1222
1223   sudo journalctl -b 0 | grep -e IOMMU -e DMAR
1224
1225   Feb 06 14:50:14 hostname kernel: ACPI: DMAR 0x000000006C406000 0001E0 (v01 INTEL  S2600WF  00000001 INTL 20091013)
1226   Feb 06 14:50:14 hostname kernel: DMAR: IOMMU enabled
1227   Feb 06 14:50:14 hostname kernel: DMAR: Host address width 46
1228   Feb 06 14:50:14 hostname kernel: DMAR: DRHD base: 0x000000d37fc000 flags: 0x0
1229   Feb 06 14:50:14 hostname kernel: DMAR: dmar0: reg_base_addr d37fc000 ver 1:0 cap 8d2078c106f0466 ecap f020de
1230   Feb 06 14:50:14 hostname kernel: DMAR: DRHD base: 0x000000e0ffc000 flags: 0x0
1231   Feb 06 14:50:14 hostname kernel: DMAR: dmar1: reg_base_addr e0ffc000 ver 1:0 cap 8d2078c106f0466 ecap f020de
1232   Feb 06 14:50:14 hostname kernel: DMAR: DRHD base: 0x000000ee7fc000 flags: 0x0
1233
1234 .. TODO: Refer to the yardstick installation guide for proxy set up
1235
1236 Setup system proxy (if needed). Add the following configuration into the
1237 ``/etc/environment`` file:
1238
1239 .. note:: The proxy server name/port and IPs should be changed according to
1240   actual/current proxy configuration in the lab.
1241
1242 .. code:: bash
1243
1244   export http_proxy=http://proxy.company.com:port
1245   export https_proxy=http://proxy.company.com:port
1246   export ftp_proxy=http://proxy.company.com:port
1247   export no_proxy=localhost,127.0.0.1,company.com,<IP-OF-HOST1>,<IP-OF-HOST2>,...
1248   export NO_PROXY=localhost,127.0.0.1,company.com,<IP-OF-HOST1>,<IP-OF-HOST2>,...
1249
1250 Upgrade the system:
1251
1252 .. code:: bash
1253
1254   sudo -EH apt-get update
1255   sudo -EH apt-get upgrade
1256   sudo -EH apt-get dist-upgrade
1257
Install the dependencies needed for DevStack:
1259
1260 .. code:: bash
1261
1262   sudo -EH apt-get install python python-dev python-pip
1263
1264 Setup SR-IOV ports on the host:
1265
.. note:: The ``enp24s0f0`` and ``enp24s0f1`` interfaces are physical function
  (PF) interfaces on the host and ``enp24s0f3`` is a public interface used in
  OpenStack, so the interface names should be changed according to the HW
  environment used for testing.
1270
1271 .. code:: bash
1272
1273   sudo ip link set dev enp24s0f0 up
1274   sudo ip link set dev enp24s0f1 up
1275   sudo ip link set dev enp24s0f3 up
1276
1277   # Create VFs on PF
1278   echo 2 | sudo tee /sys/class/net/enp24s0f0/device/sriov_numvfs
1279   echo 2 | sudo tee /sys/class/net/enp24s0f1/device/sriov_numvfs
1280
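Optionally, verify that the virtual functions were created (the interface name
below is the same example name used above):

.. code:: bash

  cat /sys/class/net/enp24s0f0/device/sriov_numvfs   # expected to report 2
  lspci | grep "Virtual Function"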
1281
1282 DevStack installation
1283 +++++++++++++++++++++
1284
If you want to try out NSB, but don't have OpenStack set up, you can use
`Devstack`_ to install OpenStack on a host. Please note that the
``stable/pike`` branch of the devstack repo should be used during the
installation. The required ``local.conf`` configuration file is described
below.
1289
1290 DevStack configuration file:
1291
.. note:: Update the devstack configuration file by replacing angular brackets
  with a short description inside.
1294
1295 .. note:: Use ``lspci | grep Ether`` & ``lspci -n | grep <PCI ADDRESS>``
1296   commands to get device and vendor id of the virtual function (VF).
1297
1298 .. literalinclude:: code/single-devstack-local.conf
1299    :language: ini
1300
1301 Start the devstack installation on a host.
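A minimal sketch of the deployment itself, assuming the ``local.conf`` shown
above has been prepared (clone URL and branch availability may differ, check
the `devstack`_ documentation):

.. code:: bash

  git clone https://opendev.org/openstack/devstack
  cd devstack
  git checkout stable/pike   # or the corresponding EOL tag if the branch is gone
  # place the local.conf file in this directory, then run as a non-root user:
  ./stack.sh
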
1302
1303 TG host configuration
1304 +++++++++++++++++++++
1305
Yardstick automatically installs and configures the Trex traffic generator on
the TG host based on the provided POD file (see below). However, it is
recommended to check the compatibility of the NIC installed on the TG server
with the Trex software using the
`manual <https://trex-tgn.cisco.com/trex/doc/trex_manual.html>`_.
1310
1311 Run the Sample VNF test case
1312 ++++++++++++++++++++++++++++
1313
1314 There is an example of Sample VNF test case ready to be executed in an
1315 OpenStack environment with SR-IOV support: ``samples/vnf_samples/nsut/vfw/
1316 tc_heat_sriov_external_rfc2544_ipv4_1rule_1flow_trex.yaml``.
1317
1318 Install Yardstick using `Install Yardstick (NSB Testing)`_ steps for OpenStack
1319 context.
1320
1321 Create pod file for TG in the yardstick repo folder located in the yardstick
1322 container:
1323
.. note:: The ``ip``, ``user``, ``password`` and ``vpci`` fields should be
  changed according to the HW environment used for the testing. Use the
  ``lshw -c network -businfo`` command to get the PF PCI address for the
  ``vpci`` field.
1327
1328 .. literalinclude:: code/single-yardstick-pod.conf
1329    :language: ini
1330
1331 Run the Sample vFW RFC2544 SR-IOV TC (``samples/vnf_samples/nsut/vfw/
1332 tc_heat_sriov_external_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``) in the heat
1333 context using steps described in `NS testing - using yardstick CLI`_ section.
1334
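For reference, the resulting command inside the Yardstick container would look
something like this (task arguments depend on your setup)::

  yardstick --debug task start samples/vnf_samples/nsut/vfw/tc_heat_sriov_external_rfc2544_ipv4_1rule_1flow_64B_trex.yaml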
1335
1336 Multi node OpenStack TG and VNF setup (two nodes)
1337 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1338
1339 .. code-block:: console
1340
1341   +----------------------------+                   +----------------------------+
1342   |OpenStack(DevStack)         |                   |OpenStack(DevStack)         |
1343   |                            |                   |                            |
1344   |   +--------------------+   |                   |   +--------------------+   |
1345   |   |sample-VNF VM       |   |                   |   |sample-VNF VM       |   |
1346   |   |                    |   |                   |   |                    |   |
1347   |   |         TG         |   |                   |   |        DUT         |   |
1348   |   |    trafficgen_0    |   |                   |   |       (VNF)        |   |
1349   |   |                    |   |                   |   |                    |   |
1350   |   +--------+  +--------+   |                   |   +--------+  +--------+   |
1351   |   | VF NIC |  | VF NIC |   |                   |   | VF NIC |  | VF NIC |   |
1352   |   +----+---+--+----+---+   |                   |   +-----+--+--+----+---+   |
1353   |        ^           ^       |                   |         ^          ^       |
1354   |        |           |       |                   |         |          |       |
1355   +--------+-----------+-------+                   +---------+----------+-------+
1356   |       VF0         VF1      |                   |        VF0        VF1      |
1357   |        ^           ^       |                   |         ^          ^       |
1358   |        |    SUT2   |       |                   |         |   SUT1   |       |
1359   |        |           +-------+ (PF0)<----->(PF0) +---------+          |       |
1360   |        |                   |                   |                    |       |
1361   |        +-------------------+ (PF1)<----->(PF1) +--------------------+       |
1362   |                            |                   |                            |
1363   +----------------------------+                   +----------------------------+
1364            host2 (compute)                               host1 (controller)
1365
1366
1367 Controller/Compute pre-configuration
1368 ++++++++++++++++++++++++++++++++++++
1369
1370 Pre-configuration of the controller and compute hosts are the same as
1371 described in `Host pre-configuration`_ section.
1372
1373 DevStack configuration
1374 ++++++++++++++++++++++
1375
1376 A reference ``local.conf`` for deploying OpenStack in a multi-host environment
1377 using `Devstack`_ is shown in this section. The ``stable/pike`` branch of
1378 devstack repo should be used during the installation.
1379
.. note:: Update the devstack configuration files by replacing angular brackets
  with a short description inside.
1382
1383 .. note:: Use ``lspci | grep Ether`` & ``lspci -n | grep <PCI ADDRESS>``
1384   commands to get device and vendor id of the virtual function (VF).
1385
1386 DevStack configuration file for controller host:
1387
1388 .. literalinclude:: code/multi-devstack-controller-local.conf
1389    :language: ini
1390
1391 DevStack configuration file for compute host:
1392
1393 .. literalinclude:: code/multi-devstack-compute-local.conf
1394    :language: ini
1395
1396 Start the devstack installation on the controller and compute hosts.
1397
1398 Run the sample vFW TC
1399 +++++++++++++++++++++
1400
1401 Install Yardstick using `Install Yardstick (NSB Testing)`_ steps for OpenStack
1402 context.
1403
1404 Run the sample vFW RFC2544 SR-IOV test case
1405 (``samples/vnf_samples/nsut/vfw/tc_heat_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``)
1406 in the heat context using steps described in
1407 `NS testing - using yardstick CLI`_ section and the following Yardstick command
1408 line arguments:
1409
1410 .. code:: bash
1411
1412   yardstick -d task start --task-args='{"provider": "sriov"}' \
1413   samples/vnf_samples/nsut/vfw/tc_heat_rfc2544_ipv4_1rule_1flow_64B_trex.yaml
1414
1415
1416 Enabling other Traffic generators
1417 ---------------------------------
1418
1419 IxLoad
1420 ^^^^^^
1421
1. Software needed: IxLoadAPI ``<IxLoadTclApi version>Linux64.bin.tgz`` and
   ``<IxOS version>Linux64.bin.tar.gz`` (download from the Ixia support site).
   Install ``<IxLoadTclApi version>Linux64.bin.tgz`` and
   ``<IxOS version>Linux64.bin.tar.gz``.
   If the installation was not done inside the container, after installing
   the IXIA client, check ``/opt/ixia/ixload/<ver>/bin/ixloadpython`` and make
   sure you can run this command inside the yardstick container. Usually the
   user is required to copy or link ``/opt/ixia/python/<ver>/bin/ixiapython``
   to ``/usr/bin/ixiapython<ver>`` inside the container.
1431
1432 2. Update ``pod_ixia.yaml`` file with ixia details.
1433
1434   .. code-block:: console
1435
1436     cp <repo>/etc/yardstick/nodes/pod.yaml.nsb.sample.ixia \
1437       etc/yardstick/nodes/pod_ixia.yaml
1438
1439   Config ``pod_ixia.yaml``
1440
1441   .. literalinclude:: code/pod_ixia.yaml
1442      :language: yaml
1443
  For SR-IOV/OVS-DPDK pod files, please refer to `Standalone Virtualization`_
  for the OVS-DPDK/SR-IOV configuration.
1446
1447 3. Start IxOS TCL Server (Install 'Ixia IxExplorer IxOS <version>')
1448    You will also need to configure the IxLoad machine to start the IXIA
1449    IxosTclServer. This can be started like so:
1450
1451    * Connect to the IxLoad machine using RDP
1452    * Go to:
1453      ``Start->Programs->Ixia->IxOS->IxOS 8.01-GA-Patch1->Ixia Tcl Server IxOS 8.01-GA-Patch1``
1454      or
1455      ``C:\Program Files (x86)\Ixia\IxOS\8.01-GA-Patch1\ixTclServer.exe``
1456
4. Create a folder ``Results`` in ``C:\`` and share the folder on the network.
1458
5. Execute a testcase from the samplevnf folder, e.g.
   ``<repo>/samples/vnf_samples/nsut/vfw/tc_baremetal_http_ixload_1b_Requests-65000_Concurrency.yaml``
   (see the command sketch below).
1461
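A sketch of the invocation from inside the Yardstick container (the testcase
path is the example named above)::

  yardstick --debug task start <repo>/samples/vnf_samples/nsut/vfw/tc_baremetal_http_ixload_1b_Requests-65000_Concurrency.yaml
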
1462 IxNetwork
1463 ^^^^^^^^^
1464
1465 IxNetwork testcases use IxNetwork API Python Bindings module, which is
1466 installed as part of the requirements of the project.
1467
1468 1. Update ``pod_ixia.yaml`` file with ixia details.
1469
1470   .. code-block:: console
1471
1472     cp <repo>/etc/yardstick/nodes/pod.yaml.nsb.sample.ixia \
1473     etc/yardstick/nodes/pod_ixia.yaml
1474
1475   Configure ``pod_ixia.yaml``
1476
1477   .. literalinclude:: code/pod_ixia.yaml
1478      :language: yaml
1479
  For SR-IOV/OVS-DPDK pod files, please refer to `Standalone Virtualization`_
  above for the OVS-DPDK/SR-IOV configuration.
1482
1483 2. Start IxNetwork TCL Server
1484    You will also need to configure the IxNetwork machine to start the IXIA
1485    IxNetworkTclServer. This can be started like so:
1486
1487     * Connect to the IxNetwork machine using RDP
1488     * Go to:
1489       ``Start->Programs->Ixia->IxNetwork->IxNetwork 7.21.893.14 GA->IxNetworkTclServer``
1490       (or ``IxNetworkApiServer``)
1491
1492 3. Execute testcase in samplevnf folder e.g.
1493    ``<repo>/samples/vnf_samples/nsut/vfw/tc_baremetal_rfc2544_ipv4_1rule_1flow_64B_ixia.yaml``
1494
1495 Spirent Landslide
1496 -----------------
1497
1498 In order to use Spirent Landslide for vEPC testcases, some dependencies have
1499 to be preinstalled and properly configured.
1500
1501 - Java
1502
1503     32-bit Java installation is required for the Spirent Landslide TCL API.
1504
1505     | ``$ sudo apt-get install openjdk-8-jdk:i386``
1506
1507     .. important::
1508       Make sure ``LD_LIBRARY_PATH`` is pointing to 32-bit JRE. For more details
      check the `Linux Troubleshooting <http://TAS_HOST_IP/tclapiinstall.html#trouble>`_
1510       section of installation instructions.
1511
1512 - LsApi (Tcl API module)
1513
1514     Follow Landslide documentation for detailed instructions on Linux
1515     installation of Tcl API and its dependencies
1516     ``http://TAS_HOST_IP/tclapiinstall.html``.
1517     For working with LsApi Python wrapper only steps 1-5 are required.
1518
1519     .. note:: After installation make sure your API home path is included in
1520       ``PYTHONPATH`` environment variable.
1521
1522     .. important::
      The current version of the LsApi module has an issue with reading
      ``LD_LIBRARY_PATH``. For the LsApi module to initialize correctly, the
      following lines (184-186) in lsapi.py
1526
1527     .. code-block:: python
1528
1529         ldpath = os.environ.get('LD_LIBRARY_PATH', '')
1530         if ldpath == '':
1531          environ['LD_LIBRARY_PATH'] = environ['LD_LIBRARY_PATH'] + ':' + ldpath
1532
1533     should be changed to:
1534
1535     .. code-block:: python
1536
1537         ldpath = os.environ.get('LD_LIBRARY_PATH', '')
1538         if not ldpath == '':
1539                environ['LD_LIBRARY_PATH'] = environ['LD_LIBRARY_PATH'] + ':' + ldpath
1540
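If you prefer to apply this change non-interactively, something along these
lines would work (the path to ``lsapi.py`` is a placeholder)::

  sudo sed -i "s/if ldpath == '':/if not ldpath == '':/" /path/to/lsapi.py
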
.. note:: The Spirent Landslide TCL software package needs to be updated in
  case the user upgrades to a new version of the Spirent Landslide software.