1 .. This work is licensed under a Creative Commons Attribution 4.0 International
2 .. License.
3 .. http://creativecommons.org/licenses/by/4.0
4 .. (c) OPNFV, 2016-2019 Intel Corporation.
5
6 ..
7    Convention for heading levels in Yardstick documentation:
8
9    =======  Heading 0 (reserved for the title in a document)
10    -------  Heading 1
11    ^^^^^^^  Heading 2
12    +++++++  Heading 3
13    '''''''  Heading 4
14
15    Avoid deeper levels because they do not render well.
16
17
18 ================
19 NSB Installation
20 ================
21
22 .. _OVS-DPDK: http://docs.openvswitch.org/en/latest/intro/install/dpdk/
.. _devstack: https://docs.openstack.org/devstack/pike/
24 .. _OVS-DPDK-versions: http://docs.openvswitch.org/en/latest/faq/releases/
25
26 Abstract
27 --------
28
29 The steps needed to run Yardstick with NSB testing are:
30
31 * Install Yardstick (NSB Testing).
* Setup/reference a ``pod.yaml`` describing the test topology.
33 * Create/reference the test configuration yaml file.
34 * Run the test case.
35
36 Prerequisites
37 -------------
38
39 Refer to :doc:`04-installation` for more information on Yardstick
40 prerequisites.
41
42 Several prerequisites are needed for Yardstick (VNF testing):
43
44   * Python Modules: pyzmq, pika.
45   * flex
46   * bison
47   * build-essential
48   * automake
49   * libtool
50   * librabbitmq-dev
51   * rabbitmq-server
52   * collectd
53   * intel-cmt-cat
54
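On Ubuntu these can typically be installed with ``apt`` and ``pip``; a minimal
sketch using the package and module names listed above (availability of some
packages, e.g. ``intel-cmt-cat``, depends on the distribution release):

.. code-block:: console

   sudo apt-get update
   sudo apt-get install -y flex bison build-essential automake libtool \
        librabbitmq-dev rabbitmq-server collectd intel-cmt-cat
   pip install pyzmq pika
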
55 Hardware & Software Ingredients
56 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
57
58 SUT requirements:
59
60    ======= ===================
61    Item    Description
62    ======= ===================
63    Memory  Min 20GB
64    NICs    2 x 10G
65    OS      Ubuntu 16.04.3 LTS
66    kernel  4.4.0-34-generic
67    DPDK    17.02
68    ======= ===================
69
70 Boot and BIOS settings:
71
72    ============= =================================================
73    Boot settings default_hugepagesz=1G hugepagesz=1G hugepages=16
74                  hugepagesz=2M hugepages=2048 isolcpus=1-11,22-33
75                  nohz_full=1-11,22-33 rcu_nocbs=1-11,22-33
76                  iommu=on iommu=pt intel_iommu=on
                 Note: nohz_full and rcu_nocbs are used to
                 disable Linux kernel interrupts
79    BIOS          CPU Power and Performance Policy <Performance>
80                  CPU C-state Disabled
81                  CPU P-state Disabled
                 Enhanced Intel® Speedstep® Tech Disabled
83                  Hyper-Threading Technology (If supported) Enabled
                 Virtualization Technology Enabled
85                  Intel(R) VT for Direct I/O Enabled
86                  Coherency Enabled
87                  Turbo Boost Disabled
88    ============= =================================================
89
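As a sketch, the boot settings above would normally be applied through the
kernel command line, e.g. via ``/etc/default/grub`` (the values below are the
example values from the table and must be adapted to the SUT CPU topology):

.. code-block:: console

   # append to /etc/default/grub (example values from the table above)
   GRUB_CMDLINE_LINUX="default_hugepagesz=1G hugepagesz=1G hugepages=16 \
   hugepagesz=2M hugepages=2048 isolcpus=1-11,22-33 nohz_full=1-11,22-33 \
   rcu_nocbs=1-11,22-33 iommu=on iommu=pt intel_iommu=on"
   # regenerate the GRUB configuration and reboot to apply
   sudo update-grub
   sudo reboot
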
90 Install Yardstick (NSB Testing)
91 -------------------------------
92
93 Yardstick with NSB can be installed using ``nsb_setup.sh``.
The ``nsb_setup.sh`` script allows you to:
95
1. Install Yardstick in the specified mode: bare metal or container.
   Refer to :doc:`04-installation`.
2. Install package dependencies on the remote servers used as traffic
   generators or sample VNFs: DPDK, the sample VNFs, TREX and collectd are
   installed. Add such servers to the ``install-inventory.ini`` file, in
   either the ``yardstick-standalone`` or ``yardstick-baremetal`` server
   group. The script also configures IOMMU, hugepages, open file limits,
   CPU isolation, etc. on those servers.
3. Build a VM image, either ``nsb`` or ``normal``. The ``nsb`` VM image is
   used to run Yardstick sample VNF tests, like vFW, vACL, vCGNAPT, etc.
   The ``normal`` VM image is used to run Yardstick ping tests in the
   OpenStack context.
4. Add the ``nsb`` or ``normal`` VM image to OpenStack, together with the
   OpenStack variables.
107
First, configure the network proxy, either by using environment variables or
by setting the global environment file.

Set the proxy in the global environment file::
112
113     http_proxy='http://proxy.company.com:port'
114     https_proxy='http://proxy.company.com:port'
115
116 Set environment variables:
117
118 .. code-block:: console
119
120     export http_proxy='http://proxy.company.com:port'
121     export https_proxy='http://proxy.company.com:port'
122
123 Download the source code and check out the latest stable branch:
124
125 .. code-block:: console
126
127   git clone https://gerrit.opnfv.org/gerrit/yardstick
128   cd yardstick
129   # Switch to latest stable branch
130   git checkout stable/gambia
131
132 Modify the Yardstick installation inventory used by Ansible:
133
134 .. code-block:: ini
135
136   cat ./ansible/install-inventory.ini
137   [jumphost]
138   localhost ansible_connection=local
139
  # The section below is only present for backward compatibility.
  # It will be removed later.
142   [yardstick:children]
143   jumphost
144
145   [yardstick-baremetal]
146   baremetal ansible_host=192.168.2.51 ansible_connection=ssh
147
148   [yardstick-standalone]
149   standalone ansible_host=192.168.2.52 ansible_connection=ssh
150
151   [all:vars]
152   # Uncomment credentials below if needed
153     ansible_user=root
154     ansible_ssh_pass=root
155   # ansible_ssh_private_key_file=/root/.ssh/id_rsa
  # When IMG_PROPERTY is passed as neither normal nor nsb, set
  # "path_to_vm=/path/to/image" to add it to OpenStack
158   # path_to_img=/tmp/workspace/yardstick-image.img
159
160   # List of CPUs to be isolated (not used by default)
161   # Grub line will be extended with:
  # "isolcpus=<ISOL_CPUS> nohz=on nohz_full=<ISOL_CPUS> rcu_nocbs=<ISOL_CPUS>"
  # ISOL_CPUS=2-27,30-55 # physical CPUs for all NUMA nodes, four CPUs reserved
164
165 .. warning::
166
167    Before running ``nsb_setup.sh`` make sure python is installed on servers
168    added to ``yardstick-standalone`` and ``yardstick-baremetal`` groups.
169
170 .. note::
171
   SSH access without a password needs to be configured for all nodes defined
   in the ``install-inventory.ini`` file.
   If you want to use password authentication, you need to install ``sshpass``::
175
176      sudo -EH apt-get install sshpass
177
178
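One common way to enable password-less SSH from the jump host is
``ssh-copy-id``; a sketch using the example addresses from the inventory
above:

.. code-block:: console

   ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa   # skip if a key already exists
   ssh-copy-id root@192.168.2.51
   ssh-copy-id root@192.168.2.52
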
179 .. note::
180
   A VM image built by means other than Yardstick can be added to OpenStack.
   Uncomment and set the correct path to the VM image in the
   ``install-inventory.ini`` file::
184
185      path_to_img=/tmp/workspace/yardstick-image.img
186
187
188 .. note::
189
   CPU isolation can be applied to the remote servers, e.g.:
   ``ISOL_CPUS=2-27,30-55``. Uncomment and modify it accordingly in the
   ``install-inventory.ini`` file.
193
By default, ``nsb_setup.sh`` pulls the Yardstick image based on Ubuntu 16.04
from Docker Hub, starts a container, builds the NSB VM image based on
Ubuntu 16.04 and installs packages on the servers given in the
``yardstick-standalone`` and ``yardstick-baremetal`` host groups.
198
To pull a Yardstick image based on Ubuntu 18.04, run::
200
201     ./nsb_setup.sh -i opnfv/yardstick-ubuntu-18.04:latest
202
To change the default behavior, modify the parameters for ``install.yaml`` in
the ``nsb_setup.sh`` file.
205
Refer to chapter :doc:`04-installation` for more details on ``install.yaml``
207 parameters.
208
209 To execute an installation for a **BareMetal** or a **Standalone context**::
210
211     ./nsb_setup.sh
212
213 To execute an installation for an **OpenStack** context::
214
215     ./nsb_setup.sh <path to admin-openrc.sh>
216
217 .. note::
218
   Yardstick may not be operational after a distribution Linux kernel update,
   if it was installed before the update. Run ``nsb_setup.sh`` again to
   resolve this.
221
222 .. warning::
223
224    The Yardstick VM image (NSB or normal) cannot be built inside a VM.
225
226 .. warning::
227
   ``nsb_setup.sh`` configures hugepages, CPU isolation and IOMMU via the GRUB
   configuration. A reboot of the servers in the ``yardstick-standalone`` and
   ``yardstick-baremetal`` groups of the ``install-inventory.ini`` file is
   required to apply those changes.
232
The above commands set up Docker with the latest Yardstick code. To enter the
container, execute::
235
236   docker exec -it yardstick bash
237
238 .. note::
239
   You may need to configure the tty in the Docker container to extend the
   command line length, for example::

      stty rows 58 cols 234
244
The setup also automatically downloads all the packages needed for NSB
testing. Refer to chapter :doc:`04-installation` for more on Docker:
:ref:`Install Yardstick using Docker`.
248
249 Bare Metal context example
250 ^^^^^^^^^^^^^^^^^^^^^^^^^^
251
252 Let's assume there are three servers acting as TG, sample VNF DUT and jump host.
253
Perform the following steps to install NSB:
255
1. Clone the Yardstick repo to the jump host.
2. Add the TG and DUT servers to the ``yardstick-baremetal`` group in the
   ``install-inventory.ini`` file to install NSB and its dependencies.
   Install Python on the servers.
3. Start the deployment using the Docker image based on Ubuntu 16.04:
261
262 .. code-block:: console
263
264    ./nsb_setup.sh
265
4. Reboot the bare metal servers.
5. Enter the Yardstick container, modify the pod yaml file and run the tests
   (see the example below).
268
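A minimal sketch of step 5, assuming the vFW sample test case and the default
pod file location inside the container (adjust the paths and test case name to
your setup):

.. code-block:: console

   docker exec -it yardstick bash
   vi /etc/yardstick/nodes/pod.yaml
   yardstick --debug task start yardstick/samples/vnf_samples/nsut/vfw/<test case>
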
269 Standalone context example for Ubuntu 18
270 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
271
272 Let's assume there are three servers acting as TG, sample VNF DUT and jump host.
273 Ubuntu 18 is installed on all servers.
274
Perform the following steps to install NSB:
276
1. Clone the Yardstick repo to the jump host.
2. Add the TG server to the ``yardstick-baremetal`` group in the
   ``install-inventory.ini`` file to install NSB and its dependencies.
   Add the server where the VM with the sample VNF will be deployed to the
   ``yardstick-standalone`` group in the ``install-inventory.ini`` file.
   The target VM image named ``yardstick-nsb-image.img`` will be placed in
   ``/var/lib/libvirt/images/``.
   Install Python on the servers.
3. Modify ``nsb_setup.sh`` on the jump host:
286
287 .. code-block:: console
288
289    ansible-playbook \
290    -e IMAGE_PROPERTY='nsb' \
291    -e OS_RELEASE='bionic' \
292    -e INSTALLATION_MODE='container_pull' \
293    -e YARD_IMAGE_ARCH='amd64' ${extra_args} \
294    -i install-inventory.ini install.yaml
295
4. Start the deployment with the Yardstick Docker image based on Ubuntu 18.04:
297
298 .. code-block:: console
299
300    ./nsb_setup.sh -i opnfv/yardstick-ubuntu-18.04:latest -o <openrc_file>
301
5. Reboot the servers.
6. Enter the Yardstick container, modify the pod yaml file and run the tests.
304
305
306 System Topology
307 ---------------
308
309 .. code-block:: console
310
311   +----------+              +----------+
312   |          |              |          |
313   |          | (0)----->(0) |          |
314   |    TG1   |              |    DUT   |
315   |          |              |          |
316   |          | (1)<-----(1) |          |
317   +----------+              +----------+
318   trafficgen_0                   vnf
319
320
321 Environment parameters and credentials
322 --------------------------------------
323
324 Configure yardstick.conf
325 ^^^^^^^^^^^^^^^^^^^^^^^^
326
327 If you did not run ``yardstick env influxdb`` inside the container to generate
328 ``yardstick.conf``, then create the config file manually (run inside the
329 container)::
330
331     cp ./etc/yardstick/yardstick.conf.sample /etc/yardstick/yardstick.conf
332     vi /etc/yardstick/yardstick.conf
333
334 Add ``trex_path``, ``trex_client_lib`` and ``bin_path`` to the ``nsb``
335 section:
336
337 .. code-block:: ini
338
339   [DEFAULT]
340   debug = True
341   dispatcher = influxdb
342
343   [dispatcher_influxdb]
344   timeout = 5
345   target = http://{YOUR_IP_HERE}:8086
346   db_name = yardstick
347   username = root
348   password = root
349
350   [nsb]
351   trex_path=/opt/nsb_bin/trex/scripts
352   bin_path=/opt/nsb_bin
353   trex_client_lib=/opt/nsb_bin/trex_client/stl
354
355 Run Yardstick - Network Service Testcases
356 -----------------------------------------
357
358 NS testing - using yardstick CLI
359 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
360
361   See :doc:`04-installation`
362
363 Connect to the Yardstick container::
364
365   docker exec -it yardstick /bin/bash
366
367 If you're running ``heat`` testcases and ``nsb_setup.sh`` was not used::
368
369   source /etc/yardstick/openstack.creds
370
371 In addition to the above, you need to set the ``EXTERNAL_NETWORK`` for
372 OpenStack::
373
374   export EXTERNAL_NETWORK="<openstack public network>"
375
376 Finally, you should be able to run the testcase::
377
378   yardstick --debug task start yardstick/samples/vnf_samples/nsut/<vnf>/<test case>
379
380 Network Service Benchmarking - Bare-Metal
381 -----------------------------------------
382
383 Bare-Metal Config pod.yaml describing Topology
384 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
385
386 Bare-Metal 2-Node setup
387 +++++++++++++++++++++++
388 .. code-block:: console
389
390   +----------+              +----------+
391   |          |              |          |
392   |          | (0)----->(0) |          |
393   |    TG1   |              |    DUT   |
394   |          |              |          |
395   |          | (n)<-----(n) |          |
396   +----------+              +----------+
397   trafficgen_0                   vnf
398
399 Bare-Metal 3-Node setup - Correlated Traffic
400 ++++++++++++++++++++++++++++++++++++++++++++
401 .. code-block:: console
402
403   +----------+              +----------+            +------------+
404   |          |              |          |            |            |
405   |          |              |          |            |            |
406   |          | (0)----->(0) |          |            |    UDP     |
407   |    TG1   |              |    DUT   |            |   Replay   |
408   |          |              |          |            |            |
409   |          |              |          |(1)<---->(0)|            |
410   +----------+              +----------+            +------------+
411   trafficgen_0                   vnf                 trafficgen_1
412
413
414 Bare-Metal Config pod.yaml
415 ^^^^^^^^^^^^^^^^^^^^^^^^^^
Before executing Yardstick test cases, make sure that ``pod.yaml`` reflects the
topology and update all the required fields::
418
419     cp <yardstick>/etc/yardstick/nodes/pod.yaml.nsb.sample /etc/yardstick/nodes/pod.yaml
420
421 .. code-block:: YAML
422
423     nodes:
424     -
425         name: trafficgen_0
426         role: TrafficGen
427         ip: 1.1.1.1
428         user: root
429         password: r00t
430         interfaces:
431             xe0:  # logical name from topology.yaml and vnfd.yaml
432                 vpci:      "0000:07:00.0"
433                 driver:    i40e # default kernel driver
434                 dpdk_port_num: 0
435                 local_ip: "152.16.100.20"
436                 netmask:   "255.255.255.0"
437                 local_mac: "00:00:00:00:00:01"
438             xe1:  # logical name from topology.yaml and vnfd.yaml
439                 vpci:      "0000:07:00.1"
440                 driver:    i40e # default kernel driver
441                 dpdk_port_num: 1
442                 local_ip: "152.16.40.20"
443                 netmask:   "255.255.255.0"
444                 local_mac: "00:00:00:00:00:02"
445
446     -
447         name: vnf
448         role: vnf
449         ip: 1.1.1.2
450         user: root
451         password: r00t
452         host: 1.1.1.2 #BM - host == ip, virtualized env - Host - compute node
453         interfaces:
454             xe0:  # logical name from topology.yaml and vnfd.yaml
455                 vpci:      "0000:07:00.0"
456                 driver:    i40e # default kernel driver
457                 dpdk_port_num: 0
458                 local_ip: "152.16.100.19"
459                 netmask:   "255.255.255.0"
460                 local_mac: "00:00:00:00:00:03"
461
462             xe1:  # logical name from topology.yaml and vnfd.yaml
463                 vpci:      "0000:07:00.1"
464                 driver:    i40e # default kernel driver
465                 dpdk_port_num: 1
466                 local_ip: "152.16.40.19"
467                 netmask:   "255.255.255.0"
468                 local_mac: "00:00:00:00:00:04"
469         routing_table:
470         - network: "152.16.100.20"
471           netmask: "255.255.255.0"
472           gateway: "152.16.100.20"
473           if: "xe0"
474         - network: "152.16.40.20"
475           netmask: "255.255.255.0"
476           gateway: "152.16.40.20"
477           if: "xe1"
478         nd_route_tbl:
479         - network: "0064:ff9b:0:0:0:0:9810:6414"
480           netmask: "112"
481           gateway: "0064:ff9b:0:0:0:0:9810:6414"
482           if: "xe0"
483         - network: "0064:ff9b:0:0:0:0:9810:2814"
484           netmask: "112"
485           gateway: "0064:ff9b:0:0:0:0:9810:2814"
486           if: "xe1"
487
488
489 Standalone Virtualization
490 -------------------------
491
492 SR-IOV
493 ^^^^^^
494
495 SR-IOV Pre-requisites
496 +++++++++++++++++++++
497
On the host where the VM is created:
 1. Create and configure a bridge named ``br-int`` for the VM to connect to
    the external network. Currently this can be done using a VXLAN tunnel.

    Execute the following on the host where the VM is created::
503
504       ip link add type vxlan remote <Jumphost IP> local <DUT IP> id <ID: 10> dstport 4789
505       brctl addbr br-int
506       brctl addif br-int vxlan0
507       ip link set dev vxlan0 up
508       ip addr add <IP#1, like: 172.20.2.1/24> dev br-int
509       ip link set dev br-int up
510
  .. note:: You may need to add extra rules to iptables to forward traffic.
512
513   .. code-block:: console
514
515     iptables -A FORWARD -i br-int -s <network ip address>/<netmask> -j ACCEPT
516     iptables -A FORWARD -o br-int -d <network ip address>/<netmask> -j ACCEPT
517
518   Execute the following on a jump host:
519
520   .. code-block:: console
521
522       ip link add type vxlan remote <DUT IP> local <Jumphost IP> id <ID: 10> dstport 4789
523       ip addr add <IP#2, like: 172.20.2.2/24> dev vxlan0
524       ip link set dev vxlan0 up
525
526   .. note:: Host and jump host are different baremetal servers.
527
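  Once both VXLAN endpoints are up, it is worth verifying the tunnel before
  continuing, e.g. by pinging IP#2 from the host (the address below is the
  example one used above):

  .. code-block:: console

      ping -c 3 <IP#2, like: 172.20.2.2>
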
528  2. Modify test case management CIDR.
529     IP addresses IP#1, IP#2 and CIDR must be in the same network.
530
531   .. code-block:: YAML
532
533     servers:
534       vnf_0:
535         network_ports:
536           mgmt:
537             cidr: '1.1.1.7/24'
538
 3. Build the guest image for the VNF to run.
    Most of the sample test cases in Yardstick use a guest image called
    ``yardstick-nsb-image``, which deviates from an Ubuntu Cloud Server image.
    Yardstick has a tool for building this custom image with SampleVNF.
    It is necessary to have ``sudo`` rights to use this tool.
544
   You may also need to install several additional packages to use this tool
   by running the commands below::
547
548       sudo apt-get update && sudo apt-get install -y qemu-utils kpartx
549
   This image can be built using the following commands in the directory where
   Yardstick is installed::

      export YARD_IMG_ARCH='amd64'
      sudo echo "Defaults env_keep += \'YARD_IMG_ARCH\'" >> /etc/sudoers
      sudo tools/yardstick-img-dpdk-modify tools/ubuntu-server-cloudimg-samplevnf-modify.sh
555
556    For instructions on generating a cloud image using Ansible, refer to
557    :doc:`04-installation`.
558
   .. note:: The VM should be built with a static IP and be accessible from
      the Yardstick host.
561
562
563 SR-IOV Config pod.yaml describing Topology
564 ++++++++++++++++++++++++++++++++++++++++++
565
566 SR-IOV 2-Node setup
567 +++++++++++++++++++
568 .. code-block:: console
569
570                                +--------------------+
571                                |                    |
572                                |                    |
573                                |        DUT         |
574                                |       (VNF)        |
575                                |                    |
576                                +--------------------+
577                                | VF NIC |  | VF NIC |
578                                +--------+  +--------+
579                                      ^          ^
580                                      |          |
581                                      |          |
582   +----------+               +-------------------------+
583   |          |               |       ^          ^      |
584   |          |               |       |          |      |
585   |          | (0)<----->(0) | ------    SUT    |      |
586   |    TG1   |               |                  |      |
587   |          | (n)<----->(n) | -----------------       |
588   |          |               |                         |
589   +----------+               +-------------------------+
590   trafficgen_0                          host
591
592
593
594 SR-IOV 3-Node setup - Correlated Traffic
595 ++++++++++++++++++++++++++++++++++++++++
596 .. code-block:: console
597
598                              +--------------------+
599                              |                    |
600                              |                    |
601                              |        DUT         |
602                              |       (VNF)        |
603                              |                    |
604                              +--------------------+
605                              | VF NIC |  | VF NIC |
606                              +--------+  +--------+
607                                    ^          ^
608                                    |          |
609                                    |          |
610   +----------+               +---------------------+            +--------------+
611   |          |               |     ^          ^    |            |              |
612   |          |               |     |          |    |            |              |
613   |          | (0)<----->(0) |-----           |    |            |     TG2      |
614   |    TG1   |               |         SUT    |    |            | (UDP Replay) |
615   |          |               |                |    |            |              |
616   |          | (n)<----->(n) |                -----| (n)<-->(n) |              |
617   +----------+               +---------------------+            +--------------+
618   trafficgen_0                          host                      trafficgen_1
619
620 Before executing Yardstick test cases, make sure that ``pod.yaml`` reflects the
621 topology and update all the required fields.
622
623 .. code-block:: console
624
625     cp <yardstick>/etc/yardstick/nodes/standalone/trex_bm.yaml.sample /etc/yardstick/nodes/standalone/pod_trex.yaml
626     cp <yardstick>/etc/yardstick/nodes/standalone/host_sriov.yaml /etc/yardstick/nodes/standalone/host_sriov.yaml
627
628 .. note:: Update all the required fields like ip, user, password, pcis, etc...
629
630 SR-IOV Config pod_trex.yaml
631 +++++++++++++++++++++++++++
632
633 .. code-block:: YAML
634
635     nodes:
636     -
637         name: trafficgen_0
638         role: TrafficGen
639         ip: 1.1.1.1
640         user: root
641         password: r00t
642         key_filename: /root/.ssh/id_rsa
643         interfaces:
644             xe0:  # logical name from topology.yaml and vnfd.yaml
645                 vpci:      "0000:07:00.0"
646                 driver:    i40e # default kernel driver
647                 dpdk_port_num: 0
648                 local_ip: "152.16.100.20"
649                 netmask:   "255.255.255.0"
650                 local_mac: "00:00:00:00:00:01"
651             xe1:  # logical name from topology.yaml and vnfd.yaml
652                 vpci:      "0000:07:00.1"
653                 driver:    i40e # default kernel driver
654                 dpdk_port_num: 1
655                 local_ip: "152.16.40.20"
656                 netmask:   "255.255.255.0"
657                 local_mac: "00:00:00:00:00:02"
658
659 SR-IOV Config host_sriov.yaml
660 +++++++++++++++++++++++++++++
661
662 .. code-block:: YAML
663
664     nodes:
665     -
666        name: sriov
667        role: Sriov
668        ip: 192.168.100.101
669        user: ""
670        password: ""
671
672 SR-IOV testcase update:
673 ``<yardstick>/samples/vnf_samples/nsut/vfw/tc_sriov_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``
674
675 Update contexts section
676 '''''''''''''''''''''''
677
678 .. code-block:: YAML
679
680   contexts:
681    - name: yardstick
682      type: Node
683      file: /etc/yardstick/nodes/standalone/pod_trex.yaml
684    - type: StandaloneSriov
685      file: /etc/yardstick/nodes/standalone/host_sriov.yaml
686      name: yardstick
687      vm_deploy: True
688      flavor:
689        images: "/var/lib/libvirt/images/ubuntu.qcow2"
690        ram: 4096
691        extra_specs:
692          hw:cpu_sockets: 1
693          hw:cpu_cores: 6
694          hw:cpu_threads: 2
695        user: "" # update VM username
696        password: "" # update password
697      servers:
698        vnf_0:
699          network_ports:
700            mgmt:
701              cidr: '1.1.1.61/24'  # Update VM IP address, if static, <ip>/<mask> or if dynamic, <start of ip>/<mask>
702            xe0:
703              - uplink_0
704            xe1:
705              - downlink_0
706      networks:
707        uplink_0:
708          phy_port: "0000:05:00.0"
709          vpci: "0000:00:07.0"
710          cidr: '152.16.100.10/24'
711          gateway_ip: '152.16.100.20'
712        downlink_0:
713          phy_port: "0000:05:00.1"
714          vpci: "0000:00:08.0"
715          cidr: '152.16.40.10/24'
716          gateway_ip: '152.16.100.20'
717
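Once ``pod_trex.yaml``, ``host_sriov.yaml`` and the contexts section are
updated, the SR-IOV test case referenced above can be started from inside the
Yardstick container; a sketch, assuming the repository layout used in the CLI
example earlier in this chapter:

.. code-block:: console

   yardstick --debug task start \
       yardstick/samples/vnf_samples/nsut/vfw/tc_sriov_rfc2544_ipv4_1rule_1flow_64B_trex.yaml
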
718
719 OVS-DPDK
720 ^^^^^^^^
721
722 OVS-DPDK Pre-requisites
723 +++++++++++++++++++++++
724
On the host where the VM is created:
 1. Create and configure a bridge named ``br-int`` for the VM to connect to
    the external network. Currently this can be done using a VXLAN tunnel.

    Execute the following on the host where the VM is created:
730
731   .. code-block:: console
732
733       ip link add type vxlan remote <Jumphost IP> local <DUT IP> id <ID: 10> dstport 4789
734       brctl addbr br-int
735       brctl addif br-int vxlan0
736       ip link set dev vxlan0 up
737       ip addr add <IP#1, like: 172.20.2.1/24> dev br-int
738       ip link set dev br-int up
739
  .. note:: You may need to add extra rules to iptables to forward traffic.
741
742   .. code-block:: console
743
744     iptables -A FORWARD -i br-int -s <network ip address>/<netmask> -j ACCEPT
745     iptables -A FORWARD -o br-int -d <network ip address>/<netmask> -j ACCEPT
746
747   Execute the following on a jump host:
748
749   .. code-block:: console
750
751       ip link add type vxlan remote <DUT IP> local <Jumphost IP> id <ID: 10> dstport 4789
752       ip addr add <IP#2, like: 172.20.2.2/24> dev vxlan0
753       ip link set dev vxlan0 up
754
755   .. note:: Host and jump host are different baremetal servers.
756
757  2. Modify test case management CIDR.
758     IP addresses IP#1, IP#2 and CIDR must be in the same network.
759
760   .. code-block:: YAML
761
762     servers:
763       vnf_0:
764         network_ports:
765           mgmt:
766             cidr: '1.1.1.7/24'
767
 3. Build the guest image for the VNF to run.
    Most of the sample test cases in Yardstick use a guest image called
    ``yardstick-nsb-image``, which deviates from an Ubuntu Cloud Server image.
    Yardstick has a tool for building this custom image with SampleVNF.
    It is necessary to have ``sudo`` rights to use this tool.
773
   You may need to install several additional packages to use this tool by
   running the commands below::
776
777       sudo apt-get update && sudo apt-get install -y qemu-utils kpartx
778
779    This image can be built using the following command in the directory where
780    Yardstick is installed::
781
782       export YARD_IMG_ARCH='amd64'
783       sudo echo "Defaults env_keep += \'YARD_IMG_ARCH\'" >> /etc/sudoers
784       sudo tools/yardstick-img-dpdk-modify tools/ubuntu-server-cloudimg-samplevnf-modify.sh
785
   For more details refer to chapter :doc:`04-installation`.
787
   .. note:: The VM should be built with a static IP and should be accessible
      from the Yardstick host.
790
4. OVS & DPDK version:

  * OVS 2.7 and DPDK 16.11.1 or above versions are supported

Refer to the setup instructions at `OVS-DPDK`_ on the host.
796
797 OVS-DPDK Config pod.yaml describing Topology
798 ++++++++++++++++++++++++++++++++++++++++++++
799
800 OVS-DPDK 2-Node setup
801 +++++++++++++++++++++
802
803 .. code-block:: console
804
805                                +--------------------+
806                                |                    |
807                                |                    |
808                                |        DUT         |
809                                |       (VNF)        |
810                                |                    |
811                                +--------------------+
812                                | virtio |  | virtio |
813                                +--------+  +--------+
814                                     ^          ^
815                                     |          |
816                                     |          |
817                                +--------+  +--------+
818                                | vHOST0 |  | vHOST1 |
819   +----------+               +-------------------------+
820   |          |               |       ^          ^      |
821   |          |               |       |          |      |
822   |          | (0)<----->(0) | ------           |      |
823   |    TG1   |               |          SUT     |      |
824   |          |               |       (ovs-dpdk) |      |
825   |          | (n)<----->(n) |------------------       |
826   +----------+               +-------------------------+
827   trafficgen_0                          host
828
829
830 OVS-DPDK 3-Node setup - Correlated Traffic
831 ++++++++++++++++++++++++++++++++++++++++++
832
833 .. code-block:: console
834
835                                +--------------------+
836                                |                    |
837                                |                    |
838                                |        DUT         |
839                                |       (VNF)        |
840                                |                    |
841                                +--------------------+
842                                | virtio |  | virtio |
843                                +--------+  +--------+
844                                     ^          ^
845                                     |          |
846                                     |          |
847                                +--------+  +--------+
848                                | vHOST0 |  | vHOST1 |
849   +----------+               +-------------------------+          +------------+
850   |          |               |       ^          ^      |          |            |
851   |          |               |       |          |      |          |            |
852   |          | (0)<----->(0) | ------           |      |          |    TG2     |
853   |    TG1   |               |          SUT     |      |          |(UDP Replay)|
854   |          |               |      (ovs-dpdk)  |      |          |            |
855   |          | (n)<----->(n) |                  ------ |(n)<-->(n)|            |
856   +----------+               +-------------------------+          +------------+
857   trafficgen_0                          host                       trafficgen_1
858
859
860 Before executing Yardstick test cases, make sure that the ``pod.yaml`` reflects
861 the topology and update all the required fields::
862
863   cp <yardstick>/etc/yardstick/nodes/standalone/trex_bm.yaml.sample /etc/yardstick/nodes/standalone/pod_trex.yaml
864   cp <yardstick>/etc/yardstick/nodes/standalone/host_ovs.yaml /etc/yardstick/nodes/standalone/host_ovs.yaml
865
866 .. note:: Update all the required fields like ip, user, password, pcis, etc...
867
868 OVS-DPDK Config pod_trex.yaml
869 +++++++++++++++++++++++++++++
870
871 .. code-block:: YAML
872
873     nodes:
874     -
875       name: trafficgen_0
876       role: TrafficGen
877       ip: 1.1.1.1
878       user: root
879       password: r00t
880       interfaces:
881           xe0:  # logical name from topology.yaml and vnfd.yaml
882               vpci:      "0000:07:00.0"
883               driver:    i40e # default kernel driver
884               dpdk_port_num: 0
885               local_ip: "152.16.100.20"
886               netmask:   "255.255.255.0"
887               local_mac: "00:00:00:00:00:01"
888           xe1:  # logical name from topology.yaml and vnfd.yaml
889               vpci:      "0000:07:00.1"
890               driver:    i40e # default kernel driver
891               dpdk_port_num: 1
892               local_ip: "152.16.40.20"
893               netmask:   "255.255.255.0"
894               local_mac: "00:00:00:00:00:02"
895
896 OVS-DPDK Config host_ovs.yaml
897 +++++++++++++++++++++++++++++
898
899 .. code-block:: YAML
900
901     nodes:
902     -
903        name: ovs_dpdk
904        role: OvsDpdk
905        ip: 192.168.100.101
906        user: ""
907        password: ""
908
909 ovs_dpdk testcase update:
910 ``<yardstick>/samples/vnf_samples/nsut/vfw/tc_ovs_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``
911
912 Update contexts section
913 '''''''''''''''''''''''
914
915 .. code-block:: YAML
916
917   contexts:
918    - name: yardstick
919      type: Node
920      file: /etc/yardstick/nodes/standalone/pod_trex.yaml
921    - type: StandaloneOvsDpdk
922      name: yardstick
923      file: /etc/yardstick/nodes/standalone/pod_ovs.yaml
924      vm_deploy: True
925      ovs_properties:
926        version:
927          ovs: 2.7.0
928          dpdk: 16.11.1
929        pmd_threads: 2
930        ram:
931          socket_0: 2048
932          socket_1: 2048
933        queues: 4
934        vpath: "/usr/local"
935
936      flavor:
937        images: "/var/lib/libvirt/images/ubuntu.qcow2"
938        ram: 4096
939        extra_specs:
940          hw:cpu_sockets: 1
941          hw:cpu_cores: 6
942          hw:cpu_threads: 2
943        user: "" # update VM username
944        password: "" # update password
945      servers:
946        vnf_0:
947          network_ports:
948            mgmt:
949              cidr: '1.1.1.61/24'  # Update VM IP address, if static, <ip>/<mask> or if dynamic, <start of ip>/<mask>
950            xe0:
951              - uplink_0
952            xe1:
953              - downlink_0
954      networks:
955        uplink_0:
956          phy_port: "0000:05:00.0"
957          vpci: "0000:00:07.0"
958          cidr: '152.16.100.10/24'
959          gateway_ip: '152.16.100.20'
960        downlink_0:
961          phy_port: "0000:05:00.1"
962          vpci: "0000:00:08.0"
963          cidr: '152.16.40.10/24'
964          gateway_ip: '152.16.100.20'
965
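Once ``pod_trex.yaml``, ``host_ovs.yaml`` and the contexts section are
updated, the OVS-DPDK test case referenced above can be started from inside
the Yardstick container; a sketch, assuming the repository layout used in the
CLI example earlier in this chapter:

.. code-block:: console

   yardstick --debug task start \
       yardstick/samples/vnf_samples/nsut/vfw/tc_ovs_rfc2544_ipv4_1rule_1flow_64B_trex.yaml
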
966 OVS-DPDK configuration options
967 ++++++++++++++++++++++++++++++
968
There are a number of configuration options available for the OVS-DPDK context
in the test case. They are mostly used for performance tuning.
971
972 OVS-DPDK properties:
973 ''''''''''''''''''''
974
975 OVS-DPDK properties example under *ovs_properties* section:
976
977   .. code-block:: console
978
979       ovs_properties:
980         version:
981           ovs: 2.8.1
982           dpdk: 17.05.2
983         pmd_threads: 4
984         pmd_cpu_mask: "0x3c"
985         ram:
986          socket_0: 2048
987          socket_1: 2048
988         queues: 2
989         vpath: "/usr/local"
990         max_idle: 30000
991         lcore_mask: 0x02
992         dpdk_pmd-rxq-affinity:
993           0: "0:2,1:2"
994           1: "0:2,1:2"
995           2: "0:3,1:3"
996           3: "0:3,1:3"
997         vhost_pmd-rxq-affinity:
998           0: "0:3,1:3"
999           1: "0:3,1:3"
1000           2: "0:4,1:4"
1001           3: "0:4,1:4"
1002
1003 OVS-DPDK properties description:
1004
1005   +-------------------------+-------------------------------------------------+
1006   | Parameters              | Detail                                          |
1007   +=========================+=================================================+
1008   | version                 || Version of OVS and DPDK to be installed        |
1009   |                         || There is a relation between OVS and DPDK       |
1010   |                         |  version which can be found at                  |
1011   |                         | `OVS-DPDK-versions`_                            |
1012   |                         || By default OVS: 2.6.0, DPDK: 16.07.2           |
1013   +-------------------------+-------------------------------------------------+
1014   | lcore_mask              || Core bitmask used during DPDK initialization   |
1015   |                         |  where the non-datapath OVS-DPDK threads such   |
1016   |                         |  as handler and revalidator threads run         |
1017   +-------------------------+-------------------------------------------------+
1018   | pmd_cpu_mask            || Core bitmask that sets which cores are used by |
1019   |                         || OVS-DPDK for datapath packet processing        |
1020   +-------------------------+-------------------------------------------------+
1021   | pmd_threads             || Number of PMD threads used by OVS-DPDK for     |
1022   |                         |  datapath                                       |
1023   |                         || This core mask is evaluated in Yardstick       |
1024   |                         || It will be used if pmd_cpu_mask is not given   |
1025   |                         || Default is 2                                   |
1026   +-------------------------+-------------------------------------------------+
1027   | ram                     || Amount of RAM to be used for each socket, MB   |
1028   |                         || Default is 2048 MB                             |
1029   +-------------------------+-------------------------------------------------+
1030   | queues                  || Number of RX queues used for DPDK physical     |
1031   |                         |  interface                                      |
1032   +-------------------------+-------------------------------------------------+
1033   | dpdk_pmd-rxq-affinity   || RX queue assignment to PMD threads for DPDK    |
1034   |                         || e.g.: <port number> : <queue-id>:<core-id>     |
1035   +-------------------------+-------------------------------------------------+
1036   | vhost_pmd-rxq-affinity  || RX queue assignment to PMD threads for vhost   |
1037   |                         || e.g.: <port number> : <queue-id>:<core-id>     |
1038   +-------------------------+-------------------------------------------------+
1039   | vpath                   || User path for openvswitch files                |
1040   |                         || Default is ``/usr/local``                      |
1041   +-------------------------+-------------------------------------------------+
1042   | max_idle                || The maximum time that idle flows will remain   |
1043   |                         |  cached in the datapath, ms                     |
1044   +-------------------------+-------------------------------------------------+
1045
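The CPU masks above are plain hexadecimal bitmasks over logical CPU ids. As a
sketch, the mask selecting cores 2-5 (the ``pmd_cpu_mask: "0x3c"`` value used
in the example above) can be derived as follows:

.. code-block:: console

   # bits 2,3,4,5 set -> cores 2-5
   printf '0x%x\n' $(( (1<<2)|(1<<3)|(1<<4)|(1<<5) ))
   # 0x3c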
1046
1047 VM image properties
1048 '''''''''''''''''''
1049
1050 VM image properties example under *flavor* section:
1051
1052   .. code-block:: console
1053
1054       flavor:
1055         images: <path>
1056         ram: 8192
1057         extra_specs:
1058            machine_type: 'pc-i440fx-xenial'
1059            hw:cpu_sockets: 1
1060            hw:cpu_cores: 6
1061            hw:cpu_threads: 2
1062            hw_socket: 0
1063            cputune: |
1064              <cputune>
1065                <vcpupin vcpu="0" cpuset="7"/>
1066                <vcpupin vcpu="1" cpuset="8"/>
1067                ...
1068                <vcpupin vcpu="11" cpuset="18"/>
1069                <emulatorpin cpuset="11"/>
1070              </cputune>
1071
1072 VM image properties description:
1073
1074   +-------------------------+-------------------------------------------------+
1075   | Parameters              | Detail                                          |
1076   +=========================+=================================================+
1077   | images                  || Path to the VM image generated by              |
1078   |                         |  ``nsb_setup.sh``                               |
1079   |                         || Default path is ``/var/lib/libvirt/images/``   |
1080   |                         || Default file name ``yardstick-nsb-image.img``  |
1081   |                         |  or ``yardstick-image.img``                     |
1082   +-------------------------+-------------------------------------------------+
1083   | ram                     || Amount of RAM to be used for VM                |
1084   |                         || Default is 4096 MB                             |
1085   +-------------------------+-------------------------------------------------+
1086   | hw:cpu_sockets          || Number of sockets provided to the guest VM     |
1087   |                         || Default is 1                                   |
1088   +-------------------------+-------------------------------------------------+
1089   | hw:cpu_cores            || Number of cores provided to the guest VM       |
1090   |                         || Default is 2                                   |
1091   +-------------------------+-------------------------------------------------+
1092   | hw:cpu_threads          || Number of threads provided to the guest VM     |
1093   |                         || Default is 2                                   |
1094   +-------------------------+-------------------------------------------------+
1095   | hw_socket               || Generate vcpu cpuset from given HW socket      |
1096   |                         || Default is 0                                   |
1097   +-------------------------+-------------------------------------------------+
1098   | cputune                 || Maps virtual cpu with logical cpu              |
1099   +-------------------------+-------------------------------------------------+
1100   | machine_type            || Machine type to be emulated in VM              |
1101   |                         || Default is 'pc-i440fx-xenial'                  |
1102   +-------------------------+-------------------------------------------------+
1103
1104
1105 OpenStack with SR-IOV support
1106 -----------------------------
1107
1108 This section describes how to run a Sample VNF test case, using Heat context,
1109 with SR-IOV. It also covers how to install OpenStack in Ubuntu 16.04, using
1110 DevStack, with SR-IOV support.
1111
1112
1113 Single node OpenStack with external TG
1114 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1115
1116 .. code-block:: console
1117
1118                                  +----------------------------+
1119                                  |OpenStack(DevStack)         |
1120                                  |                            |
1121                                  |   +--------------------+   |
1122                                  |   |sample-VNF VM       |   |
1123                                  |   |                    |   |
1124                                  |   |        DUT         |   |
1125                                  |   |       (VNF)        |   |
1126                                  |   |                    |   |
1127                                  |   +--------+  +--------+   |
1128                                  |   | VF NIC |  | VF NIC |   |
1129                                  |   +-----+--+--+----+---+   |
1130                                  |         ^          ^       |
1131                                  |         |          |       |
1132   +----------+                   +---------+----------+-------+
1133   |          |                   |        VF0        VF1      |
1134   |          |                   |         ^          ^       |
1135   |          |                   |         |   SUT    |       |
1136   |    TG    | (PF0)<----->(PF0) +---------+          |       |
1137   |          |                   |                    |       |
1138   |          | (PF1)<----->(PF1) +--------------------+       |
1139   |          |                   |                            |
1140   +----------+                   +----------------------------+
1141   trafficgen_0                                 host
1142
1143
1144 Host pre-configuration
1145 ++++++++++++++++++++++
1146
1147 .. warning:: The following configuration requires sudo access to the system.
1148    Make sure that your user have the access.
   Make sure that your user has this access.
1150 Enable the Intel VT-d or AMD-Vi extension in the BIOS. Some system
1151 manufacturers disable this extension by default.
1152
1153 Activate the Intel VT-d or AMD-Vi extension in the kernel by modifying the GRUB
1154 config file ``/etc/default/grub``.
1155
1156 For the Intel platform::
1157
1158   ...
1159   GRUB_CMDLINE_LINUX_DEFAULT="intel_iommu=on"
1160   ...
1161
1162 For the AMD platform::
1163
1164   ...
1165   GRUB_CMDLINE_LINUX_DEFAULT="amd_iommu=on"
1166   ...
1167
1168 Update the grub configuration file and restart the system:
1169
1170 .. warning:: The following command will reboot the system.
1171
1172 .. code:: bash
1173
1174   sudo update-grub
1175   sudo reboot
1176
1177 Make sure the extension has been enabled::
1178
1179   sudo journalctl -b 0 | grep -e IOMMU -e DMAR
1180
1181   Feb 06 14:50:14 hostname kernel: ACPI: DMAR 0x000000006C406000 0001E0 (v01 INTEL  S2600WF  00000001 INTL 20091013)
1182   Feb 06 14:50:14 hostname kernel: DMAR: IOMMU enabled
1183   Feb 06 14:50:14 hostname kernel: DMAR: Host address width 46
1184   Feb 06 14:50:14 hostname kernel: DMAR: DRHD base: 0x000000d37fc000 flags: 0x0
1185   Feb 06 14:50:14 hostname kernel: DMAR: dmar0: reg_base_addr d37fc000 ver 1:0 cap 8d2078c106f0466 ecap f020de
1186   Feb 06 14:50:14 hostname kernel: DMAR: DRHD base: 0x000000e0ffc000 flags: 0x0
1187   Feb 06 14:50:14 hostname kernel: DMAR: dmar1: reg_base_addr e0ffc000 ver 1:0 cap 8d2078c106f0466 ecap f020de
1188   Feb 06 14:50:14 hostname kernel: DMAR: DRHD base: 0x000000ee7fc000 flags: 0x0
1189
1190 .. TODO: Refer to the yardstick installation guide for proxy set up
1191
1192 Setup system proxy (if needed). Add the following configuration into the
1193 ``/etc/environment`` file:
1194
1195 .. note:: The proxy server name/port and IPs should be changed according to
1196   actual/current proxy configuration in the lab.
1197
1198 .. code:: bash
1199
1200   export http_proxy=http://proxy.company.com:port
1201   export https_proxy=http://proxy.company.com:port
1202   export ftp_proxy=http://proxy.company.com:port
1203   export no_proxy=localhost,127.0.0.1,company.com,<IP-OF-HOST1>,<IP-OF-HOST2>,...
1204   export NO_PROXY=localhost,127.0.0.1,company.com,<IP-OF-HOST1>,<IP-OF-HOST2>,...
1205
1206 Upgrade the system:
1207
1208 .. code:: bash
1209
1210   sudo -EH apt-get update
1211   sudo -EH apt-get upgrade
1212   sudo -EH apt-get dist-upgrade
1213
Install the dependencies needed for DevStack:
1215
1216 .. code:: bash
1217
1218   sudo -EH apt-get install python python-dev python-pip
1219
1220 Setup SR-IOV ports on the host:
1221
1222 .. note:: The ``enp24s0f0``, ``enp24s0f1`` are physical function (PF) interfaces
1223   on a host and ``enp24s0f3`` is a public interface used in OpenStack, so the
1224   interface names should be changed according to the HW environment used for
1225   testing.
1226
1227 .. code:: bash
1228
1229   sudo ip link set dev enp24s0f0 up
1230   sudo ip link set dev enp24s0f1 up
1231   sudo ip link set dev enp24s0f3 up
1232
1233   # Create VFs on PF
1234   echo 2 | sudo tee /sys/class/net/enp24s0f0/device/sriov_numvfs
1235   echo 2 | sudo tee /sys/class/net/enp24s0f1/device/sriov_numvfs
1236
1237
1238 DevStack installation
1239 +++++++++++++++++++++
1240
1241 If you want to try out NSB, but don't have OpenStack set-up, you can use
1242 `Devstack`_ to install OpenStack on a host. Please note, that the
1243 ``stable/pike`` branch of devstack repo should be used during the installation.
1244 The required ``local.conf`` configuration file is described below.
1245
1246 DevStack configuration file:
1247
.. note:: Update the devstack configuration file by replacing the angular
  bracket placeholders with the values they describe.
1250
1251 .. note:: Use ``lspci | grep Ether`` & ``lspci -n | grep <PCI ADDRESS>``
1252   commands to get device and vendor id of the virtual function (VF).
1253
1254 .. literalinclude:: code/single-devstack-local.conf
1255    :language: ini
1256
1257 Start the devstack installation on a host.
1258
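A minimal sketch of the DevStack steps (the repository URL is shown for
illustration; use the ``stable/pike`` branch, or the corresponding EOL tag if
the branch has been retired):

.. code-block:: console

   git clone https://opendev.org/openstack/devstack -b stable/pike
   cd devstack
   # place the local.conf described above in this directory, then:
   ./stack.sh
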
1259 TG host configuration
1260 +++++++++++++++++++++
1261
Yardstick automatically installs and configures the Trex traffic generator on
the TG host based on the provided POD file (see below). However, it is
recommended to check the compatibility of the NIC installed on the TG server
with the Trex software using the
`manual <https://trex-tgn.cisco.com/trex/doc/trex_manual.html>`_.
1266
1267 Run the Sample VNF test case
1268 ++++++++++++++++++++++++++++
1269
1270 There is an example of Sample VNF test case ready to be executed in an
1271 OpenStack environment with SR-IOV support: ``samples/vnf_samples/nsut/vfw/
1272 tc_heat_sriov_external_rfc2544_ipv4_1rule_1flow_trex.yaml``.
1273
1274 Install Yardstick using `Install Yardstick (NSB Testing)`_ steps for OpenStack
1275 context.
1276
1277 Create pod file for TG in the yardstick repo folder located in the yardstick
1278 container:
1279
.. note:: The ``ip``, ``user``, ``password`` and ``vpci`` fields should be
  changed according to the HW environment used for the testing. Use the
  ``lshw -c network -businfo`` command to get the PF PCI address for the
  ``vpci`` field.
1283
1284 .. literalinclude:: code/single-yardstick-pod.conf
1285    :language: ini
1286
Run the Sample vFW RFC2544 SR-IOV TC (``samples/vnf_samples/nsut/vfw/
tc_heat_sriov_external_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``) in the heat
context using the steps described in the `NS testing - using yardstick CLI`_
section, for example as shown below.
1290
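A sketch of such an invocation from inside the Yardstick container (the test
case path is the one referenced above):

.. code-block:: console

   yardstick -d task start \
       samples/vnf_samples/nsut/vfw/tc_heat_sriov_external_rfc2544_ipv4_1rule_1flow_64B_trex.yaml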
1291
1292 Multi node OpenStack TG and VNF setup (two nodes)
1293 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1294
1295 .. code-block:: console
1296
1297   +----------------------------+                   +----------------------------+
1298   |OpenStack(DevStack)         |                   |OpenStack(DevStack)         |
1299   |                            |                   |                            |
1300   |   +--------------------+   |                   |   +--------------------+   |
1301   |   |sample-VNF VM       |   |                   |   |sample-VNF VM       |   |
1302   |   |                    |   |                   |   |                    |   |
1303   |   |         TG         |   |                   |   |        DUT         |   |
1304   |   |    trafficgen_0    |   |                   |   |       (VNF)        |   |
1305   |   |                    |   |                   |   |                    |   |
1306   |   +--------+  +--------+   |                   |   +--------+  +--------+   |
1307   |   | VF NIC |  | VF NIC |   |                   |   | VF NIC |  | VF NIC |   |
1308   |   +----+---+--+----+---+   |                   |   +-----+--+--+----+---+   |
1309   |        ^           ^       |                   |         ^          ^       |
1310   |        |           |       |                   |         |          |       |
1311   +--------+-----------+-------+                   +---------+----------+-------+
1312   |       VF0         VF1      |                   |        VF0        VF1      |
1313   |        ^           ^       |                   |         ^          ^       |
1314   |        |    SUT2   |       |                   |         |   SUT1   |       |
1315   |        |           +-------+ (PF0)<----->(PF0) +---------+          |       |
1316   |        |                   |                   |                    |       |
1317   |        +-------------------+ (PF1)<----->(PF1) +--------------------+       |
1318   |                            |                   |                            |
1319   +----------------------------+                   +----------------------------+
1320            host2 (compute)                               host1 (controller)
1321
1322
1323 Controller/Compute pre-configuration
1324 ++++++++++++++++++++++++++++++++++++
1325
Pre-configuration of the controller and compute hosts is the same as described
in the `Host pre-configuration`_ section.
1328
1329 DevStack configuration
1330 ++++++++++++++++++++++
1331
1332 A reference ``local.conf`` for deploying OpenStack in a multi-host environment
1333 using `Devstack`_ is shown in this section. The ``stable/pike`` branch of
1334 devstack repo should be used during the installation.
1335
.. note:: Update the devstack configuration files by replacing the angular
  bracket placeholders with the values they describe.
1338
1339 .. note:: Use ``lspci | grep Ether`` & ``lspci -n | grep <PCI ADDRESS>``
1340   commands to get device and vendor id of the virtual function (VF).
1341
1342 DevStack configuration file for controller host:
1343
1344 .. literalinclude:: code/multi-devstack-controller-local.conf
1345    :language: ini
1346
1347 DevStack configuration file for compute host:
1348
1349 .. literalinclude:: code/multi-devstack-compute-local.conf
1350    :language: ini
1351
1352 Start the devstack installation on the controller and compute hosts.
1353
1354 Run the sample vFW TC
1355 +++++++++++++++++++++
1356
1357 Install Yardstick using `Install Yardstick (NSB Testing)`_ steps for OpenStack
1358 context.
1359
1360 Run the sample vFW RFC2544 SR-IOV test case
1361 (``samples/vnf_samples/nsut/vfw/tc_heat_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``)
1362 in the heat context using steps described in
1363 `NS testing - using yardstick CLI`_ section and the following Yardstick command
1364 line arguments:
1365
1366 .. code:: bash
1367
1368   yardstick -d task start --task-args='{"provider": "sriov"}' \
1369   samples/vnf_samples/nsut/vfw/tc_heat_rfc2544_ipv4_1rule_1flow_64B_trex.yaml
1370
1371
1372 Enabling other Traffic generators
1373 ---------------------------------
1374
1375 IxLoad
1376 ^^^^^^
1377
1. Software needed: IxLoadAPI ``<IxLoadTclApi version>Linux64.bin.tgz`` and
   ``<IxOS version>Linux64.bin.tar.gz`` (download from the Ixia support site).
   Install ``<IxLoadTclApi version>Linux64.bin.tgz`` and
   ``<IxOS version>Linux64.bin.tar.gz``.
   If the installation was not done inside the container, after installing
   the IXIA client, check ``/opt/ixia/ixload/<ver>/bin/ixloadpython`` and make
   sure you can run this cmd inside the yardstick container. Usually the user
   is required to copy or link ``/opt/ixia/python/<ver>/bin/ixiapython`` to
   ``/usr/bin/ixiapython<ver>`` inside the container, for example as shown
   below.
1387
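   A minimal sketch of such a link (version paths are placeholders, as above):

   .. code-block:: console

      sudo ln -s /opt/ixia/python/<ver>/bin/ixiapython /usr/bin/ixiapython<ver>
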
1388 2. Update ``pod_ixia.yaml`` file with ixia details.
1389
1390   .. code-block:: console
1391
1392     cp <repo>/etc/yardstick/nodes/pod.yaml.nsb.sample.ixia \
1393       etc/yardstick/nodes/pod_ixia.yaml
1394
1395   Config ``pod_ixia.yaml``
1396
1397   .. literalinclude:: code/pod_ixia.yaml
1398      :language: yaml
1399
  For SR-IOV/OVS-DPDK pod files, please refer to `Standalone Virtualization`_
  for the OVS-DPDK/SR-IOV configuration.
1402
1403 3. Start IxOS TCL Server (Install 'Ixia IxExplorer IxOS <version>')
1404    You will also need to configure the IxLoad machine to start the IXIA
1405    IxosTclServer. This can be started like so:
1406
1407    * Connect to the IxLoad machine using RDP
1408    * Go to:
1409      ``Start->Programs->Ixia->IxOS->IxOS 8.01-GA-Patch1->Ixia Tcl Server IxOS 8.01-GA-Patch1``
1410      or
1411      ``C:\Program Files (x86)\Ixia\IxOS\8.01-GA-Patch1\ixTclServer.exe``
1412
4. Create a folder ``Results`` in ``C:\`` and share the folder on the network.
1414
1415 5. Execute testcase in samplevnf folder e.g.
1416    ``<repo>/samples/vnf_samples/nsut/vfw/tc_baremetal_http_ixload_1b_Requests-65000_Concurrency.yaml``
1417
1418 IxNetwork
1419 ^^^^^^^^^
1420
1421 IxNetwork testcases use IxNetwork API Python Bindings module, which is
1422 installed as part of the requirements of the project.
1423
1424 1. Update ``pod_ixia.yaml`` file with ixia details.
1425
1426   .. code-block:: console
1427
1428     cp <repo>/etc/yardstick/nodes/pod.yaml.nsb.sample.ixia \
1429     etc/yardstick/nodes/pod_ixia.yaml
1430
1431   Configure ``pod_ixia.yaml``
1432
1433   .. literalinclude:: code/pod_ixia.yaml
1434      :language: yaml
1435
  For SR-IOV/OVS-DPDK pod files, please refer to `Standalone Virtualization`_
  above for the OVS-DPDK/SR-IOV configuration.
1438
1439 2. Start IxNetwork TCL Server
1440    You will also need to configure the IxNetwork machine to start the IXIA
1441    IxNetworkTclServer. This can be started like so:
1442
1443     * Connect to the IxNetwork machine using RDP
1444     * Go to:
1445       ``Start->Programs->Ixia->IxNetwork->IxNetwork 7.21.893.14 GA->IxNetworkTclServer``
1446       (or ``IxNetworkApiServer``)
1447
1448 3. Execute testcase in samplevnf folder e.g.
1449    ``<repo>/samples/vnf_samples/nsut/vfw/tc_baremetal_rfc2544_ipv4_1rule_1flow_64B_ixia.yaml``
1450
1451 Spirent Landslide
^^^^^^^^^^^^^^^^^
1453
1454 In order to use Spirent Landslide for vEPC testcases, some dependencies have
1455 to be preinstalled and properly configured.
1456
1457 - Java
1458
1459     32-bit Java installation is required for the Spirent Landslide TCL API.
1460
1461     | ``$ sudo apt-get install openjdk-8-jdk:i386``
1462
1463     .. important::
1464       Make sure ``LD_LIBRARY_PATH`` is pointing to 32-bit JRE. For more details
      check the `Linux Troubleshooting <http://TAS_HOST_IP/tclapiinstall.html#trouble>`_
      section of the installation instructions.
1467
1468 - LsApi (Tcl API module)
1469
1470     Follow Landslide documentation for detailed instructions on Linux
1471     installation of Tcl API and its dependencies
1472     ``http://TAS_HOST_IP/tclapiinstall.html``.
1473     For working with LsApi Python wrapper only steps 1-5 are required.
1474
1475     .. note:: After installation make sure your API home path is included in
1476       ``PYTHONPATH`` environment variable.
1477
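    For example, if the Tcl API home directory is ``/usr/local/LsApi`` (the
    path is illustrative and depends on where the API was installed), the
    Python wrapper can be made importable with:

    .. code-block:: console

        export PYTHONPATH=$PYTHONPATH:/usr/local/LsApi
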
    .. important::
      The current version of the LsApi module has an issue with reading
      ``LD_LIBRARY_PATH``. For the LsApi module to initialize correctly, the
      following lines (184-186) in lsapi.py
1482
1483     .. code-block:: python
1484
1485         ldpath = os.environ.get('LD_LIBRARY_PATH', '')
1486         if ldpath == '':
1487          environ['LD_LIBRARY_PATH'] = environ['LD_LIBRARY_PATH'] + ':' + ldpath
1488
1489     should be changed to:
1490
1491     .. code-block:: python
1492
1493         ldpath = os.environ.get('LD_LIBRARY_PATH', '')
1494         if not ldpath == '':
1495                environ['LD_LIBRARY_PATH'] = environ['LD_LIBRARY_PATH'] + ':' + ldpath
1496
.. note:: The Spirent Landslide TCL software package needs to be updated if
  the user upgrades to a new version of the Spirent Landslide software.