.. This work is licensed under a Creative Commons Attribution 4.0 International
.. License.
.. http://creativecommons.org/licenses/by/4.0
.. (c) OPNFV, 2016-2018 Intel Corporation.

..
   Convention for heading levels in Yardstick documentation:

   =======  Heading 0 (reserved for the title in a document)
   -------  Heading 1
   ^^^^^^^  Heading 2
   +++++++  Heading 3
   '''''''  Heading 4

   Avoid deeper levels because they do not render well.


================
NSB Installation
================

.. _OVS-DPDK: http://docs.openvswitch.org/en/latest/intro/install/dpdk/
.. _devstack: https://docs.openstack.org/devstack/pike/

Abstract
--------

The steps needed to run Yardstick with NSB testing are:

* Install Yardstick (NSB Testing).
* Setup/reference a ``pod.yaml`` describing the test topology.
* Create/reference the test configuration yaml file.
* Run the test case.

Prerequisites
-------------

Refer to :doc:`04-installation` for more information on Yardstick
prerequisites.

Several prerequisites are needed for Yardstick (VNF testing); an example
installation sketch follows the list:

  * Python Modules: pyzmq, pika.
  * flex
  * bison
  * build-essential
  * automake
  * libtool
  * librabbitmq-dev
  * rabbitmq-server
  * collectd
  * intel-cmt-cat
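
For reference, on an Ubuntu 16.04 host the packages above could be installed
roughly as follows. This is a sketch only: ``nsb_setup.sh`` normally installs
these dependencies automatically, and package availability varies between
releases (``intel-cmt-cat``, for instance, may need to be built from source on
older distributions)::

  sudo apt-get update
  sudo apt-get install -y flex bison build-essential automake libtool \
       librabbitmq-dev rabbitmq-server collectd
  pip install pyzmq pika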

Hardware & Software Ingredients
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

SUT requirements:

   ======= ===================
   Item    Description
   ======= ===================
   Memory  Min 20GB
   NICs    2 x 10G
   OS      Ubuntu 16.04.3 LTS
   kernel  4.4.0-34-generic
   DPDK    17.02
   ======= ===================

Boot and BIOS settings:

   ============= =================================================
   Boot settings default_hugepagesz=1G hugepagesz=1G hugepages=16
                 hugepagesz=2M hugepages=2048 isolcpus=1-11,22-33
                 nohz_full=1-11,22-33 rcu_nocbs=1-11,22-33
                 iommu=on iommu=pt intel_iommu=on
                 Note: nohz_full and rcu_nocbs are used to disable
                 Linux kernel interrupts
   BIOS          CPU Power and Performance Policy <Performance>
                 CPU C-state Disabled
                 CPU P-state Disabled
                 Enhanced Intel® SpeedStep® Tech Disabled
                 Hyper-Threading Technology (If supported) Enabled
                 Virtualization Technology Enabled
                 Intel(R) VT for Direct I/O Enabled
                 Coherency Enabled
                 Turbo Boost Disabled
   ============= =================================================
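
As an illustrative sketch, the boot settings above are typically applied
through the kernel command line in ``/etc/default/grub`` (the CPU ranges are
examples and must match the SUT topology)::

  GRUB_CMDLINE_LINUX_DEFAULT="default_hugepagesz=1G hugepagesz=1G hugepages=16 hugepagesz=2M hugepages=2048 isolcpus=1-11,22-33 nohz_full=1-11,22-33 rcu_nocbs=1-11,22-33 iommu=on iommu=pt intel_iommu=on"

After editing the file, regenerate the GRUB configuration and reboot::

  sudo update-grub
  sudo reboot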

Install Yardstick (NSB Testing)
-------------------------------

Yardstick with NSB can be installed using ``nsb_setup.sh``.
The ``nsb_setup.sh`` script allows you to:

1. Install Yardstick in the specified mode: bare metal or container.
   Refer to :doc:`04-installation`.
2. Install package dependencies on remote servers used as traffic generator
   or sample VNF. Add such servers to the ``install-inventory.ini`` file, to
   either the ``yardstick-standalone`` or ``yardstick-baremetal`` server
   group. The script configures IOMMU, hugepages, open file limits, CPU
   isolation, etc.
3. Build a VM image, either ``nsb`` or ``normal``. The ``nsb`` VM image is
   used to run Yardstick sample VNF tests, like vFW, vACL, vCGNAPT, etc.
   The ``normal`` VM image is used to run Yardstick ping tests in the
   OpenStack context.
4. Add the ``nsb`` or ``normal`` VM image to OpenStack together with the
   OpenStack variables.

First, configure the network proxy, either by using the environment variables
or by setting the global environment file.

To set them in the global environment file (e.g. ``/etc/environment``)::

    http_proxy='http://proxy.company.com:port'
    https_proxy='http://proxy.company.com:port'

Alternatively, export them in the current shell:

.. code-block:: console

    export http_proxy='http://proxy.company.com:port'
    export https_proxy='http://proxy.company.com:port'

Download the source code and check out the latest stable branch:

.. code-block:: console

  git clone https://gerrit.opnfv.org/gerrit/yardstick
  cd yardstick
  # Switch to latest stable branch
  git checkout stable/gambia

Modify the Yardstick installation inventory used by Ansible::

  cat ./ansible/install-inventory.ini
  [jumphost]
  localhost ansible_connection=local

  # The section below is only kept for backward compatibility.
  # It will be removed later.
  [yardstick:children]
  jumphost

  [yardstick-standalone]
  standalone ansible_host=192.168.2.51 ansible_connection=ssh

  [yardstick-baremetal]
  baremetal ansible_host=192.168.2.52 ansible_connection=ssh

  [all:vars]
  arch_amd64=amd64
  arch_arm64=arm64
  inst_mode_baremetal=baremetal
  inst_mode_container=container
  inst_mode_container_pull=container_pull
  ubuntu_archive={"amd64": "http://archive.ubuntu.com/ubuntu/", "arm64": "http://ports.ubuntu.com/ubuntu-ports/"}
  ansible_user=root
  ansible_ssh_pass=root  # OR ansible_ssh_private_key_file=/root/.ssh/id_rsa

.. warning::

   Before running ``nsb_setup.sh``, make sure Python is installed on the
   servers added to the ``yardstick-standalone`` or ``yardstick-baremetal``
   groups.

.. note::

   Passwordless SSH access needs to be configured for all your nodes defined
   in the ``install-inventory.ini`` file.
   If you want to use password authentication, you need to install
   ``sshpass``::

     sudo -EH apt-get install sshpass


.. note::

   A VM image built by other means than Yardstick can be added to OpenStack.
   Uncomment and set the correct path to the VM image in the
   ``install-inventory.ini`` file::

     path_to_img=/tmp/workspace/yardstick-image.img


.. note::

   CPU isolation can be applied to the remote servers, e.g.::

     ISOL_CPUS=2-27,30-55

   Uncomment and modify it accordingly in the ``install-inventory.ini`` file.

By default, ``nsb_setup.sh`` pulls the Yardstick image based on Ubuntu 16.04
from Docker Hub and starts a container, builds the NSB VM image based on
Ubuntu 16.04, and installs packages on the servers given in the
``yardstick-standalone`` and ``yardstick-baremetal`` host groups.

To change the default behavior, modify the parameters for ``install.yaml`` in
the ``nsb_setup.sh`` file.

Refer to chapter :doc:`04-installation` for more details on ``install.yaml``
parameters.
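
For illustration, ``nsb_setup.sh`` ultimately runs the Ansible playbook along
the following lines; the exact extra variables passed with ``-e`` differ
between releases, so check the script itself before changing anything::

  cd ansible
  ansible-playbook -i install-inventory.ini install.yaml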

To execute an installation for a **Bare-Metal** or a **Standalone context**::

    ./nsb_setup.sh


To execute an installation for an **OpenStack** context::

    ./nsb_setup.sh <path to admin-openrc.sh>

.. warning::

   The Yardstick VM image (NSB or normal) cannot be built inside a VM.

.. warning::

   ``nsb_setup.sh`` configures huge pages, CPU isolation and IOMMU via GRUB.
   The servers in the ``yardstick-standalone`` and ``yardstick-baremetal``
   groups of the ``install-inventory.ini`` file must be rebooted to apply
   those changes.

The above commands set up Docker with the latest Yardstick code. To enter the
container, execute::

  docker exec -it yardstick bash

The setup also automatically downloads all the packages needed for NSB
testing. Refer to chapter :doc:`04-installation` for more details on Docker.

System Topology
---------------

.. code-block:: console

  +----------+              +----------+
  |          |              |          |
  |          | (0)----->(0) |          |
  |    TG1   |              |    DUT   |
  |          |              |          |
  |          | (1)<-----(1) |          |
  +----------+              +----------+
  trafficgen_0                   vnf


Environment parameters and credentials
--------------------------------------

Configure yardstick.conf
^^^^^^^^^^^^^^^^^^^^^^^^

If you did not run ``yardstick env influxdb`` inside the container to generate
``yardstick.conf``, then create the config file manually (run inside the
container)::

    cp ./etc/yardstick/yardstick.conf.sample /etc/yardstick/yardstick.conf
    vi /etc/yardstick/yardstick.conf

Add ``trex_path``, ``trex_client_lib`` and ``bin_path`` to the ``nsb``
section::

  [DEFAULT]
  debug = True
  dispatcher = influxdb

  [dispatcher_influxdb]
  timeout = 5
  target = http://{YOUR_IP_HERE}:8086
  db_name = yardstick
  username = root
  password = root

  [nsb]
  trex_path=/opt/nsb_bin/trex/scripts
  bin_path=/opt/nsb_bin
  trex_client_lib=/opt/nsb_bin/trex_client/stl
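
To check that the InfluxDB dispatcher target is reachable from inside the
container, the standard InfluxDB ``/ping`` endpoint can be queried (this
assumes InfluxDB is listening on port 8086, as configured above)::

  curl -sI http://{YOUR_IP_HERE}:8086/ping
  # "HTTP/1.1 204 No Content" indicates the database is reachable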

Run Yardstick - Network Service Testcases
-----------------------------------------

NS testing - using yardstick CLI
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  See :doc:`04-installation`

Connect to the Yardstick container::

  docker exec -it yardstick /bin/bash

If you're running ``heat`` testcases and ``nsb_setup.sh`` was not used, source
the OpenStack credentials::

  source /etc/yardstick/openstack.creds

In addition to the above, you need to set the ``EXTERNAL_NETWORK`` for
OpenStack::

  export EXTERNAL_NETWORK="<openstack public network>"

Finally, you should be able to run the test case::

  yardstick --debug task start yardstick/samples/vnf_samples/nsut/<vnf>/<test case>
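
For example, one of the sample vFW RFC2544 bare-metal test cases could be
started like this (the file name below is an assumption; adjust it to the
sample files actually present in your release)::

  yardstick --debug task start yardstick/samples/vnf_samples/nsut/vfw/tc_baremetal_rfc2544_ipv4_1rule_1flow_64B_trex.yaml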

Network Service Benchmarking - Bare-Metal
-----------------------------------------

Bare-Metal Config pod.yaml describing Topology
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Bare-Metal 2-Node setup
+++++++++++++++++++++++
.. code-block:: console

  +----------+              +----------+
  |          |              |          |
  |          | (0)----->(0) |          |
  |    TG1   |              |    DUT   |
  |          |              |          |
  |          | (n)<-----(n) |          |
  +----------+              +----------+
  trafficgen_0                   vnf

Bare-Metal 3-Node setup - Correlated Traffic
++++++++++++++++++++++++++++++++++++++++++++
.. code-block:: console

  +----------+              +----------+            +------------+
  |          |              |          |            |            |
  |          |              |          |            |            |
  |          | (0)----->(0) |          |            |    UDP     |
  |    TG1   |              |    DUT   |            |   Replay   |
  |          |              |          |            |            |
  |          |              |          |(1)<---->(0)|            |
  +----------+              +----------+            +------------+
  trafficgen_0                   vnf                 trafficgen_1


Bare-Metal Config pod.yaml
^^^^^^^^^^^^^^^^^^^^^^^^^^
Before executing Yardstick test cases, make sure that ``pod.yaml`` reflects the
topology and update all the required fields::

    cp /etc/yardstick/nodes/pod.yaml.nsb.sample /etc/yardstick/nodes/pod.yaml

.. code-block:: YAML

    nodes:
    -
        name: trafficgen_0
        role: TrafficGen
        ip: 1.1.1.1
        user: root
        password: r00t
        interfaces:
            xe0:  # logical name from topology.yaml and vnfd.yaml
                vpci:      "0000:07:00.0"
                driver:    i40e # default kernel driver
                dpdk_port_num: 0
                local_ip: "152.16.100.20"
                netmask:   "255.255.255.0"
                local_mac: "00:00:00:00:00:01"
            xe1:  # logical name from topology.yaml and vnfd.yaml
                vpci:      "0000:07:00.1"
                driver:    i40e # default kernel driver
                dpdk_port_num: 1
                local_ip: "152.16.40.20"
                netmask:   "255.255.255.0"
                local_mac: "00:00:00:00:00:02"

    -
        name: vnf
        role: vnf
        ip: 1.1.1.2
        user: root
        password: r00t
        host: 1.1.1.2 # BM - host == ip; virtualized env - host == compute node
        interfaces:
            xe0:  # logical name from topology.yaml and vnfd.yaml
                vpci:      "0000:07:00.0"
                driver:    i40e # default kernel driver
                dpdk_port_num: 0
                local_ip: "152.16.100.19"
                netmask:   "255.255.255.0"
                local_mac: "00:00:00:00:00:03"

            xe1:  # logical name from topology.yaml and vnfd.yaml
                vpci:      "0000:07:00.1"
                driver:    i40e # default kernel driver
                dpdk_port_num: 1
                local_ip: "152.16.40.19"
                netmask:   "255.255.255.0"
                local_mac: "00:00:00:00:00:04"
        routing_table:
        - network: "152.16.100.20"
          netmask: "255.255.255.0"
          gateway: "152.16.100.20"
          if: "xe0"
        - network: "152.16.40.20"
          netmask: "255.255.255.0"
          gateway: "152.16.40.20"
          if: "xe1"
        nd_route_tbl:
        - network: "0064:ff9b:0:0:0:0:9810:6414"
          netmask: "112"
          gateway: "0064:ff9b:0:0:0:0:9810:6414"
          if: "xe0"
        - network: "0064:ff9b:0:0:0:0:9810:2814"
          netmask: "112"
          gateway: "0064:ff9b:0:0:0:0:9810:2814"
          if: "xe1"


Standalone Virtualization
-------------------------

SR-IOV
^^^^^^

SR-IOV Pre-requisites
+++++++++++++++++++++

On the host where the VM is created:
 a) Create and configure a bridge named ``br-int`` for the VM to connect to
    the external network. Currently this can be done using a VXLAN tunnel.

    Execute the following on the host where the VM is created::

      ip link add type vxlan remote <Jumphost IP> local <DUT IP> id <ID: 10> dstport 4789
      brctl addbr br-int
      brctl addif br-int vxlan0
      ip link set dev vxlan0 up
      ip addr add <IP#1, like: 172.20.2.1/24> dev br-int
      ip link set dev br-int up

  .. note:: You may need to add extra rules to iptables to forward traffic.

  .. code-block:: console

    iptables -A FORWARD -i br-int -s <network ip address>/<netmask> -j ACCEPT
    iptables -A FORWARD -o br-int -d <network ip address>/<netmask> -j ACCEPT

  Execute the following on the jump host:

  .. code-block:: console

      ip link add type vxlan remote <DUT IP> local <Jumphost IP> id <ID: 10> dstport 4789
      ip addr add <IP#2, like: 172.20.2.2/24> dev vxlan0
      ip link set dev vxlan0 up

  .. note:: The host and the jump host are different bare-metal servers.

 b) Modify the test case management CIDR.
    The IP addresses IP#1, IP#2 and the CIDR must be in the same network.

  .. code-block:: YAML

    servers:
      vnf_0:
        network_ports:
          mgmt:
            cidr: '1.1.1.7/24'

 c) Build a guest image for the VNF to run.
    Most of the sample test cases in Yardstick use a guest image called
    ``yardstick-nsb-image``, which deviates from an Ubuntu Cloud Server image.
    Yardstick has a tool for building this custom image with SampleVNF.
    It is necessary to have ``sudo`` rights to use this tool.

   You may also need to install several additional packages to use this tool,
   by following the commands below::

      sudo apt-get update && sudo apt-get install -y qemu-utils kpartx

   The image can be built using the following commands in the directory where
   Yardstick is installed::

      export YARD_IMG_ARCH='amd64'
      sudo echo "Defaults env_keep += \'YARD_IMG_ARCH\'" >> /etc/sudoers
      sudo tools/yardstick-img-dpdk-modify tools/ubuntu-server-cloudimg-samplevnf-modify.sh

   For instructions on generating a cloud image using Ansible, refer to
   :doc:`04-installation`.

   .. note:: The VM should be built with a static IP and be accessible from
      the Yardstick host. A configuration sketch follows this list.
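
As an illustrative sketch only: on an Ubuntu 16.04 guest (which uses
``ifupdown``), a static management IP could be configured via
``/etc/network/interfaces``; the interface name and addresses below are
placeholders that must match your management network::

  auto ens3
  iface ens3 inet static
      address 1.1.1.61
      netmask 255.255.255.0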


SR-IOV Config pod.yaml describing Topology
++++++++++++++++++++++++++++++++++++++++++

SR-IOV 2-Node setup
+++++++++++++++++++
.. code-block:: console

                               +--------------------+
                               |                    |
                               |                    |
                               |        DUT         |
                               |       (VNF)        |
                               |                    |
                               +--------------------+
                               | VF NIC |  | VF NIC |
                               +--------+  +--------+
                                     ^          ^
                                     |          |
                                     |          |
  +----------+               +-------------------------+
  |          |               |       ^          ^      |
  |          |               |       |          |      |
  |          | (0)<----->(0) | ------    SUT    |      |
  |    TG1   |               |                  |      |
  |          | (n)<----->(n) | -----------------       |
  |          |               |                         |
  +----------+               +-------------------------+
  trafficgen_0                          host



SR-IOV 3-Node setup - Correlated Traffic
++++++++++++++++++++++++++++++++++++++++
.. code-block:: console

                             +--------------------+
                             |                    |
                             |                    |
                             |        DUT         |
                             |       (VNF)        |
                             |                    |
                             +--------------------+
                             | VF NIC |  | VF NIC |
                             +--------+  +--------+
                                   ^          ^
                                   |          |
                                   |          |
  +----------+               +---------------------+            +--------------+
  |          |               |     ^          ^    |            |              |
  |          |               |     |          |    |            |              |
  |          | (0)<----->(0) |-----           |    |            |     TG2      |
  |    TG1   |               |         SUT    |    |            | (UDP Replay) |
  |          |               |                |    |            |              |
  |          | (n)<----->(n) |                -----| (n)<-->(n) |              |
  +----------+               +---------------------+            +--------------+
  trafficgen_0                          host                      trafficgen_1

Before executing Yardstick test cases, make sure that ``pod.yaml`` reflects the
topology and update all the required fields.

.. code-block:: console

    cp <yardstick>/etc/yardstick/nodes/standalone/trex_bm.yaml.sample /etc/yardstick/nodes/standalone/pod_trex.yaml
    cp <yardstick>/etc/yardstick/nodes/standalone/host_sriov.yaml /etc/yardstick/nodes/standalone/host_sriov.yaml

.. note:: Update all the required fields like ip, user, password, PCI
   addresses, etc.

SR-IOV Config pod_trex.yaml
+++++++++++++++++++++++++++

.. code-block:: YAML

    nodes:
    -
        name: trafficgen_0
        role: TrafficGen
        ip: 1.1.1.1
        user: root
        password: r00t
        key_filename: /root/.ssh/id_rsa
        interfaces:
            xe0:  # logical name from topology.yaml and vnfd.yaml
                vpci:      "0000:07:00.0"
                driver:    i40e # default kernel driver
                dpdk_port_num: 0
                local_ip: "152.16.100.20"
                netmask:   "255.255.255.0"
                local_mac: "00:00:00:00:00:01"
            xe1:  # logical name from topology.yaml and vnfd.yaml
                vpci:      "0000:07:00.1"
                driver:    i40e # default kernel driver
                dpdk_port_num: 1
                local_ip: "152.16.40.20"
                netmask:   "255.255.255.0"
                local_mac: "00:00:00:00:00:02"

SR-IOV Config host_sriov.yaml
+++++++++++++++++++++++++++++

.. code-block:: YAML

    nodes:
    -
       name: sriov
       role: Sriov
       ip: 192.168.100.101
       user: ""
       password: ""

SR-IOV testcase update:
``<yardstick>/samples/vnf_samples/nsut/vfw/tc_sriov_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``

Update contexts section
'''''''''''''''''''''''

.. code-block:: YAML

  contexts:
   - name: yardstick
     type: Node
     file: /etc/yardstick/nodes/standalone/pod_trex.yaml
   - type: StandaloneSriov
     file: /etc/yardstick/nodes/standalone/host_sriov.yaml
     name: yardstick
     vm_deploy: True
     flavor:
       images: "/var/lib/libvirt/images/ubuntu.qcow2"
       ram: 4096
       extra_specs:
         hw:cpu_sockets: 1
         hw:cpu_cores: 6
         hw:cpu_threads: 2
       user: "" # update VM username
       password: "" # update password
     servers:
       vnf_0:
         network_ports:
           mgmt:
             cidr: '1.1.1.61/24'  # Update VM IP address, if static, <ip>/<mask> or if dynamic, <start of ip>/<mask>
           xe0:
             - uplink_0
           xe1:
             - downlink_0
     networks:
       uplink_0:
         phy_port: "0000:05:00.0"
         vpci: "0000:00:07.0"
         cidr: '152.16.100.10/24'
         gateway_ip: '152.16.100.20'
       downlink_0:
         phy_port: "0000:05:00.1"
         vpci: "0000:00:08.0"
         cidr: '152.16.40.10/24'
         gateway_ip: '152.16.100.20'

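With the context updated, the SR-IOV test case can then be launched from the
Yardstick container in the usual way::

  yardstick --debug task start samples/vnf_samples/nsut/vfw/tc_sriov_rfc2544_ipv4_1rule_1flow_64B_trex.yaml
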

OVS-DPDK
^^^^^^^^

OVS-DPDK Pre-requisites
+++++++++++++++++++++++

On the host where the VM is created:
 a) Create and configure a bridge named ``br-int`` for the VM to connect to
    the external network. Currently this can be done using a VXLAN tunnel.

    Execute the following on the host where the VM is created:

  .. code-block:: console

      ip link add type vxlan remote <Jumphost IP> local <DUT IP> id <ID: 10> dstport 4789
      brctl addbr br-int
      brctl addif br-int vxlan0
      ip link set dev vxlan0 up
      ip addr add <IP#1, like: 172.20.2.1/24> dev br-int
      ip link set dev br-int up

  .. note:: You may need to add extra rules to iptables to forward traffic.

  .. code-block:: console

    iptables -A FORWARD -i br-int -s <network ip address>/<netmask> -j ACCEPT
    iptables -A FORWARD -o br-int -d <network ip address>/<netmask> -j ACCEPT

  Execute the following on the jump host:

  .. code-block:: console

      ip link add type vxlan remote <DUT IP> local <Jumphost IP> id <ID: 10> dstport 4789
      ip addr add <IP#2, like: 172.20.2.2/24> dev vxlan0
      ip link set dev vxlan0 up

  .. note:: The host and the jump host are different bare-metal servers.

 b) Modify the test case management CIDR.
    The IP addresses IP#1, IP#2 and the CIDR must be in the same network.

  .. code-block:: YAML

    servers:
      vnf_0:
        network_ports:
          mgmt:
            cidr: '1.1.1.7/24'

 c) Build a guest image for the VNF to run.
    Most of the sample test cases in Yardstick use a guest image called
    ``yardstick-nsb-image``, which deviates from an Ubuntu Cloud Server image.
    Yardstick has a tool for building this custom image with SampleVNF.
    It is necessary to have ``sudo`` rights to use this tool.

   You may need to install several additional packages to use this tool, by
   following the commands below::

      sudo apt-get update && sudo apt-get install -y qemu-utils kpartx

   The image can be built using the following commands in the directory where
   Yardstick is installed::

      export YARD_IMG_ARCH='amd64'
      sudo echo "Defaults env_keep += \'YARD_IMG_ARCH\'" >> /etc/sudoers
      sudo tools/yardstick-img-dpdk-modify tools/ubuntu-server-cloudimg-samplevnf-modify.sh

   For more details, refer to chapter :doc:`04-installation`.

   .. note:: The VM should be built with a static IP and should be accessible
      from the Yardstick host.

3. OVS & DPDK version:

   * OVS 2.7 and DPDK 16.11.1 and above are supported; a version check sketch
     follows this list.

4. Setup `OVS-DPDK`_ on the host.
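
As a sketch, the installed versions can be confirmed on the host as follows
(in OVS builds with DPDK support, ``ovs-vswitchd --version`` is expected to
also report the DPDK version)::

  ovs-vsctl --version
  ovs-vswitchd --version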


OVS-DPDK Config pod.yaml describing Topology
++++++++++++++++++++++++++++++++++++++++++++

OVS-DPDK 2-Node setup
+++++++++++++++++++++

.. code-block:: console

                               +--------------------+
                               |                    |
                               |                    |
                               |        DUT         |
                               |       (VNF)        |
                               |                    |
                               +--------------------+
                               | virtio |  | virtio |
                               +--------+  +--------+
                                    ^          ^
                                    |          |
                                    |          |
                               +--------+  +--------+
                               | vHOST0 |  | vHOST1 |
  +----------+               +-------------------------+
  |          |               |       ^          ^      |
  |          |               |       |          |      |
  |          | (0)<----->(0) | ------           |      |
  |    TG1   |               |          SUT     |      |
  |          |               |       (ovs-dpdk) |      |
  |          | (n)<----->(n) |------------------       |
  +----------+               +-------------------------+
  trafficgen_0                          host


OVS-DPDK 3-Node setup - Correlated Traffic
++++++++++++++++++++++++++++++++++++++++++

.. code-block:: console

                               +--------------------+
                               |                    |
                               |                    |
                               |        DUT         |
                               |       (VNF)        |
                               |                    |
                               +--------------------+
                               | virtio |  | virtio |
                               +--------+  +--------+
                                    ^          ^
                                    |          |
                                    |          |
                               +--------+  +--------+
                               | vHOST0 |  | vHOST1 |
  +----------+               +-------------------------+          +------------+
  |          |               |       ^          ^      |          |            |
  |          |               |       |          |      |          |            |
  |          | (0)<----->(0) | ------           |      |          |    TG2     |
  |    TG1   |               |          SUT     |      |          |(UDP Replay)|
  |          |               |      (ovs-dpdk)  |      |          |            |
  |          | (n)<----->(n) |                  ------ |(n)<-->(n)|            |
  +----------+               +-------------------------+          +------------+
  trafficgen_0                          host                       trafficgen_1


Before executing Yardstick test cases, make sure that the ``pod.yaml`` reflects
the topology and update all the required fields::

  cp <yardstick>/etc/yardstick/nodes/standalone/trex_bm.yaml.sample /etc/yardstick/nodes/standalone/pod_trex.yaml
  cp <yardstick>/etc/yardstick/nodes/standalone/host_ovs.yaml /etc/yardstick/nodes/standalone/host_ovs.yaml

.. note:: Update all the required fields like ip, user, password, PCI
   addresses, etc.

OVS-DPDK Config pod_trex.yaml
+++++++++++++++++++++++++++++

.. code-block:: YAML

    nodes:
    -
      name: trafficgen_0
      role: TrafficGen
      ip: 1.1.1.1
      user: root
      password: r00t
      interfaces:
          xe0:  # logical name from topology.yaml and vnfd.yaml
              vpci:      "0000:07:00.0"
              driver:    i40e # default kernel driver
              dpdk_port_num: 0
              local_ip: "152.16.100.20"
              netmask:   "255.255.255.0"
              local_mac: "00:00:00:00:00:01"
          xe1:  # logical name from topology.yaml and vnfd.yaml
              vpci:      "0000:07:00.1"
              driver:    i40e # default kernel driver
              dpdk_port_num: 1
              local_ip: "152.16.40.20"
              netmask:   "255.255.255.0"
              local_mac: "00:00:00:00:00:02"

OVS-DPDK Config host_ovs.yaml
+++++++++++++++++++++++++++++

.. code-block:: YAML

    nodes:
    -
       name: ovs_dpdk
       role: OvsDpdk
       ip: 192.168.100.101
       user: ""
       password: ""

ovs_dpdk testcase update:
``<yardstick>/samples/vnf_samples/nsut/vfw/tc_ovs_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``

Update contexts section
'''''''''''''''''''''''

.. code-block:: YAML

  contexts:
   - name: yardstick
     type: Node
     file: /etc/yardstick/nodes/standalone/pod_trex.yaml
   - type: StandaloneOvsDpdk
     name: yardstick
     file: /etc/yardstick/nodes/standalone/host_ovs.yaml
     vm_deploy: True
     ovs_properties:
       version:
         ovs: 2.7.0
         dpdk: 16.11.1
       pmd_threads: 2
       ram:
         socket_0: 2048
         socket_1: 2048
       queues: 4
       vpath: "/usr/local"

     flavor:
       images: "/var/lib/libvirt/images/ubuntu.qcow2"
       ram: 4096
       extra_specs:
         hw:cpu_sockets: 1
         hw:cpu_cores: 6
         hw:cpu_threads: 2
       user: "" # update VM username
       password: "" # update password
     servers:
       vnf_0:
         network_ports:
           mgmt:
             cidr: '1.1.1.61/24'  # Update VM IP address, if static, <ip>/<mask> or if dynamic, <start of ip>/<mask>
           xe0:
             - uplink_0
           xe1:
             - downlink_0
     networks:
       uplink_0:
         phy_port: "0000:05:00.0"
         vpci: "0000:00:07.0"
         cidr: '152.16.100.10/24'
         gateway_ip: '152.16.100.20'
       downlink_0:
         phy_port: "0000:05:00.1"
         vpci: "0000:00:08.0"
         cidr: '152.16.40.10/24'
         gateway_ip: '152.16.100.20'
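
The OVS-DPDK test case can then be launched from the Yardstick container::

  yardstick --debug task start samples/vnf_samples/nsut/vfw/tc_ovs_rfc2544_ipv4_1rule_1flow_64B_trex.yaml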


OpenStack with SR-IOV support
-----------------------------

This section describes how to run a Sample VNF test case, using Heat context,
with SR-IOV. It also covers how to install OpenStack in Ubuntu 16.04, using
DevStack, with SR-IOV support.


Single node OpenStack with external TG
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. code-block:: console

                                 +----------------------------+
                                 |OpenStack(DevStack)         |
                                 |                            |
                                 |   +--------------------+   |
                                 |   |sample-VNF VM       |   |
                                 |   |                    |   |
                                 |   |        DUT         |   |
                                 |   |       (VNF)        |   |
                                 |   |                    |   |
                                 |   +--------+  +--------+   |
                                 |   | VF NIC |  | VF NIC |   |
                                 |   +-----+--+--+----+---+   |
                                 |         ^          ^       |
                                 |         |          |       |
  +----------+                   +---------+----------+-------+
  |          |                   |        VF0        VF1      |
  |          |                   |         ^          ^       |
  |          |                   |         |   SUT    |       |
  |    TG    | (PF0)<----->(PF0) +---------+          |       |
  |          |                   |                    |       |
  |          | (PF1)<----->(PF1) +--------------------+       |
  |          |                   |                            |
  +----------+                   +----------------------------+
  trafficgen_0                                 host

Host pre-configuration
++++++++++++++++++++++

.. warning:: The following configuration requires sudo access to the system.
   Make sure that your user has that access.

Enable the Intel VT-d or AMD-Vi extension in the BIOS. Some system
manufacturers disable this extension by default.

Activate the Intel VT-d or AMD-Vi extension in the kernel by modifying the GRUB
config file ``/etc/default/grub``.

For the Intel platform::

  ...
  GRUB_CMDLINE_LINUX_DEFAULT="intel_iommu=on"
  ...

For the AMD platform::

  ...
  GRUB_CMDLINE_LINUX_DEFAULT="amd_iommu=on"
  ...

Update the grub configuration file and restart the system:

.. warning:: The following command will reboot the system.

.. code:: bash

  sudo update-grub
  sudo reboot

Make sure the extension has been enabled::

  sudo journalctl -b 0 | grep -e IOMMU -e DMAR

  Feb 06 14:50:14 hostname kernel: ACPI: DMAR 0x000000006C406000 0001E0 (v01 INTEL  S2600WF  00000001 INTL 20091013)
  Feb 06 14:50:14 hostname kernel: DMAR: IOMMU enabled
  Feb 06 14:50:14 hostname kernel: DMAR: Host address width 46
  Feb 06 14:50:14 hostname kernel: DMAR: DRHD base: 0x000000d37fc000 flags: 0x0
  Feb 06 14:50:14 hostname kernel: DMAR: dmar0: reg_base_addr d37fc000 ver 1:0 cap 8d2078c106f0466 ecap f020de
  Feb 06 14:50:14 hostname kernel: DMAR: DRHD base: 0x000000e0ffc000 flags: 0x0
  Feb 06 14:50:14 hostname kernel: DMAR: dmar1: reg_base_addr e0ffc000 ver 1:0 cap 8d2078c106f0466 ecap f020de
  Feb 06 14:50:14 hostname kernel: DMAR: DRHD base: 0x000000ee7fc000 flags: 0x0

.. TODO: Refer to the yardstick installation guide for proxy set up

Setup the system proxy (if needed). Add the following configuration into the
``/etc/environment`` file:

.. note:: The proxy server name/port and IPs should be changed according to
  the actual/current proxy configuration in the lab.

.. code:: bash

  export http_proxy=http://proxy.company.com:port
  export https_proxy=http://proxy.company.com:port
  export ftp_proxy=http://proxy.company.com:port
  export no_proxy=localhost,127.0.0.1,company.com,<IP-OF-HOST1>,<IP-OF-HOST2>,...
  export NO_PROXY=localhost,127.0.0.1,company.com,<IP-OF-HOST1>,<IP-OF-HOST2>,...

Upgrade the system:

.. code:: bash

  sudo -EH apt-get update
  sudo -EH apt-get upgrade
  sudo -EH apt-get dist-upgrade

Install the dependencies needed for DevStack:

.. code:: bash

  sudo -EH apt-get install python python-dev python-pip

Setup SR-IOV ports on the host:

.. note:: The ``enp24s0f0`` and ``enp24s0f1`` are physical function (PF)
  interfaces on a host and ``enp24s0f3`` is a public interface used in
  OpenStack, so the interface names should be changed according to the HW
  environment used for testing.

.. code:: bash

  sudo ip link set dev enp24s0f0 up
  sudo ip link set dev enp24s0f1 up
  sudo ip link set dev enp24s0f3 up

  # Create VFs on PF
  echo 2 | sudo tee /sys/class/net/enp24s0f0/device/sriov_numvfs
  echo 2 | sudo tee /sys/class/net/enp24s0f1/device/sriov_numvfs
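
To verify that the VFs were created, something like the following can be used
(the interface names depend on the hardware)::

  lspci | grep -i "Virtual Function"
  ip link show enp24s0f0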


DevStack installation
+++++++++++++++++++++

If you want to try out NSB, but don't have OpenStack set up, you can use
`Devstack`_ to install OpenStack on a host. Please note that the
``stable/pike`` branch of the devstack repo should be used during the
installation. The required ``local.conf`` configuration file is described
below.

DevStack configuration file:

.. note:: Update the devstack configuration file by replacing the
  angle-bracket placeholders with actual values from your environment.

.. note:: Use the ``lspci | grep Ether`` & ``lspci -n | grep <PCI ADDRESS>``
  commands to get the device and vendor id of the virtual function (VF).

.. literalinclude:: code/single-devstack-local.conf
   :language: console

Start the devstack installation on the host.

TG host configuration
+++++++++++++++++++++

Yardstick automatically installs and configures the TRex traffic generator on
the TG host based on the provided POD file (see below). However, it is
recommended to check the compatibility of the NIC installed on the TG server
with the TRex software using the
`manual <https://trex-tgn.cisco.com/trex/doc/trex_manual.html>`_.

Run the Sample VNF test case
++++++++++++++++++++++++++++

There is an example of a Sample VNF test case ready to be executed in an
OpenStack environment with SR-IOV support: ``samples/vnf_samples/nsut/vfw/
tc_heat_sriov_external_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``.

Install Yardstick using the `Install Yardstick (NSB Testing)`_ steps for the
OpenStack context.

Create a pod file for the TG in the yardstick repo folder located in the
yardstick container:

.. note:: The ``ip``, ``user``, ``password`` and ``vpci`` fields should be
  changed according to the HW environment used for the testing. Use the
  ``lshw -c network -businfo`` command to get the PF PCI address for the
  ``vpci`` field.

.. literalinclude:: code/single-yardstick-pod.conf
   :language: console

Run the Sample vFW RFC2544 SR-IOV TC (``samples/vnf_samples/nsut/vfw/
tc_heat_sriov_external_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``) in the heat
context using the steps described in the `NS testing - using yardstick CLI`_
section.

Multi node OpenStack TG and VNF setup (two nodes)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. code-block:: console

  +----------------------------+                   +----------------------------+
  |OpenStack(DevStack)         |                   |OpenStack(DevStack)         |
  |                            |                   |                            |
  |   +--------------------+   |                   |   +--------------------+   |
  |   |sample-VNF VM       |   |                   |   |sample-VNF VM       |   |
  |   |                    |   |                   |   |                    |   |
  |   |         TG         |   |                   |   |        DUT         |   |
  |   |    trafficgen_0    |   |                   |   |       (VNF)        |   |
  |   |                    |   |                   |   |                    |   |
  |   +--------+  +--------+   |                   |   +--------+  +--------+   |
  |   | VF NIC |  | VF NIC |   |                   |   | VF NIC |  | VF NIC |   |
  |   +----+---+--+----+---+   |                   |   +-----+--+--+----+---+   |
  |        ^           ^       |                   |         ^          ^       |
  |        |           |       |                   |         |          |       |
  +--------+-----------+-------+                   +---------+----------+-------+
  |       VF0         VF1      |                   |        VF0        VF1      |
  |        ^           ^       |                   |         ^          ^       |
  |        |    SUT2   |       |                   |         |   SUT1   |       |
  |        |           +-------+ (PF0)<----->(PF0) +---------+          |       |
  |        |                   |                   |                    |       |
  |        +-------------------+ (PF1)<----->(PF1) +--------------------+       |
  |                            |                   |                            |
  +----------------------------+                   +----------------------------+
           host2 (compute)                               host1 (controller)


Controller/Compute pre-configuration
++++++++++++++++++++++++++++++++++++

Pre-configuration of the controller and compute hosts is the same as
described in the `Host pre-configuration`_ section.

DevStack configuration
++++++++++++++++++++++

A reference ``local.conf`` for deploying OpenStack in a multi-host environment
using `Devstack`_ is shown in this section. The ``stable/pike`` branch of the
devstack repo should be used during the installation.

.. note:: Update the devstack configuration files by replacing the
  angle-bracket placeholders with actual values from your environment.

.. note:: Use the ``lspci | grep Ether`` & ``lspci -n | grep <PCI ADDRESS>``
  commands to get the device and vendor id of the virtual function (VF).

DevStack configuration file for the controller host:

.. literalinclude:: code/multi-devstack-controller-local.conf
   :language: console

DevStack configuration file for the compute host:

.. literalinclude:: code/multi-devstack-compute-local.conf
   :language: console

Start the devstack installation on the controller and compute hosts.

Run the sample vFW TC
+++++++++++++++++++++

Install Yardstick using the `Install Yardstick (NSB Testing)`_ steps for the
OpenStack context.

Run the sample vFW RFC2544 SR-IOV test case
(``samples/vnf_samples/nsut/vfw/tc_heat_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``)
in the heat context using the steps described in the
`NS testing - using yardstick CLI`_ section and the following Yardstick command
line arguments:

.. code:: bash

  yardstick -d task start --task-args='{"provider": "sriov"}' \
  samples/vnf_samples/nsut/vfw/tc_heat_rfc2544_ipv4_1rule_1flow_64B_trex.yaml


Enabling other Traffic generators
---------------------------------

IxLoad
^^^^^^

1. Software needed: IxLoadAPI ``<IxLoadTclApi version>Linux64.bin.tgz`` and
   ``<IxOS version>Linux64.bin.tar.gz`` (download from the Ixia support site).
   Install ``<IxLoadTclApi version>Linux64.bin.tgz`` and
   ``<IxOS version>Linux64.bin.tar.gz``.
   If the installation was not done inside the container, then after installing
   the Ixia client, check ``/opt/ixia/ixload/<ver>/bin/ixloadpython`` and make
   sure you can run this cmd inside the yardstick container. Usually the user
   is required to copy or link ``/opt/ixia/python/<ver>/bin/ixiapython`` to
   ``/usr/bin/ixiapython<ver>`` inside the container; a sketch follows.
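
   For example, the link step could look like this (the ``<ver>`` placeholders
   depend on the installed IxLoad/IxOS releases)::

     ln -s /opt/ixia/python/<ver>/bin/ixiapython /usr/bin/ixiapython<ver>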

2. Update the ``pod_ixia.yaml`` file with the Ixia details.

  .. code-block:: console

    cp <repo>/etc/yardstick/nodes/pod.yaml.nsb.sample.ixia \
      etc/yardstick/nodes/pod_ixia.yaml

  Configure ``pod_ixia.yaml``:

  .. literalinclude:: code/pod_ixia.yaml
     :language: console

  For sriov/ovs_dpdk pod files, please refer to `Standalone Virtualization`_
  for the ovs-dpdk/sriov configuration.

3. Start the IxOS TCL Server (install 'Ixia IxExplorer IxOS <version>').
   You will also need to configure the IxLoad machine to start the IXIA
   IxosTclServer. This can be started like so:

   * Connect to the IxLoad machine using RDP
   * Go to:
     ``Start->Programs->Ixia->IxOS->IxOS 8.01-GA-Patch1->Ixia Tcl Server IxOS 8.01-GA-Patch1``
     or
     ``C:\Program Files (x86)\Ixia\IxOS\8.01-GA-Patch1\ixTclServer.exe``

4. Create a folder ``Results`` in ``C:\`` and share the folder on the network.

5. Execute the testcase in the samplevnf folder, e.g.
   ``<repo>/samples/vnf_samples/nsut/vfw/tc_baremetal_http_ixload_1b_Requests-65000_Concurrency.yaml``

IxNetwork
^^^^^^^^^

IxNetwork testcases use the IxNetwork API Python Bindings module, which is
installed as part of the requirements of the project.

1. Update the ``pod_ixia.yaml`` file with the Ixia details.

  .. code-block:: console

    cp <repo>/etc/yardstick/nodes/pod.yaml.nsb.sample.ixia \
    etc/yardstick/nodes/pod_ixia.yaml

  Configure ``pod_ixia.yaml``:

  .. literalinclude:: code/pod_ixia.yaml
     :language: console

  For sriov/ovs_dpdk pod files, please refer to the above
  `Standalone Virtualization`_ section for the ovs-dpdk/sriov configuration.

2. Start the IxNetwork TCL Server.
   You will also need to configure the IxNetwork machine to start the IXIA
   IxNetworkTclServer. This can be started like so:

    * Connect to the IxNetwork machine using RDP
    * Go to:
      ``Start->Programs->Ixia->IxNetwork->IxNetwork 7.21.893.14 GA->IxNetworkTclServer``
      (or ``IxNetworkApiServer``)

3. Execute the testcase in the samplevnf folder, e.g.
   ``<repo>/samples/vnf_samples/nsut/vfw/tc_baremetal_rfc2544_ipv4_1rule_1flow_64B_ixia.yaml``

Spirent Landslide
-----------------

In order to use Spirent Landslide for vEPC testcases, some dependencies have
to be preinstalled and properly configured.

- Java

    A 32-bit Java installation is required for the Spirent Landslide TCL API.

    | ``$ sudo apt-get install openjdk-8-jdk:i386``

    .. important::
      Make sure ``LD_LIBRARY_PATH`` is pointing to the 32-bit JRE. For more
      details check the `Linux Troubleshooting <http://TAS_HOST_IP/tclapiinstall.html#trouble>`_
      section of the installation instructions.

- LsApi (Tcl API module)

    Follow the Landslide documentation for detailed instructions on the Linux
    installation of the Tcl API and its dependencies
    ``http://TAS_HOST_IP/tclapiinstall.html``.
    For working with the LsApi Python wrapper, only steps 1-5 are required.

    .. note:: After installation make sure your API home path is included in
      the ``PYTHONPATH`` environment variable.

    .. important::
      The current version of the LsApi module has an issue with reading
      ``LD_LIBRARY_PATH``. For the LsApi module to initialize correctly, the
      following lines (184-186) in lsapi.py

    .. code-block:: python

        ldpath = os.environ.get('LD_LIBRARY_PATH', '')
        if ldpath == '':
         environ['LD_LIBRARY_PATH'] = environ['LD_LIBRARY_PATH'] + ':' + ldpath

    should be changed to:

    .. code-block:: python

        ldpath = os.environ.get('LD_LIBRARY_PATH', '')
        if not ldpath == '':
               environ['LD_LIBRARY_PATH'] = environ['LD_LIBRARY_PATH'] + ':' + ldpath

.. note:: The Spirent Landslide TCL software package needs to be updated in
  case the user upgrades to a new version of the Spirent Landslide software.