.. This work is licensed under a Creative Commons Attribution 4.0 International
.. License.
.. http://creativecommons.org/licenses/by/4.0
.. (c) OPNFV, 2016-2018 Intel Corporation.

..
   Convention for heading levels in Yardstick documentation:

   =======  Heading 0 (reserved for the title in a document)
   -------  Heading 1
   ~~~~~~~  Heading 2
   +++++++  Heading 3
   '''''''  Heading 4

   Avoid deeper levels because they do not render well.

=======================================
Yardstick - NSB Testing - Installation
=======================================
20
21 Abstract
22 --------
23
24 The Network Service Benchmarking (NSB) extends the yardstick framework to do
25 VNF characterization and benchmarking in three different execution
26 environments viz., bare metal i.e. native Linux environment, standalone virtual
27 environment and managed virtualized environment (e.g. Open stack etc.).
28 It also brings in the capability to interact with external traffic generators
29 both hardware & software based for triggering and validating the traffic
30 according to user defined profiles.
31
32 The steps needed to run Yardstick with NSB testing are:
33
34 * Install Yardstick (NSB Testing).
35 * Setup/Reference pod.yaml describing Test topology
36 * Create/Reference the test configuration yaml file.
37 * Run the test case.
38
39
Prerequisites
-------------

Refer to the Yardstick installation chapter for more information on Yardstick
prerequisites.

Several prerequisites are needed for Yardstick (VNF testing):

  * Python modules: pyzmq, pika
  * flex
  * bison
  * build-essential
  * automake
  * libtool
  * librabbitmq-dev
  * rabbitmq-server
  * collectd
  * intel-cmt-cat

Hardware & Software Ingredients
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

SUT requirements:


   ======= ===================
   Item    Description
   ======= ===================
   Memory  Min 20GB
   NICs    2 x 10G
   OS      Ubuntu 16.04.3 LTS
   kernel  4.4.0-34-generic
   DPDK    17.02
   ======= ===================

Boot and BIOS settings:


   ============= =================================================
   Boot settings default_hugepagesz=1G hugepagesz=1G hugepages=16
                 hugepagesz=2M hugepages=2048 isolcpus=1-11,22-33
                 nohz_full=1-11,22-33 rcu_nocbs=1-11,22-33
                 iommu=on iommu=pt intel_iommu=on
                 Note: nohz_full and rcu_nocbs are used to disable
                 Linux kernel interrupts on the isolated cores
   BIOS          CPU Power and Performance Policy <Performance>
                 CPU C-state Disabled
                 CPU P-state Disabled
                 Enhanced Intel® SpeedStep® Tech Disabled
                 Hyper-Threading Technology (If supported) Enabled
                 Virtualization Technology Enabled
                 Intel(R) VT for Direct I/O Enabled
                 Coherency Enabled
                 Turbo Boost Disabled
   ============= =================================================


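Once the system has rebooted with these parameters, the hugepage and CPU
isolation settings can be verified from ``/proc``; a minimal check (the exact
values depend on your system):

```shell
# Show the kernel command line used for this boot (should contain the
# hugepage and isolcpus parameters from the table above)
cat /proc/cmdline

# Confirm that hugepages were actually reserved
grep Huge /proc/meminfo
```
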
Install Yardstick (NSB Testing)
-------------------------------

Download the source code and install Yardstick from it:

.. code-block:: console

  git clone https://gerrit.opnfv.org/gerrit/yardstick

  cd yardstick

  # Switch to latest stable branch
  # git checkout <tag or stable branch>
  git checkout stable/euphrates
Configure the network proxy, either using the environment variables or setting
the global environment file:

.. code-block:: ini

    cat /etc/environment
    http_proxy='http://proxy.company.com:port'
    https_proxy='http://proxy.company.com:port'

.. code-block:: console

    export http_proxy='http://proxy.company.com:port'
    export https_proxy='http://proxy.company.com:port'
The last step is to modify the Yardstick installation inventory, used by
Ansible:

.. code-block:: ini

  cat ./ansible/install-inventory.ini
  [jumphost]
  localhost  ansible_connection=local

  [yardstick-standalone]
  yardstick-standalone-node ansible_host=192.168.1.2
  yardstick-standalone-node-2 ansible_host=192.168.1.3

  # section below is only due to backward compatibility.
  # it will be removed later
  [yardstick:children]
  jumphost

  [all:vars]
  ansible_user=root
  ansible_pass=root

.. note::

   SSH access without password needs to be configured for all your nodes
   defined in the ``install-inventory.ini`` file.
   If you want to use password authentication you need to install sshpass:

   .. code-block:: console

     sudo -EH apt-get install sshpass
To execute an installation for a Bare-Metal or a Standalone context:

.. code-block:: console

    ./nsb_setup.sh


To execute an installation for an OpenStack context:

.. code-block:: console

    ./nsb_setup.sh <path to admin-openrc.sh>

In both cases the command sets up a Docker container with the latest
Yardstick code. To enter the container:

.. code-block:: console

  docker exec -it yardstick bash

The script also automatically downloads all the packages needed for the NSB
testing setup. Refer to the *Install Yardstick using Docker (recommended)*
section in chapter :doc:`04-installation` for more details on Docker.

Another way to execute an installation for a Bare-Metal or a Standalone
context is to use the Ansible script ``install.yaml``. Refer to chapter
:doc:`04-installation` for more details.

System Topology
---------------

.. code-block:: console

  +----------+              +----------+
  |          |              |          |
  |          | (0)----->(0) |          |
  |    TG1   |              |    DUT   |
  |          |              |          |
  |          | (1)<-----(1) |          |
  +----------+              +----------+
  trafficgen_1                   vnf


Environment parameters and credentials
--------------------------------------

Config yardstick conf
~~~~~~~~~~~~~~~~~~~~~

If you did not run ``yardstick env influxdb`` inside the container, which
generates a correct ``yardstick.conf``, then create the config file manually
(run inside the container):
::

    cp ./etc/yardstick/yardstick.conf.sample /etc/yardstick/yardstick.conf
    vi /etc/yardstick/yardstick.conf

Add ``trex_path``, ``trex_client_lib`` and ``bin_path`` to the ``[nsb]``
section.

::

  [DEFAULT]
  debug = True
  dispatcher = file, influxdb

  [dispatcher_influxdb]
  timeout = 5
  target = http://{YOUR_IP_HERE}:8086
  db_name = yardstick
  username = root
  password = root

  [nsb]
  trex_path=/opt/nsb_bin/trex/scripts
  bin_path=/opt/nsb_bin
  trex_client_lib=/opt/nsb_bin/trex_client/stl

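``yardstick.conf`` is standard INI syntax, so a quick sanity check with
Python's ``configparser`` is possible before starting a run; a minimal sketch
(the scratch copy under ``/tmp`` is purely illustrative):

```shell
# Write the [nsb] fragment shown above to a scratch file and parse it back
cat > /tmp/yardstick.conf.check <<'EOF'
[nsb]
trex_path=/opt/nsb_bin/trex/scripts
bin_path=/opt/nsb_bin
trex_client_lib=/opt/nsb_bin/trex_client/stl
EOF

# Prints the parsed bin_path if the syntax is valid
python3 -c "import configparser; c = configparser.ConfigParser(); c.read('/tmp/yardstick.conf.check'); print(c['nsb']['bin_path'])"
# prints: /opt/nsb_bin
```

In practice point ``read()`` at ``/etc/yardstick/yardstick.conf`` instead.
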
Run Yardstick - Network Service Testcases
-----------------------------------------


NS testing - using yardstick CLI
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

  See :doc:`04-installation`

.. code-block:: console


  docker exec -it yardstick /bin/bash
  source /etc/yardstick/openstack.creds (only for heat TC if nsb_setup.sh was NOT used)
  export EXTERNAL_NETWORK="<openstack public network>" (only for heat TC)
  yardstick --debug task start yardstick/samples/vnf_samples/nsut/<vnf>/<test case>
Network Service Benchmarking - Bare-Metal
-----------------------------------------

Bare-Metal Config pod.yaml describing Topology
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Bare-Metal 2-Node setup
+++++++++++++++++++++++
.. code-block:: console

  +----------+              +----------+
  |          |              |          |
  |          | (0)----->(0) |          |
  |    TG1   |              |    DUT   |
  |          |              |          |
  |          | (n)<-----(n) |          |
  +----------+              +----------+
  trafficgen_1                   vnf

Bare-Metal 3-Node setup - Correlated Traffic
++++++++++++++++++++++++++++++++++++++++++++
.. code-block:: console

  +----------+              +----------+            +------------+
  |          |              |          |            |            |
  |          |              |          |            |            |
  |          | (0)----->(0) |          |            |    UDP     |
  |    TG1   |              |    DUT   |            |   Replay   |
  |          |              |          |            |            |
  |          |              |          |(1)<---->(0)|            |
  +----------+              +----------+            +------------+
  trafficgen_1                   vnf                 trafficgen_2


Bare-Metal Config pod.yaml
~~~~~~~~~~~~~~~~~~~~~~~~~~
Before executing Yardstick test cases, make sure that ``pod.yaml`` reflects
the topology and update all the required fields::

    cp /etc/yardstick/nodes/pod.yaml.nsb.sample /etc/yardstick/nodes/pod.yaml

.. code-block:: YAML

    nodes:
    -
        name: trafficgen_1
        role: TrafficGen
        ip: 1.1.1.1
        user: root
        password: r00t
        interfaces:
            xe0:  # logical name from topology.yaml and vnfd.yaml
                vpci:      "0000:07:00.0"
                driver:    i40e # default kernel driver
                dpdk_port_num: 0
                local_ip: "152.16.100.20"
                netmask:   "255.255.255.0"
                local_mac: "00:00:00:00:00:01"
            xe1:  # logical name from topology.yaml and vnfd.yaml
                vpci:      "0000:07:00.1"
                driver:    i40e # default kernel driver
                dpdk_port_num: 1
                local_ip: "152.16.40.20"
                netmask:   "255.255.255.0"
                local_mac: "00:00:00:00:00:02"

    -
        name: vnf
        role: vnf
        ip: 1.1.1.2
        user: root
        password: r00t
        host: 1.1.1.2 # BM: host == ip; virtualized env: host == compute node
        interfaces:
            xe0:  # logical name from topology.yaml and vnfd.yaml
                vpci:      "0000:07:00.0"
                driver:    i40e # default kernel driver
                dpdk_port_num: 0
                local_ip: "152.16.100.19"
                netmask:   "255.255.255.0"
                local_mac: "00:00:00:00:00:03"

            xe1:  # logical name from topology.yaml and vnfd.yaml
                vpci:      "0000:07:00.1"
                driver:    i40e # default kernel driver
                dpdk_port_num: 1
                local_ip: "152.16.40.19"
                netmask:   "255.255.255.0"
                local_mac: "00:00:00:00:00:04"
        routing_table:
        - network: "152.16.100.20"
          netmask: "255.255.255.0"
          gateway: "152.16.100.20"
          if: "xe0"
        - network: "152.16.40.20"
          netmask: "255.255.255.0"
          gateway: "152.16.40.20"
          if: "xe1"
        nd_route_tbl:
        - network: "0064:ff9b:0:0:0:0:9810:6414"
          netmask: "112"
          gateway: "0064:ff9b:0:0:0:0:9810:6414"
          if: "xe0"
        - network: "0064:ff9b:0:0:0:0:9810:2814"
          netmask: "112"
          gateway: "0064:ff9b:0:0:0:0:9810:2814"
          if: "xe1"


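A ``.`` in place of a ``:`` inside a ``local_mac`` value is easy to miss and
only fails at run time, so it can be worth linting the file before a run; a
minimal sketch using a throwaway sample file (point the ``grep`` at
``/etc/yardstick/nodes/pod.yaml`` in practice):

```shell
# Throwaway sample: one well-formed MAC and one with a '.' typo
cat > /tmp/pod_sample.yaml <<'EOF'
local_mac: "00:00:00:00:00:01"
local_mac: "00:00.00:00:00:02"
EOF

# Print any local_mac lines that do NOT match the "aa:bb:cc:dd:ee:ff" pattern
grep 'local_mac' /tmp/pod_sample.yaml \
  | grep -Ev '"([0-9A-Fa-f]{2}:){5}[0-9A-Fa-f]{2}"' \
  && echo "malformed MAC entries found"
```
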
Network Service Benchmarking - Standalone Virtualization
--------------------------------------------------------

SR-IOV
~~~~~~

SR-IOV Pre-requisites
+++++++++++++++++++++

On the host, where the VM is created:
 a) Create and configure a bridge named ``br-int`` for the VM to connect to
    the external network. Currently this can be done using a VXLAN tunnel.

    Execute the following on the host, where the VM is created:

  .. code-block:: console

      ip link add type vxlan remote <Jumphost IP> local <DUT IP> id <ID: 10> dstport 4789
      brctl addbr br-int
      brctl addif br-int vxlan0
      ip link set dev vxlan0 up
      ip addr add <IP#1, like: 172.20.2.1/24> dev br-int
      ip link set dev br-int up

  .. note:: It may be necessary to add extra iptables rules to forward
     traffic.

  .. code-block:: console

    iptables -A FORWARD -i br-int -s <network ip address>/<netmask> -j ACCEPT
    iptables -A FORWARD -o br-int -d <network ip address>/<netmask> -j ACCEPT

  Execute the following on the jump host:

  .. code-block:: console

      ip link add type vxlan remote <DUT IP> local <Jumphost IP> id <ID: 10> dstport 4789
      ip addr add <IP#2, like: 172.20.2.2/24> dev vxlan0
      ip link set dev vxlan0 up

  .. note:: The host and the jump host are different bare-metal servers.

 b) Modify the test case management CIDR.
    IP addresses IP#1, IP#2 and the CIDR must be in the same network.

  .. code-block:: YAML

    servers:
      vnf:
        network_ports:
          mgmt:
            cidr: '1.1.1.7/24'

 c) Build a guest image for the VNF to run.
    Most of the sample test cases in Yardstick use a guest image called
    ``yardstick-nsb-image`` which deviates from an Ubuntu Cloud Server image.
    Yardstick has a tool for building this custom image with SampleVNF.
    It is necessary to have ``sudo`` rights to use this tool.

    You may also need to install several additional packages to use this
    tool, by following the commands below::

       sudo apt-get update && sudo apt-get install -y qemu-utils kpartx

    This image can be built using the following commands in the directory
    where Yardstick is installed:

    .. code-block:: console

       export YARD_IMG_ARCH='amd64'
       echo "Defaults env_keep += \"YARD_IMG_ARCH\"" | sudo tee -a /etc/sudoers
       sudo tools/yardstick-img-dpdk-modify tools/ubuntu-server-cloudimg-samplevnf-modify.sh

    Alternatively, you can use the Ansible script to generate the cloud
    image; refer to chapter :doc:`04-installation` for more details.

    .. note:: The VM should be built with a static IP and should be
       accessible from the Yardstick host.


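The static-IP requirement in the note above can be met inside the guest, for
example with an Ubuntu 16.04 style ``/etc/network/interfaces`` entry (a sketch
only; the interface name ``ens3`` and the address are assumptions and must
match your management CIDR):

```console
# /etc/network/interfaces inside the guest (Ubuntu 16.04 style, sketch only)
auto ens3
iface ens3 inet static
    address 1.1.1.61
    netmask 255.255.255.0
```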
SR-IOV Config pod.yaml describing Topology
++++++++++++++++++++++++++++++++++++++++++

SR-IOV 2-Node setup
+++++++++++++++++++
.. code-block:: console

                               +--------------------+
                               |                    |
                               |                    |
                               |        DUT         |
                               |       (VNF)        |
                               |                    |
                               +--------------------+
                               | VF NIC |  | VF NIC |
                               +--------+  +--------+
                                     ^          ^
                                     |          |
                                     |          |
  +----------+               +-------------------------+
  |          |               |       ^          ^      |
  |          |               |       |          |      |
  |          | (0)<----->(0) | ------           |      |
  |    TG1   |               |           SUT    |      |
  |          |               |                  |      |
  |          | (n)<----->(n) |------------------       |
  +----------+               +-------------------------+
  trafficgen_1                          host



SR-IOV 3-Node setup - Correlated Traffic
++++++++++++++++++++++++++++++++++++++++
.. code-block:: console

                               +--------------------+
                               |                    |
                               |                    |
                               |        DUT         |
                               |       (VNF)        |
                               |                    |
                               +--------------------+
                               | VF NIC |  | VF NIC |
                               +--------+  +--------+
                                     ^          ^
                                     |          |
                                     |          |
  +----------+               +-------------------------+            +--------------+
  |          |               |       ^          ^      |            |              |
  |          |               |       |          |      |            |              |
  |          | (0)<----->(0) | ------           |      |            |     TG2      |
  |    TG1   |               |           SUT    |      |            | (UDP Replay) |
  |          |               |                  |      |            |              |
  |          | (n)<----->(n) |                  ------ | (n)<-->(n) |              |
  +----------+               +-------------------------+            +--------------+
  trafficgen_1                          host                       trafficgen_2

Before executing Yardstick test cases, make sure that pod.yaml reflects the
topology and update all the required fields.

.. code-block:: console

    cp <yardstick>/etc/yardstick/nodes/standalone/trex_bm.yaml.sample /etc/yardstick/nodes/standalone/pod_trex.yaml
    cp <yardstick>/etc/yardstick/nodes/standalone/host_sriov.yaml /etc/yardstick/nodes/standalone/host_sriov.yaml

.. note:: Update all the required fields like ip, user, password, pcis, etc.

SR-IOV Config pod_trex.yaml
+++++++++++++++++++++++++++

.. code-block:: YAML

    nodes:
    -
        name: trafficgen_1
        role: TrafficGen
        ip: 1.1.1.1
        user: root
        password: r00t
        key_filename: /root/.ssh/id_rsa
        interfaces:
            xe0:  # logical name from topology.yaml and vnfd.yaml
                vpci:      "0000:07:00.0"
                driver:    i40e # default kernel driver
                dpdk_port_num: 0
                local_ip: "152.16.100.20"
                netmask:   "255.255.255.0"
                local_mac: "00:00:00:00:00:01"
            xe1:  # logical name from topology.yaml and vnfd.yaml
                vpci:      "0000:07:00.1"
                driver:    i40e # default kernel driver
                dpdk_port_num: 1
                local_ip: "152.16.40.20"
                netmask:   "255.255.255.0"
                local_mac: "00:00:00:00:00:02"

SR-IOV Config host_sriov.yaml
+++++++++++++++++++++++++++++

.. code-block:: YAML

    nodes:
    -
       name: sriov
       role: Sriov
       ip: 192.168.100.101
       user: ""
       password: ""

SR-IOV testcase update:
``<yardstick>/samples/vnf_samples/nsut/vfw/tc_sriov_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``

Update "contexts" section
'''''''''''''''''''''''''

.. code-block:: YAML

  contexts:
   - name: yardstick
     type: Node
     file: /etc/yardstick/nodes/standalone/pod_trex.yaml
   - type: StandaloneSriov
     file: /etc/yardstick/nodes/standalone/host_sriov.yaml
     name: yardstick
     vm_deploy: True
     flavor:
       images: "/var/lib/libvirt/images/ubuntu.qcow2"
       ram: 4096
       extra_specs:
         hw:cpu_sockets: 1
         hw:cpu_cores: 6
         hw:cpu_threads: 2
       user: "" # update VM username
       password: "" # update password
     servers:
       vnf:
         network_ports:
           mgmt:
             cidr: '1.1.1.61/24'  # Update VM IP address, if static, <ip>/<mask> or if dynamic, <start of ip>/<mask>
           xe0:
             - uplink_0
           xe1:
             - downlink_0
     networks:
       uplink_0:
         phy_port: "0000:05:00.0"
         vpci: "0000:00:07.0"
         cidr: '152.16.100.10/24'
         gateway_ip: '152.16.100.20'
       downlink_0:
         phy_port: "0000:05:00.1"
         vpci: "0000:00:08.0"
         cidr: '152.16.40.10/24'
         gateway_ip: '152.16.100.20'
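As a sizing sanity check, the ``extra_specs`` above request 1 socket x 6
cores x 2 threads, i.e. the host must have 12 vCPUs free for the VM (in
addition to the cores reserved for the host OS); the arithmetic:

```shell
# hw:cpu_sockets * hw:cpu_cores * hw:cpu_threads from the flavor above
python3 -c "print(1 * 6 * 2)"
# prints: 12
```
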


OVS-DPDK
~~~~~~~~

OVS-DPDK Pre-requisites
+++++++++++++++++++++++

On the host, where the VM is created:
 a) Create and configure a bridge named ``br-int`` for the VM to connect to
    the external network. Currently this can be done using a VXLAN tunnel.

    Execute the following on the host, where the VM is created:

  .. code-block:: console

      ip link add type vxlan remote <Jumphost IP> local <DUT IP> id <ID: 10> dstport 4789
      brctl addbr br-int
      brctl addif br-int vxlan0
      ip link set dev vxlan0 up
      ip addr add <IP#1, like: 172.20.2.1/24> dev br-int
      ip link set dev br-int up

  .. note:: It may be necessary to add extra iptables rules to forward
     traffic.

  .. code-block:: console

    iptables -A FORWARD -i br-int -s <network ip address>/<netmask> -j ACCEPT
    iptables -A FORWARD -o br-int -d <network ip address>/<netmask> -j ACCEPT

  Execute the following on the jump host:

  .. code-block:: console

      ip link add type vxlan remote <DUT IP> local <Jumphost IP> id <ID: 10> dstport 4789
      ip addr add <IP#2, like: 172.20.2.2/24> dev vxlan0
      ip link set dev vxlan0 up

  .. note:: The host and the jump host are different bare-metal servers.

 b) Modify the test case management CIDR.
    IP addresses IP#1, IP#2 and the CIDR must be in the same network.

  .. code-block:: YAML

    servers:
      vnf:
        network_ports:
          mgmt:
            cidr: '1.1.1.7/24'

 c) Build a guest image for the VNF to run.
    Most of the sample test cases in Yardstick use a guest image called
    ``yardstick-nsb-image`` which deviates from an Ubuntu Cloud Server image.
    Yardstick has a tool for building this custom image with SampleVNF.
    It is necessary to have ``sudo`` rights to use this tool.

    You may also need to install several additional packages to use this
    tool, by following the commands below::

       sudo apt-get update && sudo apt-get install -y qemu-utils kpartx

    This image can be built using the following commands in the directory
    where Yardstick is installed::

       export YARD_IMG_ARCH='amd64'
       echo "Defaults env_keep += \"YARD_IMG_ARCH\"" | sudo tee -a /etc/sudoers
       sudo tools/yardstick-img-dpdk-modify tools/ubuntu-server-cloudimg-samplevnf-modify.sh

    For more details refer to chapter :doc:`04-installation`.

    .. note:: The VM should be built with a static IP and should be
       accessible from the Yardstick host.

 d) OVS & DPDK version:
     - OVS 2.7 or newer with DPDK 16.11.1 or newer is supported.

 e) Set up OVS/DPDK on the host.
    Please refer to the following link on how to set up
    `OVS-DPDK <http://docs.openvswitch.org/en/latest/intro/install/dpdk/>`_.
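For orientation, once OVS is installed with DPDK support, the host-side
bring-up described in the guide linked above looks roughly as follows (a
sketch only; bridge and port names are illustrative, and the PCI address
matches the ``phy_port`` used in the context file later in this section):

```console
# Enable DPDK support in OVS (one-time)
ovs-vsctl set Open_vSwitch . other_config:dpdk-init=true

# Create a userspace (netdev) bridge and attach a DPDK port to it
ovs-vsctl add-br br0 -- set bridge br0 datapath_type=netdev
ovs-vsctl add-port br0 dpdk0 -- set Interface dpdk0 type=dpdk \
    options:dpdk-devargs=0000:05:00.0
```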

OVS-DPDK Config pod.yaml describing Topology
++++++++++++++++++++++++++++++++++++++++++++

OVS-DPDK 2-Node setup
+++++++++++++++++++++

.. code-block:: console

                               +--------------------+
                               |                    |
                               |                    |
                               |        DUT         |
                               |       (VNF)        |
                               |                    |
                               +--------------------+
                               | virtio |  | virtio |
                               +--------+  +--------+
                                    ^          ^
                                    |          |
                                    |          |
                               +--------+  +--------+
                               | vHOST0 |  | vHOST1 |
  +----------+               +-------------------------+
  |          |               |       ^          ^      |
  |          |               |       |          |      |
  |          | (0)<----->(0) | ------           |      |
  |    TG1   |               |          SUT     |      |
  |          |               |       (ovs-dpdk) |      |
  |          | (n)<----->(n) |------------------       |
  +----------+               +-------------------------+
  trafficgen_1                          host


OVS-DPDK 3-Node setup - Correlated Traffic
++++++++++++++++++++++++++++++++++++++++++

.. code-block:: console

                               +--------------------+
                               |                    |
                               |                    |
                               |        DUT         |
                               |       (VNF)        |
                               |                    |
                               +--------------------+
                               | virtio |  | virtio |
                               +--------+  +--------+
                                    ^          ^
                                    |          |
                                    |          |
                               +--------+  +--------+
                               | vHOST0 |  | vHOST1 |
  +----------+               +-------------------------+          +------------+
  |          |               |       ^          ^      |          |            |
  |          |               |       |          |      |          |            |
  |          | (0)<----->(0) | ------           |      |          |    TG2     |
  |    TG1   |               |          SUT     |      |          |(UDP Replay)|
  |          |               |      (ovs-dpdk)  |      |          |            |
  |          | (n)<----->(n) |                  ------ |(n)<-->(n)|            |
  +----------+               +-------------------------+          +------------+
  trafficgen_1                          host                       trafficgen_2


Before executing Yardstick test cases, make sure that pod.yaml reflects the
topology and update all the required fields.

.. code-block:: console

  cp <yardstick>/etc/yardstick/nodes/standalone/trex_bm.yaml.sample /etc/yardstick/nodes/standalone/pod_trex.yaml
  cp <yardstick>/etc/yardstick/nodes/standalone/host_ovs.yaml /etc/yardstick/nodes/standalone/host_ovs.yaml

.. note:: Update all the required fields like ip, user, password, pcis, etc.

OVS-DPDK Config pod_trex.yaml
+++++++++++++++++++++++++++++

.. code-block:: YAML

    nodes:
    -
      name: trafficgen_1
      role: TrafficGen
      ip: 1.1.1.1
      user: root
      password: r00t
      interfaces:
          xe0:  # logical name from topology.yaml and vnfd.yaml
              vpci:      "0000:07:00.0"
              driver:    i40e # default kernel driver
              dpdk_port_num: 0
              local_ip: "152.16.100.20"
              netmask:   "255.255.255.0"
              local_mac: "00:00:00:00:00:01"
          xe1:  # logical name from topology.yaml and vnfd.yaml
              vpci:      "0000:07:00.1"
              driver:    i40e # default kernel driver
              dpdk_port_num: 1
              local_ip: "152.16.40.20"
              netmask:   "255.255.255.0"
              local_mac: "00:00:00:00:00:02"

OVS-DPDK Config host_ovs.yaml
+++++++++++++++++++++++++++++

.. code-block:: YAML

    nodes:
    -
       name: ovs_dpdk
       role: OvsDpdk
       ip: 192.168.100.101
       user: ""
       password: ""

ovs_dpdk testcase update:
``<yardstick>/samples/vnf_samples/nsut/vfw/tc_ovs_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``

Update "contexts" section
'''''''''''''''''''''''''

.. code-block:: YAML

  contexts:
   - name: yardstick
     type: Node
     file: /etc/yardstick/nodes/standalone/pod_trex.yaml
   - type: StandaloneOvsDpdk
     name: yardstick
     file: /etc/yardstick/nodes/standalone/host_ovs.yaml
     vm_deploy: True
     ovs_properties:
       version:
         ovs: 2.7.0
         dpdk: 16.11.1
       pmd_threads: 2
       ram:
         socket_0: 2048
         socket_1: 2048
       queues: 4
       vpath: "/usr/local"

     flavor:
       images: "/var/lib/libvirt/images/ubuntu.qcow2"
       ram: 4096
       extra_specs:
         hw:cpu_sockets: 1
         hw:cpu_cores: 6
         hw:cpu_threads: 2
       user: "" # update VM username
       password: "" # update password
     servers:
       vnf:
         network_ports:
           mgmt:
             cidr: '1.1.1.61/24'  # Update VM IP address, if static, <ip>/<mask> or if dynamic, <start of ip>/<mask>
           xe0:
             - uplink_0
           xe1:
             - downlink_0
     networks:
       uplink_0:
         phy_port: "0000:05:00.0"
         vpci: "0000:00:07.0"
         cidr: '152.16.100.10/24'
         gateway_ip: '152.16.100.20'
       downlink_0:
         phy_port: "0000:05:00.1"
         vpci: "0000:00:08.0"
         cidr: '152.16.40.10/24'
         gateway_ip: '152.16.100.20'


Network Service Benchmarking - OpenStack with SR-IOV support
-------------------------------------------------------------

This section describes how to run a Sample VNF test case, using Heat context,
with SR-IOV. It also covers how to install OpenStack in Ubuntu 16.04, using
DevStack, with SR-IOV support.


Single node OpenStack setup with external TG
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. code-block:: console

                                 +----------------------------+
                                 |OpenStack(DevStack)         |
                                 |                            |
                                 |   +--------------------+   |
                                 |   |sample-VNF VM       |   |
                                 |   |                    |   |
                                 |   |        DUT         |   |
                                 |   |       (VNF)        |   |
                                 |   |                    |   |
                                 |   +--------+  +--------+   |
                                 |   | VF NIC |  | VF NIC |   |
                                 |   +-----+--+--+----+---+   |
                                 |         ^          ^       |
                                 |         |          |       |
  +----------+                   +---------+----------+-------+
  |          |                   |        VF0        VF1      |
  |          |                   |         ^          ^       |
  |          |                   |         |   SUT    |       |
  |    TG    | (PF0)<----->(PF0) +---------+          |       |
  |          |                   |                    |       |
  |          | (PF1)<----->(PF1) +--------------------+       |
  |          |                   |                            |
  +----------+                   +----------------------------+
  trafficgen_1                                 host


883 Host pre-configuration
884 ++++++++++++++++++++++
885
.. warning:: The following configuration requires sudo access to the system.
  Make sure that your user has sudo access.
888
889 Enable the Intel VT-d or AMD-Vi extension in the BIOS. Some system manufacturers
890 disable this extension by default.
891
892 Activate the Intel VT-d or AMD-Vi extension in the kernel by modifying the GRUB
893 config file ``/etc/default/grub``.
894
895 For the Intel platform:
896
897 .. code:: bash
898
899   ...
900   GRUB_CMDLINE_LINUX_DEFAULT="intel_iommu=on"
901   ...
902
903 For the AMD platform:
904
905 .. code:: bash
906
907   ...
908   GRUB_CMDLINE_LINUX_DEFAULT="amd_iommu=on"
909   ...
910
911 Update the grub configuration file and restart the system:
912
913 .. warning:: The following command will reboot the system.
914
915 .. code:: bash
916
917   sudo update-grub
918   sudo reboot
919
920 Make sure the extension has been enabled:
921
922 .. code:: bash
923
924   sudo journalctl -b 0 | grep -e IOMMU -e DMAR
925
926   Feb 06 14:50:14 hostname kernel: ACPI: DMAR 0x000000006C406000 0001E0 (v01 INTEL  S2600WF  00000001 INTL 20091013)
927   Feb 06 14:50:14 hostname kernel: DMAR: IOMMU enabled
928   Feb 06 14:50:14 hostname kernel: DMAR: Host address width 46
929   Feb 06 14:50:14 hostname kernel: DMAR: DRHD base: 0x000000d37fc000 flags: 0x0
930   Feb 06 14:50:14 hostname kernel: DMAR: dmar0: reg_base_addr d37fc000 ver 1:0 cap 8d2078c106f0466 ecap f020de
931   Feb 06 14:50:14 hostname kernel: DMAR: DRHD base: 0x000000e0ffc000 flags: 0x0
932   Feb 06 14:50:14 hostname kernel: DMAR: dmar1: reg_base_addr e0ffc000 ver 1:0 cap 8d2078c106f0466 ecap f020de
933   Feb 06 14:50:14 hostname kernel: DMAR: DRHD base: 0x000000ee7fc000 flags: 0x0
934
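
As a quick additional check (a sketch; the exact output depends on your GRUB
configuration), the IOMMU parameter set above should also appear on the
running kernel command line:

.. code:: bash

  # Look for the IOMMU parameter on the running kernel command line
  if grep -q -e intel_iommu=on -e amd_iommu=on /proc/cmdline 2>/dev/null; then
    echo "IOMMU enabled on kernel command line"
  else
    echo "IOMMU parameter not found (or /proc/cmdline unavailable)"
  fi
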
Set up the system proxy (if needed). Add the following configuration to the
``/etc/environment`` file:
937
938 .. note:: The proxy server name/port and IPs should be changed according to
939   actual/current proxy configuration in the lab.
940
941 .. code:: bash
942
943   export http_proxy=http://proxy.company.com:port
944   export https_proxy=http://proxy.company.com:port
945   export ftp_proxy=http://proxy.company.com:port
946   export no_proxy=localhost,127.0.0.1,company.com,<IP-OF-HOST1>,<IP-OF-HOST2>,...
947   export NO_PROXY=localhost,127.0.0.1,company.com,<IP-OF-HOST1>,<IP-OF-HOST2>,...
948
949 Upgrade the system:
950
951 .. code:: bash
952
953   sudo -EH apt-get update
954   sudo -EH apt-get upgrade
955   sudo -EH apt-get dist-upgrade
956
Install the dependencies needed for DevStack:
958
959 .. code:: bash
960
961   sudo -EH apt-get install python
962   sudo -EH apt-get install python-dev
963   sudo -EH apt-get install python-pip
964
Set up the SR-IOV ports on the host:
966
.. note:: The ``enp24s0f0`` and ``enp24s0f1`` interfaces are physical function
  (PF) interfaces on the host and ``enp24s0f3`` is a public interface used in
  OpenStack, so the interface names should be changed according to the HW
  environment used for testing.
971
972 .. code:: bash
973
974   sudo ip link set dev enp24s0f0 up
975   sudo ip link set dev enp24s0f1 up
976   sudo ip link set dev enp24s0f3 up
977
978   # Create VFs on PF
979   echo 2 | sudo tee /sys/class/net/enp24s0f0/device/sriov_numvfs
980   echo 2 | sudo tee /sys/class/net/enp24s0f1/device/sriov_numvfs
981
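To confirm that the VFs were created, the following sketch can be used. The
PF name ``enp24s0f0`` is taken from the example above; the guard keeps the
snippet safe to run on hosts where that interface does not exist:

.. code:: bash

  PF=enp24s0f0  # PF interface name from the example above; adjust as needed
  if [ -d "/sys/class/net/$PF/device" ]; then
    # Each VF shows up on the PF link and as its own PCI function
    ip link show "$PF" | grep vf
    lspci | grep -i "Virtual Function"
  else
    echo "PF interface $PF not found"
  fi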
982
983 DevStack installation
984 +++++++++++++++++++++
985
Use the official `Devstack <https://docs.openstack.org/devstack/pike/>`_
documentation to install OpenStack on a host. Please note that the stable
``pike`` branch of the devstack repo should be used during the installation.
The required ``local.conf`` configuration file is described below.
990
991 DevStack configuration file:
992
.. note:: Update the devstack configuration file by replacing the angle
  brackets (and the short description inside them) with the actual values.
995
996 .. note:: Use ``lspci | grep Ether`` & ``lspci -n | grep <PCI ADDRESS>``
997   commands to get device and vendor id of the virtual function (VF).
998
999 .. literalinclude:: code/single-devstack-local.conf
1000    :language: console
1001
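To illustrate the vendor/device id lookup from the note above: the third
whitespace-separated field of an ``lspci -n`` output line holds the
``<vendor>:<device>`` pair. The sample line below is hypothetical; on a real
host, use the output of ``lspci -n | grep <PCI ADDRESS>`` instead:

.. code:: bash

  # Hypothetical lspci -n line for a VF; replace with real output
  line="18:02.0 0200: 8086:154c (rev 02)"
  # The third field is the vendor:device id pair
  echo "$line" | awk '{print $3}'  # prints 8086:154c
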
1002 Start the devstack installation on a host.
1003
1004
1005 TG host configuration
1006 +++++++++++++++++++++
1007
Yardstick automatically installs and configures the TRex traffic generator on
the TG host based on the provided POD file (see below). Nevertheless, it is
recommended to check the compatibility of the NIC installed in the TG server
with the TRex software using the manual at
https://trex-tgn.cisco.com/trex/doc/trex_manual.html.
1012
1013
1014 Run the Sample VNF test case
1015 ++++++++++++++++++++++++++++
1016
1017 There is an example of Sample VNF test case ready to be executed in an
1018 OpenStack environment with SR-IOV support: ``samples/vnf_samples/nsut/vfw/
1019 tc_heat_sriov_external_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``.
1020
1021 Install yardstick using `Install Yardstick (NSB Testing)`_ steps for OpenStack
1022 context.
1023
Create a pod file for the TG in the yardstick repo folder located in the
yardstick container:
1026
.. note:: The ``ip``, ``user``, ``password`` and ``vpci`` fields should be
  changed according to the HW environment used for the testing. Use the
  ``lshw -c network -businfo`` command to get the PF PCI address for the
  ``vpci`` field.
1030
1031 .. literalinclude:: code/single-yardstick-pod.conf
1032    :language: console
1033
1034 Run the Sample vFW RFC2544 SR-IOV TC (``samples/vnf_samples/nsut/vfw/
1035 tc_heat_sriov_external_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``) in the heat
context using the steps described in the `NS testing - using yardstick CLI`_
section.
1037
1038
1039 Multi node OpenStack TG and VNF setup (two nodes)
1040 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1041
1042 .. code-block:: console
1043
1044   +----------------------------+                   +----------------------------+
1045   |OpenStack(DevStack)         |                   |OpenStack(DevStack)         |
1046   |                            |                   |                            |
1047   |   +--------------------+   |                   |   +--------------------+   |
1048   |   |sample-VNF VM       |   |                   |   |sample-VNF VM       |   |
1049   |   |                    |   |                   |   |                    |   |
1050   |   |         TG         |   |                   |   |        DUT         |   |
1051   |   |    trafficgen_1    |   |                   |   |       (VNF)        |   |
1052   |   |                    |   |                   |   |                    |   |
1053   |   +--------+  +--------+   |                   |   +--------+  +--------+   |
1054   |   | VF NIC |  | VF NIC |   |                   |   | VF NIC |  | VF NIC |   |
1055   |   +----+---+--+----+---+   |                   |   +-----+--+--+----+---+   |
1056   |        ^           ^       |                   |         ^          ^       |
1057   |        |           |       |                   |         |          |       |
1058   +--------+-----------+-------+                   +---------+----------+-------+
1059   |       VF0         VF1      |                   |        VF0        VF1      |
1060   |        ^           ^       |                   |         ^          ^       |
1061   |        |    SUT2   |       |                   |         |   SUT1   |       |
1062   |        |           +-------+ (PF0)<----->(PF0) +---------+          |       |
1063   |        |                   |                   |                    |       |
1064   |        +-------------------+ (PF1)<----->(PF1) +--------------------+       |
1065   |                            |                   |                            |
1066   +----------------------------+                   +----------------------------+
1067            host2 (compute)                               host1 (controller)
1068
1069
1070 Controller/Compute pre-configuration
1071 ++++++++++++++++++++++++++++++++++++
1072
Pre-configuration of the controller and compute hosts is the same as described
in the `Host pre-configuration`_ section. Follow the steps in the section.
1075
1076
1077 DevStack configuration
1078 ++++++++++++++++++++++
1079
Use the official `Devstack <https://docs.openstack.org/devstack/pike/>`_
documentation to install OpenStack on a host. Please note that the stable
``pike`` branch of the devstack repo should be used during the installation.
The required ``local.conf`` configuration files are described below.
1084
.. note:: Update the devstack configuration files by replacing the angle
  brackets (and the short description inside them) with the actual values.
1087
1088 .. note:: Use ``lspci | grep Ether`` & ``lspci -n | grep <PCI ADDRESS>``
1089   commands to get device and vendor id of the virtual function (VF).
1090
1091 DevStack configuration file for controller host:
1092
1093 .. literalinclude:: code/multi-devstack-controller-local.conf
1094    :language: console
1095
1096 DevStack configuration file for compute host:
1097
1098 .. literalinclude:: code/multi-devstack-compute-local.conf
1099    :language: console
1100
1101 Start the devstack installation on the controller and compute hosts.
1102
1103
1104 Run the sample vFW TC
1105 +++++++++++++++++++++
1106
1107 Install yardstick using `Install Yardstick (NSB Testing)`_ steps for OpenStack
1108 context.
1109
1110 Run sample vFW RFC2544 SR-IOV TC (``samples/vnf_samples/nsut/vfw/
1111 tc_heat_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``) in the heat
context using the steps described in the `NS testing - using yardstick CLI`_
section and the following yardstick command line arguments:
1114
1115 .. code:: bash
1116
1117   yardstick -d task start --task-args='{"provider": "sriov"}' \
1118   samples/vnf_samples/nsut/vfw/tc_heat_rfc2544_ipv4_1rule_1flow_64B_trex.yaml
1119
1120
Enabling other Traffic generators
---------------------------------
1123
1124 IxLoad
1125 ~~~~~~
1126
1. Software needed: IxLoadAPI ``<IxLoadTclApi version>Linux64.bin.tgz`` and
   ``<IxOS version>Linux64.bin.tar.gz`` (download from the Ixia support site).
   Install ``<IxLoadTclApi version>Linux64.bin.tgz`` and
   ``<IxOS version>Linux64.bin.tar.gz``.
   If the installation was not done inside the container, after installing
   the IXIA client, check ``/opt/ixia/ixload/<ver>/bin/ixloadpython`` and make
   sure you can run this command inside the yardstick container. Usually the
   user is required to copy or link ``/opt/ixia/python/<ver>/bin/ixiapython``
   to ``/usr/bin/ixiapython<ver>`` inside the container.
1136
1137 2. Update ``pod_ixia.yaml`` file with ixia details.
1138
1139   .. code-block:: console
1140
1141     cp <repo>/etc/yardstick/nodes/pod.yaml.nsb.sample.ixia etc/yardstick/nodes/pod_ixia.yaml
1142
1143   Config ``pod_ixia.yaml``
1144
1145   .. literalinclude:: code/pod_ixia.yaml
1146      :language: console
1147
  For SR-IOV/OVS-DPDK pod files, please refer to the Standalone Virtualization
  section above for the OVS-DPDK/SR-IOV configuration.
1149
1150 3. Start IxOS TCL Server (Install 'Ixia IxExplorer IxOS <version>')
1151    You will also need to configure the IxLoad machine to start the IXIA
1152    IxosTclServer. This can be started like so:
1153
1154    * Connect to the IxLoad machine using RDP
1155    * Go to:
1156      ``Start->Programs->Ixia->IxOS->IxOS 8.01-GA-Patch1->Ixia Tcl Server IxOS 8.01-GA-Patch1``
1157      or
1158      ``"C:\Program Files (x86)\Ixia\IxOS\8.01-GA-Patch1\ixTclServer.exe"``
1159
4. Create a folder ``Results`` in ``C:\`` and share the folder on the network.
1161
1162 5. Execute testcase in samplevnf folder e.g.
1163    ``<repo>/samples/vnf_samples/nsut/vfw/tc_baremetal_http_ixload_1b_Requests-65000_Concurrency.yaml``
1164
1165 IxNetwork
1166 ~~~~~~~~~
1167
IxNetwork testcases use the IxNetwork API Python Bindings module, which is
installed as part of the requirements of the project.
1170
1171 1. Update ``pod_ixia.yaml`` file with ixia details.
1172
1173   .. code-block:: console
1174
1175     cp <repo>/etc/yardstick/nodes/pod.yaml.nsb.sample.ixia etc/yardstick/nodes/pod_ixia.yaml
1176
  Config ``pod_ixia.yaml``
1178
1179   .. literalinclude:: code/pod_ixia.yaml
1180      :language: console
1181
  For SR-IOV/OVS-DPDK pod files, please refer to the Standalone Virtualization
  section above for the OVS-DPDK/SR-IOV configuration.
1183
1184 2. Start IxNetwork TCL Server
1185    You will also need to configure the IxNetwork machine to start the IXIA
1186    IxNetworkTclServer. This can be started like so:
1187
1188     * Connect to the IxNetwork machine using RDP
1189     * Go to:
1190       ``Start->Programs->Ixia->IxNetwork->IxNetwork 7.21.893.14 GA->IxNetworkTclServer``
1191       (or ``IxNetworkApiServer``)
1192
1193 3. Execute testcase in samplevnf folder e.g.
1194    ``<repo>/samples/vnf_samples/nsut/vfw/tc_baremetal_rfc2544_ipv4_1rule_1flow_64B_ixia.yaml``
1195
1196 Spirent Landslide
~~~~~~~~~~~~~~~~~
1198
1199 In order to use Spirent Landslide for vEPC testcases, some dependencies have
1200 to be preinstalled and properly configured.
1201
1202 - Java
1203
    A 32-bit Java installation is required for the Spirent Landslide Tcl API.
1205
1206     | ``$ sudo apt-get install openjdk-8-jdk:i386``
1207
1208     .. important::
      Make sure ``LD_LIBRARY_PATH`` is pointing to the 32-bit JRE. For more
      details check the `Linux Troubleshooting
      <http://TAS_HOST_IP/tclapiinstall.html#trouble>`_ section of the
      installation instructions.
1212
1213 - LsApi (Tcl API module)
1214
1215     Follow Landslide documentation for detailed instructions on Linux
1216     installation of Tcl API and its dependencies
1217     ``http://TAS_HOST_IP/tclapiinstall.html``.
1218     For working with LsApi Python wrapper only steps 1-5 are required.
1219
1220     .. note:: After installation make sure your API home path is included in
1221       ``PYTHONPATH`` environment variable.
1222
    .. important::
      The current version of the LsApi module has an issue with reading
      ``LD_LIBRARY_PATH``. For the LsApi module to initialize correctly, the
      following lines (184-186) in ``lsapi.py``
1227
1228     .. code-block:: python
1229
1230         ldpath = os.environ.get('LD_LIBRARY_PATH', '')
1231         if ldpath == '':
            environ['LD_LIBRARY_PATH'] = environ['LD_LIBRARY_PATH'] + ':' + ldpath
1233
1234     should be changed to:
1235
1236     .. code-block:: python
1237
1238         ldpath = os.environ.get('LD_LIBRARY_PATH', '')
1239         if not ldpath == '':
            environ['LD_LIBRARY_PATH'] = environ['LD_LIBRARY_PATH'] + ':' + ldpath
1241
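To extend ``PYTHONPATH`` with the API home path, as required by the note
above, the following sketch can be used; the ``/opt/lsapi`` install path is
an assumption, substitute the actual LsApi home directory:

.. code:: bash

  # Assumed LsApi install location; adjust to your environment
  LSAPI_HOME=/opt/lsapi
  # Append to PYTHONPATH, handling the case where it is unset
  export PYTHONPATH="${PYTHONPATH:+$PYTHONPATH:}$LSAPI_HOME"
  echo "$PYTHONPATH"
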
.. note:: The Spirent Landslide Tcl software package needs to be updated in
  case the user upgrades to a new version of the Spirent Landslide software.