Merge "Document for Euphrates test case results"
[yardstick.git] / docs / testing / user / userguide / 13-nsb-installation.rst
.. This work is licensed under a Creative Commons Attribution 4.0 International
.. License.
.. http://creativecommons.org/licenses/by/4.0
.. (c) OPNFV, 2016-2017 Intel Corporation.

========================================
Yardstick - NSB Testing - Installation
========================================

Abstract
========

The Network Service Benchmarking (NSB) extends the Yardstick framework to do
VNF characterization and benchmarking in three different execution
environments: bare metal i.e. native Linux environment, standalone virtual
environment and managed virtualized environment (e.g. OpenStack).
It also brings in the capability to interact with external traffic
generators, both hardware and software based, for triggering and validating
the traffic according to user defined profiles.

The steps needed to run Yardstick with NSB testing are:

* Install Yardstick (NSB testing).
* Set up or reference a ``pod.yaml`` file describing the test topology.
* Create or reference the test configuration YAML file.
* Run the test case.


Prerequisites
=============

Refer to the Yardstick Installation chapter for more information on Yardstick
prerequisites.

Several prerequisites are needed for Yardstick (VNF testing); a sample
installation sketch is shown after this list:

  * Python modules: pyzmq, pika
  * flex
  * bison
  * build-essential
  * automake
  * libtool
  * librabbitmq-dev
  * rabbitmq-server
  * collectd
  * intel-cmt-cat

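The following is a minimal installation sketch for these prerequisites,
assuming an Ubuntu host (the apt package names match the list above; adjust
them for other distributions):

.. code-block:: console

  # System packages (assumed Ubuntu package names, taken from the list above)
  sudo apt-get update
  sudo apt-get install -y flex bison build-essential automake libtool \
       librabbitmq-dev rabbitmq-server collectd intel-cmt-cat

  # Python modules
  pip install pyzmq pika
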
48 Hardware & Software Ingredients
49 -------------------------------
50
51 SUT requirements:
52
53
54    ======= ===================
55    Item    Description
56    ======= ===================
57    Memory  Min 20GB
58    NICs    2 x 10G
59    OS      Ubuntu 16.04.3 LTS
60    kernel  4.4.0-34-generic
61    DPDK    17.02
62    ======= ===================
63
Boot and BIOS settings:

   ============= =================================================
   Boot settings default_hugepagesz=1G hugepagesz=1G hugepages=16
                 hugepagesz=2M hugepages=2048 isolcpus=1-11,22-33
                 nohz_full=1-11,22-33 rcu_nocbs=1-11,22-33
                 iommu=on iommu=pt intel_iommu=on
                 Note: nohz_full and rcu_nocbs are used to disable
                 Linux kernel interrupts
   BIOS          CPU Power and Performance Policy <Performance>
                 CPU C-state Disabled
                 CPU P-state Disabled
                 Enhanced Intel® Speedstep® Tech Disabled
                 Hyper-Threading Technology (If supported) Enabled
                 Virtualization Technology Enabled
                 Intel(R) VT for Direct I/O Enabled
                 Coherency Enabled
                 Turbo Boost Disabled
   ============= =================================================

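The boot settings above are typically applied through the kernel command
line. This is a minimal sketch, assuming an Ubuntu host using GRUB; the
isolated CPU list must be adapted to your CPU topology:

.. code-block:: console

  # /etc/default/grub -- kernel parameters taken from the table above
  GRUB_CMDLINE_LINUX="default_hugepagesz=1G hugepagesz=1G hugepages=16 hugepagesz=2M hugepages=2048 isolcpus=1-11,22-33 nohz_full=1-11,22-33 rcu_nocbs=1-11,22-33 iommu=on iommu=pt intel_iommu=on"

  # Apply and reboot
  sudo update-grub
  sudo reboot
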
Install Yardstick (NSB Testing)
===============================

Download the source code and install Yardstick from it:

.. code-block:: console

  git clone https://gerrit.opnfv.org/gerrit/yardstick

  cd yardstick

  # Switch to latest stable branch
  # git checkout <tag or stable branch>
  git checkout stable/euphrates

Configure the network proxy, either by using the environment variables or by
setting the global environment file:

.. code-block:: ini

    cat /etc/environment
    http_proxy='http://proxy.company.com:port'
    https_proxy='http://proxy.company.com:port'

.. code-block:: console

    export http_proxy='http://proxy.company.com:port'
    export https_proxy='http://proxy.company.com:port'

The last step is to modify the Yardstick installation inventory used by
Ansible:

.. code-block:: ini

  cat ./ansible/yardstick-install-inventory.ini
  [jumphost]
  localhost  ansible_connection=local

  [yardstick-standalone]
  yardstick-standalone-node ansible_host=192.168.1.2
  yardstick-standalone-node-2 ansible_host=192.168.1.3

  # The section below is only for backward compatibility.
  # It will be removed later.
  [yardstick:children]
  jumphost

  [all:vars]
  ansible_user=root
  ansible_pass=root

.. note::

   Password-less SSH access needs to be configured for all the nodes defined
   in the ``yardstick-install-inventory.ini`` file.
   If you want to use password authentication, you need to install ``sshpass``:

   .. code-block:: console

     sudo -EH apt-get install sshpass

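For the password-less alternative, the following is a minimal sketch,
assuming the node IPs from the sample inventory above:

.. code-block:: console

  # Generate a key pair on the jumphost if one does not exist yet
  ssh-keygen -t rsa

  # Copy the public key to every node listed in the inventory
  ssh-copy-id root@192.168.1.2
  ssh-copy-id root@192.168.1.3
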
To execute an installation for a Bare-Metal or a Standalone context:

.. code-block:: console

    ./nsb_setup.sh


To execute an installation for an OpenStack context:

.. code-block:: console

    ./nsb_setup.sh <path to admin-openrc.sh>

The above command will set up a Docker container with the latest Yardstick
code. To enter the container, execute:

.. code-block:: console

  docker exec -it yardstick bash

It will also automatically download all the packages needed for the NSB
testing setup. For more information on Docker, refer to the section
**Install Yardstick using Docker (recommended)** in chapter
:doc:`04-installation`.

System Topology
===============

.. code-block:: console

  +----------+              +----------+
  |          |              |          |
  |          | (0)----->(0) |          |
  |    TG1   |              |    DUT   |
  |          |              |          |
  |          | (1)<-----(1) |          |
  +----------+              +----------+
  trafficgen_1                   vnf


Environment parameters and credentials
======================================

Config yardstick conf
---------------------

If you did not run ``yardstick env influxdb`` inside the container, which
generates a correct ``yardstick.conf``, then create the config file manually
(run inside the container)::

    cp ./etc/yardstick/yardstick.conf.sample /etc/yardstick/yardstick.conf
    vi /etc/yardstick/yardstick.conf

Add ``trex_path``, ``trex_client_lib`` and ``bin_path`` to the ``nsb``
section::

  [DEFAULT]
  debug = True
  dispatcher = file, influxdb

  [dispatcher_influxdb]
  timeout = 5
  target = http://{YOUR_IP_HERE}:8086
  db_name = yardstick
  username = root
  password = root

  [nsb]
  trex_path=/opt/nsb_bin/trex/scripts
  bin_path=/opt/nsb_bin
  trex_client_lib=/opt/nsb_bin/trex_client/stl

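Alternatively, the ``yardstick env influxdb`` command mentioned above can be
used to generate this configuration automatically (run inside the container):

.. code-block:: console

  yardstick env influxdb
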
Run Yardstick - Network Service Testcases
=========================================


NS testing - using yardstick CLI
--------------------------------

  See :doc:`04-installation`

.. code-block:: console

  docker exec -it yardstick /bin/bash
  source /etc/yardstick/openstack.creds (only for heat TC if nsb_setup.sh was NOT used)
  export EXTERNAL_NETWORK="<openstack public network>" (only for heat TC)
  yardstick --debug task start yardstick/samples/vnf_samples/nsut/<vnf>/<test case>

Network Service Benchmarking - Bare-Metal
=========================================

Bare-Metal Config pod.yaml describing Topology
----------------------------------------------

Bare-Metal 2-Node setup
^^^^^^^^^^^^^^^^^^^^^^^
.. code-block:: console

  +----------+              +----------+
  |          |              |          |
  |          | (0)----->(0) |          |
  |    TG1   |              |    DUT   |
  |          |              |          |
  |          | (n)<-----(n) |          |
  +----------+              +----------+
  trafficgen_1                   vnf

Bare-Metal 3-Node setup - Correlated Traffic
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. code-block:: console

  +----------+              +----------+            +------------+
  |          |              |          |            |            |
  |          |              |          |            |            |
  |          | (0)----->(0) |          |            |    UDP     |
  |    TG1   |              |    DUT   |            |   Replay   |
  |          |              |          |            |            |
  |          |              |          |(1)<---->(0)|            |
  +----------+              +----------+            +------------+
  trafficgen_1                   vnf                 trafficgen_2


Bare-Metal Config pod.yaml
--------------------------
Before executing Yardstick test cases, make sure that ``pod.yaml`` reflects
the topology and update all the required fields::

    cp /etc/yardstick/nodes/pod.yaml.nsb.sample /etc/yardstick/nodes/pod.yaml

.. code-block:: YAML

    nodes:
    -
        name: trafficgen_1
        role: TrafficGen
        ip: 1.1.1.1
        user: root
        password: r00t
        interfaces:
            xe0:  # logical name from topology.yaml and vnfd.yaml
                vpci:      "0000:07:00.0"
                driver:    i40e # default kernel driver
                dpdk_port_num: 0
                local_ip: "152.16.100.20"
                netmask:   "255.255.255.0"
                local_mac: "00:00:00:00:00:01"
            xe1:  # logical name from topology.yaml and vnfd.yaml
                vpci:      "0000:07:00.1"
                driver:    i40e # default kernel driver
                dpdk_port_num: 1
                local_ip: "152.16.40.20"
                netmask:   "255.255.255.0"
                local_mac: "00:00:00:00:00:02"

    -
        name: vnf
        role: vnf
        ip: 1.1.1.2
        user: root
        password: r00t
        host: 1.1.1.2 # BM - host == ip, virtualized env - host == compute node
        interfaces:
            xe0:  # logical name from topology.yaml and vnfd.yaml
                vpci:      "0000:07:00.0"
                driver:    i40e # default kernel driver
                dpdk_port_num: 0
                local_ip: "152.16.100.19"
                netmask:   "255.255.255.0"
                local_mac: "00:00:00:00:00:03"

            xe1:  # logical name from topology.yaml and vnfd.yaml
                vpci:      "0000:07:00.1"
                driver:    i40e # default kernel driver
                dpdk_port_num: 1
                local_ip: "152.16.40.19"
                netmask:   "255.255.255.0"
                local_mac: "00:00:00:00:00:04"
        routing_table:
        - network: "152.16.100.20"
          netmask: "255.255.255.0"
          gateway: "152.16.100.20"
          if: "xe0"
        - network: "152.16.40.20"
          netmask: "255.255.255.0"
          gateway: "152.16.40.20"
          if: "xe1"
        nd_route_tbl:
        - network: "0064:ff9b:0:0:0:0:9810:6414"
          netmask: "112"
          gateway: "0064:ff9b:0:0:0:0:9810:6414"
          if: "xe0"
        - network: "0064:ff9b:0:0:0:0:9810:2814"
          netmask: "112"
          gateway: "0064:ff9b:0:0:0:0:9810:2814"
          if: "xe1"

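With ``pod.yaml`` in place, a bare-metal test case can be launched using the
CLI pattern shown earlier; a sketch, assuming the sample vFW RFC2544 TRex
test case (the exact file name may differ between releases):

.. code-block:: console

  yardstick --debug task start \
      samples/vnf_samples/nsut/vfw/tc_baremetal_rfc2544_ipv4_1rule_1flow_64B_trex.yaml
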
Network Service Benchmarking - Standalone Virtualization
========================================================

SR-IOV
------

SR-IOV Pre-requisites
^^^^^^^^^^^^^^^^^^^^^

On Host:
 a) Create a bridge for the VM to connect to the external network

  .. code-block:: console

      brctl addbr br-int
      brctl addif br-int <interface_name>    # This interface is connected to the internet

 b) Build guest image for the VNF to run.
    Most of the sample test cases in Yardstick use a guest image called
    ``yardstick-image``, which deviates from an Ubuntu Cloud Server image.
    Yardstick has a tool for building this custom image with SampleVNF.
    It is necessary to have ``sudo`` rights to use this tool.

    You may also need to install several additional packages to use this
    tool, by following the commands below::

       sudo apt-get update && sudo apt-get install -y qemu-utils kpartx

    This image can be built using the following command in the directory
    where Yardstick is installed

    .. code-block:: console

       export YARD_IMG_ARCH='amd64'
       sudo echo "Defaults env_keep += \'YARD_IMG_ARCH\'" >> /etc/sudoers

    Please use the ansible script to generate a cloud image; for more details
    refer to chapter :doc:`04-installation`.

    .. note:: The VM should be built with a static IP and should be accessible from the Yardstick host.


SR-IOV Config pod.yaml describing Topology
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

SR-IOV 2-Node setup
^^^^^^^^^^^^^^^^^^^
.. code-block:: console

                               +--------------------+
                               |                    |
                               |                    |
                               |        DUT         |
                               |       (VNF)        |
                               |                    |
                               +--------------------+
                               | VF NIC |  | VF NIC |
                               +--------+  +--------+
                                     ^          ^
                                     |          |
                                     |          |
  +----------+               +-------------------------+
  |          |               |       ^          ^      |
  |          |               |       |          |      |
  |          | (0)<----->(0) | ------           |      |
  |    TG1   |               |           SUT    |      |
  |          |               |                  |      |
  |          | (n)<----->(n) |------------------       |
  +----------+               +-------------------------+
  trafficgen_1                          host


SR-IOV 3-Node setup - Correlated Traffic
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. code-block:: console

                               +--------------------+
                               |                    |
                               |                    |
                               |        DUT         |
                               |       (VNF)        |
                               |                    |
                               +--------------------+
                               | VF NIC |  | VF NIC |
                               +--------+  +--------+
                                     ^          ^
                                     |          |
                                     |          |
  +----------+               +-------------------------+            +--------------+
  |          |               |       ^          ^      |            |              |
  |          |               |       |          |      |            |              |
  |          | (0)<----->(0) | ------           |      |            |     TG2      |
  |    TG1   |               |           SUT    |      |            | (UDP Replay) |
  |          |               |                  |      |            |              |
  |          | (n)<----->(n) |                  ------ | (n)<-->(n) |              |
  +----------+               +-------------------------+            +--------------+
  trafficgen_1                          host                       trafficgen_2

Before executing Yardstick test cases, make sure that ``pod.yaml`` reflects
the topology and update all the required fields.

.. code-block:: console

    cp <yardstick>/etc/yardstick/nodes/standalone/trex_bm.yaml.sample /etc/yardstick/nodes/standalone/pod_trex.yaml
    cp <yardstick>/etc/yardstick/nodes/standalone/host_sriov.yaml /etc/yardstick/nodes/standalone/host_sriov.yaml

.. note:: Update all the required fields like ip, user, password, pcis, etc.

SR-IOV Config pod_trex.yaml
^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. code-block:: YAML

    nodes:
    -
        name: trafficgen_1
        role: TrafficGen
        ip: 1.1.1.1
        user: root
        password: r00t
        key_filename: /root/.ssh/id_rsa
        interfaces:
            xe0:  # logical name from topology.yaml and vnfd.yaml
                vpci:      "0000:07:00.0"
                driver:    i40e # default kernel driver
                dpdk_port_num: 0
                local_ip: "152.16.100.20"
                netmask:   "255.255.255.0"
                local_mac: "00:00:00:00:00:01"
            xe1:  # logical name from topology.yaml and vnfd.yaml
                vpci:      "0000:07:00.1"
                driver:    i40e # default kernel driver
                dpdk_port_num: 1
                local_ip: "152.16.40.20"
                netmask:   "255.255.255.0"
                local_mac: "00:00:00:00:00:02"

SR-IOV Config host_sriov.yaml
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. code-block:: YAML

    nodes:
    -
       name: sriov
       role: Sriov
       ip: 192.168.100.101
       user: ""
       password: ""

SR-IOV testcase update:
``<yardstick>/samples/vnf_samples/nsut/vfw/tc_sriov_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``

Update "contexts" section
"""""""""""""""""""""""""

.. code-block:: YAML

  contexts:
   - name: yardstick
     type: Node
     file: /etc/yardstick/nodes/standalone/pod_trex.yaml
   - type: StandaloneSriov
     file: /etc/yardstick/nodes/standalone/host_sriov.yaml
     name: yardstick
     vm_deploy: True
     flavor:
       images: "/var/lib/libvirt/images/ubuntu.qcow2"
       ram: 4096
       extra_specs:
         hw:cpu_sockets: 1
         hw:cpu_cores: 6
         hw:cpu_threads: 2
       user: "" # update VM username
       password: "" # update password
     servers:
       vnf:
         network_ports:
           mgmt:
             cidr: '1.1.1.61/24'  # Update VM IP address, if static, <ip>/<mask> or if dynamic, <start of ip>/<mask>
           xe0:
             - uplink_0
           xe1:
             - downlink_0
     networks:
       uplink_0:
         phy_port: "0000:05:00.0"
         vpci: "0000:00:07.0"
         cidr: '152.16.100.10/24'
         gateway_ip: '152.16.100.20'
       downlink_0:
         phy_port: "0000:05:00.1"
         vpci: "0000:00:08.0"
         cidr: '152.16.40.10/24'
         gateway_ip: '152.16.100.20'

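Once the pod files and the "contexts" section are updated, the test case can
be launched from inside the Yardstick container with the CLI shown earlier;
a sketch:

.. code-block:: console

  yardstick --debug task start \
      samples/vnf_samples/nsut/vfw/tc_sriov_rfc2544_ipv4_1rule_1flow_64B_trex.yaml
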
OVS-DPDK
--------

OVS-DPDK Pre-requisites
^^^^^^^^^^^^^^^^^^^^^^^

On Host:
 a) Create a bridge for the VM to connect to the external network

  .. code-block:: console

      brctl addbr br-int
      brctl addif br-int <interface_name>    # This interface is connected to the internet

 b) Build guest image for the VNF to run.
    Most of the sample test cases in Yardstick use a guest image called
    ``yardstick-image``, which deviates from an Ubuntu Cloud Server image.
    Yardstick has a tool for building this custom image with SampleVNF.
    It is necessary to have ``sudo`` rights to use this tool.

    You may also need to install several additional packages to use this
    tool, by following the commands below::

       sudo apt-get update && sudo apt-get install -y qemu-utils kpartx

    This image can be built using the following command in the directory
    where Yardstick is installed::

       export YARD_IMG_ARCH='amd64'
       sudo echo "Defaults env_keep += \'YARD_IMG_ARCH\'" >> /etc/sudoers
       sudo tools/yardstick-img-dpdk-modify tools/ubuntu-server-cloudimg-samplevnf-modify.sh

    For more details refer to chapter :doc:`04-installation`.

    .. note:: The VM should be built with a static IP and should be accessible from the Yardstick host.

 c) OVS & DPDK version.
     - OVS 2.7 and DPDK 16.11.1 or above are supported

 d) Setup OVS/DPDK on host.
     Please refer to the `OVS-DPDK <http://docs.openvswitch.org/en/latest/intro/install/dpdk/>`_
     installation guide on how to set it up. A minimal setup sketch is shown
     below.

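The exact steps depend on how OVS was built; the following is a minimal
sketch based on the OVS documentation linked above, assuming OVS was compiled
with DPDK support and that ``0000:05:00.0``/``0000:05:00.1`` are the
DPDK-bound ports (as in the sample "contexts" section below):

.. code-block:: console

  # Enable DPDK support in OVS
  sudo ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true

  # Create a userspace (netdev) bridge and attach the DPDK ports
  sudo ovs-vsctl add-br br0 -- set bridge br0 datapath_type=netdev
  sudo ovs-vsctl add-port br0 dpdk-p0 -- set Interface dpdk-p0 type=dpdk \
      options:dpdk-devargs=0000:05:00.0
  sudo ovs-vsctl add-port br0 dpdk-p1 -- set Interface dpdk-p1 type=dpdk \
      options:dpdk-devargs=0000:05:00.1
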
OVS-DPDK Config pod.yaml describing Topology
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

OVS-DPDK 2-Node setup
^^^^^^^^^^^^^^^^^^^^^

.. code-block:: console

                               +--------------------+
                               |                    |
                               |                    |
                               |        DUT         |
                               |       (VNF)        |
                               |                    |
                               +--------------------+
                               | virtio |  | virtio |
                               +--------+  +--------+
                                    ^          ^
                                    |          |
                                    |          |
                               +--------+  +--------+
                               | vHOST0 |  | vHOST1 |
  +----------+               +-------------------------+
  |          |               |       ^          ^      |
  |          |               |       |          |      |
  |          | (0)<----->(0) | ------           |      |
  |    TG1   |               |          SUT     |      |
  |          |               |       (ovs-dpdk) |      |
  |          | (n)<----->(n) |------------------       |
  +----------+               +-------------------------+
  trafficgen_1                          host

OVS-DPDK 3-Node setup - Correlated Traffic
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. code-block:: console

                               +--------------------+
                               |                    |
                               |                    |
                               |        DUT         |
                               |       (VNF)        |
                               |                    |
                               +--------------------+
                               | virtio |  | virtio |
                               +--------+  +--------+
                                    ^          ^
                                    |          |
                                    |          |
                               +--------+  +--------+
                               | vHOST0 |  | vHOST1 |
  +----------+               +-------------------------+          +------------+
  |          |               |       ^          ^      |          |            |
  |          |               |       |          |      |          |            |
  |          | (0)<----->(0) | ------           |      |          |    TG2     |
  |    TG1   |               |          SUT     |      |          |(UDP Replay)|
  |          |               |      (ovs-dpdk)  |      |          |            |
  |          | (n)<----->(n) |                  ------ |(n)<-->(n)|            |
  +----------+               +-------------------------+          +------------+
  trafficgen_1                          host                       trafficgen_2

Before executing Yardstick test cases, make sure that ``pod.yaml`` reflects
the topology and update all the required fields.

.. code-block:: console

  cp <yardstick>/etc/yardstick/nodes/standalone/trex_bm.yaml.sample /etc/yardstick/nodes/standalone/pod_trex.yaml
  cp <yardstick>/etc/yardstick/nodes/standalone/host_ovs.yaml /etc/yardstick/nodes/standalone/host_ovs.yaml

.. note:: Update all the required fields like ip, user, password, pcis, etc.

OVS-DPDK Config pod_trex.yaml
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. code-block:: YAML

    nodes:
    -
      name: trafficgen_1
      role: TrafficGen
      ip: 1.1.1.1
      user: root
      password: r00t
      interfaces:
          xe0:  # logical name from topology.yaml and vnfd.yaml
              vpci:      "0000:07:00.0"
              driver:    i40e # default kernel driver
              dpdk_port_num: 0
              local_ip: "152.16.100.20"
              netmask:   "255.255.255.0"
              local_mac: "00:00:00:00:00:01"
          xe1:  # logical name from topology.yaml and vnfd.yaml
              vpci:      "0000:07:00.1"
              driver:    i40e # default kernel driver
              dpdk_port_num: 1
              local_ip: "152.16.40.20"
              netmask:   "255.255.255.0"
              local_mac: "00:00:00:00:00:02"

OVS-DPDK Config host_ovs.yaml
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. code-block:: YAML

    nodes:
    -
       name: ovs_dpdk
       role: OvsDpdk
       ip: 192.168.100.101
       user: ""
       password: ""

ovs_dpdk testcase update:
``<yardstick>/samples/vnf_samples/nsut/vfw/tc_ovs_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``

Update "contexts" section
"""""""""""""""""""""""""

.. code-block:: YAML

  contexts:
   - name: yardstick
     type: Node
     file: /etc/yardstick/nodes/standalone/pod_trex.yaml
   - type: StandaloneOvsDpdk
     name: yardstick
     file: /etc/yardstick/nodes/standalone/pod_ovs.yaml
     vm_deploy: True
     ovs_properties:
       version:
         ovs: 2.7.0
         dpdk: 16.11.1
       pmd_threads: 2
       ram:
         socket_0: 2048
         socket_1: 2048
       queues: 4
       vpath: "/usr/local"

     flavor:
       images: "/var/lib/libvirt/images/ubuntu.qcow2"
       ram: 4096
       extra_specs:
         hw:cpu_sockets: 1
         hw:cpu_cores: 6
         hw:cpu_threads: 2
       user: "" # update VM username
       password: "" # update password
     servers:
       vnf:
         network_ports:
           mgmt:
             cidr: '1.1.1.61/24'  # Update VM IP address, if static, <ip>/<mask> or if dynamic, <start of ip>/<mask>
           xe0:
             - uplink_0
           xe1:
             - downlink_0
     networks:
       uplink_0:
         phy_port: "0000:05:00.0"
         vpci: "0000:00:07.0"
         cidr: '152.16.100.10/24'
         gateway_ip: '152.16.100.20'
       downlink_0:
         phy_port: "0000:05:00.1"
         vpci: "0000:00:08.0"
         cidr: '152.16.40.10/24'
         gateway_ip: '152.16.100.20'

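As with the SR-IOV case, the updated test case can then be launched from
inside the Yardstick container; a sketch:

.. code-block:: console

  yardstick --debug task start \
      samples/vnf_samples/nsut/vfw/tc_ovs_rfc2544_ipv4_1rule_1flow_64B_trex.yaml
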
Network Service Benchmarking - OpenStack with SR-IOV support
============================================================

This section describes how to run a Sample VNF test case, using the Heat
context, with SR-IOV. It also covers how to install OpenStack in Ubuntu
16.04, using DevStack, with SR-IOV support.


Single node OpenStack setup with external TG
--------------------------------------------

.. code-block:: console

                                 +----------------------------+
                                 |OpenStack(DevStack)         |
                                 |                            |
                                 |   +--------------------+   |
                                 |   |sample-VNF VM       |   |
                                 |   |                    |   |
                                 |   |        DUT         |   |
                                 |   |       (VNF)        |   |
                                 |   |                    |   |
                                 |   +--------+  +--------+   |
                                 |   | VF NIC |  | VF NIC |   |
                                 |   +-----+--+--+----+---+   |
                                 |         ^          ^       |
                                 |         |          |       |
  +----------+                   +---------+----------+-------+
  |          |                   |        VF0        VF1      |
  |          |                   |         ^          ^       |
  |          |                   |         |   SUT    |       |
  |    TG    | (PF0)<----->(PF0) +---------+          |       |
  |          |                   |                    |       |
  |          | (PF1)<----->(PF1) +--------------------+       |
  |          |                   |                            |
  +----------+                   +----------------------------+
  trafficgen_1                                 host

Host pre-configuration
^^^^^^^^^^^^^^^^^^^^^^

.. warning:: The following configuration requires sudo access to the system.
  Make sure that your user has the access.

Enable the Intel VT-d or AMD-Vi extension in the BIOS. Some system
manufacturers disable this extension by default.

Activate the Intel VT-d or AMD-Vi extension in the kernel by modifying the
GRUB config file ``/etc/default/grub``.

For the Intel platform:

.. code:: bash

  ...
  GRUB_CMDLINE_LINUX_DEFAULT="intel_iommu=on"
  ...

For the AMD platform:

.. code:: bash

  ...
  GRUB_CMDLINE_LINUX_DEFAULT="amd_iommu=on"
  ...

Update the grub configuration file and restart the system:

.. warning:: The following command will reboot the system.

.. code:: bash

  sudo update-grub
  sudo reboot

Make sure the extension has been enabled:

.. code:: bash

  sudo journalctl -b 0 | grep -e IOMMU -e DMAR

  Feb 06 14:50:14 hostname kernel: ACPI: DMAR 0x000000006C406000 0001E0 (v01 INTEL  S2600WF  00000001 INTL 20091013)
  Feb 06 14:50:14 hostname kernel: DMAR: IOMMU enabled
  Feb 06 14:50:14 hostname kernel: DMAR: Host address width 46
  Feb 06 14:50:14 hostname kernel: DMAR: DRHD base: 0x000000d37fc000 flags: 0x0
  Feb 06 14:50:14 hostname kernel: DMAR: dmar0: reg_base_addr d37fc000 ver 1:0 cap 8d2078c106f0466 ecap f020de
  Feb 06 14:50:14 hostname kernel: DMAR: DRHD base: 0x000000e0ffc000 flags: 0x0
  Feb 06 14:50:14 hostname kernel: DMAR: dmar1: reg_base_addr e0ffc000 ver 1:0 cap 8d2078c106f0466 ecap f020de
  Feb 06 14:50:14 hostname kernel: DMAR: DRHD base: 0x000000ee7fc000 flags: 0x0

Setup system proxy (if needed). Add the following configuration into the
``/etc/environment`` file:

.. note:: The proxy server name/port and IPs should be changed according to
  the actual/current proxy configuration in the lab.

.. code:: bash

  export http_proxy=http://proxy.company.com:port
  export https_proxy=http://proxy.company.com:port
  export ftp_proxy=http://proxy.company.com:port
  export no_proxy=localhost,127.0.0.1,company.com,<IP-OF-HOST1>,<IP-OF-HOST2>,...
  export NO_PROXY=localhost,127.0.0.1,company.com,<IP-OF-HOST1>,<IP-OF-HOST2>,...

Upgrade the system:

.. code:: bash

  sudo -EH apt-get update
  sudo -EH apt-get upgrade
  sudo -EH apt-get dist-upgrade

Install the dependencies needed for DevStack:

.. code:: bash

  sudo -EH apt-get install python
  sudo -EH apt-get install python-dev
  sudo -EH apt-get install python-pip

Setup SR-IOV ports on the host:

.. note:: The ``enp24s0f0`` and ``enp24s0f1`` are physical function (PF)
  interfaces on the host and ``enp24s0f3`` is a public interface used in
  OpenStack, so the interface names should be changed according to the HW
  environment used for testing.

.. code:: bash

  sudo ip link set dev enp24s0f0 up
  sudo ip link set dev enp24s0f1 up
  sudo ip link set dev enp24s0f3 up

  # Create VFs on PF
  echo 2 | sudo tee /sys/class/net/enp24s0f0/device/sriov_numvfs
  echo 2 | sudo tee /sys/class/net/enp24s0f1/device/sriov_numvfs

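To verify that the VFs were created, the following checks can be used (a
sketch, assuming the interface names above):

.. code:: bash

  # Each PF should now report two VF entries ("vf 0", "vf 1")
  ip link show enp24s0f0

  # The VFs are also visible as PCI devices
  lspci | grep -i "Virtual Function"
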
DevStack installation
^^^^^^^^^^^^^^^^^^^^^

Use the official `Devstack <https://docs.openstack.org/devstack/pike/>`_
documentation to install OpenStack on a host. Please note that the stable
``pike`` branch of the devstack repo should be used during the installation.
The required ``local.conf`` configuration file is described below.

DevStack configuration file:

.. note:: Update the devstack configuration file by replacing angle brackets
  with a short description inside.

.. note:: Use ``lspci | grep Ether`` & ``lspci -n | grep <PCI ADDRESS>``
  commands to get the device and vendor id of the virtual function (VF).

.. literalinclude:: code/single-devstack-local.conf
   :language: console

Start the devstack installation on the host.

TG host configuration
^^^^^^^^^^^^^^^^^^^^^

Yardstick automatically installs and configures the TRex traffic generator on
the TG host based on the provided POD file (see below). It is nevertheless
recommended to check the compatibility of the installed NIC on the TG server
with the TRex software using the manual at
https://trex-tgn.cisco.com/trex/doc/trex_manual.html.

Run the Sample VNF test case
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

There is an example of a Sample VNF test case ready to be executed in an
OpenStack environment with SR-IOV support: ``samples/vnf_samples/nsut/vfw/
tc_heat_sriov_external_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``.

Install yardstick using `Install Yardstick (NSB Testing)`_ steps for the
OpenStack context.

Create the pod file for the TG in the yardstick repo folder located in the
yardstick container:

.. note:: The ``ip``, ``user``, ``password`` and ``vpci`` fields should be
  changed according to the HW environment used for the testing. Use the
  ``lshw -c network -businfo`` command to get the PF PCI address for the
  ``vpci`` field.

.. literalinclude:: code/single-yardstick-pod.conf
   :language: console

Run the Sample vFW RFC2544 SR-IOV TC (``samples/vnf_samples/nsut/vfw/
tc_heat_sriov_external_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``) in the Heat
context using the steps described in the `NS testing - using yardstick CLI`_
section.

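For reference, this is a sketch of the resulting invocation inside the
Yardstick container, following the CLI pattern shown earlier:

.. code-block:: console

  docker exec -it yardstick /bin/bash
  export EXTERNAL_NETWORK="<openstack public network>"
  yardstick --debug task start \
      samples/vnf_samples/nsut/vfw/tc_heat_sriov_external_rfc2544_ipv4_1rule_1flow_64B_trex.yaml
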
Multi node OpenStack TG and VNF setup (two nodes)
-------------------------------------------------

.. code-block:: console

  +----------------------------+                   +----------------------------+
  |OpenStack(DevStack)         |                   |OpenStack(DevStack)         |
  |                            |                   |                            |
  |   +--------------------+   |                   |   +--------------------+   |
  |   |sample-VNF VM       |   |                   |   |sample-VNF VM       |   |
  |   |                    |   |                   |   |                    |   |
  |   |         TG         |   |                   |   |        DUT         |   |
  |   |    trafficgen_1    |   |                   |   |       (VNF)        |   |
  |   |                    |   |                   |   |                    |   |
  |   +--------+  +--------+   |                   |   +--------+  +--------+   |
  |   | VF NIC |  | VF NIC |   |                   |   | VF NIC |  | VF NIC |   |
  |   +----+---+--+----+---+   |                   |   +-----+--+--+----+---+   |
  |        ^           ^       |                   |         ^          ^       |
  |        |           |       |                   |         |          |       |
  +--------+-----------+-------+                   +---------+----------+-------+
  |       VF0         VF1      |                   |        VF0        VF1      |
  |        ^           ^       |                   |         ^          ^       |
  |        |    SUT2   |       |                   |         |   SUT1   |       |
  |        |           +-------+ (PF0)<----->(PF0) +---------+          |       |
  |        |                   |                   |                    |       |
  |        +-------------------+ (PF1)<----->(PF1) +--------------------+       |
  |                            |                   |                            |
  +----------------------------+                   +----------------------------+
           host2 (compute)                               host1 (controller)

Controller/Compute pre-configuration
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Pre-configuration of the controller and compute hosts is the same as
described in the `Host pre-configuration`_ section. Follow the steps in that
section.

DevStack configuration
^^^^^^^^^^^^^^^^^^^^^^

Use the official `Devstack <https://docs.openstack.org/devstack/pike/>`_
documentation to install OpenStack on a host. Please note that the stable
``pike`` branch of the devstack repo should be used during the installation.
The required ``local.conf`` configuration files are described below.

.. note:: Update the devstack configuration files by replacing angle brackets
  with a short description inside.

.. note:: Use ``lspci | grep Ether`` & ``lspci -n | grep <PCI ADDRESS>``
  commands to get the device and vendor id of the virtual function (VF).

DevStack configuration file for the controller host:

.. literalinclude:: code/multi-devstack-controller-local.conf
   :language: console

DevStack configuration file for the compute host:

.. literalinclude:: code/multi-devstack-compute-local.conf
   :language: console

Start the devstack installation on the controller and compute hosts.

Run the sample vFW TC
^^^^^^^^^^^^^^^^^^^^^

Install yardstick using `Install Yardstick (NSB Testing)`_ steps for the
OpenStack context.

Run the sample vFW RFC2544 SR-IOV TC (``samples/vnf_samples/nsut/vfw/
tc_heat_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``) in the Heat context using
the steps described in the `NS testing - using yardstick CLI`_ section and
the following yardstick command line arguments:

.. code:: bash

  yardstick -d task start --task-args='{"provider": "sriov"}' \
  samples/vnf_samples/nsut/vfw/tc_heat_rfc2544_ipv4_1rule_1flow_64B_trex.yaml

Enabling other Traffic generators
=================================

IxLoad
------

1. Software needed: IxLoadAPI ``<IxLoadTclApi version>Linux64.bin.tgz`` and
   ``<IxOS version>Linux64.bin.tar.gz`` (download from the Ixia support
   site). Install ``<IxLoadTclApi version>Linux64.bin.tgz`` and
   ``<IxOS version>Linux64.bin.tar.gz``.
   If the installation was not done inside the container, after installing
   the IXIA client, check ``/opt/ixia/ixload/<ver>/bin/ixloadpython`` and make
   sure you can run this cmd inside the yardstick container. Usually the user
   is required to copy or link ``/opt/ixia/python/<ver>/bin/ixiapython`` to
   ``/usr/bin/ixiapython<ver>`` inside the container, as sketched below.

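   A minimal sketch of that link step, run inside the container (the
   ``<ver>`` placeholders must match the installed IxLoad/IxOS versions):

   .. code-block:: console

     ln -s /opt/ixia/python/<ver>/bin/ixiapython /usr/bin/ixiapython<ver>
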
2. Update the ``pod_ixia.yaml`` file with the ixia details.

  .. code-block:: console

    cp <repo>/etc/yardstick/nodes/pod.yaml.nsb.sample.ixia etc/yardstick/nodes/pod_ixia.yaml

  Config ``pod_ixia.yaml``

  .. literalinclude:: code/pod_ixia.yaml
     :language: console

  For sriov/ovs_dpdk pod files, please refer to the Standalone Virtualization
  section above for the ovs-dpdk/sriov configuration.

3. Start the IxOS TCL Server (install 'Ixia IxExplorer IxOS <version>').
   You will also need to configure the IxLoad machine to start the IXIA
   IxosTclServer. This can be started like so:

   * Connect to the IxLoad machine using RDP
   * Go to:
     ``Start->Programs->Ixia->IxOS->IxOS 8.01-GA-Patch1->Ixia Tcl Server IxOS 8.01-GA-Patch1``
     or
     ``"C:\Program Files (x86)\Ixia\IxOS\8.01-GA-Patch1\ixTclServer.exe"``

4. Create a folder ``Results`` in ``C:\`` and share the folder on the
   network.

5. Execute the test case in the samplevnf folder, e.g.
   ``<repo>/samples/vnf_samples/nsut/vfw/tc_baremetal_http_ixload_1b_Requests-65000_Concurrency.yaml``

IxNetwork
---------

IxNetwork testcases use the IxNetwork API Python Bindings module, which is
installed as part of the requirements of the project.

1. Update the ``pod_ixia.yaml`` file with the ixia details.

  .. code-block:: console

    cp <repo>/etc/yardstick/nodes/pod.yaml.nsb.sample.ixia etc/yardstick/nodes/pod_ixia.yaml

  Config ``pod_ixia.yaml``

  .. literalinclude:: code/pod_ixia.yaml
     :language: console

  For sriov/ovs_dpdk pod files, please refer to the Standalone Virtualization
  section above for the ovs-dpdk/sriov configuration.

2. Start the IxNetwork TCL Server.
   You will also need to configure the IxNetwork machine to start the IXIA
   IxNetworkTclServer. This can be started like so:

    * Connect to the IxNetwork machine using RDP
    * Go to:
      ``Start->Programs->Ixia->IxNetwork->IxNetwork 7.21.893.14 GA->IxNetworkTclServer``
      (or ``IxNetworkApiServer``)

3. Execute the test case in the samplevnf folder, e.g.
   ``<repo>/samples/vnf_samples/nsut/vfw/tc_baremetal_rfc2544_ipv4_1rule_1flow_64B_ixia.yaml``