Merge "Bugfix: Can't get image list in API"
[yardstick.git] / docs / testing / user / userguide / 13-nsb-installation.rst
.. This work is licensed under a Creative Commons Attribution 4.0 International
.. License.
.. http://creativecommons.org/licenses/by/4.0
.. (c) OPNFV, 2016-2017 Intel Corporation.

=======================================
Yardstick - NSB Testing - Installation
=======================================

Abstract
========

The Network Service Benchmarking (NSB) extends the Yardstick framework to do
VNF characterization and benchmarking in three different execution
environments: bare metal (i.e. native Linux environment), standalone virtual
environment, and managed virtualized environment (e.g. OpenStack). It also
brings in the capability to interact with external traffic generators, both
hardware and software based, for triggering and validating the traffic
according to user defined profiles.

The steps needed to run Yardstick with NSB testing are:

* Install Yardstick (NSB testing).
* Set up or reference a ``pod.yaml`` file describing the test topology.
* Create or reference the test configuration yaml file.
* Run the test case.


Prerequisites
=============

Refer to the chapter *Yardstick Installation* for more information on
Yardstick prerequisites.

Several prerequisites are needed for Yardstick (VNF testing):

  * Python modules: pyzmq, pika
  * flex
  * bison
  * build-essential
  * automake
  * libtool
  * librabbitmq-dev
  * rabbitmq-server
  * collectd
  * intel-cmt-cat

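On a Debian/Ubuntu host these can typically be installed as follows; this is
a minimal sketch using the package and module names listed above, and
availability may vary by release (e.g. ``intel-cmt-cat`` may have to be built
from source on older distributions):

.. code-block:: console

  sudo apt-get update
  sudo apt-get install -y flex bison build-essential automake libtool \
       librabbitmq-dev rabbitmq-server collectd intel-cmt-cat

  # Python modules
  pip install pyzmq pika
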
Hardware & Software Ingredients
-------------------------------

SUT requirements:


   ======= ===================
   Item    Description
   ======= ===================
   Memory  Min 20GB
   NICs    2 x 10G
   OS      Ubuntu 16.04.3 LTS
   kernel  4.4.0-34-generic
   DPDK    17.02
   ======= ===================

Boot and BIOS settings:


   ============= =================================================
   Boot settings default_hugepagesz=1G hugepagesz=1G hugepages=16
                 hugepagesz=2M hugepages=2048 isolcpus=1-11,22-33
                 nohz_full=1-11,22-33 rcu_nocbs=1-11,22-33
                 iommu=on iommu=pt intel_iommu=on
                 Note: nohz_full and rcu_nocbs are used to
                 disable Linux kernel interrupts
   BIOS          CPU Power and Performance Policy <Performance>
                 CPU C-state Disabled
                 CPU P-state Disabled
                 Enhanced Intel® Speedstep® Tech Disabled
                 Hyper-Threading Technology (If supported) Enabled
                 Virtualization Technology Enabled
                 Intel(R) VT for Direct I/O Enabled
                 Coherency Enabled
                 Turbo Boost Disabled
   ============= =================================================

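After rebooting with these parameters, the active kernel command line and the
hugepage allocation can be verified through procfs; a quick check:

.. code-block:: console

   cat /proc/cmdline        # confirm hugepage, isolcpus and iommu flags are active
   grep Huge /proc/meminfo  # confirm hugepage totals
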


Install Yardstick (NSB Testing)
===============================

Download the source code and install Yardstick from it:

.. code-block:: console

  git clone https://gerrit.opnfv.org/gerrit/yardstick

  cd yardstick

  # Switch to latest stable branch
  # git checkout <tag or stable branch>
  git checkout stable/euphrates

Configure the network proxy, either by setting the global environment file or
by exporting the environment variables:

.. code-block:: ini

    cat /etc/environment
    http_proxy='http://proxy.company.com:port'
    https_proxy='http://proxy.company.com:port'

.. code-block:: console

    export http_proxy='http://proxy.company.com:port'
    export https_proxy='http://proxy.company.com:port'

The last step is to modify the Yardstick installation inventory used by
Ansible:

.. code-block:: ini

  cat ./ansible/yardstick-install-inventory.ini
  [jumphost]
  localhost  ansible_connection=local

  [yardstick-standalone]
  yardstick-standalone-node ansible_host=192.168.1.2
  yardstick-standalone-node-2 ansible_host=192.168.1.3

  # The section below exists only for backward compatibility.
  # It will be removed later.
  [yardstick:children]
  jumphost

  [all:vars]
  ansible_user=root
  ansible_pass=root


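Before running the installer, it is worth checking that Ansible can actually
reach every host in the inventory; a minimal sketch (assumes Ansible is
already installed on the jumphost and the credentials above are valid):

.. code-block:: console

    ansible -i ansible/yardstick-install-inventory.ini all -m ping
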
To execute an installation for a Bare-Metal or a Standalone context:

.. code-block:: console

    ./nsb_setup.sh


To execute an installation for an OpenStack context:

.. code-block:: console

    ./nsb_setup.sh <path to admin-openrc.sh>

The above commands set up a Docker container with the latest Yardstick code.
To enter the container, execute:

.. code-block:: console

  docker exec -it yardstick bash

The setup also automatically downloads all the packages needed for the NSB
testing setup. Refer to the chapter :doc:`04-installation` for more on
Docker: **Install Yardstick using Docker (recommended)**

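To confirm that the ``yardstick`` container created by ``nsb_setup.sh`` is up
before entering it, a quick check:

.. code-block:: console

  docker ps --filter name=yardstick
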
System Topology
===============

.. code-block:: console

  +----------+              +----------+
  |          |              |          |
  |          | (0)----->(0) |          |
  |    TG1   |              |    DUT   |
  |          |              |          |
  |          | (1)<-----(1) |          |
  +----------+              +----------+
  trafficgen_1                   vnf


Environment parameters and credentials
======================================

Config yardstick conf
---------------------

If you did not run ``yardstick env influxdb`` inside the container (which
generates a correct ``yardstick.conf``), then create the config file manually
(run inside the container):
::

    cp ./etc/yardstick/yardstick.conf.sample /etc/yardstick/yardstick.conf
    vi /etc/yardstick/yardstick.conf

Add ``trex_path``, ``trex_client_lib`` and ``bin_path`` to the ``nsb``
section.

::

  [DEFAULT]
  debug = True
  dispatcher = file, influxdb

  [dispatcher_influxdb]
  timeout = 5
  target = http://{YOUR_IP_HERE}:8086
  db_name = yardstick
  username = root
  password = root

  [nsb]
  trex_path=/opt/nsb_bin/trex/scripts
  bin_path=/opt/nsb_bin
  trex_client_lib=/opt/nsb_bin/trex_client/stl

Run Yardstick - Network Service Testcases
=========================================


NS testing - using yardstick CLI
--------------------------------

  See :doc:`04-installation`

.. code-block:: console


  docker exec -it yardstick /bin/bash
  source /etc/yardstick/openstack.creds (only for heat TC if nsb_setup.sh was NOT used)
  export EXTERNAL_NETWORK="<openstack public network>" (only for heat TC)
  yardstick --debug task start yardstick/samples/vnf_samples/nsut/<vnf>/<test case>

Network Service Benchmarking - Bare-Metal
=========================================

Bare-Metal Config pod.yaml describing Topology
----------------------------------------------

Bare-Metal 2-Node setup
^^^^^^^^^^^^^^^^^^^^^^^
.. code-block:: console

  +----------+              +----------+
  |          |              |          |
  |          | (0)----->(0) |          |
  |    TG1   |              |    DUT   |
  |          |              |          |
  |          | (n)<-----(n) |          |
  +----------+              +----------+
  trafficgen_1                   vnf

Bare-Metal 3-Node setup - Correlated Traffic
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. code-block:: console

  +----------+              +----------+            +------------+
  |          |              |          |            |            |
  |          |              |          |            |            |
  |          | (0)----->(0) |          |            |    UDP     |
  |    TG1   |              |    DUT   |            |   Replay   |
  |          |              |          |            |            |
  |          |              |          |(1)<---->(0)|            |
  +----------+              +----------+            +------------+
  trafficgen_1                   vnf                 trafficgen_2


Bare-Metal Config pod.yaml
--------------------------
Before executing Yardstick test cases, make sure that ``pod.yaml`` reflects
the topology and update all the required fields.::

    cp /etc/yardstick/nodes/pod.yaml.nsb.sample /etc/yardstick/nodes/pod.yaml

.. code-block:: YAML

    nodes:
    -
        name: trafficgen_1
        role: TrafficGen
        ip: 1.1.1.1
        user: root
        password: r00t
        interfaces:
            xe0:  # logical name from topology.yaml and vnfd.yaml
                vpci:      "0000:07:00.0"
                driver:    i40e # default kernel driver
                dpdk_port_num: 0
                local_ip: "152.16.100.20"
                netmask:   "255.255.255.0"
                local_mac: "00:00:00:00:00:01"
            xe1:  # logical name from topology.yaml and vnfd.yaml
                vpci:      "0000:07:00.1"
                driver:    i40e # default kernel driver
                dpdk_port_num: 1
                local_ip: "152.16.40.20"
                netmask:   "255.255.255.0"
                local_mac: "00:00:00:00:00:02"

    -
        name: vnf
        role: vnf
        ip: 1.1.1.2
        user: root
        password: r00t
        host: 1.1.1.2 # BM: host == ip; virtualized env: host == compute node
        interfaces:
            xe0:  # logical name from topology.yaml and vnfd.yaml
                vpci:      "0000:07:00.0"
                driver:    i40e # default kernel driver
                dpdk_port_num: 0
                local_ip: "152.16.100.19"
                netmask:   "255.255.255.0"
                local_mac: "00:00:00:00:00:03"

            xe1:  # logical name from topology.yaml and vnfd.yaml
                vpci:      "0000:07:00.1"
                driver:    i40e # default kernel driver
                dpdk_port_num: 1
                local_ip: "152.16.40.19"
                netmask:   "255.255.255.0"
                local_mac: "00:00:00:00:00:04"
        routing_table:
        - network: "152.16.100.20"
          netmask: "255.255.255.0"
          gateway: "152.16.100.20"
          if: "xe0"
        - network: "152.16.40.20"
          netmask: "255.255.255.0"
          gateway: "152.16.40.20"
          if: "xe1"
        nd_route_tbl:
        - network: "0064:ff9b:0:0:0:0:9810:6414"
          netmask: "112"
          gateway: "0064:ff9b:0:0:0:0:9810:6414"
          if: "xe0"
        - network: "0064:ff9b:0:0:0:0:9810:2814"
          netmask: "112"
          gateway: "0064:ff9b:0:0:0:0:9810:2814"
          if: "xe1"

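The ``vpci`` and ``driver`` values above can be read from each node before
editing the file; a short sketch using standard Linux tools (the interface
name ``eth2`` is just an example):

.. code-block:: console

    lshw -c network -businfo       # map interface names to PCI addresses
    ethtool -i eth2                # show the kernel driver bound to a port
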

Network Service Benchmarking - Standalone Virtualization
========================================================

SR-IOV
------

SR-IOV Pre-requisites
^^^^^^^^^^^^^^^^^^^^^

On Host:
 a) Create a bridge for VM to connect to external network

  .. code-block:: console

      brctl addbr br-int
      brctl addif br-int <interface_name>    #This interface is connected to internet

 b) Build guest image for VNF to run.
    Most of the sample test cases in Yardstick are using a guest image called
    ``yardstick-image`` which deviates from an Ubuntu Cloud Server image.
    Yardstick has a tool for building this custom image with SampleVNF.
    It is necessary to have ``sudo`` rights to use this tool.

    You may also need to install several additional packages to use this
    tool, by following the commands below::

       sudo apt-get update && sudo apt-get install -y qemu-utils kpartx

    This image can be built using the following commands in the directory
    where Yardstick is installed:

    .. code-block:: console

       export YARD_IMG_ARCH='amd64'
       echo 'Defaults env_keep += "YARD_IMG_ARCH"' | sudo tee -a /etc/sudoers

    Please use the Ansible script to generate a cloud image; for more details
    refer to the chapter :doc:`04-installation`.

    .. note:: The VM should be built with a static IP and should be
       accessible from the Yardstick host.


SR-IOV Config pod.yaml describing Topology
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

SR-IOV 2-Node setup
^^^^^^^^^^^^^^^^^^^
.. code-block:: console

                               +--------------------+
                               |                    |
                               |                    |
                               |        DUT         |
                               |       (VNF)        |
                               |                    |
                               +--------------------+
                               | VF NIC |  | VF NIC |
                               +--------+  +--------+
                                     ^          ^
                                     |          |
                                     |          |
  +----------+               +-------------------------+
  |          |               |       ^          ^      |
  |          |               |       |          |      |
  |          | (0)<----->(0) | ------           |      |
  |    TG1   |               |           SUT    |      |
  |          |               |                  |      |
  |          | (n)<----->(n) |------------------       |
  +----------+               +-------------------------+
  trafficgen_1                          host



SR-IOV 3-Node setup - Correlated Traffic
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. code-block:: console

                               +--------------------+
                               |                    |
                               |                    |
                               |        DUT         |
                               |       (VNF)        |
                               |                    |
                               +--------------------+
                               | VF NIC |  | VF NIC |
                               +--------+  +--------+
                                     ^          ^
                                     |          |
                                     |          |
  +----------+               +-------------------------+            +--------------+
  |          |               |       ^          ^      |            |              |
  |          |               |       |          |      |            |              |
  |          | (0)<----->(0) | ------           |      |            |     TG2      |
  |    TG1   |               |           SUT    |      |            | (UDP Replay) |
  |          |               |                  |      |            |              |
  |          | (n)<----->(n) |                  ------ | (n)<-->(n) |              |
  +----------+               +-------------------------+            +--------------+
  trafficgen_1                          host                       trafficgen_2

Before executing Yardstick test cases, make sure that ``pod.yaml`` reflects
the topology and update all the required fields.

.. code-block:: console

    cp <yardstick>/etc/yardstick/nodes/standalone/trex_bm.yaml.sample /etc/yardstick/nodes/standalone/pod_trex.yaml
    cp <yardstick>/etc/yardstick/nodes/standalone/host_sriov.yaml /etc/yardstick/nodes/standalone/host_sriov.yaml

.. note:: Update all the required fields like ip, user, password, PCI
   addresses, etc.

SR-IOV Config pod_trex.yaml
^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. code-block:: YAML

    nodes:
    -
        name: trafficgen_1
        role: TrafficGen
        ip: 1.1.1.1
        user: root
        password: r00t
        key_filename: /root/.ssh/id_rsa
        interfaces:
            xe0:  # logical name from topology.yaml and vnfd.yaml
                vpci:      "0000:07:00.0"
                driver:    i40e # default kernel driver
                dpdk_port_num: 0
                local_ip: "152.16.100.20"
                netmask:   "255.255.255.0"
                local_mac: "00:00:00:00:00:01"
            xe1:  # logical name from topology.yaml and vnfd.yaml
                vpci:      "0000:07:00.1"
                driver:    i40e # default kernel driver
                dpdk_port_num: 1
                local_ip: "152.16.40.20"
                netmask:   "255.255.255.0"
                local_mac: "00:00:00:00:00:02"

SR-IOV Config host_sriov.yaml
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. code-block:: YAML

    nodes:
    -
       name: sriov
       role: Sriov
       ip: 192.168.100.101
       user: ""
       password: ""

SR-IOV testcase update:
``<yardstick>/samples/vnf_samples/nsut/vfw/tc_sriov_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``

Update "contexts" section
"""""""""""""""""""""""""

.. code-block:: YAML

  contexts:
   - name: yardstick
     type: Node
     file: /etc/yardstick/nodes/standalone/pod_trex.yaml
   - type: StandaloneSriov
     file: /etc/yardstick/nodes/standalone/host_sriov.yaml
     name: yardstick
     vm_deploy: True
     flavor:
       images: "/var/lib/libvirt/images/ubuntu.qcow2"
       ram: 4096
       extra_specs:
         hw:cpu_sockets: 1
         hw:cpu_cores: 6
         hw:cpu_threads: 2
       user: "" # update VM username
       password: "" # update password
     servers:
       vnf:
         network_ports:
           mgmt:
             cidr: '1.1.1.61/24'  # Update VM IP address, if static, <ip>/<mask> or if dynamic, <start of ip>/<mask>
           xe0:
             - uplink_0
           xe1:
             - downlink_0
     networks:
       uplink_0:
         phy_port: "0000:05:00.0"
         vpci: "0000:00:07.0"
         cidr: '152.16.100.10/24'
         gateway_ip: '152.16.100.20'
       downlink_0:
         phy_port: "0000:05:00.1"
         vpci: "0000:00:08.0"
         cidr: '152.16.40.10/24'
         gateway_ip: '152.16.100.20'

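Once the pod files are in place and the "contexts" section is updated, the
SR-IOV test case can be started from inside the Yardstick container with the
CLI pattern shown earlier:

.. code-block:: console

  yardstick --debug task start samples/vnf_samples/nsut/vfw/tc_sriov_rfc2544_ipv4_1rule_1flow_64B_trex.yaml
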


OVS-DPDK
--------

OVS-DPDK Pre-requisites
^^^^^^^^^^^^^^^^^^^^^^^

On Host:
 a) Create a bridge for VM to connect to external network

  .. code-block:: console

      brctl addbr br-int
      brctl addif br-int <interface_name>    #This interface is connected to internet

 b) Build guest image for VNF to run.
    Most of the sample test cases in Yardstick are using a guest image called
    ``yardstick-image`` which deviates from an Ubuntu Cloud Server image.
    Yardstick has a tool for building this custom image with SampleVNF.
    It is necessary to have ``sudo`` rights to use this tool.

    You may also need to install several additional packages to use this
    tool, by following the commands below::

       sudo apt-get update && sudo apt-get install -y qemu-utils kpartx

    This image can be built using the following commands in the directory
    where Yardstick is installed::

       export YARD_IMG_ARCH='amd64'
       echo 'Defaults env_keep += "YARD_IMG_ARCH"' | sudo tee -a /etc/sudoers
       sudo tools/yardstick-img-dpdk-modify tools/ubuntu-server-cloudimg-samplevnf-modify.sh

    For more details refer to the chapter :doc:`04-installation`.

    .. note:: The VM should be built with a static IP and should be
       accessible from the Yardstick host.

 c) OVS & DPDK version.
     - OVS 2.7 or higher with DPDK 16.11.1 or higher is supported. The
       installed versions can be verified as sketched below.

 d) Setup OVS/DPDK on host.
     Please refer to the `OVS-DPDK <http://docs.openvswitch.org/en/latest/intro/install/dpdk/>`_
     guide on how to set it up.

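A quick version check (whether the DPDK version is printed by
``ovs-vswitchd --version`` depends on the OVS build):

.. code-block:: console

     ovs-vsctl --version
     ovs-vswitchd --version
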

OVS-DPDK Config pod.yaml describing Topology
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

OVS-DPDK 2-Node setup
^^^^^^^^^^^^^^^^^^^^^


.. code-block:: console

                               +--------------------+
                               |                    |
                               |                    |
                               |        DUT         |
                               |       (VNF)        |
                               |                    |
                               +--------------------+
                               | virtio |  | virtio |
                               +--------+  +--------+
                                    ^          ^
                                    |          |
                                    |          |
                               +--------+  +--------+
                               | vHOST0 |  | vHOST1 |
  +----------+               +-------------------------+
  |          |               |       ^          ^      |
  |          |               |       |          |      |
  |          | (0)<----->(0) | ------           |      |
  |    TG1   |               |          SUT     |      |
  |          |               |       (ovs-dpdk) |      |
  |          | (n)<----->(n) |------------------       |
  +----------+               +-------------------------+
  trafficgen_1                          host


OVS-DPDK 3-Node setup - Correlated Traffic
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. code-block:: console

                               +--------------------+
                               |                    |
                               |                    |
                               |        DUT         |
                               |       (VNF)        |
                               |                    |
                               +--------------------+
                               | virtio |  | virtio |
                               +--------+  +--------+
                                    ^          ^
                                    |          |
                                    |          |
                               +--------+  +--------+
                               | vHOST0 |  | vHOST1 |
  +----------+               +-------------------------+          +------------+
  |          |               |       ^          ^      |          |            |
  |          |               |       |          |      |          |            |
  |          | (0)<----->(0) | ------           |      |          |    TG2     |
  |    TG1   |               |          SUT     |      |          |(UDP Replay)|
  |          |               |      (ovs-dpdk)  |      |          |            |
  |          | (n)<----->(n) |                  ------ |(n)<-->(n)|            |
  +----------+               +-------------------------+          +------------+
  trafficgen_1                          host                       trafficgen_2


Before executing Yardstick test cases, make sure that ``pod.yaml`` reflects
the topology and update all the required fields.

.. code-block:: console

  cp <yardstick>/etc/yardstick/nodes/standalone/trex_bm.yaml.sample /etc/yardstick/nodes/standalone/pod_trex.yaml
  cp <yardstick>/etc/yardstick/nodes/standalone/host_ovs.yaml /etc/yardstick/nodes/standalone/host_ovs.yaml

.. note:: Update all the required fields like ip, user, password, PCI
   addresses, etc.

OVS-DPDK Config pod_trex.yaml
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. code-block:: YAML

    nodes:
    -
      name: trafficgen_1
      role: TrafficGen
      ip: 1.1.1.1
      user: root
      password: r00t
      interfaces:
          xe0:  # logical name from topology.yaml and vnfd.yaml
              vpci:      "0000:07:00.0"
              driver:    i40e # default kernel driver
              dpdk_port_num: 0
              local_ip: "152.16.100.20"
              netmask:   "255.255.255.0"
              local_mac: "00:00:00:00:00:01"
          xe1:  # logical name from topology.yaml and vnfd.yaml
              vpci:      "0000:07:00.1"
              driver:    i40e # default kernel driver
              dpdk_port_num: 1
              local_ip: "152.16.40.20"
              netmask:   "255.255.255.0"
              local_mac: "00:00:00:00:00:02"

OVS-DPDK Config host_ovs.yaml
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. code-block:: YAML

    nodes:
    -
       name: ovs_dpdk
       role: OvsDpdk
       ip: 192.168.100.101
       user: ""
       password: ""

ovs_dpdk testcase update:
``<yardstick>/samples/vnf_samples/nsut/vfw/tc_ovs_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``

Update "contexts" section
"""""""""""""""""""""""""

.. code-block:: YAML

  contexts:
   - name: yardstick
     type: Node
     file: /etc/yardstick/nodes/standalone/pod_trex.yaml
   - type: StandaloneOvsDpdk
     name: yardstick
     file: /etc/yardstick/nodes/standalone/pod_ovs.yaml
     vm_deploy: True
     ovs_properties:
       version:
         ovs: 2.7.0
         dpdk: 16.11.1
       pmd_threads: 2
       ram:
         socket_0: 2048
         socket_1: 2048
       queues: 4
       vpath: "/usr/local"

     flavor:
       images: "/var/lib/libvirt/images/ubuntu.qcow2"
       ram: 4096
       extra_specs:
         hw:cpu_sockets: 1
         hw:cpu_cores: 6
         hw:cpu_threads: 2
       user: "" # update VM username
       password: "" # update password
     servers:
       vnf:
         network_ports:
           mgmt:
             cidr: '1.1.1.61/24'  # Update VM IP address, if static, <ip>/<mask> or if dynamic, <start of ip>/<mask>
           xe0:
             - uplink_0
           xe1:
             - downlink_0
     networks:
       uplink_0:
         phy_port: "0000:05:00.0"
         vpci: "0000:00:07.0"
         cidr: '152.16.100.10/24'
         gateway_ip: '152.16.100.20'
       downlink_0:
         phy_port: "0000:05:00.1"
         vpci: "0000:00:08.0"
         cidr: '152.16.40.10/24'
         gateway_ip: '152.16.100.20'

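As with SR-IOV, once the pod files and the "contexts" section are updated,
the OVS-DPDK test case can be started from inside the Yardstick container:

.. code-block:: console

  yardstick --debug task start samples/vnf_samples/nsut/vfw/tc_ovs_rfc2544_ipv4_1rule_1flow_64B_trex.yaml
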

Network Service Benchmarking - OpenStack with SR-IOV support
============================================================

This section describes how to run a Sample VNF test case, using Heat context,
with SR-IOV. It also covers how to install OpenStack on Ubuntu 16.04, using
DevStack, with SR-IOV support.


Single node OpenStack setup with external TG
--------------------------------------------

.. code-block:: console

                                 +----------------------------+
                                 |OpenStack(DevStack)         |
                                 |                            |
                                 |   +--------------------+   |
                                 |   |sample-VNF VM       |   |
                                 |   |                    |   |
                                 |   |        DUT         |   |
                                 |   |       (VNF)        |   |
                                 |   |                    |   |
                                 |   +--------+  +--------+   |
                                 |   | VF NIC |  | VF NIC |   |
                                 |   +-----+--+--+----+---+   |
                                 |         ^          ^       |
                                 |         |          |       |
  +----------+                   +---------+----------+-------+
  |          |                   |        VF0        VF1      |
  |          |                   |         ^          ^       |
  |          |                   |         |   SUT    |       |
  |    TG    | (PF0)<----->(PF0) +---------+          |       |
  |          |                   |                    |       |
  |          | (PF1)<----->(PF1) +--------------------+       |
  |          |                   |                            |
  +----------+                   +----------------------------+
  trafficgen_1                                 host


Host pre-configuration
^^^^^^^^^^^^^^^^^^^^^^

.. warning:: The following configuration requires sudo access to the system.
  Make sure that your user has the access.

Enable the Intel VT-d or AMD-Vi extension in the BIOS. Some system
manufacturers disable this extension by default.

Activate the Intel VT-d or AMD-Vi extension in the kernel by modifying the
GRUB config file ``/etc/default/grub``.

For the Intel platform:

.. code:: bash

  ...
  GRUB_CMDLINE_LINUX_DEFAULT="intel_iommu=on"
  ...

For the AMD platform:

.. code:: bash

  ...
  GRUB_CMDLINE_LINUX_DEFAULT="amd_iommu=on"
  ...

Update the grub configuration file and restart the system:

.. warning:: The following command will reboot the system.

.. code:: bash

  sudo update-grub
  sudo reboot

Make sure the extension has been enabled:

.. code:: bash

  sudo journalctl -b 0 | grep -e IOMMU -e DMAR

  Feb 06 14:50:14 hostname kernel: ACPI: DMAR 0x000000006C406000 0001E0 (v01 INTEL  S2600WF  00000001 INTL 20091013)
  Feb 06 14:50:14 hostname kernel: DMAR: IOMMU enabled
  Feb 06 14:50:14 hostname kernel: DMAR: Host address width 46
  Feb 06 14:50:14 hostname kernel: DMAR: DRHD base: 0x000000d37fc000 flags: 0x0
  Feb 06 14:50:14 hostname kernel: DMAR: dmar0: reg_base_addr d37fc000 ver 1:0 cap 8d2078c106f0466 ecap f020de
  Feb 06 14:50:14 hostname kernel: DMAR: DRHD base: 0x000000e0ffc000 flags: 0x0
  Feb 06 14:50:14 hostname kernel: DMAR: dmar1: reg_base_addr e0ffc000 ver 1:0 cap 8d2078c106f0466 ecap f020de
  Feb 06 14:50:14 hostname kernel: DMAR: DRHD base: 0x000000ee7fc000 flags: 0x0

Setup system proxy (if needed). Add the following configuration into the
``/etc/environment`` file:

.. note:: The proxy server name/port and IPs should be changed according to
  the actual/current proxy configuration in the lab.

.. code:: bash

  export http_proxy=http://proxy.company.com:port
  export https_proxy=http://proxy.company.com:port
  export ftp_proxy=http://proxy.company.com:port
  export no_proxy=localhost,127.0.0.1,company.com,<IP-OF-HOST1>,<IP-OF-HOST2>,...
  export NO_PROXY=localhost,127.0.0.1,company.com,<IP-OF-HOST1>,<IP-OF-HOST2>,...

Upgrade the system:

.. code:: bash

  sudo -EH apt-get update
  sudo -EH apt-get upgrade
  sudo -EH apt-get dist-upgrade

Install the dependencies needed for DevStack:

.. code:: bash

  sudo -EH apt-get install python
  sudo -EH apt-get install python-dev
  sudo -EH apt-get install python-pip

Setup SR-IOV ports on the host:

.. note:: The ``enp24s0f0`` and ``enp24s0f1`` interfaces are physical
  function (PF) interfaces on a host and ``enp24s0f3`` is a public interface
  used in OpenStack, so the interface names should be changed according to
  the HW environment used for testing.

.. code:: bash

  sudo ip link set dev enp24s0f0 up
  sudo ip link set dev enp24s0f1 up
  sudo ip link set dev enp24s0f3 up

  # Create VFs on PF
  echo 2 | sudo tee /sys/class/net/enp24s0f0/device/sriov_numvfs
  echo 2 | sudo tee /sys/class/net/enp24s0f1/device/sriov_numvfs


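The newly created VFs should now be visible on the PCI bus and under the PF;
a quick verification (interface names as in the note above):

.. code:: bash

  lspci | grep -i "Virtual Function"
  ip link show enp24s0f0   # VFs are listed as "vf 0", "vf 1"
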
DevStack installation
^^^^^^^^^^^^^^^^^^^^^

Use the official `Devstack <https://docs.openstack.org/devstack/pike/>`_
documentation to install OpenStack on a host. Please note that the stable
``pike`` branch of the devstack repo should be used during the installation.
The required ``local.conf`` configuration file is described below.

DevStack configuration file:

.. note:: Update the devstack configuration file by replacing angle brackets
  with a short description inside.

.. note:: Use ``lspci | grep Ether`` & ``lspci -n | grep <PCI ADDRESS>``
  commands to get the device and vendor id of the virtual function (VF).

.. literalinclude:: code/single-devstack-local.conf
   :language: console

Start the devstack installation on the host.


TG host configuration
^^^^^^^^^^^^^^^^^^^^^

Yardstick automatically installs and configures the TRex traffic generator on
the TG host based on the provided POD file (see below). It is nevertheless
recommended to check the compatibility of the installed NIC on the TG server
with the TRex software, using the manual at
https://trex-tgn.cisco.com/trex/doc/trex_manual.html.


Run the Sample VNF test case
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

There is an example of a Sample VNF test case ready to be executed in an
OpenStack environment with SR-IOV support: ``samples/vnf_samples/nsut/vfw/
tc_heat_sriov_external_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``.

Install yardstick using the `Install Yardstick (NSB Testing)`_ steps for the
OpenStack context.

Create a pod file for the TG in the yardstick repo folder located in the
yardstick container:

.. note:: The ``ip``, ``user``, ``password`` and ``vpci`` fields should be
  changed according to the HW environment used for the testing. Use the
  ``lshw -c network -businfo`` command to get the PF PCI address for the
  ``vpci`` field.

.. literalinclude:: code/single-yardstick-pod.conf
   :language: console

Run the Sample vFW RFC2544 SR-IOV TC (``samples/vnf_samples/nsut/vfw/
tc_heat_sriov_external_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``) in the heat
context using the steps described in the `NS testing - using yardstick CLI`_
section.


Multi node OpenStack TG and VNF setup (two nodes)
-------------------------------------------------

.. code-block:: console

  +----------------------------+                   +----------------------------+
  |OpenStack(DevStack)         |                   |OpenStack(DevStack)         |
  |                            |                   |                            |
  |   +--------------------+   |                   |   +--------------------+   |
  |   |sample-VNF VM       |   |                   |   |sample-VNF VM       |   |
  |   |                    |   |                   |   |                    |   |
  |   |         TG         |   |                   |   |        DUT         |   |
  |   |    trafficgen_1    |   |                   |   |       (VNF)        |   |
  |   |                    |   |                   |   |                    |   |
  |   +--------+  +--------+   |                   |   +--------+  +--------+   |
  |   | VF NIC |  | VF NIC |   |                   |   | VF NIC |  | VF NIC |   |
  |   +----+---+--+----+---+   |                   |   +-----+--+--+----+---+   |
  |        ^           ^       |                   |         ^          ^       |
  |        |           |       |                   |         |          |       |
  +--------+-----------+-------+                   +---------+----------+-------+
  |       VF0         VF1      |                   |        VF0        VF1      |
  |        ^           ^       |                   |         ^          ^       |
  |        |    SUT2   |       |                   |         |   SUT1   |       |
  |        |           +-------+ (PF0)<----->(PF0) +---------+          |       |
  |        |                   |                   |                    |       |
  |        +-------------------+ (PF1)<----->(PF1) +--------------------+       |
  |                            |                   |                            |
  +----------------------------+                   +----------------------------+
           host2 (compute)                               host1 (controller)


Controller/Compute pre-configuration
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Pre-configuration of the controller and compute hosts is the same as
described in the `Host pre-configuration`_ section. Follow the steps in that
section.


DevStack configuration
^^^^^^^^^^^^^^^^^^^^^^

Use the official `Devstack <https://docs.openstack.org/devstack/pike/>`_
documentation to install OpenStack on the hosts. Please note that the stable
``pike`` branch of the devstack repo should be used during the installation.
The required ``local.conf`` configuration files are described below.

.. note:: Update the devstack configuration files by replacing angle brackets
  with a short description inside.

.. note:: Use ``lspci | grep Ether`` & ``lspci -n | grep <PCI ADDRESS>``
  commands to get the device and vendor id of the virtual function (VF).

DevStack configuration file for controller host:

.. literalinclude:: code/multi-devstack-controller-local.conf
   :language: console

DevStack configuration file for compute host:

.. literalinclude:: code/multi-devstack-compute-local.conf
   :language: console

Start the devstack installation on the controller and compute hosts.


Run the sample vFW TC
^^^^^^^^^^^^^^^^^^^^^

Install yardstick using the `Install Yardstick (NSB Testing)`_ steps for the
OpenStack context.

Run the sample vFW RFC2544 SR-IOV TC (``samples/vnf_samples/nsut/vfw/
tc_heat_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``) in the heat context using
the steps described in the `NS testing - using yardstick CLI`_ section and
the following yardstick command line arguments:

.. code:: bash

  yardstick -d task start --task-args='{"provider": "sriov"}' \
  samples/vnf_samples/nsut/vfw/tc_heat_rfc2544_ipv4_1rule_1flow_64B_trex.yaml


Enabling other Traffic generators
=================================

IxLoad
------

1. Software needed: IxLoadAPI ``<IxLoadTclApi version>Linux64.bin.tgz`` and
   ``<IxOS version>Linux64.bin.tar.gz`` (Download from ixia support site)
   Install - ``<IxLoadTclApi version>Linux64.bin.tgz`` and
   ``<IxOS version>Linux64.bin.tar.gz``
   If the installation was not done inside the container, after installing
   the IXIA client, check ``/opt/ixia/ixload/<ver>/bin/ixloadpython`` and make
   sure you can run this cmd inside the yardstick container. Usually the user
   is required to copy or link ``/opt/ixia/python/<ver>/bin/ixiapython`` to
   ``/usr/bin/ixiapython<ver>`` inside the container, as sketched below.

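   A minimal sketch of that link step (the ``<ver>`` placeholders stand for
   the actually installed versions and are left as placeholders here):

   .. code-block:: console

     ln -s /opt/ixia/python/<ver>/bin/ixiapython /usr/bin/ixiapython<ver>
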
2. Update the ``pod_ixia.yaml`` file with the ixia details.

  .. code-block:: console

    cp <repo>/etc/yardstick/nodes/pod.yaml.nsb.sample.ixia etc/yardstick/nodes/pod_ixia.yaml

  Config ``pod_ixia.yaml``

  .. code-block:: yaml

      nodes:
          -
            name: trafficgen_1
            role: IxNet
            ip: 1.2.1.1 # ixia machine ip
            user: user
            password: r00t
            key_filename: /root/.ssh/id_rsa
            tg_config:
                ixchassis: "1.2.1.7" # ixia chassis ip
                tcl_port: "8009" # tcl server port
                lib_path: "/opt/ixia/ixos-api/8.01.0.2/lib/ixTcl1.0"
                root_dir: "/opt/ixia/ixos-api/8.01.0.2/"
                py_bin_path: "/opt/ixia/ixload/8.01.106.3/bin/"
                py_lib_path: "/opt/ixia/ixnetwork/8.01.1029.14/lib/PythonApi"
                dut_result_dir: "/mnt/ixia"
                version: 8.1
            interfaces:
                xe0:  # logical name from topology.yaml and vnfd.yaml
                    vpci: "2:5" # Card:port
                    driver:    "none"
                    dpdk_port_num: 0
                    local_ip: "152.16.100.20"
                    netmask:   "255.255.0.0"
                    local_mac: "00:98:10:64:14:00"
                xe1:  # logical name from topology.yaml and vnfd.yaml
                    vpci: "2:6" # [(Card, port)]
                    driver:    "none"
                    dpdk_port_num: 1
                    local_ip: "152.40.40.20"
                    netmask:   "255.255.0.0"
                    local_mac: "00:98:28:28:14:00"

  For SR-IOV/OVS-DPDK pod files, please refer to the Standalone Virtualization
  section above for the OVS-DPDK/SR-IOV configuration.

3. Start the IxOS TCL Server (install 'Ixia IxExplorer IxOS <version>').
   You will also need to configure the IxLoad machine to start the IXIA
   IxosTclServer. This can be started like so:

   * Connect to the IxLoad machine using RDP
   * Go to:
     ``Start->Programs->Ixia->IxOS->IxOS 8.01-GA-Patch1->Ixia Tcl Server IxOS 8.01-GA-Patch1``
     or
     ``"C:\Program Files (x86)\Ixia\IxOS\8.01-GA-Patch1\ixTclServer.exe"``

4. Create a folder ``Results`` in ``C:\`` and share the folder on the network.

5. Execute the testcase in the samplevnf folder, e.g.
   ``<repo>/samples/vnf_samples/nsut/vfw/tc_baremetal_http_ixload_1b_Requests-65000_Concurrency.yaml``

IxNetwork
---------

1. Software needed: ``IxNetworkAPI<ixnetwork version>Linux64.bin.tgz``
   (Download from ixia support site)
   Install - ``IxNetworkAPI<ixnetwork version>Linux64.bin.tgz``
2. Update the ``pod_ixia.yaml`` file with the ixia details.

  .. code-block:: console

    cp <repo>/etc/yardstick/nodes/pod.yaml.nsb.sample.ixia etc/yardstick/nodes/pod_ixia.yaml

  Config ``pod_ixia.yaml``

  .. code-block:: yaml

      nodes:
          -
            name: trafficgen_1
            role: IxNet
            ip: 1.2.1.1 # ixia machine ip
            user: user
            password: r00t
            key_filename: /root/.ssh/id_rsa
            tg_config:
                ixchassis: "1.2.1.7" # ixia chassis ip
                tcl_port: "8009" # tcl server port
                lib_path: "/opt/ixia/ixos-api/8.01.0.2/lib/ixTcl1.0"
                root_dir: "/opt/ixia/ixos-api/8.01.0.2/"
                py_bin_path: "/opt/ixia/ixload/8.01.106.3/bin/"
                py_lib_path: "/opt/ixia/ixnetwork/8.01.1029.14/lib/PythonApi"
                dut_result_dir: "/mnt/ixia"
                version: 8.1
            interfaces:
                xe0:  # logical name from topology.yaml and vnfd.yaml
                    vpci: "2:5" # Card:port
                    driver:    "none"
                    dpdk_port_num: 0
                    local_ip: "152.16.100.20"
                    netmask:   "255.255.0.0"
                    local_mac: "00:98:10:64:14:00"
                xe1:  # logical name from topology.yaml and vnfd.yaml
                    vpci: "2:6" # [(Card, port)]
                    driver:    "none"
                    dpdk_port_num: 1
                    local_ip: "152.40.40.20"
                    netmask:   "255.255.0.0"
                    local_mac: "00:98:28:28:14:00"

  For SR-IOV/OVS-DPDK pod files, please refer to the Standalone Virtualization
  section above for the OVS-DPDK/SR-IOV configuration.

3. Start the IxNetwork TCL Server.
   You will also need to configure the IxNetwork machine to start the IXIA
   IxNetworkTclServer. This can be started like so:

    * Connect to the IxNetwork machine using RDP
    * Go to:
      ``Start->Programs->Ixia->IxNetwork->IxNetwork 7.21.893.14 GA->IxNetworkTclServer``
      (or ``IxNetworkApiServer``)

4. Execute the testcase in the samplevnf folder, e.g.
   ``<repo>/samples/vnf_samples/nsut/vfw/tc_baremetal_rfc2544_ipv4_1rule_1flow_64B_ixia.yaml``
1166