Merge "Replace neutron floating ip deletion with shade."
[yardstick.git] / docs / testing / user / userguide / 12-nsb_installation.rst
.. This work is licensed under a Creative Commons Attribution 4.0 International
.. License.
.. http://creativecommons.org/licenses/by/4.0
.. (c) OPNFV, 2016-2017 Intel Corporation.

Yardstick - NSB Testing - Installation
======================================

Abstract
--------

The Network Service Benchmarking (NSB) extends the Yardstick framework to do
VNF characterization and benchmarking in three different execution
environments: bare metal (i.e. native Linux environment), standalone virtual
environment and managed virtualized environment (e.g. OpenStack). It also
brings in the capability to interact with external traffic generators, both
hardware and software based, for triggering and validating the traffic
according to user defined profiles.

The steps needed to run Yardstick with NSB testing are:

* Install Yardstick (NSB Testing).
* Setup/reference ``pod.yaml`` describing the test topology.
* Create/reference the test configuration yaml file.
* Run the test case.


Prerequisites
-------------

Refer to the Yardstick Installation chapter for more information on Yardstick
prerequisites.

Several prerequisites are needed for Yardstick (VNF testing); a hedged
installation sketch follows this list:

  - Python Modules: pyzmq, pika.

  - flex

  - bison

  - build-essential

  - automake

  - libtool

  - librabbitmq-dev

  - rabbitmq-server

  - collectd

  - intel-cmt-cat
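
A minimal installation sketch for the list above, assuming an Ubuntu 16.04
host (the package names are assumptions for this distribution;
``intel-cmt-cat`` may have to be built from source if no distribution package
is available):

.. code-block:: console

  # Distribution packages (assumed Ubuntu 16.04 package names)
  sudo apt-get update
  sudo apt-get install -y flex bison build-essential automake libtool \
      librabbitmq-dev rabbitmq-server collectd

  # Python modules
  pip install pyzmq pika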

Hardware & Software Ingredients
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

SUT requirements:

   +-----------+--------------------+
   | Item      | Description        |
   +===========+====================+
   | Memory    | Min 20GB           |
   +-----------+--------------------+
   | NICs      | 2 x 10G            |
   +-----------+--------------------+
   | OS        | Ubuntu 16.04.3 LTS |
   +-----------+--------------------+
   | Kernel    | 4.4.0-34-generic   |
   +-----------+--------------------+
   | DPDK      | 17.02              |
   +-----------+--------------------+

Boot and BIOS settings:

   +------------------+---------------------------------------------------+
   | Boot settings    | default_hugepagesz=1G hugepagesz=1G hugepages=16  |
   |                  | hugepagesz=2M hugepages=2048 isolcpus=1-11,22-33  |
   |                  | nohz_full=1-11,22-33 rcu_nocbs=1-11,22-33         |
   |                  | iommu=on iommu=pt intel_iommu=on                  |
   |                  | Note: nohz_full and rcu_nocbs are used to disable |
   |                  | Linux kernel interrupts on the isolated CPUs      |
   +------------------+---------------------------------------------------+
   | BIOS             | CPU Power and Performance Policy <Performance>    |
   |                  | CPU C-state Disabled                              |
   |                  | CPU P-state Disabled                              |
   |                  | Enhanced Intel® SpeedStep® Tech Disabled          |
   |                  | Hyper-Threading Technology (If supported) Enabled |
   |                  | Virtualization Technology Enabled                 |
   |                  | Intel(R) VT for Direct I/O Enabled                |
   |                  | Coherency Enabled                                 |
   |                  | Turbo Boost Disabled                              |
   +------------------+---------------------------------------------------+
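
A quick, hedged way to verify these settings after reboot (standard Linux
interfaces only, nothing Yardstick-specific assumed):

.. code-block:: console

  # Kernel command line should contain the boot parameters above
  cat /proc/cmdline

  # Hugepage allocation should match the configured values
  grep Huge /proc/meminfo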

Install Yardstick (NSB Testing)
-------------------------------

Download the source code and install Yardstick from it:

.. code-block:: console

  git clone https://gerrit.opnfv.org/gerrit/yardstick

  cd yardstick

  # Switch to latest stable branch
  # git checkout <tag or stable branch>
  git checkout stable/euphrates

Configure the network proxy, either using the environment variables or setting
the global environment file:

.. code-block:: ini

    cat /etc/environment
    http_proxy='http://proxy.company.com:port'
    https_proxy='http://proxy.company.com:port'

.. code-block:: console

    export http_proxy='http://proxy.company.com:port'
    export https_proxy='http://proxy.company.com:port'

The last step is to modify the Yardstick installation inventory, used by
Ansible:

.. code-block:: ini

  cat ./ansible/yardstick-install-inventory.ini
  [jumphost]
  localhost  ansible_connection=local

  [yardstick-standalone]
  yardstick-standalone-node ansible_host=192.168.1.2
  yardstick-standalone-node-2 ansible_host=192.168.1.3

  # The section below exists only for backward compatibility.
  # It will be removed later.
  [yardstick:children]
  jumphost

  [all:vars]
  ansible_user=root
  ansible_pass=root


To execute an installation for a Bare-Metal or a Standalone context:

.. code-block:: console

    ./nsb_setup.sh


To execute an installation for an OpenStack context:

.. code-block:: console

    ./nsb_setup.sh <path to admin-openrc.sh>

The above commands set up a Docker container with the latest Yardstick code.
To enter the container:

.. code-block:: console

  docker exec -it yardstick bash

The setup also automatically downloads all the packages needed for NSB
testing. Refer to the **Install Yardstick using Docker (recommended)** section
of chapter :doc:`04-installation` for more on Docker.

System Topology
---------------

.. code-block:: console

  +----------+              +----------+
  |          |              |          |
  |          | (0)----->(0) |          |
  |    TG1   |              |    DUT   |
  |          |              |          |
  |          | (1)<-----(1) |          |
  +----------+              +----------+
  trafficgen_1                   vnf

Environment parameters and credentials
--------------------------------------

Config yardstick conf
^^^^^^^^^^^^^^^^^^^^^

If you did not run ``yardstick env influxdb`` inside the container (which
generates a correct ``yardstick.conf``), create the config file manually (run
inside the container)::

    cp ./etc/yardstick/yardstick.conf.sample /etc/yardstick/yardstick.conf
    vi /etc/yardstick/yardstick.conf

Add ``trex_path``, ``trex_client_lib`` and ``bin_path`` to the ``nsb``
section.

::

  [DEFAULT]
  debug = True
  dispatcher = file, influxdb

  [dispatcher_influxdb]
  timeout = 5
  target = http://{YOUR_IP_HERE}:8086
  db_name = yardstick
  username = root
  password = root

  [nsb]
  trex_path=/opt/nsb_bin/trex/scripts
  bin_path=/opt/nsb_bin
  trex_client_lib=/opt/nsb_bin/trex_client/stl

Run Yardstick - Network Service Testcases
-----------------------------------------


NS testing - using yardstick CLI
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  See :doc:`04-installation`

.. code-block:: console

  docker exec -it yardstick /bin/bash
  source /etc/yardstick/openstack.creds (only for heat TC if nsb_setup.sh was NOT used)
  export EXTERNAL_NETWORK="<openstack public network>" (only for heat TC)
  yardstick --debug task start yardstick/samples/vnf_samples/nsut/<vnf>/<test case>

Network Service Benchmarking - Bare-Metal
-----------------------------------------

Bare-Metal Config pod.yaml describing Topology
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Bare-Metal 2-Node setup
#######################
.. code-block:: console

  +----------+              +----------+
  |          |              |          |
  |          | (0)----->(0) |          |
  |    TG1   |              |    DUT   |
  |          |              |          |
  |          | (n)<-----(n) |          |
  +----------+              +----------+
  trafficgen_1                   vnf

Bare-Metal 3-Node setup - Correlated Traffic
############################################
.. code-block:: console

  +----------+              +----------+            +------------+
  |          |              |          |            |            |
  |          |              |          |            |            |
  |          | (0)----->(0) |          |            |    UDP     |
  |    TG1   |              |    DUT   |            |   Replay   |
  |          |              |          |            |            |
  |          |              |          |(1)<---->(0)|            |
  +----------+              +----------+            +------------+
  trafficgen_1                   vnf                 trafficgen_2

Bare-Metal Config pod.yaml
^^^^^^^^^^^^^^^^^^^^^^^^^^
Before executing Yardstick test cases, make sure that ``pod.yaml`` reflects
the topology and update all the required fields::

    cp /etc/yardstick/nodes/pod.yaml.nsb.sample /etc/yardstick/nodes/pod.yaml

.. code-block:: YAML

    nodes:
    -
        name: trafficgen_1
        role: TrafficGen
        ip: 1.1.1.1
        user: root
        password: r00t
        interfaces:
            xe0:  # logical name from topology.yaml and vnfd.yaml
                vpci:      "0000:07:00.0"
                driver:    i40e # default kernel driver
                dpdk_port_num: 0
                local_ip: "152.16.100.20"
                netmask:   "255.255.255.0"
                local_mac: "00:00:00:00:00:01"
            xe1:  # logical name from topology.yaml and vnfd.yaml
                vpci:      "0000:07:00.1"
                driver:    i40e # default kernel driver
                dpdk_port_num: 1
                local_ip: "152.16.40.20"
                netmask:   "255.255.255.0"
                local_mac: "00:00:00:00:00:02"

    -
        name: vnf
        role: vnf
        ip: 1.1.1.2
        user: root
        password: r00t
        host: 1.1.1.2 # BM - host == ip; virtualized env - host == compute node
        interfaces:
            xe0:  # logical name from topology.yaml and vnfd.yaml
                vpci:      "0000:07:00.0"
                driver:    i40e # default kernel driver
                dpdk_port_num: 0
                local_ip: "152.16.100.19"
                netmask:   "255.255.255.0"
                local_mac: "00:00:00:00:00:03"

            xe1:  # logical name from topology.yaml and vnfd.yaml
                vpci:      "0000:07:00.1"
                driver:    i40e # default kernel driver
                dpdk_port_num: 1
                local_ip: "152.16.40.19"
                netmask:   "255.255.255.0"
                local_mac: "00:00:00:00:00:04"
        routing_table:
        - network: "152.16.100.20"
          netmask: "255.255.255.0"
          gateway: "152.16.100.20"
          if: "xe0"
        - network: "152.16.40.20"
          netmask: "255.255.255.0"
          gateway: "152.16.40.20"
          if: "xe1"
        nd_route_tbl:
        - network: "0064:ff9b:0:0:0:0:9810:6414"
          netmask: "112"
          gateway: "0064:ff9b:0:0:0:0:9810:6414"
          if: "xe0"
        - network: "0064:ff9b:0:0:0:0:9810:2814"
          netmask: "112"
          gateway: "0064:ff9b:0:0:0:0:9810:2814"
          if: "xe1"
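
Before running a test case, it can help to confirm that the nodes listed in
``pod.yaml`` are reachable over SSH and that the configured PCI devices exist.
A hedged sketch using the sample addresses above:

.. code-block:: console

  # From the yardstick host/container: check SSH reachability of TG and VNF
  ssh root@1.1.1.1 'hostname'
  ssh root@1.1.1.2 'hostname'

  # On a node: confirm the NICs behind the configured vpci values exist
  ssh root@1.1.1.2 'lspci | grep -i ethernet'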

Network Service Benchmarking - Standalone Virtualization
---------------------------------------------------------

SR-IOV:
^^^^^^^

SR-IOV Pre-requisites
#####################

On Host:
 a) Create a bridge for the VM to connect to the external network

  .. code-block:: console

      brctl addbr br-int
      brctl addif br-int <interface_name>    # This interface is connected to the internet

 b) Build the guest image for the VNF to run.
    Most of the sample test cases in Yardstick use a guest image called
    ``yardstick-image``, which deviates from an Ubuntu Cloud Server image.
    Yardstick has a tool for building this custom image with SampleVNF.
    It is necessary to have ``sudo`` rights to use this tool.

    You may also need to install several additional packages to use this
    tool, by following the commands below::

       sudo apt-get update && sudo apt-get install -y qemu-utils kpartx

    Prepare the build environment with the following commands in the
    directory where Yardstick is installed:

    .. code-block:: console

       export YARD_IMG_ARCH='amd64'
       # append via sudo tee so the redirect is done with root rights
       echo 'Defaults env_keep += "YARD_IMG_ARCH"' | sudo tee -a /etc/sudoers

    Please use the ansible script to generate the cloud image; refer to
    chapter :doc:`04-installation` for more details.

    .. note:: The VM should be built with a static IP and should be
       accessible from the yardstick host.

SR-IOV Config pod.yaml describing Topology
##########################################

SR-IOV 2-Node setup
###################
.. code-block:: console

                               +--------------------+
                               |                    |
                               |                    |
                               |        DUT         |
                               |       (VNF)        |
                               |                    |
                               +--------------------+
                               | VF NIC |  | VF NIC |
                               +--------+  +--------+
                                     ^          ^
                                     |          |
                                     |          |
  +----------+               +-------------------------+
  |          |               |       ^          ^      |
  |          |               |       |          |      |
  |          | (0)<----->(0) | ------           |      |
  |    TG1   |               |           SUT    |      |
  |          |               |                  |      |
  |          | (n)<----->(n) |------------------       |
  +----------+               +-------------------------+
  trafficgen_1                          host


SR-IOV 3-Node setup - Correlated Traffic
########################################
.. code-block:: console

                               +--------------------+
                               |                    |
                               |                    |
                               |        DUT         |
                               |       (VNF)        |
                               |                    |
                               +--------------------+
                               | VF NIC |  | VF NIC |
                               +--------+  +--------+
                                     ^          ^
                                     |          |
                                     |          |
  +----------+               +-------------------------+            +--------------+
  |          |               |       ^          ^      |            |              |
  |          |               |       |          |      |            |              |
  |          | (0)<----->(0) | ------           |      |            |     TG2      |
  |    TG1   |               |           SUT    |      |            | (UDP Replay) |
  |          |               |                  |      |            |              |
  |          | (n)<----->(n) |                  ------ | (n)<-->(n) |              |
  +----------+               +-------------------------+            +--------------+
  trafficgen_1                          host                       trafficgen_2

Before executing Yardstick test cases, make sure that ``pod.yaml`` reflects
the topology and update all the required fields.

.. code-block:: console

    cp <yardstick>/etc/yardstick/nodes/standalone/trex_bm.yaml.sample /etc/yardstick/nodes/standalone/pod_trex.yaml
    cp <yardstick>/etc/yardstick/nodes/standalone/host_sriov.yaml /etc/yardstick/nodes/standalone/host_sriov.yaml

.. note:: Update all the required fields like ip, user, password, pcis, etc.

SR-IOV Config pod_trex.yaml
###########################

.. code-block:: YAML

    nodes:
    -
        name: trafficgen_1
        role: TrafficGen
        ip: 1.1.1.1
        user: root
        password: r00t
        key_filename: /root/.ssh/id_rsa
        interfaces:
            xe0:  # logical name from topology.yaml and vnfd.yaml
                vpci:      "0000:07:00.0"
                driver:    i40e # default kernel driver
                dpdk_port_num: 0
                local_ip: "152.16.100.20"
                netmask:   "255.255.255.0"
                local_mac: "00:00:00:00:00:01"
            xe1:  # logical name from topology.yaml and vnfd.yaml
                vpci:      "0000:07:00.1"
                driver:    i40e # default kernel driver
                dpdk_port_num: 1
                local_ip: "152.16.40.20"
                netmask:   "255.255.255.0"
                local_mac: "00:00:00:00:00:02"

SR-IOV Config host_sriov.yaml
#############################

.. code-block:: YAML

    nodes:
    -
       name: sriov
       role: Sriov
       ip: 192.168.100.101
       user: ""
       password: ""

SR-IOV testcase update: ``<yardstick>/samples/vnf_samples/nsut/vfw/tc_sriov_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``

Update "contexts" section
"""""""""""""""""""""""""

.. code-block:: YAML

  contexts:
   - name: yardstick
     type: Node
     file: /etc/yardstick/nodes/standalone/pod_trex.yaml
   - type: StandaloneSriov
     file: /etc/yardstick/nodes/standalone/host_sriov.yaml
     name: yardstick
     vm_deploy: True
     flavor:
       images: "/var/lib/libvirt/images/ubuntu.qcow2"
       ram: 4096
       extra_specs:
         hw:cpu_sockets: 1
         hw:cpu_cores: 6
         hw:cpu_threads: 2
       user: "" # update VM username
       password: "" # update password
     servers:
       vnf:
         network_ports:
           mgmt:
             cidr: '1.1.1.61/24'  # Update VM IP address; if static, <ip>/<mask>; if dynamic, <start of ip>/<mask>
           xe0:
             - uplink_0
           xe1:
             - downlink_0
     networks:
       uplink_0:
         phy_port: "0000:05:00.0"
         vpci: "0000:00:07.0"
         cidr: '152.16.100.10/24'
         gateway_ip: '152.16.100.20'
       downlink_0:
         phy_port: "0000:05:00.1"
         vpci: "0000:00:08.0"
         cidr: '152.16.40.10/24'
         gateway_ip: '152.16.100.20'

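With the context updated, the test case can be launched from inside the
yardstick container, following the `NS testing - using yardstick CLI`_
pattern (a hedged example using the testcase path above):

.. code-block:: console

  yardstick --debug task start <yardstick>/samples/vnf_samples/nsut/vfw/tc_sriov_rfc2544_ipv4_1rule_1flow_64B_trex.yaml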

OVS-DPDK:
^^^^^^^^^

OVS-DPDK Pre-requisites
#######################

On Host:
 a) Create a bridge for the VM to connect to the external network

  .. code-block:: console

      brctl addbr br-int
      brctl addif br-int <interface_name>    # This interface is connected to the internet

 b) Build the guest image for the VNF to run.
    Most of the sample test cases in Yardstick use a guest image called
    ``yardstick-image``, which deviates from an Ubuntu Cloud Server image.
    Yardstick has a tool for building this custom image with SampleVNF.
    It is necessary to have ``sudo`` rights to use this tool.

    You may also need to install several additional packages to use this
    tool, by following the commands below::

       sudo apt-get update && sudo apt-get install -y qemu-utils kpartx

    This image can be built using the following commands in the directory
    where Yardstick is installed::

       export YARD_IMG_ARCH='amd64'
       # append via sudo tee so the redirect is done with root rights
       echo 'Defaults env_keep += "YARD_IMG_ARCH"' | sudo tee -a /etc/sudoers
       sudo tools/yardstick-img-dpdk-modify tools/ubuntu-server-cloudimg-samplevnf-modify.sh

    For more details refer to chapter :doc:`04-installation`.

    .. note:: The VM should be built with a static IP and should be
       accessible from the yardstick host.

 c) OVS & DPDK version:
     - OVS 2.7 and DPDK 16.11.1 or above are supported

 d) Setup OVS/DPDK on the host.
     Please refer to the following link on how to set up
     `OVS-DPDK <http://docs.openvswitch.org/en/latest/intro/install/dpdk/>`_.

OVS-DPDK Config pod.yaml describing Topology
############################################

OVS-DPDK 2-Node setup
#####################

.. code-block:: console

                               +--------------------+
                               |                    |
                               |                    |
                               |        DUT         |
                               |       (VNF)        |
                               |                    |
                               +--------------------+
                               | virtio |  | virtio |
                               +--------+  +--------+
                                    ^          ^
                                    |          |
                                    |          |
                               +--------+  +--------+
                               | vHOST0 |  | vHOST1 |
  +----------+               +-------------------------+
  |          |               |       ^          ^      |
  |          |               |       |          |      |
  |          | (0)<----->(0) | ------           |      |
  |    TG1   |               |          SUT     |      |
  |          |               |       (ovs-dpdk) |      |
  |          | (n)<----->(n) |------------------       |
  +----------+               +-------------------------+
  trafficgen_1                          host


OVS-DPDK 3-Node setup - Correlated Traffic
##########################################

.. code-block:: console

                               +--------------------+
                               |                    |
                               |                    |
                               |        DUT         |
                               |       (VNF)        |
                               |                    |
                               +--------------------+
                               | virtio |  | virtio |
                               +--------+  +--------+
                                    ^          ^
                                    |          |
                                    |          |
                               +--------+  +--------+
                               | vHOST0 |  | vHOST1 |
  +----------+               +-------------------------+          +------------+
  |          |               |       ^          ^      |          |            |
  |          |               |       |          |      |          |            |
  |          | (0)<----->(0) | ------           |      |          |    TG2     |
  |    TG1   |               |          SUT     |      |          |(UDP Replay)|
  |          |               |      (ovs-dpdk)  |      |          |            |
  |          | (n)<----->(n) |                  ------ |(n)<-->(n)|            |
  +----------+               +-------------------------+          +------------+
  trafficgen_1                          host                       trafficgen_2


Before executing Yardstick test cases, make sure that ``pod.yaml`` reflects
the topology and update all the required fields.

.. code-block:: console

  cp <yardstick>/etc/yardstick/nodes/standalone/trex_bm.yaml.sample /etc/yardstick/nodes/standalone/pod_trex.yaml
  cp <yardstick>/etc/yardstick/nodes/standalone/host_ovs.yaml /etc/yardstick/nodes/standalone/host_ovs.yaml

.. note:: Update all the required fields like ip, user, password, pcis, etc.

OVS-DPDK Config pod_trex.yaml
#############################

.. code-block:: YAML

    nodes:
    -
      name: trafficgen_1
      role: TrafficGen
      ip: 1.1.1.1
      user: root
      password: r00t
      interfaces:
          xe0:  # logical name from topology.yaml and vnfd.yaml
              vpci:      "0000:07:00.0"
              driver:    i40e # default kernel driver
              dpdk_port_num: 0
              local_ip: "152.16.100.20"
              netmask:   "255.255.255.0"
              local_mac: "00:00:00:00:00:01"
          xe1:  # logical name from topology.yaml and vnfd.yaml
              vpci:      "0000:07:00.1"
              driver:    i40e # default kernel driver
              dpdk_port_num: 1
              local_ip: "152.16.40.20"
              netmask:   "255.255.255.0"
              local_mac: "00:00:00:00:00:02"

OVS-DPDK Config host_ovs.yaml
#############################

.. code-block:: YAML

    nodes:
    -
       name: ovs_dpdk
       role: OvsDpdk
       ip: 192.168.100.101
       user: ""
       password: ""

ovs_dpdk testcase update: ``<yardstick>/samples/vnf_samples/nsut/vfw/tc_ovs_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``

Update "contexts" section
"""""""""""""""""""""""""

.. code-block:: YAML

  contexts:
   - name: yardstick
     type: Node
     file: /etc/yardstick/nodes/standalone/pod_trex.yaml
   - type: StandaloneOvsDpdk
     name: yardstick
     file: /etc/yardstick/nodes/standalone/host_ovs.yaml
     vm_deploy: True
     ovs_properties:
       version:
         ovs: 2.7.0
         dpdk: 16.11.1
       pmd_threads: 2
       ram:
         socket_0: 2048
         socket_1: 2048
       queues: 4
       vpath: "/usr/local"

     flavor:
       images: "/var/lib/libvirt/images/ubuntu.qcow2"
       ram: 4096
       extra_specs:
         hw:cpu_sockets: 1
         hw:cpu_cores: 6
         hw:cpu_threads: 2
       user: "" # update VM username
       password: "" # update password
     servers:
       vnf:
         network_ports:
           mgmt:
             cidr: '1.1.1.61/24'  # Update VM IP address; if static, <ip>/<mask>; if dynamic, <start of ip>/<mask>
           xe0:
             - uplink_0
           xe1:
             - downlink_0
     networks:
       uplink_0:
         phy_port: "0000:05:00.0"
         vpci: "0000:00:07.0"
         cidr: '152.16.100.10/24'
         gateway_ip: '152.16.100.20'
       downlink_0:
         phy_port: "0000:05:00.1"
         vpci: "0000:00:08.0"
         cidr: '152.16.40.10/24'
         gateway_ip: '152.16.100.20'
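
As in the SR-IOV case, the updated test case can then be run from inside the
yardstick container (a hedged example using the testcase path above):

.. code-block:: console

  yardstick --debug task start <yardstick>/samples/vnf_samples/nsut/vfw/tc_ovs_rfc2544_ipv4_1rule_1flow_64B_trex.yaml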


Network Service Benchmarking - OpenStack with SR-IOV support
------------------------------------------------------------

This section describes how to run a Sample VNF test case, using Heat context,
with SR-IOV. It also covers how to install OpenStack in Ubuntu 16.04, using
DevStack, with SR-IOV support.


Single node OpenStack setup with external TG
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. code-block:: console

                                 +----------------------------+
                                 |OpenStack(DevStack)         |
                                 |                            |
                                 |   +--------------------+   |
                                 |   |sample-VNF VM       |   |
                                 |   |                    |   |
                                 |   |        DUT         |   |
                                 |   |       (VNF)        |   |
                                 |   |                    |   |
                                 |   +--------+  +--------+   |
                                 |   | VF NIC |  | VF NIC |   |
                                 |   +-----+--+--+----+---+   |
                                 |         ^          ^       |
                                 |         |          |       |
  +----------+                   +---------+----------+-------+
  |          |                   |        VF0        VF1      |
  |          |                   |         ^          ^       |
  |          |                   |         |   SUT    |       |
  |    TG    | (PF0)<----->(PF0) +---------+          |       |
  |          |                   |                    |       |
  |          | (PF1)<----->(PF1) +--------------------+       |
  |          |                   |                            |
  +----------+                   +----------------------------+
  trafficgen_1                                 host

Host pre-configuration
######################

.. warning:: The following configuration requires sudo access to the system.
  Make sure that your user has the access.

Enable the Intel VT-d or AMD-Vi extension in the BIOS. Some system
manufacturers disable this extension by default.

Activate the Intel VT-d or AMD-Vi extension in the kernel by modifying the
GRUB config file ``/etc/default/grub``.

For the Intel platform:

.. code:: bash

  ...
  GRUB_CMDLINE_LINUX_DEFAULT="intel_iommu=on"
  ...

For the AMD platform:

.. code:: bash

  ...
  GRUB_CMDLINE_LINUX_DEFAULT="amd_iommu=on"
  ...

Update the grub configuration file and restart the system:

.. warning:: The following command will reboot the system.

.. code:: bash

  sudo update-grub
  sudo reboot

Make sure the extension has been enabled:

.. code:: bash

  sudo journalctl -b 0 | grep -e IOMMU -e DMAR

  Feb 06 14:50:14 hostname kernel: ACPI: DMAR 0x000000006C406000 0001E0 (v01 INTEL  S2600WF  00000001 INTL 20091013)
  Feb 06 14:50:14 hostname kernel: DMAR: IOMMU enabled
  Feb 06 14:50:14 hostname kernel: DMAR: Host address width 46
  Feb 06 14:50:14 hostname kernel: DMAR: DRHD base: 0x000000d37fc000 flags: 0x0
  Feb 06 14:50:14 hostname kernel: DMAR: dmar0: reg_base_addr d37fc000 ver 1:0 cap 8d2078c106f0466 ecap f020de
  Feb 06 14:50:14 hostname kernel: DMAR: DRHD base: 0x000000e0ffc000 flags: 0x0
  Feb 06 14:50:14 hostname kernel: DMAR: dmar1: reg_base_addr e0ffc000 ver 1:0 cap 8d2078c106f0466 ecap f020de
  Feb 06 14:50:14 hostname kernel: DMAR: DRHD base: 0x000000ee7fc000 flags: 0x0

Setup the system proxy (if needed). Add the following configuration into the
``/etc/environment`` file:

.. note:: The proxy server name/port and IPs should be changed according to
  the actual/current proxy configuration in the lab.

.. code:: bash

  export http_proxy=http://proxy.company.com:port
  export https_proxy=http://proxy.company.com:port
  export ftp_proxy=http://proxy.company.com:port
  export no_proxy=localhost,127.0.0.1,company.com,<IP-OF-HOST1>,<IP-OF-HOST2>,...
  export NO_PROXY=localhost,127.0.0.1,company.com,<IP-OF-HOST1>,<IP-OF-HOST2>,...

Upgrade the system:

.. code:: bash

  sudo -EH apt-get update
  sudo -EH apt-get upgrade
  sudo -EH apt-get dist-upgrade

Install dependencies needed for DevStack:

.. code:: bash

  sudo -EH apt-get install python
  sudo -EH apt-get install python-dev
  sudo -EH apt-get install python-pip

Setup SR-IOV ports on the host:

.. note:: The ``enp24s0f0`` and ``enp24s0f1`` interfaces are physical function
  (PF) interfaces on the host and ``enp24s0f3`` is a public interface used in
  OpenStack, so the interface names should be changed according to the HW
  environment used for testing.

.. code:: bash

  sudo ip link set dev enp24s0f0 up
  sudo ip link set dev enp24s0f1 up
  sudo ip link set dev enp24s0f3 up

  # Create VFs on PF
  echo 2 | sudo tee /sys/class/net/enp24s0f0/device/sriov_numvfs
  echo 2 | sudo tee /sys/class/net/enp24s0f1/device/sriov_numvfs

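To confirm that the VFs were created, a hedged check using standard Linux
tooling:

.. code:: bash

  # Each PF configured above should now expose two virtual functions
  lspci | grep "Virtual Function"
  ip link show enp24s0f0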

DevStack installation
#####################

Use the official `Devstack <https://docs.openstack.org/devstack/pike/>`_
documentation to install OpenStack on a host. Please note that the stable
``pike`` branch of the devstack repo should be used during the installation.
The required ``local.conf`` configuration file is described below.

DevStack configuration file:

.. note:: Update the devstack configuration file by replacing angular brackets
  with a short description inside.

.. note:: Use ``lspci | grep Ether`` & ``lspci -n | grep <PCI ADDRESS>``
  commands to get the device and vendor id of the virtual function (VF).

.. literalinclude:: code/single-devstack-local.conf
   :language: console

Start the devstack installation on the host.


TG host configuration
#####################

Yardstick automatically installs and configures the TRex traffic generator on
the TG host based on the provided POD file (see below). It is nevertheless
recommended to check the compatibility of the NIC installed in the TG server
with the TRex software, using the manual at
https://trex-tgn.cisco.com/trex/doc/trex_manual.html.
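
After installation, one hedged way to check that TRex can see the NICs is the
port setup helper shipped with TRex (the path assumes the ``trex_path`` value
from the ``yardstick.conf`` example earlier):

.. code:: bash

  cd /opt/nsb_bin/trex/scripts
  sudo ./dpdk_setup_ports.py --show    # list NICs and their DPDK/TRex status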


Run the Sample VNF test case
############################

There is an example of a Sample VNF test case ready to be executed in an
OpenStack environment with SR-IOV support: ``samples/vnf_samples/nsut/vfw/
tc_heat_sriov_external_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``.

Install yardstick using the `Install Yardstick (NSB Testing)`_ steps for the
OpenStack context.

Create the pod file for the TG in the yardstick repo folder located in the
yardstick container:

.. note:: The ``ip``, ``user``, ``password`` and ``vpci`` fields should be
  changed according to the HW environment used for the testing. Use the
  ``lshw -c network -businfo`` command to get the PF PCI address for the
  ``vpci`` field.

.. literalinclude:: code/single-yardstick-pod.conf
   :language: console

Run the Sample vFW RFC2544 SR-IOV TC (``samples/vnf_samples/nsut/vfw/
tc_heat_sriov_external_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``) in the heat
context using the steps described in the `NS testing - using yardstick CLI`_
section.

Multi node OpenStack TG and VNF setup (two nodes)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. code-block:: console

  +----------------------------+                   +----------------------------+
  |OpenStack(DevStack)         |                   |OpenStack(DevStack)         |
  |                            |                   |                            |
  |   +--------------------+   |                   |   +--------------------+   |
  |   |sample-VNF VM       |   |                   |   |sample-VNF VM       |   |
  |   |                    |   |                   |   |                    |   |
  |   |         TG         |   |                   |   |        DUT         |   |
  |   |    trafficgen_1    |   |                   |   |       (VNF)        |   |
  |   |                    |   |                   |   |                    |   |
  |   +--------+  +--------+   |                   |   +--------+  +--------+   |
  |   | VF NIC |  | VF NIC |   |                   |   | VF NIC |  | VF NIC |   |
  |   +----+---+--+----+---+   |                   |   +-----+--+--+----+---+   |
  |        ^           ^       |                   |         ^          ^       |
  |        |           |       |                   |         |          |       |
  +--------+-----------+-------+                   +---------+----------+-------+
  |       VF0         VF1      |                   |        VF0        VF1      |
  |        ^           ^       |                   |         ^          ^       |
  |        |    SUT2   |       |                   |         |   SUT1   |       |
  |        |           +-------+ (PF0)<----->(PF0) +---------+          |       |
  |        |                   |                   |                    |       |
  |        +-------------------+ (PF1)<----->(PF1) +--------------------+       |
  |                            |                   |                            |
  +----------------------------+                   +----------------------------+
           host2 (compute)                               host1 (controller)


Controller/Compute pre-configuration
####################################

The pre-configuration of the controller and compute hosts is the same as
described in the `Host pre-configuration`_ section. Follow the steps in that
section.


DevStack configuration
######################

Use the official `Devstack <https://docs.openstack.org/devstack/pike/>`_
documentation to install OpenStack on the hosts. Please note that the stable
``pike`` branch of the devstack repo should be used during the installation.
The required ``local.conf`` configuration files are described below.

.. note:: Update the devstack configuration files by replacing angular
  brackets with a short description inside.

.. note:: Use ``lspci | grep Ether`` & ``lspci -n | grep <PCI ADDRESS>``
  commands to get the device and vendor id of the virtual function (VF).

DevStack configuration file for the controller host:

.. literalinclude:: code/multi-devstack-controller-local.conf
   :language: console

DevStack configuration file for the compute host:

.. literalinclude:: code/multi-devstack-compute-local.conf
   :language: console

Start the devstack installation on the controller and compute hosts.


Run the sample vFW TC
#####################

Install yardstick using the `Install Yardstick (NSB Testing)`_ steps for the
OpenStack context.

Run the sample vFW RFC2544 SR-IOV TC (``samples/vnf_samples/nsut/vfw/
tc_heat_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``) in the heat context using
the steps described in the `NS testing - using yardstick CLI`_ section and
the following yardstick command line arguments:

.. code:: bash

  yardstick -d task start --task-args='{"provider": "sriov"}' \
  samples/vnf_samples/nsut/vfw/tc_heat_rfc2544_ipv4_1rule_1flow_64B_trex.yaml


Enabling other Traffic generators
---------------------------------

IxLoad:
^^^^^^^

1. Software needed: IxLoadAPI ``<IxLoadTclApi version>Linux64.bin.tgz`` and
   ``<IxOS version>Linux64.bin.tar.gz`` (download from the Ixia support site).
   Install ``<IxLoadTclApi version>Linux64.bin.tgz`` and
   ``<IxOS version>Linux64.bin.tar.gz``.
   If the installation was not done inside the container, then after
   installing the Ixia client, check ``/opt/ixia/ixload/<ver>/bin/ixloadpython``
   and make sure you can run this cmd inside the yardstick container. Usually
   the user is required to copy or link ``/opt/ixia/python/<ver>/bin/ixiapython``
   to ``/usr/bin/ixiapython<ver>`` inside the container.

2. Update the pod_ixia.yaml file with the Ixia details.

  .. code-block:: console

    cp <repo>/etc/yardstick/nodes/pod.yaml.nsb.sample.ixia etc/yardstick/nodes/pod_ixia.yaml

  Config pod_ixia.yaml

  .. code-block:: yaml

      nodes:
          -
            name: trafficgen_1
            role: IxNet
            ip: 1.2.1.1 # ixia machine ip
            user: user
            password: r00t
            key_filename: /root/.ssh/id_rsa
            tg_config:
                ixchassis: "1.2.1.7" # ixia chassis ip
                tcl_port: "8009" # tcl server port
                lib_path: "/opt/ixia/ixos-api/8.01.0.2/lib/ixTcl1.0"
                root_dir: "/opt/ixia/ixos-api/8.01.0.2/"
                py_bin_path: "/opt/ixia/ixload/8.01.106.3/bin/"
                py_lib_path: "/opt/ixia/ixnetwork/8.01.1029.14/lib/PythonApi"
                dut_result_dir: "/mnt/ixia"
                version: 8.1
            interfaces:
                xe0:  # logical name from topology.yaml and vnfd.yaml
                    vpci: "2:5" # Card:port
                    driver:    "none"
                    dpdk_port_num: 0
                    local_ip: "152.16.100.20"
                    netmask:   "255.255.0.0"
                    local_mac: "00:98:10:64:14:00"
                xe1:  # logical name from topology.yaml and vnfd.yaml
                    vpci: "2:6" # [(Card, port)]
                    driver:    "none"
                    dpdk_port_num: 1
                    local_ip: "152.40.40.20"
                    netmask:   "255.255.0.0"
                    local_mac: "00:98:28:28:14:00"

  For SR-IOV/OVS-DPDK pod files, please refer to the Standalone Virtualization
  section above for the ovs-dpdk/sriov configuration.

3. Start the IxOS TCL Server (install 'Ixia IxExplorer IxOS <version>').
   You will also need to configure the IxLoad machine to start the IXIA
   IxosTclServer. This can be started like so:

   - Connect to the IxLoad machine using RDP
   - Go to:
     ``Start->Programs->Ixia->IxOS->IxOS 8.01-GA-Patch1->Ixia Tcl Server IxOS 8.01-GA-Patch1``
     or
     ``"C:\Program Files (x86)\Ixia\IxOS\8.01-GA-Patch1\ixTclServer.exe"``

4. Create a folder ``Results`` in ``c:\`` and share the folder on the network.

5. Execute the test case in the samplevnf folder, e.g.
   ``<repo>/samples/vnf_samples/nsut/vfw/tc_baremetal_http_ixload_1b_Requests-65000_Concurrency.yaml``
   (see the sketch below).
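
A hedged example of launching this IxLoad test case from inside the yardstick
container, following the same CLI pattern shown earlier:

.. code-block:: console

  yardstick --debug task start <repo>/samples/vnf_samples/nsut/vfw/tc_baremetal_http_ixload_1b_Requests-65000_Concurrency.yaml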

IxNetwork:
^^^^^^^^^^

1. Software needed: ``IxNetworkAPI<ixnetwork version>Linux64.bin.tgz``
   (download from the Ixia support site).
   Install ``IxNetworkAPI<ixnetwork version>Linux64.bin.tgz``.
2. Update the pod_ixia.yaml file with the Ixia details.

  .. code-block:: console

    cp <repo>/etc/yardstick/nodes/pod.yaml.nsb.sample.ixia etc/yardstick/nodes/pod_ixia.yaml

  Config pod_ixia.yaml

  .. code-block:: yaml

      nodes:
          -
            name: trafficgen_1
            role: IxNet
            ip: 1.2.1.1 # ixia machine ip
            user: user
            password: r00t
            key_filename: /root/.ssh/id_rsa
            tg_config:
                ixchassis: "1.2.1.7" # ixia chassis ip
                tcl_port: "8009" # tcl server port
                lib_path: "/opt/ixia/ixos-api/8.01.0.2/lib/ixTcl1.0"
                root_dir: "/opt/ixia/ixos-api/8.01.0.2/"
                py_bin_path: "/opt/ixia/ixload/8.01.106.3/bin/"
                py_lib_path: "/opt/ixia/ixnetwork/8.01.1029.14/lib/PythonApi"
                dut_result_dir: "/mnt/ixia"
                version: 8.1
            interfaces:
                xe0:  # logical name from topology.yaml and vnfd.yaml
                    vpci: "2:5" # Card:port
                    driver:    "none"
                    dpdk_port_num: 0
                    local_ip: "152.16.100.20"
                    netmask:   "255.255.0.0"
                    local_mac: "00:98:10:64:14:00"
                xe1:  # logical name from topology.yaml and vnfd.yaml
                    vpci: "2:6" # [(Card, port)]
                    driver:    "none"
                    dpdk_port_num: 1
                    local_ip: "152.40.40.20"
                    netmask:   "255.255.0.0"
                    local_mac: "00:98:28:28:14:00"

  For SR-IOV/OVS-DPDK pod files, please refer to the Standalone Virtualization
  section above for the ovs-dpdk/sriov configuration.

3. Start the IxNetwork TCL Server.
   You will also need to configure the IxNetwork machine to start the IXIA
   IxNetworkTclServer. This can be started like so:

    - Connect to the IxNetwork machine using RDP
    - Go to:
      ``Start->Programs->Ixia->IxNetwork->IxNetwork 7.21.893.14 GA->IxNetworkTclServer``
      (or ``IxNetworkApiServer``)

4. Execute the test case in the samplevnf folder, e.g.
   ``<repo>/samples/vnf_samples/nsut/vfw/tc_baremetal_rfc2544_ipv4_1rule_1flow_64B_ixia.yaml``