.. This work is licensed under a Creative Commons Attribution 4.0 International
.. License.
.. http://creativecommons.org/licenses/by/4.0
.. (c) OPNFV, 2016-2017 Intel Corporation.

=======================================
Yardstick - NSB Testing - Installation
=======================================

Abstract
========

The Network Service Benchmarking (NSB) extends the Yardstick framework to
perform VNF characterization and benchmarking in three different execution
environments: bare metal (i.e. native Linux), standalone virtual environment,
and managed virtualized environment (e.g. OpenStack). It also adds the
capability to interact with external traffic generators, both hardware- and
software-based, to trigger and validate traffic according to user-defined
profiles.

The steps needed to run Yardstick with NSB testing are:

* Install Yardstick (NSB Testing).
* Set up or reference a ``pod.yaml`` file describing the test topology.
* Create or reference the test configuration YAML file.
* Run the test case.

Prerequisites
=============

Refer to the *Yardstick Installation* chapter for more information on
Yardstick prerequisites.

Several prerequisites are needed for Yardstick (VNF testing); an example
installation sketch follows the list:

  * Python Modules: pyzmq, pika
  * flex
  * bison
  * build-essential
  * automake
  * libtool
  * librabbitmq-dev
  * rabbitmq-server
  * collectd
  * intel-cmt-cat

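For example, on an Ubuntu jumphost most of these can typically be installed as
follows (a sketch only; package names may differ between distribution
releases, and ``intel-cmt-cat`` may have to be built from source on older
releases):

.. code-block:: console

  sudo apt-get update
  # distribution packages
  sudo apt-get install -y flex bison build-essential automake libtool \
       librabbitmq-dev rabbitmq-server collectd
  # Python modules
  pip install pyzmq pika
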
Hardware & Software Ingredients
-------------------------------

SUT requirements:


   ======= ===================
   Item    Description
   ======= ===================
   Memory  Min 20GB
   NICs    2 x 10G
   OS      Ubuntu 16.04.3 LTS
   kernel  4.4.0-34-generic
   DPDK    17.02
   ======= ===================

Boot and BIOS settings:


   ============= =================================================
   Boot settings default_hugepagesz=1G hugepagesz=1G hugepages=16
                 hugepagesz=2M hugepages=2048 isolcpus=1-11,22-33
                 nohz_full=1-11,22-33 rcu_nocbs=1-11,22-33
                 iommu=on iommu=pt intel_iommu=on
                 Note: nohz_full and rcu_nocbs are used to disable
                 Linux kernel interrupts
   BIOS          CPU Power and Performance Policy <Performance>
                 CPU C-state Disabled
                 CPU P-state Disabled
                 Enhanced Intel® Speedstep® Tech Disabled
                 Hyper-Threading Technology (If supported) Enabled
                 Virtualization Technology Enabled
                 Intel(R) VT for Direct I/O Enabled
                 Coherency Enabled
                 Turbo Boost Disabled
   ============= =================================================

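After rebooting with these settings, the kernel command line and hugepage
allocation can be verified, e.g.:

.. code-block:: console

   # confirm the boot parameters took effect
   cat /proc/cmdline
   # confirm the hugepages were allocated
   grep Huge /proc/meminfo

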
Install Yardstick (NSB Testing)
===============================

Download the Yardstick source code and install Yardstick from it:

.. code-block:: console

  git clone https://gerrit.opnfv.org/gerrit/yardstick

  cd yardstick

  # Switch to latest stable branch
  # git checkout <tag or stable branch>
  git checkout stable/euphrates

Configure the network proxy, either by exporting the environment variables or
by setting the global environment file:

.. code-block:: ini

    cat /etc/environment
    http_proxy='http://proxy.company.com:port'
    https_proxy='http://proxy.company.com:port'

.. code-block:: console

    export http_proxy='http://proxy.company.com:port'
    export https_proxy='http://proxy.company.com:port'

The last step is to modify the Yardstick installation inventory used by
Ansible:

.. code-block:: ini

  cat ./ansible/install-inventory.ini
  [jumphost]
  localhost  ansible_connection=local

  [yardstick-standalone]
  yardstick-standalone-node ansible_host=192.168.1.2
  yardstick-standalone-node-2 ansible_host=192.168.1.3

  # The section below exists only for backward compatibility.
  # It will be removed later.
  [yardstick:children]
  jumphost

  [all:vars]
  ansible_user=root
  ansible_pass=root

.. note::

   Passwordless SSH access needs to be configured for all the nodes defined in
   the ``install-inventory.ini`` file.
   If you want to use password authentication, you need to install ``sshpass``:

   .. code-block:: console

     sudo -EH apt-get install sshpass

To execute an installation for a Bare-Metal or a Standalone context:

.. code-block:: console

    ./nsb_setup.sh


To execute an installation for an OpenStack context:

.. code-block:: console

    ./nsb_setup.sh <path to admin-openrc.sh>

The above commands set up a Docker container with the latest Yardstick code.
To enter the container, execute:

.. code-block:: console

  docker exec -it yardstick bash

The setup also automatically downloads all the packages needed for NSB
testing. Refer to the **Install Yardstick using Docker (recommended)** section
of chapter :doc:`04-installation` for more details on Docker.

System Topology
===============

.. code-block:: console

  +----------+              +----------+
  |          |              |          |
  |          | (0)----->(0) |          |
  |    TG1   |              |    DUT   |
  |          |              |          |
  |          | (1)<-----(1) |          |
  +----------+              +----------+
  trafficgen_1                   vnf


Environment parameters and credentials
======================================

Config yardstick conf
---------------------

If you did not run ``yardstick env influxdb`` inside the container (which
generates a correct ``yardstick.conf``), then create the config file manually
(run inside the container):
::

    cp ./etc/yardstick/yardstick.conf.sample /etc/yardstick/yardstick.conf
    vi /etc/yardstick/yardstick.conf

Add ``trex_path``, ``trex_client_lib`` and ``bin_path`` to the ``nsb`` section.

::

  [DEFAULT]
  debug = True
  dispatcher = file, influxdb

  [dispatcher_influxdb]
  timeout = 5
  target = http://{YOUR_IP_HERE}:8086
  db_name = yardstick
  username = root
  password = root

  [nsb]
  trex_path=/opt/nsb_bin/trex/scripts
  bin_path=/opt/nsb_bin
  trex_client_lib=/opt/nsb_bin/trex_client/stl

Run Yardstick - Network Service Testcases
=========================================


NS testing - using yardstick CLI
--------------------------------

  See :doc:`04-installation`

.. code-block:: console

  docker exec -it yardstick /bin/bash
  source /etc/yardstick/openstack.creds (only for heat TC if nsb_setup.sh was NOT used)
  export EXTERNAL_NETWORK="<openstack public network>" (only for heat TC)
  yardstick --debug task start yardstick/samples/vnf_samples/nsut/<vnf>/<test case>

Network Service Benchmarking - Bare-Metal
=========================================

Bare-Metal Config pod.yaml describing Topology
----------------------------------------------

Bare-Metal 2-Node setup
^^^^^^^^^^^^^^^^^^^^^^^
.. code-block:: console

  +----------+              +----------+
  |          |              |          |
  |          | (0)----->(0) |          |
  |    TG1   |              |    DUT   |
  |          |              |          |
  |          | (n)<-----(n) |          |
  +----------+              +----------+
  trafficgen_1                   vnf

Bare-Metal 3-Node setup - Correlated Traffic
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. code-block:: console

  +----------+              +----------+            +------------+
  |          |              |          |            |            |
  |          |              |          |            |            |
  |          | (0)----->(0) |          |            |    UDP     |
  |    TG1   |              |    DUT   |            |   Replay   |
  |          |              |          |            |            |
  |          |              |          |(1)<---->(0)|            |
  +----------+              +----------+            +------------+
  trafficgen_1                   vnf                 trafficgen_2


Bare-Metal Config pod.yaml
--------------------------
Before executing Yardstick test cases, make sure that ``pod.yaml`` reflects the
topology and update all the required fields::

    cp /etc/yardstick/nodes/pod.yaml.nsb.sample /etc/yardstick/nodes/pod.yaml

.. code-block:: YAML

    nodes:
    -
        name: trafficgen_1
        role: TrafficGen
        ip: 1.1.1.1
        user: root
        password: r00t
        interfaces:
            xe0:  # logical name from topology.yaml and vnfd.yaml
                vpci:      "0000:07:00.0"
                driver:    i40e # default kernel driver
                dpdk_port_num: 0
                local_ip: "152.16.100.20"
                netmask:   "255.255.255.0"
                local_mac: "00:00:00:00:00:01"
            xe1:  # logical name from topology.yaml and vnfd.yaml
                vpci:      "0000:07:00.1"
                driver:    i40e # default kernel driver
                dpdk_port_num: 1
                local_ip: "152.16.40.20"
                netmask:   "255.255.255.0"
                local_mac: "00:00:00:00:00:02"

    -
        name: vnf
        role: vnf
        ip: 1.1.1.2
        user: root
        password: r00t
        host: 1.1.1.2 # BM: host == ip; virtualized env: host == compute node IP
        interfaces:
            xe0:  # logical name from topology.yaml and vnfd.yaml
                vpci:      "0000:07:00.0"
                driver:    i40e # default kernel driver
                dpdk_port_num: 0
                local_ip: "152.16.100.19"
                netmask:   "255.255.255.0"
                local_mac: "00:00:00:00:00:03"

            xe1:  # logical name from topology.yaml and vnfd.yaml
                vpci:      "0000:07:00.1"
                driver:    i40e # default kernel driver
                dpdk_port_num: 1
                local_ip: "152.16.40.19"
                netmask:   "255.255.255.0"
                local_mac: "00:00:00:00:00:04"
        routing_table:
        - network: "152.16.100.20"
          netmask: "255.255.255.0"
          gateway: "152.16.100.20"
          if: "xe0"
        - network: "152.16.40.20"
          netmask: "255.255.255.0"
          gateway: "152.16.40.20"
          if: "xe1"
        nd_route_tbl:
        - network: "0064:ff9b:0:0:0:0:9810:6414"
          netmask: "112"
          gateway: "0064:ff9b:0:0:0:0:9810:6414"
          if: "xe0"
        - network: "0064:ff9b:0:0:0:0:9810:2814"
          netmask: "112"
          gateway: "0064:ff9b:0:0:0:0:9810:2814"
          if: "xe1"

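The DUT ports listed in ``pod.yaml`` are driven by DPDK during the test. As an
illustrative sketch only (the exact script location and kernel module depend
on how DPDK 17.02 was built and installed on the node), the test ports can be
bound to a DPDK-compatible driver with the standard ``dpdk-devbind.py`` tool:

.. code-block:: console

  # load a DPDK-compatible kernel module (assumes the igb_uio module was built)
  sudo modprobe uio
  sudo insmod ./x86_64-native-linuxapp-gcc/kmod/igb_uio.ko
  # bind the two test ports from the pod.yaml example
  sudo ./usertools/dpdk-devbind.py --bind=igb_uio 0000:07:00.0 0000:07:00.1
  # verify the binding
  ./usertools/dpdk-devbind.py --status
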
Network Service Benchmarking - Standalone Virtualization
========================================================

SR-IOV
------

SR-IOV Pre-requisites
^^^^^^^^^^^^^^^^^^^^^

On the host where the VM is created:
 a) Create and configure a bridge named ``br-int`` for the VM to connect to
    the external network. Currently this can be done using a VXLAN tunnel.

    Execute the following on the host where the VM is created:

  .. code-block:: console

      ip link add type vxlan remote <Jumphost IP> local <DUT IP> id <ID: 10> dstport 4789
      brctl addbr br-int
      brctl addif br-int vxlan0
      ip link set dev vxlan0 up
      ip addr add <IP#1, like: 172.20.2.1/24> dev br-int
      ip link set dev br-int up

  .. note:: It may be necessary to add extra iptables rules to forward traffic.

  .. code-block:: console

    iptables -A FORWARD -i br-int -s <network ip address>/<netmask> -j ACCEPT
    iptables -A FORWARD -o br-int -d <network ip address>/<netmask> -j ACCEPT

  Execute the following on the jump host:

  .. code-block:: console

      ip link add type vxlan remote <DUT IP> local <Jumphost IP> id <ID: 10> dstport 4789
      ip addr add <IP#2, like: 172.20.2.2/24> dev vxlan0
      ip link set dev vxlan0 up

  .. note:: Host and jump host are different bare-metal servers.

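  Once both ends of the tunnel are up, it is worth confirming connectivity
  over the VXLAN link before continuing, e.g. from the jump host:

  .. code-block:: console

      ping -c 4 <IP#1, like: 172.20.2.1>
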
 b) Modify the test case management CIDR.
    IP addresses IP#1, IP#2 and the CIDR must be in the same network.

  .. code-block:: YAML

    servers:
      vnf:
        network_ports:
          mgmt:
            cidr: '1.1.1.7/24'

 c) Build the guest image for the VNF to run.
    Most of the sample test cases in Yardstick use a guest image called
    ``yardstick-nsb-image`` which deviates from an Ubuntu Cloud Server image.
    Yardstick has a tool for building this custom image with SampleVNF.
    It is necessary to have ``sudo`` rights to use this tool.

    You may also need to install several additional packages to use this tool,
    by following the commands below::

       sudo apt-get update && sudo apt-get install -y qemu-utils kpartx

    This image can be built using the following commands in the directory
    where Yardstick is installed:

    .. code-block:: console

       export YARD_IMG_ARCH='amd64'
       echo 'Defaults env_keep += "YARD_IMG_ARCH"' | sudo tee -a /etc/sudoers
       sudo tools/yardstick-img-dpdk-modify tools/ubuntu-server-cloudimg-samplevnf-modify.sh

    Alternatively, please use the Ansible script to generate the cloud image;
    refer to :doc:`04-installation` for more details.

    .. note:: The VM should be built with a static IP and should be accessible
       from the Yardstick host.

SR-IOV Config pod.yaml describing Topology
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

SR-IOV 2-Node setup
^^^^^^^^^^^^^^^^^^^
.. code-block:: console

                               +--------------------+
                               |                    |
                               |                    |
                               |        DUT         |
                               |       (VNF)        |
                               |                    |
                               +--------------------+
                               | VF NIC |  | VF NIC |
                               +--------+  +--------+
                                     ^          ^
                                     |          |
                                     |          |
  +----------+               +-------------------------+
  |          |               |       ^          ^      |
  |          |               |       |          |      |
  |          | (0)<----->(0) | ------           |      |
  |    TG1   |               |           SUT    |      |
  |          |               |                  |      |
  |          | (n)<----->(n) |------------------       |
  +----------+               +-------------------------+
  trafficgen_1                          host


SR-IOV 3-Node setup - Correlated Traffic
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. code-block:: console

                               +--------------------+
                               |                    |
                               |                    |
                               |        DUT         |
                               |       (VNF)        |
                               |                    |
                               +--------------------+
                               | VF NIC |  | VF NIC |
                               +--------+  +--------+
                                     ^          ^
                                     |          |
                                     |          |
  +----------+               +-------------------------+            +--------------+
  |          |               |       ^          ^      |            |              |
  |          |               |       |          |      |            |              |
  |          | (0)<----->(0) | ------           |      |            |     TG2      |
  |    TG1   |               |           SUT    |      |            | (UDP Replay) |
  |          |               |                  |      |            |              |
  |          | (n)<----->(n) |                  ------ | (n)<-->(n) |              |
  +----------+               +-------------------------+            +--------------+
  trafficgen_1                          host                       trafficgen_2

Before executing Yardstick test cases, make sure that ``pod.yaml`` reflects the
topology and update all the required fields.

.. code-block:: console

    cp <yardstick>/etc/yardstick/nodes/standalone/trex_bm.yaml.sample /etc/yardstick/nodes/standalone/pod_trex.yaml
    cp <yardstick>/etc/yardstick/nodes/standalone/host_sriov.yaml /etc/yardstick/nodes/standalone/host_sriov.yaml

.. note:: Update all the required fields like ip, user, password, pcis, etc.

SR-IOV Config pod_trex.yaml
^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. code-block:: YAML

    nodes:
    -
        name: trafficgen_1
        role: TrafficGen
        ip: 1.1.1.1
        user: root
        password: r00t
        key_filename: /root/.ssh/id_rsa
        interfaces:
            xe0:  # logical name from topology.yaml and vnfd.yaml
                vpci:      "0000:07:00.0"
                driver:    i40e # default kernel driver
                dpdk_port_num: 0
                local_ip: "152.16.100.20"
                netmask:   "255.255.255.0"
                local_mac: "00:00:00:00:00:01"
            xe1:  # logical name from topology.yaml and vnfd.yaml
                vpci:      "0000:07:00.1"
                driver:    i40e # default kernel driver
                dpdk_port_num: 1
                local_ip: "152.16.40.20"
                netmask:   "255.255.255.0"
                local_mac: "00:00:00:00:00:02"

SR-IOV Config host_sriov.yaml
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. code-block:: YAML

    nodes:
    -
       name: sriov
       role: Sriov
       ip: 192.168.100.101
       user: ""
       password: ""

SR-IOV testcase update:
``<yardstick>/samples/vnf_samples/nsut/vfw/tc_sriov_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``

Update "contexts" section
"""""""""""""""""""""""""

.. code-block:: YAML

  contexts:
   - name: yardstick
     type: Node
     file: /etc/yardstick/nodes/standalone/pod_trex.yaml
   - type: StandaloneSriov
     file: /etc/yardstick/nodes/standalone/host_sriov.yaml
     name: yardstick
     vm_deploy: True
     flavor:
       images: "/var/lib/libvirt/images/ubuntu.qcow2"
       ram: 4096
       extra_specs:
         hw:cpu_sockets: 1
         hw:cpu_cores: 6
         hw:cpu_threads: 2
       user: "" # update VM username
       password: "" # update password
     servers:
       vnf:
         network_ports:
           mgmt:
             cidr: '1.1.1.61/24'  # Update VM IP address, if static, <ip>/<mask> or if dynamic, <start of ip>/<mask>
           xe0:
             - uplink_0
           xe1:
             - downlink_0
     networks:
       uplink_0:
         phy_port: "0000:05:00.0"
         vpci: "0000:00:07.0"
         cidr: '152.16.100.10/24'
         gateway_ip: '152.16.100.20'
       downlink_0:
         phy_port: "0000:05:00.1"
         vpci: "0000:00:08.0"
         cidr: '152.16.40.10/24'
         gateway_ip: '152.16.100.20'

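With the context updated, the test case can be launched from the yardstick
container as described in `NS testing - using yardstick CLI`_, for example:

.. code-block:: console

  yardstick --debug task start samples/vnf_samples/nsut/vfw/tc_sriov_rfc2544_ipv4_1rule_1flow_64B_trex.yaml
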

OVS-DPDK
--------

OVS-DPDK Pre-requisites
^^^^^^^^^^^^^^^^^^^^^^^

On the host where the VM is created:
 a) Create and configure a bridge named ``br-int`` for the VM to connect to
    the external network. Currently this can be done using a VXLAN tunnel.

    Execute the following on the host where the VM is created:

  .. code-block:: console

      ip link add type vxlan remote <Jumphost IP> local <DUT IP> id <ID: 10> dstport 4789
      brctl addbr br-int
      brctl addif br-int vxlan0
      ip link set dev vxlan0 up
      ip addr add <IP#1, like: 172.20.2.1/24> dev br-int
      ip link set dev br-int up

  .. note:: It may be necessary to add extra iptables rules to forward traffic.

  .. code-block:: console

    iptables -A FORWARD -i br-int -s <network ip address>/<netmask> -j ACCEPT
    iptables -A FORWARD -o br-int -d <network ip address>/<netmask> -j ACCEPT

  Execute the following on the jump host:

  .. code-block:: console

      ip link add type vxlan remote <DUT IP> local <Jumphost IP> id <ID: 10> dstport 4789
      ip addr add <IP#2, like: 172.20.2.2/24> dev vxlan0
      ip link set dev vxlan0 up

  .. note:: Host and jump host are different bare-metal servers.

 b) Modify the test case management CIDR.
    IP addresses IP#1, IP#2 and the CIDR must be in the same network.

  .. code-block:: YAML

    servers:
      vnf:
        network_ports:
          mgmt:
            cidr: '1.1.1.7/24'

 c) Build the guest image for the VNF to run.
    Most of the sample test cases in Yardstick use a guest image called
    ``yardstick-nsb-image`` which deviates from an Ubuntu Cloud Server image.
    Yardstick has a tool for building this custom image with SampleVNF.
    It is necessary to have ``sudo`` rights to use this tool.

    You may also need to install several additional packages to use this tool,
    by following the commands below::

       sudo apt-get update && sudo apt-get install -y qemu-utils kpartx

    This image can be built using the following commands in the directory
    where Yardstick is installed::

       export YARD_IMG_ARCH='amd64'
       echo 'Defaults env_keep += "YARD_IMG_ARCH"' | sudo tee -a /etc/sudoers
       sudo tools/yardstick-img-dpdk-modify tools/ubuntu-server-cloudimg-samplevnf-modify.sh

    For more details refer to chapter :doc:`04-installation`.

    .. note:: The VM should be built with a static IP and should be accessible
       from the Yardstick host.

 d) OVS & DPDK version:
     - OVS 2.7 and DPDK 16.11.1 or above versions are supported

 e) Set up OVS/DPDK on the host.
     Please refer to the following guide on how to set up `OVS-DPDK
     <http://docs.openvswitch.org/en/latest/intro/install/dpdk/>`_.
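
As a minimal sketch only (the linked OVS-DPDK guide is authoritative; the
memory and core-mask values below are illustrative assumptions), enabling
DPDK support in an already-built OVS typically looks like:

.. code-block:: console

   # enable DPDK support in OVS and reserve hugepage memory per NUMA socket
   ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true
   ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-socket-mem="2048,2048"
   # pin the OVS PMD threads to dedicated cores (example mask)
   ovs-vsctl --no-wait set Open_vSwitch . other_config:pmd-cpu-mask=0x6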

OVS-DPDK Config pod.yaml describing Topology
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

OVS-DPDK 2-Node setup
^^^^^^^^^^^^^^^^^^^^^


.. code-block:: console

                               +--------------------+
                               |                    |
                               |                    |
                               |        DUT         |
                               |       (VNF)        |
                               |                    |
                               +--------------------+
                               | virtio |  | virtio |
                               +--------+  +--------+
                                    ^          ^
                                    |          |
                                    |          |
                               +--------+  +--------+
                               | vHOST0 |  | vHOST1 |
  +----------+               +-------------------------+
  |          |               |       ^          ^      |
  |          |               |       |          |      |
  |          | (0)<----->(0) | ------           |      |
  |    TG1   |               |          SUT     |      |
  |          |               |       (ovs-dpdk) |      |
  |          | (n)<----->(n) |------------------       |
  +----------+               +-------------------------+
  trafficgen_1                          host

OVS-DPDK 3-Node setup - Correlated Traffic
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. code-block:: console

                               +--------------------+
                               |                    |
                               |                    |
                               |        DUT         |
                               |       (VNF)        |
                               |                    |
                               +--------------------+
                               | virtio |  | virtio |
                               +--------+  +--------+
                                    ^          ^
                                    |          |
                                    |          |
                               +--------+  +--------+
                               | vHOST0 |  | vHOST1 |
  +----------+               +-------------------------+          +------------+
  |          |               |       ^          ^      |          |            |
  |          |               |       |          |      |          |            |
  |          | (0)<----->(0) | ------           |      |          |    TG2     |
  |    TG1   |               |          SUT     |      |          |(UDP Replay)|
  |          |               |      (ovs-dpdk)  |      |          |            |
  |          | (n)<----->(n) |                  ------ |(n)<-->(n)|            |
  +----------+               +-------------------------+          +------------+
  trafficgen_1                          host                       trafficgen_2


Before executing Yardstick test cases, make sure that ``pod.yaml`` reflects the
topology and update all the required fields.

.. code-block:: console

  cp <yardstick>/etc/yardstick/nodes/standalone/trex_bm.yaml.sample /etc/yardstick/nodes/standalone/pod_trex.yaml
  cp <yardstick>/etc/yardstick/nodes/standalone/host_ovs.yaml /etc/yardstick/nodes/standalone/host_ovs.yaml

.. note:: Update all the required fields like ip, user, password, pcis, etc.

OVS-DPDK Config pod_trex.yaml
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. code-block:: YAML

    nodes:
    -
      name: trafficgen_1
      role: TrafficGen
      ip: 1.1.1.1
      user: root
      password: r00t
      interfaces:
          xe0:  # logical name from topology.yaml and vnfd.yaml
              vpci:      "0000:07:00.0"
              driver:    i40e # default kernel driver
              dpdk_port_num: 0
              local_ip: "152.16.100.20"
              netmask:   "255.255.255.0"
              local_mac: "00:00:00:00:00:01"
          xe1:  # logical name from topology.yaml and vnfd.yaml
              vpci:      "0000:07:00.1"
              driver:    i40e # default kernel driver
              dpdk_port_num: 1
              local_ip: "152.16.40.20"
              netmask:   "255.255.255.0"
              local_mac: "00:00:00:00:00:02"

OVS-DPDK Config host_ovs.yaml
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. code-block:: YAML

    nodes:
    -
       name: ovs_dpdk
       role: OvsDpdk
       ip: 192.168.100.101
       user: ""
       password: ""

ovs_dpdk testcase update:
``<yardstick>/samples/vnf_samples/nsut/vfw/tc_ovs_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``

Update "contexts" section
"""""""""""""""""""""""""

.. code-block:: YAML

  contexts:
   - name: yardstick
     type: Node
     file: /etc/yardstick/nodes/standalone/pod_trex.yaml
   - type: StandaloneOvsDpdk
     name: yardstick
     file: /etc/yardstick/nodes/standalone/pod_ovs.yaml
     vm_deploy: True
     ovs_properties:
       version:
         ovs: 2.7.0
         dpdk: 16.11.1
       pmd_threads: 2
       ram:
         socket_0: 2048
         socket_1: 2048
       queues: 4
       vpath: "/usr/local"

     flavor:
       images: "/var/lib/libvirt/images/ubuntu.qcow2"
       ram: 4096
       extra_specs:
         hw:cpu_sockets: 1
         hw:cpu_cores: 6
         hw:cpu_threads: 2
       user: "" # update VM username
       password: "" # update password
     servers:
       vnf:
         network_ports:
           mgmt:
             cidr: '1.1.1.61/24'  # Update VM IP address, if static, <ip>/<mask> or if dynamic, <start of ip>/<mask>
           xe0:
             - uplink_0
           xe1:
             - downlink_0
     networks:
       uplink_0:
         phy_port: "0000:05:00.0"
         vpci: "0000:00:07.0"
         cidr: '152.16.100.10/24'
         gateway_ip: '152.16.100.20'
       downlink_0:
         phy_port: "0000:05:00.1"
         vpci: "0000:00:08.0"
         cidr: '152.16.40.10/24'
         gateway_ip: '152.16.100.20'

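Similarly, the OVS-DPDK test case can then be launched from the yardstick
container, for example:

.. code-block:: console

  yardstick --debug task start samples/vnf_samples/nsut/vfw/tc_ovs_rfc2544_ipv4_1rule_1flow_64B_trex.yaml

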
Network Service Benchmarking - OpenStack with SR-IOV support
============================================================

This section describes how to run a Sample VNF test case, using Heat context,
with SR-IOV. It also covers how to install OpenStack on Ubuntu 16.04, using
DevStack, with SR-IOV support.


Single node OpenStack setup with external TG
--------------------------------------------

.. code-block:: console

                                 +----------------------------+
                                 |OpenStack(DevStack)         |
                                 |                            |
                                 |   +--------------------+   |
                                 |   |sample-VNF VM       |   |
                                 |   |                    |   |
                                 |   |        DUT         |   |
                                 |   |       (VNF)        |   |
                                 |   |                    |   |
                                 |   +--------+  +--------+   |
                                 |   | VF NIC |  | VF NIC |   |
                                 |   +-----+--+--+----+---+   |
                                 |         ^          ^       |
                                 |         |          |       |
  +----------+                   +---------+----------+-------+
  |          |                   |        VF0        VF1      |
  |          |                   |         ^          ^       |
  |          |                   |         |   SUT    |       |
  |    TG    | (PF0)<----->(PF0) +---------+          |       |
  |          |                   |                    |       |
  |          | (PF1)<----->(PF1) +--------------------+       |
  |          |                   |                            |
  +----------+                   +----------------------------+
  trafficgen_1                                 host

Host pre-configuration
^^^^^^^^^^^^^^^^^^^^^^

.. warning:: The following configuration requires sudo access to the system.
  Make sure that your user has such access.

Enable the Intel VT-d or AMD-Vi extension in the BIOS. Some system
manufacturers disable this extension by default.

Activate the Intel VT-d or AMD-Vi extension in the kernel by modifying the GRUB
config file ``/etc/default/grub``.

For the Intel platform:

.. code:: bash

  ...
  GRUB_CMDLINE_LINUX_DEFAULT="intel_iommu=on"
  ...

For the AMD platform:

.. code:: bash

  ...
  GRUB_CMDLINE_LINUX_DEFAULT="amd_iommu=on"
  ...

Update the grub configuration file and restart the system:

.. warning:: The following command will reboot the system.

.. code:: bash

  sudo update-grub
  sudo reboot

Make sure the extension has been enabled:

.. code:: bash

  sudo journalctl -b 0 | grep -e IOMMU -e DMAR

  Feb 06 14:50:14 hostname kernel: ACPI: DMAR 0x000000006C406000 0001E0 (v01 INTEL  S2600WF  00000001 INTL 20091013)
  Feb 06 14:50:14 hostname kernel: DMAR: IOMMU enabled
  Feb 06 14:50:14 hostname kernel: DMAR: Host address width 46
  Feb 06 14:50:14 hostname kernel: DMAR: DRHD base: 0x000000d37fc000 flags: 0x0
  Feb 06 14:50:14 hostname kernel: DMAR: dmar0: reg_base_addr d37fc000 ver 1:0 cap 8d2078c106f0466 ecap f020de
  Feb 06 14:50:14 hostname kernel: DMAR: DRHD base: 0x000000e0ffc000 flags: 0x0
  Feb 06 14:50:14 hostname kernel: DMAR: dmar1: reg_base_addr e0ffc000 ver 1:0 cap 8d2078c106f0466 ecap f020de
  Feb 06 14:50:14 hostname kernel: DMAR: DRHD base: 0x000000ee7fc000 flags: 0x0

Setup the system proxy (if needed). Add the following configuration into the
``/etc/environment`` file:

.. note:: The proxy server name/port and IPs should be changed according to
  the actual/current proxy configuration in the lab.

.. code:: bash

  export http_proxy=http://proxy.company.com:port
  export https_proxy=http://proxy.company.com:port
  export ftp_proxy=http://proxy.company.com:port
  export no_proxy=localhost,127.0.0.1,company.com,<IP-OF-HOST1>,<IP-OF-HOST2>,...
  export NO_PROXY=localhost,127.0.0.1,company.com,<IP-OF-HOST1>,<IP-OF-HOST2>,...

Upgrade the system:

.. code:: bash

  sudo -EH apt-get update
  sudo -EH apt-get upgrade
  sudo -EH apt-get dist-upgrade

Install the dependencies needed for DevStack:

.. code:: bash

  sudo -EH apt-get install python
  sudo -EH apt-get install python-dev
  sudo -EH apt-get install python-pip

Setup SR-IOV ports on the host:

.. note:: ``enp24s0f0`` and ``enp24s0f1`` are physical function (PF) interfaces
  on the host and ``enp24s0f3`` is a public interface used in OpenStack, so the
  interface names should be changed according to the HW environment used for
  testing.

.. code:: bash

  sudo ip link set dev enp24s0f0 up
  sudo ip link set dev enp24s0f1 up
  sudo ip link set dev enp24s0f3 up

  # Create VFs on PF
  echo 2 | sudo tee /sys/class/net/enp24s0f0/device/sriov_numvfs
  echo 2 | sudo tee /sys/class/net/enp24s0f1/device/sriov_numvfs


DevStack installation
^^^^^^^^^^^^^^^^^^^^^

Use the official `Devstack <https://docs.openstack.org/devstack/pike/>`_
documentation to install OpenStack on the host. Please note that the stable
``pike`` branch of the devstack repo should be used during the installation.
The required ``local.conf`` configuration file is described below.

DevStack configuration file:

.. note:: Update the devstack configuration file by replacing the
  angle-bracketed placeholders (each contains a short description) with actual
  values.

.. note:: Use ``lspci | grep Ether`` & ``lspci -n | grep <PCI ADDRESS>``
  commands to get the device and vendor id of the virtual function (VF).

.. literalinclude:: code/single-devstack-local.conf
   :language: console

Start the devstack installation on the host.

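As a sketch only (the repository URL and branch are assumed from the Pike-era
documentation referenced above), the installation typically amounts to:

.. code-block:: console

  git clone https://git.openstack.org/openstack-dev/devstack -b stable/pike
  cd devstack
  # place the local.conf file described above in this directory, then:
  ./stack.sh

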
TG host configuration
^^^^^^^^^^^^^^^^^^^^^

Yardstick automatically installs and configures the Trex traffic generator on
the TG host based on the provided POD file (see below). It is nevertheless
recommended to check the compatibility of the NIC installed on the TG server
with the Trex software, using the manual at
https://trex-tgn.cisco.com/trex/doc/trex_manual.html.

Run the Sample VNF test case
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

There is an example of a Sample VNF test case ready to be executed in an
OpenStack environment with SR-IOV support: ``samples/vnf_samples/nsut/vfw/
tc_heat_sriov_external_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``.

Install yardstick using the `Install Yardstick (NSB Testing)`_ steps for the
OpenStack context.

Create the pod file for the TG in the yardstick repo folder located in the
yardstick container:

.. note:: The ``ip``, ``user``, ``password`` and ``vpci`` fields should be
  changed according to the HW environment used for the testing. Use the
  ``lshw -c network -businfo`` command to get the PF PCI address for the
  ``vpci`` field.

.. literalinclude:: code/single-yardstick-pod.conf
   :language: console

Run the Sample vFW RFC2544 SR-IOV TC (``samples/vnf_samples/nsut/vfw/
tc_heat_sriov_external_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``) in the heat
context using the steps described in the `NS testing - using yardstick CLI`_
section.

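For example, from inside the yardstick container (the ``EXTERNAL_NETWORK``
value is an assumption and must match your OpenStack deployment):

.. code-block:: console

  export EXTERNAL_NETWORK="<openstack public network>"
  yardstick --debug task start samples/vnf_samples/nsut/vfw/tc_heat_sriov_external_rfc2544_ipv4_1rule_1flow_64B_trex.yaml
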

Multi node OpenStack TG and VNF setup (two nodes)
-------------------------------------------------

.. code-block:: console

  +----------------------------+                   +----------------------------+
  |OpenStack(DevStack)         |                   |OpenStack(DevStack)         |
  |                            |                   |                            |
  |   +--------------------+   |                   |   +--------------------+   |
  |   |sample-VNF VM       |   |                   |   |sample-VNF VM       |   |
  |   |                    |   |                   |   |                    |   |
  |   |         TG         |   |                   |   |        DUT         |   |
  |   |    trafficgen_1    |   |                   |   |       (VNF)        |   |
  |   |                    |   |                   |   |                    |   |
  |   +--------+  +--------+   |                   |   +--------+  +--------+   |
  |   | VF NIC |  | VF NIC |   |                   |   | VF NIC |  | VF NIC |   |
  |   +----+---+--+----+---+   |                   |   +-----+--+--+----+---+   |
  |        ^           ^       |                   |         ^          ^       |
  |        |           |       |                   |         |          |       |
  +--------+-----------+-------+                   +---------+----------+-------+
  |       VF0         VF1      |                   |        VF0        VF1      |
  |        ^           ^       |                   |         ^          ^       |
  |        |    SUT2   |       |                   |         |   SUT1   |       |
  |        |           +-------+ (PF0)<----->(PF0) +---------+          |       |
  |        |                   |                   |                    |       |
  |        +-------------------+ (PF1)<----->(PF1) +--------------------+       |
  |                            |                   |                            |
  +----------------------------+                   +----------------------------+
           host2 (compute)                               host1 (controller)

Controller/Compute pre-configuration
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Pre-configuration of the controller and compute hosts is the same as described
in the `Host pre-configuration`_ section. Follow the steps in that section.


DevStack configuration
^^^^^^^^^^^^^^^^^^^^^^

Use the official `Devstack <https://docs.openstack.org/devstack/pike/>`_
documentation to install OpenStack on the hosts. Please note that the stable
``pike`` branch of the devstack repo should be used during the installation.
The required ``local.conf`` configuration files are described below.

.. note:: Update the devstack configuration files by replacing the
  angle-bracketed placeholders (each contains a short description) with actual
  values.

.. note:: Use ``lspci | grep Ether`` & ``lspci -n | grep <PCI ADDRESS>``
  commands to get the device and vendor id of the virtual function (VF).

DevStack configuration file for the controller host:

.. literalinclude:: code/multi-devstack-controller-local.conf
   :language: console

DevStack configuration file for the compute host:

.. literalinclude:: code/multi-devstack-compute-local.conf
   :language: console

Start the devstack installation on the controller and compute hosts.

Run the sample vFW TC
^^^^^^^^^^^^^^^^^^^^^

Install yardstick using the `Install Yardstick (NSB Testing)`_ steps for the
OpenStack context.

Run the sample vFW RFC2544 SR-IOV TC (``samples/vnf_samples/nsut/vfw/
tc_heat_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``) in the heat context using
the steps described in the `NS testing - using yardstick CLI`_ section and the
following yardstick command line arguments:

.. code:: bash

  yardstick -d task start --task-args='{"provider": "sriov"}' \
  samples/vnf_samples/nsut/vfw/tc_heat_rfc2544_ipv4_1rule_1flow_64B_trex.yaml

Enabling other Traffic generators
=================================

IxLoad
------

1. Software needed: IxLoadAPI ``<IxLoadTclApi version>Linux64.bin.tgz`` and
   ``<IxOS version>Linux64.bin.tar.gz`` (download from the Ixia support site).
   Install ``<IxLoadTclApi version>Linux64.bin.tgz`` and
   ``<IxOS version>Linux64.bin.tar.gz``.
   If the installation was not done inside the container, after installing
   the IXIA client, check ``/opt/ixia/ixload/<ver>/bin/ixloadpython`` and make
   sure you can run this cmd inside the yardstick container. Usually the user
   is required to copy or link ``/opt/ixia/python/<ver>/bin/ixiapython`` to
   ``/usr/bin/ixiapython<ver>`` inside the container.
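   For example (a sketch; replace ``<ver>`` with the installed version):

   .. code-block:: console

     ln -s /opt/ixia/python/<ver>/bin/ixiapython /usr/bin/ixiapython<ver>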

2. Update the ``pod_ixia.yaml`` file with the Ixia details.

  .. code-block:: console

    cp <repo>/etc/yardstick/nodes/pod.yaml.nsb.sample.ixia etc/yardstick/nodes/pod_ixia.yaml

  Config ``pod_ixia.yaml``

  .. literalinclude:: code/pod_ixia.yaml
     :language: console

  For SR-IOV/OVS-DPDK pod files, please refer to the Standalone Virtualization
  sections above for the OVS-DPDK/SR-IOV configuration.

3. Start the IxOS TCL Server (install 'Ixia IxExplorer IxOS <version>').
   You will also need to configure the IxLoad machine to start the IXIA
   IxosTclServer. This can be started like so:

   * Connect to the IxLoad machine using RDP
   * Go to:
     ``Start->Programs->Ixia->IxOS->IxOS 8.01-GA-Patch1->Ixia Tcl Server IxOS 8.01-GA-Patch1``
     or
     ``"C:\Program Files (x86)\Ixia\IxOS\8.01-GA-Patch1\ixTclServer.exe"``

4. Create a folder ``Results`` in c:\ and share the folder on the network.

5. Execute the testcase in the samplevnf folder, e.g.
   ``<repo>/samples/vnf_samples/nsut/vfw/tc_baremetal_http_ixload_1b_Requests-65000_Concurrency.yaml``

IxNetwork
---------

IxNetwork testcases use the IxNetwork API Python Bindings module, which is
installed as part of the requirements of the project.

1. Update the ``pod_ixia.yaml`` file with the Ixia details.

  .. code-block:: console

    cp <repo>/etc/yardstick/nodes/pod.yaml.nsb.sample.ixia etc/yardstick/nodes/pod_ixia.yaml

  Config ``pod_ixia.yaml``

  .. literalinclude:: code/pod_ixia.yaml
     :language: console

  For SR-IOV/OVS-DPDK pod files, please refer to the Standalone Virtualization
  sections above for the OVS-DPDK/SR-IOV configuration.

2. Start the IxNetwork TCL Server.
   You will also need to configure the IxNetwork machine to start the IXIA
   IxNetworkTclServer. This can be started like so:

    * Connect to the IxNetwork machine using RDP
    * Go to:
      ``Start->Programs->Ixia->IxNetwork->IxNetwork 7.21.893.14 GA->IxNetworkTclServer``
      (or ``IxNetworkApiServer``)

3. Execute the testcase in the samplevnf folder, e.g.
   ``<repo>/samples/vnf_samples/nsut/vfw/tc_baremetal_rfc2544_ipv4_1rule_1flow_64B_ixia.yaml``