Merge "Updating yaml file to match other standalone test cases"
[yardstick.git] / docs / testing / user / userguide / 13-nsb-installation.rst
.. This work is licensed under a Creative Commons Attribution 4.0 International
.. License.
.. http://creativecommons.org/licenses/by/4.0
.. (c) OPNFV, 2016-2018 Intel Corporation.

..
   Convention for heading levels in Yardstick documentation:

   =======  Heading 0 (reserved for the title in a document)
   -------  Heading 1
   ^^^^^^^  Heading 2
   +++++++  Heading 3
   '''''''  Heading 4

   Avoid deeper levels because they do not render well.


================
NSB Installation
================

.. _OVS-DPDK: http://docs.openvswitch.org/en/latest/intro/install/dpdk/
.. _devstack: https://docs.openstack.org/devstack/pike/

Abstract
--------

The steps needed to run Yardstick with NSB testing are:

* Install Yardstick (NSB testing).
* Set up or reference a ``pod.yaml`` file describing the test topology.
* Create or reference the test configuration YAML file.
* Run the test case.

Prerequisites
-------------

Refer to :doc:`04-installation` for more information on Yardstick
prerequisites.

Several prerequisites are needed for Yardstick (VNF testing):

  * Python modules: pyzmq, pika
  * flex
  * bison
  * build-essential
  * automake
  * libtool
  * librabbitmq-dev
  * rabbitmq-server
  * collectd
  * intel-cmt-cat

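The ``nsb_setup.sh`` script described below is intended to install these
dependencies automatically. If you need to install them by hand, a minimal
sketch for an Ubuntu host could look as follows (package names may differ per
distribution):

.. code-block:: console

  # System packages
  sudo apt-get update
  sudo apt-get install -y flex bison build-essential automake libtool \
       librabbitmq-dev rabbitmq-server collectd intel-cmt-cat
  # Python modules
  pip install pyzmq pika
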
Hardware & Software Ingredients
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

SUT requirements:

   ======= ===================
   Item    Description
   ======= ===================
   Memory  Min 20GB
   NICs    2 x 10G
   OS      Ubuntu 16.04.3 LTS
   kernel  4.4.0-34-generic
   DPDK    17.02
   ======= ===================

Boot and BIOS settings:

   ============= =================================================
   Boot settings default_hugepagesz=1G hugepagesz=1G hugepages=16
                 hugepagesz=2M hugepages=2048 isolcpus=1-11,22-33
                 nohz_full=1-11,22-33 rcu_nocbs=1-11,22-33
                 iommu=on iommu=pt intel_iommu=on
                 Note: nohz_full and rcu_nocbs are used to
                 disable Linux kernel interrupts
   BIOS          CPU Power and Performance Policy <Performance>
                 CPU C-state Disabled
                 CPU P-state Disabled
                 Enhanced Intel® Speedstep® Tech Disabled
                 Hyper-Threading Technology (If supported) Enabled
                 Virtualization Technology Enabled
                 Intel(R) VT for Direct I/O Enabled
                 Coherency Enabled
                 Turbo Boost Disabled
   ============= =================================================

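These boot settings are typically applied through GRUB. A minimal sketch,
assuming an Ubuntu host and the parameters from the table above:

.. code-block:: console

  # Append the parameters to GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub,
  # then regenerate the GRUB configuration and reboot:
  sudo update-grub
  sudo reboot

  # After the reboot, verify that the hugepages were allocated:
  grep HugePages /proc/meminfo
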
Install Yardstick (NSB Testing)
-------------------------------

Download the source code and check out the latest stable branch:

.. code-block:: console

  git clone https://gerrit.opnfv.org/gerrit/yardstick
  cd yardstick
  # Switch to latest stable branch
  git checkout stable/gambia

Configure the network proxy, either by setting the variables in the global
environment file or by exporting them in the current shell.

* In the global environment file (``/etc/environment``)::

    http_proxy='http://proxy.company.com:port'
    https_proxy='http://proxy.company.com:port'

* In the current shell::

    export http_proxy='http://proxy.company.com:port'
    export https_proxy='http://proxy.company.com:port'

Modify the Yardstick installation inventory used by Ansible::

  cat ./ansible/install-inventory.ini
  [jumphost]
  localhost ansible_connection=local

  [yardstick-standalone]
  yardstick-standalone-node ansible_host=192.168.1.2
  yardstick-standalone-node-2 ansible_host=192.168.1.3

  # The section below is only for backward compatibility.
  # It will be removed later.
  [yardstick:children]
  jumphost

  [all:vars]
  ansible_user=root
  ansible_pass=root

.. note::

   Passwordless SSH access must be configured for all the nodes defined in the
   ``install-inventory.ini`` file.
   If you want to use password authentication, you need to install ``sshpass``::

     sudo -EH apt-get install sshpass

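A minimal sketch of enabling passwordless SSH access from the jumphost, using
the example node addresses from the inventory above:

.. code-block:: console

  # Generate a key pair on the jumphost, if one does not exist yet
  ssh-keygen -t rsa
  # Copy the public key to every node defined in the inventory
  ssh-copy-id root@192.168.1.2
  ssh-copy-id root@192.168.1.3
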
To execute an installation for a BareMetal or a Standalone context::

    ./nsb_setup.sh


To execute an installation for an OpenStack context::

    ./nsb_setup.sh <path to admin-openrc.sh>

The above commands will set up Docker with the latest Yardstick code. To enter
the container, execute::

  docker exec -it yardstick bash

The setup also automatically downloads all the packages needed for NSB testing.
Refer to chapter :doc:`04-installation` for more details on Docker.

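As a quick sanity check, verify that the Yardstick container is running before
continuing:

.. code-block:: console

  docker ps --filter name=yardstick
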
Installing Yardstick via Docker, as above, is the recommended approach. Another
way to execute an installation for a Bare-Metal or a Standalone context is to
use the Ansible script ``install.yaml``. Refer to chapter :doc:`04-installation`
for more details.

System Topology
---------------

.. code-block:: console

  +----------+              +----------+
  |          |              |          |
  |          | (0)----->(0) |          |
  |    TG1   |              |    DUT   |
  |          |              |          |
  |          | (1)<-----(1) |          |
  +----------+              +----------+
  trafficgen_1                   vnf


Environment parameters and credentials
--------------------------------------

Configure yardstick.conf
^^^^^^^^^^^^^^^^^^^^^^^^

If you did not run ``yardstick env influxdb`` inside the container to generate
``yardstick.conf``, then create the config file manually (run inside the
container)::

    cp ./etc/yardstick/yardstick.conf.sample /etc/yardstick/yardstick.conf
    vi /etc/yardstick/yardstick.conf

Add ``trex_path``, ``trex_client_lib`` and ``bin_path`` to the ``nsb``
section::

  [DEFAULT]
  debug = True
  dispatcher = influxdb

  [dispatcher_influxdb]
  timeout = 5
  target = http://{YOUR_IP_HERE}:8086
  db_name = yardstick
  username = root
  password = root

  [nsb]
  trex_path=/opt/nsb_bin/trex/scripts
  bin_path=/opt/nsb_bin
  trex_client_lib=/opt/nsb_bin/trex_client/stl

Run Yardstick - Network Service Testcases
-----------------------------------------

NS testing - using yardstick CLI
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

See :doc:`04-installation`.

Connect to the Yardstick container::

  docker exec -it yardstick /bin/bash

If you are running ``heat`` testcases and ``nsb_setup.sh`` was not used, source
the OpenStack credentials::

  source /etc/yardstick/openstack.creds

In addition to the above, you need to set the ``EXTERNAL_NETWORK`` for
OpenStack::

  export EXTERNAL_NETWORK="<openstack public network>"

Finally, you should be able to run the test case::

  yardstick --debug task start yardstick/samples/vnf_samples/nsut/<vnf>/<test case>

Network Service Benchmarking - Bare-Metal
-----------------------------------------

Bare-Metal Config pod.yaml describing Topology
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Bare-Metal 2-Node setup
+++++++++++++++++++++++

.. code-block:: console

  +----------+              +----------+
  |          |              |          |
  |          | (0)----->(0) |          |
  |    TG1   |              |    DUT   |
  |          |              |          |
  |          | (n)<-----(n) |          |
  +----------+              +----------+
  trafficgen_1                   vnf

Bare-Metal 3-Node setup - Correlated Traffic
++++++++++++++++++++++++++++++++++++++++++++

.. code-block:: console

  +----------+              +----------+            +------------+
  |          |              |          |            |            |
  |          |              |          |            |            |
  |          | (0)----->(0) |          |            |    UDP     |
  |    TG1   |              |    DUT   |            |   Replay   |
  |          |              |          |            |            |
  |          |              |          |(1)<---->(0)|            |
  +----------+              +----------+            +------------+
  trafficgen_1                   vnf                 trafficgen_2


Bare-Metal Config pod.yaml
^^^^^^^^^^^^^^^^^^^^^^^^^^

Before executing Yardstick test cases, make sure that ``pod.yaml`` reflects the
topology and update all the required fields::

    cp /etc/yardstick/nodes/pod.yaml.nsb.sample /etc/yardstick/nodes/pod.yaml

.. code-block:: YAML

    nodes:
    -
        name: trafficgen_1
        role: TrafficGen
        ip: 1.1.1.1
        user: root
        password: r00t
        interfaces:
            xe0:  # logical name from topology.yaml and vnfd.yaml
                vpci:      "0000:07:00.0"
                driver:    i40e # default kernel driver
                dpdk_port_num: 0
                local_ip: "152.16.100.20"
                netmask:   "255.255.255.0"
                local_mac: "00:00:00:00:00:01"
            xe1:  # logical name from topology.yaml and vnfd.yaml
                vpci:      "0000:07:00.1"
                driver:    i40e # default kernel driver
                dpdk_port_num: 1
                local_ip: "152.16.40.20"
                netmask:   "255.255.255.0"
                local_mac: "00:00:00:00:00:02"

    -
        name: vnf
        role: vnf
        ip: 1.1.1.2
        user: root
        password: r00t
        host: 1.1.1.2 # BM: host == ip; virtualized env: host == compute node IP
        interfaces:
            xe0:  # logical name from topology.yaml and vnfd.yaml
                vpci:      "0000:07:00.0"
                driver:    i40e # default kernel driver
                dpdk_port_num: 0
                local_ip: "152.16.100.19"
                netmask:   "255.255.255.0"
                local_mac: "00:00:00:00:00:03"

            xe1:  # logical name from topology.yaml and vnfd.yaml
                vpci:      "0000:07:00.1"
                driver:    i40e # default kernel driver
                dpdk_port_num: 1
                local_ip: "152.16.40.19"
                netmask:   "255.255.255.0"
                local_mac: "00:00:00:00:00:04"
        routing_table:
        - network: "152.16.100.20"
          netmask: "255.255.255.0"
          gateway: "152.16.100.20"
          if: "xe0"
        - network: "152.16.40.20"
          netmask: "255.255.255.0"
          gateway: "152.16.40.20"
          if: "xe1"
        nd_route_tbl:
        - network: "0064:ff9b:0:0:0:0:9810:6414"
          netmask: "112"
          gateway: "0064:ff9b:0:0:0:0:9810:6414"
          if: "xe0"
        - network: "0064:ff9b:0:0:0:0:9810:2814"
          netmask: "112"
          gateway: "0064:ff9b:0:0:0:0:9810:2814"
          if: "xe1"


Standalone Virtualization
-------------------------

SR-IOV
^^^^^^

SR-IOV Pre-requisites
+++++++++++++++++++++

On the host where the VM is created:
 a) Create and configure a bridge named ``br-int`` for the VM to connect to
    the external network. Currently this can be done using a VXLAN tunnel.

    Execute the following on the host where the VM is created::

      ip link add type vxlan remote <Jumphost IP> local <DUT IP> id <ID: 10> dstport 4789
      brctl addbr br-int
      brctl addif br-int vxlan0
      ip link set dev vxlan0 up
      ip addr add <IP#1, like: 172.20.2.1/24> dev br-int
      ip link set dev br-int up

  .. note:: You may need to add extra rules to iptables to forward traffic.

  .. code-block:: console

    iptables -A FORWARD -i br-int -s <network ip address>/<netmask> -j ACCEPT
    iptables -A FORWARD -o br-int -d <network ip address>/<netmask> -j ACCEPT

  Execute the following on the jump host:

  .. code-block:: console

      ip link add type vxlan remote <DUT IP> local <Jumphost IP> id <ID: 10> dstport 4789
      ip addr add <IP#2, like: 172.20.2.2/24> dev vxlan0
      ip link set dev vxlan0 up

  .. note:: The host and the jump host are different bare-metal servers.

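  The tunnel can be verified with a simple connectivity check, e.g. using the
  example addresses above:

  .. code-block:: console

    # From the host, the jump host end of the tunnel should answer
    ping -c 3 172.20.2.2
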
 b) Modify the test case management CIDR.
    IP addresses IP#1, IP#2 and the CIDR must be in the same network.

  .. code-block:: YAML

    servers:
      vnf:
        network_ports:
          mgmt:
            cidr: '1.1.1.7/24'

 c) Build a guest image for the VNF to run.
    Most of the sample test cases in Yardstick use a guest image called
    ``yardstick-nsb-image``, which deviates from an Ubuntu Cloud Server image.
    Yardstick has a tool for building this custom image with SampleVNF.
    It is necessary to have ``sudo`` rights to use this tool.

   You may also need to install several additional packages to use this tool,
   using the command below::

      sudo apt-get update && sudo apt-get install -y qemu-utils kpartx

   The image can be built using the following commands in the directory where
   Yardstick is installed::

      export YARD_IMG_ARCH='amd64'
      sudo echo "Defaults env_keep += \'YARD_IMG_ARCH\'" >> /etc/sudoers
      sudo tools/yardstick-img-dpdk-modify tools/ubuntu-server-cloudimg-samplevnf-modify.sh

   For instructions on generating a cloud image using Ansible, and for more
   details, refer to chapter :doc:`04-installation`.

   .. note:: The VM should be built with a static IP and be accessible from the
      Yardstick host.


SR-IOV Config pod.yaml describing Topology
++++++++++++++++++++++++++++++++++++++++++

SR-IOV 2-Node setup
+++++++++++++++++++

.. code-block:: console

                               +--------------------+
                               |                    |
                               |                    |
                               |        DUT         |
                               |       (VNF)        |
                               |                    |
                               +--------------------+
                               | VF NIC |  | VF NIC |
                               +--------+  +--------+
                                     ^          ^
                                     |          |
                                     |          |
  +----------+               +-------------------------+
  |          |               |       ^          ^      |
  |          |               |       |          |      |
  |          | (0)<----->(0) | ------    SUT    |      |
  |    TG1   |               |                  |      |
  |          | (n)<----->(n) | -----------------       |
  |          |               |                         |
  +----------+               +-------------------------+
  trafficgen_1                          host



SR-IOV 3-Node setup - Correlated Traffic
++++++++++++++++++++++++++++++++++++++++

.. code-block:: console

                             +--------------------+
                             |                    |
                             |                    |
                             |        DUT         |
                             |       (VNF)        |
                             |                    |
                             +--------------------+
                             | VF NIC |  | VF NIC |
                             +--------+  +--------+
                                   ^          ^
                                   |          |
                                   |          |
  +----------+               +---------------------+            +--------------+
  |          |               |     ^          ^    |            |              |
  |          |               |     |          |    |            |              |
  |          | (0)<----->(0) |-----           |    |            |     TG2      |
  |    TG1   |               |         SUT    |    |            | (UDP Replay) |
  |          |               |                |    |            |              |
  |          | (n)<----->(n) |                -----| (n)<-->(n) |              |
  +----------+               +---------------------+            +--------------+
  trafficgen_1                          host                      trafficgen_2

Before executing Yardstick test cases, make sure that ``pod.yaml`` reflects the
topology and update all the required fields.

.. code-block:: console

    cp <yardstick>/etc/yardstick/nodes/standalone/trex_bm.yaml.sample /etc/yardstick/nodes/standalone/pod_trex.yaml
    cp <yardstick>/etc/yardstick/nodes/standalone/host_sriov.yaml /etc/yardstick/nodes/standalone/host_sriov.yaml

.. note:: Update all the required fields like ip, user, password, pcis, etc.

SR-IOV Config pod_trex.yaml
+++++++++++++++++++++++++++

.. code-block:: YAML

    nodes:
    -
        name: trafficgen_1
        role: TrafficGen
        ip: 1.1.1.1
        user: root
        password: r00t
        key_filename: /root/.ssh/id_rsa
        interfaces:
            xe0:  # logical name from topology.yaml and vnfd.yaml
                vpci:      "0000:07:00.0"
                driver:    i40e # default kernel driver
                dpdk_port_num: 0
                local_ip: "152.16.100.20"
                netmask:   "255.255.255.0"
                local_mac: "00:00:00:00:00:01"
            xe1:  # logical name from topology.yaml and vnfd.yaml
                vpci:      "0000:07:00.1"
                driver:    i40e # default kernel driver
                dpdk_port_num: 1
                local_ip: "152.16.40.20"
                netmask:   "255.255.255.0"
                local_mac: "00:00:00:00:00:02"

SR-IOV Config host_sriov.yaml
+++++++++++++++++++++++++++++

.. code-block:: YAML

    nodes:
    -
       name: sriov
       role: Sriov
       ip: 192.168.100.101
       user: ""
       password: ""

SR-IOV testcase update:
``<yardstick>/samples/vnf_samples/nsut/vfw/tc_sriov_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``

Update contexts section
'''''''''''''''''''''''

.. code-block:: YAML

  contexts:
   - name: yardstick
     type: Node
     file: /etc/yardstick/nodes/standalone/pod_trex.yaml
   - type: StandaloneSriov
     file: /etc/yardstick/nodes/standalone/host_sriov.yaml
     name: yardstick
     vm_deploy: True
     flavor:
       images: "/var/lib/libvirt/images/ubuntu.qcow2"
       ram: 4096
       extra_specs:
         hw:cpu_sockets: 1
         hw:cpu_cores: 6
         hw:cpu_threads: 2
       user: "" # update VM username
       password: "" # update password
     servers:
       vnf:
         network_ports:
           mgmt:
             cidr: '1.1.1.61/24'  # Update VM IP address; if static, <ip>/<mask>, if dynamic, <start of ip>/<mask>
           xe0:
             - uplink_0
           xe1:
             - downlink_0
     networks:
       uplink_0:
         phy_port: "0000:05:00.0"
         vpci: "0000:00:07.0"
         cidr: '152.16.100.10/24'
         gateway_ip: '152.16.100.20'
       downlink_0:
         phy_port: "0000:05:00.1"
         vpci: "0000:00:08.0"
         cidr: '152.16.40.10/24'
         gateway_ip: '152.16.100.20'

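After the contexts are updated, the test case can be run from the Yardstick
container as described in `NS testing - using yardstick CLI`_::

  yardstick --debug task start <yardstick>/samples/vnf_samples/nsut/vfw/tc_sriov_rfc2544_ipv4_1rule_1flow_64B_trex.yaml
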
OVS-DPDK
^^^^^^^^

OVS-DPDK Pre-requisites
+++++++++++++++++++++++

On the host where the VM is created:
 a) Create and configure a bridge named ``br-int`` for the VM to connect to
    the external network. Currently this can be done using a VXLAN tunnel.

    Execute the following on the host where the VM is created:

  .. code-block:: console

      ip link add type vxlan remote <Jumphost IP> local <DUT IP> id <ID: 10> dstport 4789
      brctl addbr br-int
      brctl addif br-int vxlan0
      ip link set dev vxlan0 up
      ip addr add <IP#1, like: 172.20.2.1/24> dev br-int
      ip link set dev br-int up

  .. note:: You may need to add extra rules to iptables to forward traffic.

  .. code-block:: console

    iptables -A FORWARD -i br-int -s <network ip address>/<netmask> -j ACCEPT
    iptables -A FORWARD -o br-int -d <network ip address>/<netmask> -j ACCEPT

  Execute the following on the jump host:

  .. code-block:: console

      ip link add type vxlan remote <DUT IP> local <Jumphost IP> id <ID: 10> dstport 4789
      ip addr add <IP#2, like: 172.20.2.2/24> dev vxlan0
      ip link set dev vxlan0 up

  .. note:: The host and the jump host are different bare-metal servers.

 b) Modify the test case management CIDR.
    IP addresses IP#1, IP#2 and the CIDR must be in the same network.

  .. code-block:: YAML

    servers:
      vnf:
        network_ports:
          mgmt:
            cidr: '1.1.1.7/24'

 c) Build a guest image for the VNF to run.
    Most of the sample test cases in Yardstick use a guest image called
    ``yardstick-nsb-image``, which deviates from an Ubuntu Cloud Server image.
    Yardstick has a tool for building this custom image with SampleVNF.
    It is necessary to have ``sudo`` rights to use this tool.

   You may need to install several additional packages to use this tool, using
   the command below::

      sudo apt-get update && sudo apt-get install -y qemu-utils kpartx

   The image can be built using the following commands in the directory where
   Yardstick is installed::

      export YARD_IMG_ARCH='amd64'
      sudo echo "Defaults env_keep += \'YARD_IMG_ARCH\'" >> /etc/sudoers
      sudo tools/yardstick-img-dpdk-modify tools/ubuntu-server-cloudimg-samplevnf-modify.sh

   For more details, refer to chapter :doc:`04-installation`.

   .. note:: The VM should be built with a static IP and should be accessible
      from the Yardstick host.

 d) OVS & DPDK version: OVS 2.7 and DPDK 16.11.1 or above are supported.

 e) Set up `OVS-DPDK`_ on the host.

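A minimal sketch of the basic OVS-DPDK configuration is shown below; refer to
the `OVS-DPDK`_ guide for the authoritative steps. The memory and CPU mask
values are examples and should match your system:

.. code-block:: console

  # Enable DPDK support in OVS (requires OVS built with DPDK)
  ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true
  # Hugepage memory to preallocate per NUMA socket
  ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-socket-mem="2048,2048"
  # CPU mask for the PMD threads
  ovs-vsctl --no-wait set Open_vSwitch . other_config:pmd-cpu-mask=0xC
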
OVS-DPDK Config pod.yaml describing Topology
++++++++++++++++++++++++++++++++++++++++++++

OVS-DPDK 2-Node setup
+++++++++++++++++++++

.. code-block:: console

                               +--------------------+
                               |                    |
                               |                    |
                               |        DUT         |
                               |       (VNF)        |
                               |                    |
                               +--------------------+
                               | virtio |  | virtio |
                               +--------+  +--------+
                                    ^          ^
                                    |          |
                                    |          |
                               +--------+  +--------+
                               | vHOST0 |  | vHOST1 |
  +----------+               +-------------------------+
  |          |               |       ^          ^      |
  |          |               |       |          |      |
  |          | (0)<----->(0) | ------           |      |
  |    TG1   |               |          SUT     |      |
  |          |               |       (ovs-dpdk) |      |
  |          | (n)<----->(n) |------------------       |
  +----------+               +-------------------------+
  trafficgen_1                          host


OVS-DPDK 3-Node setup - Correlated Traffic
++++++++++++++++++++++++++++++++++++++++++

.. code-block:: console

                               +--------------------+
                               |                    |
                               |                    |
                               |        DUT         |
                               |       (VNF)        |
                               |                    |
                               +--------------------+
                               | virtio |  | virtio |
                               +--------+  +--------+
                                    ^          ^
                                    |          |
                                    |          |
                               +--------+  +--------+
                               | vHOST0 |  | vHOST1 |
  +----------+               +-------------------------+          +------------+
  |          |               |       ^          ^      |          |            |
  |          |               |       |          |      |          |            |
  |          | (0)<----->(0) | ------           |      |          |    TG2     |
  |    TG1   |               |          SUT     |      |          |(UDP Replay)|
  |          |               |      (ovs-dpdk)  |      |          |            |
  |          | (n)<----->(n) |                  ------ |(n)<-->(n)|            |
  +----------+               +-------------------------+          +------------+
  trafficgen_1                          host                       trafficgen_2


Before executing Yardstick test cases, make sure that the ``pod.yaml`` reflects
the topology and update all the required fields::

  cp <yardstick>/etc/yardstick/nodes/standalone/trex_bm.yaml.sample /etc/yardstick/nodes/standalone/pod_trex.yaml
  cp <yardstick>/etc/yardstick/nodes/standalone/host_ovs.yaml /etc/yardstick/nodes/standalone/host_ovs.yaml

.. note:: Update all the required fields like ip, user, password, pcis, etc.

OVS-DPDK Config pod_trex.yaml
+++++++++++++++++++++++++++++

.. code-block:: YAML

    nodes:
    -
      name: trafficgen_1
      role: TrafficGen
      ip: 1.1.1.1
      user: root
      password: r00t
      interfaces:
          xe0:  # logical name from topology.yaml and vnfd.yaml
              vpci:      "0000:07:00.0"
              driver:    i40e # default kernel driver
              dpdk_port_num: 0
              local_ip: "152.16.100.20"
              netmask:   "255.255.255.0"
              local_mac: "00:00:00:00:00:01"
          xe1:  # logical name from topology.yaml and vnfd.yaml
              vpci:      "0000:07:00.1"
              driver:    i40e # default kernel driver
              dpdk_port_num: 1
              local_ip: "152.16.40.20"
              netmask:   "255.255.255.0"
              local_mac: "00:00:00:00:00:02"

OVS-DPDK Config host_ovs.yaml
+++++++++++++++++++++++++++++

.. code-block:: YAML

    nodes:
    -
       name: ovs_dpdk
       role: OvsDpdk
       ip: 192.168.100.101
       user: ""
       password: ""

ovs_dpdk testcase update:
``<yardstick>/samples/vnf_samples/nsut/vfw/tc_ovs_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``

Update contexts section
'''''''''''''''''''''''

.. code-block:: YAML

  contexts:
   - name: yardstick
     type: Node
     file: /etc/yardstick/nodes/standalone/pod_trex.yaml
   - type: StandaloneOvsDpdk
     name: yardstick
     file: /etc/yardstick/nodes/standalone/host_ovs.yaml
     vm_deploy: True
     ovs_properties:
       version:
         ovs: 2.7.0
         dpdk: 16.11.1
       pmd_threads: 2
       ram:
         socket_0: 2048
         socket_1: 2048
       queues: 4
       vpath: "/usr/local"

     flavor:
       images: "/var/lib/libvirt/images/ubuntu.qcow2"
       ram: 4096
       extra_specs:
         hw:cpu_sockets: 1
         hw:cpu_cores: 6
         hw:cpu_threads: 2
       user: "" # update VM username
       password: "" # update password
     servers:
       vnf:
         network_ports:
           mgmt:
             cidr: '1.1.1.61/24'  # Update VM IP address; if static, <ip>/<mask>, if dynamic, <start of ip>/<mask>
           xe0:
             - uplink_0
           xe1:
             - downlink_0
     networks:
       uplink_0:
         phy_port: "0000:05:00.0"
         vpci: "0000:00:07.0"
         cidr: '152.16.100.10/24'
         gateway_ip: '152.16.100.20'
       downlink_0:
         phy_port: "0000:05:00.1"
         vpci: "0000:00:08.0"
         cidr: '152.16.40.10/24'
         gateway_ip: '152.16.100.20'

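After the contexts are updated, the test case can be run from the Yardstick
container as described in `NS testing - using yardstick CLI`_::

  yardstick --debug task start <yardstick>/samples/vnf_samples/nsut/vfw/tc_ovs_rfc2544_ipv4_1rule_1flow_64B_trex.yaml
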
OpenStack with SR-IOV support
-----------------------------

This section describes how to run a Sample VNF test case, using the Heat
context, with SR-IOV. It also covers how to install OpenStack with SR-IOV
support on Ubuntu 16.04, using DevStack.


Single node OpenStack with external TG
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. code-block:: console

                                 +----------------------------+
                                 |OpenStack(DevStack)         |
                                 |                            |
                                 |   +--------------------+   |
                                 |   |sample-VNF VM       |   |
                                 |   |                    |   |
                                 |   |        DUT         |   |
                                 |   |       (VNF)        |   |
                                 |   |                    |   |
                                 |   +--------+  +--------+   |
                                 |   | VF NIC |  | VF NIC |   |
                                 |   +-----+--+--+----+---+   |
                                 |         ^          ^       |
                                 |         |          |       |
  +----------+                   +---------+----------+-------+
  |          |                   |        VF0        VF1      |
  |          |                   |         ^          ^       |
  |          |                   |         |   SUT    |       |
  |    TG    | (PF0)<----->(PF0) +---------+          |       |
  |          |                   |                    |       |
  |          | (PF1)<----->(PF1) +--------------------+       |
  |          |                   |                            |
  +----------+                   +----------------------------+
  trafficgen_1                                 host


Host pre-configuration
++++++++++++++++++++++

.. warning:: The following configuration requires sudo access to the system.
   Make sure that your user has such access.

Enable the Intel VT-d or AMD-Vi extension in the BIOS. Some system
manufacturers disable this extension by default.

Activate the Intel VT-d or AMD-Vi extension in the kernel by modifying the GRUB
config file ``/etc/default/grub``.

For the Intel platform::

  ...
  GRUB_CMDLINE_LINUX_DEFAULT="intel_iommu=on"
  ...

For the AMD platform::

  ...
  GRUB_CMDLINE_LINUX_DEFAULT="amd_iommu=on"
  ...

Update the grub configuration file and restart the system:

.. warning:: The following command will reboot the system.

.. code:: bash

  sudo update-grub
  sudo reboot

Make sure the extension has been enabled::

  sudo journalctl -b 0 | grep -e IOMMU -e DMAR

  Feb 06 14:50:14 hostname kernel: ACPI: DMAR 0x000000006C406000 0001E0 (v01 INTEL  S2600WF  00000001 INTL 20091013)
  Feb 06 14:50:14 hostname kernel: DMAR: IOMMU enabled
  Feb 06 14:50:14 hostname kernel: DMAR: Host address width 46
  Feb 06 14:50:14 hostname kernel: DMAR: DRHD base: 0x000000d37fc000 flags: 0x0
  Feb 06 14:50:14 hostname kernel: DMAR: dmar0: reg_base_addr d37fc000 ver 1:0 cap 8d2078c106f0466 ecap f020de
  Feb 06 14:50:14 hostname kernel: DMAR: DRHD base: 0x000000e0ffc000 flags: 0x0
  Feb 06 14:50:14 hostname kernel: DMAR: dmar1: reg_base_addr e0ffc000 ver 1:0 cap 8d2078c106f0466 ecap f020de
  Feb 06 14:50:14 hostname kernel: DMAR: DRHD base: 0x000000ee7fc000 flags: 0x0

.. TODO: Refer to the yardstick installation guide for proxy set up

Set up the system proxy (if needed). Add the following configuration into the
``/etc/environment`` file:

.. note:: The proxy server name/port and IPs should be changed according to
  the actual/current proxy configuration in the lab.

.. code:: bash

  export http_proxy=http://proxy.company.com:port
  export https_proxy=http://proxy.company.com:port
  export ftp_proxy=http://proxy.company.com:port
  export no_proxy=localhost,127.0.0.1,company.com,<IP-OF-HOST1>,<IP-OF-HOST2>,...
  export NO_PROXY=localhost,127.0.0.1,company.com,<IP-OF-HOST1>,<IP-OF-HOST2>,...

Upgrade the system:

.. code:: bash

  sudo -EH apt-get update
  sudo -EH apt-get upgrade
  sudo -EH apt-get dist-upgrade

Install the dependencies needed for DevStack:

.. code:: bash

  sudo -EH apt-get install python python-dev python-pip

Set up the SR-IOV ports on the host:

.. note:: The ``enp24s0f0`` and ``enp24s0f1`` interfaces are physical function
  (PF) interfaces on the host and ``enp24s0f3`` is a public interface used in
  OpenStack, so the interface names should be changed according to the HW
  environment used for testing.

.. code:: bash

  sudo ip link set dev enp24s0f0 up
  sudo ip link set dev enp24s0f1 up
  sudo ip link set dev enp24s0f3 up

  # Create VFs on PF
  echo 2 | sudo tee /sys/class/net/enp24s0f0/device/sriov_numvfs
  echo 2 | sudo tee /sys/class/net/enp24s0f1/device/sriov_numvfs

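To confirm that the VFs were created, a quick check can be done (the interface
name depends on the HW environment, as noted above):

.. code-block:: console

  cat /sys/class/net/enp24s0f0/device/sriov_numvfs
  lspci | grep -i "Virtual Function"
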
DevStack installation
+++++++++++++++++++++

If you want to try out NSB, but don't have OpenStack set up, you can use
`Devstack`_ to install OpenStack on a host. Please note that the
``stable/pike`` branch of the devstack repo should be used during the
installation. The required ``local.conf`` configuration file is described
below.

DevStack configuration file:

.. note:: Update the devstack configuration file by replacing the angle
  brackets with a short description inside.

.. note:: Use the ``lspci | grep Ether`` & ``lspci -n | grep <PCI ADDRESS>``
  commands to get the device and vendor id of the virtual function (VF).

.. literalinclude:: code/single-devstack-local.conf
   :language: console

Start the devstack installation on the host.

TG host configuration
+++++++++++++++++++++

Yardstick automatically installs and configures the TRex traffic generator on
the TG host based on the provided POD file (see below). It is nevertheless
recommended to check the compatibility of the NIC installed on the TG server
with the TRex software using the
`manual <https://trex-tgn.cisco.com/trex/doc/trex_manual.html>`_.

Run the Sample VNF test case
++++++++++++++++++++++++++++

There is an example of a Sample VNF test case ready to be executed in an
OpenStack environment with SR-IOV support: ``samples/vnf_samples/nsut/vfw/
tc_heat_sriov_external_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``.

Install Yardstick using the `Install Yardstick (NSB Testing)`_ steps for the
OpenStack context.

Create a pod file for the TG in the yardstick repo folder located in the
yardstick container:

.. note:: The ``ip``, ``user``, ``password`` and ``vpci`` fields should be
  changed according to the HW environment used for the testing. Use the
  ``lshw -c network -businfo`` command to get the PF PCI address for the
  ``vpci`` field.

.. literalinclude:: code/single-yardstick-pod.conf
   :language: console

Run the Sample vFW RFC2544 SR-IOV TC (``samples/vnf_samples/nsut/vfw/
tc_heat_sriov_external_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``) in the heat
context using the steps described in the
`NS testing - using yardstick CLI`_ section.


Multi node OpenStack TG and VNF setup (two nodes)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. code-block:: console

  +----------------------------+                   +----------------------------+
  |OpenStack(DevStack)         |                   |OpenStack(DevStack)         |
  |                            |                   |                            |
  |   +--------------------+   |                   |   +--------------------+   |
  |   |sample-VNF VM       |   |                   |   |sample-VNF VM       |   |
  |   |                    |   |                   |   |                    |   |
  |   |         TG         |   |                   |   |        DUT         |   |
  |   |    trafficgen_1    |   |                   |   |       (VNF)        |   |
  |   |                    |   |                   |   |                    |   |
  |   +--------+  +--------+   |                   |   +--------+  +--------+   |
  |   | VF NIC |  | VF NIC |   |                   |   | VF NIC |  | VF NIC |   |
  |   +----+---+--+----+---+   |                   |   +-----+--+--+----+---+   |
  |        ^           ^       |                   |         ^          ^       |
  |        |           |       |                   |         |          |       |
  +--------+-----------+-------+                   +---------+----------+-------+
  |       VF0         VF1      |                   |        VF0        VF1      |
  |        ^           ^       |                   |         ^          ^       |
  |        |    SUT2   |       |                   |         |   SUT1   |       |
  |        |           +-------+ (PF0)<----->(PF0) +---------+          |       |
  |        |                   |                   |                    |       |
  |        +-------------------+ (PF1)<----->(PF1) +--------------------+       |
  |                            |                   |                            |
  +----------------------------+                   +----------------------------+
           host2 (compute)                               host1 (controller)


Controller/Compute pre-configuration
++++++++++++++++++++++++++++++++++++

Pre-configuration of the controller and compute hosts is the same as
described in the `Host pre-configuration`_ section.

DevStack configuration
++++++++++++++++++++++

A reference ``local.conf`` for deploying OpenStack in a multi-host environment
using `Devstack`_ is shown in this section. The ``stable/pike`` branch of the
devstack repo should be used during the installation.

.. note:: Update the devstack configuration files by replacing the angle
  brackets with a short description inside.

.. note:: Use the ``lspci | grep Ether`` & ``lspci -n | grep <PCI ADDRESS>``
  commands to get the device and vendor id of the virtual function (VF).

DevStack configuration file for the controller host:

.. literalinclude:: code/multi-devstack-controller-local.conf
   :language: console

DevStack configuration file for the compute host:

.. literalinclude:: code/multi-devstack-compute-local.conf
   :language: console

Start the devstack installation on the controller and compute hosts.

Run the sample vFW TC
+++++++++++++++++++++

Install Yardstick using the `Install Yardstick (NSB Testing)`_ steps for the
OpenStack context.

Run the sample vFW RFC2544 SR-IOV test case
(``samples/vnf_samples/nsut/vfw/tc_heat_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``)
in the heat context using the steps described in the
`NS testing - using yardstick CLI`_ section and the following Yardstick command
line arguments:

.. code:: bash

  yardstick -d task start --task-args='{"provider": "sriov"}' \
  samples/vnf_samples/nsut/vfw/tc_heat_rfc2544_ipv4_1rule_1flow_64B_trex.yaml


Enabling other Traffic generators
---------------------------------

IxLoad
^^^^^^

1. Software needed: IxLoadAPI ``<IxLoadTclApi version>Linux64.bin.tgz`` and
   ``<IxOS version>Linux64.bin.tar.gz`` (download them from the Ixia support
   site). Install ``<IxLoadTclApi version>Linux64.bin.tgz`` and
   ``<IxOS version>Linux64.bin.tar.gz``.
   If the installation was not done inside the container, then after installing
   the IXIA client, check ``/opt/ixia/ixload/<ver>/bin/ixloadpython`` and make
   sure you can run this cmd inside the yardstick container. Usually the user
   is required to copy or link ``/opt/ixia/python/<ver>/bin/ixiapython`` to
   ``/usr/bin/ixiapython<ver>`` inside the container.
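
   For example, the link can be created as follows (a sketch; the paths and
   versions are illustrative and must match your installation):

   .. code-block:: console

     ln -s /opt/ixia/python/<ver>/bin/ixiapython /usr/bin/ixiapython<ver>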

2. Update the ``pod_ixia.yaml`` file with the Ixia details.

  .. code-block:: console

    cp <repo>/etc/yardstick/nodes/pod.yaml.nsb.sample.ixia \
      etc/yardstick/nodes/pod_ixia.yaml

  Configure ``pod_ixia.yaml``

  .. literalinclude:: code/pod_ixia.yaml
     :language: console

  For SR-IOV/OVS-DPDK pod files, please refer to `Standalone Virtualization`_
  for the OVS-DPDK/SR-IOV configuration.

3. Start the IxOS TCL Server (install 'Ixia IxExplorer IxOS <version>').
   You will also need to configure the IxLoad machine to start the IXIA
   IxosTclServer. This can be started like so:

   * Connect to the IxLoad machine using RDP
   * Go to:
     ``Start->Programs->Ixia->IxOS->IxOS 8.01-GA-Patch1->Ixia Tcl Server IxOS 8.01-GA-Patch1``
     or
     ``C:\Program Files (x86)\Ixia\IxOS\8.01-GA-Patch1\ixTclServer.exe``

4. Create a folder ``Results`` in ``c:\`` and share the folder on the network.

5. Execute the test case in the samplevnf folder, e.g.
   ``<repo>/samples/vnf_samples/nsut/vfw/tc_baremetal_http_ixload_1b_Requests-65000_Concurrency.yaml``

IxNetwork
^^^^^^^^^

IxNetwork testcases use the IxNetwork API Python Bindings module, which is
installed as part of the requirements of the project.

1. Update the ``pod_ixia.yaml`` file with the Ixia details.

  .. code-block:: console

    cp <repo>/etc/yardstick/nodes/pod.yaml.nsb.sample.ixia \
    etc/yardstick/nodes/pod_ixia.yaml

  Configure ``pod_ixia.yaml``

  .. literalinclude:: code/pod_ixia.yaml
     :language: console

  For SR-IOV/OVS-DPDK pod files, please refer to the above
  `Standalone Virtualization`_ section for the OVS-DPDK/SR-IOV configuration.

2. Start the IxNetwork TCL Server.
   You will also need to configure the IxNetwork machine to start the IXIA
   IxNetworkTclServer. This can be started like so:

    * Connect to the IxNetwork machine using RDP
    * Go to:
      ``Start->Programs->Ixia->IxNetwork->IxNetwork 7.21.893.14 GA->IxNetworkTclServer``
      (or ``IxNetworkApiServer``)

3. Execute the test case in the samplevnf folder, e.g.
   ``<repo>/samples/vnf_samples/nsut/vfw/tc_baremetal_rfc2544_ipv4_1rule_1flow_64B_ixia.yaml``

Spirent Landslide
-----------------

In order to use Spirent Landslide for vEPC testcases, some dependencies have
to be preinstalled and properly configured.

- Java

    A 32-bit Java installation is required for the Spirent Landslide TCL API.

    | ``$ sudo apt-get install openjdk-8-jdk:i386``

    .. important::
      Make sure ``LD_LIBRARY_PATH`` is pointing to the 32-bit JRE. For more
      details check the `Linux Troubleshooting <http://TAS_HOST_IP/tclapiinstall.html#trouble>`_
      section of the installation instructions.

- LsApi (Tcl API module)

    Follow the Landslide documentation for detailed instructions on the Linux
    installation of the Tcl API and its dependencies
    ``http://TAS_HOST_IP/tclapiinstall.html``.
    For working with the LsApi Python wrapper only steps 1-5 are required.

    .. note:: After installation make sure your API home path is included in
      the ``PYTHONPATH`` environment variable.

    .. important::
      The current version of the LsApi module has an issue with reading
      ``LD_LIBRARY_PATH``. For the LsApi module to initialize correctly, the
      following lines (184-186) in lsapi.py

      .. code-block:: python

          ldpath = os.environ.get('LD_LIBRARY_PATH', '')
          if ldpath == '':
           environ['LD_LIBRARY_PATH'] = environ['LD_LIBRARY_PATH'] + ':' + ldpath

      should be changed to:

      .. code-block:: python

          ldpath = os.environ.get('LD_LIBRARY_PATH', '')
          if not ldpath == '':
                 environ['LD_LIBRARY_PATH'] = environ['LD_LIBRARY_PATH'] + ':' + ldpath

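The same change can be applied non-interactively, e.g. with ``sed`` (a sketch;
adjust the path to your LsApi installation):

.. code-block:: console

  sed -i "s/if ldpath == '':/if not ldpath == '':/" lsapi.py
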
.. note:: The Spirent Landslide TCL software package needs to be updated in
  case the user upgrades to a new version of the Spirent Landslide software.