.. This work is licensed under a Creative Commons Attribution 4.0 International
.. License.
.. http://creativecommons.org/licenses/by/4.0
.. (c) OPNFV, 2016-2018 Intel Corporation.

======================================
Yardstick - NSB Testing - Installation
======================================

Abstract
========

The Network Service Benchmarking (NSB) extends the Yardstick framework to do
VNF characterization and benchmarking in three different execution
environments: bare metal (i.e. native Linux environment), standalone virtual
environment and managed virtualized environment (e.g. OpenStack).
It also brings in the capability to interact with external traffic generators,
both hardware and software based, for triggering and validating the traffic
according to user defined profiles.

The steps needed to run Yardstick with NSB testing are:

* Install Yardstick (NSB Testing).
* Set up/reference a ``pod.yaml`` describing the test topology.
* Create/reference the test configuration yaml file.
* Run the test case.


Prerequisites
=============

Refer to the chapter Yardstick Installation for more information on Yardstick
prerequisites.

Several prerequisites are needed for Yardstick (VNF testing):

  * Python Modules: pyzmq, pika.
  * flex
  * bison
  * build-essential
  * automake
  * libtool
  * librabbitmq-dev
  * rabbitmq-server
  * collectd
  * intel-cmt-cat

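On an Ubuntu host these can typically be pulled in as sketched below; the
package names are assumptions based on the Ubuntu archives (on older releases
``intel-cmt-cat`` may have to be built from source), and pip is used for the
Python modules:

.. code-block:: console

  sudo apt-get update
  # Package names assume the Ubuntu archives; adjust for your distribution
  sudo apt-get install -y flex bison build-essential automake libtool \
      librabbitmq-dev rabbitmq-server collectd intel-cmt-cat
  # Python modules required for VNF testing
  pip install pyzmq pika
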
Hardware & Software Ingredients
-------------------------------

SUT requirements:


   ======= ===================
   Item    Description
   ======= ===================
   Memory  Min 20GB
   NICs    2 x 10G
   OS      Ubuntu 16.04.3 LTS
   kernel  4.4.0-34-generic
   DPDK    17.02
   ======= ===================

Boot and BIOS settings:


   ============= =================================================
   Boot settings default_hugepagesz=1G hugepagesz=1G hugepages=16
                 hugepagesz=2M hugepages=2048 isolcpus=1-11,22-33
                 nohz_full=1-11,22-33 rcu_nocbs=1-11,22-33
                 iommu=on iommu=pt intel_iommu=on
                 Note: nohz_full and rcu_nocbs are used to disable
                 Linux kernel interrupts
   BIOS          CPU Power and Performance Policy <Performance>
                 CPU C-state Disabled
                 CPU P-state Disabled
                 Enhanced Intel® SpeedStep® Tech Disabled
                 Hyper-Threading Technology (If supported) Enabled
                 Virtualization Technology Enabled
                 Intel(R) VT for Direct I/O Enabled
                 Coherency Enabled
                 Turbo Boost Disabled
   ============= =================================================

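After rebooting with these parameters, the kernel command line and the
hugepage allocation can be double-checked with standard tools, e.g.:

.. code-block:: console

  # Verify the boot parameters actually took effect
  cat /proc/cmdline
  # Verify hugepages were reserved
  grep Huge /proc/meminfo
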

Install Yardstick (NSB Testing)
===============================

Download the source code and install Yardstick from it:

.. code-block:: console

  git clone https://gerrit.opnfv.org/gerrit/yardstick

  cd yardstick

  # Switch to latest stable branch
  # git checkout <tag or stable branch>
  git checkout stable/euphrates

Configure the network proxy, either using the environment variables or setting
the global environment file:

.. code-block:: ini

    cat /etc/environment
    http_proxy='http://proxy.company.com:port'
    https_proxy='http://proxy.company.com:port'

.. code-block:: console

    export http_proxy='http://proxy.company.com:port'
    export https_proxy='http://proxy.company.com:port'

The last step is to modify the Yardstick installation inventory, used by
Ansible:

.. code-block:: ini

  cat ./ansible/install-inventory.ini
  [jumphost]
  localhost  ansible_connection=local

  [yardstick-standalone]
  yardstick-standalone-node ansible_host=192.168.1.2
  yardstick-standalone-node-2 ansible_host=192.168.1.3

  # section below is only due to backward compatibility.
  # it will be removed later
  [yardstick:children]
  jumphost

  [all:vars]
  ansible_user=root
  ansible_pass=root

.. note::

   Passwordless SSH access needs to be configured for all the nodes defined in
   the ``install-inventory.ini`` file.
   If you want to use password authentication, you need to install sshpass:

   .. code-block:: console

     sudo -EH apt-get install sshpass

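As an illustration, key-based access can be set up with the standard OpenSSH
tools; the node address below is the one from the sample inventory:

.. code-block:: console

  # Generate a key pair on the jumphost (accept the defaults)
  ssh-keygen -t rsa
  # Copy the public key to every node listed in install-inventory.ini
  ssh-copy-id root@192.168.1.2
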
To execute an installation for a Bare-Metal or a Standalone context:

.. code-block:: console

    ./nsb_setup.sh


To execute an installation for an OpenStack context:

.. code-block:: console

    ./nsb_setup.sh <path to admin-openrc.sh>

The above commands set up a Docker container with the latest Yardstick code.
To enter the container, execute:

.. code-block:: console

  docker exec -it yardstick bash

The setup will also automatically download all the packages needed for NSB
testing. Refer to the section **Install Yardstick using Docker (recommended)**
in chapter :doc:`04-installation` for more details on Docker.

Another way to execute an installation for a Bare-Metal or a Standalone context
is to use the Ansible script ``install.yaml``. Refer to chapter
:doc:`04-installation` for more details.

System Topology
===============

.. code-block:: console

  +----------+              +----------+
  |          |              |          |
  |          | (0)----->(0) |          |
  |    TG1   |              |    DUT   |
  |          |              |          |
  |          | (1)<-----(1) |          |
  +----------+              +----------+
  trafficgen_1                   vnf


Environment parameters and credentials
======================================

Config yardstick conf
---------------------

If you did not run ``yardstick env influxdb`` inside the container, which
generates a correct ``yardstick.conf``, then create the config file manually
(run inside the container):
::

    cp ./etc/yardstick/yardstick.conf.sample /etc/yardstick/yardstick.conf
    vi /etc/yardstick/yardstick.conf

Add ``trex_path``, ``trex_client_lib`` and ``bin_path`` to the ``nsb`` section.

::

  [DEFAULT]
  debug = True
  dispatcher = file, influxdb

  [dispatcher_influxdb]
  timeout = 5
  target = http://{YOUR_IP_HERE}:8086
  db_name = yardstick
  username = root
  password = root

  [nsb]
  trex_path=/opt/nsb_bin/trex/scripts
  bin_path=/opt/nsb_bin
  trex_client_lib=/opt/nsb_bin/trex_client/stl

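Before running tests it is worth verifying that the configured InfluxDB
endpoint is reachable; assuming InfluxDB 1.x, its ``/ping`` endpoint returns
``204 No Content`` when the service is up (replace the placeholder IP):

.. code-block:: console

  curl -i http://{YOUR_IP_HERE}:8086/ping
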
Run Yardstick - Network Service Testcases
=========================================


NS testing - using yardstick CLI
--------------------------------

  See :doc:`04-installation`

.. code-block:: console

  docker exec -it yardstick /bin/bash
  source /etc/yardstick/openstack.creds (only for heat TC if nsb_setup.sh was NOT used)
  export EXTERNAL_NETWORK="<openstack public network>" (only for heat TC)
  yardstick --debug task start yardstick/samples/vnf_samples/nsut/<vnf>/<test case>

Network Service Benchmarking - Bare-Metal
=========================================

Bare-Metal Config pod.yaml describing Topology
----------------------------------------------

Bare-Metal 2-Node setup
^^^^^^^^^^^^^^^^^^^^^^^
.. code-block:: console

  +----------+              +----------+
  |          |              |          |
  |          | (0)----->(0) |          |
  |    TG1   |              |    DUT   |
  |          |              |          |
  |          | (n)<-----(n) |          |
  +----------+              +----------+
  trafficgen_1                   vnf

Bare-Metal 3-Node setup - Correlated Traffic
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. code-block:: console

  +----------+              +----------+            +------------+
  |          |              |          |            |            |
  |          |              |          |            |            |
  |          | (0)----->(0) |          |            |    UDP     |
  |    TG1   |              |    DUT   |            |   Replay   |
  |          |              |          |            |            |
  |          |              |          |(1)<---->(0)|            |
  +----------+              +----------+            +------------+
  trafficgen_1                   vnf                 trafficgen_2


Bare-Metal Config pod.yaml
--------------------------
Before executing Yardstick test cases, make sure that ``pod.yaml`` reflects the
topology and update all the required fields::

    cp /etc/yardstick/nodes/pod.yaml.nsb.sample /etc/yardstick/nodes/pod.yaml

.. code-block:: YAML

    nodes:
    -
        name: trafficgen_1
        role: TrafficGen
        ip: 1.1.1.1
        user: root
        password: r00t
        interfaces:
            xe0:  # logical name from topology.yaml and vnfd.yaml
                vpci:      "0000:07:00.0"
                driver:    i40e # default kernel driver
                dpdk_port_num: 0
                local_ip: "152.16.100.20"
                netmask:   "255.255.255.0"
                local_mac: "00:00:00:00:00:01"
            xe1:  # logical name from topology.yaml and vnfd.yaml
                vpci:      "0000:07:00.1"
                driver:    i40e # default kernel driver
                dpdk_port_num: 1
                local_ip: "152.16.40.20"
                netmask:   "255.255.255.0"
                local_mac: "00:00:00:00:00:02"

    -
        name: vnf
        role: vnf
        ip: 1.1.1.2
        user: root
        password: r00t
        host: 1.1.1.2 #BM - host == ip, virtualized env - Host - compute node
        interfaces:
            xe0:  # logical name from topology.yaml and vnfd.yaml
                vpci:      "0000:07:00.0"
                driver:    i40e # default kernel driver
                dpdk_port_num: 0
                local_ip: "152.16.100.19"
                netmask:   "255.255.255.0"
                local_mac: "00:00:00:00:00:03"

            xe1:  # logical name from topology.yaml and vnfd.yaml
                vpci:      "0000:07:00.1"
                driver:    i40e # default kernel driver
                dpdk_port_num: 1
                local_ip: "152.16.40.19"
                netmask:   "255.255.255.0"
                local_mac: "00:00:00:00:00:04"
        routing_table:
        - network: "152.16.100.20"
          netmask: "255.255.255.0"
          gateway: "152.16.100.20"
          if: "xe0"
        - network: "152.16.40.20"
          netmask: "255.255.255.0"
          gateway: "152.16.40.20"
          if: "xe1"
        nd_route_tbl:
        - network: "0064:ff9b:0:0:0:0:9810:6414"
          netmask: "112"
          gateway: "0064:ff9b:0:0:0:0:9810:6414"
          if: "xe0"
        - network: "0064:ff9b:0:0:0:0:9810:2814"
          netmask: "112"
          gateway: "0064:ff9b:0:0:0:0:9810:2814"
          if: "xe1"

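The ``vpci`` and default ``driver`` values for each interface can be looked up
on the node itself, for example with ``lshw``, or with DPDK's ``dpdk-devbind``
tool if DPDK is installed (the tool's location below is an assumption; it
depends on your DPDK layout):

.. code-block:: console

  # List NICs with their PCI bus addresses
  lshw -c network -businfo
  # Alternatively, show which driver each DPDK-compatible port is bound to
  /opt/nsb_bin/dpdk-devbind.py --status
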

Network Service Benchmarking - Standalone Virtualization
========================================================

SR-IOV
------

SR-IOV Pre-requisites
^^^^^^^^^^^^^^^^^^^^^

On the host, where the VM is created:
 a) Create and configure a bridge named ``br-int`` for the VM to connect to the
    external network. Currently this can be done using a VXLAN tunnel.

    Execute the following on the host, where the VM is created:

  .. code-block:: console

      ip link add type vxlan remote <Jumphost IP> local <DUT IP> id <ID: 10> dstport 4789
      brctl addbr br-int
      brctl addif br-int vxlan0
      ip link set dev vxlan0 up
      ip addr add <IP#1, like: 172.20.2.1/24> dev br-int
      ip link set dev br-int up

  .. note:: It may be necessary to add extra iptables rules to forward traffic.

  .. code-block:: console

    iptables -A FORWARD -i br-int -s <network ip address>/<netmask> -j ACCEPT
    iptables -A FORWARD -o br-int -d <network ip address>/<netmask> -j ACCEPT

  Execute the following on a jump host:

  .. code-block:: console

      ip link add type vxlan remote <DUT IP> local <Jumphost IP> id <ID: 10> dstport 4789
      ip addr add <IP#2, like: 172.20.2.2/24> dev vxlan0
      ip link set dev vxlan0 up

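  To confirm that the tunnel is up, the bridge address can be pinged from the
  jump host (using the example addresses above):

  .. code-block:: console

      ping -c 3 172.20.2.1
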
  .. note:: Host and jump host are different baremetal servers.

 b) Modify the test case management CIDR.
    IP addresses IP#1, IP#2 and CIDR must be in the same network.

  .. code-block:: YAML

    servers:
      vnf:
        network_ports:
          mgmt:
            cidr: '1.1.1.7/24'

 c) Build the guest image for the VNF to run.
    Most of the sample test cases in Yardstick use a guest image called
    ``yardstick-nsb-image``, which deviates from an Ubuntu Cloud Server image.
    Yardstick has a tool for building this custom image with SampleVNF.
    It is necessary to have ``sudo`` rights to use this tool.

    You may also need to install several additional packages to use this tool,
    by following the commands below::

       sudo apt-get update && sudo apt-get install -y qemu-utils kpartx

    This image can be built using the following commands in the directory where
    Yardstick is installed:

    .. code-block:: console

       export YARD_IMG_ARCH='amd64'
       echo "Defaults env_keep += \"YARD_IMG_ARCH\"" | sudo tee -a /etc/sudoers

    Please use the Ansible script to generate the cloud image; refer to chapter
    :doc:`04-installation` for more details.

    .. note:: The VM should be built with a static IP and should be accessible
       from the yardstick host.


SR-IOV Config pod.yaml describing Topology
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

SR-IOV 2-Node setup
^^^^^^^^^^^^^^^^^^^
.. code-block:: console

                               +--------------------+
                               |                    |
                               |                    |
                               |        DUT         |
                               |       (VNF)        |
                               |                    |
                               +--------------------+
                               | VF NIC |  | VF NIC |
                               +--------+  +--------+
                                     ^          ^
                                     |          |
                                     |          |
  +----------+               +-------------------------+
  |          |               |       ^          ^      |
  |          |               |       |          |      |
  |          | (0)<----->(0) | ------           |      |
  |    TG1   |               |           SUT    |      |
  |          |               |                  |      |
  |          | (n)<----->(n) |------------------       |
  +----------+               +-------------------------+
  trafficgen_1                          host



SR-IOV 3-Node setup - Correlated Traffic
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. code-block:: console

                               +--------------------+
                               |                    |
                               |                    |
                               |        DUT         |
                               |       (VNF)        |
                               |                    |
                               +--------------------+
                               | VF NIC |  | VF NIC |
                               +--------+  +--------+
                                     ^          ^
                                     |          |
                                     |          |
  +----------+               +-------------------------+            +--------------+
  |          |               |       ^          ^      |            |              |
  |          |               |       |          |      |            |              |
  |          | (0)<----->(0) | ------           |      |            |     TG2      |
  |    TG1   |               |           SUT    |      |            | (UDP Replay) |
  |          |               |                  |      |            |              |
  |          | (n)<----->(n) |                  ------ | (n)<-->(n) |              |
  +----------+               +-------------------------+            +--------------+
  trafficgen_1                          host                       trafficgen_2

Before executing Yardstick test cases, make sure that ``pod.yaml`` reflects the
topology and update all the required fields.

.. code-block:: console

    cp <yardstick>/etc/yardstick/nodes/standalone/trex_bm.yaml.sample /etc/yardstick/nodes/standalone/pod_trex.yaml
    cp <yardstick>/etc/yardstick/nodes/standalone/host_sriov.yaml /etc/yardstick/nodes/standalone/host_sriov.yaml

.. note:: Update all the required fields like ip, user, password, pcis, etc.

SR-IOV Config pod_trex.yaml
^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. code-block:: YAML

    nodes:
    -
        name: trafficgen_1
        role: TrafficGen
        ip: 1.1.1.1
        user: root
        password: r00t
        key_filename: /root/.ssh/id_rsa
        interfaces:
            xe0:  # logical name from topology.yaml and vnfd.yaml
                vpci:      "0000:07:00.0"
                driver:    i40e # default kernel driver
                dpdk_port_num: 0
                local_ip: "152.16.100.20"
                netmask:   "255.255.255.0"
                local_mac: "00:00:00:00:00:01"
            xe1:  # logical name from topology.yaml and vnfd.yaml
                vpci:      "0000:07:00.1"
                driver:    i40e # default kernel driver
                dpdk_port_num: 1
                local_ip: "152.16.40.20"
                netmask:   "255.255.255.0"
                local_mac: "00:00:00:00:00:02"

SR-IOV Config host_sriov.yaml
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. code-block:: YAML

    nodes:
    -
       name: sriov
       role: Sriov
       ip: 192.168.100.101
       user: ""
       password: ""

SR-IOV testcase update:
``<yardstick>/samples/vnf_samples/nsut/vfw/tc_sriov_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``

Update "contexts" section
"""""""""""""""""""""""""

.. code-block:: YAML

  contexts:
   - name: yardstick
     type: Node
     file: /etc/yardstick/nodes/standalone/pod_trex.yaml
   - type: StandaloneSriov
     file: /etc/yardstick/nodes/standalone/host_sriov.yaml
     name: yardstick
     vm_deploy: True
     flavor:
       images: "/var/lib/libvirt/images/ubuntu.qcow2"
       ram: 4096
       extra_specs:
         hw:cpu_sockets: 1
         hw:cpu_cores: 6
         hw:cpu_threads: 2
       user: "" # update VM username
       password: "" # update password
     servers:
       vnf:
         network_ports:
           mgmt:
             cidr: '1.1.1.61/24'  # Update VM IP address, if static, <ip>/<mask> or if dynamic, <start of ip>/<mask>
           xe0:
             - uplink_0
           xe1:
             - downlink_0
     networks:
       uplink_0:
         phy_port: "0000:05:00.0"
         vpci: "0000:00:07.0"
         cidr: '152.16.100.10/24'
         gateway_ip: '152.16.100.20'
       downlink_0:
         phy_port: "0000:05:00.1"
         vpci: "0000:00:08.0"
         cidr: '152.16.40.10/24'
         gateway_ip: '152.16.100.20'

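With both pod files in place and the ``contexts`` section updated, the SR-IOV
test case can be launched from the Yardstick container in the usual way:

.. code-block:: console

  yardstick --debug task start samples/vnf_samples/nsut/vfw/tc_sriov_rfc2544_ipv4_1rule_1flow_64B_trex.yaml
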

OVS-DPDK
--------

OVS-DPDK Pre-requisites
^^^^^^^^^^^^^^^^^^^^^^^

On the host, where the VM is created:
 a) Create and configure a bridge named ``br-int`` for the VM to connect to the
    external network. Currently this can be done using a VXLAN tunnel.

    Execute the following on the host, where the VM is created:

  .. code-block:: console

      ip link add type vxlan remote <Jumphost IP> local <DUT IP> id <ID: 10> dstport 4789
      brctl addbr br-int
      brctl addif br-int vxlan0
      ip link set dev vxlan0 up
      ip addr add <IP#1, like: 172.20.2.1/24> dev br-int
      ip link set dev br-int up

  .. note:: It may be necessary to add extra iptables rules to forward traffic.

  .. code-block:: console

    iptables -A FORWARD -i br-int -s <network ip address>/<netmask> -j ACCEPT
    iptables -A FORWARD -o br-int -d <network ip address>/<netmask> -j ACCEPT

  Execute the following on a jump host:

  .. code-block:: console

      ip link add type vxlan remote <DUT IP> local <Jumphost IP> id <ID: 10> dstport 4789
      ip addr add <IP#2, like: 172.20.2.2/24> dev vxlan0
      ip link set dev vxlan0 up

  .. note:: Host and jump host are different baremetal servers.

 b) Modify the test case management CIDR.
    IP addresses IP#1, IP#2 and CIDR must be in the same network.

  .. code-block:: YAML

    servers:
      vnf:
        network_ports:
          mgmt:
            cidr: '1.1.1.7/24'

 c) Build the guest image for the VNF to run.
    Most of the sample test cases in Yardstick use a guest image called
    ``yardstick-nsb-image``, which deviates from an Ubuntu Cloud Server image.
    Yardstick has a tool for building this custom image with SampleVNF.
    It is necessary to have ``sudo`` rights to use this tool.

    You may also need to install several additional packages to use this tool,
    by following the commands below::

       sudo apt-get update && sudo apt-get install -y qemu-utils kpartx

    This image can be built using the following commands in the directory where
    Yardstick is installed::

       export YARD_IMG_ARCH='amd64'
       echo "Defaults env_keep += \"YARD_IMG_ARCH\"" | sudo tee -a /etc/sudoers
       sudo tools/yardstick-img-dpdk-modify tools/ubuntu-server-cloudimg-samplevnf-modify.sh

    For more details refer to chapter :doc:`04-installation`.

    .. note:: The VM should be built with a static IP and should be accessible
       from the yardstick host.

 d) OVS & DPDK version.
     - OVS 2.7 with DPDK 16.11.1 and above are supported.

 e) Set up OVS-DPDK on the host.
     Please refer to the following link on how to set up `OVS-DPDK <http://docs.openvswitch.org/en/latest/intro/install/dpdk/>`_.


OVS-DPDK Config pod.yaml describing Topology
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

OVS-DPDK 2-Node setup
^^^^^^^^^^^^^^^^^^^^^


.. code-block:: console

                               +--------------------+
                               |                    |
                               |                    |
                               |        DUT         |
                               |       (VNF)        |
                               |                    |
                               +--------------------+
                               | virtio |  | virtio |
                               +--------+  +--------+
                                    ^          ^
                                    |          |
                                    |          |
                               +--------+  +--------+
                               | vHOST0 |  | vHOST1 |
  +----------+               +-------------------------+
  |          |               |       ^          ^      |
  |          |               |       |          |      |
  |          | (0)<----->(0) | ------           |      |
  |    TG1   |               |          SUT     |      |
  |          |               |       (ovs-dpdk) |      |
  |          | (n)<----->(n) |------------------       |
  +----------+               +-------------------------+
  trafficgen_1                          host


OVS-DPDK 3-Node setup - Correlated Traffic
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. code-block:: console

                               +--------------------+
                               |                    |
                               |                    |
                               |        DUT         |
                               |       (VNF)        |
                               |                    |
                               +--------------------+
                               | virtio |  | virtio |
                               +--------+  +--------+
                                    ^          ^
                                    |          |
                                    |          |
                               +--------+  +--------+
                               | vHOST0 |  | vHOST1 |
  +----------+               +-------------------------+          +------------+
  |          |               |       ^          ^      |          |            |
  |          |               |       |          |      |          |            |
  |          | (0)<----->(0) | ------           |      |          |    TG2     |
  |    TG1   |               |          SUT     |      |          |(UDP Replay)|
  |          |               |      (ovs-dpdk)  |      |          |            |
  |          | (n)<----->(n) |                  ------ |(n)<-->(n)|            |
  +----------+               +-------------------------+          +------------+
  trafficgen_1                          host                       trafficgen_2


Before executing Yardstick test cases, make sure that ``pod.yaml`` reflects the
topology and update all the required fields.

.. code-block:: console

  cp <yardstick>/etc/yardstick/nodes/standalone/trex_bm.yaml.sample /etc/yardstick/nodes/standalone/pod_trex.yaml
  cp <yardstick>/etc/yardstick/nodes/standalone/host_ovs.yaml /etc/yardstick/nodes/standalone/host_ovs.yaml

.. note:: Update all the required fields like ip, user, password, pcis, etc.

OVS-DPDK Config pod_trex.yaml
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. code-block:: YAML

    nodes:
    -
      name: trafficgen_1
      role: TrafficGen
      ip: 1.1.1.1
      user: root
      password: r00t
      interfaces:
          xe0:  # logical name from topology.yaml and vnfd.yaml
              vpci:      "0000:07:00.0"
              driver:    i40e # default kernel driver
              dpdk_port_num: 0
              local_ip: "152.16.100.20"
              netmask:   "255.255.255.0"
              local_mac: "00:00:00:00:00:01"
          xe1:  # logical name from topology.yaml and vnfd.yaml
              vpci:      "0000:07:00.1"
              driver:    i40e # default kernel driver
              dpdk_port_num: 1
              local_ip: "152.16.40.20"
              netmask:   "255.255.255.0"
              local_mac: "00:00:00:00:00:02"

OVS-DPDK Config host_ovs.yaml
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. code-block:: YAML

    nodes:
    -
       name: ovs_dpdk
       role: OvsDpdk
       ip: 192.168.100.101
       user: ""
       password: ""

ovs_dpdk testcase update:
``<yardstick>/samples/vnf_samples/nsut/vfw/tc_ovs_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``

Update "contexts" section
"""""""""""""""""""""""""

.. code-block:: YAML

  contexts:
   - name: yardstick
     type: Node
     file: /etc/yardstick/nodes/standalone/pod_trex.yaml
   - type: StandaloneOvsDpdk
     name: yardstick
     file: /etc/yardstick/nodes/standalone/host_ovs.yaml
     vm_deploy: True
     ovs_properties:
       version:
         ovs: 2.7.0
         dpdk: 16.11.1
       pmd_threads: 2
       ram:
         socket_0: 2048
         socket_1: 2048
       queues: 4
       vpath: "/usr/local"

     flavor:
       images: "/var/lib/libvirt/images/ubuntu.qcow2"
       ram: 4096
       extra_specs:
         hw:cpu_sockets: 1
         hw:cpu_cores: 6
         hw:cpu_threads: 2
       user: "" # update VM username
       password: "" # update password
     servers:
       vnf:
         network_ports:
           mgmt:
             cidr: '1.1.1.61/24'  # Update VM IP address, if static, <ip>/<mask> or if dynamic, <start of ip>/<mask>
           xe0:
             - uplink_0
           xe1:
             - downlink_0
     networks:
       uplink_0:
         phy_port: "0000:05:00.0"
         vpci: "0000:00:07.0"
         cidr: '152.16.100.10/24'
         gateway_ip: '152.16.100.20'
       downlink_0:
         phy_port: "0000:05:00.1"
         vpci: "0000:00:08.0"
         cidr: '152.16.40.10/24'
         gateway_ip: '152.16.100.20'

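As with the SR-IOV case, the OVS-DPDK test case can then be launched from the
Yardstick container:

.. code-block:: console

  yardstick --debug task start samples/vnf_samples/nsut/vfw/tc_ovs_rfc2544_ipv4_1rule_1flow_64B_trex.yaml
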

Network Service Benchmarking - OpenStack with SR-IOV support
============================================================

This section describes how to run a Sample VNF test case, using Heat context,
with SR-IOV. It also covers how to install OpenStack in Ubuntu 16.04, using
DevStack, with SR-IOV support.


Single node OpenStack setup with external TG
--------------------------------------------

.. code-block:: console

                                 +----------------------------+
                                 |OpenStack(DevStack)         |
                                 |                            |
                                 |   +--------------------+   |
                                 |   |sample-VNF VM       |   |
                                 |   |                    |   |
                                 |   |        DUT         |   |
                                 |   |       (VNF)        |   |
                                 |   |                    |   |
                                 |   +--------+  +--------+   |
                                 |   | VF NIC |  | VF NIC |   |
                                 |   +-----+--+--+----+---+   |
                                 |         ^          ^       |
                                 |         |          |       |
  +----------+                   +---------+----------+-------+
  |          |                   |        VF0        VF1      |
  |          |                   |         ^          ^       |
  |          |                   |         |   SUT    |       |
  |    TG    | (PF0)<----->(PF0) +---------+          |       |
  |          |                   |                    |       |
  |          | (PF1)<----->(PF1) +--------------------+       |
  |          |                   |                            |
  +----------+                   +----------------------------+
  trafficgen_1                                 host


Host pre-configuration
^^^^^^^^^^^^^^^^^^^^^^

.. warning:: The following configuration requires sudo access to the system.
  Make sure that your user has the access.

Enable the Intel VT-d or AMD-Vi extension in the BIOS. Some system manufacturers
disable this extension by default.

Activate the Intel VT-d or AMD-Vi extension in the kernel by modifying the GRUB
config file ``/etc/default/grub``.

For the Intel platform:

.. code:: bash

  ...
  GRUB_CMDLINE_LINUX_DEFAULT="intel_iommu=on"
  ...

For the AMD platform:

.. code:: bash

  ...
  GRUB_CMDLINE_LINUX_DEFAULT="amd_iommu=on"
  ...

Update the grub configuration file and restart the system:

.. warning:: The following command will reboot the system.

.. code:: bash

  sudo update-grub
  sudo reboot

Make sure the extension has been enabled:

.. code:: bash

  sudo journalctl -b 0 | grep -e IOMMU -e DMAR

  Feb 06 14:50:14 hostname kernel: ACPI: DMAR 0x000000006C406000 0001E0 (v01 INTEL  S2600WF  00000001 INTL 20091013)
  Feb 06 14:50:14 hostname kernel: DMAR: IOMMU enabled
  Feb 06 14:50:14 hostname kernel: DMAR: Host address width 46
  Feb 06 14:50:14 hostname kernel: DMAR: DRHD base: 0x000000d37fc000 flags: 0x0
  Feb 06 14:50:14 hostname kernel: DMAR: dmar0: reg_base_addr d37fc000 ver 1:0 cap 8d2078c106f0466 ecap f020de
  Feb 06 14:50:14 hostname kernel: DMAR: DRHD base: 0x000000e0ffc000 flags: 0x0
  Feb 06 14:50:14 hostname kernel: DMAR: dmar1: reg_base_addr e0ffc000 ver 1:0 cap 8d2078c106f0466 ecap f020de
  Feb 06 14:50:14 hostname kernel: DMAR: DRHD base: 0x000000ee7fc000 flags: 0x0

Setup the system proxy (if needed). Add the following configuration into the
``/etc/environment`` file:

.. note:: The proxy server name/port and IPs should be changed according to
  the actual/current proxy configuration in the lab.

.. code:: bash

  export http_proxy=http://proxy.company.com:port
  export https_proxy=http://proxy.company.com:port
  export ftp_proxy=http://proxy.company.com:port
  export no_proxy=localhost,127.0.0.1,company.com,<IP-OF-HOST1>,<IP-OF-HOST2>,...
  export NO_PROXY=localhost,127.0.0.1,company.com,<IP-OF-HOST1>,<IP-OF-HOST2>,...

Upgrade the system:

.. code:: bash

  sudo -EH apt-get update
  sudo -EH apt-get upgrade
  sudo -EH apt-get dist-upgrade

Install dependencies needed for DevStack:

.. code:: bash

  sudo -EH apt-get install python
  sudo -EH apt-get install python-dev
  sudo -EH apt-get install python-pip

Setup SR-IOV ports on the host:

.. note:: The ``enp24s0f0``, ``enp24s0f1`` are physical function (PF) interfaces
  on a host and ``enp24s0f3`` is a public interface used in OpenStack, so the
  interface names should be changed according to the HW environment used for
  testing.

.. code:: bash

  sudo ip link set dev enp24s0f0 up
  sudo ip link set dev enp24s0f1 up
  sudo ip link set dev enp24s0f3 up

  # Create VFs on PF
  echo 2 | sudo tee /sys/class/net/enp24s0f0/device/sriov_numvfs
  echo 2 | sudo tee /sys/class/net/enp24s0f1/device/sriov_numvfs

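The newly created VFs should now be visible both on the PCI bus and as
additional VF entries under the PF links, e.g.:

.. code:: bash

  lspci | grep -i "Virtual Function"
  ip link show enp24s0f0
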

DevStack installation
^^^^^^^^^^^^^^^^^^^^^

Use the official `DevStack <https://docs.openstack.org/devstack/pike/>`_
documentation to install OpenStack on a host. Please note that the stable
``pike`` branch of the devstack repo should be used during the installation.
The required ``local.conf`` configuration file is described below.

DevStack configuration file:

.. note:: Update the devstack configuration file by replacing angle brackets
  with a short description inside.

.. note:: Use ``lspci | grep Ether`` & ``lspci -n | grep <PCI ADDRESS>``
  commands to get the device and vendor id of the virtual function (VF).

.. literalinclude:: code/single-devstack-local.conf
   :language: console

Start the devstack installation on the host.


TG host configuration
^^^^^^^^^^^^^^^^^^^^^

Yardstick automatically installs and configures the TRex traffic generator on
the TG host based on the provided POD file (see below). It is nevertheless
recommended to check the compatibility of the installed NIC on the TG server
with the TRex software using the manual at
https://trex-tgn.cisco.com/trex/doc/trex_manual.html.


Run the Sample VNF test case
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

There is an example of a Sample VNF test case ready to be executed in an
OpenStack environment with SR-IOV support: ``samples/vnf_samples/nsut/vfw/
tc_heat_sriov_external_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``.

Install yardstick using `Install Yardstick (NSB Testing)`_ steps for OpenStack
context.

Create the pod file for the TG in the yardstick repo folder located in the
yardstick container:

.. note:: The ``ip``, ``user``, ``password`` and ``vpci`` fields should be
  changed according to the HW environment used for the testing. Use
  ``lshw -c network -businfo`` command to get the PF PCI address for the
  ``vpci`` field.

.. literalinclude:: code/single-yardstick-pod.conf
   :language: console

Run the Sample vFW RFC2544 SR-IOV TC (``samples/vnf_samples/nsut/vfw/
tc_heat_sriov_external_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``) in the heat
context using steps described in `NS testing - using yardstick CLI`_ section.


Multi node OpenStack TG and VNF setup (two nodes)
-------------------------------------------------

.. code-block:: console

  +----------------------------+                   +----------------------------+
  |OpenStack(DevStack)         |                   |OpenStack(DevStack)         |
  |                            |                   |                            |
  |   +--------------------+   |                   |   +--------------------+   |
  |   |sample-VNF VM       |   |                   |   |sample-VNF VM       |   |
  |   |                    |   |                   |   |                    |   |
  |   |         TG         |   |                   |   |        DUT         |   |
  |   |    trafficgen_1    |   |                   |   |       (VNF)        |   |
  |   |                    |   |                   |   |                    |   |
  |   +--------+  +--------+   |                   |   +--------+  +--------+   |
  |   | VF NIC |  | VF NIC |   |                   |   | VF NIC |  | VF NIC |   |
  |   +----+---+--+----+---+   |                   |   +-----+--+--+----+---+   |
  |        ^           ^       |                   |         ^          ^       |
  |        |           |       |                   |         |          |       |
  +--------+-----------+-------+                   +---------+----------+-------+
  |       VF0         VF1      |                   |        VF0        VF1      |
  |        ^           ^       |                   |         ^          ^       |
  |        |    SUT2   |       |                   |         |   SUT1   |       |
  |        |           +-------+ (PF0)<----->(PF0) +---------+          |       |
  |        |                   |                   |                    |       |
  |        +-------------------+ (PF1)<----->(PF1) +--------------------+       |
  |                            |                   |                            |
  +----------------------------+                   +----------------------------+
           host2 (compute)                               host1 (controller)


Controller/Compute pre-configuration
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Pre-configuration of the controller and compute hosts is the same as
described in `Host pre-configuration`_ section. Follow the steps in the section.


DevStack configuration
^^^^^^^^^^^^^^^^^^^^^^

Use the official `DevStack <https://docs.openstack.org/devstack/pike/>`_
documentation to install OpenStack on a host. Please note that the stable
``pike`` branch of the devstack repo should be used during the installation.
The required ``local.conf`` configuration files are described below.

.. note:: Update the devstack configuration files by replacing angle brackets
  with a short description inside.

.. note:: Use ``lspci | grep Ether`` & ``lspci -n | grep <PCI ADDRESS>``
  commands to get the device and vendor id of the virtual function (VF).

DevStack configuration file for the controller host:

.. literalinclude:: code/multi-devstack-controller-local.conf
   :language: console

DevStack configuration file for the compute host:

.. literalinclude:: code/multi-devstack-compute-local.conf
   :language: console

Start the devstack installation on the controller and compute hosts.


Run the sample vFW TC
^^^^^^^^^^^^^^^^^^^^^

Install yardstick using `Install Yardstick (NSB Testing)`_ steps for OpenStack
context.

Run the sample vFW RFC2544 SR-IOV TC (``samples/vnf_samples/nsut/vfw/
tc_heat_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``) in the heat
context using steps described in `NS testing - using yardstick CLI`_ section
and the following yardstick command line arguments:

.. code:: bash

  yardstick -d task start --task-args='{"provider": "sriov"}' \
  samples/vnf_samples/nsut/vfw/tc_heat_rfc2544_ipv4_1rule_1flow_64B_trex.yaml


Enabling other Traffic generators
=================================

IxLoad
------

1. Software needed: IxLoadAPI ``<IxLoadTclApi version>Linux64.bin.tgz`` and
   ``<IxOS version>Linux64.bin.tar.gz`` (download from the Ixia support site).
   Install ``<IxLoadTclApi version>Linux64.bin.tgz`` and
   ``<IxOS version>Linux64.bin.tar.gz``.
   If the installation was not done inside the container, after installing
   the IXIA client, check ``/opt/ixia/ixload/<ver>/bin/ixloadpython`` and make
   sure you can run this cmd inside the yardstick container. Usually the user
   is required to copy or link ``/opt/ixia/python/<ver>/bin/ixiapython`` to
   ``/usr/bin/ixiapython<ver>`` inside the container.
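
   For example, the link mentioned above could be created as follows (keep the
   ``<ver>`` placeholders matching your actual installed versions):

   .. code-block:: console

     ln -s /opt/ixia/python/<ver>/bin/ixiapython /usr/bin/ixiapython<ver>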

2. Update the ``pod_ixia.yaml`` file with the Ixia details.

  .. code-block:: console

    cp <repo>/etc/yardstick/nodes/pod.yaml.nsb.sample.ixia etc/yardstick/nodes/pod_ixia.yaml

  Config ``pod_ixia.yaml``

  .. literalinclude:: code/pod_ixia.yaml
     :language: console

  For sriov/ovs_dpdk pod files, please refer to the Standalone Virtualization
  sections above for the ovs-dpdk/sriov configuration.

3. Start the IxOS TCL Server (install 'Ixia IxExplorer IxOS <version>').
   You will also need to configure the IxLoad machine to start the IXIA
   IxosTclServer. This can be started like so:

   * Connect to the IxLoad machine using RDP
   * Go to:
     ``Start->Programs->Ixia->IxOS->IxOS 8.01-GA-Patch1->Ixia Tcl Server IxOS 8.01-GA-Patch1``
     or
     ``"C:\Program Files (x86)\Ixia\IxOS\8.01-GA-Patch1\ixTclServer.exe"``

4. Create a folder ``Results`` in ``c:\`` and share the folder on the network.

5. Execute the test case in the samplevnf folder, e.g.
   ``<repo>/samples/vnf_samples/nsut/vfw/tc_baremetal_http_ixload_1b_Requests-65000_Concurrency.yaml``

IxNetwork
---------

IxNetwork testcases use the IxNetwork API Python Bindings module, which is
installed as part of the requirements of the project.

1. Update the ``pod_ixia.yaml`` file with the Ixia details.

  .. code-block:: console

    cp <repo>/etc/yardstick/nodes/pod.yaml.nsb.sample.ixia etc/yardstick/nodes/pod_ixia.yaml

  Config ``pod_ixia.yaml``

  .. literalinclude:: code/pod_ixia.yaml
     :language: console

  For sriov/ovs_dpdk pod files, please refer to the Standalone Virtualization
  sections above for the ovs-dpdk/sriov configuration.

2. Start the IxNetwork TCL Server.
   You will also need to configure the IxNetwork machine to start the IXIA
   IxNetworkTclServer. This can be started like so:

    * Connect to the IxNetwork machine using RDP
    * Go to:
      ``Start->Programs->Ixia->IxNetwork->IxNetwork 7.21.893.14 GA->IxNetworkTclServer``
      (or ``IxNetworkApiServer``)

3. Execute the test case in the samplevnf folder, e.g.
   ``<repo>/samples/vnf_samples/nsut/vfw/tc_baremetal_rfc2544_ipv4_1rule_1flow_64B_ixia.yaml``

Spirent Landslide
-----------------

In order to use Spirent Landslide for vEPC testcases, some dependencies have
to be preinstalled and properly configured.

- Java

    A 32-bit Java installation is required for the Spirent Landslide TCL API.

    | ``$ sudo apt-get install openjdk-8-jdk:i386``

    .. important::
      Make sure ``LD_LIBRARY_PATH`` is pointing to the 32-bit JRE (an example
      environment sketch follows this list). For more details check the
      `Linux Troubleshooting <http://TAS_HOST_IP/tclapiinstall.html#trouble>`_
      section of the installation instructions.

- LsApi (Tcl API module)

    Follow the Landslide documentation for detailed instructions on the Linux
    installation of the Tcl API and its dependencies
    ``http://TAS_HOST_IP/tclapiinstall.html``.
    For working with the LsApi Python wrapper, only steps 1-5 are required.

    .. note:: After installation make sure your API home path is included in
      the ``PYTHONPATH`` environment variable.

    .. important::
      The current version of the LsApi module has an issue with reading
      ``LD_LIBRARY_PATH``. For the LsApi module to initialize correctly, the
      following lines (184-186) in ``lsapi.py``

    .. code-block:: python

        ldpath = os.environ.get('LD_LIBRARY_PATH', '')
        if ldpath == '':
            environ['LD_LIBRARY_PATH'] = environ['LD_LIBRARY_PATH'] + ':' + ldpath

    should be changed to:

    .. code-block:: python

        ldpath = os.environ.get('LD_LIBRARY_PATH', '')
        if not ldpath == '':
            environ['LD_LIBRARY_PATH'] = environ['LD_LIBRARY_PATH'] + ':' + ldpath

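As an illustration only, the environment for the Landslide TCL API might be
prepared as below; the 32-bit JRE path and the LsApi install location are
assumptions and must match your actual installation:

.. code-block:: console

  # Point LD_LIBRARY_PATH at the 32-bit JVM (path is an assumption)
  export LD_LIBRARY_PATH=/usr/lib/jvm/java-8-openjdk-i386/jre/lib/i386/server:$LD_LIBRARY_PATH
  # Make the LsApi Python wrapper importable (install location is an assumption)
  export PYTHONPATH=/opt/lsapi:$PYTHONPATH
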
.. note:: The Spirent Landslide TCL software package needs to be updated in
  case the user upgrades to a new version of the Spirent Landslide software.