.. This work is licensed under a Creative Commons Attribution 4.0 International
.. License.
.. http://creativecommons.org/licenses/by/4.0
.. (c) OPNFV, 2016-2017 Intel Corporation.

Yardstick - NSB Testing - Installation
======================================

Abstract
--------

The Network Service Benchmarking (NSB) extends the Yardstick framework to
perform VNF characterization and benchmarking in three different execution
environments: bare metal (i.e. native Linux environment), standalone virtual
environment, and managed virtualized environment (e.g. OpenStack). It also
adds the capability to interact with external traffic generators, both
hardware- and software-based, for triggering and validating the traffic
according to user-defined profiles.

The steps needed to run Yardstick with NSB testing are:

* Install Yardstick (NSB testing)
* Set up or reference a pod.yaml file describing the test topology
* Create or reference the test configuration YAML file
* Run the test case


Prerequisites
-------------

Refer to the chapter *Yardstick Installation* for more information on
Yardstick prerequisites.

Several additional prerequisites are needed for Yardstick (VNF testing):

  - Python modules: pyzmq, pika
  - flex
  - bison
  - build-essential
  - automake
  - libtool
  - librabbitmq-dev
  - rabbitmq-server
  - collectd
  - intel-cmt-cat
Hardware & Software Ingredients
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

SUT requirements:


   +-----------+--------------------+
   | Item      | Description        |
   +-----------+--------------------+
   | Memory    | Min 20GB           |
   +-----------+--------------------+
   | NICs      | 2 x 10G            |
   +-----------+--------------------+
   | OS        | Ubuntu 16.04.3 LTS |
   +-----------+--------------------+
   | kernel    | 4.4.0-34-generic   |
   +-----------+--------------------+
   | DPDK      | 17.02              |
   +-----------+--------------------+

Boot and BIOS settings:


   +------------------+---------------------------------------------------+
   | Boot settings    | default_hugepagesz=1G hugepagesz=1G hugepages=16  |
   |                  | hugepagesz=2M hugepages=2048 isolcpus=1-11,22-33  |
   |                  | nohz_full=1-11,22-33 rcu_nocbs=1-11,22-33         |
   |                  | iommu=on iommu=pt intel_iommu=on                  |
   |                  | Note: nohz_full and rcu_nocbs are used to reduce  |
   |                  | Linux kernel interrupts                           |
   +------------------+---------------------------------------------------+
   | BIOS             | CPU Power and Performance Policy <Performance>    |
   |                  | CPU C-state Disabled                              |
   |                  | CPU P-state Disabled                              |
   |                  | Enhanced Intel® Speedstep® Tech Disabled          |
   |                  | Hyper-Threading Technology (If supported) Enabled |
   |                  | Virtualization Technology Enabled                 |
   |                  | Intel(R) VT for Direct I/O Enabled                |
   |                  | Coherency Enabled                                 |
   |                  | Turbo Boost Disabled                              |
   +------------------+---------------------------------------------------+

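
The hugepage parameters in the example boot line reserve a fixed amount of
memory at boot, which is why the 20 GB memory minimum above matters; the
arithmetic can be checked directly:

```shell
# Memory reserved by the example boot line:
#   16 pages x 1 GB  +  2048 pages x 2 MB
one_gb_total=$((16 * 1024))     # MB reserved by the 1G hugepages
two_mb_total=$((2048 * 2))      # MB reserved by the 2M hugepages
echo "hugepage reservation: $((one_gb_total + two_mb_total)) MB"
# prints: hugepage reservation: 20480 MB
```

After boot, the actual reservation can be inspected with
``grep Huge /proc/meminfo``.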


Install Yardstick (NSB Testing)
-------------------------------

Download the source code and install Yardstick from it:

.. code-block:: console

  git clone https://gerrit.opnfv.org/gerrit/yardstick

  cd yardstick

  # Switch to latest stable branch
  # git checkout <tag or stable branch>
  git checkout stable/euphrates

Configure the network proxy, either by exporting the environment variables or
by setting them in the global environment file:

.. code-block:: ini

    cat /etc/environment
    http_proxy='http://proxy.company.com:port'
    https_proxy='http://proxy.company.com:port'

.. code-block:: console

    export http_proxy='http://proxy.company.com:port'
    export https_proxy='http://proxy.company.com:port'

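
When the jumphost must reach the standalone nodes directly, it can also help
to exclude them from the proxy via ``no_proxy``; this is a sketch where the
proxy host, port, and node IPs are placeholders to be replaced with your own:

```shell
# Example values only: replace the proxy host/port and node IPs with yours.
export http_proxy='http://proxy.company.com:8080'
export https_proxy='http://proxy.company.com:8080'
# Keep local and in-pod Ansible connections off the proxy:
export no_proxy='localhost,127.0.0.1,192.168.1.2,192.168.1.3'
echo "no_proxy=$no_proxy"
```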
The last step is to modify the Yardstick installation inventory used by
Ansible:

.. code-block:: ini

  cat ./ansible/yardstick-install-inventory.ini
  [jumphost]
  localhost  ansible_connection=local

  [yardstick-standalone]
  yardstick-standalone-node ansible_host=192.168.1.2
  yardstick-standalone-node-2 ansible_host=192.168.1.3

  # The section below exists only for backward compatibility.
  # It will be removed later.
  [yardstick:children]
  jumphost

  [all:vars]
  ansible_user=root
  ansible_pass=root

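
Before running the installer, a minimal inventory like the one above can be
sanity-checked by counting its INI section headers; a sketch using a
temporary path and an example IP:

```shell
# Write a minimal two-section inventory and confirm both INI
# sections are present (IP address is an example).
cat > /tmp/yardstick-install-inventory.ini <<'EOF'
[jumphost]
localhost ansible_connection=local

[yardstick-standalone]
yardstick-standalone-node ansible_host=192.168.1.2
EOF
grep -c '^\[' /tmp/yardstick-install-inventory.ini
# prints: 2
```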
To execute an installation for a Bare-Metal or a Standalone context:

.. code-block:: console

    ./nsb_setup.sh


To execute an installation for an OpenStack context:

.. code-block:: console

    ./nsb_setup.sh <path to admin-openrc.sh>

The above commands set up a Docker container with the latest Yardstick code
and automatically download all the packages needed for the NSB testing setup.
To enter the container:

.. code-block:: console

  docker exec -it yardstick bash

Refer to the section **Install Yardstick using Docker (recommended)** in
chapter :doc:`04-installation` for more details on Docker.

System Topology
---------------

.. code-block:: console

  +----------+              +----------+
  |          |              |          |
  |          | (0)----->(0) |          |
  |    TG1   |              |    DUT   |
  |          |              |          |
  |          | (1)<-----(1) |          |
  +----------+              +----------+
  trafficgen_1                   vnf


Environment parameters and credentials
--------------------------------------

Config yardstick conf
^^^^^^^^^^^^^^^^^^^^^

If you did not run ``yardstick env influxdb`` inside the container (which
generates a correct ``yardstick.conf``), then create the config file manually
(run inside the container)::

    cp ./etc/yardstick/yardstick.conf.sample /etc/yardstick/yardstick.conf
    vi /etc/yardstick/yardstick.conf

Add ``trex_path``, ``trex_client_lib`` and ``bin_path`` to the ``nsb``
section::

  [DEFAULT]
  debug = True
  dispatcher = file, influxdb

  [dispatcher_influxdb]
  timeout = 5
  target = http://{YOUR_IP_HERE}:8086
  db_name = yardstick
  username = root
  password = root

  [nsb]
  trex_path=/opt/nsb_bin/trex/scripts
  bin_path=/opt/nsb_bin
  trex_client_lib=/opt/nsb_bin/trex_client/stl

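
The ``[nsb]`` paths can be read back with standard tools to confirm the file
parses the way Yardstick will see it; a sketch that works on a temporary copy
of the section (values match the example above):

```shell
# Write the example [nsb] section to a temp file and read back trex_path.
cat > /tmp/yardstick-nsb.conf <<'EOF'
[nsb]
trex_path=/opt/nsb_bin/trex/scripts
bin_path=/opt/nsb_bin
trex_client_lib=/opt/nsb_bin/trex_client/stl
EOF
awk -F= '$1 == "trex_path" {print $2}' /tmp/yardstick-nsb.conf
# prints: /opt/nsb_bin/trex/scripts
```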
Run Yardstick - Network Service Testcases
-----------------------------------------

NS testing - using yardstick CLI
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  See :doc:`04-installation`

.. code-block:: console

  docker exec -it yardstick /bin/bash
  source /etc/yardstick/openstack.creds (only for heat TC if nsb_setup.sh was NOT used)
  export EXTERNAL_NETWORK="<openstack public network>" (only for heat TC)
  yardstick --debug task start yardstick/samples/vnf_samples/nsut/<vnf>/<test case>

Network Service Benchmarking - Bare-Metal
-----------------------------------------

Bare-Metal Config pod.yaml describing Topology
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Bare-Metal 2-Node setup
#######################

.. code-block:: console

  +----------+              +----------+
  |          |              |          |
  |          | (0)----->(0) |          |
  |    TG1   |              |    DUT   |
  |          |              |          |
  |          | (n)<-----(n) |          |
  +----------+              +----------+
  trafficgen_1                   vnf

Bare-Metal 3-Node setup - Correlated Traffic
############################################

.. code-block:: console

  +----------+              +----------+            +------------+
  |          |              |          |            |            |
  |          |              |          |            |            |
  |          | (0)----->(0) |          |            |    UDP     |
  |    TG1   |              |    DUT   |            |   Replay   |
  |          |              |          |            |            |
  |          |              |          |(1)<---->(0)|            |
  +----------+              +----------+            +------------+
  trafficgen_1                   vnf                 trafficgen_2


Bare-Metal Config pod.yaml
^^^^^^^^^^^^^^^^^^^^^^^^^^

Before executing Yardstick test cases, make sure that ``pod.yaml`` reflects
the topology and update all the required fields::

    cp /etc/yardstick/nodes/pod.yaml.nsb.sample /etc/yardstick/nodes/pod.yaml

.. code-block:: YAML

    nodes:
    -
        name: trafficgen_1
        role: TrafficGen
        ip: 1.1.1.1
        user: root
        password: r00t
        interfaces:
            xe0:  # logical name from topology.yaml and vnfd.yaml
                vpci:      "0000:07:00.0"
                driver:    i40e # default kernel driver
                dpdk_port_num: 0
                local_ip: "152.16.100.20"
                netmask:   "255.255.255.0"
                local_mac: "00:00:00:00:00:01"
            xe1:  # logical name from topology.yaml and vnfd.yaml
                vpci:      "0000:07:00.1"
                driver:    i40e # default kernel driver
                dpdk_port_num: 1
                local_ip: "152.16.40.20"
                netmask:   "255.255.255.0"
                local_mac: "00:00:00:00:00:02"

    -
        name: vnf
        role: vnf
        ip: 1.1.1.2
        user: root
        password: r00t
        host: 1.1.1.2 # BM: host == ip; virtualized env: host == compute node
        interfaces:
            xe0:  # logical name from topology.yaml and vnfd.yaml
                vpci:      "0000:07:00.0"
                driver:    i40e # default kernel driver
                dpdk_port_num: 0
                local_ip: "152.16.100.19"
                netmask:   "255.255.255.0"
                local_mac: "00:00:00:00:00:03"

            xe1:  # logical name from topology.yaml and vnfd.yaml
                vpci:      "0000:07:00.1"
                driver:    i40e # default kernel driver
                dpdk_port_num: 1
                local_ip: "152.16.40.19"
                netmask:   "255.255.255.0"
                local_mac: "00:00:00:00:00:04"
        routing_table:
        - network: "152.16.100.20"
          netmask: "255.255.255.0"
          gateway: "152.16.100.20"
          if: "xe0"
        - network: "152.16.40.20"
          netmask: "255.255.255.0"
          gateway: "152.16.40.20"
          if: "xe1"
        nd_route_tbl:
        - network: "0064:ff9b:0:0:0:0:9810:6414"
          netmask: "112"
          gateway: "0064:ff9b:0:0:0:0:9810:6414"
          if: "xe0"
        - network: "0064:ff9b:0:0:0:0:9810:2814"
          netmask: "112"
          gateway: "0064:ff9b:0:0:0:0:9810:2814"
          if: "xe1"

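
A common mistake in hand-edited pod files is a ``.`` where a ``:`` belongs in
a MAC address, which the traffic generator only reports at run time. A small
grep can flag malformed ``local_mac`` entries up front; this sketch uses
inline sample data rather than your real pod file:

```shell
# Flag local_mac values that are not six colon-separated hex octets.
cat > /tmp/pod-sample.yaml <<'EOF'
local_mac: "00:00:00:00:00:01"
local_mac: "00:00.00:00:00:02"
EOF
grep -n 'local_mac' /tmp/pod-sample.yaml \
  | grep -vE '"([0-9a-fA-F]{2}:){5}[0-9a-fA-F]{2}"' || true
# prints the malformed line: 2:local_mac: "00:00.00:00:00:02"
```

Point the first ``grep`` at ``/etc/yardstick/nodes/pod.yaml`` to check a real
deployment.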

Network Service Benchmarking - Standalone Virtualization
--------------------------------------------------------

SR-IOV
^^^^^^

SR-IOV Pre-requisites
#####################

On Host:
 a) Create a bridge for the VM to connect to the external network

  .. code-block:: console

      brctl addbr br-int
      brctl addif br-int <interface_name>    # This interface is connected to the internet

 b) Build the guest image for the VNF to run.
    Most of the sample test cases in Yardstick use a guest image called
    ``yardstick-image``, which deviates from an Ubuntu Cloud Server image.
    Yardstick has a tool for building this custom image with SampleVNF.
    It is necessary to have ``sudo`` rights to use this tool.

    You may also need to install several additional packages to use this
    tool, by following the commands below::

       sudo apt-get update && sudo apt-get install -y qemu-utils kpartx

    This image can be built using the following commands in the directory
    where Yardstick is installed:

    .. code-block:: console

       export YARD_IMG_ARCH='amd64'
       # tee is needed so the append to /etc/sudoers runs with root privileges
       echo "Defaults env_keep += \"YARD_IMG_ARCH\"" | sudo tee -a /etc/sudoers

    Please use the Ansible script to generate the cloud image; for more
    details refer to chapter :doc:`04-installation`.

    .. note:: The VM should be built with a static IP and should be
       accessible from the Yardstick host.


SR-IOV Config pod.yaml describing Topology
##########################################

SR-IOV 2-Node setup
###################

.. code-block:: console

                               +--------------------+
                               |                    |
                               |                    |
                               |        DUT         |
                               |       (VNF)        |
                               |                    |
                               +--------------------+
                               | VF NIC |  | VF NIC |
                               +--------+  +--------+
                                     ^          ^
                                     |          |
                                     |          |
  +----------+               +-------------------------+
  |          |               |       ^          ^      |
  |          |               |       |          |      |
  |          | (0)<----->(0) | ------           |      |
  |    TG1   |               |           SUT    |      |
  |          |               |                  |      |
  |          | (n)<----->(n) |------------------       |
  +----------+               +-------------------------+
  trafficgen_1                          host



SR-IOV 3-Node setup - Correlated Traffic
########################################

.. code-block:: console

                               +--------------------+
                               |                    |
                               |                    |
                               |        DUT         |
                               |       (VNF)        |
                               |                    |
                               +--------------------+
                               | VF NIC |  | VF NIC |
                               +--------+  +--------+
                                     ^          ^
                                     |          |
                                     |          |
  +----------+               +-------------------------+            +--------------+
  |          |               |       ^          ^      |            |              |
  |          |               |       |          |      |            |              |
  |          | (0)<----->(0) | ------           |      |            |     TG2      |
  |    TG1   |               |           SUT    |      |            | (UDP Replay) |
  |          |               |                  |      |            |              |
  |          | (n)<----->(n) |                  ------ | (n)<-->(n) |              |
  +----------+               +-------------------------+            +--------------+
  trafficgen_1                          host                       trafficgen_2

Before executing Yardstick test cases, make sure that ``pod.yaml`` reflects
the topology and update all the required fields.

.. code-block:: console

    cp <yardstick>/etc/yardstick/nodes/standalone/trex_bm.yaml.sample /etc/yardstick/nodes/standalone/pod_trex.yaml
    cp <yardstick>/etc/yardstick/nodes/standalone/host_sriov.yaml /etc/yardstick/nodes/standalone/host_sriov.yaml

.. note:: Update all the required fields like ip, user, password, pcis, etc.

SR-IOV Config pod_trex.yaml
###########################

.. code-block:: YAML

    nodes:
    -
        name: trafficgen_1
        role: TrafficGen
        ip: 1.1.1.1
        user: root
        password: r00t
        key_filename: /root/.ssh/id_rsa
        interfaces:
            xe0:  # logical name from topology.yaml and vnfd.yaml
                vpci:      "0000:07:00.0"
                driver:    i40e # default kernel driver
                dpdk_port_num: 0
                local_ip: "152.16.100.20"
                netmask:   "255.255.255.0"
                local_mac: "00:00:00:00:00:01"
            xe1:  # logical name from topology.yaml and vnfd.yaml
                vpci:      "0000:07:00.1"
                driver:    i40e # default kernel driver
                dpdk_port_num: 1
                local_ip: "152.16.40.20"
                netmask:   "255.255.255.0"
                local_mac: "00:00:00:00:00:02"

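
The ``key_filename`` above points at an SSH private key. If one does not
exist yet, it can be generated and its public half pushed to the node; a
sketch in which the key path and node IP are examples only (use
``/root/.ssh/id_rsa`` to match ``key_filename``):

```shell
# Generate a key pair for password-less login to the pod nodes
# (example path; adjust to match key_filename in pod_trex.yaml).
rm -f /tmp/yardstick_id_rsa /tmp/yardstick_id_rsa.pub
ssh-keygen -q -t rsa -N "" -f /tmp/yardstick_id_rsa
# Push the public key to the traffic generator node (example IP):
# ssh-copy-id -i /tmp/yardstick_id_rsa root@1.1.1.1
ls /tmp/yardstick_id_rsa.pub
```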
SR-IOV Config host_sriov.yaml
#############################

.. code-block:: YAML

    nodes:
    -
       name: sriov
       role: Sriov
       ip: 192.168.100.101
       user: ""
       password: ""

SR-IOV testcase update: ``<yardstick>/samples/vnf_samples/nsut/vfw/tc_sriov_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``

Update "contexts" section
"""""""""""""""""""""""""

.. code-block:: YAML

  contexts:
   - name: yardstick
     type: Node
     file: /etc/yardstick/nodes/standalone/pod_trex.yaml
   - type: StandaloneSriov
     file: /etc/yardstick/nodes/standalone/host_sriov.yaml
     name: yardstick
     vm_deploy: True
     flavor:
       images: "/var/lib/libvirt/images/ubuntu.qcow2"
       ram: 4096
       extra_specs:
         hw:cpu_sockets: 1
         hw:cpu_cores: 6
         hw:cpu_threads: 2
       user: "" # update VM username
       password: "" # update password
     servers:
       vnf:
         network_ports:
           mgmt:
             cidr: '1.1.1.61/24'  # Update VM IP address: if static, <ip>/<mask>; if dynamic, <start of ip>/<mask>
           xe0:
             - uplink_0
           xe1:
             - downlink_0
     networks:
       uplink_0:
         phy_port: "0000:05:00.0"
         vpci: "0000:00:07.0"
         cidr: '152.16.100.10/24'
         gateway_ip: '152.16.100.20'
       downlink_0:
         phy_port: "0000:05:00.1"
         vpci: "0000:00:08.0"
         cidr: '152.16.40.10/24'
         gateway_ip: '152.16.100.20'



OVS-DPDK
^^^^^^^^

OVS-DPDK Pre-requisites
#######################

On Host:
 a) Create a bridge for the VM to connect to the external network

  .. code-block:: console

      brctl addbr br-int
      brctl addif br-int <interface_name>    # This interface is connected to the internet

 b) Build the guest image for the VNF to run.
    Most of the sample test cases in Yardstick use a guest image called
    ``yardstick-image``, which deviates from an Ubuntu Cloud Server image.
    Yardstick has a tool for building this custom image with SampleVNF.
    It is necessary to have ``sudo`` rights to use this tool.

    You may also need to install several additional packages to use this
    tool, by following the commands below::

       sudo apt-get update && sudo apt-get install -y qemu-utils kpartx

    This image can be built using the following commands in the directory
    where Yardstick is installed::

       export YARD_IMG_ARCH='amd64'
       # tee is needed so the append to /etc/sudoers runs with root privileges
       echo "Defaults env_keep += \"YARD_IMG_ARCH\"" | sudo tee -a /etc/sudoers
       sudo tools/yardstick-img-dpdk-modify tools/ubuntu-server-cloudimg-samplevnf-modify.sh

    For more details refer to chapter :doc:`04-installation`.

    .. note:: The VM should be built with a static IP and should be
       accessible from the Yardstick host.

 c) OVS & DPDK version:
     - OVS 2.7 and DPDK 16.11.1 or above are supported

 d) Set up OVS/DPDK on the host.
     Please refer to the following link on how to set up `OVS-DPDK <http://docs.openvswitch.org/en/latest/intro/install/dpdk/>`_


OVS-DPDK Config pod.yaml describing Topology
############################################

OVS-DPDK 2-Node setup
#####################


.. code-block:: console

                               +--------------------+
                               |                    |
                               |                    |
                               |        DUT         |
                               |       (VNF)        |
                               |                    |
                               +--------------------+
                               | virtio |  | virtio |
                               +--------+  +--------+
                                    ^          ^
                                    |          |
                                    |          |
                               +--------+  +--------+
                               | vHOST0 |  | vHOST1 |
  +----------+               +-------------------------+
  |          |               |       ^          ^      |
  |          |               |       |          |      |
  |          | (0)<----->(0) | ------           |      |
  |    TG1   |               |          SUT     |      |
  |          |               |       (ovs-dpdk) |      |
  |          | (n)<----->(n) |------------------       |
  +----------+               +-------------------------+
  trafficgen_1                          host


OVS-DPDK 3-Node setup - Correlated Traffic
##########################################

.. code-block:: console

                               +--------------------+
                               |                    |
                               |                    |
                               |        DUT         |
                               |       (VNF)        |
                               |                    |
                               +--------------------+
                               | virtio |  | virtio |
                               +--------+  +--------+
                                    ^          ^
                                    |          |
                                    |          |
                               +--------+  +--------+
                               | vHOST0 |  | vHOST1 |
  +----------+               +-------------------------+          +------------+
  |          |               |       ^          ^      |          |            |
  |          |               |       |          |      |          |            |
  |          | (0)<----->(0) | ------           |      |          |    TG2     |
  |    TG1   |               |          SUT     |      |          |(UDP Replay)|
  |          |               |      (ovs-dpdk)  |      |          |            |
  |          | (n)<----->(n) |                  ------ |(n)<-->(n)|            |
  +----------+               +-------------------------+          +------------+
  trafficgen_1                          host                       trafficgen_2


Before executing Yardstick test cases, make sure that ``pod.yaml`` reflects
the topology and update all the required fields.

.. code-block:: console

  cp <yardstick>/etc/yardstick/nodes/standalone/trex_bm.yaml.sample /etc/yardstick/nodes/standalone/pod_trex.yaml
  cp <yardstick>/etc/yardstick/nodes/standalone/host_ovs.yaml /etc/yardstick/nodes/standalone/host_ovs.yaml

.. note:: Update all the required fields like ip, user, password, pcis, etc.

OVS-DPDK Config pod_trex.yaml
#############################

.. code-block:: YAML

    nodes:
    -
      name: trafficgen_1
      role: TrafficGen
      ip: 1.1.1.1
      user: root
      password: r00t
      interfaces:
          xe0:  # logical name from topology.yaml and vnfd.yaml
              vpci:      "0000:07:00.0"
              driver:    i40e # default kernel driver
              dpdk_port_num: 0
              local_ip: "152.16.100.20"
              netmask:   "255.255.255.0"
              local_mac: "00:00:00:00:00:01"
          xe1:  # logical name from topology.yaml and vnfd.yaml
              vpci:      "0000:07:00.1"
              driver:    i40e # default kernel driver
              dpdk_port_num: 1
              local_ip: "152.16.40.20"
              netmask:   "255.255.255.0"
              local_mac: "00:00:00:00:00:02"

OVS-DPDK Config host_ovs.yaml
#############################

.. code-block:: YAML

    nodes:
    -
       name: ovs_dpdk
       role: OvsDpdk
       ip: 192.168.100.101
       user: ""
       password: ""

ovs_dpdk testcase update: ``<yardstick>/samples/vnf_samples/nsut/vfw/tc_ovs_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``

Update "contexts" section
"""""""""""""""""""""""""

.. code-block:: YAML

  contexts:
   - name: yardstick
     type: Node
     file: /etc/yardstick/nodes/standalone/pod_trex.yaml
   - type: StandaloneOvsDpdk
     name: yardstick
     file: /etc/yardstick/nodes/standalone/host_ovs.yaml
     vm_deploy: True
     ovs_properties:
       version:
         ovs: 2.7.0
         dpdk: 16.11.1
       pmd_threads: 2
       ram:
         socket_0: 2048
         socket_1: 2048
       queues: 4
       vpath: "/usr/local"

     flavor:
       images: "/var/lib/libvirt/images/ubuntu.qcow2"
       ram: 4096
       extra_specs:
         hw:cpu_sockets: 1
         hw:cpu_cores: 6
         hw:cpu_threads: 2
       user: "" # update VM username
       password: "" # update password
     servers:
       vnf:
         network_ports:
           mgmt:
             cidr: '1.1.1.61/24'  # Update VM IP address: if static, <ip>/<mask>; if dynamic, <start of ip>/<mask>
           xe0:
             - uplink_0
           xe1:
             - downlink_0
     networks:
       uplink_0:
         phy_port: "0000:05:00.0"
         vpci: "0000:00:07.0"
         cidr: '152.16.100.10/24'
         gateway_ip: '152.16.100.20'
       downlink_0:
         phy_port: "0000:05:00.1"
         vpci: "0000:00:08.0"
         cidr: '152.16.40.10/24'
         gateway_ip: '152.16.100.20'

Enabling other Traffic generators
---------------------------------

IxLoad
^^^^^^

1. Software needed: IxLoadAPI ``<IxLoadTclApi version>Linux64.bin.tgz`` and
   ``<IxOS version>Linux64.bin.tar.gz`` (download from the Ixia support
   site). Install both ``<IxLoadTclApi version>Linux64.bin.tgz`` and
   ``<IxOS version>Linux64.bin.tar.gz``.
   If the installation was not done inside the container, then after
   installing the IXIA client check
   ``/opt/ixia/ixload/<ver>/bin/ixloadpython`` and make sure you can run this
   command inside the Yardstick container. Usually you need to copy or link
   ``/opt/ixia/python/<ver>/bin/ixiapython`` to ``/usr/bin/ixiapython<ver>``
   inside the container.

2. Update the pod_ixia.yaml file with the IXIA details.

  .. code-block:: console

    cp <repo>/etc/yardstick/nodes/pod.yaml.nsb.sample.ixia etc/yardstick/nodes/pod_ixia.yaml

  Config pod_ixia.yaml

  .. code-block:: yaml

      nodes:
          -
            name: trafficgen_1
            role: IxNet
            ip: 1.2.1.1 # IXIA machine ip
            user: user
            password: r00t
            key_filename: /root/.ssh/id_rsa
            tg_config:
                ixchassis: "1.2.1.7" # IXIA chassis ip
                tcl_port: "8009" # tcl server port
                lib_path: "/opt/ixia/ixos-api/8.01.0.2/lib/ixTcl1.0"
                root_dir: "/opt/ixia/ixos-api/8.01.0.2/"
                py_bin_path: "/opt/ixia/ixload/8.01.106.3/bin/"
                py_lib_path: "/opt/ixia/ixnetwork/8.01.1029.14/lib/PythonApi"
                dut_result_dir: "/mnt/ixia"
                version: 8.1
            interfaces:
                xe0:  # logical name from topology.yaml and vnfd.yaml
                    vpci: "2:5" # Card:port
                    driver:    "none"
                    dpdk_port_num: 0
                    local_ip: "152.16.100.20"
                    netmask:   "255.255.0.0"
                    local_mac: "00:98:10:64:14:00"
                xe1:  # logical name from topology.yaml and vnfd.yaml
                    vpci: "2:6" # [(Card, port)]
                    driver:    "none"
                    dpdk_port_num: 1
                    local_ip: "152.40.40.20"
                    netmask:   "255.255.0.0"
                    local_mac: "00:98:28:28:14:00"

  For SR-IOV/OVS-DPDK pod files, please refer to the Standalone
  Virtualization section above for the ovs-dpdk/sriov configuration.

3. Start the IxOS TCL Server (install 'Ixia IxExplorer IxOS <version>').
   You will also need to configure the IxLoad machine to start the IXIA
   IxosTclServer. This can be started like so:

   - Connect to the IxLoad machine using RDP
   - Go to:
    ``Start->Programs->Ixia->IxOS->IxOS 8.01-GA-Patch1->Ixia Tcl Server IxOS 8.01-GA-Patch1``
     or
    ``"C:\Program Files (x86)\Ixia\IxOS\8.01-GA-Patch1\ixTclServer.exe"``

4. Create a folder ``Results`` in ``c:\`` and share the folder on the network.

5. Execute the test case in the samplevnf folder, e.g.
   ``<repo>/samples/vnf_samples/nsut/vfw/tc_baremetal_http_ixload_1b_Requests-65000_Concurrency.yaml``

IxNetwork
^^^^^^^^^

1. Software needed: ``IxNetworkAPI<ixnetwork version>Linux64.bin.tgz``
   (download from the Ixia support site).
   Install ``IxNetworkAPI<ixnetwork version>Linux64.bin.tgz``.
2. Update the pod_ixia.yaml file with the IXIA details.

  .. code-block:: console

    cp <repo>/etc/yardstick/nodes/pod.yaml.nsb.sample.ixia etc/yardstick/nodes/pod_ixia.yaml

  Config pod_ixia.yaml

  .. code-block:: yaml

      nodes:
          -
            name: trafficgen_1
            role: IxNet
            ip: 1.2.1.1 # IXIA machine ip
            user: user
            password: r00t
            key_filename: /root/.ssh/id_rsa
            tg_config:
                ixchassis: "1.2.1.7" # IXIA chassis ip
                tcl_port: "8009" # tcl server port
                lib_path: "/opt/ixia/ixos-api/8.01.0.2/lib/ixTcl1.0"
                root_dir: "/opt/ixia/ixos-api/8.01.0.2/"
                py_bin_path: "/opt/ixia/ixload/8.01.106.3/bin/"
                py_lib_path: "/opt/ixia/ixnetwork/8.01.1029.14/lib/PythonApi"
                dut_result_dir: "/mnt/ixia"
                version: 8.1
            interfaces:
                xe0:  # logical name from topology.yaml and vnfd.yaml
                    vpci: "2:5" # Card:port
                    driver:    "none"
                    dpdk_port_num: 0
                    local_ip: "152.16.100.20"
                    netmask:   "255.255.0.0"
                    local_mac: "00:98:10:64:14:00"
                xe1:  # logical name from topology.yaml and vnfd.yaml
                    vpci: "2:6" # [(Card, port)]
                    driver:    "none"
                    dpdk_port_num: 1
                    local_ip: "152.40.40.20"
                    netmask:   "255.255.0.0"
                    local_mac: "00:98:28:28:14:00"

  For SR-IOV/OVS-DPDK pod files, please refer to the Standalone
  Virtualization section above for the ovs-dpdk/sriov configuration.

3. Start the IxNetwork TCL Server.
   You will also need to configure the IxNetwork machine to start the IXIA
   IxNetworkTclServer. This can be started like so:

    - Connect to the IxNetwork machine using RDP
    - Go to:     ``Start->Programs->Ixia->IxNetwork->IxNetwork 7.21.893.14 GA->IxNetworkTclServer`` (or ``IxNetworkApiServer``)

4. Execute the test case in the samplevnf folder, e.g.
   ``<repo>/samples/vnf_samples/nsut/vfw/tc_baremetal_rfc2544_ipv4_1rule_1flow_64B_ixia.yaml``