.. This work is licensed under a Creative Commons Attribution 4.0 International
.. License.
.. http://creativecommons.org/licenses/by/4.0
.. (c) OPNFV, 2016-2017 Intel Corporation.

Yardstick - NSB Testing - Installation
======================================

Abstract
--------

Network Service Benchmarking (NSB) extends the Yardstick framework to do VNF
characterization and benchmarking in three different execution environments:
bare metal (i.e. native Linux environment), standalone virtual environment,
and managed virtualized environment (e.g. OpenStack). It also adds the
capability to interact with external traffic generators, both hardware and
software based, for triggering and validating the traffic according to
user-defined profiles.

The steps needed to run Yardstick with NSB testing are:

* Install Yardstick (NSB Testing).
* Set up or reference a pod.yaml file describing the test topology.
* Create or reference the test configuration yaml file.
* Run the test case.


Prerequisites
-------------

Refer to the *Yardstick Installation* chapter for more information on
Yardstick prerequisites.

Several prerequisites are needed for Yardstick (VNF testing):

  - Python modules: pyzmq, pika
  - flex
  - bison
  - build-essential
  - automake
  - libtool
  - librabbitmq-dev
  - rabbitmq-server
  - collectd
  - intel-cmt-cat

Hardware & Software Ingredients
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

SUT requirements:

   +-----------+--------------------+
   | Item      | Description        |
   +-----------+--------------------+
   | Memory    | Min 20GB           |
   +-----------+--------------------+
   | NICs      | 2 x 10G            |
   +-----------+--------------------+
   | OS        | Ubuntu 16.04.3 LTS |
   +-----------+--------------------+
   | kernel    | 4.4.0-34-generic   |
   +-----------+--------------------+
   | DPDK      | 17.02              |
   +-----------+--------------------+

Boot and BIOS settings:

   +------------------+---------------------------------------------------+
   | Boot settings    | default_hugepagesz=1G hugepagesz=1G hugepages=16  |
   |                  | hugepagesz=2M hugepages=2048 isolcpus=1-11,22-33  |
   |                  | nohz_full=1-11,22-33 rcu_nocbs=1-11,22-33         |
   |                  | iommu=on iommu=pt intel_iommu=on                  |
   |                  | Note: nohz_full and rcu_nocbs are used to disable |
   |                  | Linux kernel interrupts                           |
   +------------------+---------------------------------------------------+
   | BIOS             | CPU Power and Performance Policy <Performance>    |
   |                  | CPU C-state Disabled                              |
   |                  | CPU P-state Disabled                              |
   |                  | Enhanced Intel® Speedstep® Tech Disabled          |
   |                  | Hyper-Threading Technology (If supported) Enabled |
   |                  | Virtualization Technology Enabled                 |
   |                  | Intel(R) VT for Direct I/O Enabled                |
   |                  | Coherency Enabled                                 |
   |                  | Turbo Boost Disabled                              |
   +------------------+---------------------------------------------------+
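After a reboot, the boot settings above can be sanity-checked from the running
system. A minimal, read-only sketch (the exact hugepage counts depend on your
grub configuration):

```shell
# Confirm hugepages were reserved as requested at boot
grep Huge /proc/meminfo
# Confirm isolcpus, nohz_full and rcu_nocbs made it onto the kernel command line
cat /proc/cmdline
```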


Install Yardstick (NSB Testing)
-------------------------------

Download the source code and install Yardstick from it:

.. code-block:: console

  git clone https://gerrit.opnfv.org/gerrit/yardstick

  cd yardstick

  # Switch to latest stable branch
  # git checkout <tag or stable branch>
  git checkout stable/euphrates

  # For Bare-Metal or Standalone Virtualization
  ./nsb_setup.sh

  # For OpenStack
  ./nsb_setup.sh <path to admin-openrc.sh>


The above commands set up a Docker container with the latest Yardstick code.
To get a shell in the container:

.. code-block:: console

  docker exec -it yardstick bash

The setup script also automatically downloads all the packages needed for the
NSB testing setup. Refer to the chapter :doc:`04-installation` for more
details on **Install Yardstick using Docker (recommended)**.

System Topology
---------------

.. code-block:: console

  +----------+              +----------+
  |          |              |          |
  |          | (0)----->(0) |          |
  |    TG1   |              |    DUT   |
  |          |              |          |
  |          | (1)<-----(1) |          |
  +----------+              +----------+
  trafficgen_1                   vnf


Environment parameters and credentials
--------------------------------------

Config yardstick conf
^^^^^^^^^^^^^^^^^^^^^

If you did not run ``yardstick env influxdb`` inside the container (which
generates a correct yardstick.conf), then create the config file manually
(run inside the container)::

    cp ./etc/yardstick/yardstick.conf.sample /etc/yardstick/yardstick.conf
    vi /etc/yardstick/yardstick.conf

Add ``trex_path``, ``trex_client_lib`` and ``bin_path`` to the ``nsb``
section.

::

  [DEFAULT]
  debug = True
  dispatcher = file, influxdb

  [dispatcher_influxdb]
  timeout = 5
  target = http://{YOUR_IP_HERE}:8086
  db_name = yardstick
  username = root
  password = root

  [nsb]
  trex_path=/opt/nsb_bin/trex/scripts
  bin_path=/opt/nsb_bin
  trex_client_lib=/opt/nsb_bin/trex_client/stl

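A quick sanity check helps catch path mistakes in the ``nsb`` section before
running a test. A minimal sketch (assumes the conf file path created above):

```shell
# Print each path configured in the nsb section and check it exists on disk
awk -F' *= *' '/^(trex_path|bin_path|trex_client_lib)/ {print $2}' \
    /etc/yardstick/yardstick.conf |
while read -r p; do
    [ -d "$p" ] && echo "OK      $p" || echo "MISSING $p"
done
```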
Run Yardstick - Network Service Testcases
-----------------------------------------

NS testing - using yardstick CLI
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  See :doc:`04-installation`

.. code-block:: console

  docker exec -it yardstick /bin/bash
  source /etc/yardstick/openstack.creds (only for heat TC if nsb_setup.sh was NOT used)
  export EXTERNAL_NETWORK="<openstack public network>" (only for heat TC)
  yardstick --debug task start yardstick/samples/vnf_samples/nsut/<vnf>/<test case>

Network Service Benchmarking - Bare-Metal
-----------------------------------------

Bare-Metal Config pod.yaml describing Topology
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Bare-Metal 2-Node setup
#######################

.. code-block:: console

  +----------+              +----------+
  |          |              |          |
  |          | (0)----->(0) |          |
  |    TG1   |              |    DUT   |
  |          |              |          |
  |          | (n)<-----(n) |          |
  +----------+              +----------+
  trafficgen_1                   vnf

Bare-Metal 3-Node setup - Correlated Traffic
############################################

.. code-block:: console

  +----------+              +----------+            +------------+
  |          |              |          |            |            |
  |          |              |          |            |            |
  |          | (0)----->(0) |          |            |    UDP     |
  |    TG1   |              |    DUT   |            |   Replay   |
  |          |              |          |            |            |
  |          |              |          |(1)<---->(0)|            |
  +----------+              +----------+            +------------+
  trafficgen_1                   vnf                 trafficgen_2


Bare-Metal Config pod.yaml
^^^^^^^^^^^^^^^^^^^^^^^^^^
Before executing Yardstick test cases, make sure that pod.yaml reflects the
topology and update all the required fields::

    cp /etc/yardstick/nodes/pod.yaml.nsb.sample /etc/yardstick/nodes/pod.yaml

.. code-block:: YAML

    nodes:
    -
        name: trafficgen_1
        role: TrafficGen
        ip: 1.1.1.1
        user: root
        password: r00t
        interfaces:
            xe0:  # logical name from topology.yaml and vnfd.yaml
                vpci:      "0000:07:00.0"
                driver:    i40e # default kernel driver
                dpdk_port_num: 0
                local_ip: "152.16.100.20"
                netmask:   "255.255.255.0"
                local_mac: "00:00:00:00:00:01"
            xe1:  # logical name from topology.yaml and vnfd.yaml
                vpci:      "0000:07:00.1"
                driver:    i40e # default kernel driver
                dpdk_port_num: 1
                local_ip: "152.16.40.20"
                netmask:   "255.255.255.0"
                local_mac: "00:00:00:00:00:02"

    -
        name: vnf
        role: vnf
        ip: 1.1.1.2
        user: root
        password: r00t
        host: 1.1.1.2 # BM - host == ip; virtualized env - host == compute node
        interfaces:
            xe0:  # logical name from topology.yaml and vnfd.yaml
                vpci:      "0000:07:00.0"
                driver:    i40e # default kernel driver
                dpdk_port_num: 0
                local_ip: "152.16.100.19"
                netmask:   "255.255.255.0"
                local_mac: "00:00:00:00:00:03"

            xe1:  # logical name from topology.yaml and vnfd.yaml
                vpci:      "0000:07:00.1"
                driver:    i40e # default kernel driver
                dpdk_port_num: 1
                local_ip: "152.16.40.19"
                netmask:   "255.255.255.0"
                local_mac: "00:00:00:00:00:04"
        routing_table:
        - network: "152.16.100.20"
          netmask: "255.255.255.0"
          gateway: "152.16.100.20"
          if: "xe0"
        - network: "152.16.40.20"
          netmask: "255.255.255.0"
          gateway: "152.16.40.20"
          if: "xe1"
        nd_route_tbl:
        - network: "0064:ff9b:0:0:0:0:9810:6414"
          netmask: "112"
          gateway: "0064:ff9b:0:0:0:0:9810:6414"
          if: "xe0"
        - network: "0064:ff9b:0:0:0:0:9810:2814"
          netmask: "112"
          gateway: "0064:ff9b:0:0:0:0:9810:2814"
          if: "xe1"

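A common failure mode is running a test with sample values still left in
pod.yaml. A small, hypothetical helper loop to flag leftovers (the values
checked are the sample ones shown above):

```shell
# Warn about sample values that are still present in the deployed pod.yaml
POD=/etc/yardstick/nodes/pod.yaml
for sample in "1.1.1.1" "1.1.1.2" "00:00:00:00:00:01"; do
    grep -n "$sample" "$POD" && echo "WARN: sample value '$sample' still in $POD"
done
```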

Network Service Benchmarking - Standalone Virtualization
--------------------------------------------------------

SR-IOV
^^^^^^

SR-IOV Pre-requisites
#####################

On Host:
 a) Create a bridge for the VM to connect to the external network

  .. code-block:: console

      brctl addbr br-int
      brctl addif br-int <interface_name>    # This interface is connected to internet

 b) Build a guest image for the VNF to run.
    Most of the sample test cases in Yardstick use a guest image called
    ``yardstick-image``, which deviates from an Ubuntu Cloud Server image.
    Yardstick has a tool for building this custom image with samplevnf.
    It is necessary to have ``sudo`` rights to use this tool.

    You may also need to install several additional packages to use this
    tool, by following the commands below::

       sudo apt-get update && sudo apt-get install -y qemu-utils kpartx

    This image can be built using the following commands in the directory
    where Yardstick is installed:

    .. code-block:: console

       export YARD_IMG_ARCH='amd64'
       sudo echo "Defaults env_keep += \'YARD_IMG_ARCH\'" >> /etc/sudoers

    Please use the ansible script to generate a cloud image; refer to
    :doc:`04-installation` for more details.

    .. note:: The VM should be built with a static IP and should be
       accessible from the yardstick host.

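Before writing the pod files, it is worth confirming that the host NICs
actually expose SR-IOV. A minimal, read-only sketch; creating the VFs via
``sriov_numvfs`` is shown commented out with a placeholder interface name:

```shell
# List interfaces that advertise SR-IOV capability and how many VFs they support
for f in /sys/class/net/*/device/sriov_totalvfs; do
    [ -e "$f" ] && printf '%s supports %s VFs\n' "${f%/device/*}" "$(cat "$f")"
done
# Then create VFs on the chosen port, e.g.:
# echo 2 > /sys/class/net/<interface_name>/device/sriov_numvfs
```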
SR-IOV Config pod.yaml describing Topology
##########################################

SR-IOV 2-Node setup
###################

.. code-block:: console

                               +--------------------+
                               |                    |
                               |                    |
                               |        DUT         |
                               |       (VNF)        |
                               |                    |
                               +--------------------+
                               | VF NIC |  | VF NIC |
                               +--------+  +--------+
                                     ^          ^
                                     |          |
                                     |          |
  +----------+               +-------------------------+
  |          |               |       ^          ^      |
  |          |               |       |          |      |
  |          | (0)<----->(0) | ------           |      |
  |    TG1   |               |           SUT    |      |
  |          |               |                  |      |
  |          | (n)<----->(n) |------------------       |
  +----------+               +-------------------------+
  trafficgen_1                          host


SR-IOV 3-Node setup - Correlated Traffic
########################################

.. code-block:: console

                               +--------------------+
                               |                    |
                               |                    |
                               |        DUT         |
                               |       (VNF)        |
                               |                    |
                               +--------------------+
                               | VF NIC |  | VF NIC |
                               +--------+  +--------+
                                     ^          ^
                                     |          |
                                     |          |
  +----------+               +-------------------------+            +--------------+
  |          |               |       ^          ^      |            |              |
  |          |               |       |          |      |            |              |
  |          | (0)<----->(0) | ------           |      |            |     TG2      |
  |    TG1   |               |           SUT    |      |            | (UDP Replay) |
  |          |               |                  |      |            |              |
  |          | (n)<----->(n) |                  ------ | (n)<-->(n) |              |
  +----------+               +-------------------------+            +--------------+
  trafficgen_1                          host                       trafficgen_2

Before executing Yardstick test cases, make sure that pod.yaml reflects the
topology and update all the required fields.

.. code-block:: console

    cp <yardstick>/etc/yardstick/nodes/standalone/trex_bm.yaml.sample /etc/yardstick/nodes/standalone/pod_trex.yaml
    cp <yardstick>/etc/yardstick/nodes/standalone/host_sriov.yaml /etc/yardstick/nodes/standalone/host_sriov.yaml

.. note:: Update all the required fields like ip, user, password, pcis, etc.

SR-IOV Config pod_trex.yaml
###########################

.. code-block:: YAML

    nodes:
    -
        name: trafficgen_1
        role: TrafficGen
        ip: 1.1.1.1
        user: root
        password: r00t
        key_filename: /root/.ssh/id_rsa
        interfaces:
            xe0:  # logical name from topology.yaml and vnfd.yaml
                vpci:      "0000:07:00.0"
                driver:    i40e # default kernel driver
                dpdk_port_num: 0
                local_ip: "152.16.100.20"
                netmask:   "255.255.255.0"
                local_mac: "00:00:00:00:00:01"
            xe1:  # logical name from topology.yaml and vnfd.yaml
                vpci:      "0000:07:00.1"
                driver:    i40e # default kernel driver
                dpdk_port_num: 1
                local_ip: "152.16.40.20"
                netmask:   "255.255.255.0"
                local_mac: "00:00:00:00:00:02"

SR-IOV Config host_sriov.yaml
#############################

.. code-block:: YAML

    nodes:
    -
       name: sriov
       role: Sriov
       ip: 192.168.100.101
       user: ""
       password: ""

SR-IOV testcase update: ``<yardstick>/samples/vnf_samples/nsut/vfw/tc_sriov_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``

Update "contexts" section
"""""""""""""""""""""""""

.. code-block:: YAML

  contexts:
   - name: yardstick
     type: Node
     file: /etc/yardstick/nodes/standalone/pod_trex.yaml
   - type: StandaloneSriov
     file: /etc/yardstick/nodes/standalone/host_sriov.yaml
     name: yardstick
     vm_deploy: True
     flavor:
       images: "/var/lib/libvirt/images/ubuntu.qcow2"
       ram: 4096
       extra_specs:
         hw:cpu_sockets: 1
         hw:cpu_cores: 6
         hw:cpu_threads: 2
       user: "" # update VM username
       password: "" # update password
     servers:
       vnf:
         network_ports:
           mgmt:
             cidr: '1.1.1.61/24'  # Update VM IP address; if static, <ip>/<mask>, if dynamic, <start of ip>/<mask>
           xe0:
             - uplink_0
           xe1:
             - downlink_0
     networks:
       uplink_0:
         phy_port: "0000:05:00.0"
         vpci: "0000:00:07.0"
         cidr: '152.16.100.10/24'
         gateway_ip: '152.16.100.20'
       downlink_0:
         phy_port: "0000:05:00.1"
         vpci: "0000:00:08.0"
         cidr: '152.16.40.10/24'
         gateway_ip: '152.16.100.20'


OVS-DPDK
^^^^^^^^

OVS-DPDK Pre-requisites
#######################

On Host:
 a) Create a bridge for the VM to connect to the external network

  .. code-block:: console

      brctl addbr br-int
      brctl addif br-int <interface_name>    # This interface is connected to internet

 b) Build a guest image for the VNF to run.
    Most of the sample test cases in Yardstick use a guest image called
    ``yardstick-image``, which deviates from an Ubuntu Cloud Server image.
    Yardstick has a tool for building this custom image with samplevnf.
    It is necessary to have ``sudo`` rights to use this tool.

    You may also need to install several additional packages to use this
    tool, by following the commands below::

       sudo apt-get update && sudo apt-get install -y qemu-utils kpartx

    This image can be built using the following commands in the directory
    where Yardstick is installed::

       export YARD_IMG_ARCH='amd64'
       sudo echo "Defaults env_keep += \'YARD_IMG_ARCH\'" >> /etc/sudoers
       sudo tools/yardstick-img-dpdk-modify tools/ubuntu-server-cloudimg-samplevnf-modify.sh

    For more details refer to chapter :doc:`04-installation`.

    .. note:: The VM should be built with a static IP and should be
       accessible from the yardstick host.

 c) OVS & DPDK version:
     - OVS 2.7 or higher with DPDK 16.11.1 or higher is supported

 d) Set up OVS/DPDK on the host.
     Please refer to this link on how to set up `OVS-DPDK <http://docs.openvswitch.org/en/latest/intro/install/dpdk/>`_
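As an illustration of what that setup ends with, a hedged sketch of the
host-side bring-up (bridge and port names are illustrative; socket memory
sizing depends on the host, and the PCI address matches the sample configs
used later in this chapter):

```shell
# Enable DPDK support in OVS and pre-allocate hugepage memory per NUMA socket
ovs-vsctl set Open_vSwitch . other_config:dpdk-init=true
ovs-vsctl set Open_vSwitch . other_config:dpdk-socket-mem="1024,1024"
# Create a userspace (netdev) bridge and attach a DPDK physical port to it
ovs-vsctl add-br br0 -- set bridge br0 datapath_type=netdev
ovs-vsctl add-port br0 dpdk-p0 -- set Interface dpdk-p0 type=dpdk \
    options:dpdk-devargs=0000:05:00.0
```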

OVS-DPDK Config pod.yaml describing Topology
############################################

OVS-DPDK 2-Node setup
#####################

.. code-block:: console

                               +--------------------+
                               |                    |
                               |                    |
                               |        DUT         |
                               |       (VNF)        |
                               |                    |
                               +--------------------+
                               | virtio |  | virtio |
                               +--------+  +--------+
                                    ^          ^
                                    |          |
                                    |          |
                               +--------+  +--------+
                               | vHOST0 |  | vHOST1 |
  +----------+               +-------------------------+
  |          |               |       ^          ^      |
  |          |               |       |          |      |
  |          | (0)<----->(0) | ------           |      |
  |    TG1   |               |          SUT     |      |
  |          |               |       (ovs-dpdk) |      |
  |          | (n)<----->(n) |------------------       |
  +----------+               +-------------------------+
  trafficgen_1                          host


OVS-DPDK 3-Node setup - Correlated Traffic
##########################################

.. code-block:: console

                               +--------------------+
                               |                    |
                               |                    |
                               |        DUT         |
                               |       (VNF)        |
                               |                    |
                               +--------------------+
                               | virtio |  | virtio |
                               +--------+  +--------+
                                    ^          ^
                                    |          |
                                    |          |
                               +--------+  +--------+
                               | vHOST0 |  | vHOST1 |
  +----------+               +-------------------------+          +------------+
  |          |               |       ^          ^      |          |            |
  |          |               |       |          |      |          |            |
  |          | (0)<----->(0) | ------           |      |          |    TG2     |
  |    TG1   |               |          SUT     |      |          |(UDP Replay)|
  |          |               |      (ovs-dpdk)  |      |          |            |
  |          | (n)<----->(n) |                  ------ |(n)<-->(n)|            |
  +----------+               +-------------------------+          +------------+
  trafficgen_1                          host                       trafficgen_2


Before executing Yardstick test cases, make sure that pod.yaml reflects the
topology and update all the required fields.

.. code-block:: console

  cp <yardstick>/etc/yardstick/nodes/standalone/trex_bm.yaml.sample /etc/yardstick/nodes/standalone/pod_trex.yaml
  cp <yardstick>/etc/yardstick/nodes/standalone/host_ovs.yaml /etc/yardstick/nodes/standalone/host_ovs.yaml

.. note:: Update all the required fields like ip, user, password, pcis, etc.

OVS-DPDK Config pod_trex.yaml
#############################

.. code-block:: YAML

    nodes:
    -
      name: trafficgen_1
      role: TrafficGen
      ip: 1.1.1.1
      user: root
      password: r00t
      interfaces:
          xe0:  # logical name from topology.yaml and vnfd.yaml
              vpci:      "0000:07:00.0"
              driver:    i40e # default kernel driver
              dpdk_port_num: 0
              local_ip: "152.16.100.20"
              netmask:   "255.255.255.0"
              local_mac: "00:00:00:00:00:01"
          xe1:  # logical name from topology.yaml and vnfd.yaml
              vpci:      "0000:07:00.1"
              driver:    i40e # default kernel driver
              dpdk_port_num: 1
              local_ip: "152.16.40.20"
              netmask:   "255.255.255.0"
              local_mac: "00:00:00:00:00:02"

OVS-DPDK Config host_ovs.yaml
#############################

.. code-block:: YAML

    nodes:
    -
       name: ovs_dpdk
       role: OvsDpdk
       ip: 192.168.100.101
       user: ""
       password: ""

ovs_dpdk testcase update: ``<yardstick>/samples/vnf_samples/nsut/vfw/tc_ovs_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``

Update "contexts" section
"""""""""""""""""""""""""

.. code-block:: YAML

  contexts:
   - name: yardstick
     type: Node
     file: /etc/yardstick/nodes/standalone/pod_trex.yaml
   - type: StandaloneOvsDpdk
     name: yardstick
     file: /etc/yardstick/nodes/standalone/host_ovs.yaml
     vm_deploy: True
     ovs_properties:
       version:
         ovs: 2.7.0
         dpdk: 16.11.1
       pmd_threads: 2
       ram:
         socket_0: 2048
         socket_1: 2048
       queues: 4
       vpath: "/usr/local"

     flavor:
       images: "/var/lib/libvirt/images/ubuntu.qcow2"
       ram: 4096
       extra_specs:
         hw:cpu_sockets: 1
         hw:cpu_cores: 6
         hw:cpu_threads: 2
       user: "" # update VM username
       password: "" # update password
     servers:
       vnf:
         network_ports:
           mgmt:
             cidr: '1.1.1.61/24'  # Update VM IP address; if static, <ip>/<mask>, if dynamic, <start of ip>/<mask>
           xe0:
             - uplink_0
           xe1:
             - downlink_0
     networks:
       uplink_0:
         phy_port: "0000:05:00.0"
         vpci: "0000:00:07.0"
         cidr: '152.16.100.10/24'
         gateway_ip: '152.16.100.20'
       downlink_0:
         phy_port: "0000:05:00.1"
         vpci: "0000:00:08.0"
         cidr: '152.16.40.10/24'
         gateway_ip: '152.16.100.20'


Enabling other Traffic generators
---------------------------------

IxLoad
^^^^^^

1. Software needed: IxLoadAPI ``<IxLoadTclApi version>Linux64.bin.tgz`` and
   ``<IxOS version>Linux64.bin.tar.gz`` (download from the Ixia support
   site). Install ``<IxLoadTclApi version>Linux64.bin.tgz`` and
   ``<IxOS version>Linux64.bin.tar.gz``.
   If the installation was not done inside the container, then after
   installing the IXIA client, check
   ``/opt/ixia/ixload/<ver>/bin/ixloadpython`` and make sure you can run this
   command inside the yardstick container. Usually you are required to copy
   or link ``/opt/ixia/python/<ver>/bin/ixiapython`` to
   ``/usr/bin/ixiapython<ver>`` inside the container.

2. Update the pod_ixia.yaml file with the Ixia details.

  .. code-block:: console

    cp <repo>/etc/yardstick/nodes/pod.yaml.nsb.sample.ixia etc/yardstick/nodes/pod_ixia.yaml

  Config pod_ixia.yaml

  .. code-block:: yaml

      nodes:
          -
            name: trafficgen_1
            role: IxNet
            ip: 1.2.1.1 # ixia machine ip
            user: user
            password: r00t
            key_filename: /root/.ssh/id_rsa
            tg_config:
                ixchassis: "1.2.1.7" # ixia chassis ip
                tcl_port: "8009" # tcl server port
                lib_path: "/opt/ixia/ixos-api/8.01.0.2/lib/ixTcl1.0"
                root_dir: "/opt/ixia/ixos-api/8.01.0.2/"
                py_bin_path: "/opt/ixia/ixload/8.01.106.3/bin/"
                py_lib_path: "/opt/ixia/ixnetwork/8.01.1029.14/lib/PythonApi"
                dut_result_dir: "/mnt/ixia"
                version: 8.1
            interfaces:
                xe0:  # logical name from topology.yaml and vnfd.yaml
                    vpci: "2:5" # Card:port
                    driver:    "none"
                    dpdk_port_num: 0
                    local_ip: "152.16.100.20"
                    netmask:   "255.255.0.0"
                    local_mac: "00:98:10:64:14:00"
                xe1:  # logical name from topology.yaml and vnfd.yaml
                    vpci: "2:6" # [(Card, port)]
                    driver:    "none"
                    dpdk_port_num: 1
                    local_ip: "152.40.40.20"
                    netmask:   "255.255.0.0"
                    local_mac: "00:98:28:28:14:00"

  For sriov/ovs_dpdk pod files, please refer to the Standalone Virtualization
  section above for the ovs-dpdk/sriov configuration.

3. Start the IxOS TCL Server (install 'Ixia IxExplorer IxOS <version>').
   You will also need to configure the IxLoad machine to start the IXIA
   IxosTclServer. This can be started like so:

   - Connect to the IxLoad machine using RDP
   - Go to:
     ``Start->Programs->Ixia->IxOS->IxOS 8.01-GA-Patch1->Ixia Tcl Server IxOS 8.01-GA-Patch1``
     or
     ``"C:\Program Files (x86)\Ixia\IxOS\8.01-GA-Patch1\ixTclServer.exe"``

4. Create a folder ``Results`` in c:\ and share the folder on the network.

5. Execute the testcase in the samplevnf folder, e.g.
   ``<repo>/samples/vnf_samples/nsut/vfw/tc_baremetal_http_ixload_1b_Requests-65000_Concurrency.yaml``

IxNetwork
^^^^^^^^^

1. Software needed: ``IxNetworkAPI<ixnetwork version>Linux64.bin.tgz``
   (download from the Ixia support site).
   Install ``IxNetworkAPI<ixnetwork version>Linux64.bin.tgz``.
2. Update the pod_ixia.yaml file with the Ixia details.

  .. code-block:: console

    cp <repo>/etc/yardstick/nodes/pod.yaml.nsb.sample.ixia etc/yardstick/nodes/pod_ixia.yaml

  Config pod_ixia.yaml

  .. code-block:: yaml

      nodes:
          -
            name: trafficgen_1
            role: IxNet
            ip: 1.2.1.1 # ixia machine ip
            user: user
            password: r00t
            key_filename: /root/.ssh/id_rsa
            tg_config:
                ixchassis: "1.2.1.7" # ixia chassis ip
                tcl_port: "8009" # tcl server port
                lib_path: "/opt/ixia/ixos-api/8.01.0.2/lib/ixTcl1.0"
                root_dir: "/opt/ixia/ixos-api/8.01.0.2/"
                py_bin_path: "/opt/ixia/ixload/8.01.106.3/bin/"
                py_lib_path: "/opt/ixia/ixnetwork/8.01.1029.14/lib/PythonApi"
                dut_result_dir: "/mnt/ixia"
                version: 8.1
            interfaces:
                xe0:  # logical name from topology.yaml and vnfd.yaml
                    vpci: "2:5" # Card:port
                    driver:    "none"
                    dpdk_port_num: 0
                    local_ip: "152.16.100.20"
                    netmask:   "255.255.0.0"
                    local_mac: "00:98:10:64:14:00"
                xe1:  # logical name from topology.yaml and vnfd.yaml
                    vpci: "2:6" # [(Card, port)]
                    driver:    "none"
                    dpdk_port_num: 1
                    local_ip: "152.40.40.20"
                    netmask:   "255.255.0.0"
                    local_mac: "00:98:28:28:14:00"

  For sriov/ovs_dpdk pod files, please refer to the Standalone Virtualization
  section above for the ovs-dpdk/sriov configuration.

3. Start the IxNetwork TCL Server.
   You will also need to configure the IxNetwork machine to start the IXIA
   IxNetworkTclServer. This can be started like so:

    - Connect to the IxNetwork machine using RDP
    - Go to:
      ``Start->Programs->Ixia->IxNetwork->IxNetwork 7.21.893.14 GA->IxNetworkTclServer`` (or ``IxNetworkApiServer``)

4. Execute the testcase in the samplevnf folder, e.g.
   ``<repo>/samples/vnf_samples/nsut/vfw/tc_baremetal_rfc2544_ipv4_1rule_1flow_64B_ixia.yaml``