.. This work is licensed under a Creative Commons Attribution 4.0 International
.. License.
.. http://creativecommons.org/licenses/by/4.0
.. (c) OPNFV, 2016-2017 Intel Corporation.

Yardstick - NSB Testing - Installation
======================================

Abstract
--------

The Network Service Benchmarking (NSB) extends the Yardstick framework to do
VNF characterization and benchmarking in three different execution
environments: bare metal (i.e. a native Linux environment), a standalone
virtualized environment, and a managed virtualized environment (e.g.
OpenStack). It also adds the capability to interact with external traffic
generators, both hardware and software based, for triggering and validating
traffic according to user-defined profiles.

The steps needed to run Yardstick with NSB testing are:

* Install Yardstick (NSB Testing).
* Set up the pod.yaml file describing the test topology.
* Create the test configuration yaml file.
* Run the test case.


Prerequisites
-------------

Refer to chapter Yardstick Installation for more information on Yardstick
prerequisites.

Several prerequisites are needed for Yardstick (VNF testing):

- Python modules: pyzmq, pika
- flex
- bison
- build-essential
- automake
- libtool
- librabbitmq-dev
- rabbitmq-server
- collectd
- intel-cmt-cat

Install Yardstick (NSB Testing)
-------------------------------

Using Docker
------------

Refer to chapter :doc:`04-installation` for more on installing Yardstick
using Docker (recommended).

Install directly in Ubuntu
--------------------------

.. _install-framework:

Alternatively, you can install the Yardstick framework directly in Ubuntu or
in an Ubuntu Docker image. Either way, the following installation steps are
identical.

If you choose to use the Ubuntu Docker image, you can pull the Ubuntu
Docker image from Docker Hub::

  docker pull ubuntu:16.04

Install Yardstick
^^^^^^^^^^^^^^^^^

Prerequisite preparation::

  apt-get update && apt-get install -y git python-setuptools python-pip
  easy_install -U setuptools==30.0.0
  pip install appdirs==1.4.0
  pip install virtualenv

Create a virtual environment::

  virtualenv ~/yardstick_venv
  export YARDSTICK_VENV=~/yardstick_venv
  source ~/yardstick_venv/bin/activate

Download the source code and install Yardstick from it::

  git clone https://gerrit.opnfv.org/gerrit/yardstick
  export YARDSTICK_REPO_DIR=~/yardstick
  cd yardstick
  ./install.sh


After *Yardstick* is installed, execute the ``nsb_setup.sh`` script to set up
NSB testing::

  ./nsb_setup.sh

It will also automatically download all the packages needed for the NSB
testing setup.

System Topology:
----------------

.. code-block:: console

  +----------+              +----------+
  |          |              |          |
  |          | (0)----->(0) |          |
  |    TG1   |              |    DUT   |
  |          |              |          |
  |          | (1)<-----(1) |          |
  +----------+              +----------+
  trafficgen_1                   vnf


Environment parameters and credentials
--------------------------------------

Environment variables
^^^^^^^^^^^^^^^^^^^^^

Before running Yardstick (NSB testing) it is necessary to export the traffic
generator libraries::

    source ~/.bash_profile

Config yardstick conf
^^^^^^^^^^^^^^^^^^^^^

::

    cp ./etc/yardstick/yardstick.conf.sample /etc/yardstick/yardstick.conf
    vi /etc/yardstick/yardstick.conf

Add trex_path, trex_client_lib and bin_path to the 'nsb' section.

::

  [DEFAULT]
  debug = True
  dispatcher = file, influxdb

  [dispatcher_influxdb]
  timeout = 5
  target = http://{YOUR_IP_HERE}:8086
  db_name = yardstick
  username = root
  password = root

  [nsb]
  trex_path=/opt/nsb_bin/trex/scripts
  bin_path=/opt/nsb_bin
  trex_client_lib=/opt/nsb_bin/trex_client/stl

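A missing key in the 'nsb' section is an easy mistake to make, so a quick
grep-based sanity check can help. This is a minimal sketch that checks an
inline copy of the sample values above; point ``CONF`` at the real
``/etc/yardstick/yardstick.conf`` on your system::

```shell
# Sketch: verify that all required [nsb] keys are set. Reads from an
# inline temp file here for illustration; use CONF=/etc/yardstick/yardstick.conf
# to check the real configuration.
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
[nsb]
trex_path=/opt/nsb_bin/trex/scripts
bin_path=/opt/nsb_bin
trex_client_lib=/opt/nsb_bin/trex_client/stl
EOF

missing=""
for key in trex_path bin_path trex_client_lib; do
    grep -q "^$key=" "$CONF" || missing="$missing $key"
done
echo "missing:$missing"   # prints "missing:" when every key is set
rm -f "$CONF"
```
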
Network Service Benchmarking - Bare-Metal
-----------------------------------------

Config pod.yaml describing Topology
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

2-Node setup:
^^^^^^^^^^^^^

.. code-block:: console

  +----------+              +----------+
  |          |              |          |
  |          | (0)----->(0) |          |
  |    TG1   |              |    DUT   |
  |          |              |          |
  |          | (n)<-----(n) |          |
  +----------+              +----------+
  trafficgen_1                   vnf

3-Node setup - Correlated Traffic
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. code-block:: console

  +----------+              +----------+            +------------+
  |          |              |          |            |            |
  |          |              |          |            |            |
  |          | (0)----->(0) |          |            |    UDP     |
  |    TG1   |              |    DUT   |            |   Replay   |
  |          |              |          |            |            |
  |          |              |          |(1)<---->(0)|            |
  +----------+              +----------+            +------------+
  trafficgen_1                   vnf                 trafficgen_2

Before executing Yardstick test cases, make sure that pod.yaml reflects the
topology and update all the required fields::

    cp /etc/yardstick/nodes/pod.yaml.nsb.sample /etc/yardstick/nodes/pod.yaml

Config pod.yaml::

    nodes:
    -
        name: trafficgen_1
        role: TrafficGen
        ip: 1.1.1.1
        user: root
        password: r00t
        interfaces:
            xe0:  # logical name from topology.yaml and vnfd.yaml
                vpci:      "0000:07:00.0"
                driver:    i40e # default kernel driver
                dpdk_port_num: 0
                local_ip: "152.16.100.20"
                netmask:   "255.255.255.0"
                local_mac: "00:00:00:00:00:01"
            xe1:  # logical name from topology.yaml and vnfd.yaml
                vpci:      "0000:07:00.1"
                driver:    i40e # default kernel driver
                dpdk_port_num: 1
                local_ip: "152.16.40.20"
                netmask:   "255.255.255.0"
                local_mac: "00:00:00:00:00:02"

    -
        name: vnf
        role: vnf
        ip: 1.1.1.2
        user: root
        password: r00t
        host: 1.1.1.2 # BM - host == ip; virtualized env - host == compute node
        interfaces:
            xe0:  # logical name from topology.yaml and vnfd.yaml
                vpci:      "0000:07:00.0"
                driver:    i40e # default kernel driver
                dpdk_port_num: 0
                local_ip: "152.16.100.19"
                netmask:   "255.255.255.0"
                local_mac: "00:00:00:00:00:03"

            xe1:  # logical name from topology.yaml and vnfd.yaml
                vpci:      "0000:07:00.1"
                driver:    i40e # default kernel driver
                dpdk_port_num: 1
                local_ip: "152.16.40.19"
                netmask:   "255.255.255.0"
                local_mac: "00:00:00:00:00:04"
        routing_table:
        - network: "152.16.100.20"
          netmask: "255.255.255.0"
          gateway: "152.16.100.20"
          if: "xe0"
        - network: "152.16.40.20"
          netmask: "255.255.255.0"
          gateway: "152.16.40.20"
          if: "xe1"
        nd_route_tbl:
        - network: "0064:ff9b:0:0:0:0:9810:6414"
          netmask: "112"
          gateway: "0064:ff9b:0:0:0:0:9810:6414"
          if: "xe0"
        - network: "0064:ff9b:0:0:0:0:9810:2814"
          netmask: "112"
          gateway: "0064:ff9b:0:0:0:0:9810:2814"
          if: "xe1"

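A malformed ``local_mac`` (for example a ``.`` where a ``:`` belongs) is a
common pod.yaml mistake that only surfaces later as a confusing runtime
failure. A minimal sketch of a check that lists only the malformed
addresses; it runs against an inline two-line sample here, and you would
point it at your real ``/etc/yardstick/nodes/pod.yaml``::

```shell
# Sketch: extract every local_mac value from a pod.yaml-style file and
# print only those that do not match the aa:bb:cc:dd:ee:ff form.
check_macs() {
    grep -o 'local_mac: *"[^"]*"' "$1" | sed 's/.*"\(.*\)"/\1/' |
    grep -Ev '^([0-9a-fA-F]{2}:){5}[0-9a-fA-F]{2}$' || true
}

# Inline sample: one good address, one malformed one.
POD=$(mktemp)
cat > "$POD" <<'EOF'
local_mac: "00:00:00:00:00:01"
local_mac: "00:00.00:00:00:02"
EOF
bad=$(check_macs "$POD")
echo "$bad"   # lists only the malformed address
rm -f "$POD"
```
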
Enable yardstick virtual environment
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Before executing yardstick test cases, make sure to activate the yardstick
Python virtual environment if running on Ubuntu without Docker::

    source /opt/nsb_bin/yardstick_venv/bin/activate

In the Docker image, the virtual environment is already on the main path.

Run Yardstick - Network Service Testcases
-----------------------------------------

NS testing - using NSBperf CLI
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

::

  source ~/.bash_profile
  cd <yardstick_repo>/yardstick/cmd

Execute the command (``./NSBperf.py -h`` lists all options)::

  ./NSBperf.py --vnf <selected vnf> --test <rfc test>
  e.g.: ./NSBperf.py --vnf vpe --test tc_baremetal_rfc2544_ipv4_1flow_64B.yaml

NS testing - using yardstick CLI
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

::

  source ~/.bash_profile

Go to the folder of the test case type you want to execute,
e.g. ``<yardstick repo>/samples/vnf_samples/nsut/<vnf>/``, and run::

  yardstick --debug task start <test_case.yaml>

Network Service Benchmarking - Standalone Virtualization
--------------------------------------------------------

SRIOV:
------

Pre-requisites
^^^^^^^^^^^^^^

On Host:

 a) Create a bridge for the VM to connect to the external network::

       brctl addbr br-int
       brctl addif br-int <interface_name>    # interface connected to internet

 b) Build a guest image for the VNF to run.
    Most of the sample test cases in Yardstick use a guest image called
    ``yardstick-image``, which deviates from an Ubuntu Cloud Server image.
    Yardstick has a tool for building this custom image with samplevnf.
    It is necessary to have ``sudo`` rights to use this tool.

    You may also need to install several additional packages to use this
    tool, by following the commands below::

       sudo apt-get update && sudo apt-get install -y qemu-utils kpartx

    This image can be built using the following command in the directory
    where Yardstick is installed::

       export YARD_IMG_ARCH='amd64'
       sudo echo "Defaults env_keep += \'YARD_IMG_ARCH\'" >> /etc/sudoers
       sudo tools/yardstick-img-dpdk-modify tools/ubuntu-server-cloudimg-samplevnf-modify.sh

    For more details refer to chapter :doc:`04-installation`.

Note: The VM should be built with a static IP and should be accessible from
the yardstick host.

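The physical ports listed under ``phy_ports`` in pod.yaml need SR-IOV VFs
created on the host before the test can run. A minimal dry-run sketch of
the usual sysfs-based approach; the PF netdev name (``enp6s0f0``) and VF
count are illustrative assumptions, not values taken from this guide --
check yours with ``lspci`` and ``ip link``::

```shell
# Dry-run sketch: create SR-IOV VFs for a physical port. The "run"
# wrapper only echoes each command; drop it to execute for real
# (root rights required on the host).
run() { echo "+ $*"; }

PF_IF=enp6s0f0   # hypothetical netdev name for PCI 0000:06:00.0
NUM_VFS=1

run modprobe i40e
run sh -c "echo $NUM_VFS > /sys/class/net/$PF_IF/device/sriov_numvfs"
run ip link show "$PF_IF"
```
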
Config pod.yaml describing Topology
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

2-Node setup:
^^^^^^^^^^^^^

.. code-block:: console

                               +--------------------+
                               |                    |
                               |                    |
                               |        DUT         |
                               |       (VNF)        |
                               |                    |
                               +--------------------+
                               | VF NIC |  | VF NIC |
                               +--------+  +--------+
                                    ^          ^
                                    |          |
                                    |          |
                               +--------+  +--------+
                               | PF NIC |  | PF NIC |
  +----------+               +-------------------------+
  |          |               |       ^          ^      |
  |          |               |       |          |      |
  |          | (0)<----->(0) | ------           |      |
  |    TG1   |               |           SUT    |      |
  |          |               |                  |      |
  |          | (n)<----->(n) |------------------       |
  +----------+               +-------------------------+
  trafficgen_1                          host


3-Node setup - Correlated Traffic
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. code-block:: console

                               +--------------------+
                               |                    |
                               |                    |
                               |        DUT         |
                               |       (VNF)        |
                               |                    |
                               +--------------------+
                               | VF NIC |  | VF NIC |
                               +--------+  +--------+
                                    ^          ^
                                    |          |
                                    |          |
                               +--------+  +--------+
                               | PF NIC |  | PF NIC |
  +----------+               +-------------------------+          +------------+
  |          |               |       ^          ^      |          |            |
  |          |               |       |          |      |          |            |
  |          | (0)<----->(0) | ------           |      |          |    TG2     |
  |    TG1   |               |           SUT    |      |          |(UDP Replay)|
  |          |               |                  |      |          |            |
  |          | (n)<----->(n) |                  ------ |(n)<-->(n)|            |
  +----------+               +-------------------------+          +------------+
  trafficgen_1                          host                       trafficgen_2

Before executing Yardstick test cases, make sure that pod.yaml reflects the
topology and update all the required fields::

    cp /etc/yardstick/nodes/pod.yaml.nsb.sriov.sample /etc/yardstick/nodes/pod.yaml

Config pod.yaml::

    nodes:
    -
        name: trafficgen_1
        role: TrafficGen
        ip: 1.1.1.1
        user: root
        password: r00t
        interfaces:
            xe0:  # logical name from topology.yaml and vnfd.yaml
                vpci:      "0000:07:00.0"
                driver:    i40e # default kernel driver
                dpdk_port_num: 0
                local_ip: "152.16.100.20"
                netmask:   "255.255.255.0"
                local_mac: "00:00:00:00:00:01"
            xe1:  # logical name from topology.yaml and vnfd.yaml
                vpci:      "0000:07:00.1"
                driver:    i40e # default kernel driver
                dpdk_port_num: 1
                local_ip: "152.16.40.20"
                netmask:   "255.255.255.0"
                local_mac: "00:00:00:00:00:02"

    -
        name: sriov
        role: Sriov
        ip: 2.2.2.2
        user: root
        auth_type: password
        password: password
        vf_macs:
         - "00:00:00:00:00:03"
         - "00:00:00:00:00:04"
        phy_ports: # Physical ports to configure sriov
         - "0000:06:00.0"
         - "0000:06:00.1"
        phy_driver:    i40e # kernel driver
        images: "/var/lib/libvirt/images/ubuntu1.img"

    -
        name: vnf
        role: vnf
        ip: 1.1.1.2
        user: root
        password: r00t
        host: 2.2.2.2 # BM - host == ip; virtualized env - host == compute node
        interfaces:
            xe0:  # logical name from topology.yaml and vnfd.yaml
                vpci:      "0000:00:07.0"
                driver:    i40evf # default kernel driver
                dpdk_port_num: 0
                local_ip: "152.16.100.10"
                netmask:   "255.255.255.0"
                local_mac: "00:00:00:00:00:03"

            xe1:  # logical name from topology.yaml and vnfd.yaml
                vpci:      "0000:00:08.0"
                driver:    i40evf # default kernel driver
                dpdk_port_num: 1
                local_ip: "152.16.40.10"
                netmask:   "255.255.255.0"
                local_mac: "00:00:00:00:00:04"
        routing_table:
        - network: "152.16.100.10"
          netmask: "255.255.255.0"
          gateway: "152.16.100.20"
          if: "xe0"
        - network: "152.16.40.10"
          netmask: "255.255.255.0"
          gateway: "152.16.40.20"
          if: "xe1"
        nd_route_tbl:
        - network: "0064:ff9b:0:0:0:0:9810:6414"
          netmask: "112"
          gateway: "0064:ff9b:0:0:0:0:9810:6414"
          if: "xe0"
        - network: "0064:ff9b:0:0:0:0:9810:2814"
          netmask: "112"
          gateway: "0064:ff9b:0:0:0:0:9810:2814"
          if: "xe1"

Enable yardstick virtual environment
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Before executing yardstick test cases, make sure to activate the yardstick
Python virtual environment if running on Ubuntu without Docker::

    source /opt/nsb_bin/yardstick_venv/bin/activate

In the Docker image, the virtual environment is already on the main path.

Run Yardstick - Network Service Testcases
-----------------------------------------

NS testing - using NSBperf CLI
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

::

  source ~/.bash_profile
  cd <yardstick_repo>/yardstick/cmd

Execute the command (``./NSBperf.py -h`` lists all options)::

  ./NSBperf.py --vnf <selected vnf> --test <rfc test>
  e.g.: ./NSBperf.py --vnf vfw --test tc_sriov_rfc2544_ipv4_1flow_64B.yaml

NS testing - using yardstick CLI
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

::

  source ~/.bash_profile

Go to the folder of the test case type you want to execute,
e.g. ``<yardstick repo>/samples/vnf_samples/nsut/<vnf>/``, and run::

  yardstick --debug task start <test_case.yaml>

OVS-DPDK:
---------

Pre-requisites
^^^^^^^^^^^^^^

On Host:

 a) Create a bridge for the VM to connect to the external network::

       brctl addbr br-int
       brctl addif br-int <interface_name>    # interface connected to internet

 b) Build a guest image for the VNF to run.
    Most of the sample test cases in Yardstick use a guest image called
    ``yardstick-image``, which deviates from an Ubuntu Cloud Server image.
    Yardstick has a tool for building this custom image with samplevnf.
    It is necessary to have ``sudo`` rights to use this tool.

    You may also need to install several additional packages to use this
    tool, by following the commands below::

       sudo apt-get update && sudo apt-get install -y qemu-utils kpartx

    This image can be built using the following command in the directory
    where Yardstick is installed::

       export YARD_IMG_ARCH='amd64'
       sudo echo "Defaults env_keep += \'YARD_IMG_ARCH\'" >> /etc/sudoers
       sudo tools/yardstick-img-dpdk-modify tools/ubuntu-server-cloudimg-samplevnf-modify.sh

    For more details refer to chapter :doc:`04-installation`.

Note: The VM should be built with a static IP and should be accessible from
the yardstick host.

 c) OVS & DPDK version:
    OVS 2.7 with DPDK 16.11.1 or above is supported.

 d) Set up OVS/DPDK on the host. Please refer to the `OVS-DPDK installation
    guide <http://docs.openvswitch.org/en/latest/intro/install/dpdk/>`_ on
    how to set it up.

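Once OVS is built with DPDK support, the datapath the sample pod.yaml below
expects is a ``netdev`` bridge with physical DPDK ports and vhost-user
ports. A minimal dry-run sketch of that wiring; bridge and port names
follow the sample, ``dpdk-devargs`` requires OVS 2.7 or later, and this is
a sketch of one possible setup rather than the full procedure from the
guide linked above::

```shell
# Dry-run sketch: netdev bridge with one DPDK physical port and one
# vhost-user port. The "run" wrapper only echoes each command; drop it
# to execute for real on a host with OVS-DPDK installed.
run() { echo "+ $*"; }

run ovs-vsctl add-br br0 -- set bridge br0 datapath_type=netdev
run ovs-vsctl add-port br0 dpdk0 -- set Interface dpdk0 type=dpdk \
    options:dpdk-devargs=0000:06:00.0
run ovs-vsctl add-port br0 dpdkvhostuser0 -- set Interface dpdkvhostuser0 \
    type=dpdkvhostuser
```
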
Config pod.yaml describing Topology
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

2-Node setup:
^^^^^^^^^^^^^

.. code-block:: console

                               +--------------------+
                               |                    |
                               |                    |
                               |        DUT         |
                               |       (VNF)        |
                               |                    |
                               +--------------------+
                               | virtio |  | virtio |
                               +--------+  +--------+
                                    ^          ^
                                    |          |
                                    |          |
                               +--------+  +--------+
                               | vHOST0 |  | vHOST1 |
  +----------+               +-------------------------+
  |          |               |       ^          ^      |
  |          |               |       |          |      |
  |          | (0)<----->(0) | ------           |      |
  |    TG1   |               |          SUT     |      |
  |          |               |       (ovs-dpdk) |      |
  |          | (n)<----->(n) |------------------       |
  +----------+               +-------------------------+
  trafficgen_1                          host


3-Node setup - Correlated Traffic
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. code-block:: console

                               +--------------------+
                               |                    |
                               |                    |
                               |        DUT         |
                               |       (VNF)        |
                               |                    |
                               +--------------------+
                               | virtio |  | virtio |
                               +--------+  +--------+
                                    ^          ^
                                    |          |
                                    |          |
                               +--------+  +--------+
                               | vHOST0 |  | vHOST1 |
  +----------+               +-------------------------+          +------------+
  |          |               |       ^          ^      |          |            |
  |          |               |       |          |      |          |            |
  |          | (0)<----->(0) | ------           |      |          |    TG2     |
  |    TG1   |               |          SUT     |      |          |(UDP Replay)|
  |          |               |      (ovs-dpdk)  |      |          |            |
  |          | (n)<----->(n) |                  ------ |(n)<-->(n)|            |
  +----------+               +-------------------------+          +------------+
  trafficgen_1                          host                       trafficgen_2


Before executing Yardstick test cases, make sure that pod.yaml reflects the
topology and update all the required fields::

    cp /etc/yardstick/nodes/pod.yaml.nsb.ovs.sample /etc/yardstick/nodes/pod.yaml

Config pod.yaml::

    nodes:
    -
        name: trafficgen_1
        role: TrafficGen
        ip: 1.1.1.1
        user: root
        password: r00t
        interfaces:
            xe0:  # logical name from topology.yaml and vnfd.yaml
                vpci:      "0000:07:00.0"
                driver:    i40e # default kernel driver
                dpdk_port_num: 0
                local_ip: "152.16.100.20"
                netmask:   "255.255.255.0"
                local_mac: "00:00:00:00:00:01"
            xe1:  # logical name from topology.yaml and vnfd.yaml
                vpci:      "0000:07:00.1"
                driver:    i40e # default kernel driver
                dpdk_port_num: 1
                local_ip: "152.16.40.20"
                netmask:   "255.255.255.0"
                local_mac: "00:00:00:00:00:02"

    -
        name: ovs
        role: Ovsdpdk
        ip: 2.2.2.2
        user: root
        auth_type: password
        password: <password>
        vpath: "/usr/local/"
        vports:
         - dpdkvhostuser0
         - dpdkvhostuser1
        vports_mac:
         - "00:00:00:00:00:03"
         - "00:00:00:00:00:04"
        phy_ports: # Physical ports to configure ovs
         - "0000:06:00.0"
         - "0000:06:00.1"
        flow:
         - ovs-ofctl add-flow br0 in_port=1,action=output:3
         - ovs-ofctl add-flow br0 in_port=3,action=output:1
         - ovs-ofctl add-flow br0 in_port=4,action=output:2
         - ovs-ofctl add-flow br0 in_port=2,action=output:4
        phy_driver:    i40e # kernel driver
        images: "/var/lib/libvirt/images/ubuntu1.img"

    -
        name: vnf
        role: vnf
        ip: 1.1.1.2
        user: root
        password: r00t
        host: 2.2.2.2 # BM - host == ip; virtualized env - host == compute node
        interfaces:
            xe0:  # logical name from topology.yaml and vnfd.yaml
                vpci:      "0000:00:04.0"
                driver:    virtio-pci # default kernel driver
                dpdk_port_num: 0
                local_ip: "152.16.100.10"
                netmask:   "255.255.255.0"
                local_mac: "00:00:00:00:00:03"

            xe1:  # logical name from topology.yaml and vnfd.yaml
                vpci:      "0000:00:05.0"
                driver:    virtio-pci # default kernel driver
                dpdk_port_num: 1
                local_ip: "152.16.40.10"
                netmask:   "255.255.255.0"
                local_mac: "00:00:00:00:00:04"
        routing_table:
        - network: "152.16.100.10"
          netmask: "255.255.255.0"
          gateway: "152.16.100.20"
          if: "xe0"
        - network: "152.16.40.10"
          netmask: "255.255.255.0"
          gateway: "152.16.40.20"
          if: "xe1"
        nd_route_tbl:
        - network: "0064:ff9b:0:0:0:0:9810:6414"
          netmask: "112"
          gateway: "0064:ff9b:0:0:0:0:9810:6414"
          if: "xe0"
        - network: "0064:ff9b:0:0:0:0:9810:2814"
          netmask: "112"
          gateway: "0064:ff9b:0:0:0:0:9810:2814"
          if: "xe1"

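The ``flow:`` entries in the sample above cross-connect the two physical
ports (OpenFlow ports 1 and 2) with the two vhost-user ports (3 and 4).
The mapping can be sketched as a small generator, which makes the pattern
easier to adapt; the port numbers are the sample's, so verify yours with
``ovs-ofctl show br0`` before applying anything::

```shell
# Sketch: regenerate the four ovs-ofctl flow rules from in_port:out_port
# pairs. Prints the commands instead of running them.
flow_rule() { echo "ovs-ofctl add-flow $1 in_port=$2,action=output:$3"; }

for pair in 1:3 3:1 4:2 2:4; do
    flow_rule br0 "${pair%:*}" "${pair#*:}"
done
```
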
Enable yardstick virtual environment
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Before executing yardstick test cases, make sure to activate the yardstick
Python virtual environment if running on Ubuntu without Docker::

    source /opt/nsb_bin/yardstick_venv/bin/activate

In the Docker image, the virtual environment is already on the main path.

Run Yardstick - Network Service Testcases
-----------------------------------------

NS testing - using NSBperf CLI
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

::

  source ~/.bash_profile
  cd <yardstick_repo>/yardstick/cmd

Execute the command (``./NSBperf.py -h`` lists all options)::

  ./NSBperf.py --vnf <selected vnf> --test <rfc test>
  e.g.: ./NSBperf.py --vnf vfw --test tc_ovs_rfc2544_ipv4_1flow_64B.yaml

NS testing - using yardstick CLI
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

::

  source ~/.bash_profile

Go to the folder of the test case type you want to execute,
e.g. ``<yardstick repo>/samples/vnf_samples/nsut/<vnf>/``, and run::

  yardstick --debug task start <test_case.yaml>