This document describes the steps to create a new NSB PROX test based on
existing PROX functionalities. NSB PROX provides a simple approximation
of an operation and can be used to develop best practices and TCO models
for Telco customers, investigate the impact of new Intel compute,
network and storage technologies, characterize performance, and develop
optimal system architectures and configurations.

NSB PROX supports baremetal, Openstack and standalone configurations.
In order to integrate PROX tests into NSB, the following prerequisites are
required.
.. _`dpdk wiki page`: https://www.dpdk.org/
.. _`yardstick wiki page`: https://wiki.opnfv.org/display/yardstick/
.. _`Prox documentation`: https://01.org/intel-data-plane-performance-demonstrators/documentation/prox-documentation
.. _`openstack wiki page`: https://wiki.openstack.org/wiki/Main_Page
.. _`grafana getting started`: http://docs.grafana.org/guides/gettingstarted/
.. _`opnfv grafana dashboard`: https://wiki.opnfv.org/display/yardstick/How+to+work+with+grafana+dashboard
.. _`Prox command line`: https://01.org/intel-data-plane-performance-demonstrators/documentation/prox-documentation#Command_line_options
.. _`grafana deployment`: https://wiki.opnfv.org/display/yardstick/How+to+deploy+InfluxDB+and+Grafana+locally
.. _`Prox options`: https://01.org/intel-data-plane-performance-demonstrators/documentation/prox-documentation#.5Beal_options.5D
.. _`NSB Installation`: http://artifacts.opnfv.org/yardstick/docs/userguide/index.html#document-09-installation
* A working knowledge of Yardstick. See `yardstick wiki page`_.
* A working knowledge of PROX. See `Prox documentation`_.
* Knowledge of Openstack. See `openstack wiki page`_.
* Knowledge of how to use Grafana. See `grafana getting started`_.
* How to deploy InfluxDB & Grafana. See `grafana deployment`_.
* How to use Grafana in OPNFV/Yardstick. See `opnfv grafana dashboard`_.
* How to install NSB. See `NSB Installation`_.
Sample Prox Test Hardware Architecture
======================================

The following is a diagram of a sample NSB PROX hardware architecture
for both NSB PROX on bare metal and on Openstack.
In this example, when running yardstick on baremetal, yardstick will
run on the deployment node, the generator will run on the deployment node
and the SUT (System Under Test) will run on the Controller Node.
.. image:: images/PROX_Hardware_Arch.png
   :alt: Sample NSB PROX Hardware Architecture
Prox Test Architecture
======================

In order to create a new test, one must understand the architecture of
the test.

An NSB Prox test architecture is composed of:
* A traffic generator. This provides blocks of data on 1 or more ports
  to the SUT. The traffic generator also consumes the result packets
  from the system under test.

* A SUT. This consumes the packets generated by the packet
  generator, applies one or more tasks to the packets and returns the
  modified packets to the traffic generator.
This is an example of a sample NSB PROX test architecture.

.. image:: images/PROX_Software_Arch.png
   :alt: NSB PROX test Architecture

This diagram is of a sample NSB PROX test application.
* Generator Tasks - Composed of 1 or more tasks (it is possible to
  have multiple tasks sending packets to the same port number; see Tasks
  Ai and Aii).
  * Task Ai - Generates packets on Port 0 of the Traffic Generator
    and sends them to Port 0 of the SUT.
  * Task Aii - Generates packets on Port 0 of the Traffic Generator
    and sends them to Port 0 of the SUT.
  * Task B - Generates packets on Port 1 of the Traffic Generator
    and sends them to Port 1 of the SUT.
  * Task C - Generates packets on Port 2 of the Traffic Generator
    and sends them to Port 2 of the SUT.
  * Task Di - Generates packets on Port 3 of the Traffic Generator
    and sends them to Port 3 of the SUT.
  * Task Dii - Generates packets on Port 0 of the Traffic Generator
    and sends them to Port 0 of the SUT.
* Verifier Tasks - Composed of 1 or more tasks which receive
  the packets returned by the SUT.
  * Task E - Receives packets on Port 0 of the Traffic Generator sent
    from Port 0 of the SUT.
  * Task F - Receives packets on Port 1 of the Traffic Generator sent
    from Port 1 of the SUT.
  * Task G - Receives packets on Port 2 of the Traffic Generator sent
    from Port 2 of the SUT.
  * Task H - Receives packets on Port 3 of the Traffic Generator sent
    from Port 3 of the SUT.
* Receiver Tasks - Receives packets from the generator - Composed of 1 or
  more tasks which consume the packets sent from the Traffic Generator.
  * Task A - Receives packets on Port 0 of the System-Under-Test from
    Traffic Generator Port 0, and forwards packets to Task E.
  * Task B - Receives packets on Port 1 of the System-Under-Test from
    Traffic Generator Port 1, and forwards packets to Task E.
  * Task C - Receives packets on Port 2 of the System-Under-Test from
    Traffic Generator Port 2, and forwards packets to Task E.
  * Task D - Receives packets on Port 3 of the System-Under-Test from
    Traffic Generator Port 3, and forwards packets to Task E.
* Processing Tasks - Composed of multiple tasks in series which carry
  out some processing on received packets before forwarding to the
  Transmitter tasks.

  * Task E - Receives packets from the Receiver Tasks,
    carries out some operation on the data and forwards the result
    packets to the next task in the sequence - Task F.
  * Task F - Receives packets from the previous task - Task E,
    carries out some operation on the data and forwards the result
    packets to the next task in the sequence - Task G.
  * Task G - Receives packets from the previous task - Task F,
    and distributes the result packets to the Transmitter tasks.
* Transmitter Tasks - Composed of 1 or more tasks which send the
  processed packets back to the Traffic Generator.

  * Task H - Receives packets from Task G of the System-Under-Test and
    sends packets to Traffic Generator Port 0.
  * Task I - Receives packets from Task G of the System-Under-Test and
    sends packets to Traffic Generator Port 1.
  * Task J - Receives packets from Task G of the System-Under-Test and
    sends packets to Traffic Generator Port 2.
  * Task K - Receives packets from Task G of the System-Under-Test and
    sends packets to Traffic Generator Port 3.
An NSB Prox test is composed of the following components:
* Test Description File. Usually called
  ``tc_prox_<context>_<test>-<ports>.yaml`` where

  * <context> is either ``baremetal`` or ``heat_context``
  * <test> is a one or two word description of the test.
  * <ports> is the number of ports used
  Example tests: ``tc_prox_baremetal_l2fwd-2.yaml`` or
  ``tc_prox_heat_context_vpe-4.yaml``. This file describes the components
  of the test: in the case of openstack, the network and
  server descriptions; in the case of baremetal, the location of the
  hardware description. It also contains the name of the Traffic Generator,
  the SUT config file and the traffic profile description, all described
  below. See `Test Description File`_.
* Traffic Profile file. Example: ``prox_binsearch.yaml``. This describes
  the packet size, tolerated loss, initial line rate to start traffic at,
  test interval etc. See `Traffic Profile File`_.
* Traffic Generator Config file. Usually called ``gen_<test>-<ports>.cfg``.

  This describes the activity of the traffic generator:

  * What each core of the traffic generator does,
  * The packet of data sent by a core on a port of the traffic generator
    to the system under test,
  * Which core is used to wait on which port for data from the system
    under test.

  Example traffic generator config file: ``gen_l2fwd-4.cfg``.
  See `Traffic Generator Config file`_.
* SUT Config file. Usually called ``handle_<test>-<ports>.cfg``.

  This describes the activity of the SUT:

  * What each core of the SUT does,
  * Which cores receive packets from which ports,
  * Which cores perform operations on the packets and pass the packets on
    to the transmit cores,
  * Which cores receive packets from which cores and transmit the packets
    on the ports to the Traffic Verifier tasks of the Traffic Generator.

  Example SUT config file: ``handle_l2fwd-4.cfg``.
  See `SUT Config File`_.
* NSB PROX Baremetal Configuration file. Usually called
  ``prox-baremetal-<ports>.yaml``, where

  * <ports> is the number of ports used

  This is required for baremetal only. It describes the hardware, NICs,
  IP addresses, network drivers, usernames and passwords.
  See `Baremetal Configuration File`_.
* Grafana Dashboard. Usually called
  ``Prox_<context>_<test>-<port>-<DateAndTime>.json`` where

  * <context> is ``BM``, ``heat``, ``ovs_dpdk`` or ``sriov``
  * <test> is a one or two word description of the test.
  * <port> is the number of ports used, expressed as ``2Port`` or ``4Port``
  * <DateAndTime> is the date and time expressed as a string.

  Example grafana dashboard: ``Prox_BM_L2FWD-4Port-1507804504588.json``.
Other files may be required. These are test specific files and will be
covered later.
Test Description File
---------------------

Here we will discuss the test description files for
baremetal, openstack and standalone.
Test Description File for Baremetal
-----------------------------------

This section will introduce the meaning of the test case description
file. We will use ``tc_prox_baremetal_l2fwd-2.yaml`` as an example to
show you how to understand the test description file.
.. image:: images/PROX_Test_BM_Script.png
   :alt: NSB PROX Test Description File

Now let's examine the components of the file in detail.
1. ``traffic_profile`` - This specifies the traffic profile for the
   test. In this case ``prox_binsearch.yaml`` is used. See
   `Traffic Profile File`_.

2. ``topology`` - This is either ``prox-tg-topology-1.yaml``,
   ``prox-tg-topology-2.yaml`` or ``prox-tg-topology-4.yaml``,
   depending on the number of ports required.

3. ``nodes`` - This names the Traffic Generator and the System
   under Test. It does not need to change.

4. ``interface_speed_gbps`` - This is an optional parameter. If not
   present the system defaults to 10Gbps. This defines the speed of the
   interfaces.

5. ``collectd`` - (Optional) This specifies that we want to collect NFVI
   statistics like CPU utilization.

6. ``prox_path`` - Location of the Prox executable on the traffic
   generator (either baremetal or Openstack Virtual Machine).

7. ``prox_config`` - This is the ``SUT Config File``.
   In this case it is ``handle_l2fwd-2.cfg``.
   A number of additional parameters can be added, as in this example::

     interface_speed_gbps: 10
     prox_path: /opt/nsb_bin/prox
     prox_config: ``configs/handle_vpe-4.cfg``
     prox_files:
       ``configs/vpe_ipv4.lua`` : ````
       ``configs/vpe_dscp.lua`` : ````
       ``configs/vpe_cpe_table.lua`` : ````
       ``configs/vpe_user_table.lua`` : ````
       ``configs/vpe_rules.lua`` : ````
     prox_generate_parameter: True
   ``interface_speed_gbps`` - This specifies the speed of the interface
   in Gigabits Per Second. It is used to calculate pps (packets per
   second). If the interfaces are of different speeds, then this specifies
   the speed of the slowest interface. This parameter is optional. If
   omitted the interface speed defaults to 10Gbps.

   ``traffic_config`` - This allows the values here to override the values
   in the traffic_profile file, e.g. ``prox_binsearch.yaml``. Values
   provided here override values provided in the ``traffic_profile``
   section of the traffic_profile file. Some, all or none of the values
   can be provided here.

   The values describe the packet size, tolerated loss, initial line rate
   to start traffic at, test interval etc. See `Traffic Profile File`_.
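   For example, a hedged sketch of such an override (the keys are those
   described under `Traffic Profile File`_; the values are illustrative
   only, not taken from this document)::

     traffic_config:
       tolerated_loss: 0.01
       test_precision: 0.1
       packet_sizes: [64]
       duration: 30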
   ``prox_files`` - This specifies that a number of additional files
   need to be provided for the test to run correctly. These files
   could provide routing information, hashing information or a
   hashing algorithm and ip/mac information.

   ``prox_generate_parameter`` - This specifies that the NSB application
   is required to provide information to the NSB Prox in the form
   of a file called ``parameters.lua``, which contains information
   retrieved from either the hardware or the openstack configuration.
8. ``prox_args`` - This specifies the command line arguments used to
   start prox. See `prox command line`_.

9. ``prox_config`` - This specifies the Traffic Generator config file.
10. ``runner`` - This is set to ``ProxDuration``, which specifies that the
    test runs for a set duration. Other runner types are available
    but it is recommended to use ``ProxDuration``. The following
    parameters are supported:
    ``interval`` - (optional) - This specifies the sampling interval.

    ``sampled`` - (optional) - This specifies if sampling information is
    required. Default ``no``.

    ``duration`` - This is the length of the test in seconds. Default is
    60 seconds.

    ``confirmation`` - This specifies the number of confirmation retests
    to be made before deciding to increase or decrease line speed.
    Default 0.
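    Putting these parameters together, a sketch of a ``runner`` section
    (the values shown are illustrative only, not prescribed by this
    document)::

      runner:
        type: ProxDuration
        interval: 1
        sampled: yes
        duration: 600
        confirmation: 1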
11. ``context`` - This is the ``context`` for a 2 port baremetal
    configuration.

    If a 4 port configuration was required then the file
    ``prox-baremetal-4.yaml`` would be used. This is the NSB Prox
    baremetal configuration file.
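    As an illustration only, a baremetal configuration file lists the
    Traffic Generator and SUT nodes with their management addresses,
    credentials and interface details. Every name, address and value below
    is a hypothetical placeholder, not taken from this document::

      nodes:
      -
        name: "tg_0"
        role: TrafficGen
        ip: 10.10.10.10
        user: "root"
        password: "r00t"
        interfaces:
          xe0:
            vpci: "0000:05:00.0"
            local_mac: "aa:bb:cc:dd:ee:01"
            driver: "i40e"
            dpdk_port_num: 0
      -
        name: "vnf_0"
        role: VNF
        ip: 10.10.10.11
        user: "root"
        password: "r00t"
        interfaces:
          xe0:
            vpci: "0000:05:00.1"
            local_mac: "aa:bb:cc:dd:ee:02"
            driver: "i40e"
            dpdk_port_num: 0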
Traffic Profile File
--------------------

This describes the details of the traffic flow. In this case
``prox_binsearch.yaml`` is used.

.. image:: images/PROX_Traffic_profile.png
   :alt: NSB PROX Traffic Profile
1. ``name`` - The name of the traffic profile. This name should match the
   name specified in the ``traffic_profile`` field in the Test
   Description File.

2. ``traffic_type`` - This specifies the type of traffic pattern
   generated. This name matches the class name of the traffic generator.
   See::

     network_services/traffic_profile/prox_binsearch.py class ProxBinSearchProfile(ProxProfile)
   In this case it lowers the traffic rate until the number of packets
   sent is equal to the number of packets received (plus a
   tolerated loss). Once it achieves this it increases the traffic
   rate in order to find the highest rate with no traffic loss.

   Custom traffic types can be created by creating a new traffic
   profile class.
3. ``tolerated_loss`` - This specifies the percentage of packets that
   can be lost/dropped before we declare failure. Success means the
   number of packets received by the Traffic Generator is greater than
   or equal to the number of packets transmitted minus the tolerated
   loss.
4. ``test_precision`` - This specifies the precision of the test
   results. For some tests the success criteria may never be
   achieved because the test precision may be greater than the
   successful throughput. For finer results increase the precision
   by making this value smaller.

5. ``packet_sizes`` - This specifies the range of packet sizes this
   test is run for.

6. ``duration`` - This specifies the sample duration that the test
   uses to check for success or failure.
7. ``lower_bound`` - This specifies the test initial lower bound sample
   rate. On success this value is increased.

8. ``upper_bound`` - This specifies the test initial upper bound sample
   rate. On success this value is decreased.
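Putting fields 1 to 8 together, a minimal traffic profile might look like
this sketch (the values shown are illustrative only)::

  name: prox_binsearch
  traffic_profile:
    traffic_type: ProxBinSearchProfile
    tolerated_loss: 0.001
    test_precision: 0.1
    packet_sizes: [64]
    duration: 30
    lower_bound: 0.0
    upper_bound: 100.0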
Other traffic profiles exist, e.g. ``prox_ACL.yaml``, which does not
compare what is received with what is transmitted. It just
sends packets at the maximum rate.

It is possible to create custom traffic profiles by
creating a new file in the same folder as ``prox_binsearch.yaml``.
See ``prox_vpe.yaml`` as an example::
  schema: ``nsb:traffic_profile:0.1``

  description: Prox vPE traffic profile

  traffic_type: ProxBinSearchProfile
  tolerated_loss: 100.0 #0.001

  # The minimum size of the Ethernet frame for the vPE test is 68 bytes.
Test Description File for Openstack
-----------------------------------

We will use ``tc_prox_heat_context_l2fwd-2.yaml`` as an example to show
you how to understand the test description file.

.. image:: images/PROX_Test_HEAT_Script1.png
   :alt: NSB PROX Test Description File - Part 1

.. image:: images/PROX_Test_HEAT_Script2.png
   :alt: NSB PROX Test Description File - Part 2
Now let's examine the components of the file in detail.

Sections 1 to 9 are exactly the same in Baremetal and in Heat. Section
``10`` is replaced with sections A to F. Section 10 was for a baremetal
configuration file, which has no place in a heat configuration.
A. ``image`` - yardstick-samplevnfs. This is the name of the image
   created during the installation of NSB. This is fixed.

B. ``flavor`` - The flavor is created dynamically. However we could
   use an already existing flavor if required. In that case the
   flavor would be named::

     flavor: yardstick-flavor
C. ``extra_specs`` - This allows us to specify the number of
   cores, sockets and hyperthreading assigned to the flavor. In this case
   we have 1 socket with 10 cores and no hyperthreading enabled.
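   A sketch of such a section (assuming the standard OpenStack flavor
   extra specs keys; the values match the 1 socket / 10 cores / no
   hyperthreading case described above)::

     extra_specs:
       hw:cpu_sockets: 1
       hw:cpu_cores: 10
       hw:cpu_threads: 1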
D. ``placement_groups`` - default. Do not change for NSB PROX.

E. ``servers`` - ``tg_0`` is the traffic generator and ``vnf_0``
   is the system under test.

F. ``networks`` - is composed of a management network labeled ``mgmt``,
   one uplink network labeled ``uplink_0`` and one downlink
   network labeled ``downlink_0`` for 2 ports. If this was a 4 port
   configuration there would be 2 extra downlink ports. See this
   example from a 4 port l2fwd test::
     uplink_0:
       ...
       port_security_enabled: False
     downlink_0:
       ...
       port_security_enabled: False
     uplink_1:
       ...
       port_security_enabled: False
     downlink_1:
       ...
       port_security_enabled: False
Test Description File for Standalone
------------------------------------

We will use ``tc_prox_ovs-dpdk_l2fwd-2.yaml`` as an example to show
you how to understand the test description file.

.. image:: images/PROX_Test_ovs_dpdk_Script_1.png
   :alt: NSB PROX Test Standalone Description File - Part 1

.. image:: images/PROX_Test_ovs_dpdk_Script_2.png
   :alt: NSB PROX Test Standalone Description File - Part 2
Now let's examine the components of the file in detail.

Sections 1 to 9 are exactly the same as in Baremetal and in Heat. Section
``10`` is replaced with sections A to K. Section 10 was for a baremetal
configuration file, which has no place in a standalone configuration.
A. ``file`` - Pod file for the Baremetal Traffic Generator configuration:
   IP address, user/password & interfaces.

B. ``type`` - This defines the type of standalone configuration.
   Possible values are ``StandaloneOvsDpdk`` or ``StandaloneSriov``.

C. ``file`` - Pod file for the Standalone host configuration:
   IP address, user/password & interfaces.

D. ``vm_deploy`` - Deploy a new VM or use an existing VM.

E. ``ovs_properties`` - OVS version, DPDK version and configuration.
F. ``flavor`` - NSB image generated when installing NSB using
   ansible-playbook::

     ram - Configurable RAM for the SUT VM
     hw:cpu_sockets - Configurable number of sockets for the SUT VM
     hw:cpu_cores - Configurable number of cores for the SUT VM
     hw:cpu_threads - Configurable number of threads for the SUT VM
G. ``mgmt`` - Management port of the SUT VM. Pre-configuration is needed
   on the TG & SUT host machines.

H. ``xe0`` - Uplink network port.

I. ``xe1`` - Downlink network port.

J. ``uplink_0`` - Uplink Phy port of the NIC on the host. This will be
   used to create the Virtual Functions.

K. ``downlink_0`` - Downlink Phy port of the NIC on the host. This will
   be used to create the Virtual Functions.
Traffic Generator Config file
-----------------------------

This section will describe the traffic generator config file.
This is the same for both baremetal and heat. See this example
of ``gen_l2fwd_multiflow-2.cfg`` to explain the options.

.. image:: images/PROX_Gen_2port_cfg.png
   :alt: NSB PROX Gen Config File
The configuration file is divided into multiple sections, each
of which is used to define some parameters and options. See
`prox options`_ for details.

Now let's examine the components of the file in detail.
1. ``[eal options]`` - This specifies the EAL (Environment
   Abstraction Layer) options. These are default values and
   are not changed. See `dpdk wiki page`_.
2. ``[variables]`` - This section contains variables, as
   the name suggests. Variables for core numbers, MAC
   addresses, IP addresses etc. They are assigned as
   ``key = value``, where the key is used in place of the value.

   A special case exists for variables whose value begins with
   ``@@``. These values are dynamically updated by the NSB
   application at run time, values like MAC addresses and IP addresses.
3. ``[port 0]`` - This section describes the DPDK port. The number
   following the keyword ``port`` usually refers to the DPDK Port
   Id, usually starting from ``0``. Because you can have multiple
   ports this entry is usually repeated. E.g. for a 2 port setup
   ``[port 0]`` and ``[port 1]``, and for a 4 port setup ``[port 0]``,
   ``[port 1]``, ``[port 2]`` and ``[port 3]``::

     [port 0]
     name=p0
     mac=hardware
     rx desc=2048
     tx desc=2048
     promiscuous=yes
   a. In this example ``name=p0`` assigns the name ``p0`` to the
      port. Any name can be assigned to a port.
   b. ``mac=hardware`` sets the MAC address assigned by the hardware
      to data from this port.
   c. ``rx desc=2048`` sets the number of available descriptors to
      allocate for receive packets. This can be changed and can
      affect performance.
   d. ``tx desc=2048`` sets the number of available descriptors to
      allocate for transmit packets. This can be changed and can
      affect performance.
   e. ``promiscuous=yes`` enables promiscuous mode for this port.
4. ``[defaults]`` - Here default operations and settings can be
   overwritten. In this example ``mempool size=4K`` the number of mbufs
   per task is altered. Altering this value could affect
   performance. See `prox options`_ for details.
5. ``[global]`` - Here application-wide settings are supported. Things
   like application name, start time, duration and memory
   configurations can be set here. In this example::

     [global]
     start time=5
     name=Basic Gen

   a. ``start time=5`` Time in seconds after which average
      stats will be started.
   b. ``name=Basic Gen`` Name of the configuration.
6. ``[core 0]`` - This core is designated the master core. Every
   Prox application must have a master core. The master mode must
   be assigned to exactly one task, running alone on one core::

     [core 0]
     mode=master
7. ``[core 1]`` - This describes the activity on core 1. Cores can
   be configured by means of a set of ``[core #]`` sections, where
   ``#`` represents either:

   a. an absolute core number: e.g. on a 10-core, dual socket
      system with hyper-threading,
      cores are numbered from 0 to 39;

   b. a core number, the letter 's', and a socket number, which
      PROX uses to identify a core on a specific socket.
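      For example (a sketch of the notation only; the core and socket
      numbers here are illustrative)::

        [core 1s0]
        ; core 1 on socket 0

        [core 1s1]
        ; core 1 on socket 1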
   It is possible to write a baremetal and an openstack test which use
   the same traffic generator config file and SUT config file.
   In this case it is advisable not to use physical
   core numbering.

   However it is also possible to write NSB Prox tests that
   have been optimized for a particular hardware configuration.
   In this case it is advisable to use the core numbering.
   It is up to the user to make sure that cores from
   the right sockets are used (i.e. from the socket to which the NIC
   is attached), to ensure good performance (EPA).
   Each core can be assigned with a set of tasks, each running
   one of the implemented packet processing modes::

     [core 1]
     name=p0
     task=0
     mode=gen
     tx port=p0
     bps=1250000000
     ; Ethernet + IP + UDP
     pkt inline=${sut_mac0} 70 00 00 00 00 01 08 00 45 00 00 1c 00 01 00 00 40 11 f7 7d 98 10 64 01 98 10 64 02 13 88 13 88 00 08 55 7b
     ; src_ip: 152.16.100.0/8
     random=0000XXX1
     rand_offset=29
     ; dst_ip: 152.16.100.0/8
     random=0000XXX0
     rand_offset=33
     random=0001001110001XXX0001001110001XXX
     rand_offset=34
   a. ``name=p0`` - Name assigned to the core.
   b. ``task=0`` - Each core can run a set of tasks, starting with ``0``.
      Task 1 can be defined later in this core or
      can be defined in another ``[core 1]`` section with ``task=1``
      later in the configuration file. Sometimes running
      multiple tasks related to the same packet on the same physical
      core improves performance, however sometimes it
      is optimal to move a task to a separate core. This is best
      decided by checking performance.
   c. ``mode=gen`` - Specifies the action carried out by this task on
      this core. Supported modes are: ``classify``, ``drop``, ``gen``,
      ``lat``, ``genl4``, ``nop``, ``l2fwd``, ``gredecap``, ``greencap``,
      ``lbpos``, ``lbnetwork``, ``lbqinq``, ``lb5tuple``, ``ipv6_decap``,
      ``ipv6_encap``, ``qinqdecapv4``, ``qinqencapv4``, ``qos``,
      ``routing``, ``impair``, ``mirror``, ``unmpls``, ``tagmpls``,
      ``nat``, ``decapnsh``, ``encapnsh``, ``police`` and ``acl``.
      Some of the functionality these modes provide includes:

      * Basic Forwarding (no touch)
      * L2 Forwarding (change MAC)
      * Load balance based on packet fields
      * Symmetric load balancing
      * QinQ encap/decap IPv4/IPv6
      In the traffic generator we expect a core to generate packets
      (``gen``) and to receive packets & calculate latency (``lat``).
      This core does ``gen``, i.e. it is a traffic generator.

      To understand what each of the modes supports please see
      `prox documentation`_.
   d. ``tx port=p0`` - This specifies that the packets generated are
      transmitted to port ``p0``.
   e. ``bps=1250000000`` - This indicates the rate, in Bytes Per Second,
      at which to generate the packets.
   f. ``; Ethernet + IP + UDP`` - This is a comment. Items starting with
      ``;`` are ignored.
   g. ``pkt inline=${sut_mac0} 70 00 00 00 ...`` - Defines the packet
      format as a sequence of bytes (each
      expressed in hexadecimal notation). This defines the packet
      that is generated. This packet begins
      with the hexadecimal sequence assigned to ``sut_mac0`` and the
      remainder of the bytes in the string.
      This packet could now be sent or modified by ``random=..``
      described below before being sent to the target.
   h. ``; src_ip: 152.16.100.0/8`` - Comment.
   i. ``random=0000XXX1`` - This describes a field of the packet
      containing random data. This string can be
      8, 16, 24 or 32 characters long and represents 1, 2, 3 or 4
      bytes of data. In this case it describes a byte of
      data. Each character in the string can be ``0``, ``1`` or ``X``.
      ``0`` or ``1`` are fixed bit values in the data packet and ``X``
      is a random bit. So ``random=0000XXX1`` generates 00000001(1),
      00000011(3), 00000101(5), 00000111(7),
      00001001(9), 00001011(11), 00001101(13) and 00001111(15)
      combinations.
   j. ``rand_offset=29`` - Defines where to place the previously
      defined random field.
   k. ``; dst_ip: 152.16.100.0/8`` - Comment.
   l. ``random=0000XXX0`` - This is another random field which
      generates a byte of 00000000(0), 00000010(2),
      00000100(4), 00000110(6), 00001000(8), 00001010(10),
      00001100(12) and 00001110(14) combinations.
   m. ``rand_offset=33`` - Defines where to place the previously
      defined random field.
   n. ``random=0001001110001XXX0001001110001XXX`` - This is
      another random field which generates 4 bytes.
   o. ``rand_offset=34`` - Defines where to place the previously
      defined 4 byte random field.
   Core 2 executes the same scenario as Core 1. The only difference
   in this case is that the packets are generated for Port 1.

8. ``[core 3]`` - This defines the activities on core 3. The purpose
   of ``core 3`` and ``core 4`` is to receive packets sent by the SUT::

     [core 3]
     name=rec 0
     task=0
     mode=lat
     rx port=p0
     lat pos=42
   a. ``name=rec 0`` - Name assigned to the core.
   b. ``task=0`` - Each core can run a set of tasks, starting with
      ``0``. Task 1 can be defined later in this core or
      can be defined in another ``[core 1]`` section with
      ``task=1`` later in the configuration file. Sometimes running
      multiple tasks related to the same packet on the same
      physical core improves performance, however sometimes it
      is optimal to move a task to a separate core. This is
      best decided by checking performance.
   c. ``mode=lat`` - Specifies the action carried out by this task on
      this core.
      Supported modes are: ``acl``, ``classify``, ``drop``, ``gredecap``,
      ``greencap``, ``ipv6_decap``, ``ipv6_encap``, ``l2fwd``,
      ``lbnetwork``, ``lbpos``, ``lbqinq``, ``nop``, ``police``,
      ``qinqdecapv4``, ``qinqencapv4``, ``qos``, ``routing``, ``impair``,
      ``lb5tuple``, ``mirror``, ``unmpls``, ``tagmpls``, ``nat``,
      ``decapnsh``, ``encapnsh``, ``gen``, ``genl4`` and ``lat``. This
      task (0) on this core (3) receives packets.
   d. ``rx port=p0`` - The port to receive packets on, ``Port 0``. Core 4
      will receive packets on ``Port 1``.
   e. ``lat pos=42`` - Describes where to put a 4-byte timestamp in the
      packet. Note that the packet length should be longer than
      ``lat pos`` + 4 bytes to avoid truncation of the timestamp. It
      defines where the timestamp is to be read from. Note that the SUT
      workload might cause the position of the timestamp to change
      (i.e. due to encapsulation).
SUT Config File
---------------

This section will describe the SUT (VNF) config file. This is the same
for both baremetal and heat. See this example of
``handle_l2fwd_multiflow-2.cfg`` to explain the options.

.. image:: images/PROX_Handle_2port_cfg.png
   :alt: NSB PROX Handle Config File

See `prox options`_ for details.

Now let's examine the components of the file in detail.
1. ``[eal options]`` - Same as the Generator config file. This specifies
   the EAL (Environment Abstraction Layer) options. These are default
   values and are not changed. See `dpdk wiki page`_.
2. ``[port 0]`` - This section describes the DPDK port. The number
   following the keyword ``port`` usually refers to the DPDK Port Id,
   usually starting from ``0``. Because you can have multiple ports this
   entry is usually repeated. E.g. for a 2 port setup ``[port 0]`` and
   ``[port 1]``, and for a 4 port setup ``[port 0]``, ``[port 1]``,
   ``[port 2]`` and ``[port 3]``::

     [port 0]
     name=if0
     mac=hardware
     rx desc=2048
     tx desc=2048
     promiscuous=yes
   a. In this example ``name=if0`` assigns the name ``if0`` to the port.
      Any name can be assigned to a port.
   b. ``mac=hardware`` sets the MAC address assigned by the hardware to
      data from this port.
   c. ``rx desc=2048`` sets the number of available descriptors to
      allocate for receive packets. This can be changed and can affect
      performance.
   d. ``tx desc=2048`` sets the number of available descriptors to
      allocate for transmit packets. This can be changed and can affect
      performance.
   e. ``promiscuous=yes`` enables promiscuous mode for this port.
3. ``[defaults]`` - Here default operations and settings can be
   overwritten::

     [defaults]
     mempool size=8K
     memcache size=512

   a. In this example ``mempool size=8K`` the number of mbufs per task
      is altered. Altering this value could affect performance. See
      `prox options`_ for details.
   b. ``memcache size=512`` - number of mbufs cached per core, default
      is 256; this is the cache_size. Altering this value could affect
      performance.
4. ``[global]`` - Here application-wide settings are supported. Things
   like application name, start time, duration and memory configurations
   can be set here::

     [global]
     start time=5
     name=Handle L2FWD Multiflow (2x)

   a. ``start time=5`` Time in seconds after which average stats will be
      started.
   b. ``name=Handle L2FWD Multiflow (2x)`` Name of the configuration.
5. ``[core 0]`` - This core is designated the master core. Every Prox
   application must have a master core. The master mode must be assigned
   to exactly one task, running alone on one core::

     [core 0]
     mode=master
6. ``[core 1]`` - This describes the activity on core 1. Cores can be
   configured by means of a set of ``[core #]`` sections, where ``#``
   represents either:

   a. an absolute core number: e.g. on a 10-core, dual socket system with
      hyper-threading, cores are numbered from 0 to 39;

   b. a core number, the letter 's', and a socket number. However, as
      NSB PROX is hardware agnostic (physical and virtual configurations
      are the same), it is advisable not to use physical
      core numbering.
   Each core can be assigned with a set of tasks, each running one of the
   implemented packet processing modes::

     [core 1]
     name=none
     task=0
     mode=l2fwd
     dst mac=@@tester_mac1
     rx port=if0
     tx port=if1
   a. ``name=none`` - No name is assigned to the core.
   b. ``task=0`` - Each core can run a set of tasks, starting with
      ``0``. Task 1 can be defined later in this core, or in another
      ``[core 1]`` section with ``task=1`` later in the configuration
      file. Sometimes running multiple tasks related to the same packet
      on the same physical core improves performance; however, sometimes
      it is optimal to move a task to a separate core. This is best
      decided by checking performance.
   c. ``mode=l2fwd`` - Specifies the action carried out by this task on
      this core. Supported modes are: ``acl``, ``classify``, ``drop``,
      ``gredecap``, ``greencap``, ``ipv6_decap``, ``ipv6_encap``,
      ``l2fwd``, ``lbnetwork``, ``lbpos``, ``lbqinq``, ``nop``,
      ``police``, ``qinqdecapv4``, ``qinqencapv4``, ``qos``,
      ``routing``, ``impair``, ``lb5tuple``, ``mirror``, ``unmpls``,
      ``tagmpls``, ``nat``, ``decapnsh``, ``encapnsh``, ``gen``,
      ``genl4`` and ``lat``. This configuration does ``l2fwd``, i.e.
      Layer 2 forwarding.
   d. ``dst mac=@@tester_mac1`` - The destination MAC address of the
      packet will be set to the MAC address of ``Port 1`` of the
      destination device (the Traffic Generator/Verifier).
   e. ``rx port=if0`` - This specifies that the packets are received
      from ``Port 0``, called ``if0``.
   f. ``tx port=if1`` - This specifies that the packets are transmitted
      to ``Port 1``, called ``if1``.
In this example we receive a packet on a port, carry out an operation on
the packet on the core, and transmit it on another port, still using the
same task on the same core.
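Putting the sections described above together, a minimal handle
configuration might look like the following sketch. Treat it as
illustrative rather than a shipped file: the values echo the discussion
above, while ``mac=hardware`` is an assumption not covered in the text.

```ini
[port 0]
name=if0
mac=hardware
tx desc=2048
promiscuous=yes

[port 1]
name=if1
mac=hardware
tx desc=2048
promiscuous=yes

[defaults]
mempool size=8K
memcache size=512

[global]
start time=5
name=Handle L2FWD Multiflow (2x)

[core 0]
mode=master

[core 1]
name=none
task=0
mode=l2fwd
dst mac=@@tester_mac1
rx port=if0
tx port=if1
```

Core 1/Task 0 receives on ``if0``, performs the l2fwd and transmits on
``if1``, exactly as walked through above.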
In some implementations you may wish to use multiple tasks, like this::
In this example you can see that Core 1/Task 0, called ``rx_task``,
receives the packet from ``if0`` and performs the l2fwd. However,
instead of sending the packet to a port, it sends it to a core; see
``tx cores=1t1``. In this case it sends it to Core 1/Task 1.
Core 1/Task 1, called ``l2fwd_if0``, receives the packet not from a port
but from the ring; see ``rx ring=yes``. It does not perform any
operation on the packet (see ``mode=none``) and sends the packets to
``if0``; see ``tx port=if0``.
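The two-task layout just described can be sketched as the following
configuration fragment; the names and values are taken from the
discussion above, so treat it as illustrative rather than a shipped
file.

```ini
[core 1]
name=rx_task
task=0
mode=l2fwd
dst mac=@@tester_mac1
rx port=if0
tx cores=1t1

[core 1]
name=l2fwd_if0
task=1
mode=none
rx ring=yes
tx port=if0
```

Here ``tx cores=1t1`` hands packets from Task 0 to Core 1/Task 1 over a
ring, and Task 1 forwards them unchanged to ``if0``.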
It is also possible to implement more complex operations by chaining
multiple operations in sequence and using rings to pass packets from one
task to the next.
In this example, we show a Broadband Network Gateway (BNG) with Quality
of Service (QoS). Communication from task to task is via rings.
.. image:: images/PROX_BNG_QOS.png
   :alt: NSB PROX Config File for BNG_QOS
Baremetal Configuration File
----------------------------

This is required for baremetal testing. It describes the IP addresses of
the various ports, the network device drivers and MAC addresses, and the
network configuration.
In this example we will describe a 2 port configuration. This file is
the same for all 2 port NSB PROX tests on the same platform/configuration.

.. image:: images/PROX_Baremetal_config.png
   :alt: NSB PROX Baremetal Config

Now let's describe the sections of the file.
1. ``TrafficGen`` - This section describes the Traffic Generator node of
   the test configuration. The name of the node ``trafficgen_1`` must
   match the node name in the ``Test Description File for Baremetal``
   mentioned earlier. The password attribute of the test needs to be
   configured. All other parameters can remain as default settings.
2. ``interfaces`` - This defines the DPDK interfaces on the Traffic
   Generator.
3. ``xe0`` is DPDK Port 0. ``lspci`` and ``./dpdk-devbind.py -s`` can be
   used to provide the interface information. ``netmask`` and
   ``local_ip`` should not be changed.
4. ``xe1`` is DPDK Port 1. If more than 2 ports are required then the
   ``xe1`` section needs to be repeated and modified accordingly.
5. ``vnf`` - This section describes the SUT of the test configuration.
   The name of the node ``vnf`` must match the node name in the
   ``Test Description File for Baremetal`` mentioned earlier. The
   password attribute of the test needs to be configured. All other
   parameters can remain as default settings.
6. ``interfaces`` - This defines the DPDK interfaces on the SUT.
7. ``xe0`` - Same as 3 but for the SUT.
8. ``xe1`` - Same as 4 but for the SUT.
9. ``routing_table`` - All parameters should remain unchanged.
10. ``nd_route_tbl`` - All parameters should remain unchanged.
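As an illustrative sketch of the sections above, a 2 port baremetal pod
file could look like this. The field names follow the Yardstick pod-file
schema; the IP addresses, MAC addresses and PCI IDs are placeholders you
must replace with the values for your own hardware.

```yaml
nodes:
- name: trafficgen_1
  role: TrafficGen
  ip: 10.10.10.10            # management IP (placeholder)
  user: root
  password: r00t             # set to your node's password
  interfaces:
    xe0:                     # DPDK Port 0
      vpci: "0000:05:00.0"   # from lspci / dpdk-devbind.py -s
      local_mac: "68:05:ca:30:3d:90"
      driver: i40e
      local_ip: "152.16.100.20"
      netmask: "255.255.255.0"
      dpdk_port_num: 0
    xe1:                     # DPDK Port 1
      vpci: "0000:05:00.1"
      local_mac: "68:05:ca:30:3d:91"
      driver: i40e
      local_ip: "152.16.40.20"
      netmask: "255.255.255.0"
      dpdk_port_num: 1
- name: vnf
  role: vnf
  ip: 10.10.10.11
  user: root
  password: r00t
  interfaces:
    xe0:
      vpci: "0000:05:00.0"
      local_mac: "68:05:ca:30:3e:90"
      driver: i40e
      local_ip: "152.16.100.19"
      netmask: "255.255.255.0"
      dpdk_port_num: 0
    xe1:
      vpci: "0000:05:00.1"
      local_mac: "68:05:ca:30:3e:91"
      driver: i40e
      local_ip: "152.16.40.19"
      netmask: "255.255.255.0"
      dpdk_port_num: 1
```

The ``routing_table`` and ``nd_route_tbl`` sections are omitted here;
keep the defaults from the shipped ``prox-baremetal-2.yaml`` unchanged.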
Grafana Dashboard
-----------------

The grafana dashboard visually displays the results of the tests. The
steps required to produce a grafana dashboard are described here.
.. _yardstick-config-label:

a. Configure ``yardstick`` to use influxDB to store test results. See
   file ``/etc/yardstick/yardstick.conf``.

   .. image:: images/PROX_Yardstick_config.png
      :alt: NSB PROX Yardstick Config
   1. Specify the dispatcher to use influxDB to store results.
   2. ``target = ...`` - Specify the location of influxDB to store
      results.
   3. ``db_name = yardstick`` - Name of the database. Do not change.
   4. ``username = root`` - Username used to store results. (Many tests
      are run as root.)
   5. ``password = ...`` - Please set to the root user password.
b. Deploy InfluxDB & Grafana. See `grafana deployment`_.
c. Generate the test data. Run the tests as follows::

      yardstick --debug task start tc_prox_<context>_<test>-ports.yaml

   e.g.::

      yardstick --debug task start tc_prox_heat_context_l2fwd-4.yaml
d. Now build the dashboard for the test you just ran. The easiest way to
   do this is to copy an existing dashboard and rename the test and the
   field names. The procedure to do so is described in
   `opnfv grafana dashboard`_.
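Putting the settings from step a. together, a minimal
``/etc/yardstick/yardstick.conf`` might look like the following sketch;
the ``target`` address and the password are placeholders for your own
deployment.

```ini
[DEFAULT]
debug = False
dispatcher = influxdb

[dispatcher_influxdb]
timeout = 5
# location of your influxDB instance (placeholder address)
target = http://10.10.10.10:8086
# name of the database - do not change
db_name = yardstick
username = root
# set to the root user password
password = <root_password>
```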
How to run NSB Prox Test on a baremetal environment
===================================================

In order to run the NSB PROX test:

1. Install NSB on the Traffic Generator node and PROX in the SUT. See
   `NSB Installation`_.
2. To enter container::

      docker exec -it yardstick /bin/bash
3. Install the baremetal configuration files (POD files)

   a. Go to the location of the PROX tests in the container::

         cd /home/opnfv/repos/yardstick/samples/vnf_samples/nsut/prox

   b. Install ``prox-baremetal-2.yaml`` and ``prox-baremetal-4.yaml``
      for that topology into this directory as per
      `Baremetal Configuration File`_
   c. Install and configure ``yardstick.conf``. Modify
      ``/etc/yardstick/yardstick.conf`` as per yardstick-config-label_.
4. Execute the test. E.g.::

      yardstick --debug task start ./tc_prox_baremetal_l2fwd-4.yaml
How to run NSB Prox Test on an Openstack environment
====================================================

In order to run the NSB PROX test:

1. Install NSB on the Openstack deployment node. See `NSB Installation`_
2. To enter container::

      docker exec -it yardstick /bin/bash
3. Install the configuration file

   a. Go to the location of the PROX tests in the container::

         cd /home/opnfv/repos/yardstick/samples/vnf_samples/nsut/prox

   b. Install and configure ``yardstick.conf``. Modify
      ``/etc/yardstick/yardstick.conf`` as per yardstick-config-label_.
4. Execute the test. E.g.::

      yardstick --debug task start ./tc_prox_heat_context_l2fwd-4.yaml
Frequently Asked Questions
==========================

Here is a list of frequently asked questions.

NSB Prox does not work on Baremetal, how do I resolve this?
-----------------------------------------------------------
If PROX NSB does not work on baremetal, the problem is either in the
network configuration or in the test file.

1. Verify the network configuration. Execute an existing baremetal
   test::

      yardstick --debug task start ./tc_prox_baremetal_l2fwd-4.yaml

   If the test does not work, the error is in the network configuration.
   a. Check DPDK on the Traffic Generator and SUT via::

         /root/dpdk-17./usertools/dpdk-devbind.py

   b. Verify that the MAC addresses match
      ``prox-baremetal-<ports>.yaml`` via ``ifconfig`` and
      ``dpdk-devbind``.
   c. Check that your eth port is what you expect. You would not be the
      first person to think that the port your cable is plugged into is
      ethX when in fact it is ethY. Use ethtool to visually confirm that
      the eth is where you expect::

      An LED should start blinking on the port (on both the
      System-Under-Test and the Traffic Generator).
   d. Check the connection. Install the Linux kernel network driver and
      ensure your ports are ``bound`` to the driver via
      ``dpdk-devbind``. Bring up the port on both the SUT and the
      Traffic Generator and check the connection.

      i) On the SUT and on the Traffic Generator::

            ifconfig ethX/enoX up

      Look for ``Link detected``: if ``yes``, the cable is good; if
      ``no``, you have an issue with your cable/port.
2. If the existing baremetal test works, then the issue is with your
   test. Check the traffic generator ``gen_<test>-<ports>.cfg`` to
   ensure it is producing a valid packet.
How do I debug NSB Prox on Baremetal?
-------------------------------------
1. Execute the test as follows::

      yardstick --debug task start ./tc_prox_baremetal_l2fwd-4.yaml

2. Log in to the Traffic Generator as ``root``::

      /opt/nsb_bin/prox -f /tmp/gen_<test>-<ports>.cfg

3. Log in to the SUT as ``root``::

      /opt/nsb_bin/prox -f /tmp/handle_<test>-<ports>.cfg
4. Now let's examine the Generator output. In this case the output of
   ``gen_l2fwd-4.cfg``.

   .. image:: images/PROX_Gen_GUI.png
      :alt: NSB PROX Traffic Generator GUI
   Now let's examine the output:

   1. Indicates the amount of data successfully transmitted on Port 0
   2. Indicates the amount of data successfully received on Port 1
   3. Indicates the amount of data successfully handled for Port 1

   It appears what is transmitted is received.
   .. note::
      The number of packets MAY not exactly match because the ports are
      read in sequence.

   .. note::
      What is transmitted on Port X may not always be received on the
      same port. Please check the test scenario.
5. Now let's examine the SUT output.

   .. image:: images/PROX_SUT_GUI.png
      :alt: NSB PROX SUT GUI

   Now let's examine the output:

   1. What is received on 0 is transmitted on 1, received on 1
      transmitted on 0, received on 2 transmitted on 3, and received on
      3 transmitted on 2.
   2. No packets are Failed.
   3. No packets are discarded.
We can also dump the packets being received or transmitted via the
following commands::

   dump     Arguments: <core id> <task id> <nb packets>
            Create a hex dump of <nb_packets> from <task_id> on
            <core_id> showing how packets have changed between RX and TX.
   dump_rx  Arguments: <core id> <task id> <nb packets>
            Create a hex dump of <nb_packets> from <task_id> on
            <core_id> at RX.
   dump_tx  Arguments: <core id> <task id> <nb packets>
            Create a hex dump of <nb_packets> from <task_id> on
            <core_id> at TX.
NSB Prox works on Baremetal but not in Openstack. How do I resolve this?
------------------------------------------------------------------------

NSB Prox on Baremetal is a lot more forgiving than NSB Prox on
Openstack. A badly formed packet may still work with PROX on Baremetal.
However, on Openstack the packet must be correct and all fields of the
header must be correct. E.g. a packet with an invalid Protocol ID would
still work on Baremetal, but this packet would be rejected by Openstack.
To resolve this:

1. Check the validity of the packet.
2. Use a known good packet in your test.
3. If using ``Random`` fields in the traffic generator, disable them and
   test again.
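One validity check you can script yourself is the IPv4 header checksum.
The sketch below is plain Python and not part of NSB or PROX; it
recomputes the RFC 1071 ones'-complement checksum for a hand-built
header so you can compare it against what your generator emits.

```python
import struct

def ipv4_checksum(header: bytes) -> int:
    """RFC 1071 ones'-complement checksum over an IPv4 header."""
    if len(header) % 2:                      # pad odd-length input
        header += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(header) // 2), header))
    while total >> 16:                       # fold carries back in
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

# Classic 20-byte example header with the checksum field (bytes 10-11) zeroed.
hdr = bytes.fromhex("4500003c1c4640004006" "0000" "ac100a63ac100a0c")
csum = ipv4_checksum(hdr)
print(hex(csum))                             # -> 0xb1e6 for this header

# A receiver sums over the header with the checksum in place; a valid
# header yields 0.
patched = hdr[:10] + struct.pack("!H", csum) + hdr[12:]
assert ipv4_checksum(patched) == 0
```

The same idea extends to the UDP/TCP pseudo-header checksums if your
test traffic carries L4 payloads.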
How do I debug NSB Prox on Openstack?
-------------------------------------
1. Execute the test as follows::

      yardstick --debug task start --keep-deploy ./tc_prox_heat_context_l2fwd-4.yaml

2. Access the docker image if required via::

      docker exec -it yardstick /bin/bash
3. Install the openstack credentials.

   Depending on your openstack deployment, the location of these
   credentials may vary. On this platform I do this via::

      scp root@10.237.222.55:/etc/kolla/admin-openrc.sh .
      source ./admin-openrc.sh
4. List the Stack details

   a. Get the name of the Stack.

      .. image:: images/PROX_Openstack_stack_list.png
         :alt: NSB PROX openstack stack list

   b. Get the Floating IP of the Traffic Generator & SUT.

      This generates a lot of information. Please note the floating IP
      of the VNF and the Traffic Generator.
      .. image:: images/PROX_Openstack_stack_show_a.png
         :alt: NSB PROX openstack stack show (Top)

      From here you can see the floating IP Address of the SUT / VNF.

      .. image:: images/PROX_Openstack_stack_show_b.png
         :alt: NSB PROX openstack stack show (Bottom)

      From here you can see the floating IP Address of the Traffic
      Generator.
   c. Get the ssh identity file.

      In the docker container, locate the identity file::

         cd /home/opnfv/repos/yardstick/yardstick/resources/files
5. Log in to the SUT as ``ubuntu``::

      ssh -i ./yardstick_key-01029d1d ubuntu@172.16.2.158

   Now continue as for baremetal.
6. Log in to the Traffic Generator as ``ubuntu``::

      ssh -i ./yardstick_key-01029d1d ubuntu@172.16.2.156

   Now continue as for baremetal.
How do I resolve "Quota exceeded for resources"?
------------------------------------------------

This usually occurs for one of two reasons when executing an openstack
test.
1. One or more stacks already exist and are consuming all resources. To
   resolve::

      openstack stack list

   Response::

      +--------------------------------------+--------------------+-----------------+----------------------+--------------+
      | ID                                   | Stack Name         | Stack Status    | Creation Time        | Updated Time |
      +--------------------------------------+--------------------+-----------------+----------------------+--------------+
      | acb559d7-f575-4266-a2d4-67290b556f15 | yardstick-e05ba5a4 | CREATE_COMPLETE | 2017-12-06T15:00:05Z | None         |
      | 7edf21ce-8824-4c86-8edb-f7e23801a01b | yardstick-08bda9e3 | CREATE_COMPLETE | 2017-12-06T14:56:43Z | None         |
      +--------------------------------------+--------------------+-----------------+----------------------+--------------+

   In this case 2 stacks already exist. To remove a stack::

      openstack stack delete yardstick-08bda9e3
      Are you sure you want to delete this stack(s) [y/N]? y
2. The openstack configuration quotas are too small.

   The solution is to increase the quota. Use the following to query the
   existing quotas::

      openstack quota show

   and to set the quota::

      openstack quota set <resource>
Openstack CLI fails or hangs. How do I resolve this?
----------------------------------------------------

If it fails due to::

   Missing value auth-url required for auth plugin password

check your shell environment for Openstack variables. One of them should
contain the authentication URL::

   OS_AUTH_URL="https://192.168.72.41:5000/v3"

or similar. Ensure that the openstack configurations are exported::
   cat /etc/kolla/admin-openrc.sh

Output::

   export OS_PROJECT_DOMAIN_NAME=default
   export OS_USER_DOMAIN_NAME=default
   export OS_PROJECT_NAME=admin
   export OS_TENANT_NAME=admin
   export OS_USERNAME=admin
   export OS_PASSWORD=BwwSEZqmUJA676klr9wa052PFjNkz99tOccS9sTc
   export OS_AUTH_URL=http://193.168.72.41:35357/v3
   export OS_INTERFACE=internal
   export OS_IDENTITY_API_VERSION=3
   export EXTERNAL_NETWORK=yardstick-public
If the Openstack CLI appears to hang, then verify that the proxies and
``no_proxy`` are set correctly. They should be similar to::

   FTP_PROXY="http://<your_proxy>:<port>/"
   HTTPS_PROXY="http://<your_proxy>:<port>/"
   HTTP_PROXY="http://<your_proxy>:<port>/"
   NO_PROXY="localhost,127.0.0.1,10.237.222.55,10.237.223.80,10.237.222.134,.ir.intel.com"
   ftp_proxy="http://<your_proxy>:<port>/"
   http_proxy="http://<your_proxy>:<port>/"
   https_proxy="http://<your_proxy>:<port>/"
   no_proxy="localhost,127.0.0.1,10.237.222.55,10.237.223.80,10.237.222.134,.ir.intel.com"
Where:

1) 10.237.222.55 = IP Address of the deployment node
2) 10.237.223.80 = IP Address of the Controller node
3) 10.237.222.134 = IP Address of the Compute node
How to Understand the Grafana output?
-------------------------------------

.. image:: images/PROX_Grafana_1.png
   :alt: NSB PROX Grafana_1

.. image:: images/PROX_Grafana_2.png
   :alt: NSB PROX Grafana_2

.. image:: images/PROX_Grafana_3.png
   :alt: NSB PROX Grafana_3

.. image:: images/PROX_Grafana_4.png
   :alt: NSB PROX Grafana_4

.. image:: images/PROX_Grafana_5.png
   :alt: NSB PROX Grafana_5

.. image:: images/PROX_Grafana_6.png
   :alt: NSB PROX Grafana_6
A. Test Parameters - Test interval, Duration, Tolerated Loss and Test
   Precision

B. No. of packets sent and received during the test

C. Generator Stats - Average Throughput per step (the step duration is
   specified by the "Duration" field in A above)

E. No. of packets sent by the generator per second per interface in
   millions of packets per second.

F. No. of packets received by the generator per second per interface in
   millions of packets per second.
G. No. of packets received by the SUT from the generator in millions of
   packets per second.

H. No. of packets sent by the SUT to the generator in millions of
   packets per second.

I. No. of packets sent by the Generator to the SUT per step per
   interface in millions of packets per second.

J. No. of packets received by the Generator from the SUT per step per
   interface in millions of packets per second.

K. No. of packets sent and received by the generator and lost by the SUT
   that meet the success criteria
L. The change in the percentage of Line Rate used over a test. The MAX
   and the MIN should converge to within the interval specified as the
   test precision.

M. Packet size supported during the test. If *N/A* appears in any field
   the result has not been decided.

N. The theoretical maximum no. of packets per second that can be sent
   for this packet size.

O. No. of packets sent by the generator in MPPS

P. No. of packets received by the generator in MPPS
1463 R. No. of packets received by the SUT
1465 S. Total no. of dropped packets -- Packets sent but not received back by the
1466 generator, these may be dropped by the SUT or the generator.
1468 T. The tolerated no. of dropped packets.
1470 U. Test throughput in Gbps
1472 V. Latencey per Port
1479 * Wa - CPU Utilization of the Generator
1480 * Wb - CPU Utilization of the SUT
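The theoretical maximum referred to in item N follows from the Ethernet
framing overhead. As a sketch (plain Python, not part of NSB): each
frame occupies 20 extra bytes on the wire beyond its nominal size, so
the packet rate for a given line rate can be computed directly.

```python
def max_pps(line_rate_gbps: float, frame_size_bytes: int) -> float:
    """Theoretical maximum packets per second for a given line rate.

    Each frame occupies an extra 20 bytes on the wire:
    7 B preamble + 1 B start-of-frame delimiter + 12 B inter-frame gap.
    """
    bits_per_frame = (frame_size_bytes + 20) * 8
    return line_rate_gbps * 1e9 / bits_per_frame

# 64-byte frames on a 10 Gbps link: the well-known 14.88 Mpps figure
print(round(max_pps(10, 64)))  # -> 14880952
```

Comparing the generator's achieved MPPS (items O and P) against this
ceiling shows how close a test runs to line rate.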