4 This document describes the steps to create a new NSB PROX test based on
existing PROX functionalities. NSB PROX provides a simple approximation
6 of an operation and can be used to develop best practices and TCO models
7 for Telco customers, investigate the impact of new Intel compute,
8 network and storage technologies, characterize performance, and develop
9 optimal system architectures and configurations.
In order to integrate PROX tests into NSB, the following prerequisites are required:
19 .. _`dpdk wiki page`: https://www.dpdk.org/
20 .. _`yardstick wiki page`: https://wiki.opnfv.org/display/yardstick/
21 .. _`Prox documentation`: https://01.org/intel-data-plane-performance-demonstrators/documentation/prox-documentation
22 .. _`openstack wiki page`: https://wiki.openstack.org/wiki/Main_Page
23 .. _`grafana getting started`: http://docs.grafana.org/guides/gettingstarted/
24 .. _`opnfv grafana dashboard`: https://wiki.opnfv.org/display/yardstick/How+to+work+with+grafana+dashboard
25 .. _`Prox command line`: https://01.org/intel-data-plane-performance-demonstrators/documentation/prox-documentation#Command_line_options
26 .. _`grafana deployment`: https://wiki.opnfv.org/display/yardstick/How+to+deploy+InfluxDB+and+Grafana+locally
27 .. _`Prox options`: https://01.org/intel-data-plane-performance-demonstrators/documentation/prox-documentation#.5Beal_options.5D
28 .. _`NSB Installation`: http://artifacts.opnfv.org/yardstick/docs/userguide/index.html#document-09-installation
30 * A working knowledge of Yardstick. See `yardstick wiki page`_.
31 * A working knowledge of PROX. See `Prox documentation`_.
32 * Knowledge of Openstack. See `openstack wiki page`_.
33 * Knowledge of how to use Grafana. See `grafana getting started`_.
34 * How to Deploy InfluxDB & Grafana. See `grafana deployment`_.
35 * How to use Grafana in OPNFV/Yardstick. See `opnfv grafana dashboard`_.
* How to install NSB. See `NSB Installation`_.
38 Sample Prox Test Hardware Architecture
39 ======================================
41 The following is a diagram of a sample NSB PROX Hardware Architecture
42 for both NSB PROX on Bare metal and on Openstack.
In this example, when running Yardstick on baremetal, Yardstick and the
traffic generator run on the deployment node and the SUT (System Under
Test) runs on the Controller Node.
49 .. image:: images/PROX_Hardware_Arch.png
   :alt: Sample NSB PROX Hardware Architecture
53 Prox Test Architecture
54 ======================
In order to create a new test, one must understand the architecture of
the test.
59 A NSB Prox test architecture is composed of:
* A traffic generator. This provides blocks of data on 1 or more ports
  to the SUT. The traffic generator also consumes the result packets
  from the system under test.
* A SUT. This consumes the packets generated by the traffic generator,
  applies one or more tasks to the packets and returns the modified
  packets to the traffic generator.
The following is a sample NSB PROX test architecture.
71 .. image:: images/PROX_Software_Arch.png
73 :alt: NSB PROX test Architecture
75 This diagram is of a sample NSB PROX test application.
* Generator Tasks - Composed of 1 or more tasks (it is possible to
  have multiple tasks sending packets to the same port number; see
  Tasks Ai and Aii below).
  * Task Ai - Generates packets on Port 0 of the Traffic Generator
    and sends them to Port 0 of the SUT.
  * Task Aii - Generates packets on Port 0 of the Traffic Generator
    and sends them to Port 0 of the SUT.
  * Task B - Generates packets on Port 1 of the Traffic Generator
    and sends them to Port 1 of the SUT.
  * Task C - Generates packets on Port 2 of the Traffic Generator
    and sends them to Port 2 of the SUT.
  * Task Di - Generates packets on Port 3 of the Traffic Generator
    and sends them to Port 3 of the SUT.
  * Task Dii - Generates packets on Port 0 of the Traffic Generator
    and sends them to Port 0 of the SUT.
* Verifier Tasks - Composed of 1 or more tasks which receive the
  result packets from the SUT.
  * Task E - Receives packets on Port 0 of the Traffic Generator,
    sent from Port 0 of the SUT.
  * Task F - Receives packets on Port 1 of the Traffic Generator,
    sent from Port 1 of the SUT.
  * Task G - Receives packets on Port 2 of the Traffic Generator,
    sent from Port 2 of the SUT.
  * Task H - Receives packets on Port 3 of the Traffic Generator,
    sent from Port 3 of the SUT.
* Receiver Tasks - Composed of 1 or more tasks which consume the
  packets sent by the Traffic Generator.
  * Task A - Receives packets on Port 0 of the SUT from Traffic
    Generator Port 0, and forwards them to Task E.
  * Task B - Receives packets on Port 1 of the SUT from Traffic
    Generator Port 1, and forwards them to Task E.
  * Task C - Receives packets on Port 2 of the SUT from Traffic
    Generator Port 2, and forwards them to Task E.
  * Task D - Receives packets on Port 3 of the SUT from Traffic
    Generator Port 3, and forwards them to Task E.
* Processing Tasks - Composed of multiple tasks in series which carry
  out some processing on received packets before forwarding them to
  the Transmitter Tasks.
  * Task E - Receives packets from the Receiver Tasks, carries out
    some operation on the data and forwards the result packets to
    the next task in the sequence - Task F.
  * Task F - Receives packets from the previous task - Task E,
    carries out some operation on the data and forwards the result
    packets to the next task in the sequence - Task G.
  * Task G - Receives packets from the previous task - Task F and
    distributes the result packets to the Transmitter Tasks.
* Transmitter Tasks - Composed of 1 or more tasks which send the
  processed packets back to the Traffic Generator.
  * Task H - Receives packets from Task G and sends them to Traffic
    Generator Port 0.
  * Task I - Receives packets from Task G and sends them to Traffic
    Generator Port 1.
  * Task J - Receives packets from Task G and sends them to Traffic
    Generator Port 2.
  * Task K - Receives packets from Task G and sends them to Traffic
    Generator Port 3.
An NSB Prox test is composed of the following components:
152 * Test Description File. Usually called
153 ``tc_prox_<context>_<test>-<ports>.yaml`` where
155 * <context> is either ``baremetal`` or ``heat_context``
  * <test> is a one or two word description of the test.
157 * <ports> is the number of ports used
  Example tests: ``tc_prox_baremetal_l2fwd-2.yaml`` or
  ``tc_prox_heat_context_vpe-4.yaml``. This file describes the
  components of the test: in the case of openstack, the network and
  server descriptions; in the case of baremetal, the location of the
  hardware description. It also contains the name of the Traffic
  Generator, the SUT config file and the traffic profile description,
  all described below. See nsb-test-description-label_.
167 * Traffic Profile file. Example ``prox_binsearch.yaml``. This describes the
168 packet size, tolerated loss, initial line rate to start traffic at, test
  interval etc. See nsb-traffic-profile-label_.
171 * Traffic Generator Config file. Usually called ``gen_<test>-<ports>.cfg``.
173 This describes the activity of the traffic generator
175 * What each core of the traffic generator does,
176 * The packet of data sent by a core on a port of the traffic generator
177 to the system under test
  * What core is used to wait on what port for data from the system
    under test.
181 Example traffic generator config file ``gen_l2fwd-4.cfg``
182 See nsb-traffic-generator-label_
184 * SUT Config file. Usually called ``handle_<test>-<ports>.cfg``.
186 This describes the activity of the SUTs
  * What each core of the SUT does,
  * What cores receive packets from what ports,
  * What cores perform operations on the packets and pass the packets
    on to other cores,
  * What cores receive packets from what cores and transmit the
    packets on the ports to the Traffic Verifier tasks of the Traffic
    Generator.
  Example SUT config file ``handle_l2fwd-4.cfg``
196 See nsb-sut-generator-label_
198 * NSB PROX Baremetal Configuration file. Usually called
199 ``prox-baremetal-<ports>.yaml``
201 * <ports> is the number of ports used
203 This is required for baremetal only. This describes hardware, NICs,
204 IP addresses, Network drivers, usernames and passwords.
205 See baremetal-config-label_
207 * Grafana Dashboard. Usually called
208 ``Prox_<context>_<test>-<port>-<DateAndTime>.json`` where
  * <context> is either ``BM`` or ``heat``
  * <test> is a one or two word description of the test.
  * <port> is the number of ports used, expressed as ``2Port`` or
    ``4Port``
  * <DateAndTime> is the date and time expressed as a string.
215 Example grafana dashboard ``Prox_BM_L2FWD-4Port-1507804504588.json``
Other files may be required. These are test specific files and will be
described with the relevant test.
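These naming conventions can be checked mechanically. The helper below is
purely illustrative (the function name and regular expression are ours, not
part of NSB):

```python
import re

# Illustrative helper: split a test description file name of the form
# tc_prox_<context>_<test>-<ports>.yaml into its components.
NAME_RE = re.compile(r"^tc_prox_(baremetal|heat_context)_(.+)-(\d+)\.yaml$")

def parse_test_name(filename):
    m = NAME_RE.match(filename)
    if m is None:
        raise ValueError("not an NSB PROX test description file: %s" % filename)
    context, test, ports = m.groups()
    return context, test, int(ports)
```

For example, ``tc_prox_baremetal_l2fwd-2.yaml`` splits into the
``baremetal`` context, the ``l2fwd`` test and 2 ports.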
220 .. _nsb-test-description-label:
222 **Test Description File**
224 Here we will discuss the test description for both
225 baremetal and openstack.
227 *Test Description File for Baremetal*
228 -------------------------------------
230 This section will introduce the meaning of the Test case description
231 file. We will use ``tc_prox_baremetal_l2fwd-2.yaml`` as an example to
232 show you how to understand the test description file.
234 .. image:: images/PROX_Test_BM_Script.png
236 :alt: NSB PROX Test Description File
238 Now let's examine the components of the file in detail
240 1. ``traffic_profile`` - This specifies the traffic profile for the
241 test. In this case ``prox_binsearch.yaml`` is used. See
242 nsb-traffic-profile-label_
244 2. ``topology`` - This is either ``prox-tg-topology-1.yaml`` or
245 ``prox-tg-topology-2.yaml`` or ``prox-tg-topology-4.yaml``
246 depending on number of ports required.
248 3. ``nodes`` - This names the Traffic Generator and the System
249 under Test. Does not need to change.
251 4. ``interface_speed_gbps`` - This is an optional parameter. If not present
252 the system defaults to 10Gbps. This defines the speed of the interfaces.
254 5. ``prox_path`` - Location of the Prox executable on the traffic
   generator (either baremetal or an Openstack Virtual Machine).
257 6. ``prox_config`` - This is the ``SUT Config File``.
258 In this case it is ``handle_l2fwd-2.cfg``
   A number of additional parameters can be added. This example shows
   some of them::
264 interface_speed_gbps: 10
267 prox_path: /opt/nsb_bin/prox
268 prox_config: ``configs/handle_vpe-4.cfg``
272 ``configs/vpe_ipv4.lua`` : ````
273 ``configs/vpe_dscp.lua`` : ````
274 ``configs/vpe_cpe_table.lua`` : ````
275 ``configs/vpe_user_table.lua`` : ````
276 ``configs/vpe_rules.lua`` : ````
277 prox_generate_parameter: True
279 ``interface_speed_gbps`` - this specifies the speed of the interface
in Gigabits Per Second. This is used to calculate pps (packets per second).
281 If the interfaces are of different speeds, then this specifies the speed
282 of the slowest interface. This parameter is optional. If omitted the
283 interface speed defaults to 10Gbps.
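As an illustration of how line rate translates into packets per second: a
standard Ethernet frame carries 20 bytes of on-wire overhead (7 byte
preamble, 1 byte start-of-frame delimiter, 12 byte inter-frame gap) in
addition to the frame itself. The sketch below is ours and shows the common
theoretical calculation, not NSB's internal code:

```python
def max_pps(interface_speed_gbps, frame_size_bytes):
    """Theoretical maximum packets per second for a given line rate.

    Each Ethernet frame occupies (frame_size_bytes + 20) bytes on the
    wire: 7B preamble + 1B start-of-frame delimiter + 12B inter-frame
    gap on top of the frame (which already includes the 4B CRC).
    """
    bits_per_frame = (frame_size_bytes + 20) * 8
    return int(interface_speed_gbps * 1e9 // bits_per_frame)
```

At 10 Gbps this gives the familiar 14.88 Mpps for 64 byte frames.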
``prox_files`` - this specifies that a number of additional files
need to be provided for the test to run correctly. These files
could provide routing information, hashing information or a
hashing algorithm and ip/mac information.
``prox_generate_parameter`` - this specifies that the NSB application
is required to provide information to PROX in the form
of a file called ``parameters.lua``, which contains information
retrieved from either the hardware or the openstack configuration.
295 7. ``prox_args`` - this specifies the command line arguments to start
296 prox. See `prox command line`_.
8. ``prox_config`` - This specifies the Traffic Generator config file,
   i.e. the ``gen_<test>-<ports>.cfg`` file described below.
9. ``runner`` - This is set to ``ProxDuration`` - This specifies that the
   test runs for a set duration. Other runner types are available
   but it is recommended to use ``ProxDuration``.

   The following parameters are supported:
306 ``interval`` - (optional) - This specifies the sampling interval.
309 ``sampled`` - (optional) - This specifies if sampling information is
310 required. Default ``no``
``duration`` - This is the length of the test in seconds.
315 ``confirmation`` - This specifies the number of confirmation retests to
316 be made before deciding to increase or decrease line speed. Default 0.
318 10. ``context`` - This is ``context`` for a 2 port Baremetal configuration.
320 If a 4 port configuration was required then file
321 ``prox-baremetal-4.yaml`` would be used. This is the NSB Prox
322 baremetal configuration file.
324 .. _nsb-traffic-profile-label:
326 *Traffic Profile file*
327 ----------------------
329 This describes the details of the traffic flow. In this case
330 ``prox_binsearch.yaml`` is used.
332 .. image:: images/PROX_Traffic_profile.png
334 :alt: NSB PROX Traffic Profile
337 1. ``name`` - The name of the traffic profile. This name should match the name
338 specified in the ``traffic_profile`` field in the Test Description File.
2. ``traffic_type`` - This specifies the type of traffic pattern generated.
   This name matches the class name of the traffic profile. See::
343 network_services/traffic_profile/prox_binsearch.py class ProxBinSearchProfile(ProxProfile)
345 In this case it lowers the traffic rate until the number of packets
346 sent is equal to the number of packets received (plus a
347 tolerated loss). Once it achieves this it increases the traffic
348 rate in order to find the highest rate with no traffic loss.
350 Custom traffic types can be created by creating a new traffic profile class.
3. ``tolerated_loss`` - This specifies the percentage of packets that
   can be lost/dropped before we declare failure. The test is a success
   if the number of packets received by the Traffic Generator is greater
   than or equal to the number of packets it transmitted minus the
   tolerated loss.
358 4. ``test_precision`` - This specifies the precision of the test
359 results. For some tests the success criteria may never be
360 achieved because the test precision may be greater than the
361 successful throughput. For finer results increase the precision
362 by making this value smaller.
5. ``packet_sizes`` - This specifies the range of packet sizes the test
   is run for.
367 6. ``duration`` - This specifies the sample duration that the test
368 uses to check for success or failure.
370 7. ``lower_bound`` - This specifies the test initial lower bound sample rate.
371 On success this value is increased.
373 8. ``upper_bound`` - This specifies the test initial upper bound sample rate.
374 On success this value is decreased.
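Taken together, ``lower_bound``, ``upper_bound``, ``test_precision`` and
``tolerated_loss`` drive a binary search. The sketch below is our
simplification of what ``ProxBinSearchProfile`` does, not the actual
implementation; ``measure`` stands in for running one test interval and
returning the observed loss percentage at a given rate:

```python
def binsearch_rate(measure, lower_bound=0.0, upper_bound=100.0,
                   tolerated_loss=0.001, test_precision=1.0):
    """Return the highest rate (as % of line rate) whose measured loss
    stays within tolerated_loss. measure(rate) runs one test interval
    and returns the loss percentage observed at that rate."""
    best = None
    while upper_bound - lower_bound >= test_precision:
        rate = (lower_bound + upper_bound) / 2.0
        if measure(rate) <= tolerated_loss:
            best = rate           # success: search higher rates
            lower_bound = rate
        else:
            upper_bound = rate    # failure: search lower rates
    return best
```

Making ``test_precision`` smaller narrows the interval the loop is allowed
to stop in, which is why smaller values give finer results.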
Other traffic profiles exist, e.g. ``prox_ACL.yaml``, which does not
compare what is received with what is transmitted. It just
sends packets at the maximum rate.
It is possible to create custom traffic profiles by creating a new
file in the same folder as ``prox_binsearch.yaml``.
See ``prox_vpe.yaml`` as an example::
384 schema: ``nsb:traffic_profile:0.1``
387 description: Prox vPE traffic profile
390 traffic_type: ProxBinSearchProfile
391 tolerated_loss: 100.0 #0.001
393 # The minimum size of the Ethernet frame for the vPE test is 68 bytes.
399 *Test Description File for Openstack*
400 -------------------------------------
We will use ``tc_prox_heat_context_l2fwd-2.yaml`` as an example to show
you how to understand the test description file.
405 .. image:: images/PROX_Test_HEAT_Script1.png
407 :alt: NSB PROX Test Description File - Part 1
409 .. image:: images/PROX_Test_HEAT_Script2.png
411 :alt: NSB PROX Test Description File - Part 2
Now let's examine the components of the file in detail.
415 Sections 1 to 9 are exactly the same in Baremetal and in Heat. Section
416 ``10`` is replaced with sections A to F. Section 10 was for a baremetal
417 configuration file. This has no place in a heat configuration.
419 A. ``image`` - yardstick-samplevnfs. This is the name of the image
420 created during the installation of NSB. This is fixed.
422 B. ``flavor`` - The flavor is created dynamically. However we could
423 use an already existing flavor if required. In that case the
424 flavor would be named::
426 flavor: yardstick-flavor
C. ``extra_specs`` - This allows us to specify the number of cores,
   sockets and hyperthreading assigned. In this case we have 1 socket
   with 10 cores and hyperthreading is not enabled.
432 D. ``placement_groups`` - default. Do not change for NSB PROX.
434 E. ``servers`` - ``tg_0`` is the traffic generator and ``vnf_0``
435 is the system under test.
437 F. ``networks`` - is composed of a management network labeled ``mgmt``
438 and one uplink network labeled ``uplink_0`` and one downlink
439 network labeled ``downlink_0`` for 2 ports. If this was a 4 port
440 configuration there would be 2 extra downlink ports. See this
   example from a 4 port l2fwd test::
449 port_security_enabled: False
454 port_security_enabled: False
459 port_security_enabled: False
464 port_security_enabled: False
467 .. _nsb-traffic-generator-label:
469 *Traffic Generator Config file*
470 -------------------------------
472 This section will describe the traffic generator config file.
473 This is the same for both baremetal and heat. See this example
474 of ``gen_l2fwd_multiflow-2.cfg`` to explain the options.
476 .. image:: images/PROX_Gen_2port_cfg.png
478 :alt: NSB PROX Gen Config File
The configuration file is divided into multiple sections, each
of which is used to define some parameters and options.
497 See `prox options`_ for details
499 Now let's examine the components of the file in detail
1. ``[eal options]`` - This specifies the EAL (Environment
   Abstraction Layer) options. These are default values and
503 are not changed. See `dpdk wiki page`_.
505 2. ``[variables]`` - This section contains variables, as
506 the name suggests. Variables for Core numbers, mac
507 addresses, ip addresses etc. They are assigned as a
508 ``key = value`` where the key is used in place of the value.
   A special case exists for variables with a value beginning with
   ``@@``. These values are dynamically updated by the NSB
   application at run time, e.g. MAC addresses and IP addresses.
3. ``[port 0]`` - This section describes the DPDK Port. The number
   following the keyword ``port`` usually refers to the DPDK Port
   Id, usually starting from ``0``. Because you can have multiple
   ports this entry is usually repeated. E.g. for a 2 port setup use
   ``[port 0]`` and ``[port 1]`` and for a 4 port setup ``[port 0]``,
   ``[port 1]``, ``[port 2]`` and ``[port 3]``.
   a. In this example ``name=p0`` assigns the name ``p0`` to the
      port. Any name can be assigned to a port.
   b. ``mac=hardware`` sets the MAC address assigned by the hardware
      to data from this port.
   c. ``rx desc=2048`` sets the number of available descriptors to
      allocate for receive packets. This can be changed and can
      affect performance.
   d. ``tx desc=2048`` sets the number of available descriptors to
      allocate for transmit packets. This can be changed and can
      affect performance.
   e. ``promiscuous=yes`` this enables promiscuous mode for this port.
4. ``[defaults]`` - Here default operations and settings can be
   overwritten. In this example ``mempool size=4K`` alters the number
   of mbufs per task. Altering this value could affect performance.
   See `prox options`_ for details.
5. ``[global]`` - Here application wide settings are supported. Things
   like application name, start time, duration and memory
   configurations can be set here. In this example:

   a. ``start time=5`` Time in seconds after which average
      stats will be started.
   b. ``name=Basic Gen`` Name of the configuration.
559 6. ``[core 0]`` - This core is designated the master core. Every
560 Prox application must have a master core. The master mode must
   be assigned to exactly one task, running alone on one core.
7. ``[core 1]`` - This describes the activity on core 1. Cores can
   be configured by means of a set of ``[core #]`` sections, where
   ``#`` represents either:
   a. an absolute core number: e.g. on a 10-core, dual socket
      system with hyper-threading, cores are numbered from 0 to 39.
574 b. PROX allows a core to be identified by a core number, the
575 letter 's', and a socket number.
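As an illustration, the two notations can be parsed as follows. This is a
hypothetical helper of ours; the exact grammar PROX accepts is described in
the `Prox documentation`_:

```python
import re

# Illustrative parser for the two core notations described above:
# an absolute core number ("5") or core-plus-socket ("1s0").
CORE_RE = re.compile(r"^(\d+)(?:s(\d+))?$")

def parse_core_id(token):
    m = CORE_RE.match(token)
    if m is None:
        raise ValueError("bad core identifier: %s" % token)
    core, socket = m.groups()
    return int(core), (int(socket) if socket is not None else None)
```

So ``"5"`` names absolute core 5, while ``"1s0"`` names core 1 on socket 0.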
It is possible to write a baremetal and an openstack test which use
the same traffic generator config file and SUT config file.
In this case it is advisable not to use physical core numbering.
582 However it is also possible to write NSB Prox tests that
583 have been optimized for a particular hardware configuration.
584 In this case it is advisable to use the core numbering.
It is up to the user to make sure that cores from
the right sockets are used (i.e. from the socket to which the NIC
is attached), to ensure good performance (EPA).
589 Each core can be assigned with a set of tasks, each running
one of the implemented packet processing modes::
598 ; Ethernet + IP + UDP
599 pkt inline=${sut_mac0} 70 00 00 00 00 01 08 00 45 00 00 1c 00 01 00 00 40 11 f7 7d 98 10 64 01 98 10 64 02 13 88 13 88 00 08 55 7b
600 ; src_ip: 152.16.100.0/8
603 ; dst_ip: 152.16.100.0/8
606 random=0001001110001XXX0001001110001XXX
609 a. ``name=p0`` - Name assigned to the core.
   b. ``task=0`` - Each core can run a set of tasks, starting with ``0``.
      Task 1 can be defined later in this core or
      can be defined in another ``[core 1]`` section with ``task=1``
      later in the configuration file. Sometimes running
      multiple tasks related to the same packet on the same physical
      core improves performance, however sometimes it
      is optimal to move a task to a separate core. This is best
      decided by checking performance.
618 c. ``mode=gen`` - Specifies the action carried out by this task on
619 this core. Supported modes are: classify, drop, gen, lat, genl4, nop, l2fwd, gredecap,
620 greencap, lbpos, lbnetwork, lbqinq, lb5tuple, ipv6_decap, ipv6_encap,
621 qinqdecapv4, qinqencapv4, qos, routing, impair,
622 mirror, unmpls, tagmpls, nat, decapnsh, encapnsh, police, acl
627 * Basic Forwarding (no touch)
628 * L2 Forwarding (change MAC)
630 * Load balance based on packet fields
631 * Symmetric load balancing
632 * QinQ encap/decap IPv4/IPv6
      In the traffic generator we expect a core to generate packets
      (``gen``) and to receive packets & calculate latency (``lat``).
      This core does ``gen``, i.e. it is a traffic generator.
645 To understand what each of the modes support please see
646 `prox documentation`_.
648 d. ``tx port=p0`` - This specifies that the packets generated are
649 transmitted to port ``p0``
   e. ``bps=1250000000`` - This indicates the Bytes Per Second to
      generate packets at.
   f. ``; Ethernet + IP + UDP`` - This is a comment. Items starting with
      ``;`` are ignored.
   g. ``pkt inline=${sut_mac0} 70 00 00 00 ...`` - Defines the packet
      format as a sequence of bytes (each
      expressed in hexadecimal notation). This defines the packet
      that is generated. This packet begins
      with the hexadecimal sequence assigned to ``sut_mac0`` and the
      remainder of the bytes in the string.
      This packet could now be sent or modified by ``random=..``
      described below before being sent to the target.
662 h. ``; src_ip: 152.16.100.0/8`` - Comment
   i. ``random=0000XXX1`` - This describes a field of the packet
      containing random data. This string can be
      8, 16, 24 or 32 characters long and represents 1, 2, 3 or 4
      bytes of data. In this case it describes a byte of
      data. Each character in the string can be 0, 1 or ``X``. 0 or 1
      are fixed bit values in the data packet and ``X`` is a
      random bit. So random=0000XXX1 generates 00000001(1),
      00000011(3), 00000101(5), 00000111(7), 00001001(9),
      00001011(11), 00001101(13) and 00001111(15) combinations.
673 j. ``rand_offset=29`` - Defines where to place the previously
674 defined random field.
675 k. ``; dst_ip: 152.16.100.0/8`` - Comment
676 l. ``random=0000XXX0`` - This is another random field which
677 generates a byte of 00000000(0), 00000010(2),
678 00000100(4), 00000110(6), 00001000(8), 00001010(10),
679 00001100(12) and 00001110(14) combinations.
680 m. ``rand_offset=33`` - Defines where to place the previously
681 defined random field.
682 n. ``random=0001001110001XXX0001001110001XXX`` - This is
683 another random field which generates 4 bytes.
684 o. ``rand_offset=34`` - Defines where to place the previously
685 defined 4 byte random field.
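The behaviour of these bit templates can be checked by enumerating every
value a ``random=`` template can produce. This helper is ours and is not
part of PROX:

```python
def expand_random(template):
    """Enumerate the values matched by a PROX `random=` bit template,
    where '0'/'1' are fixed bits and 'X' is a random bit. An 8 character
    template describes one byte, 16 characters two bytes, and so on."""
    values = {0}
    for ch in template:
        if ch == '0':
            values = {v << 1 for v in values}           # fixed 0 bit
        elif ch == '1':
            values = {(v << 1) | 1 for v in values}     # fixed 1 bit
        elif ch == 'X':
            values = {(v << 1) | b for v in values for b in (0, 1)}
        else:
            raise ValueError("bad template character: %r" % ch)
    return sorted(values)
```

For example ``expand_random("0000XXX1")`` yields exactly the eight odd
values 1, 3, 5, 7, 9, 11, 13, 15 listed above.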
Core 2 executes the same scenario as Core 1. The only difference
in this case is that the packets are generated for Port 1.
8. ``[core 3]`` - This defines the activities on core 3. The purpose
   of ``core 3`` and ``core 4`` is to receive packets sent back by the
   SUT and to measure latency.
702 a. ``name=rec 0`` - Name assigned to the core.
   b. ``task=0`` - Each core can run a set of tasks, starting with
      ``0``. Task 1 can be defined later in this core or
      can be defined in another ``[core 3]`` section with
      ``task=1`` later in the configuration file. Sometimes running
      multiple tasks related to the same packet on the same
      physical core improves performance, however sometimes it
      is optimal to move a task to a separate core. This is
      best decided by checking performance.
   c. ``mode=lat`` - Specifies the action carried out by this task on
      this core.
      Supported modes are: ``acl``, ``classify``, ``drop``, ``gredecap``,
      ``greencap``, ``ipv6_decap``, ``ipv6_encap``, ``l2fwd``, ``lbnetwork``,
      ``lbpos``, ``lbqinq``, ``nop``, ``police``, ``qinqdecapv4``,
      ``qinqencapv4``, ``qos``, ``routing``, ``impair``, ``lb5tuple``,
      ``mirror``, ``unmpls``, ``tagmpls``, ``nat``, ``decapnsh``, ``encapnsh``,
      ``gen``, ``genl4`` and ``lat``. This task (task 0 on core 3)
      receives packets and measures latency.
   d. ``rx port=p0`` - The port to receive packets on, here ``Port 0``.
      Core 4 will receive packets on ``Port 1``.
722 e. ``lat pos=42`` - Describes where to put a 4-byte timestamp in the packet.
723 Note that the packet length should be longer than ``lat pos`` + 4 bytes
724 to avoid truncation of the timestamp. It defines where the timestamp is
725 to be read from. Note that the SUT workload might cause the position of
726 the timestamp to change (i.e. due to encapsulation).
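The timestamp handling can be sketched as follows. The 4-byte width and the
offset come from ``lat pos=42`` above; the byte order chosen here is our
assumption for illustration only, not PROX's internal encoding:

```python
import struct

LAT_POS = 42  # offset taken from `lat pos=42` in the config above

def write_timestamp(packet, timestamp):
    """Place a 4-byte timestamp at LAT_POS. The packet must be at least
    LAT_POS + 4 bytes long, otherwise the timestamp would be truncated."""
    if len(packet) < LAT_POS + 4:
        raise ValueError("packet too short for timestamp")
    struct.pack_into(">I", packet, LAT_POS, timestamp & 0xFFFFFFFF)

def read_timestamp(packet):
    """Read the 4-byte timestamp back from LAT_POS."""
    return struct.unpack_from(">I", packet, LAT_POS)[0]
```

This also shows why an SUT that encapsulates packets shifts the position at
which the timestamp must be read back.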
728 .. _nsb-sut-generator-label:
*SUT Config file*
-------------------------------
This section describes the SUT (VNF) config file. This is the same for both
baremetal and heat. See this example of ``handle_l2fwd_multiflow-2.cfg`` to
explain the options.
739 :alt: NSB PROX Handle Config File
741 See `prox options`_ for details
743 Now let's examine the components of the file in detail
1. ``[eal options]`` - same as the Generator config file. This specifies the
   EAL (Environment Abstraction Layer) options. These are default values and
747 are not changed. See `dpdk wiki page`_.
2. ``[port 0]`` - This section describes the DPDK Port. The number following
   the keyword ``port`` usually refers to the DPDK Port Id, usually starting
   from ``0``. Because you can have multiple ports this entry is usually
   repeated. E.g. for a 2 port setup use ``[port 0]`` and ``[port 1]`` and
   for a 4 port setup ``[port 0]``, ``[port 1]``, ``[port 2]`` and
   ``[port 3]``.
   a. In this example ``name=if0`` assigns the name ``if0`` to the port. Any
      name can be assigned to a port.
   b. ``mac=hardware`` sets the MAC address assigned by the hardware to data
      from this port.
   c. ``rx desc=2048`` sets the number of available descriptors to allocate
      for receive packets. This can be changed and can affect performance.
   d. ``tx desc=2048`` sets the number of available descriptors to allocate
      for transmit packets. This can be changed and can affect performance.
   e. ``promiscuous=yes`` this enables promiscuous mode for this port.
3. ``[defaults]`` - Here default operations and settings can be overwritten.
   a. In this example ``mempool size=8K`` alters the number of mbufs per
      task. Altering this value could affect performance. See
      `prox options`_ for details.
   b. ``memcache size=512`` - number of mbufs cached per core, default is
      256; this is the cache_size. Altering this value could affect
      performance.
4. ``[global]`` - Here application wide settings are supported. Things like
   application name, start time, duration and memory configurations can be
   set here.

   a. ``start time=5`` Time in seconds after which average stats will be
      started.
   b. ``name=Handle L2FWD Multiflow (2x)`` Name of the configuration.
797 5. ``[core 0]`` - This core is designated the master core. Every Prox
798 application must have a master core. The master mode must be assigned to
   exactly one task, running alone on one core.
6. ``[core 1]`` - This describes the activity on core 1. Cores can be
   configured by means of a set of ``[core #]`` sections, where ``#``
   represents either:
808 a. an absolute core number: e.g. on a 10-core, dual socket system with
809 hyper-threading, cores are numbered from 0 to 39.
   b. PROX allows a core to be identified by a core number, the letter 's',
      and a socket number. However, since NSB PROX is hardware agnostic
      (physical and virtual configurations are the same), it is advisable
      not to use physical core numbering.
816 Each core can be assigned with a set of tasks, each running one of the
   implemented packet processing modes::
823 dst mac=@@tester_mac1
   a. ``name=none`` - No name assigned to the core.
   b. ``task=0`` - Each core can run a set of tasks, starting with ``0``.
      Task 1 can be defined later in this core or can be defined in another
      ``[core 1]`` section with ``task=1`` later in the configuration file.
      Sometimes running multiple tasks related to the same packet on the
      same physical core improves performance, however sometimes it is
      optimal to move a task to a separate core. This is best decided by
      checking performance.
   c. ``mode=l2fwd`` - Specifies the action carried out by this task on this
      core. Supported modes are: ``acl``, ``classify``, ``drop``,
      ``gredecap``, ``greencap``, ``ipv6_decap``, ``ipv6_encap``, ``l2fwd``,
      ``lbnetwork``, ``lbpos``, ``lbqinq``, ``nop``, ``police``,
      ``qinqdecapv4``, ``qinqencapv4``, ``qos``, ``routing``, ``impair``,
      ``lb5tuple``, ``mirror``, ``unmpls``, ``tagmpls``, ``nat``,
      ``decapnsh``, ``encapnsh``, ``gen``, ``genl4`` and ``lat``. This task
      does ``l2fwd``, i.e. L2 forwarding.
   d. ``dst mac=@@tester_mac1`` - The destination MAC address of the packet
      will be set to the MAC address of ``Port 1`` of the destination device
      (the Traffic Generator/Verifier).
   e. ``rx port=if0`` - This specifies that the packets are received from
      ``Port 0``, called ``if0``.
   f. ``tx port=if1`` - This specifies that the packets are transmitted to
      ``Port 1``, called ``if1``.
In this example we receive a packet on a port, carry out an operation
on the packet on the core, and transmit it on another port, still using
the same task on the same core.
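The per-packet work of this task amounts to a destination MAC rewrite. The
sketch below is ours and purely illustrative; the placeholder MAC stands in
for the value NSB substitutes for ``@@tester_mac1`` at run time:

```python
TESTER_MAC1 = bytes.fromhex("020000000001")  # placeholder for @@tester_mac1

def l2fwd(frame):
    """Sketch of the l2fwd handle: overwrite the destination MAC (the
    first 6 bytes of the Ethernet header) with the verifier's MAC and
    return the frame for transmission on the other port."""
    frame = bytearray(frame)
    frame[0:6] = TESTER_MAC1
    return bytes(frame)
```

The rest of the frame passes through untouched, which is why l2fwd is such
a cheap workload for the SUT.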
On some implementations you may wish to use multiple tasks, like this.
In this example you can see Core 1/Task 0, called ``rx_task``, receives the
packet from if0 and performs the l2fwd. However instead of sending the
packet to a port it sends it to a core, see ``tx cores=1t1``. In this case
it sends it to Core 1/Task 1.
Core 1/Task 1, called ``l2fwd_if0``, receives the packet, not from a port
but from the ring, see ``rx ring=yes``. It does not perform any operation
on the packet, see ``mode=none``, and sends the packets to ``if0``, see
``tx port=if0``.
It is also possible to implement more complex operations by chaining
multiple operations in sequence and using rings to pass packets from
one task to the next.
888 In this example, we show a Broadband Network Gateway (BNG) with Quality of
889 Service (QoS). Communication from task to task is via rings.
891 .. image:: images/PROX_BNG_QOS.png
893 :alt: NSB PROX Config File for BNG_QOS
*Baremetal Configuration file*
------------------------------

.. _baremetal-config-label:

This is required for baremetal testing. It describes the IP addresses of the
various ports, the network device drivers, the MAC addresses, and the network
connections.

In this example we will describe a 2 port configuration. This file is the
same for all 2 port NSB Prox tests on the same platform/configuration.

.. image:: images/PROX_Baremetal_config.png
   :alt: NSB PROX Yardstick Config

Now let's describe the sections of the file.

1. ``TrafficGen`` - This section describes the Traffic Generator node of the
   test configuration. The name of the node ``trafficgen_1`` must match the
   node name in the ``Test Description File for Baremetal`` mentioned
   earlier. The password attribute of the test needs to be configured. All
   other parameters can remain as default settings.
2. ``interfaces`` - This defines the DPDK interfaces on the Traffic
   Generator.
3. ``xe0`` is DPDK Port 0. ``lspci`` and ``./dpdk-devbind.py -s`` can be
   used to provide the interface information. ``netmask`` and ``local_ip``
   should not be changed.
4. ``xe1`` is DPDK Port 1. If more than 2 ports are required then the
   ``xe1`` section needs to be repeated and modified accordingly.
5. ``vnf`` - This section describes the SUT of the test configuration. The
   name of the node ``vnf`` must match the node name in the
   ``Test Description File for Baremetal`` mentioned earlier. The password
   attribute of the test needs to be configured. All other parameters can
   remain as default settings.
6. ``interfaces`` - This defines the DPDK interfaces on the SUT.
7. ``xe0`` - Same as 3 but for the SUT.
8. ``xe1`` - Same as 4 but for the SUT.
9. ``routing_table`` - All parameters should remain unchanged.
10. ``nd_route_tbl`` - All parameters should remain unchanged.

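For illustration, a ``prox-baremetal-2.yaml`` covering the sections above
might be shaped as follows. All names, addresses, MACs and PCI IDs here are
placeholder assumptions; substitute the values for your own hardware::

   nodes:
   -
       name: "trafficgen_1"
       role: TrafficGen
       ip: 10.10.10.10               # management IP (placeholder)
       user: "root"
       password: "r00t"              # set to your root password
       interfaces:
           xe0:                      # DPDK Port 0
               vpci: "0000:05:00.0"
               local_mac: "aa:bb:cc:dd:ee:01"
               driver: "i40e"
               local_ip: "152.16.100.19"
               netmask: "255.255.255.0"
               dpdk_port_num: 0
           xe1:                      # DPDK Port 1
               vpci: "0000:05:00.1"
               local_mac: "aa:bb:cc:dd:ee:02"
               driver: "i40e"
               local_ip: "152.16.40.19"
               netmask: "255.255.255.0"
               dpdk_port_num: 1
   -
       name: "vnf"
       role: vnf
       # ... same structure as trafficgen_1, with the SUT's own values
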
*Grafana Dashboard*
-------------------

The Grafana dashboard visually displays the results of the tests. The steps
required to produce a Grafana dashboard are described here.

.. _yardstick-config-label:

a. Configure ``yardstick`` to use influxDB to store test results. See file
   ``/etc/yardstick/yardstick.conf``.

   .. image:: images/PROX_Yardstick_config.png
      :alt: NSB PROX Yardstick Config

   1. Specify the dispatcher to use influxDB to store results.
   2. ``target = ...`` - Specify the location of the influxDB instance used
      to store the results.

      ``db_name = yardstick`` - the name of the database. Do not change.

      ``username = root`` - the username used to store results. (Many tests
      are run as root.)

      ``password = ...`` - Please set to the root user password.

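   A matching ``yardstick.conf`` fragment might look like this (the influxDB
   host address shown is a placeholder assumption)::

     [DEFAULT]
     debug = False
     dispatcher = influxdb

     [dispatcher_influxdb]
     timeout = 5
     target = http://10.237.222.55:8086
     db_name = yardstick
     username = root
     password = <root user password>
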
b. Deploy InfluxDB & Grafana. See `grafana deployment`_.

c. Generate the test data. Run the tests as follows::

     yardstick --debug task start tc_prox_<context>_<test>-ports.yaml

   For example::

     yardstick --debug task start tc_prox_heat_context_l2fwd-4.yaml

d. Now build the dashboard for the test you just ran. The easiest way to do
   this is to copy an existing dashboard and rename the test and the field
   names. The procedure to do so is described here. See
   `opnfv grafana dashboard`_.

How to run NSB Prox Test on a baremetal environment
===================================================

In order to run the NSB PROX test:

1. Install NSB on the Traffic Generator node and Prox on the SUT. See
   `NSB Installation`_.

2. To enter the container::

     docker exec -it yardstick /bin/bash

3. Install the baremetal configuration files (POD files).

   a. Go to the location of the PROX tests in the container::

        cd /home/opnfv/repos/yardstick/samples/vnf_samples/nsut/prox

   b. Install ``prox-baremetal-2.yaml`` and ``prox-baremetal-4.yaml`` for
      that topology into this directory as per baremetal-config-label_.

   c. Install and configure ``yardstick.conf``. Modify
      ``/etc/yardstick/yardstick.conf`` as per yardstick-config-label_.

4. Execute the test. For example::

     yardstick --debug task start ./tc_prox_baremetal_l2fwd-4.yaml

How to run NSB Prox Test on an Openstack environment
====================================================

In order to run the NSB PROX test:

1. Install NSB on the Openstack deployment node. See `NSB Installation`_.

2. To enter the container::

     docker exec -it yardstick /bin/bash

3. Install the configuration file.

   a. Go to the location of the PROX tests in the container::

        cd /home/opnfv/repos/yardstick/samples/vnf_samples/nsut/prox

   b. Install and configure ``yardstick.conf``. Modify
      ``/etc/yardstick/yardstick.conf`` as per yardstick-config-label_.

4. Execute the test. For example::

     yardstick --debug task start ./tc_prox_heat_context_l2fwd-4.yaml

Frequently Asked Questions
==========================

Here is a list of frequently asked questions.

*NSB Prox does not work on Baremetal. How do I resolve this?*
-------------------------------------------------------------

If PROX NSB does not work on baremetal, the problem is either in the network
configuration or in the test file.

1. Verify the network configuration. Execute an existing baremetal test::

     yardstick --debug task start ./tc_prox_baremetal_l2fwd-4.yaml

   If the test does not work then there is an error in the network
   configuration.

   a. Check DPDK on the Traffic Generator and SUT via::

        /root/dpdk-17./usertools/dpdk-devbind.py

   b. Verify that the MAC addresses match ``prox-baremetal-<ports>.yaml``
      via ``ifconfig`` and ``dpdk-devbind``.

   c. Check that your eth port is what you expect. You would not be the
      first person to think that the port your cable is plugged into is
      ethX when in fact it is ethY. Use ethtool to visually confirm that
      the eth is where you expect::

        ethtool -p ethX

      An LED should start blinking on the port (on both the
      System-Under-Test and the Traffic Generator).

   d. Check the cabling. Install the Linux kernel network driver and ensure
      your ports are ``bound`` to the driver via ``dpdk-devbind``. Bring up
      the port on both SUT and Traffic Generator and check the connection.

      i) On the SUT and on the Traffic Generator::

           ifconfig ethX/enoX up

      ii) Check the link on both via::

            ethtool ethX

          Look at ``Link detected``: if it is ``yes``, the cable is good;
          if ``no``, you have an issue with your cable/port.

2. If the existing baremetal test works then the issue is with your test.
   Check the traffic generator ``gen_<test>-<ports>.cfg`` to ensure it is
   producing a valid packet.

*How do I debug NSB Prox on Baremetal?*
---------------------------------------

1. Execute the test as follows::

     yardstick --debug task start ./tc_prox_baremetal_l2fwd-4.yaml

2. Login to the Traffic Generator as ``root`` and run::

     /opt/nsb_bin/prox -f /tmp/gen_<test>-<ports>.cfg

3. Login to the SUT as ``root`` and run::

     /opt/nsb_bin/prox -f /tmp/handle_<test>-<ports>.cfg

4. Now let's examine the Generator output. In this case the output of
   ``gen_l2fwd-4.cfg``.

   .. image:: images/PROX_Gen_GUI.png
      :alt: NSB PROX Traffic Generator GUI

   Now let's examine the output:

   1. Indicates the amount of data successfully transmitted on Port 0.
   2. Indicates the amount of data successfully received on Port 1.
   3. Indicates the amount of data successfully handled for Port 1.

   It appears what is transmitted is received.

   .. note::  The number of packets MAY not exactly match because the ports
      are read in sequence.

   .. note::  What is transmitted on PORT X may not always be received on
      the same port. Please check the test scenario.

5. Now let's examine the SUT output.

   .. image:: images/PROX_SUT_GUI.png
      :alt: NSB PROX SUT GUI

   Now let's examine the output:

   1. What is received on 0 is transmitted on 1, received on 1 transmitted
      on 0, received on 2 transmitted on 3 and received on 3 transmitted
      on 2.
   2. No packets are failed.
   3. No packets are discarded.

   We can also dump the packets being received or transmitted via the
   following commands::

     dump      Arguments: <core id> <task id> <nb packets>
               Create a hex dump of <nb_packets> from <task_id> on <core_id>
               showing how packets have changed between RX and TX.
     dump_rx   Arguments: <core id> <task id> <nb packets>
               Create a hex dump of <nb_packets> from <task_id> on <core_id>
               at RX
     dump_tx   Arguments: <core id> <task id> <nb packets>
               Create a hex dump of <nb_packets> from <task_id> on <core_id>
               at TX

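   For example, to capture eight packets at RX on core 1, task 0, one would
   type the following in the PROX command line (the core, task and packet
   count values here are illustrative)::

     dump_rx 1 0 8
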
*NSB Prox works on Baremetal but not in Openstack. How do I resolve this?*
--------------------------------------------------------------------------

NSB Prox on Baremetal is a lot more forgiving than NSB Prox on Openstack. A
badly formed packet may still work with PROX on Baremetal. However on
Openstack the packet must be correct and all fields of the header correct.
E.g. a packet with an invalid Protocol ID would still work in Baremetal but
this packet would be rejected by Openstack.

1. Check the validity of the packet.
2. Use a known good packet in your test.
3. If using ``Random`` fields in the traffic generator, disable them and
   retry.

*How do I debug NSB Prox on Openstack?*
---------------------------------------

1. Execute the test as follows::

     yardstick --debug task start --keep-deploy ./tc_prox_heat_context_l2fwd-4.yaml

2. Access the docker image if required via::

     docker exec -it yardstick /bin/bash

3. Install the openstack credentials.

   Depending on your openstack deployment, the location of these credentials
   may vary. On this platform they are installed via::

     scp root@10.237.222.55:/etc/kolla/admin-openrc.sh .
     source ./admin-openrc.sh

4. List the stack details.

   a. Get the name of the stack.

      .. image:: images/PROX_Openstack_stack_list.png
         :alt: NSB PROX openstack stack list

   b. Get the Floating IP of the Traffic Generator & SUT.

      This generates a lot of information. Please note the floating IP of
      the VNF and the Traffic Generator.

      .. image:: images/PROX_Openstack_stack_show_a.png
         :alt: NSB PROX openstack stack show (Top)

      From here you can see the floating IP Address of the SUT / VNF.

      .. image:: images/PROX_Openstack_stack_show_b.png
         :alt: NSB PROX openstack stack show (Bottom)

      From here you can see the floating IP Address of the Traffic
      Generator.

   c. Get the ssh identity file.

      In the docker container locate the identity file::

        cd /home/opnfv/repos/yardstick/yardstick/resources/files

5. Login to the SUT as ``ubuntu``::

     ssh -i ./yardstick_key-01029d1d ubuntu@172.16.2.158

   Now continue as for baremetal.

6. Login to the Traffic Generator as ``ubuntu``::

     ssh -i ./yardstick_key-01029d1d ubuntu@172.16.2.156

   Now continue as for baremetal.

*How do I resolve "Quota exceeded for resources"*
-------------------------------------------------

This usually occurs for one of two reasons when executing an openstack test:

1. One or more stacks already exist and are consuming all resources. To
   resolve::

     openstack stack list

   The response is similar to::

     +--------------------------------------+--------------------+-----------------+----------------------+--------------+
     | ID                                   | Stack Name         | Stack Status    | Creation Time        | Updated Time |
     +--------------------------------------+--------------------+-----------------+----------------------+--------------+
     | acb559d7-f575-4266-a2d4-67290b556f15 | yardstick-e05ba5a4 | CREATE_COMPLETE | 2017-12-06T15:00:05Z | None         |
     | 7edf21ce-8824-4c86-8edb-f7e23801a01b | yardstick-08bda9e3 | CREATE_COMPLETE | 2017-12-06T14:56:43Z | None         |
     +--------------------------------------+--------------------+-----------------+----------------------+--------------+

   In this case 2 stacks already exist.

   To delete a stack::

     openstack stack delete yardstick-08bda9e3
     Are you sure you want to delete this stack(s) [y/N]? y

2. The openstack configuration quotas are too small.

   The solution is to increase the quota. Use the following to query the
   existing quotas::

     openstack quota show

   and the following to update a quota::

     openstack quota set <resource>

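   For example, to raise the instance and port quotas (the resource flags
   and values shown are illustrative assumptions; see
   ``openstack quota set --help`` for the full list of resources)::

     openstack quota set --instances 20 --ports 100 <project>
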
*Openstack CLI fails or hangs. How do I resolve this?*
------------------------------------------------------

If it fails due to::

   Missing value auth-url required for auth plugin password

Check your shell environment for Openstack variables. One of them should
contain the authentication URL::

   OS_AUTH_URL="https://192.168.72.41:5000/v3"

Or similar. Ensure that the openstack configurations are exported::

   cat /etc/kolla/admin-openrc.sh

   export OS_PROJECT_DOMAIN_NAME=default
   export OS_USER_DOMAIN_NAME=default
   export OS_PROJECT_NAME=admin
   export OS_TENANT_NAME=admin
   export OS_USERNAME=admin
   export OS_PASSWORD=BwwSEZqmUJA676klr9wa052PFjNkz99tOccS9sTc
   export OS_AUTH_URL=http://193.168.72.41:35357/v3
   export OS_INTERFACE=internal
   export OS_IDENTITY_API_VERSION=3
   export EXTERNAL_NETWORK=yardstick-public

If the Openstack CLI appears to hang, then verify that the proxies and
``no_proxy`` are set correctly. They should be similar to::

   FTP_PROXY="http://<your_proxy>:<port>/"
   HTTPS_PROXY="http://<your_proxy>:<port>/"
   HTTP_PROXY="http://<your_proxy>:<port>/"
   NO_PROXY="localhost,127.0.0.1,10.237.222.55,10.237.223.80,10.237.222.134,.ir.intel.com"
   ftp_proxy="http://<your_proxy>:<port>/"
   http_proxy="http://<your_proxy>:<port>/"
   https_proxy="http://<your_proxy>:<port>/"
   no_proxy="localhost,127.0.0.1,10.237.222.55,10.237.223.80,10.237.222.134,.ir.intel.com"

Where:

1) 10.237.222.55 = IP Address of the deployment node
2) 10.237.223.80 = IP Address of the Controller node
3) 10.237.222.134 = IP Address of the Compute node

*How to Understand the Grafana output?*
---------------------------------------

.. image:: images/PROX_Grafana_1.png
   :alt: NSB PROX Grafana_1

.. image:: images/PROX_Grafana_2.png
   :alt: NSB PROX Grafana_2

.. image:: images/PROX_Grafana_3.png
   :alt: NSB PROX Grafana_3

.. image:: images/PROX_Grafana_4.png
   :alt: NSB PROX Grafana_4

A. Test Parameters - Test interval, Duration, Tolerated Loss and Test
   Precision.

B. No. of packets sent and received during the test.

C. Generator Stats - packets sent, received and attempted by the Generator.

E. No. of packets received by the SUT.

F. No. of packets forwarded by the SUT.

G. No. of packets sent by the generator per port, for each interval.

H. No. of packets received by the generator per port, for each interval.

I. No. of packets sent and received by the generator and lost by the SUT
   that meet the success criteria.

J. The change in the percentage of Line Rate used over a test. The MAX and
   the MIN should converge to within the interval specified as the test
   precision.

K. Packet size supported during the test. If *N/A* appears in any field the
   result has not been decided.

L. Calculated throughput in MPPS (Million Packets Per Second) for this line
   rate.

M. No. of packets sent by the generator in MPPS.

N. No. of packets received by the generator in MPPS.

O. No. of packets sent by the SUT.

P. No. of packets received by the SUT.

Q. Total no. of dropped packets -- packets sent but not received back by
   the generator; these may be dropped by the SUT or the generator.

R. The tolerated no. of dropped packets.

S. Test throughput in Gbps.

T. Latency per port.