.. This work is licensed under a Creative Commons Attribution 4.0 International License.
.. http://creativecommons.org/licenses/by/4.0
.. (c) OPNFV, Intel Corporation, AT&T and others.
This document is intended to aid those who want to modify or extend the vsperf
code - for example, to add support for new traffic generators, deployment
scenarios and so on.
Example Connectivity to DUT
---------------------------
Establish connectivity to the VSPERF DUT Linux host, such as the DUT in Pod 3,
by following the steps in `Testbed POD3
<https://wiki.opnfv.org/get_started/pod_3_-_characterize_vswitch_performance>`__

The steps cover booking the DUT and establishing the VSPERF environment.
List all the CLI options:

.. code-block:: console

   $ ./vsperf --help
Run all tests that have ``tput`` in their name - ``phy2phy_tput``, ``pvp_tput`` etc.:

.. code-block:: console

   $ ./vsperf --tests 'tput'
As above, but override the default configuration with settings from
``10_custom.conf``. This is useful because modifying the configuration
directly in the ``conf/NN_*.py`` files shows up as changes under git source
control:

.. code-block:: console

   $ ./vsperf --conf-file=<path_to_custom_conf>/10_custom.conf --tests 'tput'
Override specific test parameters. Useful for shortening the duration of tests
for development purposes:

.. code-block:: console

   $ ./vsperf --test-params 'duration=10;rfc2544_tests=1;pkt_sizes=64' --tests 'pvp_tput'
This is a typical flow of control for a test.
The conf package contains the configuration files (``*.conf``) for all system
components; it also provides a ``settings`` object that exposes all of these
settings.

Settings are not passed from component to component. Rather, they are available
globally to all components once they import the conf package.
.. code-block:: python

   from conf import settings

   log_file = settings.getValue('LOG_FILE_DEFAULT')
Settings files (``*.conf``) are valid Python code, so settings can be assigned
complex types such as lists and dictionaries as well as scalar types:

.. code-block:: python

   first_packet_size = settings.getValue('PACKET_SIZE_LIST')[0]
Configuration Procedure and Precedence
--------------------------------------
Configuration files follow a strict naming convention that allows them to be
processed in a specific order. All the ``.conf`` files are named
``NN_name.conf``, where ``NN`` is a decimal number. The files are processed in
order from ``00_name.conf`` to ``99_name.conf``, so if a setting is given in
both a lower- and a higher-numbered conf file, the higher-numbered file
provides the effective value because it is processed after the lower-numbered
file.
The values in the file specified by ``--conf-file`` take precedence over all
the other configuration files, and this file does not have to follow the
naming convention.
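The precedence rule can be sketched in a few lines; the merging function, the
file names and the settings below are illustrative, not real VSPERF internals
or defaults:

```python
# Hypothetical sketch of the precedence rule: conf files are merged in
# ascending numeric order, so higher-numbered files win, and a file given
# via --conf-file wins last.

def effective_settings(conf_files, custom_conf=None):
    """Merge per-file settings dicts in processing order; later files win."""
    merged = {}
    for _name, values in sorted(conf_files.items()):
        merged.update(values)
    if custom_conf:
        # the file passed via --conf-file takes final precedence
        merged.update(custom_conf)
    return merged

files = {
    '00_common.conf': {'LOG_FILE_DEFAULT': 'vsperf.log', 'TRAFFICGEN': 'Dummy'},
    '03_traffic.conf': {'TRAFFICGEN': 'TestCenter'},
}
print(effective_settings(files)['TRAFFICGEN'])  # higher-numbered file wins
```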
Configuration of GUEST options
------------------------------
VSPERF is able to set up scenarios involving a number of VMs in series or in
parallel. All configuration options related to a particular VM instance are
defined as lists and prefixed with the ``GUEST_`` label. It is essential that
there are enough items in all ``GUEST_`` options to cover all VM instances
involved in the test. If there are not enough items, VSPERF will use the first
item of the particular ``GUEST_`` option to expand the list to the required
length.
Example of option expansion for 4 VMs:

.. code-block:: python

   GUEST_SMP = ['2']
   GUEST_MEMORY = ['2048', '4096']

Values after automatic expansion:

.. code-block:: python

   GUEST_SMP = ['2', '2', '2', '2']
   GUEST_MEMORY = ['2048', '4096', '2048', '2048']
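The expansion rule above can be sketched as a small helper;
``expand_guest_option`` is an illustrative name, not a VSPERF function:

```python
# A minimal sketch of the expansion rule: a GUEST_ option list shorter
# than the number of VMs is padded with copies of its first item.

def expand_guest_option(values, vm_count):
    """Pad (or trim) a GUEST_ option list to one item per VM."""
    if len(values) < vm_count:
        values = values + [values[0]] * (vm_count - len(values))
    return values[:vm_count]

print(expand_guest_option(['2'], 4))             # ['2', '2', '2', '2']
print(expand_guest_option(['2048', '4096'], 4))  # ['2048', '4096', '2048', '2048']
```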
The first item of an option can contain macros starting with ``#`` to generate
VM-specific values. These macros can be used only for options of ``list`` or
``str`` types with the ``GUEST_`` prefix.

Example of macros and their expansion for 2 VMs:
.. code-block:: python

   GUEST_SHARE_DIR = ['/tmp/qemu#VMINDEX_share']
   GUEST_BRIDGE_IP = ['#IP(1.1.1.5)/16']
Values after automatic expansion:

.. code-block:: python

   GUEST_SHARE_DIR = ['/tmp/qemu0_share', '/tmp/qemu1_share']
   GUEST_BRIDGE_IP = ['1.1.1.5/16', '1.1.1.6/16']
Additional examples are available in ``04_vnf.conf``.

Note: If a macro is detected in the first item of the list, then all other
items are ignored and the list content is created automatically.
Multiple macros can be used inside one configuration option definition, but
macros cannot be used inside other macros. The only exception is the macro
``#VMINDEX``, which is expanded first and thus can be used inside other macros.

The following macros are supported:
* ``#VMINDEX`` - replaced by the index of the VM being executed; this macro
  is expanded first, so it can be used inside other macros.

  .. code-block:: python

     GUEST_SHARE_DIR = ['/tmp/qemu#VMINDEX_share']
* ``#MAC(mac_address[, step])`` - iterates the given ``mac_address`` with an
  optional ``step``. If the step is not defined, it is set to 1. This means
  that the first VM will use the value of ``mac_address``, the second VM the
  value of ``mac_address`` increased by ``step``, etc.

  .. code-block:: python

     GUEST_NICS = [[{'mac' : '#MAC(00:00:00:00:00:01,2)'}]]
* ``#IP(ip_address[, step])`` - iterates the given ``ip_address`` with an
  optional ``step``. If the step is not defined, it is set to 1. This means
  that the first VM will use the value of ``ip_address``, the second VM the
  value of ``ip_address`` increased by ``step``, etc.

  .. code-block:: python

     GUEST_BRIDGE_IP = ['#IP(1.1.1.5)/16']
* ``#EVAL(expression)`` - evaluates the given ``expression`` as Python code;
  only simple expressions should be used. Function calls are not supported.

  .. code-block:: python

     GUEST_CORE_BINDING = [('#EVAL(6+2*#VMINDEX)', '#EVAL(7+2*#VMINDEX)')]
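The ``#VMINDEX`` and ``#IP`` expansions above can be sketched as follows;
``expand_macros`` is an illustrative helper, not VSPERF's actual parser:

```python
# A hedged sketch of per-VM macro expansion for #VMINDEX and #IP.
import ipaddress
import re

def expand_macros(template, vm_index):
    # #VMINDEX is substituted first, so it may appear inside other macros
    text = template.replace('#VMINDEX', str(vm_index))

    def ip_macro(match):
        # #IP(addr[, step]) yields addr advanced by vm_index * step
        addr = ipaddress.ip_address(match.group(1))
        step = int(match.group(2) or 1)
        return str(addr + vm_index * step)

    return re.sub(r'#IP\(([^,)]+)(?:,\s*(\d+))?\)', ip_macro, text)

print(expand_macros('/tmp/qemu#VMINDEX_share', 1))  # /tmp/qemu1_share
print(expand_macros('#IP(1.1.1.5)/16', 1))          # 1.1.1.6/16
```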
``conf.settings`` also loads configuration from the command line and from the
environment.
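One plausible lookup order can be sketched as below; the precedence shown
(command line over environment over conf files) and the helper name are
assumptions for illustration, not the documented behaviour of
``conf.settings``:

```python
# A hedged sketch of a layered settings lookup; names are illustrative.
import os

def lookup(key, cli_args, file_settings):
    if key in cli_args:            # e.g. values given via --test-params
        return cli_args[key]
    if key in os.environ:          # environment variable override
        return os.environ[key]
    return file_settings.get(key)  # value loaded from the *.conf files

print(lookup('TRAFFICGEN', {'TRAFFICGEN': 'Dummy'}, {'TRAFFICGEN': 'Xena'}))  # Dummy
```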
Every testcase uses one of the supported deployment scenarios to set up the
test environment. The controller responsible for a given scenario configures
flows in the vswitch to route traffic among physical interfaces connected to
the traffic generator and virtual machines. VSPERF supports several
deployments, including the PXP deployment, which can set up various scenarios
with multiple VMs.
These scenarios are realized by the VswitchControllerPXP class, which can
configure and execute a given number of VMs in serial or parallel
configurations. Every VM can be configured with just one or an even number of
interfaces. If a VM has more than 2 interfaces, traffic is properly routed
among pairs of interfaces.
Example of traffic routing for a VM with 4 NICs in serial configuration:
.. code-block:: console

            +------------------------------------------+
            |  VM with 4 NICs                          |
            |  +---------------+    +---------------+  |
            |  |  Application  |    |  Application  |  |
            |  +---------------+    +---------------+  |
            |  +---------------+    +---------------+  |
            |  | logical ports |    | logical ports |  |
            |  |   0       1   |    |   2       3   |  |
            +--+---------------+----+---------------+--+
   +-----------+---------------+----+---------------+----------+
   | vSwitch   |   0       1   |    |   2       3   |          |
   |           | logical ports |    | logical ports |          |
   | previous  +---------------+    +---------------+   next   |
   | VM or PHY    ^       |            ^       |   VM or PHY  |
   |   port   -----+       +------------+       +--->   port  |
   +-----------------------------------------------------------+
It is also possible to define a different number of interfaces for each VM to
better simulate real scenarios.
Example of traffic routing for 2 VMs in serial configuration, where the 1st VM
has 4 NICs and the 2nd VM 2 NICs:
.. code-block:: console

            +------------------------------------------+ +---------------------+
            |  1st VM with 4 NICs                      | |  2nd VM with 2 NICs |
            |  +---------------+    +---------------+  | |  +---------------+  |
            |  |  Application  |    |  Application  |  | |  |  Application  |  |
            |  +---------------+    +---------------+  | |  +---------------+  |
            |  +---------------+    +---------------+  | |  +---------------+  |
            |  | logical ports |    | logical ports |  | |  | logical ports |  |
            |  |   0       1   |    |   2       3   |  | |  |   0       1   |  |
            +--+---------------+----+---------------+--+ +--+---------------+--+
   +-----------+---------------+----+---------------+-------+---------------+----------+
   | vSwitch   |   0       1   |    |   2       3   |       |   4       5   |          |
   |           | logical ports |    | logical ports |       | logical ports |          |
   | previous  +---------------+    +---------------+       +---------------+   next   |
   | VM or PHY    ^       |            ^       |               ^       |   VM or PHY  |
   |   port   -----+       +------------+       +---------------+       +---->   port  |
   +-----------------------------------------------------------------------------------+
The number of VMs involved in the test and the type of their connection is
defined by the deployment name as follows:

* ``pvvp[number]`` - configures a scenario with VMs connected in series with
  an optional ``number`` of VMs. If ``number`` is not specified, then
  2 VMs will be used.
Example of 2 VMs in a serial configuration:
.. code-block:: console

   +----------------------+  +----------------------+
   |   1st VM             |  |   2nd VM             |
   |   +---------------+  |  |   +---------------+  |
   |   |  Application  |  |  |   |  Application  |  |
   |   +---------------+  |  |   +---------------+  |
   |   +---------------+  |  |   +---------------+  |
   |   | logical ports |  |  |   | logical ports |  |
   |   |   0       1   |  |  |   |   0       1   |  |
   +---+---------------+--+  +---+---------------+--+
   +---+---------------+---------+---------------+--+
   |   | logical ports | vSwitch | logical ports |  |
   |   +---------------+         +---------------+  |
   |       |       +-----------------+       v      |
   |   +----------------------------------------+   |
   |   |             physical ports             |   |
   +---+----------------------------------------+---+
   +------------------------------------------------+
   |                traffic generator               |
   +------------------------------------------------+
* ``pvpv[number]`` - configures a scenario with VMs connected in parallel with
  an optional ``number`` of VMs. If ``number`` is not specified, then
  2 VMs will be used. The multistream feature is used to route traffic to a
  particular VM (or to the NIC pairs of every VM). This means that VSPERF will
  enable the multistream feature and set the number of streams to the number
  of VMs and their NIC pairs. Traffic will be dispatched based on the stream
  type, i.e. by UDP port, IP address or MAC address.

Example of 2 VMs in a parallel configuration, where traffic is dispatched
based on the UDP port.
.. code-block:: console

   +----------------------+  +----------------------+
   |   1st VM             |  |   2nd VM             |
   |   +---------------+  |  |   +---------------+  |
   |   |  Application  |  |  |   |  Application  |  |
   |   +---------------+  |  |   +---------------+  |
   |   +---------------+  |  |   +---------------+  |
   |   | logical ports |  |  |   | logical ports |  |
   |   |   0       1   |  |  |   |   0       1   |  |
   +---+---------------+--+  +---+---------------+--+
   +---+---------------+---------+---------------+--+
   |   | logical ports | vSwitch | logical ports |  |
   |   +---------------+         +---------------+  |
   |       |       ......................:    :     |
   |  port |  port :   +--------------------+ :     |
   |   +----------------------------------------+   |
   |   |             physical ports             |   |
   +---+----------------------------------------+---+
   +------------------------------------------------+
   |                traffic generator               |
   +------------------------------------------------+
The PXP deployment is backward compatible with the PVP deployment: ``pvp`` is
an alias for ``pvvp1`` and it executes just one VM.

The number of interfaces used by VMs is defined by the configuration option
``GUEST_NICS_NR``. If more than one pair of interfaces is defined, then:
* for the ``pvvp`` (serial) scenario, every NIC pair is connected in series
  before the connection to the next VM is created
* for the ``pvpv`` (parallel) scenario, every NIC pair is directly connected
  to the physical ports and a unique traffic stream is assigned to it
Examples:

* Deployment ``pvvp10`` will start 10 VMs and connect them in series
* Deployment ``pvpv4`` will start 4 VMs and connect them in parallel
* Deployment ``pvpv1`` and ``GUEST_NICS_NR = [4]`` will start 1 VM with
  4 interfaces, and every NIC pair is directly connected to the
  physical ports
* Deployment ``pvvp`` and ``GUEST_NICS_NR = [2, 4]`` will start 2 VMs;
  the 1st VM will have 2 interfaces and the 2nd VM 4 interfaces. These
  interfaces will be connected in series, i.e. traffic will flow as follows:
  PHY1 -> VM1_1 -> VM1_2 -> VM2_1 -> VM2_2 -> VM2_3 -> VM2_4 -> PHY2
Note: If only 1 NIC or more than 2 NICs are configured for a VM, then
``testpmd`` should be used as the forwarding application inside the VM,
as it is able to forward traffic between multiple VM NIC pairs.

Note: In the case of ``linux_bridge``, all NICs are connected to the same
bridge inside the VM.
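The naming scheme above can be sketched as a small parser;
``parse_deployment`` is illustrative, not part of VSPERF itself:

```python
# A hedged sketch of decomposing a PXP deployment name into the connection
# type and the number of VMs, including the backward-compatible pvp alias.
import re

def parse_deployment(name):
    if name == 'pvp':                 # alias for pvvp1, i.e. a single VM
        return 'pvvp', 1
    match = re.match(r'^(pvvp|pvpv)(\d*)$', name)
    kind, count = match.groups()
    return kind, int(count) if count else 2   # default is 2 VMs

print(parse_deployment('pvvp10'))   # ('pvvp', 10)
print(parse_deployment('pvpv'))     # ('pvpv', 2)
```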
VM, vSwitch, Traffic Generator Independence
===========================================
VSPERF supports different vSwitches, Traffic Generators, VNFs and Forwarding
Applications by using standard object-oriented polymorphism:

* Support for vSwitches is implemented by a class inheriting from IVSwitch.
* Support for Traffic Generators is implemented by a class inheriting from
  ITrafficGenerator.
* Support for VNFs is implemented by a class inheriting from IVNF.
* Support for Forwarding Applications is implemented by a class inheriting
  from IPktFwd.
By dealing only with these abstract interfaces, the core framework can support
many implementations of different vSwitches, Traffic Generators, VNFs and
Forwarding Applications.
.. code-block:: python

   class IVSwitch:
     add_switch(switch_name)
     del_switch(switch_name)
     add_phy_port(switch_name)
     add_vport(switch_name)
     get_ports(switch_name)
     del_port(switch_name, port_name)
     add_flow(switch_name, flow)
     del_flow(switch_name, flow=None)
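A controller driving these primitives can be sketched with a stub; the
``StubSwitch`` class below only mimics the interface for illustration and is
not a real VSPERF vSwitch implementation:

```python
# A minimal stub showing how a controller might use the IVSwitch
# primitives to build a Phy-to-Phy configuration.
class StubSwitch:
    def __init__(self):
        self.ports = []
        self.flows = []

    def add_switch(self, switch_name):
        self.name = switch_name

    def add_phy_port(self, switch_name):
        # return (port_name, port_number), as an implementation might
        number = len(self.ports) + 1
        port = ('port%d' % number, number)
        self.ports.append(port)
        return port

    def add_flow(self, switch_name, flow):
        self.flows.append(flow)

sw = StubSwitch()
sw.add_switch('br0')
(_, p1), (_, p2) = sw.add_phy_port('br0'), sw.add_phy_port('br0')
# forward traffic between the two physical ports in both directions
sw.add_flow('br0', {'in_port': p1, 'actions': ['output:%d' % p2]})
sw.add_flow('br0', {'in_port': p2, 'actions': ['output:%d' % p1]})
print(len(sw.flows))  # 2
```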
.. code-block:: python

   class ITrafficGenerator:
     send_burst_traffic(traffic, numpkts, time, framerate)

     send_cont_traffic(traffic, time, framerate)
     start_cont_traffic(traffic, time, framerate)
     stop_cont_traffic()

     send_rfc2544_throughput(traffic, tests, duration, lossrate)
     start_rfc2544_throughput(traffic, tests, duration, lossrate)
     wait_rfc2544_throughput()

     send_rfc2544_back2back(traffic, tests, duration, lossrate)
     start_rfc2544_back2back(traffic, tests, duration, lossrate)
     wait_rfc2544_back2back()
Note: ``send_xxx()`` blocks, whereas ``start_xxx()`` does not and must be
followed by a subsequent call to ``wait_xxx()``.
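The calling convention can be sketched as follows; ``ExampleGen`` is a
hypothetical stand-in, not a real VSPERF traffic generator:

```python
# A minimal sketch: send_xxx() blocks and returns results, start_xxx()
# returns immediately, and wait_xxx() collects the results later.
class ExampleGen:
    def start_rfc2544_throughput(self, traffic, tests, duration, lossrate):
        # non-blocking: kick off the trial and return immediately
        self._pending = {'tests': tests, 'duration': duration}

    def wait_rfc2544_throughput(self):
        # block until the trial completes, then hand back the results
        results = self._pending
        self._pending = None
        return results

    def send_rfc2544_throughput(self, traffic, tests, duration, lossrate):
        # blocking variant: equivalent to start followed by wait
        self.start_rfc2544_throughput(traffic, tests, duration, lossrate)
        return self.wait_rfc2544_throughput()

gen = ExampleGen()
res = gen.send_rfc2544_throughput({'l2': {'framesize': 64}}, 1, 10, 0.0)
print(res)  # {'tests': 1, 'duration': 10}
```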
.. code-block:: python

   class IVnf:
     start(memory, cpus,
           monitor_path, shared_path_host,
           shared_path_guest, guest_prompt)
     stop()
     execute(command)
     execute_and_wait(command)
.. code-block:: python

   class IPktFwd:
     start()
     stop()
Controllers are used in conjunction with abstract interfaces as a way of
decoupling the control of vSwitches, VNFs, Traffic Generators and Forwarding
Applications from other components.
The controlled classes provide basic primitive operations. The Controllers
sequence and co-ordinate these primitive operations into useful actions. For
instance, the vswitch_controller_p2p can be used to bring any vSwitch (that
implements the primitives defined in IVSwitch) into the configuration required
by the Phy-to-Phy deployment scenario.
In order to support a new vSwitch, only a new implementation of IVSwitch needs
to be created for the new vSwitch to be capable of fulfilling all the
deployment scenarios provided for by existing or future vSwitch Controllers.
Similarly, if a new deployment scenario is required, it only needs to be
written once as a new vSwitch Controller, and it will immediately be capable
of bringing all existing and future vSwitches into that deployment scenario.
Similarly, the Traffic Controllers can be used to co-ordinate the basic
operations provided by implementers of ITrafficGenerator into useful tests.
Traffic generators generally already implement full test cases, i.e. they both
generate suitable traffic and analyse the returned traffic in order to
implement a test that has typically been predefined in an RFC document.
However, the Traffic Controller class allows for the possibility of further
enhancement - such as iterating over tests for various packet sizes or
creating new tests.
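The packet-size iteration mentioned above can be sketched as below; the
controller function and the stub generator are illustrative names, not VSPERF
internals:

```python
# A hedged sketch of a Traffic Controller enhancement: running one RFC2544
# throughput test once per configured packet size.
def run_over_packet_sizes(generator, base_traffic, pkt_sizes,
                          tests=1, duration=10, lossrate=0.0):
    results = []
    for size in pkt_sizes:
        traffic = dict(base_traffic, l2={'framesize': size})
        results.append(generator.send_rfc2544_throughput(
            traffic, tests, duration, lossrate))
    return results

class FakeGen:
    """Stand-in generator: just echoes the frame size it was asked to send."""
    def send_rfc2544_throughput(self, traffic, tests, duration, lossrate):
        return traffic['l2']['framesize']

print(run_over_packet_sizes(FakeGen(), {}, [64, 128, 512]))  # [64, 128, 512]
```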
Traffic Controller's Role
-------------------------

.. image:: traffic_controller.png
Loader & Component Factory
--------------------------

The working of the Loader package (which is responsible for *finding*
arbitrary classes based on configuration data) and of the Component Factory
(which is responsible for *choosing* the correct class for a particular
situation, e.g. a deployment scenario) can be seen in this diagram.

.. image:: factory_and_loader.png
Routing Tables
==============

Vsperf uses a standard set of routing tables in order to allow tests to easily
mix and match deployment scenarios (PVP, P2P topology), tuple matching and
frame modification requirements.
.. code-block:: console

   +--------------+
   |              |
   | Table 0      |  table#0 - Match table. Flows designed to force 5 & 10
   |              |  tuple matches go here.
   +--------------+
          |
          v
   +--------------+  table#1 - Routing table. Flow entries to forward
   |              |  packets between ports go here.
   | Table 1      |  The chosen port is communicated to subsequent tables by
   |              |  setting the metadata value to the egress port number.
   |              |  Generally this table is set up by the
   +--------------+  vSwitchController.
          |
          v
   +--------------+  table#2 - Frame modification table. Frame modification
   |              |  flow rules are isolated in this table so that they can
   | Table 2      |  be turned on or off without affecting the routing or
   |              |  tuple-matching flow rules. This allows the frame
   |              |  modification and tuple matching required by the tests
   |              |  in the VSWITCH PERFORMANCE FOR TELCO NFV test
   +--------------+  specification to be independent of the Deployment
          |          Scenario set up by the vSwitchController.
          v
   +--------------+
   | Table 3      |  table#3 - Egress table. Egress packets on the ports
   |              |  set up in Table 1.
   +--------------+
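The four tables can be illustrated as flow entries, one per table, written as
the dictionaries an IVSwitch implementation might accept. The match fields and
action names follow OpenFlow conventions and are assumptions here, not VSPERF
defaults:

```python
# Hypothetical flow entries sketching the table pipeline described above.
flows = [
    # table#0: force a 5-tuple match, then continue to the routing table
    {'table': '0', 'dl_type': '0x0800', 'nw_proto': '17',
     'nw_dst': '90.90.90.90', 'actions': ['goto_table:1']},
    # table#1: choose the egress port and record it in the metadata field
    {'table': '1', 'actions': ['write_metadata:2', 'goto_table:2']},
    # table#2: frame-modification rules live here; pass through when off
    {'table': '2', 'actions': ['goto_table:3']},
    # table#3: emit the packet on the port chosen in table#1
    {'table': '3', 'metadata': '2', 'actions': ['output:2']},
]
print([f['table'] for f in flows])  # ['0', '1', '2', '3']
```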