.. This work is licensed under a Creative Commons Attribution 4.0 International License.
.. http://creativecommons.org/licenses/by/4.0
.. (c) OPNFV, Intel Corporation, AT&T and others.

vSwitchPerf test suites userguide
---------------------------------
VSPERF requires a traffic generator to run tests; automated traffic generator
support in VSPERF includes:

- IXIA traffic generator (IxNetwork hardware) and a machine that runs the IXIA
  client software.
- Spirent traffic generator (TestCenter hardware chassis or TestCenter virtual
  in a VM) and a VM to run the Spirent Virtual Deployment Service image,
  formerly known as "Spirent LabServer".
- Xena Network traffic generator (Xena hardware chassis) that houses the Xena
  Traffic generator modules.
- Moongen software traffic generator. Requires a separate machine running
  moongen to execute packet generation.
- T-Rex software traffic generator. Requires a separate machine running T-Rex
  Server to execute packet generation.

If you want to use another traffic generator, please select the
:ref:`trafficgen-dummy` generator.
To see the supported Operating Systems, vSwitches and system requirements,
please follow the :ref:`installation instructions <vsperf-installation>`.

Traffic Generator Setup
^^^^^^^^^^^^^^^^^^^^^^^

Follow the :ref:`Traffic generator instructions <trafficgen-installation>` to
install and configure a suitable traffic generator.
Cloning and building src dependencies
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

In order to run VSPERF, you will need to download DPDK and OVS. You can
do this manually and build them in a preferred location, OR you could
use vswitchperf/src. The vswitchperf/src directory contains makefiles
that will allow you to clone and build the libraries that VSPERF depends
on, such as DPDK and OVS. To clone and build simply:

.. code-block:: console

    $ cd src
    $ make

VSPERF can be used with stock OVS (without DPDK support). When the build
is finished, the libraries are stored in the src_vanilla directory.

The 'make' builds all options in src:

* Vanilla OVS
* OVS with vhost_user as the guest access method (with DPDK support)

The vhost_user build will reside in src/ovs/
The Vanilla OVS build will reside in vswitchperf/src_vanilla

To delete a src subdirectory and its contents to allow you to re-clone, simply
use:

.. code-block:: console

    $ make clobber
Configure the ``./conf/10_custom.conf`` file
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The ``10_custom.conf`` file is the configuration file that overrides
default configurations in all the other configuration files in ``./conf``.
The supplied ``10_custom.conf`` file **MUST** be modified, as it contains
configuration items for which there are no reasonable default values.

The configuration items that can be added are not limited to the initial
contents. Any configuration item mentioned in any .conf file in the
``./conf`` directory can be added, and that item will be overridden by
the custom configuration value.

Further details about configuration file evaluation and the special behaviour
of options with the ``GUEST_`` prefix can be found in the :ref:`design document
<design-configuration>`.
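As an illustration, a minimal override in ``10_custom.conf`` might look like the
following sketch. The values shown are placeholders chosen for the example, not
recommended settings:

```python
# Hypothetical 10_custom.conf fragment -- values are illustrative only.
# Any item named in a ./conf/*.conf file can be overridden here and the
# custom value takes precedence over the default.
TRAFFICGEN = 'TestCenter'                          # traffic generator class
TRAFFICGEN_DURATION = 30                           # seconds per trial
TRAFFICGEN_PKT_SIZES = (64, 128, 512, 1024, 1518)  # frame sizes in bytes
```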
Using a custom settings file
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

If your ``10_custom.conf`` doesn't reside in the ``./conf`` directory,
or if you want to use an alternative configuration file, the file can
be passed to ``vsperf`` via the ``--conf-file`` argument.

.. code-block:: console

    $ ./vsperf --conf-file <path_to_custom_conf> ...

Note that configuration passed in via the environment (``--load-env``)
or via another command line argument will override both the default and
your custom configuration files. This "priority hierarchy" can be
described like so (1 = max priority):
1. Testcase definition section ``Parameters``
2. Command line arguments
3. Environment variables
4. Configuration file(s)
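The hierarchy above amounts to an ordered lookup, which can be sketched with a
few lines of illustrative python (not VSPERF code):

```python
# Illustrative sketch of the priority hierarchy -- not VSPERF code.
# Each layer is a dict; the first layer that defines the item wins.
def lookup(name, parameters, cli_args, environment, conf_files):
    """Return the highest-priority value defined for 'name'."""
    for layer in (parameters, cli_args, environment, conf_files):
        if name in layer:
            return layer[name]
    raise KeyError(name)

conf_files = {'TRAFFICGEN_DURATION': 30}
cli_args = {'TRAFFICGEN_DURATION': 10}

# The command line value (10) wins over the configuration file (30).
print(lookup('TRAFFICGEN_DURATION', {}, cli_args, {}, conf_files))  # 10
```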
.. _overriding-parameters-documentation:

Referencing parameter values
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

It is possible to use a special macro ``#PARAM()`` to refer to the value of
another configuration parameter. This reference is evaluated during
access of the parameter value (by a ``settings.getValue()`` call), so it
can refer to parameters created during VSPERF runtime, e.g. the NICS dictionary.
It can be used to reflect DUT HW details in the testcase definition.

Example:

.. code-block:: python

    "Parameters" : {
        'TRAFFIC' : {
            'l2': {
                # set destination MAC to the MAC of the first
                # interface from WHITELIST_NICS list
                'dstmac' : '#PARAM(NICS[0]["mac"])',
            },
        },
    }
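The idea behind ``#PARAM()`` can be pictured with a toy resolver (illustrative
only, not the VSPERF implementation): because references are expanded when the
value is read, they can point at parameters that only exist at runtime.

```python
import re

# Toy #PARAM() resolver -- illustrative only, not the VSPERF implementation.
# It only handles simple expressions without nested parentheses.
def resolve(value, settings):
    """Expand #PARAM(expr) references against a settings dictionary."""
    return re.sub(r'#PARAM\(([^)]+)\)',
                  lambda m: str(eval(m.group(1), {}, settings)),
                  value)

# NICS is only populated at runtime; the reference still resolves late.
settings = {'NICS': [{'mac': 'aa:bb:cc:dd:ee:ff'}]}
print(resolve('#PARAM(NICS[0]["mac"])', settings))  # aa:bb:cc:dd:ee:ff
```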
Overriding values defined in configuration files
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The configuration items can be overridden by the command line argument
``--test-params``. In this case, the configuration items and
their values should be passed in the form of ``item=value`` pairs separated
by semicolons.

Example:

.. code-block:: console

    $ ./vsperf --test-params "TRAFFICGEN_DURATION=10;TRAFFICGEN_PKT_SIZES=(128,);" \
                             "GUEST_LOOPBACK=['testpmd','l2fwd']" pvvp_tput
The ``--test-params`` command line argument can also be used to override default
configuration values for multiple tests. Providing a list of parameters will apply each
element of the list to the test with the same index. If more tests are run than
parameters provided, the last element of the list will repeat.
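The index mapping described above can be sketched as follows (illustrative
python, not VSPERF code):

```python
# Sketch of the parameter-list-to-test mapping -- not VSPERF code.
def params_for_test(param_list, test_index):
    """Test i takes element i; past the end, the last element repeats."""
    return param_list[min(test_index, len(param_list) - 1)]

params = ['TRAFFICGEN_PKT_SIZES=(128,)', 'TRAFFICGEN_PKT_SIZES=(64,)']
print(params_for_test(params, 0))  # TRAFFICGEN_PKT_SIZES=(128,)
print(params_for_test(params, 1))  # TRAFFICGEN_PKT_SIZES=(64,)
print(params_for_test(params, 5))  # TRAFFICGEN_PKT_SIZES=(64,) (repeats)
```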
.. code-block:: console

    $ ./vsperf --test-params "['TRAFFICGEN_DURATION=10;TRAFFICGEN_PKT_SIZES=(128,)',"
                             "'TRAFFICGEN_DURATION=10;TRAFFICGEN_PKT_SIZES=(64,)']" \
               phy2phy_cont phy2phy_cont
The second option is to override configuration items via the ``Parameters`` section
of the test case definition. The configuration items can be added into the ``Parameters``
dictionary with their new values. These values will override values defined in
configuration files or specified by the ``--test-params`` command line argument.

Example:

.. code-block:: python

    "Parameters" : {'TRAFFICGEN_PKT_SIZES' : (128,),
                    'TRAFFICGEN_DURATION' : 10,
                    'GUEST_LOOPBACK' : ['testpmd','l2fwd'],
                   },
**NOTE:** In both cases, configuration item names and their values must be specified
in the same form as they are defined inside the configuration files. Parameter names
must be specified in uppercase, and the data types of the original and new value must match.
Python syntax rules related to data types and structures must be followed.
For example, parameter ``TRAFFICGEN_PKT_SIZES`` above is defined as a tuple
with a single value ``128``. In this case the trailing comma is mandatory; otherwise the
value could be wrongly interpreted as a number instead of a tuple and vsperf
execution would fail. Please check the configuration files for default values and their
types and use them as a basis for any customized values. In case of any doubt, please
check the official python documentation related to data structures like tuples, lists
and dictionaries.
**NOTE:** Vsperf execution will terminate with a runtime error in case an unknown
parameter name is passed via the ``--test-params`` CLI argument or defined in the
``Parameters`` section of a test case definition. It is also forbidden to redefine the
value of the ``TEST_PARAMS`` configuration item via CLI or the ``Parameters`` section.
vloop_vnf
^^^^^^^^^

VSPERF uses a VM image called vloop_vnf for looping traffic in the deployment
scenarios involving VMs. The image can be downloaded from
`<http://artifacts.opnfv.org/>`__.

Please see the installation instructions for information on :ref:`vloop-vnf`
images.

l2fwd Kernel Module
^^^^^^^^^^^^^^^^^^^

A Kernel Module that provides OSI Layer 2 IPv4 termination or forwarding with
support for Destination Network Address Translation (DNAT) for both the MAC and
IP addresses. l2fwd can be found in <vswitchperf_dir>/src/l2fwd
Executing tests
^^^^^^^^^^^^^^^

All examples inside these docs assume that the user is inside the VSPERF
directory; VSPERF can be executed from any directory.

Before running any tests make sure you have root permissions by adding
the following line to /etc/sudoers:

.. code-block:: console

    username ALL=(ALL) NOPASSWD: ALL

username in the example above should be replaced with a real username.

To list the available tests:

.. code-block:: console

    $ ./vsperf --list

To run a single test:

.. code-block:: console

    $ ./vsperf $TESTNAME

Where $TESTNAME is the name of the vsperf test you would like to run.

To run a test multiple times, repeat it:

.. code-block:: console

    $ ./vsperf $TESTNAME $TESTNAME $TESTNAME
To run a group of tests, for example all tests with a name containing
'RFC2544':

.. code-block:: console

    $ ./vsperf --conf-file=<path_to_custom_conf>/10_custom.conf --tests="RFC2544"

To run all tests:

.. code-block:: console

    $ ./vsperf --conf-file=<path_to_custom_conf>/10_custom.conf
Some tests allow for configurable parameters, including test duration
(in seconds) as well as packet sizes (in bytes).

.. code-block:: console

    $ ./vsperf --conf-file user_settings.py \
        --tests RFC2544Tput \
        --test-params "TRAFFICGEN_DURATION=10;TRAFFICGEN_PKT_SIZES=(128,)"
To specify configurable parameters for multiple tests, use a list of
parameters, one element for each test.

.. code-block:: console

    $ ./vsperf --conf-file user_settings.py \
        --test-params "['TRAFFICGEN_DURATION=10;TRAFFICGEN_PKT_SIZES=(128,)',"\
                      "'TRAFFICGEN_DURATION=10;TRAFFICGEN_PKT_SIZES=(64,)']" \
        phy2phy_cont phy2phy_cont
If the ``CUMULATIVE_PARAMS`` setting is set to True and there are different parameters
provided for each test using ``--test-params``, each test will take the parameters of
the previous test before applying its own.
With ``CUMULATIVE_PARAMS`` set to True, the following command will be equivalent to the
previous example:

.. code-block:: console

    $ ./vsperf --conf-file user_settings.py \
        --test-params "['TRAFFICGEN_DURATION=10;TRAFFICGEN_PKT_SIZES=(128,)',"\
                      "'TRAFFICGEN_PKT_SIZES=(64,)']" \
        phy2phy_cont phy2phy_cont
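The cumulative behaviour amounts to carrying the merged parameters forward from
test to test, which can be sketched in illustrative python (not VSPERF code):

```python
# Sketch of CUMULATIVE_PARAMS=True -- not VSPERF code.
def cumulative_params(per_test_params):
    """Each test inherits the merged parameters of all previous tests."""
    merged, result = {}, []
    for params in per_test_params:
        merged.update(params)
        result.append(dict(merged))
    return result

tests = cumulative_params([
    {'TRAFFICGEN_DURATION': 10, 'TRAFFICGEN_PKT_SIZES': (128,)},
    {'TRAFFICGEN_PKT_SIZES': (64,)},   # inherits TRAFFICGEN_DURATION=10
])
print(tests[1])  # {'TRAFFICGEN_DURATION': 10, 'TRAFFICGEN_PKT_SIZES': (64,)}
```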
For all available options, check out the help dialog:

.. code-block:: console

    $ ./vsperf --help
Executing Vanilla OVS tests
^^^^^^^^^^^^^^^^^^^^^^^^^^^

1. If needed, recompile src for all OVS variants

.. code-block:: console

    $ cd src
    $ make distclean
    $ make

2. Update your ``10_custom.conf`` file to use Vanilla OVS:

.. code-block:: python

    VSWITCH = 'OvsVanilla'

3. Run test:

.. code-block:: console

    $ ./vsperf --conf-file=<path_to_custom_conf>

Please note if you don't want to configure Vanilla OVS through the
configuration file, you can pass it as a CLI argument.

.. code-block:: console

    $ ./vsperf --vswitch OvsVanilla
Executing tests with VMs
^^^^^^^^^^^^^^^^^^^^^^^^

To run tests using vhost-user as guest access method:

1. Set VSWITCH and VNF of your settings file to:

.. code-block:: python

    VSWITCH = 'OvsDpdkVhost'
    VNF = 'QemuDpdkVhost'

2. If needed, recompile src for all OVS variants

.. code-block:: console

    $ cd src
    $ make distclean
    $ make

3. Run test:

.. code-block:: console

    $ ./vsperf --conf-file=<path_to_custom_conf>/10_custom.conf

**NOTE:** By default the vSwitch acts as a server for dpdk vhost-user sockets.
If QEMU should be the server for vhost-user sockets instead, the parameter
``VSWITCH_VHOSTUSER_SERVER_MODE`` should be set to ``False``.
Executing tests with VMs using Vanilla OVS
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

To run tests using Vanilla OVS:

1. Set the following variables:

.. code-block:: python

    VSWITCH = 'OvsVanilla'
    VNF = 'QemuVirtioNet'

    VANILLA_TGEN_PORT1_IP = n.n.n.n
    VANILLA_TGEN_PORT1_MAC = nn:nn:nn:nn:nn:nn

    VANILLA_TGEN_PORT2_IP = n.n.n.n
    VANILLA_TGEN_PORT2_MAC = nn:nn:nn:nn:nn:nn

    VANILLA_BRIDGE_IP = n.n.n.n

or use the ``--test-params`` option

.. code-block:: console

    $ ./vsperf --conf-file=<path_to_custom_conf>/10_custom.conf \
               --test-params "VANILLA_TGEN_PORT1_IP=n.n.n.n;" \
                             "VANILLA_TGEN_PORT1_MAC=nn:nn:nn:nn:nn:nn;" \
                             "VANILLA_TGEN_PORT2_IP=n.n.n.n;" \
                             "VANILLA_TGEN_PORT2_MAC=nn:nn:nn:nn:nn:nn"

2. If needed, recompile src for all OVS variants

.. code-block:: console

    $ cd src
    $ make distclean
    $ make

3. Run test:

.. code-block:: console

    $ ./vsperf --conf-file=<path_to_custom_conf>/10_custom.conf
Executing VPP tests
^^^^^^^^^^^^^^^^^^^

Currently it is not possible to use standard scenario deployments for execution of
tests with VPP. This means that deployments ``p2p``, ``pvp``, ``pvvp`` and in general any
:ref:`pxp-deployment` won't work with VPP. However it is possible to use VPP in
:ref:`step-driven-tests`. A basic set of VPP testcases covering ``phy2phy``, ``pvp``
and ``pvvp`` tests is already prepared.
List of performance tests with VPP support follows:

* phy2phy_tput_vpp:      VPP: LTD.Throughput.RFC2544.PacketLossRatio
* phy2phy_cont_vpp:      VPP: Phy2Phy Continuous Stream
* phy2phy_back2back_vpp: VPP: LTD.Throughput.RFC2544.BackToBackFrames
* pvp_tput_vpp:          VPP: LTD.Throughput.RFC2544.PacketLossRatio
* pvp_cont_vpp:          VPP: PVP Continuous Stream
* pvp_back2back_vpp:     VPP: LTD.Throughput.RFC2544.BackToBackFrames
* pvvp_tput_vpp:         VPP: LTD.Throughput.RFC2544.PacketLossRatio
* pvvp_cont_vpp:         VPP: PVVP Continuous Stream
* pvvp_back2back_vpp:    VPP: LTD.Throughput.RFC2544.BackToBackFrames
In order to execute testcases with VPP it is required to:

* install VPP manually, see :ref:`vpp-installation`
* configure ``WHITELIST_NICS`` with two physical NICs connected to the traffic generator
* configure the traffic generator, see :ref:`trafficgen-installation`

After that it is possible to execute the VPP testcases listed above.

For example:

.. code-block:: console

    $ ./vsperf --conf-file=<path_to_custom_conf> phy2phy_tput_vpp
.. _vfio-pci:

Using vfio_pci with DPDK
^^^^^^^^^^^^^^^^^^^^^^^^

To use vfio with DPDK instead of igb_uio, add the following parameter to your
custom configuration file:

.. code-block:: python

    PATHS['dpdk']['src']['modules'] = ['uio', 'vfio-pci']

**NOTE:** In case DPDK is installed from a binary package, please
set ``PATHS['dpdk']['bin']['modules']`` instead.

**NOTE:** Please ensure that Intel VT-d is enabled in BIOS.

**NOTE:** Please ensure your boot/grub parameters include the following:

.. code-block:: console

    iommu=pt intel_iommu=on

**NOTE:** In case of VPP, it is required to explicitly define that the vfio-pci
DPDK driver should be used. This means updating the dpdk part of the
``VSWITCH_VPP_ARGS`` dictionary with a uio-driver section, e.g.
``VSWITCH_VPP_ARGS['dpdk'] = 'uio-driver vfio-pci'``
To check that IOMMU is enabled on your platform:

.. code-block:: console

    $ dmesg | grep IOMMU
    [    0.000000] Intel-IOMMU: enabled
    [    0.139882] dmar: IOMMU 0: reg_base_addr fbffe000 ver 1:0 cap d2078c106f0466 ecap f020de
    [    0.139888] dmar: IOMMU 1: reg_base_addr ebffc000 ver 1:0 cap d2078c106f0466 ecap f020de
    [    0.139893] IOAPIC id 2 under DRHD base  0xfbffe000 IOMMU 0
    [    0.139894] IOAPIC id 0 under DRHD base  0xebffc000 IOMMU 1
    [    0.139895] IOAPIC id 1 under DRHD base  0xebffc000 IOMMU 1
    [    3.335744] IOMMU: dmar0 using Queued invalidation
    [    3.335746] IOMMU: dmar1 using Queued invalidation
.. _SRIOV-support:

Using SRIOV support
^^^^^^^^^^^^^^^^^^^

To use virtual functions of a NIC with SRIOV support, use the extended form
of the NIC PCI slot definition:

.. code-block:: python

    WHITELIST_NICS = ['0000:05:00.0|vf0', '0000:05:00.1|vf3']

Where 'vf' is an indication of virtual function usage and the following
number defines the VF to be used. In case VF usage is detected,
vswitchperf will enable SRIOV support for the given card and it will
detect the PCI slot numbers of the selected VFs.

So in the example above, one VF will be configured for NIC '0000:05:00.0'
and four VFs will be configured for NIC '0000:05:00.1'. Vswitchperf
will detect the PCI addresses of the selected VFs and it will use them during
testing.

At the end of vswitchperf execution, SRIOV support will be disabled.
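The extended slot syntax can be read as follows (illustrative parser, not VSPERF
code): selecting ``vfN`` implies that ``N+1`` virtual functions must be enabled
on the card, since VF indices start at 0.

```python
# Illustrative parser for '<pci_address>|vf<index>' -- not VSPERF code.
def parse_nic(spec):
    """Split a WHITELIST_NICS entry into PCI address and VF selection."""
    if '|vf' in spec:
        pci, vf_index = spec.split('|vf')
        return {'pci': pci, 'vf': int(vf_index),
                'vfs_to_enable': int(vf_index) + 1}
    return {'pci': spec, 'vf': None, 'vfs_to_enable': 0}

print(parse_nic('0000:05:00.0|vf0'))
# {'pci': '0000:05:00.0', 'vf': 0, 'vfs_to_enable': 1}
print(parse_nic('0000:05:00.1|vf3'))
# {'pci': '0000:05:00.1', 'vf': 3, 'vfs_to_enable': 4}
```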
SRIOV support is generic and it can be used in different testing scenarios.
For example:

* vSwitch tests with or without DPDK support to verify the impact
  of VF usage on vSwitch performance
* tests without a vSwitch, where traffic is forwarded directly
  between VF interfaces by a packet forwarder (e.g. the testpmd application)
* tests without a vSwitch, where a VM accesses VF interfaces directly
  by PCI-passthrough_ to measure raw VM throughput performance.
.. _PCI-passthrough:

Using QEMU with PCI passthrough support
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Raw virtual machine throughput performance can be measured by execution of a PVP
test with direct access to NICs by PCI pass-through. To execute a VM with direct
access to PCI devices, enable vfio-pci_. In order to use virtual functions,
SRIOV-support_ must be enabled.

Execution of a test with PCI pass-through with the vswitch disabled:

.. code-block:: console

    $ ./vsperf --conf-file=<path_to_custom_conf>/10_custom.conf \
               --vswitch none --vnf QemuPciPassthrough pvp_tput

Any of the supported guest-loopback-application_ options can be used inside a VM
with PCI pass-through support.

Note: Qemu with PCI pass-through support can be used only with the PVP test
deployment.
.. _guest-loopback-application:

Selection of loopback application for tests with VMs
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

To select the loopback applications which will forward packets inside VMs,
the following parameter should be configured:

.. code-block:: python

    GUEST_LOOPBACK = ['testpmd']

or use the ``--test-params`` CLI argument:

.. code-block:: console

    $ ./vsperf --conf-file=<path_to_custom_conf>/10_custom.conf \
               --test-params "GUEST_LOOPBACK=['testpmd']"

Supported loopback applications are:

.. code-block:: console

    'testpmd'      - testpmd from dpdk will be built and used
    'l2fwd'        - l2fwd module provided by Huawei will be built and used
    'linux_bridge' - linux bridge will be configured
    'buildin'      - nothing will be configured by vsperf; VM image must
                     ensure traffic forwarding between its interfaces

A guest loopback application must be configured, otherwise traffic
will not be forwarded by the VM and testcases with VM related deployments
will fail. The guest loopback application is set to 'testpmd' by default.

**NOTE:** In case only 1 NIC or more than 2 NICs are configured for a VM,
'testpmd' should be used, as it is able to forward traffic between
multiple VM NIC pairs.

**NOTE:** In case of linux_bridge, all guest NICs are connected to the same
bridge inside the guest.
Mergeable Buffers Options with QEMU
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Mergeable buffers can be disabled with VSPerf within QEMU. This option can
increase performance significantly when not using jumbo frame sized packets.
By default VSPerf disables mergeable buffers. If you wish to enable them, you
can modify the setting in a custom conf file.

.. code-block:: python

    GUEST_NIC_MERGE_BUFFERS_DISABLE = [False]

Then execute using the custom conf file.

.. code-block:: console

    $ ./vsperf --conf-file=<path_to_custom_conf>/10_custom.conf

Alternatively you can just pass the param during execution.

.. code-block:: console

    $ ./vsperf --test-params "GUEST_NIC_MERGE_BUFFERS_DISABLE=[False]"
Selection of dpdk binding driver for tests with VMs
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

To select the dpdk binding driver, which specifies which driver the VM NICs will
use for dpdk bind, the following configuration parameter should be set:

.. code-block:: python

    GUEST_DPDK_BIND_DRIVER = ['igb_uio_from_src']

The supported dpdk guest bind drivers are:

.. code-block:: console

    'uio_pci_generic'  - Use uio_pci_generic driver
    'igb_uio_from_src' - Build and use the igb_uio driver from the dpdk src
    'vfio_no_iommu'    - Use vfio with no iommu option. This requires custom
                         guest images that support this option. The default
                         vloop image does not support this driver.

Note: uio_pci_generic does not support sr-iov testcases with guests attached.
This is because uio_pci_generic only supports legacy interrupts. In case
uio_pci_generic is selected with the vnf as QemuPciPassthrough, it will be
modified to use igb_uio_from_src instead.

Note: vfio_no_iommu requires kernels equal to or greater than 4.5 and dpdk
16.04 or greater. Using this option will also taint the kernel.

Please refer to the dpdk documents at http://dpdk.org/doc/guides for more
information on these drivers.
Guest Core and Thread Binding
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

VSPERF provides options to achieve better performance by guest core binding and
guest vCPU thread binding as well. Core binding binds all the qemu threads.
Thread binding binds the housekeeping threads to one CPU and the vCPU threads to
other CPUs; this helps to reduce the noise from the qemu housekeeping threads.

.. code-block:: python

    GUEST_CORE_BINDING = [('#EVAL(6+2*#VMINDEX)', '#EVAL(7+2*#VMINDEX)')]

**NOTE** By default GUEST_THREAD_BINDING will be none, which means the same as
GUEST_CORE_BINDING, i.e. the vCPU threads share the physical CPUs with
the housekeeping threads. Better performance using vCPU thread binding can be
achieved by enabling affinity in the custom configuration file.

For example, if an environment requires cores 32 and 33 to be core bound, and
cores 29, 30 and 31 used for guest thread binding, to achieve better performance:

.. code-block:: python

    VNF_AFFINITIZATION_ON = True
    GUEST_CORE_BINDING = [('32','33')]
    GUEST_THREAD_BINDING = [('29', '30', '31')]
QEMU CPU features
^^^^^^^^^^^^^^^^^

QEMU defaults to a compatible subset of performance-enhancing CPU features.
To pass all available host processor features to the guest:

.. code-block:: python

    GUEST_CPU_OPTIONS = ['host,migratable=off']

**NOTE** To enhance performance, CPU features such as the TSC deadline timer for
the guest, the guest PMU and the invariant TSC can be provided in the custom
configuration file.
Multi-Queue Configuration
^^^^^^^^^^^^^^^^^^^^^^^^^

VSPerf currently supports multi-queue with the following limitations:

1. Requires QEMU 2.5 or greater and any OVS version higher than 2.5. The
   default upstream package versions installed by VSPerf satisfy this
   requirement.

2. Guest image must have the ethtool utility installed if using l2fwd or linux
   bridge inside the guest for loopback.

3. If using OVS version 2.5.0 or less, enable old style multi-queue as shown
   in the ``02_vswitch.conf`` file.

.. code-block:: python

    OVS_OLD_STYLE_MQ = True
To enable multi-queue for dpdk, modify the ``02_vswitch.conf`` file.

.. code-block:: python

    VSWITCH_DPDK_MULTI_QUEUES = 2

**NOTE:** You should consider using the switch affinity to set a pmd cpu mask
that can optimize your performance. Consider the NUMA node of the NIC in use, if this
applies, by checking /sys/class/net/<eth_name>/device/numa_node and setting an
appropriate mask to create PMD threads on the same NUMA node.
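As an aid, a CPU mask value for a chosen set of cores (e.g. cores on the same
NUMA node as the NIC) can be computed like this; the helper below is
illustrative and not part of VSPERF:

```python
# Build a hex CPU mask from a list of core IDs: bit N set = core N used.
# Illustrative helper, not part of VSPERF.
def cpu_mask(cores):
    mask = 0
    for core in cores:
        mask |= 1 << core
    return hex(mask)

print(cpu_mask([2, 3]))   # 0xc -> PMD threads on cores 2 and 3
```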
When multi-queue is enabled, each dpdk or dpdkvhostuser port that is created
on the switch will set the option for multiple queues. If old style multi-queue
has been enabled, a global option for multi-queue will be used instead of the
newer per-port option.
To enable multi-queue on the guest, modify the ``04_vnf.conf`` file.

.. code-block:: python

    GUEST_NIC_QUEUES = [2]

Enabling multi-queue at the guest will add multiple queues to each NIC port when
qemu launches the guest.

In case of Vanilla OVS, multi-queue is enabled on the tuntap ports and NIC
queues will be enabled inside the guest with ethtool. Simply enabling
multi-queue on the guest is sufficient for Vanilla OVS multi-queue.

Testpmd should be configured to take advantage of multi-queue on the guest if
using DPDKVhostUser. This can be done by modifying the ``04_vnf.conf`` file.

.. code-block:: python

    GUEST_TESTPMD_PARAMS = ['-l 0,1,2,3,4 -n 4 --socket-mem 512 -- '
                            '--burst=64 -i --txqflags=0xf00 '
                            '--nb-cores=4 --rxq=2 --txq=2 '
                            '--disable-hw-vlan']

**NOTE:** The guest SMP cores must be configured to allow for testpmd to use the
optimal number of cores to take advantage of the multiple guest queues.
In case of using Vanilla OVS and qemu virtio-net, you can increase performance
by binding vhost-net threads to cpus. This can be done by enabling the affinity
in the ``04_vnf.conf`` file. This can be done for non multi-queue enabled
configurations as well, as there will be 2 vhost-net threads.

.. code-block:: python

    VSWITCH_VHOST_NET_AFFINITIZATION = True

    VSWITCH_VHOST_CPU_MAP = [4,5,8,11]

**NOTE:** This method of binding would require a custom script in a real
environment.

**NOTE:** For optimal performance guest SMPs and/or vhost-net threads should be
on the same NUMA node as the NIC in use if possible/applicable. Testpmd should be
assigned at least (nb_cores + 1) total cores with the cpu mask.
Executing tests with Jumbo Frames
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

VSPERF provides options to support jumbo frame testing with a jumbo frame supported
NIC and traffic generator for the following vswitches:

1. OVSVanilla

2. OvsDpdkVhostUser

3. TestPMD loopback with or without a guest

**NOTE:** There is currently no support for SR-IOV or VPP at this time with jumbo
frames.

All packet forwarding applications for pxp testing are supported.
To enable jumbo frame testing, simply enable the option in the conf files and set the
maximum size that will be used.

.. code-block:: python

    VSWITCH_JUMBO_FRAMES_ENABLED = True
    VSWITCH_JUMBO_FRAMES_SIZE = 9000
To enable jumbo frame testing with OVSVanilla, the NIC under test on the host must have
its MTU size changed manually using ifconfig or applicable tools:

.. code-block:: console

    ifconfig eth1 mtu 9000 up

**NOTE:** To make the setting consistent across reboots you should reference the OS
documentation, as it differs from distribution to distribution.
To start a test for jumbo frames, modify the conf file packet sizes or pass the option
through the VSPERF command line.

.. code-block:: python

    TEST_PARAMS = {'TRAFFICGEN_PKT_SIZES':(2000,9000)}

.. code-block:: console

    ./vsperf --test-params "TRAFFICGEN_PKT_SIZES=2000,9000"
It is recommended to increase the memory size for OvsDpdkVhostUser testing from the
default 1024. The size required may vary depending on the number of guests in your
testing. 4096 appears to work well for most typical testing scenarios.

.. code-block:: python

    DPDK_SOCKET_MEM = ['4096', '0']

**NOTE:** For jumbo frames to work with DpdkVhostUser, mergeable buffers will be enabled
by default. If testing with mergeable buffers in QEMU is desired, disable jumbo frames
and only test non jumbo frame sizes. Test jumbo frame sizes separately to avoid this
collision.
Executing Packet Forwarding tests
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

To select the applications which will forward packets,
the following parameters should be configured:

.. code-block:: python

    VSWITCH = 'none'
    PKTFWD = 'TestPMD'

or use ``--vswitch`` and ``--fwdapp`` CLI arguments:

.. code-block:: console

    $ ./vsperf phy2phy_cont --conf-file user_settings.py \
               --vswitch none \
               --fwdapp TestPMD

Supported Packet Forwarding applications are:

.. code-block:: console

    'testpmd' - testpmd from dpdk
1. Update your ``10_custom.conf`` file to use the appropriate variables
for the selected Packet Forwarder:

.. code-block:: python

    # testpmd configuration
    # packet forwarding mode supported by testpmd; Please see DPDK documentation
    # for comprehensive list of modes supported by your version.
    # e.g. io|mac|mac_retry|macswap|flowgen|rxonly|txonly|csum|icmpecho|...
    # Note: Option "mac_retry" has been changed to "mac retry" since DPDK v16.07
    TESTPMD_FWD_MODE = 'csum'
    # checksum calculation layer: ip|udp|tcp|sctp|outer-ip
    TESTPMD_CSUM_LAYER = 'ip'
    # checksum calculation place: hw (hardware) | sw (software)
    TESTPMD_CSUM_CALC = 'sw'
    # recognize tunnel headers: on|off
    TESTPMD_CSUM_PARSE_TUNNEL = 'off'

2. Run test:

.. code-block:: console

    $ ./vsperf phy2phy_tput --conf-file <path_to_settings_py>
Executing Packet Forwarding tests with one guest
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

TestPMD with DPDK 16.11 or greater can be used to forward packets as a switch to a
single guest using the TestPMD vdev option. To set this configuration, the following
parameters should be used.

.. code-block:: python

    VSWITCH = 'none'
    PKTFWD = 'TestPMD'

or use ``--vswitch`` and ``--fwdapp`` CLI arguments:

.. code-block:: console

    $ ./vsperf pvp_tput --conf-file user_settings.py \
               --vswitch none \
               --fwdapp TestPMD

Guest forwarding application only supports TestPMD in this configuration.

.. code-block:: python

    GUEST_LOOPBACK = ['testpmd']

For optimal performance, one cpu per port + 1 should be used for TestPMD. Also set
additional parameters for the packet forwarding application to use the correct
number of nb-cores.

.. code-block:: python

    DPDK_SOCKET_MEM = ['1024', '0']
    VSWITCHD_DPDK_ARGS = ['-l', '46,44,42,40,38', '-n', '4']
    TESTPMD_ARGS = ['--nb-cores=4', '--txq=1', '--rxq=1']
For guest TestPMD, 3 vCPUs should be assigned with the following TestPMD params.

.. code-block:: python

    GUEST_TESTPMD_PARAMS = ['-l 0,1,2 -n 4 --socket-mem 1024 -- '
                            '--burst=64 -i --txqflags=0xf00 '
                            '--disable-hw-vlan --nb-cores=2 --txq=1 --rxq=1']

Execution of TestPMD can be run with the following command line:

.. code-block:: console

    ./vsperf pvp_tput --vswitch=none --fwdapp=TestPMD --conf-file <path_to_settings_py>

**NOTE:** To achieve the best 0% loss numbers with rfc2544 throughput testing, other
tunings should be applied to host and guest, such as tuned profiles and CPU tunings to
prevent possible interrupts to worker threads.
VSPERF modes of operation
^^^^^^^^^^^^^^^^^^^^^^^^^

VSPERF can be run in different modes. By default it will configure the vSwitch,
the traffic generator and the VNF. However, it can be used just for configuration
and execution of the traffic generator. Another option is execution of all
components except the traffic generator itself.

The mode of operation is driven by the configuration parameter -m or --mode:

.. code-block:: console

    -m MODE, --mode MODE  vsperf mode of operation;
        Values:
            "normal" - execute vSwitch, VNF and traffic generator
            "trafficgen" - execute only traffic generator
            "trafficgen-off" - execute vSwitch and VNF
            "trafficgen-pause" - execute vSwitch and VNF but wait before traffic transmission

In case VSPERF is executed in "trafficgen" mode, the configuration
of the traffic generator can be modified through the ``TRAFFIC`` dictionary passed to
the ``--test-params`` option. It is not needed to specify all values of the ``TRAFFIC``
dictionary; it is sufficient to specify only the values which should be changed.
A detailed description of the ``TRAFFIC`` dictionary can be found at
:ref:`configuration-of-traffic-dictionary`.

Example of execution of VSPERF in "trafficgen" mode:

.. code-block:: console

    $ ./vsperf -m trafficgen --trafficgen IxNet --conf-file vsperf.conf \
        --test-params "TRAFFIC={'traffic_type':'rfc2544_continuous','bidir':'False','framerate':60}"
Performance Matrix
^^^^^^^^^^^^^^^^^^

The ``--matrix`` command line argument analyses and displays the performance of
all the tests run. Using the metric specified by ``MATRIX_METRIC`` in the conf-file,
the first test is set as the baseline and all the other tests are compared to it.
The ``MATRIX_METRIC`` must always refer to a numeric value to enable comparison.
A table with the test ID, metric value, the change of the metric in %, testname
and the test parameters used for each test is printed out as well as saved.

Example of 2 tests being compared using Performance Matrix:

.. code-block:: console

    $ ./vsperf --conf-file user_settings.py \
        --test-params "['TRAFFICGEN_PKT_SIZES=(64,)',"\
                      "'TRAFFICGEN_PKT_SIZES=(128,)']" \
        phy2phy_cont phy2phy_cont --matrix
Example output:

.. code-block:: console

    +------+--------------+---------------------+----------+---------------------------------------+
    |  ID  |     Name     |  throughput_rx_fps  |  Change  | Parameters, CUMULATIVE_PARAMS = False |
    +======+==============+=====================+==========+=======================================+
    |  0   | phy2phy_cont |    23749000.000     |    0     | 'TRAFFICGEN_PKT_SIZES': [64]          |
    +------+--------------+---------------------+----------+---------------------------------------+
    |  1   | phy2phy_cont |    16850500.000     | -29.048  | 'TRAFFICGEN_PKT_SIZES': [128]         |
    +------+--------------+---------------------+----------+---------------------------------------+
Code change verification by pylint
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Every developer participating in the VSPERF project should run
pylint before their python code is submitted for review. Project
specific configuration for pylint is available at ``pylintrc``.

Example of manual pylint invocation:

.. code-block:: console

    $ pylint --rcfile ./pylintrc ./vsperf
Custom image fails to boot
~~~~~~~~~~~~~~~~~~~~~~~~~~

Custom VM images may fail to boot within VSPerf pxp testing because of
the configured boot and shared drive types, which could be caused by a missing
scsi driver inside the image. In case of issues you can try changing the drive
types:

.. code-block:: python

    GUEST_BOOT_DRIVE_TYPE = ['ide']
    GUEST_SHARED_DRIVE_TYPE = ['ide']
OVS with DPDK and QEMU
~~~~~~~~~~~~~~~~~~~~~~~

If you encounter the following error: "before (last 100 chars):
'-path=/dev/hugepages,share=on: unable to map backing store for
hugepages: Cannot allocate memory\r\n\r\n" during qemu initialization,
check the amount of hugepages on your system:

.. code-block:: console

    $ cat /proc/meminfo | grep HugePages
By default the vswitchd is launched with 1Gb of memory. To change
this, modify the --socket-mem parameter in conf/02_vswitch.conf to allocate
an appropriate amount of memory:

.. code-block:: python

    DPDK_SOCKET_MEM = ['1024', '0']
    VSWITCHD_DPDK_ARGS = ['-c', '0x4', '-n', '4']
    VSWITCHD_DPDK_CONFIG = {
        'dpdk-init' : 'true',
        'dpdk-lcore-mask' : '0x4',
        'dpdk-socket-mem' : '1024,0',
    }

Note: Option ``VSWITCHD_DPDK_ARGS`` is used for vswitchd, which supports the ``--dpdk``
parameter. In recent vswitchd versions, option ``VSWITCHD_DPDK_CONFIG`` will be
used to configure vswitchd via ``ovs-vsctl`` calls.
For more information and details refer to the rest of the vSwitchPerf user documentation.