.. This work is licensed under a Creative Commons Attribution 4.0 International License.
.. http://creativecommons.org/licenses/by/4.0
.. (c) OPNFV, Intel Corporation, AT&T and others.
vSwitchPerf test suites userguide
---------------------------------
VSPERF requires a traffic generator to run tests; automated traffic generator
support in VSPERF includes:

- IXIA traffic generator (IxNetwork hardware) and a machine that runs the IXIA
  client software.
- Spirent traffic generator (TestCenter hardware chassis or TestCenter virtual
  in a VM) and a VM to run the Spirent Virtual Deployment Service image,
  formerly known as "Spirent LabServer".
- Xena Network traffic generator (Xena hardware chassis) that houses the Xena
  Traffic generator modules.
- Moongen software traffic generator. Requires a separate machine running
  moongen to execute packet generation.

If you want to use another traffic generator, please select the
:ref:`trafficgen-dummy` generator.
To see the supported Operating Systems, vSwitches and system requirements,
please follow the `installation instructions <vsperf-installation>`.
Traffic Generator Setup
^^^^^^^^^^^^^^^^^^^^^^^

Follow the `Traffic generator instructions <trafficgen-installation>` to
install and configure a suitable traffic generator.
Cloning and building src dependencies
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

In order to run VSPERF, you will need to download DPDK and OVS. You can
do this manually and build them in a preferred location, or you can
use vswitchperf/src. The vswitchperf/src directory contains makefiles
that will allow you to clone and build the libraries that VSPERF depends
on, such as DPDK and OVS. To clone and build simply:
.. code-block:: console

    $ cd src
    $ make
VSPERF can be used with stock OVS (without DPDK support). When the build
is finished, the libraries are stored in the src_vanilla directory.

The 'make' builds all options in src:

* Vanilla OVS
* OVS with vhost_user as the guest access method (with DPDK support)

The vhost_user build will reside in src/ovs/
The Vanilla OVS build will reside in vswitchperf/src_vanilla
To delete a src subdirectory and its contents to allow you to re-clone, simply
run:

.. code-block:: console
Configure the ``./conf/10_custom.conf`` file
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The ``10_custom.conf`` file is the configuration file that overrides
default configurations in all the other configuration files in ``./conf``.
The supplied ``10_custom.conf`` file **MUST** be modified, as it contains
configuration items for which there are no reasonable default values.
The configuration items that can be added are not limited to the initial
contents. Any configuration item mentioned in any .conf file in the
``./conf`` directory can be added and that item will be overridden by
the custom configuration value.
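As an illustration, a minimal set of overrides in ``10_custom.conf`` could look
like the sketch below. The item names follow the ones used elsewhere in this
guide, but the values are placeholders and must be adapted to your setup.

.. code-block:: python

    # Illustrative overrides for ./conf/10_custom.conf. Set TRAFFICGEN to the
    # generator class matching your environment (e.g. 'IxNet' for IXIA
    # IxNetwork, or 'Dummy' for the dummy traffic generator).
    TRAFFICGEN = 'Dummy'
    TRAFFICGEN_DURATION = 30      # test duration in seconds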
Further details about configuration file evaluation and the special behaviour
of options with the ``GUEST_`` prefix can be found in the :ref:`design document
<design-configuration>`.
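For example, ``GUEST_`` options are lists evaluated per guest. A hedged sketch
(reusing values that already appear in this guide) for a deployment with two
VMs, where the first VM should run 'testpmd' and the second one 'l2fwd':

.. code-block:: python

    # One list entry per guest; VSPERF expands the list according to the rules
    # described in the design document.
    GUEST_LOOPBACK = ['testpmd', 'l2fwd']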
Using a custom settings file
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

If your ``10_custom.conf`` doesn't reside in the ``./conf`` directory
or if you want to use an alternative configuration file, the file can
be passed to ``vsperf`` via the ``--conf-file`` argument.
.. code-block:: console

    $ ./vsperf --conf-file <path_to_custom_conf> ...
Note that configuration passed in via the environment (``--load-env``)
or via another command line argument will override both the default and
your custom configuration files. This "priority hierarchy" can be
described like so (1 = max priority):

1. Testcase definition section ``Parameters``
2. Command line arguments
3. Environment variables
4. Configuration file(s)
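As a hedged illustration of this hierarchy (the values below are arbitrary
examples, not recommendations):

.. code-block:: python

    # 4. A configuration file (e.g. 10_custom.conf) sets:
    TRAFFICGEN_DURATION = 30
    # 2. Starting vsperf with --test-params "TRAFFICGEN_DURATION=10" overrides
    #    the file value, and
    # 1. a 'TRAFFICGEN_DURATION' entry in a testcase "Parameters" section
    #    overrides both; environment variables loaded via --load-env sit at
    #    priority 3, between the CLI arguments and the configuration files.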
Further details about configuration file evaluation and the special behaviour
of options with the ``GUEST_`` prefix can be found in the :ref:`design document
<design-configuration>`.
.. _overriding-parameters-documentation:

Overriding values defined in configuration files
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The configuration items can be overridden by the command line argument
``--test-params``. In this case, the configuration items and
their values should be passed in form of ``item=value`` and separated
by a semicolon. For example:

.. code-block:: console

    $ ./vsperf --test-params "TRAFFICGEN_DURATION=10;TRAFFICGEN_PKT_SIZES=(128,);" \
               "GUEST_LOOPBACK=['testpmd','l2fwd']" pvvp_tput
The second option is to override configuration items via the ``Parameters`` section
of the test case definition. The configuration items can be added into the
``Parameters`` dictionary with their new values. These values will override values
defined in configuration files or specified by the ``--test-params`` command line
argument. For example:

.. code-block:: python

    "Parameters" : {'TRAFFICGEN_PKT_SIZES' : (128,),
                    'TRAFFICGEN_DURATION' : 10,
                    'GUEST_LOOPBACK' : ['testpmd','l2fwd'],
                   },
**NOTE:** In both cases, configuration item names and their values must be specified
in the same form as they are defined inside the configuration files. Parameter names
must be specified in uppercase and the data types of the original and new value must
match. Python syntax rules related to data types and structures must be followed.
For example, parameter ``TRAFFICGEN_PKT_SIZES`` above is defined as a tuple
with a single value ``128``. In this case the trailing comma is mandatory, otherwise
the value can be wrongly interpreted as a number instead of a tuple and vsperf
execution would fail. Please check the configuration files for default values and
their types and use them as a basis for any customized values. In case of any doubt,
please check the official python documentation related to data structures like
tuples, lists and dictionaries.
**NOTE:** Vsperf execution will terminate with a runtime error in case an unknown
parameter name is passed via the ``--test-params`` CLI argument or defined in the
``Parameters`` section of a test case definition. It is also forbidden to redefine
the value of the ``TEST_PARAMS`` configuration item via the CLI or the ``Parameters``
section.
vloop_vnf
^^^^^^^^^

VSPERF uses a VM image called vloop_vnf for looping traffic in the deployment
scenarios involving VMs. The image can be downloaded from
`<http://artifacts.opnfv.org/>`__.

Please see the installation instructions for information on :ref:`vloop-vnf`
images.
l2fwd Kernel Module
^^^^^^^^^^^^^^^^^^^

A Kernel Module that provides OSI Layer 2 IPv4 termination or forwarding with
support for Destination Network Address Translation (DNAT) for both the MAC and
IP addresses. l2fwd can be found in <vswitchperf_dir>/src/l2fwd
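To have VSPERF build and use this module inside a guest, select it as the
guest loopback application (see guest-loopback-application_ below); a minimal
sketch:

.. code-block:: python

    # Build and use the l2fwd kernel module as the in-guest loopback.
    GUEST_LOOPBACK = ['l2fwd']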
Executing tests
^^^^^^^^^^^^^^^

All examples inside these docs assume that the user is inside the VSPERF
directory. VSPERF can be executed from any directory.

Before running any tests make sure you have root permissions by adding
the following line to /etc/sudoers:

.. code-block:: console

    username ALL=(ALL) NOPASSWD: ALL
The username in the example above should be replaced with a real username.

To list the available tests:

.. code-block:: console

    $ ./vsperf --list

To run a single test:

.. code-block:: console

    $ ./vsperf $TESTNAME

Where $TESTNAME is the name of the vsperf test you would like to run.
To run a group of tests, for example all tests with a name containing
'RFC2544':

.. code-block:: console

    $ ./vsperf --conf-file=<path_to_custom_conf>/10_custom.conf --tests="RFC2544"

To run all tests:

.. code-block:: console

    $ ./vsperf --conf-file=<path_to_custom_conf>/10_custom.conf
Some tests allow for configurable parameters, including test duration
(in seconds) as well as packet sizes (in bytes).

.. code-block:: console

    $ ./vsperf --conf-file user_settings.py \
        --tests RFC2544Tput \
        --test-params "TRAFFICGEN_DURATION=10;TRAFFICGEN_PKT_SIZES=(128,)"

For all available options, check out the help dialog:

.. code-block:: console

    $ ./vsperf --help
Executing Vanilla OVS tests
^^^^^^^^^^^^^^^^^^^^^^^^^^^

1. If needed, recompile src for all OVS variants

   .. code-block:: console

       $ cd src
       $ make

2. Update your ``10_custom.conf`` file to use Vanilla OVS:

   .. code-block:: python

       VSWITCH = 'OvsVanilla'

3. Run test:

   .. code-block:: console

       $ ./vsperf --conf-file=<path_to_custom_conf>
Please note that if you don't want to configure Vanilla OVS through the
configuration file, you can pass it as a CLI argument.

.. code-block:: console

    $ ./vsperf --vswitch OvsVanilla
Executing tests with VMs
^^^^^^^^^^^^^^^^^^^^^^^^

To run tests using vhost-user as the guest access method:

1. Set ``VSWITCH`` and ``VNF`` in your settings file to:

   .. code-block:: python

       VSWITCH = 'OvsDpdkVhost'
       VNF = 'QemuDpdkVhost'

2. If needed, recompile src for all OVS variants

   .. code-block:: console

       $ cd src
       $ make

3. Run test:

   .. code-block:: console

       $ ./vsperf --conf-file=<path_to_custom_conf>/10_custom.conf
Executing tests with VMs using Vanilla OVS
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

To run tests using Vanilla OVS:

1. Set the following variables:

   .. code-block:: python

       VSWITCH = 'OvsVanilla'
       VNF = 'QemuVirtioNet'

       VANILLA_TGEN_PORT1_IP = n.n.n.n
       VANILLA_TGEN_PORT1_MAC = nn:nn:nn:nn:nn:nn

       VANILLA_TGEN_PORT2_IP = n.n.n.n
       VANILLA_TGEN_PORT2_MAC = nn:nn:nn:nn:nn:nn

       VANILLA_BRIDGE_IP = n.n.n.n

   or use the ``--test-params`` option

   .. code-block:: console

       $ ./vsperf --conf-file=<path_to_custom_conf>/10_custom.conf \
                  --test-params "VANILLA_TGEN_PORT1_IP=n.n.n.n;" \
                  "VANILLA_TGEN_PORT1_MAC=nn:nn:nn:nn:nn:nn;" \
                  "VANILLA_TGEN_PORT2_IP=n.n.n.n;" \
                  "VANILLA_TGEN_PORT2_MAC=nn:nn:nn:nn:nn:nn"

2. If needed, recompile src for all OVS variants

   .. code-block:: console

       $ cd src
       $ make

3. Run test:

   .. code-block:: console

       $ ./vsperf --conf-file=<path_to_custom_conf>/10_custom.conf
Executing VPP tests
^^^^^^^^^^^^^^^^^^^

Currently it is not possible to use standard scenario deployments for execution of
tests with VPP. This means that deployments ``p2p``, ``pvp``, ``pvvp`` and in general
any :ref:`pxp-deployment` won't work with VPP. However, it is possible to use VPP in
:ref:`step-driven-tests`. A basic set of VPP testcases covering ``phy2phy``, ``pvp``
and ``pvvp`` tests is already prepared.

The list of performance tests with VPP support follows:

* phy2phy_tput_vpp: VPP: LTD.Throughput.RFC2544.PacketLossRatio
* phy2phy_cont_vpp: VPP: Phy2Phy Continuous Stream
* phy2phy_back2back_vpp: VPP: LTD.Throughput.RFC2544.BackToBackFrames
* pvp_tput_vpp: VPP: LTD.Throughput.RFC2544.PacketLossRatio
* pvp_cont_vpp: VPP: PVP Continuous Stream
* pvp_back2back_vpp: VPP: LTD.Throughput.RFC2544.BackToBackFrames
* pvvp_tput_vpp: VPP: LTD.Throughput.RFC2544.PacketLossRatio
* pvvp_cont_vpp: VPP: PVVP Continuous Stream
* pvvp_back2back_vpp: VPP: LTD.Throughput.RFC2544.BackToBackFrames
In order to execute testcases with VPP it is required to:

* install VPP manually, see :ref:`vpp-installation`
* configure ``WHITELIST_NICS`` with two physical NICs connected to the
  traffic generator (see the sketch below)
* configure the traffic generator, see :ref:`trafficgen-installation`
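A hedged sketch of the ``WHITELIST_NICS`` item for this setup (the PCI
addresses are placeholders for the two NICs wired to the traffic generator):

.. code-block:: python

    # Two physical NICs, identified by PCI address, reserved for VSPERF/VPP.
    WHITELIST_NICS = ['0000:05:00.0', '0000:05:00.1']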
After that it is possible to execute the VPP testcases listed above.

For example:

.. code-block:: console

    $ ./vsperf --conf-file=<path_to_custom_conf> phy2phy_tput_vpp
.. _vfio-pci:

Using vfio_pci with DPDK
^^^^^^^^^^^^^^^^^^^^^^^^

To use vfio with DPDK instead of igb_uio, add the following parameter into your
custom configuration file:

.. code-block:: python

    PATHS['dpdk']['src']['modules'] = ['uio', 'vfio-pci']

**NOTE:** In case DPDK is installed from a binary package, please
set ``PATHS['dpdk']['bin']['modules']`` instead.
**NOTE:** Please ensure that Intel VT-d is enabled in BIOS.

**NOTE:** Please ensure that your boot/grub parameters include
the following:

.. code-block:: console

    iommu=pt intel_iommu=on
To check that IOMMU is enabled on your platform:

.. code-block:: console

    $ dmesg | grep -e IOMMU
    [ 0.000000] Intel-IOMMU: enabled
    [ 0.139882] dmar: IOMMU 0: reg_base_addr fbffe000 ver 1:0 cap d2078c106f0466 ecap f020de
    [ 0.139888] dmar: IOMMU 1: reg_base_addr ebffc000 ver 1:0 cap d2078c106f0466 ecap f020de
    [ 0.139893] IOAPIC id 2 under DRHD base 0xfbffe000 IOMMU 0
    [ 0.139894] IOAPIC id 0 under DRHD base 0xebffc000 IOMMU 1
    [ 0.139895] IOAPIC id 1 under DRHD base 0xebffc000 IOMMU 1
    [ 3.335744] IOMMU: dmar0 using Queued invalidation
    [ 3.335746] IOMMU: dmar1 using Queued invalidation
.. _sriov-support:

Using SRIOV support
^^^^^^^^^^^^^^^^^^^

To use virtual functions of a NIC with SRIOV support, use the extended form
of the NIC PCI slot definition:

.. code-block:: python

    WHITELIST_NICS = ['0000:05:00.0|vf0', '0000:05:00.1|vf3']

Where 'vf' indicates virtual function usage and the following
number defines the VF to be used. In case VF usage is detected,
vswitchperf will enable SRIOV support for the given card and it will
detect the PCI slot numbers of the selected VFs.

So in the example above, one VF will be configured for NIC '0000:05:00.0'
and four VFs will be configured for NIC '0000:05:00.1'. Vswitchperf
will detect the PCI addresses of the selected VFs and it will use them during
testing.

At the end of vswitchperf execution, SRIOV support will be disabled.
SRIOV support is generic and it can be used in different testing scenarios.
For example:

* vSwitch tests with or without DPDK support, to verify the impact
  of VF usage on vSwitch performance
* tests without a vSwitch, where traffic is forwarded directly
  between VF interfaces by a packet forwarder (e.g. the testpmd application),
  as sketched below
* tests without a vSwitch, where a VM accesses VF interfaces directly
  by PCI-passthrough_ to measure raw VM throughput performance.
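A hedged configuration sketch for the second scenario above, combining the VF
notation with the ``VSWITCH``/``PKTFWD`` items used in the "Executing Packet
Forwarding tests" section below (PCI addresses are placeholders):

.. code-block:: python

    # No vSwitch; testpmd on the host forwards traffic between the two VFs.
    VSWITCH = 'none'
    PKTFWD = 'TestPMD'
    WHITELIST_NICS = ['0000:05:00.0|vf0', '0000:05:00.1|vf0']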
.. _pci-passthrough:

Using QEMU with PCI passthrough support
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Raw virtual machine throughput performance can be measured by execution of the PVP
test with direct access to NICs by PCI passthrough. To execute a VM with direct
access to PCI devices, enable vfio-pci_. In order to use virtual functions,
SRIOV-support_ must be enabled.

Execution of the test with PCI passthrough with the vswitch disabled:

.. code-block:: console

    $ ./vsperf --conf-file=<path_to_custom_conf>/10_custom.conf \
               --vswitch none --vnf QemuPciPassthrough pvp_tput

Any of the supported guest-loopback-application_ options can be used inside a VM
with PCI passthrough support.

Note: Qemu with PCI passthrough support can be used only with the PVP test
deployment.
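A hedged sketch of a settings-file equivalent of the command above, using
'testpmd' as the guest loopback application (all item names and values already
appear elsewhere in this guide):

.. code-block:: python

    # PVP test with PCI passthrough: no vSwitch, the guest gets the NICs directly.
    VSWITCH = 'none'
    VNF = 'QemuPciPassthrough'
    GUEST_LOOPBACK = ['testpmd']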
.. _guest-loopback-application:

Selection of loopback application for tests with VMs
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

To select the loopback applications which will forward packets inside VMs,
the following parameter should be configured:

.. code-block:: python

    GUEST_LOOPBACK = ['testpmd']

or use ``--test-params`` CLI argument:

.. code-block:: console

    $ ./vsperf --conf-file=<path_to_custom_conf>/10_custom.conf \
               --test-params "GUEST_LOOPBACK=['testpmd']"
Supported loopback applications are:

.. code-block:: console

    'testpmd'       - testpmd from dpdk will be built and used
    'l2fwd'         - l2fwd module provided by Huawei will be built and used
    'linux_bridge'  - linux bridge will be configured
    'buildin'       - nothing will be configured by vsperf; VM image must
                      ensure traffic forwarding between its interfaces

A guest loopback application must be configured, otherwise traffic
will not be forwarded by the VM and testcases with VM-related deployments
will fail. The guest loopback application is set to 'testpmd' by default.

**NOTE:** In case only 1 or more than 2 NICs are configured for a VM,
then 'testpmd' should be used, as it is able to forward traffic between
multiple VM NIC pairs.

**NOTE:** In case of linux_bridge, all guest NICs are connected to the same
bridge inside the guest.
Mergeable Buffers Options with QEMU
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Mergeable buffers can be disabled with VSPerf within QEMU. This option can
increase performance significantly when not using jumbo frame sized packets.
By default VSPerf disables mergeable buffers. If you wish to enable them you
can modify the setting in a custom conf file.

.. code-block:: python

    GUEST_NIC_MERGE_BUFFERS_DISABLE = [False]

Then execute using the custom conf file.

.. code-block:: console

    $ ./vsperf --conf-file=<path_to_custom_conf>/10_custom.conf

Alternatively you can just pass the param during execution.

.. code-block:: console

    $ ./vsperf --test-params "GUEST_NIC_MERGE_BUFFERS_DISABLE=[False]"
Selection of dpdk binding driver for tests with VMs
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

To select the dpdk binding driver, which specifies which driver the VM NICs will
use for dpdk binding, the following configuration parameter should be configured:

.. code-block:: console

    GUEST_DPDK_BIND_DRIVER = ['igb_uio_from_src']

The supported dpdk guest bind drivers are:

.. code-block:: console

    'uio_pci_generic'   - Use uio_pci_generic driver
    'igb_uio_from_src'  - Build and use the igb_uio driver from the dpdk src
    'vfio_no_iommu'     - Use vfio with no iommu option. This requires custom
                          guest images that support this option. The default
                          vloop image does not support this driver.

Note: uio_pci_generic does not support sr-iov testcases with guests attached.
This is because uio_pci_generic only supports legacy interrupts. In case
uio_pci_generic is selected with the vnf as QemuPciPassthrough it will be
modified to use igb_uio_from_src instead.

Note: vfio_no_iommu requires kernels equal to or greater than 4.5 and dpdk
16.04 or greater. Using this option will also taint the kernel.

Please refer to the dpdk documents at http://dpdk.org/doc/guides for more
information on these drivers.
Multi-Queue Configuration
^^^^^^^^^^^^^^^^^^^^^^^^^

VSPerf currently supports multi-queue with the following limitations:

1. Requires QEMU 2.5 or greater and any OVS version higher than 2.5. The
   default upstream package versions installed by VSPerf satisfy this
   requirement.

2. Guest image must have the ethtool utility installed if using l2fwd or linux
   bridge inside the guest for loopback.

3. If using OVS versions 2.5.0 or less, enable old style multi-queue as shown
   in the ``02_vswitch.conf`` file.

   .. code-block:: python

       OVS_OLD_STYLE_MQ = True
To enable multi-queue for dpdk, modify the ``02_vswitch.conf`` file.

.. code-block:: python

    VSWITCH_DPDK_MULTI_QUEUES = 2

**NOTE:** You should consider using the vswitch affinity to set a PMD CPU mask
that can optimize your performance. Consider the NUMA node of the NIC in use, if
this applies, by checking /sys/class/net/<eth_name>/device/numa_node and setting
an appropriate mask to create PMD threads on the same NUMA node.
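A hedged sketch of such a PMD CPU mask, assuming it is passed through the
``VSWITCHD_DPDK_CONFIG`` dictionary shown later in this guide (the mask value
is illustrative; choose cores on the NIC's NUMA node):

.. code-block:: python

    VSWITCHD_DPDK_CONFIG = {
        'dpdk-init' : 'true',
        'dpdk-socket-mem' : '1024,0',
        'pmd-cpu-mask' : '0x30',   # OVS other_config:pmd-cpu-mask (cores 4 and 5)
    }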
When multi-queue is enabled, each dpdk or dpdkvhostuser port that is created
on the switch will set the option for multiple queues. If old style multi-queue
has been enabled, a global option for multi-queue will be used instead of the
per-port option.

To enable multi-queue on the guest, modify the ``04_vnf.conf`` file.

.. code-block:: python

    GUEST_NIC_QUEUES = [2]

Enabling multi-queue at the guest will add multiple queues to each NIC port when
qemu launches the guest.
In case of Vanilla OVS, multi-queue is enabled on the tuntap ports and NIC
queues will be enabled inside the guest with ethtool. Simply enabling
multi-queue on the guest is sufficient for Vanilla OVS multi-queue.

Testpmd should be configured to take advantage of multi-queue on the guest if
using DPDKVhostUser. This can be done by modifying the ``04_vnf.conf`` file.

.. code-block:: python

    GUEST_TESTPMD_PARAMS = ['-l 0,1,2,3,4 -n 4 --socket-mem 512 -- '
                            '--burst=64 -i --txqflags=0xf00 '
                            '--nb-cores=4 --rxq=2 --txq=2 '
                            '--disable-hw-vlan']

**NOTE:** The guest SMP cores must be configured to allow testpmd to use the
optimal number of cores to take advantage of the multiple guest queues.
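A hedged sketch of a matching guest CPU allocation, assuming the guest vCPU
count is controlled by the ``GUEST_SMP`` configuration item (treat the item
name and value as assumptions and check ``04_vnf.conf`` for the actual knob):

.. code-block:: python

    # Five guest vCPUs so testpmd can use cores 0-4 as in the example above.
    GUEST_SMP = ['5']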
In case of using Vanilla OVS and qemu virtio-net, you can increase performance
by binding vhost-net threads to cpus. This can be done by enabling the affinity
in the ``04_vnf.conf`` file. This can be done for non multi-queue enabled
configurations as well, as there will be 2 vhost-net threads.

.. code-block:: python

    VSWITCH_VHOST_NET_AFFINITIZATION = True

    VSWITCH_VHOST_CPU_MAP = [4,5,8,11]
**NOTE:** This method of binding would require a custom script in a real
environment.

**NOTE:** For optimal performance, guest SMPs and/or vhost-net threads should be
on the same NUMA node as the NIC in use, if possible/applicable. Testpmd should be
assigned at least (nb_cores + 1) total cores with the cpu mask.
Executing Packet Forwarding tests
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

To select the applications which will forward packets,
the following parameters should be configured:

.. code-block:: python

    VSWITCH = 'none'
    PKTFWD = 'TestPMD'

or use the ``--vswitch`` and ``--fwdapp`` CLI arguments:

.. code-block:: console

    $ ./vsperf phy2phy_cont --conf-file user_settings.py \
               --vswitch none \
               --fwdapp TestPMD
Supported Packet Forwarding applications are:

.. code-block:: console

    'testpmd' - testpmd from dpdk

1. Update your ``10_custom.conf`` file to use the appropriate variables
   for the selected Packet Forwarder:

   .. code-block:: python

       # testpmd configuration
       # packet forwarding mode supported by testpmd; Please see DPDK documentation
       # for comprehensive list of modes supported by your version.
       # e.g. io|mac|mac_retry|macswap|flowgen|rxonly|txonly|csum|icmpecho|...
       # Note: Option "mac_retry" has been changed to "mac retry" since DPDK v16.07
       TESTPMD_FWD_MODE = 'csum'
       # checksum calculation layer: ip|udp|tcp|sctp|outer-ip
       TESTPMD_CSUM_LAYER = 'ip'
       # checksum calculation place: hw (hardware) | sw (software)
       TESTPMD_CSUM_CALC = 'sw'
       # recognize tunnel headers: on|off
       TESTPMD_CSUM_PARSE_TUNNEL = 'off'

2. Run test:

   .. code-block:: console

       $ ./vsperf phy2phy_tput --conf-file <path_to_settings_py>
Executing Packet Forwarding tests with one guest
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

TestPMD with DPDK 16.11 or greater can be used to forward packets as a switch to a
single guest using the TestPMD vdev option. To set this configuration the following
parameters should be used.

.. code-block:: python

    VSWITCH = 'none'
    PKTFWD = 'TestPMD'

or use the ``--vswitch`` and ``--fwdapp`` CLI arguments:

.. code-block:: console

    $ ./vsperf pvp_tput --conf-file user_settings.py \
               --vswitch none \
               --fwdapp TestPMD

The guest forwarding application only supports TestPMD in this configuration.

.. code-block:: python

    GUEST_LOOPBACK = ['testpmd']
For optimal performance, one CPU per port + 1 should be used for TestPMD. Also set
additional params for the packet forwarding application to use the correct number
of nb-cores.

.. code-block:: python

    DPDK_SOCKET_MEM = ['1024', '0']
    VSWITCHD_DPDK_ARGS = ['-l', '46,44,42,40,38', '-n', '4']
    TESTPMD_ARGS = ['--nb-cores=4', '--txq=1', '--rxq=1']

For the guest TestPMD, 3 vCPUs should be assigned with the following TestPMD params.

.. code-block:: python

    GUEST_TESTPMD_PARAMS = ['-l 0,1,2 -n 4 --socket-mem 1024 -- '
                            '--burst=64 -i --txqflags=0xf00 '
                            '--disable-hw-vlan --nb-cores=2 --txq=1 --rxq=1']

Execution of TestPMD can be run with the following command line:

.. code-block:: console

    $ ./vsperf pvp_tput --vswitch=none --fwdapp=TestPMD --conf-file <path_to_settings_py>

**NOTE:** To achieve the best 0% loss numbers with rfc2544 throughput testing, other
tunings should be applied to the host and guest, such as tuned profiles and CPU
tunings to prevent possible interrupts to worker threads.
VSPERF modes of operation
^^^^^^^^^^^^^^^^^^^^^^^^^

VSPERF can be run in different modes. By default it will configure the vSwitch,
the traffic generator and the VNF. However, it can be used just for configuration
and execution of the traffic generator. Another option is execution of all
components except the traffic generator itself.

The mode of operation is driven by the configuration parameter -m or --mode:

.. code-block:: console

    -m MODE, --mode MODE  vsperf mode of operation;
        "normal" - execute vSwitch, VNF and traffic generator
        "trafficgen" - execute only traffic generator
        "trafficgen-off" - execute vSwitch and VNF
        "trafficgen-pause" - execute vSwitch and VNF but wait before traffic transmission
In case VSPERF is executed in "trafficgen" mode, the configuration of the
traffic generator can be modified through the ``TRAFFIC`` dictionary passed to the
``--test-params`` option. It is not needed to specify all values of the ``TRAFFIC``
dictionary; it is sufficient to specify only the values which should be changed.
A detailed description of the ``TRAFFIC`` dictionary can be found at
:ref:`configuration-of-traffic-dictionary`.

Example of execution of VSPERF in "trafficgen" mode:

.. code-block:: console

    $ ./vsperf -m trafficgen --trafficgen IxNet --conf-file vsperf.conf \
        --test-params "TRAFFIC={'traffic_type':'rfc2544_continuous','bidir':'False','framerate':60}"
Code change verification by pylint
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Every developer participating in the VSPERF project should run
pylint before their python code is submitted for review. Project
specific configuration for pylint is available in 'pylintrc'.

Example of manual pylint invocation:

.. code-block:: console

    $ pylint --rcfile ./pylintrc ./vsperf
Custom image fails to boot
~~~~~~~~~~~~~~~~~~~~~~~~~~

Custom VM images may fail to boot within VSPerf pxp testing because of
the drive boot and shared type, which could be caused by a missing scsi
driver inside the image. In case of issues you can try changing the drive
types to 'ide':

.. code-block:: python

    GUEST_BOOT_DRIVE_TYPE = ['ide']
    GUEST_SHARED_DRIVE_TYPE = ['ide']
OVS with DPDK and QEMU
~~~~~~~~~~~~~~~~~~~~~~

If you encounter the following error: "before (last 100 chars):
'-path=/dev/hugepages,share=on: unable to map backing store for
hugepages: Cannot allocate memory\r\n\r\n" during qemu initialization,
check the amount of hugepages on your system:

.. code-block:: console

    $ cat /proc/meminfo | grep HugePages
By default the vswitchd is launched with 1GB of memory. To change
this, modify the --socket-mem parameter in conf/02_vswitch.conf to allocate
an appropriate amount of memory:

.. code-block:: python

    DPDK_SOCKET_MEM = ['1024', '0']
    VSWITCHD_DPDK_ARGS = ['-c', '0x4', '-n', '4']
    VSWITCHD_DPDK_CONFIG = {
        'dpdk-init' : 'true',
        'dpdk-lcore-mask' : '0x4',
        'dpdk-socket-mem' : '1024,0',
    }

Note: Option ``VSWITCHD_DPDK_ARGS`` is used for vswitchd, which supports the ``--dpdk``
parameter. In recent vswitchd versions, option ``VSWITCHD_DPDK_CONFIG`` will be
used to configure vswitchd via ``ovs-vsctl`` calls.

For more information and details refer to the rest of the vSwitchPerf user
documentation.