.. This work is licensed under a Creative Commons Attribution 4.0 International License.
.. http://creativecommons.org/licenses/by/4.0
.. (c) OPNFV, Intel Corporation, AT&T and others.

vSwitchPerf test suites userguide
---------------------------------
VSPERF requires a traffic generator to run tests. Automated traffic generator
support in VSPERF includes:

- IXIA traffic generator (IxNetwork hardware) and a machine that runs the IXIA
  client software.
- Spirent traffic generator (TestCenter hardware chassis or TestCenter virtual
  in a VM) and a VM to run the Spirent Virtual Deployment Service image,
  formerly known as "Spirent LabServer".
- Xena Network traffic generator (Xena hardware chassis) that houses the Xena
  Traffic generator modules.
- MoonGen software traffic generator. Requires a separate machine running
  MoonGen to execute packet generation.

If you want to use another traffic generator, please select the Dummy generator
option as shown in `Traffic generator instructions
<http://artifacts.opnfv.org/vswitchperf/docs/configguide/trafficgen.html>`__
To see the supported Operating Systems, vSwitches and system requirements,
please follow the `installation instructions
<http://artifacts.opnfv.org/vswitchperf/docs/configguide/installation.html>`__ to
install VSPERF.
Traffic Generator Setup
^^^^^^^^^^^^^^^^^^^^^^^

Follow the `Traffic generator instructions
<http://artifacts.opnfv.org/vswitchperf/docs/configguide/trafficgen.html>`__ to
install and configure a suitable traffic generator.
Cloning and building src dependencies
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

In order to run VSPERF, you will need to download DPDK and OVS. You can
do this manually and build them in a preferred location, OR you could
use vswitchperf/src. The vswitchperf/src directory contains makefiles
that will allow you to clone and build the libraries that VSPERF depends
on, such as DPDK and OVS. To clone and build simply run:

.. code-block:: console

    $ cd src
    $ make
VSPERF can be used with stock OVS (without DPDK support). When the build
is finished, the libraries are stored in the src_vanilla directory.

The 'make' builds all options in src:

* Vanilla OVS
* OVS with vhost_user as the guest access method (with DPDK support)

The vhost_user build will reside in src/ovs/
The Vanilla OVS build will reside in vswitchperf/src_vanilla

To delete a src subdirectory and its contents to allow you to re-clone simply
run:
.. code-block:: console

    $ make clobber
Configure the ``./conf/10_custom.conf`` file
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The ``10_custom.conf`` file is the configuration file that overrides
default configurations in all the other configuration files in ``./conf``.
The supplied ``10_custom.conf`` file **MUST** be modified, as it contains
configuration items for which there are no reasonable default values.
The configuration items that can be added are not limited to the initial
contents. Any configuration item mentioned in any .conf file in the
``./conf`` directory can be added, and that item will be overridden by
the custom configuration value.
Further details about configuration file evaluation and the special behaviour
of options with the ``GUEST_`` prefix can be found in the `design document
<http://artifacts.opnfv.org/vswitchperf/docs/design/vswitchperf_design.html#configuration>`__.
Using a custom settings file
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

If your ``10_custom.conf`` doesn't reside in the ``./conf`` directory
or if you want to use an alternative configuration file, the file can
be passed to ``vsperf`` via the ``--conf-file`` argument.

.. code-block:: console

    $ ./vsperf --conf-file <path_to_custom_conf> ...
Note that configuration passed in via the environment (``--load-env``)
or via another command line argument will override both the default and
your custom configuration files. This "priority hierarchy" can be
described like so (1 = max priority):

1. Command line arguments
2. Environment variables
3. Configuration file(s)
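The priority hierarchy above can be sketched as a simple lookup. This is a hypothetical illustration only; the function and variable names are invented and do not reflect vsperf's internal implementation:

```python
def resolve_setting(name, cli_args, environ, conf_files):
    """Return a setting's value, honoring CLI > environment > conf files.

    `conf_files` is a list of dicts in load order; later files (such as
    10_custom.conf) override earlier defaults.
    """
    if name in cli_args:                  # 1. command line arguments win
        return cli_args[name]
    if name in environ:                   # 2. then environment variables
        return environ[name]
    for conf in reversed(conf_files):     # 3. then conf files, later files win
        if name in conf:
            return conf[name]
    raise KeyError(name)

defaults = {'VSWITCH': 'OvsDpdkVhost'}    # e.g. a default from ./conf
custom = {'VSWITCH': 'OvsVanilla'}        # e.g. from 10_custom.conf
print(resolve_setting('VSWITCH', {}, {}, [defaults, custom]))  # OvsVanilla
```

A CLI argument passed to the same lookup would override both dictionaries, matching the behaviour described above.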

vloop_vnf
^^^^^^^^^
VSPERF uses a VM image called vloop_vnf for looping traffic in the deployment
scenarios involving VMs. The image can be downloaded from
`<http://artifacts.opnfv.org/>`__.

.. code-block:: console

    $ wget http://artifacts.opnfv.org/vswitchperf/vloop-vnf-ubuntu-14.04_20151216.qcow2
Newer vloop_vnf images are available. Please refer to the `installation
instructions
<http://artifacts.opnfv.org/vswitchperf/docs/configguide/installation.html>`__
for information on these images.
vloop_vnf forwards traffic through a VM using one of:

* DPDK testpmd
* Linux bridge
* l2fwd kernel module

Alternatively you can use your own QEMU image.

l2fwd
^^^^^
A kernel module that provides OSI Layer 2 IPv4 termination or forwarding with
support for Destination Network Address Translation (DNAT) for both the MAC and
IP addresses. l2fwd can be found in <vswitchperf_dir>/src/l2fwd.

Executing tests
^^^^^^^^^^^^^^^
Before running any tests make sure you have root permissions by adding
the following line to /etc/sudoers:

.. code-block:: console

    username ALL=(ALL) NOPASSWD: ALL

username in the example above should be replaced with a real username.
To list the available tests:

.. code-block:: console

    $ ./vsperf --list
To run a single test:

.. code-block:: console

    $ ./vsperf $TESTNAME

Where $TESTNAME is the name of the vsperf test you would like to run.
To run a group of tests, for example all tests with a name containing
'RFC2544':

.. code-block:: console

    $ ./vsperf --conf-file=<path_to_custom_conf>/10_custom.conf --tests="RFC2544"
To run all tests:

.. code-block:: console

    $ ./vsperf --conf-file=<path_to_custom_conf>/10_custom.conf
Some tests allow for configurable parameters, including test duration
(in seconds) as well as packet sizes (in bytes). For example:

.. code-block:: console

    $ ./vsperf --conf-file user_settings.py
        --test-params "duration=10;pkt_sizes=128"
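The ``--test-params`` value is a ';'-separated list of ``key=value`` pairs. A minimal parser sketch of that format (illustrative only, not vsperf's actual code):

```python
def parse_test_params(params):
    """Parse a ';'-separated 'key=value' string, such as the --test-params
    argument above, into a dict of string values (sketch only)."""
    result = {}
    for item in params.split(';'):
        item = item.strip()
        if not item:
            continue  # ignore empty segments such as trailing ';'
        key, _, value = item.partition('=')
        result[key.strip()] = value.strip()
    return result

print(parse_test_params("duration=10;pkt_sizes=128"))
# {'duration': '10', 'pkt_sizes': '128'}
```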
For all available options, check out the help dialog:

.. code-block:: console

    $ ./vsperf --help
Executing Vanilla OVS tests
^^^^^^^^^^^^^^^^^^^^^^^^^^^

1. If needed, recompile src for all OVS variants:

.. code-block:: console
2. Update your ``10_custom.conf`` file to use the appropriate variables
   for Vanilla OVS:

.. code-block:: console

    VSWITCH = 'OvsVanilla'
    VSWITCH_VANILLA_PHY_PORT_NAMES = ['$PORT1', '$PORT2']
Where $PORT1 and $PORT2 are the Linux interfaces you'd like to bind
to the vanilla bridge.
3. Run test:

.. code-block:: console

    $ ./vsperf --conf-file=<path_to_custom_conf>
Please note if you don't want to configure Vanilla OVS through the
configuration file, you can pass it as a CLI argument; BUT you must
still configure the physical ports in your configuration file:

.. code-block:: console

    $ ./vsperf --vswitch OvsVanilla
Executing tests with VMs
^^^^^^^^^^^^^^^^^^^^^^^^

To run tests using vhost-user as the guest access method:

1. Set VSWITCH and VNF in your settings file to:

.. code-block:: console

    VSWITCH = 'OvsDpdkVhost'
    VNF = 'QemuDpdkVhost'
2. If needed, recompile src for all OVS variants:

.. code-block:: console

3. Run test:

.. code-block:: console

    $ ./vsperf --conf-file=<path_to_custom_conf>/10_custom.conf
Executing tests with VMs using Vanilla OVS
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

To run tests using Vanilla OVS:

1. Set the following variables:

.. code-block:: console

    VSWITCH = 'OvsVanilla'
    VNF = 'QemuVirtioNet'

    VANILLA_TGEN_PORT1_IP = n.n.n.n
    VANILLA_TGEN_PORT1_MAC = nn:nn:nn:nn:nn:nn

    VANILLA_TGEN_PORT2_IP = n.n.n.n
    VANILLA_TGEN_PORT2_MAC = nn:nn:nn:nn:nn:nn

    VANILLA_BRIDGE_IP = n.n.n.n
or override the values on the command line:

.. code-block:: console

    $ ./vsperf --conf-file=<path_to_custom_conf>/10_custom.conf
        --test-params "vanilla_tgen_tx_ip=n.n.n.n;
                       vanilla_tgen_tx_mac=nn:nn:nn:nn:nn:nn"
2. If needed, recompile src for all OVS variants:

.. code-block:: console

3. Run test:

.. code-block:: console

    $ ./vsperf --conf-file=<path_to_custom_conf>/10_custom.conf
.. _vfio-pci:

Using vfio_pci with DPDK
^^^^^^^^^^^^^^^^^^^^^^^^

To use vfio with DPDK instead of igb_uio, add the following parameter to your
custom configuration file:

.. code-block:: console

    PATHS['dpdk']['src']['modules'] = ['uio', 'vfio-pci']
**NOTE:** In case that DPDK is installed from a binary package, please
set ``PATHS['dpdk']['bin']['modules']`` instead.

**NOTE:** Please ensure that Intel VT-d is enabled in BIOS.

**NOTE:** Please ensure your boot/grub parameters include
the following:

.. code-block:: console

    iommu=pt intel_iommu=on
To check that IOMMU is enabled on your platform:

.. code-block:: console

    $ dmesg | grep -e IOMMU
    [    0.000000] Intel-IOMMU: enabled
    [    0.139882] dmar: IOMMU 0: reg_base_addr fbffe000 ver 1:0 cap d2078c106f0466 ecap f020de
    [    0.139888] dmar: IOMMU 1: reg_base_addr ebffc000 ver 1:0 cap d2078c106f0466 ecap f020de
    [    0.139893] IOAPIC id 2 under DRHD base  0xfbffe000 IOMMU 0
    [    0.139894] IOAPIC id 0 under DRHD base  0xebffc000 IOMMU 1
    [    0.139895] IOAPIC id 1 under DRHD base  0xebffc000 IOMMU 1
    [    3.335744] IOMMU: dmar0 using Queued invalidation
    [    3.335746] IOMMU: dmar1 using Queued invalidation

.. _SRIOV-support:

Using SRIOV support
^^^^^^^^^^^^^^^^^^^
To use virtual functions of a NIC with SRIOV support, use the extended form
of the NIC PCI slot definition:

.. code-block:: python

    WHITELIST_NICS = ['0000:05:00.0|vf0', '0000:05:00.1|vf3']

Where 'vf' indicates virtual function usage and the number following it
defines the VF to be used. In case VF usage is detected, vswitchperf will
enable SRIOV support for the given card and detect the PCI slot numbers of
the selected VFs.
So in the example above, one VF will be configured for NIC '0000:05:00.0'
and four VFs will be configured for NIC '0000:05:00.1'. Vswitchperf
will detect the PCI addresses of the selected VFs and use them during
testing.

At the end of vswitchperf execution, SRIOV support will be disabled.
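The '|vfN' notation can be read mechanically. The following sketch illustrates the parsing rule described above; the helper name is invented and this is not vswitchperf's code:

```python
def parse_nic_entry(entry):
    """Split a WHITELIST_NICS entry such as '0000:05:00.1|vf3' into its
    PCI slot and selected VF number (None when no VF is requested)."""
    if '|' not in entry:
        return entry, None
    slot, vf = entry.split('|', 1)
    if not vf.startswith('vf'):
        raise ValueError('expected vfN suffix: %s' % entry)
    return slot, int(vf[2:])

# 'vf3' selects VF number 3, so VFs 0-3 (four in total) must exist on the PF.
for nic in ['0000:05:00.0|vf0', '0000:05:00.1|vf3']:
    slot, vf = parse_nic_entry(nic)
    print(slot, '-> VF', vf, '(%d VFs configured)' % (vf + 1))
```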
SRIOV support is generic and can be used in different testing scenarios,
for example:

* vSwitch tests with or without DPDK support to verify the impact
  of VF usage on vSwitch performance
* tests without a vSwitch, where traffic is forwarded directly
  between VF interfaces by a packet forwarder (e.g. the testpmd application)
* tests without a vSwitch, where the VM accesses VF interfaces directly
  by PCI-passthrough_ to measure raw VM throughput performance.
.. _PCI-passthrough:

Using QEMU with PCI passthrough support
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Raw virtual machine throughput performance can be measured by executing a PVP
test with direct access to NICs by PCI passthrough. To execute a VM with direct
access to PCI devices, enable vfio-pci_. In order to use virtual functions,
SRIOV-support_ must be enabled.
Execution of a test with PCI passthrough with the vswitch disabled:

.. code-block:: console

    $ ./vsperf --conf-file=<path_to_custom_conf>/10_custom.conf
        --vswitch none --vnf QemuPciPassthrough pvp_tput

Any of the supported guest-loopback-application_ options can be used inside
a VM with PCI passthrough support.

Note: QEMU with PCI passthrough support can be used only with the PVP test
deployment.
.. _guest-loopback-application:

Selection of loopback application for tests with VMs
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

To select the loopback application that will perform traffic forwarding
inside the VM, set the following configuration parameter:

.. code-block:: console

    GUEST_LOOPBACK = ['testpmd']

or pass it on the command line via ``--test-params``:

.. code-block:: console

    $ ./vsperf --conf-file=<path_to_custom_conf>/10_custom.conf
        --test-params "guest_loopback=testpmd"
Supported loopback applications are:

.. code-block:: console

    'testpmd'      - testpmd from dpdk will be built and used
    'l2fwd'        - l2fwd module provided by Huawei will be built and used
    'linux_bridge' - linux bridge will be configured
    'buildin'      - nothing will be configured by vsperf; the VM image must
                     ensure traffic forwarding between its interfaces

A guest loopback application must be configured; otherwise traffic
will not be forwarded by the VM and testcases with VM-related deployments
will fail. The guest loopback application is set to 'testpmd' by default.
Note: In case only one NIC, or more than two NICs, are configured for the VM,
'testpmd' should be used, as it is able to forward traffic between
multiple VM NIC pairs.

Note: In case of linux_bridge, all guest NICs are connected to the same
bridge inside the guest.
Multi-Queue Configuration
^^^^^^^^^^^^^^^^^^^^^^^^^

VSPerf currently supports multi-queue with the following limitations:

1. Requires QEMU 2.5 or greater and any OVS version higher than 2.5. The
   default upstream package versions installed by VSPerf satisfy this
   requirement.

2. The guest image must have the ethtool utility installed if using l2fwd
   or linux bridge inside the guest for loopback.

3. If using OVS version 2.5.0 or less, enable old style multi-queue as shown
   in the ``02_vswitch.conf`` file.
.. code-block:: console

    OVS_OLD_STYLE_MQ = True

To enable multi-queue for dpdk, modify the ``02_vswitch.conf`` file.

.. code-block:: console

    VSWITCH_DPDK_MULTI_QUEUES = 2
**NOTE:** You should consider using the switch affinity to set a pmd cpu mask
that can optimize your performance. Consider the NUMA node of the NIC in use,
if this applies, by checking /sys/class/net/<eth_name>/device/numa_node and
setting an appropriate mask to create PMD threads on the same NUMA node.
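A pmd cpu mask is simply a bitmask with one bit set per core id. The following helper illustrates how such a mask is computed; the core numbers are made-up example values, not read from sysfs:

```python
def cpu_mask(cores):
    """Build a hex CPU mask string with one bit set per core id."""
    mask = 0
    for core in cores:
        mask |= 1 << core   # set the bit corresponding to this core
    return hex(mask)

# If the NIC's numa_node file reports node 0 and cores 2 and 4 belong to
# that node, the corresponding pmd cpu mask would be:
print(cpu_mask([2, 4]))   # 0x14
```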
When multi-queue is enabled, each dpdk or dpdkvhostuser port that is created
on the switch will set the option for multiple queues. If old style multi-queue
has been enabled, a global option for multi-queue will be used instead of the
per-port option.

To enable multi-queue on the guest, modify the ``04_vnf.conf`` file.

.. code-block:: console

    GUEST_NIC_QUEUES = 2
Enabling multi-queue at the guest will add multiple queues to each NIC port
when qemu launches the guest.

In case of Vanilla OVS, multi-queue is enabled on the tuntap ports and NIC
queues will be enabled inside the guest with ethtool. Simply enabling the
multi-queue on the guest is sufficient for Vanilla OVS multi-queue.
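Inside the guest, enabling NIC queues with ethtool amounts to a call like the one below. This is a sketch only: vsperf performs this step itself, and the interface name is an example. The ``run`` hook is an invented parameter so the call can be exercised without root privileges or real hardware:

```python
import subprocess

def enable_nic_queues(iface, queues, run=subprocess.check_call):
    """Enable `queues` combined rx/tx queues on `iface` via ethtool -L."""
    cmd = ['ethtool', '-L', iface, 'combined', str(queues)]
    run(cmd)   # in a real guest this executes the ethtool binary
    return cmd

# Example (not executed against real hardware here):
# enable_nic_queues('eth0', 2)
```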
Testpmd should be configured to take advantage of multi-queue on the guest if
using DPDKVhostUser. This can be done by modifying the ``04_vnf.conf`` file.

.. code-block:: console

    GUEST_TESTPMD_CPU_MASK = '-l 0,1,2,3,4'

    GUEST_TESTPMD_NB_CORES = 4
    GUEST_TESTPMD_TXQ = 2
    GUEST_TESTPMD_RXQ = 2
**NOTE:** The guest SMP cores must be configured to allow for testpmd to use
the optimal number of cores to take advantage of the multiple guest queues.

In case of using Vanilla OVS and qemu virtio-net, you can increase performance
by binding vhost-net threads to cpus. This can be done by enabling the affinity
in the ``04_vnf.conf`` file. This can also be done for non multi-queue enabled
configurations, as there will be 2 vhost-net threads.

.. code-block:: console

    VSWITCH_VHOST_NET_AFFINITIZATION = True

    VSWITCH_VHOST_CPU_MAP = [4,5,8,11]
**NOTE:** This method of binding would require a custom script in a real
environment.

**NOTE:** For optimal performance, guest SMPs and/or vhost-net threads should
be on the same NUMA node as the NIC in use, if possible/applicable. Testpmd
should be assigned at least (nb_cores + 1) total cores with the cpu mask.
The following CLI parameters override the corresponding configuration settings:

1. guest_nic_queues, which overrides all GUEST_NIC_QUEUES values
2. guest_testpmd_txq, which overrides all GUEST_TESTPMD_TXQ values
3. guest_testpmd_rxq, which overrides all GUEST_TESTPMD_RXQ values
4. guest_testpmd_nb_cores, which overrides all GUEST_TESTPMD_NB_CORES values
5. guest_testpmd_cpu_mask, which overrides all GUEST_TESTPMD_CPU_MASK values
6. vswitch_dpdk_multi_queues, which overrides VSWITCH_DPDK_MULTI_QUEUES
7. guest_smp, which overrides all GUEST_SMP values
8. guest_core_binding, which overrides all GUEST_CORE_BINDING values
Executing Packet Forwarding tests
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

To select the application that will perform packet forwarding, set the
following configuration parameter:

.. code-block:: console

    VSWITCH = 'none'
    PKTFWD = 'TestPMD'
or use the --vswitch and --fwdapp CLI arguments:

.. code-block:: console

    $ ./vsperf --conf-file user_settings.py
        --vswitch none --fwdapp TestPMD
Supported Packet Forwarding applications are:

.. code-block:: console

    'testpmd' - testpmd from dpdk
1. Update your ``10_custom.conf`` file to use the appropriate variables
   for the selected Packet Forwarder:

.. code-block:: console

    # testpmd configuration
    # packet forwarding mode supported by testpmd; please see DPDK documentation
    # for a comprehensive list of modes supported by your version.
    # e.g. io|mac|mac_retry|macswap|flowgen|rxonly|txonly|csum|icmpecho|...
    # Note: Option "mac_retry" has been changed to "mac retry" since DPDK v16.07
    TESTPMD_FWD_MODE = 'csum'
    # checksum calculation layer: ip|udp|tcp|sctp|outer-ip
    TESTPMD_CSUM_LAYER = 'ip'
    # checksum calculation place: hw (hardware) | sw (software)
    TESTPMD_CSUM_CALC = 'sw'
    # recognize tunnel headers: on|off
    TESTPMD_CSUM_PARSE_TUNNEL = 'off'
2. Run test:

.. code-block:: console

    $ ./vsperf --conf-file <path_to_settings_py>
VSPERF modes of operation
^^^^^^^^^^^^^^^^^^^^^^^^^

VSPERF can be run in different modes. By default it will configure the
vSwitch, the traffic generator and the VNF. However, it can be used just for
configuration and execution of the traffic generator. Another option is
execution of all components except the traffic generator itself.

The mode of operation is driven by the configuration parameter -m or --mode:

.. code-block:: console

    -m MODE, --mode MODE  vsperf mode of operation;
        "normal" - execute vSwitch, VNF and traffic generator
        "trafficgen" - execute only traffic generator
        "trafficgen-off" - execute vSwitch and VNF
        "trafficgen-pause" - execute vSwitch and VNF but wait before traffic transmission
In case VSPERF is executed in "trafficgen" mode, the traffic generator
should be configured through the --test-params option. Supported CLI
options useful for traffic generator configuration are:

.. code-block:: console

    'traffic_type'  - One of the supported traffic types, e.g. rfc2544,
                      back2back or continuous.
                      Default value is "rfc2544".
    'bidirectional' - Specifies if generated traffic will be full-duplex (true)
                      or half-duplex (false).
                      Default value is "false".
    'iload'         - Defines desired percentage of frame rate used during
                      continuous stream tests.
                      Default value is 100.
    'multistream'   - Defines number of flows simulated by traffic generator.
                      Value 0 disables the MultiStream feature.
                      Default value is 0.
    'stream_type'   - Stream Type is an extension of the "MultiStream" feature.
                      If MultiStream is disabled, then Stream Type will be
                      ignored. Stream Type defines the ISO OSI network layer
                      used for simulation of multiple streams.
                      Default value is "L4".
Example of execution of VSPERF in "trafficgen" mode:

.. code-block:: console

    $ ./vsperf -m trafficgen --trafficgen IxNet --conf-file vsperf.conf
        --test-params "traffic_type=continuous;bidirectional=True;iload=60"
Code change verification by pylint
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Every developer participating in the VSPERF project should run
pylint before their python code is submitted for review. Project
specific configuration for pylint is available in 'pylintrc'.

Example of manual pylint invocation:

.. code-block:: console

    $ pylint --rcfile ./pylintrc ./vsperf
OVS with DPDK and QEMU
~~~~~~~~~~~~~~~~~~~~~~~

If you encounter the following error: "before (last 100 chars):
'-path=/dev/hugepages,share=on: unable to map backing store for
hugepages: Cannot allocate memory\r\n\r\n'" during qemu initialization,
check the amount of hugepages on your system:

.. code-block:: console

    $ cat /proc/meminfo | grep HugePages
By default the vswitchd is launched with 1 GB of memory. To change
this, modify the --socket-mem parameter in conf/02_vswitch.conf to allocate
an appropriate amount of memory:

.. code-block:: console

    VSWITCHD_DPDK_ARGS = ['-c', '0x4', '-n', '4', '--socket-mem 1024,0']
    VSWITCHD_DPDK_CONFIG = {
        'dpdk-init' : 'true',
        'dpdk-lcore-mask' : '0x4',
        'dpdk-socket-mem' : '1024,0',
    }

Note: Option VSWITCHD_DPDK_ARGS is used for vswitchd, which supports the --dpdk
parameter. In recent vswitchd versions, option VSWITCHD_DPDK_CONFIG will be
used to configure vswitchd via ovs-vsctl calls.
For more information and details refer to the vSwitchPerf user guide at:
http://artifacts.opnfv.org/vswitchperf/docs/userguide/index.html