.. This work is licensed under a Creative Commons Attribution 4.0 International License.
.. http://creativecommons.org/licenses/by/4.0
.. (c) OPNFV, Intel Corporation, AT&T and others.

vSwitchPerf test suites userguide
---------------------------------
VSPERF requires a traffic generator to run tests; automated traffic
generator support in VSPERF includes:

- IXIA traffic generator (IxNetwork hardware) and a machine that runs the IXIA
  client software.
- Spirent traffic generator (TestCenter hardware chassis or TestCenter virtual
  in a VM) and a VM to run the Spirent Virtual Deployment Service image,
  formerly known as "Spirent LabServer".

If you want to use another traffic generator, please select the Dummy generator
option as shown in `Traffic generator instructions
<http://artifacts.opnfv.org/vswitchperf/docs/configguide/trafficgen.html>`__
To see the supported Operating Systems, vSwitches and system requirements,
please follow the `installation instructions
<http://artifacts.opnfv.org/vswitchperf/docs/configguide/installation.html>`__ to
install vsperf.
Traffic Generator Setup
^^^^^^^^^^^^^^^^^^^^^^^

Follow the `Traffic generator instructions
<http://artifacts.opnfv.org/vswitchperf/docs/configguide/trafficgen.html>`__ to
install and configure a suitable traffic generator.
Cloning and building src dependencies
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

In order to run VSPERF, you will need to download DPDK and OVS. You can
do this manually and build them in a preferred location, or you can
use vswitchperf/src. The vswitchperf/src directory contains makefiles
that will allow you to clone and build the libraries that VSPERF depends
on, such as DPDK and OVS. To clone and build simply:

.. code-block:: console

   $ cd src
   $ make
VSPERF can be used with stock OVS (without DPDK support). When the build
is finished, the libraries are stored in the src_vanilla directory.

The 'make' builds all options in src:

* Vanilla OVS
* OVS with vhost_user as the guest access method (with DPDK support)
* OVS with vhost_cuse as the guest access method (with DPDK support)

The vhost_user build will reside in src/ovs/
The vhost_cuse build will reside in vswitchperf/src_cuse
The Vanilla OVS build will reside in vswitchperf/src_vanilla
To delete a src subdirectory and its contents to allow you to re-clone, simply
run:

.. code-block:: console

   $ make clobber
Configure the ``./conf/10_custom.conf`` file
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The ``10_custom.conf`` file is the configuration file that overrides
default configurations in all the other configuration files in ``./conf``.
The supplied ``10_custom.conf`` file **MUST** be modified, as it contains
configuration items for which there are no reasonable default values.

The configuration items that can be added are not limited to the initial
contents. Any configuration item mentioned in any .conf file in the
``./conf`` directory can be added, and that item will be overridden by
the custom configuration value.
Using a custom settings file
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

If your ``10_custom.conf`` doesn't reside in the ``./conf`` directory,
or if you want to use an alternative configuration file, the file can
be passed to ``vsperf`` via the ``--conf-file`` argument.

.. code-block:: console

   $ ./vsperf --conf-file <path_to_custom_conf> ...
Note that configuration passed in via the environment (``--load-env``)
or via another command line argument will override both the default and
your custom configuration files. This "priority hierarchy" can be
described like so (1 = max priority):

1. Command line arguments
2. Environment variables
3. Configuration file(s)
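The hierarchy above can be illustrated with a small sketch. Note this is a hypothetical helper written for this guide, not VSPERF's actual implementation; the setting name used in the example is only illustrative.

```python
# Hypothetical sketch of the priority hierarchy above; resolve_setting()
# is illustrative, not a VSPERF internal function.

def resolve_setting(name, cli_args, env_vars, conf_files):
    """Return the effective value for 'name' (1=CLI > 2=env > 3=conf files)."""
    if name in cli_args:                 # 1. command line arguments win
        return cli_args[name]
    if name in env_vars:                 # 2. then environment variables
        return env_vars[name]
    for conf in reversed(conf_files):    # 3. conf files; later files override earlier
        if name in conf:
            return conf[name]
    raise KeyError(name)

defaults = {'GUEST_LOOPBACK': 'testpmd'}
custom = {'GUEST_LOOPBACK': 'l2fwd'}

# 10_custom.conf overrides the defaults ...
print(resolve_setting('GUEST_LOOPBACK', {}, {}, [defaults, custom]))
# ... but a CLI argument overrides both
print(resolve_setting('GUEST_LOOPBACK',
                      {'GUEST_LOOPBACK': 'linux_bridge'}, {},
                      [defaults, custom]))
```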
vloop-vnf
^^^^^^^^^

vsperf uses a VM called vloop_vnf for looping traffic in the PVP and PVVP
deployment scenarios. The image can be downloaded from
`<http://artifacts.opnfv.org/>`__.

.. code-block:: console

   $ wget http://artifacts.opnfv.org/vswitchperf/vloop-vnf-ubuntu-14.04_20151216.qcow2
vloop_vnf forwards traffic through a VM using one of:

* DPDK testpmd
* Linux Bridge
* l2fwd kernel module

Alternatively you can use your own QEMU image.
l2fwd Kernel Module
^^^^^^^^^^^^^^^^^^^

A Kernel Module that provides OSI Layer 2 IPv4 termination or forwarding with
support for Destination Network Address Translation (DNAT) for both the MAC and
IP addresses. l2fwd can be found in ``<vswitchperf_dir>/src/l2fwd``.
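Conceptually, the DNAT rewrite the module performs can be sketched as below. This is an illustration of the idea only, not the kernel module's code; it assumes an untagged Ethernet II / IPv4 frame and omits checksum recalculation.

```python
# Illustration of a MAC + IP DNAT rewrite on a raw frame; NOT the l2fwd
# module's actual code. Assumes an untagged Ethernet II header (14 bytes)
# followed by an IPv4 header; checksum updates are omitted for brevity.

def dnat_rewrite(frame: bytes, new_dst_mac: bytes, new_dst_ip: bytes) -> bytes:
    buf = bytearray(frame)
    buf[0:6] = new_dst_mac      # destination MAC occupies bytes 0-5
    buf[30:34] = new_dst_ip     # destination IP: 14 B Ethernet + bytes 16-19 of IPv4
    return bytes(buf)
```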
Executing tests
^^^^^^^^^^^^^^^

Before running any tests make sure you have root permissions by adding
the following line to /etc/sudoers:

.. code-block:: console

   username ALL=(ALL) NOPASSWD: ALL

username in the example above should be replaced with a real username.
To list the available tests:

.. code-block:: console

   $ ./vsperf --list
To run a single test:

.. code-block:: console

   $ ./vsperf --conf-file=<path_to_custom_conf>/10_custom.conf $TESTNAME

Where $TESTNAME is the name of the vsperf test you would like to run.
To run a group of tests, for example all tests with a name containing
"RFC2544":

.. code-block:: console

   $ ./vsperf --conf-file=<path_to_custom_conf>/10_custom.conf --tests="RFC2544"
To run all tests:

.. code-block:: console

   $ ./vsperf --conf-file=<path_to_custom_conf>/10_custom.conf
Some tests allow for configurable parameters, including test duration
(in seconds) as well as packet sizes (in bytes).

.. code-block:: console

   $ ./vsperf --conf-file user_settings.py
       --test-params "duration=10;pkt_sizes=128"
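The ``--test-params`` string is a simple "key=value" list separated by semicolons. A hedged sketch of how such a string could be parsed (the helper is hypothetical; the parameter names match the example above):

```python
# Hypothetical parser for a "key=value;key=value" --test-params string;
# not VSPERF's actual implementation.

def parse_test_params(params: str) -> dict:
    result = {}
    for pair in filter(None, params.split(';')):
        key, _, value = pair.partition('=')
        result[key.strip()] = value.strip()
    return result

print(parse_test_params("duration=10;pkt_sizes=128"))
# {'duration': '10', 'pkt_sizes': '128'}
```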
For all available options, check out the help dialog:

.. code-block:: console

   $ ./vsperf --help
Executing Vanilla OVS tests
^^^^^^^^^^^^^^^^^^^^^^^^^^^

1. If needed, recompile src for all OVS variants (see
   `Cloning and building src dependencies`_).
2. Update your ``10_custom.conf`` file to use the appropriate variables
   for Vanilla OVS:

.. code-block:: console

   VSWITCH = 'OvsVanilla'
   VSWITCH_VANILLA_PHY_PORT_NAMES = ['$PORT1', '$PORT2']

Where $PORT1 and $PORT2 are the Linux interfaces you'd like to bind
to the vswitch.
3. Run test:

.. code-block:: console

   $ ./vsperf --conf-file=<path_to_custom_conf>

Please note if you don't want to configure Vanilla OVS through the
configuration file, you can pass it as a CLI argument; BUT you must
still set the physical ports in your configuration file:

.. code-block:: console

   $ ./vsperf --vswitch OvsVanilla
Executing PVP and PVVP tests
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

To run tests using vhost-user as the guest access method:

1. Set VHOST_METHOD and VNF in your settings file to:

.. code-block:: console

   VHOST_METHOD = 'vhost_user'
   VNF = 'QemuDpdkVhost'

2. If needed, recompile src for all OVS variants (see
   `Cloning and building src dependencies`_).

3. Run test:

.. code-block:: console

   $ ./vsperf --conf-file=<path_to_custom_conf>/10_custom.conf
To run tests using vhost-cuse as the guest access method:

1. Set VHOST_METHOD and VNF in your settings file to:

.. code-block:: console

   VHOST_METHOD = 'vhost_cuse'
   VNF = 'QemuDpdkVhostCuse'

2. If needed, recompile src for all OVS variants (see
   `Cloning and building src dependencies`_).

3. Run test:

.. code-block:: console

   $ ./vsperf --conf-file=<path_to_custom_conf>/10_custom.conf
Executing PVP tests using Vanilla OVS
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

To run tests using Vanilla OVS:

1. Set the following variables:

.. code-block:: console

   VSWITCH = 'OvsVanilla'
   VNF = 'QemuVirtioNet'

   VANILLA_TGEN_PORT1_IP = n.n.n.n
   VANILLA_TGEN_PORT1_MAC = nn:nn:nn:nn:nn:nn

   VANILLA_TGEN_PORT2_IP = n.n.n.n
   VANILLA_TGEN_PORT2_MAC = nn:nn:nn:nn:nn:nn

   VANILLA_BRIDGE_IP = n.n.n.n

or use the ``--test-params`` option:

.. code-block:: console

   $ ./vsperf --conf-file=<path_to_custom_conf>/10_custom.conf
       --test-params "vanilla_tgen_tx_ip=n.n.n.n;
                      vanilla_tgen_tx_mac=nn:nn:nn:nn:nn:nn"
2. If needed, recompile src for all OVS variants (see
   `Cloning and building src dependencies`_).

3. Run test:

.. code-block:: console

   $ ./vsperf --conf-file <path_to_custom_conf>/10_custom.conf
.. _vfio-pci:

Using vfio_pci with DPDK
^^^^^^^^^^^^^^^^^^^^^^^^^

To use vfio with DPDK instead of igb_uio, edit ``conf/02_vswitch.conf``
with the following parameters:

.. code-block:: console

   SYS_MODULES = ['cuse']

**NOTE:** Please ensure that Intel VT-d is enabled in BIOS.

**NOTE:** Please ensure your boot/grub parameters include
the following:

.. code-block:: console

   iommu=pt intel_iommu=on
To check that IOMMU is enabled on your platform:

.. code-block:: console

   $ dmesg | grep IOMMU
   [    0.000000] Intel-IOMMU: enabled
   [    0.139882] dmar: IOMMU 0: reg_base_addr fbffe000 ver 1:0 cap d2078c106f0466 ecap f020de
   [    0.139888] dmar: IOMMU 1: reg_base_addr ebffc000 ver 1:0 cap d2078c106f0466 ecap f020de
   [    0.139893] IOAPIC id 2 under DRHD base  0xfbffe000 IOMMU 0
   [    0.139894] IOAPIC id 0 under DRHD base  0xebffc000 IOMMU 1
   [    0.139895] IOAPIC id 1 under DRHD base  0xebffc000 IOMMU 1
   [    3.335744] IOMMU: dmar0 using Queued invalidation
   [    3.335746] IOMMU: dmar1 using Queued invalidation
.. _SRIOV-support:

Using SRIOV support
^^^^^^^^^^^^^^^^^^^

To use virtual functions of a NIC with SRIOV support, use the extended form
of the NIC PCI slot definition:

.. code-block:: python

   WHITELIST_NICS = ['0000:05:00.0|vf0', '0000:05:00.1|vf3']

Where 'vf' indicates virtual function usage and the following
number defines the VF to be used. In case VF usage is detected,
vswitchperf will enable SRIOV support for the given card and it will
detect the PCI slot numbers of the selected VFs.

So in the example above, one VF will be configured for NIC '0000:05:00.0'
and four VFs will be configured for NIC '0000:05:00.1'. Vswitchperf
will detect the PCI addresses of the selected VFs and it will use them during
testcase execution.

At the end of vswitchperf execution, SRIOV support will be disabled.
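The extended "PCI|vfN" notation can be sketched with a small parser. This helper is hypothetical, written for this guide only; it encodes the zero-based indexing described above, where '|vf3' implies four VFs must be enabled.

```python
# Hypothetical parser for the extended "PCI|vfN" notation described above.
# VF indices are zero-based, so '|vf3' implies four VFs must be enabled.

def parse_nic_entry(entry: str) -> dict:
    if '|vf' in entry:
        pci, _, index = entry.partition('|vf')
        return {'pci': pci, 'vf_index': int(index), 'vfs_needed': int(index) + 1}
    return {'pci': entry, 'vf_index': None, 'vfs_needed': 0}

for nic in ['0000:05:00.0|vf0', '0000:05:00.1|vf3']:
    print(parse_nic_entry(nic))
```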
SRIOV support is generic and can be used in different testing scenarios.
For example:

* vSwitch tests with or without DPDK support, to verify the impact
  of VF usage on vSwitch performance
* tests without a vSwitch, where traffic is forwarded directly
  between VF interfaces by a packet forwarder (e.g. the testpmd application)
* tests without a vSwitch, where the VM accesses VF interfaces directly
  by PCI-passthrough_ to measure raw VM throughput performance.
.. _PCI-passthrough:

Using QEMU with PCI passthrough support
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Raw virtual machine throughput performance can be measured by executing a PVP
test with direct access to NICs by PCI passthrough. To execute a VM with direct
access to PCI devices, enable vfio-pci_. In order to use virtual functions,
SRIOV-support_ must be enabled.

Execution of a test with PCI passthrough with the vswitch disabled:

.. code-block:: console

   $ ./vsperf --conf-file=<path_to_custom_conf>/10_custom.conf
       --vswitch none --vnf QemuPciPassthrough pvp_tput

Any of the supported guest-loopback-application_ options can be used inside a
VM with PCI passthrough support.

Note: QEMU with PCI passthrough support can be used only with the PVP test
deployment scenario.
.. _guest-loopback-application:

Selection of loopback application for PVP and PVVP tests
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

To select the loopback application that will perform traffic forwarding
inside the VM, the following configuration parameter should be set:

.. code-block:: console

   GUEST_LOOPBACK = ['testpmd', 'testpmd']

or the loopback application can be selected via the ``--test-params`` CLI
argument:

.. code-block:: console

   $ ./vsperf --conf-file=<path_to_custom_conf>/10_custom.conf
       --test-params "guest_loopback=testpmd"
Supported loopback applications are:

.. code-block:: console

   'testpmd'      - testpmd from dpdk will be built and used
   'l2fwd'        - l2fwd module provided by Huawei will be built and used
   'linux_bridge' - linux bridge will be configured
   'buildin'      - nothing will be configured by vsperf; the VM image must
                    ensure traffic forwarding between its interfaces

A guest loopback application must be configured, otherwise traffic
will not be forwarded by the VM and testcases with PVP and PVVP deployments
will fail. The guest loopback application is set to 'testpmd' by default.
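The per-VM selection can be sketched as follows, under the assumption (consistent with the two-entry example above) that ``GUEST_LOOPBACK`` holds one entry per VM in the deployment; the validation helper itself is hypothetical.

```python
# Sketch: pick the loopback application for a given VM, assuming one
# GUEST_LOOPBACK entry per VM. Hypothetical helper, not VSPERF code.

SUPPORTED_LOOPBACK = {'testpmd', 'l2fwd', 'linux_bridge', 'buildin'}

def loopback_for_vm(guest_loopback: list, vm_index: int) -> str:
    app = guest_loopback[vm_index]
    if app not in SUPPORTED_LOOPBACK:
        raise ValueError('unsupported loopback application: %s' % app)
    return app

# PVVP deployment with two VMs using different loopback applications
print(loopback_for_vm(['testpmd', 'l2fwd'], 1))   # l2fwd
```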
Multi-Queue Configuration
^^^^^^^^^^^^^^^^^^^^^^^^^

VSPerf currently supports multi-queue with the following limitations:

1. Execution of pvp/pvvp tests requires testpmd as the loopback if multi-queue
   is enabled at the guest.

2. Requires QemuDpdkVhostUser as the vnf.

3. Requires the switch to be set to OvsDpdkVhost.

4. Requires QEMU 2.5 or greater and any OVS version higher than 2.5. The
   default upstream package versions installed by VSPerf satisfy this
   requirement.
To enable multi-queue, modify the ``02_vswitch.conf`` file with the number
of queues to use:

.. code-block:: console

   VSWITCH_MULTI_QUEUES = 2
**NOTE:** You should consider using the switch affinity to set a PMD CPU mask
that can optimize your performance. Consider the NUMA node of the NIC in use,
if this applies, by checking /sys/class/net/<eth_name>/device/numa_node and
setting an appropriate mask to create PMD threads on the same NUMA node.

When multi-queue is enabled, each dpdk or dpdkvhostuser port that is created
on the switch will set the option for multiple queues.
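The NUMA check above can be sketched as a small helper, assuming the standard Linux sysfs layout. The ``sysfs_root`` parameter exists only so the sketch can be exercised against a test directory; -1 means the node could not be determined.

```python
# Sketch of the NUMA lookup described above, assuming the standard Linux
# sysfs layout; hypothetical helper, not part of VSPERF.

def nic_numa_node(eth_name: str, sysfs_root: str = '/sys/class/net') -> int:
    """Return the NUMA node of a NIC, or -1 when it cannot be determined."""
    path = '%s/%s/device/numa_node' % (sysfs_root, eth_name)
    try:
        with open(path) as handle:
            return int(handle.read().strip())
    except (FileNotFoundError, ValueError):
        return -1
```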
To enable multi-queue on the guest, modify the ``04_vnf.conf`` file:

.. code-block:: console

   GUEST_NIC_QUEUES = 2

Enabling multi-queue at the guest will add multiple queues to each NIC port when
qemu launches the guest.
Testpmd should be configured to take advantage of multi-queue on the guest. This
can be done by modifying the ``04_vnf.conf`` file:

.. code-block:: console

   GUEST_TESTPMD_CPU_MASK = '-l 0,1,2,3,4'

   GUEST_TESTPMD_NB_CORES = 4
   GUEST_TESTPMD_TXQ = 2
   GUEST_TESTPMD_RXQ = 2
**NOTE:** The guest SMP cores must be configured to allow testpmd to use the
optimal number of cores to take advantage of the multiple guest queues.

**NOTE:** For optimal performance, guest SMPs should be on the same NUMA node
as the NIC in use if possible/applicable. Testpmd should be assigned at least
(nb_cores + 1) total cores with the CPU mask.
Executing Packet Forwarding tests
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

To select the application that will perform packet forwarding,
the following configuration parameters should be set:

.. code-block:: console

   VSWITCH = 'none'
   PKTFWD = 'TestPMD'

or use the ``--vswitch`` and ``--fwdapp`` CLI arguments:

.. code-block:: console

   $ ./vsperf --conf-file user_settings.py
       --vswitch none
       --fwdapp TestPMD

Supported Packet Forwarding applications are:

.. code-block:: console

   'testpmd' - testpmd from dpdk
1. Update your ``10_custom.conf`` file to use the appropriate variables
   for the selected Packet Forwarder:

.. code-block:: console

   # testpmd configuration
   # packet forwarding mode: io|mac|mac_retry|macswap|flowgen|rxonly|txonly|csum|icmpecho
   TESTPMD_FWD_MODE = 'csum'
   # checksum calculation layer: ip|udp|tcp|sctp|outer-ip
   TESTPMD_CSUM_LAYER = 'ip'
   # checksum calculation place: hw (hardware) | sw (software)
   TESTPMD_CSUM_CALC = 'sw'
   # recognize tunnel headers: on|off
   TESTPMD_CSUM_PARSE_TUNNEL = 'off'

2. Run tests:

.. code-block:: console

   $ ./vsperf --conf-file <path_to_settings_py>
VSPERF modes of operation
^^^^^^^^^^^^^^^^^^^^^^^^^

VSPERF can be run in different modes. By default it will configure the vSwitch,
traffic generator and VNF. However, it can be used just for configuration
and execution of the traffic generator. Another option is execution of all
components except the traffic generator itself.

The mode of operation is driven by the configuration parameter -m or --mode:

.. code-block:: console

   -m MODE, --mode MODE  vsperf mode of operation;
       "normal" - execute vSwitch, VNF and traffic generator
       "trafficgen" - execute only traffic generator
       "trafficgen-off" - execute vSwitch and VNF
       "trafficgen-pause" - execute vSwitch and VNF but wait before traffic transmission

When VSPERF is executed in "trafficgen" mode, the traffic generator
should be configured through the --test-params option.
Supported CLI options useful for traffic generator configuration are:
.. code-block:: console

   'traffic_type'  - One of the supported traffic types, e.g. rfc2544,
                     back2back or continuous.
                     Default value is "rfc2544".
   'bidirectional' - Specifies if generated traffic will be full-duplex (true)
                     or half-duplex (false).
                     Default value is "false".
   'iload'         - Defines the desired percentage of frame rate used during
                     continuous stream tests.
                     Default value is 100.
   'multistream'   - Defines the number of flows simulated by the traffic
                     generator. Value 0 disables the MultiStream feature.
   'stream_type'   - Stream Type is an extension of the "MultiStream" feature.
                     If MultiStream is disabled, then Stream Type will be
                     ignored. Stream Type defines the ISO OSI network layer
                     used for simulation of multiple streams.
                     Default value is "L4".
Example of execution of VSPERF in "trafficgen" mode:

.. code-block:: console

   $ ./vsperf -m trafficgen --trafficgen IxNet --conf-file vsperf.conf
       --test-params "traffic_type=continuous;bidirectional=True;iload=60"
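Merging the documented defaults with ``--test-params`` overrides can be sketched as below. This is a hedged illustration only: the default values follow the option list above (a multistream default of 0 is an assumption), and the helper is hypothetical, not VSPERF's implementation.

```python
# Hypothetical sketch: merge documented defaults with --test-params
# overrides into one traffic configuration. The multistream default of 0
# is an assumption; other defaults follow the option list above.

TRAFFIC_DEFAULTS = {
    'traffic_type': 'rfc2544',
    'bidirectional': 'false',
    'iload': 100,
    'multistream': 0,
    'stream_type': 'L4',
}

def traffic_config(overrides: dict) -> dict:
    cfg = dict(TRAFFIC_DEFAULTS)
    cfg.update(overrides)
    return cfg

# mirrors the CLI example above: continuous traffic at 60% of frame rate
cfg = traffic_config({'traffic_type': 'continuous', 'iload': 60})
print(cfg['traffic_type'], cfg['iload'], cfg['stream_type'])
```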
Code change verification by pylint
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Every developer participating in the VSPERF project should run
pylint before his python code is submitted for review. Project-specific
configuration for pylint is available in 'pylintrc'.

Example of manual pylint invocation:

.. code-block:: console

   $ pylint --rcfile ./pylintrc ./vsperf
GOTCHAs
^^^^^^^

OVS with DPDK and QEMU
~~~~~~~~~~~~~~~~~~~~~~

If you encounter the following error: "before (last 100 chars):
'-path=/dev/hugepages,share=on: unable to map backing store for
hugepages: Cannot allocate memory\r\n\r\n" with the PVP or PVVP
deployment scenario, check the amount of hugepages on your system:

.. code-block:: console

   $ cat /proc/meminfo | grep HugePages
By default the vswitchd is launched with 1 GB of memory. To change
this, modify the --socket-mem parameter in conf/02_vswitch.conf to allocate
an appropriate amount of memory:

.. code-block:: console

   VSWITCHD_DPDK_ARGS = ['-c', '0x4', '-n', '4', '--socket-mem 1024,0']
   VSWITCHD_DPDK_CONFIG = {
       'dpdk-init' : 'true',
       'dpdk-lcore-mask' : '0x4',
       'dpdk-socket-mem' : '1024,0',
   }

Note: The option VSWITCHD_DPDK_ARGS is used for vswitchd, which supports the
--dpdk parameter. In recent vswitchd versions, the option VSWITCHD_DPDK_CONFIG
will be used to configure vswitchd via ovs-vsctl calls.
For more information and details refer to the vSwitchPerf user guide at:
http://artifacts.opnfv.org/vswitchperf/brahmaputra/userguide/index.html