.. This work is licensed under a Creative Commons Attribution 4.0 International License.
.. http://creativecommons.org/licenses/by/4.0
.. (c) OPNFV, Intel Corporation, AT&T and others.
vSwitchPerf test suites userguide
---------------------------------
VSPERF requires a traffic generator to run tests. Automated traffic generator
support in VSPERF includes:

- IXIA traffic generator (IxNetwork hardware) and a machine that runs the IXIA
  client software.

- Spirent traffic generator (TestCenter hardware chassis or TestCenter virtual
  in a VM) and a VM to run the Spirent Virtual Deployment Service image,
  formerly known as "Spirent LabServer".

If you want to use another traffic generator, please select the Dummy generator
option as shown in `Traffic generator instructions
<http://artifacts.opnfv.org/vswitchperf/docs/configguide/trafficgen.html>`__
To see the supported Operating Systems, vSwitches and system requirements,
please follow the `installation instructions
<http://artifacts.opnfv.org/vswitchperf/docs/configguide/installation.html>`__ to
install VSPERF.
Traffic Generator Setup
^^^^^^^^^^^^^^^^^^^^^^^

Follow the `Traffic generator instructions
<http://artifacts.opnfv.org/vswitchperf/docs/configguide/trafficgen.html>`__ to
install and configure a suitable traffic generator.
Cloning and building src dependencies
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

In order to run VSPERF, you will need to download DPDK and OVS. You can
do this manually and build them in a preferred location, or you can
use vswitchperf/src. The vswitchperf/src directory contains makefiles
that will allow you to clone and build the libraries that VSPERF depends
on, such as DPDK and OVS. To clone and build simply:
.. code-block:: console

    $ cd src
    $ make
VSPERF can be used with stock OVS (without DPDK support). When the build
is finished, the libraries are stored in the src_vanilla directory.

The 'make' builds all options in src:

* Vanilla OVS
* OVS with vhost_user as the guest access method (with DPDK support)
* OVS with vhost_cuse as the guest access method (with DPDK support)

The vhost_user build will reside in src/ovs/
The vhost_cuse build will reside in vswitchperf/src_cuse
The Vanilla OVS build will reside in vswitchperf/src_vanilla
To delete a src subdirectory and its contents to allow you to re-clone simply
run:

.. code-block:: console

    $ make clobber
Configure the ``./conf/10_custom.conf`` file
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The ``10_custom.conf`` file is the configuration file that overrides
default configurations in all the other configuration files in ``./conf``.
The supplied ``10_custom.conf`` file **MUST** be modified, as it contains
configuration items for which there are no reasonable default values.

The configuration items that can be added are not limited to the initial
contents. Any configuration item mentioned in any .conf file in the
``./conf`` directory can be added and that item will be overridden by
the custom configuration value.
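As an illustration, a minimal ``10_custom.conf`` could override just a few
defaults; the values below are example placeholders, not recommended settings:

```python
# Illustrative 10_custom.conf fragment -- any item from the .conf files
# under ./conf may be repeated here, and the value given here wins.
VSWITCH = 'OvsVanilla'                              # vSwitch under test
GUEST_LOOPBACK = ['testpmd']                        # loopback app inside the VM
WHITELIST_NICS = ['0000:05:00.0', '0000:05:00.1']   # NICs to use for testing
```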
Using a custom settings file
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

If your ``10_custom.conf`` doesn't reside in the ``./conf`` directory
or if you want to use an alternative configuration file, the file can
be passed to ``vsperf`` via the ``--conf-file`` argument.
.. code-block:: console

    $ ./vsperf --conf-file <path_to_custom_conf> ...
Note that configuration passed in via the environment (``--load-env``)
or via another command line argument will override both the default and
your custom configuration files. This "priority hierarchy" can be
described like so (1 = max priority):

1. Command line arguments
2. Environment variables
3. Configuration file(s)
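The hierarchy above can be sketched as a layered merge; this is an
illustrative model only, not vsperf's actual settings code:

```python
# Illustrative sketch of the priority hierarchy (not vsperf's real code):
# later layers have higher priority and overwrite earlier ones.
def resolve_settings(conf_file, env_vars, cli_args):
    """Merge settings; CLI beats environment, which beats conf files."""
    merged = {}
    for layer in (conf_file, env_vars, cli_args):   # lowest to highest priority
        merged.update(layer)
    return merged

settings = resolve_settings(
    conf_file={'VSWITCH': 'OvsDpdkVhost', 'GUEST_LOOPBACK': ['testpmd']},
    env_vars={'VSWITCH': 'OvsVanilla'},      # overrides the conf file
    cli_args={'GUEST_LOOPBACK': ['l2fwd']},  # overrides everything below it
)
```

Here the command line wins for GUEST_LOOPBACK while the environment wins for
VSWITCH, mirroring the 1-2-3 ordering above.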
vloop_vnf
^^^^^^^^^

vsperf uses a VM called vloop_vnf for looping traffic in the PVP and PVVP
deployment scenarios. The image can be downloaded from
`<http://artifacts.opnfv.org/>`__.
.. code-block:: console

    $ wget http://artifacts.opnfv.org/vswitchperf/vloop-vnf-ubuntu-14.04_20151216.qcow2
vloop_vnf forwards traffic through a VM using one of:

* DPDK testpmd
* l2fwd kernel Module

Alternatively you can use your own QEMU image.
l2fwd Kernel Module
^^^^^^^^^^^^^^^^^^^

A Kernel Module that provides OSI Layer 2 IPv4 termination or forwarding with
support for Destination Network Address Translation (DNAT) for both the MAC and
IP addresses. l2fwd can be found in <vswitchperf_dir>/src/l2fwd
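The DNAT behaviour can be pictured with a toy model; this is only an
illustration, the real l2fwd is a kernel module operating on network frames:

```python
# Toy model of l2fwd-style DNAT: rewrite the destination MAC and IP of a
# frame before forwarding it back out. Purely illustrative.
def dnat(frame, new_dst_mac, new_dst_ip):
    rewritten = dict(frame)          # do not mutate the original frame
    rewritten['dst_mac'] = new_dst_mac
    rewritten['dst_ip'] = new_dst_ip
    return rewritten

frame = {'dst_mac': '00:00:00:00:00:01', 'dst_ip': '10.0.0.1', 'payload': b'x'}
out = dnat(frame, '00:00:00:00:00:02', '10.0.0.2')
```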
Executing tests
^^^^^^^^^^^^^^^

Before running any tests make sure you have root permissions by adding
the following line to /etc/sudoers:

.. code-block:: console

    username ALL=(ALL) NOPASSWD: ALL

username in the example above should be replaced with a real username.
To list the available tests:

.. code-block:: console

    $ ./vsperf --list
To run a single test:

.. code-block:: console

    $ ./vsperf $TESTNAME

Where $TESTNAME is the name of the vsperf test you would like to run.
To run a group of tests, for example all tests with a name containing
'RFC2544':

.. code-block:: console

    $ ./vsperf --conf-file=<path_to_custom_conf>/10_custom.conf --tests="RFC2544"
To run all tests:

.. code-block:: console

    $ ./vsperf --conf-file=<path_to_custom_conf>/10_custom.conf
Some tests allow for configurable parameters, including test duration
(in seconds) as well as packet sizes (in bytes).

.. code-block:: console

    $ ./vsperf --conf-file user_settings.py
        --test-params "duration=10;pkt_sizes=128"
For all available options, check out the help dialog:

.. code-block:: console

    $ ./vsperf --help
Executing Vanilla OVS tests
^^^^^^^^^^^^^^^^^^^^^^^^^^^

1. If needed, recompile src for all OVS variants

.. code-block:: console

    $ cd src
    $ make

2. Update your ``10_custom.conf`` file to use the appropriate variables
   for Vanilla OVS:
.. code-block:: console

    VSWITCH = 'OvsVanilla'
    VSWITCH_VANILLA_PHY_PORT_NAMES = ['$PORT1', '$PORT2']

Where $PORT1 and $PORT2 are the Linux interfaces you'd like to bind
to the vswitch.
3. Run test:

.. code-block:: console

    $ ./vsperf --conf-file=<path_to_custom_conf>
Please note if you don't want to configure Vanilla OVS through the
configuration file, you can pass it as a CLI argument; BUT you must
set the ports.

.. code-block:: console

    $ ./vsperf --vswitch OvsVanilla
Executing PVP and PVVP tests
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

To run tests using vhost-user as guest access method:

1. Set VHOST_METHOD and VNF of your settings file to:

.. code-block:: console

    VHOST_METHOD='vhost_user'
    VNF = 'QemuDpdkVhost'
2. If needed, recompile src for all OVS variants

.. code-block:: console

    $ cd src
    $ make

3. Run test:

.. code-block:: console

    $ ./vsperf --conf-file=<path_to_custom_conf>/10_custom.conf
To run tests using vhost-cuse as guest access method:

1. Set VHOST_METHOD and VNF of your settings file to:

.. code-block:: console

    VHOST_METHOD='vhost_cuse'
    VNF = 'QemuDpdkVhostCuse'
2. If needed, recompile src for all OVS variants

.. code-block:: console

    $ cd src
    $ make

3. Run test:

.. code-block:: console

    $ ./vsperf --conf-file=<path_to_custom_conf>/10_custom.conf
Executing PVP tests using Vanilla OVS
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

To run tests using Vanilla OVS:

1. Set the following variables:

.. code-block:: console

    VSWITCH = 'OvsVanilla'
    VNF = 'QemuVirtioNet'

    VANILLA_TGEN_PORT1_IP = n.n.n.n
    VANILLA_TGEN_PORT1_MAC = nn:nn:nn:nn:nn:nn

    VANILLA_TGEN_PORT2_IP = n.n.n.n
    VANILLA_TGEN_PORT2_MAC = nn:nn:nn:nn:nn:nn

    VANILLA_BRIDGE_IP = n.n.n.n

or use the ``--test-params`` option:

.. code-block:: console

    $ ./vsperf --conf-file=<path_to_custom_conf>/10_custom.conf
      --test-params "vanilla_tgen_tx_ip=n.n.n.n;
                     vanilla_tgen_tx_mac=nn:nn:nn:nn:nn:nn"
2. If needed, recompile src for all OVS variants

.. code-block:: console

    $ cd src
    $ make

3. Run test:

.. code-block:: console

    $ ./vsperf --conf-file=<path_to_custom_conf>/10_custom.conf
.. _vfio-pci:

Using vfio_pci with DPDK
^^^^^^^^^^^^^^^^^^^^^^^^

To use vfio with DPDK instead of igb_uio, edit 'conf/02_vswitch.conf'
with the following parameters:

.. code-block:: console

    DPDK_MODULES = [
        ('vfio-pci'),
    ]

    SYS_MODULES = ['cuse']
**NOTE:** Please ensure that Intel VT-d is enabled in BIOS.

**NOTE:** Please ensure your boot/grub parameters include
the following:

.. code-block:: console

    iommu=pt intel_iommu=on
To check that IOMMU is enabled on your platform:

.. code-block:: console

    $ dmesg | grep -e IOMMU
    [ 0.000000] Intel-IOMMU: enabled
    [ 0.139882] dmar: IOMMU 0: reg_base_addr fbffe000 ver 1:0 cap d2078c106f0466 ecap f020de
    [ 0.139888] dmar: IOMMU 1: reg_base_addr ebffc000 ver 1:0 cap d2078c106f0466 ecap f020de
    [ 0.139893] IOAPIC id 2 under DRHD base 0xfbffe000 IOMMU 0
    [ 0.139894] IOAPIC id 0 under DRHD base 0xebffc000 IOMMU 1
    [ 0.139895] IOAPIC id 1 under DRHD base 0xebffc000 IOMMU 1
    [ 3.335744] IOMMU: dmar0 using Queued invalidation
    [ 3.335746] IOMMU: dmar1 using Queued invalidation
.. _SRIOV-support:

Using SRIOV support
^^^^^^^^^^^^^^^^^^^

To use virtual functions of a NIC with SRIOV support, use the extended form
of the NIC PCI slot definition:

.. code-block:: python

    WHITELIST_NICS = ['0000:05:00.0|vf0', '0000:05:00.1|vf3']

Where 'vf' is an indication of virtual function usage and the following
number defines the VF to be used. In case VF usage is detected,
vswitchperf will enable SRIOV support for the given card and it will
detect the PCI slot numbers of the selected VFs.

So in the example above, one VF will be configured for NIC '0000:05:00.0'
and four VFs will be configured for NIC '0000:05:00.1'. Vswitchperf
will detect the PCI addresses of the selected VFs and it will use them during
testcase execution.

At the end of vswitchperf execution, SRIOV support will be disabled.
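The extended notation can be illustrated with a small parser; this helper is
hypothetical and not part of vswitchperf:

```python
# Hypothetical parser for the extended 'pci_address|vfN' NIC notation
# described above -- vswitchperf's real parsing may differ.
def parse_nic_entry(entry):
    """Return (pci_address, vf_index); vf_index is None for plain entries."""
    if '|vf' in entry:
        pci, vf_number = entry.split('|vf', 1)
        return pci, int(vf_number)
    return entry, None

assert parse_nic_entry('0000:05:00.0|vf0') == ('0000:05:00.0', 0)
assert parse_nic_entry('0000:05:00.1|vf3') == ('0000:05:00.1', 3)
assert parse_nic_entry('0000:05:00.1') == ('0000:05:00.1', None)
```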
SRIOV support is generic and it can be used in different testing scenarios.
For example:

* vSwitch tests with or without DPDK support, to verify the impact
  of VF usage on vSwitch performance
* tests without a vSwitch, where traffic is forwarded directly
  between VF interfaces by a packet forwarder (e.g. the testpmd application)
* tests without a vSwitch, where the VM accesses VF interfaces directly
  by PCI-passthrough_ to measure raw VM throughput performance.
.. _PCI-passthrough:

Using QEMU with PCI passthrough support
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Raw virtual machine throughput performance can be measured by execution of a PVP
test with direct access to NICs by PCI passthrough. To execute a VM with direct
access to PCI devices, enable vfio-pci_. In order to use virtual functions,
SRIOV-support_ must be enabled.

Execution of a test with PCI passthrough with the vswitch disabled:

.. code-block:: console

    $ ./vsperf --conf-file=<path_to_custom_conf>/10_custom.conf
      --vswitch none --vnf QemuPciPassthrough pvp_tput

Any of the supported guest-loopback-application_ options can be used inside a
VM with PCI passthrough support.

Note: Qemu with PCI passthrough support can be used only with the PVP test
deployment.
.. _guest-loopback-application:

Selection of loopback application for PVP and PVVP tests
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

To select the loopback application that will perform traffic forwarding
inside the VM, the following configuration parameter should be configured:

.. code-block:: console

    GUEST_LOOPBACK = ['testpmd', 'testpmd']

or it can be overridden by a CLI parameter:

.. code-block:: console

    $ ./vsperf --conf-file=<path_to_custom_conf>/10_custom.conf
      --test-params "guest_loopback=testpmd"
Supported loopback applications are:

.. code-block:: console

    'testpmd'      - testpmd from dpdk will be built and used
    'l2fwd'        - l2fwd module provided by Huawei will be built and used
    'linux_bridge' - linux bridge will be configured
    'buildin'      - nothing will be configured by vsperf; VM image must
                     ensure traffic forwarding between its interfaces

A guest loopback application must be configured, otherwise traffic
will not be forwarded by the VM and testcases with PVP and PVVP deployments
will fail. The guest loopback application is set to 'testpmd' by default.
Executing Packet Forwarding tests
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

To select the application that will perform packet forwarding,
the following configuration parameter should be configured:

.. code-block:: console

    VSWITCH = 'none'
    PKTFWD = 'TestPMD'

or use the --vswitch and --fwdapp CLI arguments:

.. code-block:: console

    $ ./vsperf --conf-file user_settings.py
      --vswitch none
      --fwdapp TestPMD

Supported Packet Forwarding applications are:

.. code-block:: console

    'testpmd' - testpmd from dpdk

1. Update your ``10_custom.conf`` file to use the appropriate variables
   for the selected Packet Forwarder:
.. code-block:: console

    # testpmd configuration
    # packet forwarding mode: io|mac|mac_retry|macswap|flowgen|rxonly|txonly|csum|icmpecho
    TESTPMD_FWD_MODE = 'csum'
    # checksum calculation layer: ip|udp|tcp|sctp|outer-ip
    TESTPMD_CSUM_LAYER = 'ip'
    # checksum calculation place: hw (hardware) | sw (software)
    TESTPMD_CSUM_CALC = 'sw'
    # recognize tunnel headers: on|off
    TESTPMD_CSUM_PARSE_TUNNEL = 'off'

2. Run test:

.. code-block:: console

    $ ./vsperf --conf-file <path_to_settings_py>
VSPERF modes of operation
^^^^^^^^^^^^^^^^^^^^^^^^^

VSPERF can be run in different modes. By default it will configure the vSwitch,
the traffic generator and the VNF. However, it can be used just for
configuration and execution of the traffic generator. Another option is
execution of all components except the traffic generator itself.
The mode of operation is driven by the configuration parameter -m or --mode:

.. code-block:: console

    -m MODE, --mode MODE  vsperf mode of operation;
        "normal" - execute vSwitch, VNF and traffic generator
        "trafficgen" - execute only traffic generator
        "trafficgen-off" - execute vSwitch and VNF
        "trafficgen-pause" - execute vSwitch and VNF but wait before traffic transmission
In case VSPERF is executed in "trafficgen" mode, the traffic generator
should be configured through the --test-params option.
Supported CLI options useful for traffic generator configuration are:
.. code-block:: console

    'traffic_type'  - One of the supported traffic types. E.g. rfc2544,
                      back2back or continuous
                      Default value is "rfc2544".
    'bidirectional' - Specifies if generated traffic will be full-duplex (true)
                      or half-duplex (false)
                      Default value is "false".
    'iload'         - Defines desired percentage of frame rate used during
                      continuous stream tests.
                      Default value is 100.
    'multistream'   - Defines number of flows simulated by traffic generator.
                      Value 0 disables MultiStream feature
                      Default value is 0.
    'stream_type'   - Stream Type is an extension of the "MultiStream" feature.
                      If MultiStream is disabled, then Stream Type will be
                      ignored. Stream Type defines ISO OSI network layer used
                      for simulation of multiple streams.
                      Default value is "L4".
Example of execution of VSPERF in "trafficgen" mode:

.. code-block:: console

    $ ./vsperf -m trafficgen --trafficgen IxNet --conf-file vsperf.conf
      --test-params "traffic_type=continuous;bidirectional=True;iload=60"
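The --test-params string is a list of 'key=value' pairs separated by
semicolons. A sketch of how such a string decomposes (vsperf's actual parser
may differ, e.g. in type conversion):

```python
# Hypothetical sketch: decompose a --test-params string into a dict.
# All values stay strings here; the real parser may convert types.
def parse_test_params(params):
    result = {}
    for item in params.split(';'):
        key, _, value = item.strip().partition('=')
        result[key] = value
    return result

params = parse_test_params("traffic_type=continuous;bidirectional=True;iload=60")
```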
Code change verification by pylint
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Every developer participating in the VSPERF project should run
pylint before their Python code is submitted for review. Project
specific configuration for pylint is available in 'pylintrc'.

Example of manual pylint invocation:

.. code-block:: console

    $ pylint --rcfile ./pylintrc ./vsperf
OVS with DPDK and QEMU
~~~~~~~~~~~~~~~~~~~~~~~

If you encounter the following error: "before (last 100 chars):
'-path=/dev/hugepages,share=on: unable to map backing store for
hugepages: Cannot allocate memory\r\n\r\n" with the PVP or PVVP
deployment scenario, check the amount of hugepages on your system:
.. code-block:: console

    $ cat /proc/meminfo | grep HugePages
By default the vswitchd is launched with 1GB of memory. To change
this, modify the --socket-mem parameter in conf/02_vswitch.conf to allocate
an appropriate amount of memory:
.. code-block:: console

    VSWITCHD_DPDK_ARGS = ['-c', '0x4', '-n', '4', '--socket-mem 1024,0']
    VSWITCHD_DPDK_CONFIG = {
        'dpdk-init' : 'true',
        'dpdk-lcore-mask' : '0x4',
        'dpdk-socket-mem' : '1024,0',
    }
Note: The option VSWITCHD_DPDK_ARGS is used for a vswitchd that supports the
--dpdk parameter. In recent vswitchd versions, the option VSWITCHD_DPDK_CONFIG
will be used to configure vswitchd via ovs-vsctl calls.
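The hugepage check above can also be scripted; the helper below merely parses
/proc/meminfo-style text and is a hypothetical convenience, not part of vsperf:

```python
# Hypothetical helper: extract HugePages_* counters from /proc/meminfo text.
def hugepage_counts(meminfo_text):
    counts = {}
    for line in meminfo_text.splitlines():
        if line.startswith('HugePages_'):
            name, value = line.split(':', 1)
            counts[name.strip()] = int(value.strip())
    return counts

# Sample input in the format produced by /proc/meminfo:
sample = (
    "HugePages_Total:    1024\n"
    "HugePages_Free:      512\n"
    "Hugepagesize:       2048 kB\n"
)
counts = hugepage_counts(sample)
```

In real use, pass the contents of /proc/meminfo to the function.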
For more information and details refer to the vSwitchPerf user guide at:
http://artifacts.opnfv.org/vswitchperf/brahmaputra/userguide/index.html