- Spirent traffic generator (TestCenter hardware chassis or TestCenter virtual
in a VM) and a VM to run the Spirent Virtual Deployment Service image,
formerly known as "Spirent LabServer".
+- Xena Networks traffic generator (Xena hardware chassis) that houses the Xena
+  traffic generator modules.
+- MoonGen software traffic generator. Requires a separate machine running
+  MoonGen to execute packet generation.
If you want to use another traffic generator, please select the Dummy generator
option as shown in `Traffic generator instructions
$ wget http://artifacts.opnfv.org/vswitchperf/vloop-vnf-ubuntu-14.04_20151216.qcow2
+Newer vloop_vnf images are available. Please see the `installation
+instructions
+<http://artifacts.opnfv.org/vswitchperf/docs/configguide/installation.html>`__
+for details on these images.
+
+
vloop_vnf forwards traffic through a VM using one of:
* DPDK testpmd
* Linux Bridge
$ ./vsperf --conf-file user_settings.py
--tests RFC2544Tput
- --test-param "duration=10;pkt_sizes=128"
+ --test-params "duration=10;pkt_sizes=128"
For all available options, check out the help dialog:
.. code-block:: console
VSWITCH = 'OvsVanilla'
- VSWITCH_VANILLA_PHY_PORT_NAMES = ['$PORT1', '$PORT2']
Where $PORT1 and $PORT2 are the Linux interfaces you'd like to bind
to the vswitch.
or use --test-params
$ ./vsperf --conf-file=<path_to_custom_conf>/10_custom.conf
- --test-param "vanilla_tgen_tx_ip=n.n.n.n;
+ --test-params "vanilla_tgen_tx_ip=n.n.n.n;
vanilla_tgen_tx_mac=nn:nn:nn:nn:nn:nn"
$ ./vsperf --conf-file=<path_to_custom_conf>/10_custom.conf
+.. _vfio-pci:
+
Using vfio_pci with DPDK
^^^^^^^^^^^^^^^^^^^^^^^^^
[ 3.335746] IOMMU: dmar1 using Queued invalidation
....
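+
+Before binding NICs to vfio-pci, the IOMMU support shown above can be
+verified manually. A minimal sketch (Intel platform assumed; the kernel
+parameter and bind procedure depend on your platform and DPDK version):
+
+.. code-block:: console
+
+    # verify the kernel was booted with IOMMU support enabled
+    $ grep 'intel_iommu=on' /proc/cmdline
+    # verify IOMMU initialization in the kernel log
+    $ dmesg | grep -i -e DMAR -e IOMMU
+    # load the vfio-pci module; whitelisted NICs are then bound to it
+    $ sudo modprobe vfio-pci
+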
+.. _SRIOV-support:
+
+Using SRIOV support
+^^^^^^^^^^^^^^^^^^^
+
+To use virtual functions of a NIC with SRIOV support, use the extended form
+of the NIC PCI slot definition:
+
+.. code-block:: python
+
+ WHITELIST_NICS = ['0000:05:00.0|vf0', '0000:05:00.1|vf3']
+
+Where 'vf' indicates virtual function usage and the following number defines
+the VF to be used. When VF usage is detected, vswitchperf will enable SRIOV
+support for the given card and detect the PCI slot numbers of the selected
+VFs.
+
+So in the example above, one VF will be configured for NIC '0000:05:00.0' and
+four VFs will be configured for NIC '0000:05:00.1' (VFs are numbered from
+zero, so 'vf3' requires four of them). Vswitchperf will detect the PCI
+addresses of the selected VFs and use them during test execution.
+
+At the end of vswitchperf execution, SRIOV support will be disabled.
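+
+For illustration, VF creation can be performed through the standard
+sriov_numvfs sysfs interface; vswitchperf enables SRIOV for the given card
+automatically, so the manual sketch below only shows the underlying Linux
+mechanism for the example above:
+
+.. code-block:: console
+
+    # create four VFs on the PF in PCI slot 0000:05:00.1,
+    # so that virtual function 'vf3' becomes available
+    $ echo 4 | sudo tee /sys/bus/pci/devices/0000:05:00.1/sriov_numvfs
+    # list the PCI addresses of the new VFs
+    $ ls -l /sys/bus/pci/devices/0000:05:00.1/virtfn*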
+
+SRIOV support is generic and can be used in different testing scenarios.
+For example:
+
+* vSwitch tests with or without DPDK support to verify the impact
+  of VF usage on vSwitch performance
+* tests without a vSwitch, where traffic is forwarded directly
+  between VF interfaces by a packet forwarder (e.g. the testpmd application)
+* tests without a vSwitch, where the VM accesses VF interfaces directly
+  by PCI-passthrough_ to measure raw VM throughput performance.
+
+.. _PCI-passthrough:
+
+Using QEMU with PCI passthrough support
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Raw virtual machine throughput performance can be measured by executing the
+PVP test with direct access to NICs via PCI passthrough. To execute a VM with
+direct access to PCI devices, enable vfio-pci_. In order to use virtual
+functions, SRIOV-support_ must be enabled.
+
+Execution of a test with PCI passthrough and the vswitch disabled:
+
+.. code-block:: console
+
+ $ ./vsperf --conf-file=<path_to_custom_conf>/10_custom.conf
+ --vswitch none --vnf QemuPciPassthrough pvp_tput
+
+Any supported guest-loopback-application_ can be used inside a VM with
+PCI passthrough support.
+
+Note: QEMU with PCI passthrough support can be used only with the PVP test
+deployment.
+
+.. _guest-loopback-application:
+
Selection of loopback application for PVP and PVVP tests
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. code-block:: console
$ ./vsperf --conf-file=<path_to_custom_conf>/10_custom.conf
- --test-param "guest_loopback=testpmd"
+ --test-params "guest_loopback=testpmd"
Supported loopback applications are:
will not be forwarded by the VM and testcases with PVP and PVVP deployments
will fail. The guest loopback application is set to 'testpmd' by default.
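+
+As an alternative to the CLI parameter, the loopback application can be set
+in a custom configuration file; a sketch, assuming the option is named
+GUEST_LOOPBACK and holds one entry per VM:
+
+.. code-block:: python
+
+    # one entry per VM, e.g. for a PVVP test with two VMs
+    GUEST_LOOPBACK = ['testpmd', 'l2fwd']
+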
+Multi-Queue Configuration
+^^^^^^^^^^^^^^^^^^^^^^^^^
+
+VSPerf currently supports multi-queue with the following limitations:
+
+ 1. Execution of PVP/PVVP tests requires testpmd as the loopback if multi-queue
+    is enabled at the guest.
+
+ 2. Requires QemuDpdkVhostUser as the vnf.
+
+ 3. Requires the switch to be set to OvsDpdkVhost.
+
+ 4. Requires QEMU 2.5 or greater and any OVS version higher than 2.5. The
+ default upstream package versions installed by VSPerf satisfy this
+ requirement.
+
+To enable multi-queue on the switch, modify the ``02_vswitch.conf`` file:
+
+ .. code-block:: console
+
+ VSWITCH_MULTI_QUEUES = 2
+
+**NOTE:** You should consider using the switch affinity to set a PMD CPU mask
+that optimizes your performance. Take the NUMA node of the NIC in use into
+account, where applicable, by checking /sys/class/net/<eth_name>/device/numa_node
+and setting an appropriate mask to create PMD threads on the same NUMA node.
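+
+For example, to check the NUMA node of the NIC and the CPUs local to it
+(interface name illustrative):
+
+.. code-block:: console
+
+    # NUMA node to which the NIC is attached
+    $ cat /sys/class/net/eth1/device/numa_node
+    # CPUs on that NUMA node, to guide the PMD CPU mask selection
+    $ lscpu | grep 'NUMA node0'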
+
+When multi-queue is enabled, each dpdk or dpdkvhostuser port that is created
+on the switch will set the option for multiple queues.
+
+To enable multi-queue on the guest, modify the ``04_vnf.conf`` file:
+
+ .. code-block:: console
+
+ GUEST_NIC_QUEUES = 2
+
+Enabling multi-queue at the guest will add multiple queues to each NIC port when
+qemu launches the guest.
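+
+For reference, the relevant QEMU options have roughly the following shape
+(an illustrative sketch with two queues, not the exact command line generated
+by VSPerf):
+
+.. code-block:: console
+
+    -chardev socket,id=char0,path=/tmp/dpdkvhostuser0
+    -netdev type=vhost-user,id=net0,chardev=char0,vhostforce,queues=2
+    -device virtio-net-pci,netdev=net0,mq=on,vectors=6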
+
+Testpmd should be configured to take advantage of multi-queue on the guest.
+This can be done by modifying the ``04_vnf.conf`` file:
+
+ .. code-block:: console
+
+ GUEST_TESTPMD_CPU_MASK = '-l 0,1,2,3,4'
+
+ GUEST_TESTPMD_NB_CORES = 4
+ GUEST_TESTPMD_TXQ = 2
+ GUEST_TESTPMD_RXQ = 2
+
+**NOTE:** The guest SMP cores must be configured to allow testpmd to use the
+optimal number of cores to take advantage of the multiple guest queues.
+
+**NOTE:** For optimal performance, guest SMP cores should be on the same NUMA
+node as the NIC in use, if possible/applicable. Testpmd should be assigned at
+least (nb_cores + 1) total cores with the CPU mask.
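+
+Inside the guest, the configured number of NIC queues can be verified with
+ethtool (interface name illustrative):
+
+.. code-block:: console
+
+    $ ethtool -l eth0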
+
Executing Packet Forwarding tests
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
# testpmd configuration
TESTPMD_ARGS = []
- # packet forwarding mode: io|mac|mac_retry|macswap|flowgen|rxonly|txonly|csum|icmpecho
+ # packet forwarding mode supported by testpmd; Please see DPDK documentation
+ # for comprehensive list of modes supported by your version.
+ # e.g. io|mac|mac_retry|macswap|flowgen|rxonly|txonly|csum|icmpecho|...
+ # Note: Option "mac_retry" has been changed to "mac retry" since DPDK v16.07
TESTPMD_FWD_MODE = 'csum'
# checksum calculation layer: ip|udp|tcp|sctp|outer-ip
TESTPMD_CSUM_LAYER = 'ip'
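+
+A packet forwarding test can then be launched with the vswitch disabled;
+a sketch, assuming testpmd is selected via the --fwdapp option:
+
+.. code-block:: console
+
+    $ ./vsperf --conf-file user_settings.py --vswitch none
+               --fwdapp TestPMD phy2phy_tput
+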
"trafficgen-pause" - execute vSwitch and VNF but wait before traffic transmission
In case VSPERF is executed in "trafficgen" mode, the configuration
-of traffic generator should be configured through --test-param option.
+of the traffic generator should be done through the --test-params option.
Supported CLI options useful for traffic generator configuration are:
.. code-block:: console
.. code-block:: console
VSWITCHD_DPDK_ARGS = ['-c', '0x4', '-n', '4', '--socket-mem 1024,0']
+ VSWITCHD_DPDK_CONFIG = {
+ 'dpdk-init' : 'true',
+ 'dpdk-lcore-mask' : '0x4',
+ 'dpdk-socket-mem' : '1024,0',
+ }
+
+Note: Option VSWITCHD_DPDK_ARGS is used for vswitchd versions that support
+the --dpdk parameter. For more recent vswitchd versions, option
+VSWITCHD_DPDK_CONFIG is used to configure vswitchd via ovs-vsctl calls.
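+
+For reference, the VSWITCHD_DPDK_CONFIG values above map to ovs-vsctl calls
+roughly of the following form:
+
+.. code-block:: console
+
+    $ sudo ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true
+    $ sudo ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-lcore-mask=0x4
+    $ sudo ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-socket-mem=1024,0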
+
More information
^^^^^^^^^^^^^^^^
For more information and details refer to the vSwitchPerf user guide at:
-http://artifacts.opnfv.org/vswitchperf/brahmaputra/userguide/index.html
+http://artifacts.opnfv.org/vswitchperf/docs/userguide/index.html