.. code-block:: console
VSWITCH = 'OvsVanilla'
- VSWITCH_VANILLA_PHY_PORT_NAMES = ['$PORT1', '$PORT2']
Where $PORT1 and $PORT2 are the Linux interfaces you'd like to bind
to the vswitch.
$ ./vsperf --conf-file=<path_to_custom_conf>/10_custom.conf
+.. _vfio-pci:
+
Using vfio_pci with DPDK
^^^^^^^^^^^^^^^^^^^^^^^^^
[ 3.335746] IOMMU: dmar1 using Queued invalidation
....
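+
+Once the IOMMU is active, the NICs can be bound to the vfio-pci driver. A
+typical manual binding with DPDK's bind script (the script name and location
+vary between DPDK versions) might look like:
+
+.. code-block:: console
+
+    $ modprobe vfio-pci
+    $ ./tools/dpdk_nic_bind.py --bind=vfio-pci 0000:05:00.0
+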
+.. _SRIOV-support:
+
+Using SRIOV support
+^^^^^^^^^^^^^^^^^^^
+
+To use the virtual functions of a NIC with SRIOV support, use the extended
+form of the NIC PCI slot definition:
+
+.. code-block:: python
+
+ WHITELIST_NICS = ['0000:05:00.0|vf0', '0000:05:00.1|vf3']
+
+Where 'vf' indicates the usage of a virtual function and the following
+number defines the VF to be used. If VF usage is detected, vswitchperf
+will enable SRIOV support for the given card and detect the PCI slot
+numbers of the selected VFs.
+
+So in the example above, one VF will be configured for NIC '0000:05:00.0'
+and four VFs will be configured for NIC '0000:05:00.1'. Vswitchperf will
+detect the PCI addresses of the selected VFs and use them during
+test execution.
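+
+A common way to create VFs is the kernel's standard sysfs SRIOV interface;
+the commands below are only an illustrative sketch of that mechanism
+(vswitchperf performs the equivalent step automatically):
+
+.. code-block:: console
+
+    # create four VFs for the second NIC (vf3 requires VF indexes 0-3)
+    $ echo 4 > /sys/bus/pci/devices/0000:05:00.1/sriov_numvfs
+    # PCI addresses of the new VFs appear as virtfn* symlinks
+    $ ls -l /sys/bus/pci/devices/0000:05:00.1/virtfn*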
+
+At the end of vswitchperf execution, SRIOV support will be disabled.
+
+SRIOV support is generic and it can be used in different testing scenarios.
+For example:
+
+* vSwitch tests with or without DPDK support to verify the impact
+  of VF usage on vSwitch performance
+* tests without a vSwitch, where traffic is forwarded directly
+  between VF interfaces by a packet forwarder (e.g. the testpmd application)
+* tests without a vSwitch, where a VM accesses VF interfaces directly
+  by PCI-passthrough_ to measure raw VM throughput performance.
+
+.. _PCI-passthrough:
+
+Using QEMU with PCI passthrough support
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Raw virtual machine throughput performance can be measured by executing the
+PVP test with direct access to NICs by PCI passthrough. To execute a VM with
+direct access to PCI devices, enable vfio-pci_. In order to use virtual
+functions, SRIOV-support_ must be enabled.
+
+Example of a test execution with PCI passthrough and the vswitch disabled:
+
+.. code-block:: console
+
+ $ ./vsperf --conf-file=<path_to_custom_conf>/10_custom.conf
+ --vswitch none --vnf QemuPciPassthrough pvp_tput
+
+Any of the supported guest-loopback-application_ types can be used inside a
+VM with PCI passthrough support.
+
+Note: Qemu with PCI passthrough support can be used only with the PVP test
+deployment.
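+
+For illustration, a guest with PCI passthrough typically receives the device
+via QEMU's vfio-pci device; a minimal sketch of the relevant command line
+fragment (the PCI address is an example and vswitchperf builds the actual
+command line itself) could be:
+
+.. code-block:: console
+
+    $ qemu-system-x86_64 ... -device vfio-pci,host=0000:05:00.0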
+
+.. _guest-loopback-application:
+
Selection of loopback application for PVP and PVVP tests
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
will not be forwarded by VM and testcases with PVP and PVVP deployments
will fail. Guest loopback application is set to 'testpmd' by default.
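+
+For example, a custom configuration file could select a different loopback
+application (a sketch; it assumes the ``GUEST_LOOPBACK`` configuration
+option, and the 'l2fwd' value is just an illustration):
+
+.. code-block:: python
+
+    GUEST_LOOPBACK = ['l2fwd']
+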
+Multi-Queue Configuration
+^^^^^^^^^^^^^^^^^^^^^^^^^
+
+VSPerf currently supports multi-queue with the following limitations:
+
+ 1. Execution of pvp/pvvp tests requires testpmd as the loopback if multi-queue
+    is enabled at the guest.
+
+ 2. Requires QemuDpdkVhostUser as the vnf.
+
+ 3. Requires the switch to be set to OvsDpdkVhost.
+
+ 4. Requires QEMU 2.5 or greater and any OVS version higher than 2.5. The
+ default upstream package versions installed by VSPerf satisfy this
+ requirement.
+
+To enable multi-queue on the switch, modify the ``02_vswitch.conf`` file.
+
+ .. code-block:: console
+
+ VSWITCH_MULTI_QUEUES = 2
+
+**NOTE:** You should consider using the switch affinity to set a PMD CPU mask
+that can optimize your performance. Consider the NUMA node of the NIC in use,
+if applicable, by checking ``/sys/class/net/<eth_name>/device/numa_node`` and
+setting an appropriate mask to create PMD threads on the same NUMA node.
+
+When multi-queue is enabled, each dpdk or dpdkvhostuser port created on the
+switch will have the multiple queues option set.
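+
+With OVS-DPDK this is typically done via the ``n_rxq`` interface option; for
+illustration, an equivalent manual call could look like:
+
+.. code-block:: console
+
+    $ ovs-vsctl set Interface dpdk0 options:n_rxq=2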
+
+To enable multi-queue on the guest, modify the ``04_vnf.conf`` file.
+
+ .. code-block:: console
+
+ GUEST_NIC_QUEUES = 2
+
+Enabling multi-queue at the guest will add multiple queues to each NIC port when
+qemu launches the guest.
+
+Testpmd should be configured to take advantage of multi-queue on the guest.
+This can be done by modifying the ``04_vnf.conf`` file.
+
+ .. code-block:: console
+
+ GUEST_TESTPMD_CPU_MASK = '-l 0,1,2,3,4'
+
+ GUEST_TESTPMD_NB_CORES = 4
+ GUEST_TESTPMD_TXQ = 2
+ GUEST_TESTPMD_RXQ = 2
+
+**NOTE:** The guest SMP cores must be configured to allow testpmd to use the
+optimal number of cores to take advantage of the multiple guest queues.
+
+**NOTE:** For optimal performance, guest SMPs should be on the same NUMA node
+as the NIC in use, if possible/applicable. Testpmd should be assigned at
+least (nb_cores + 1) total cores with the CPU mask.
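+
+For illustration, with the values above the testpmd invocation inside the
+guest would look roughly like the following sketch (vswitchperf assembles
+the actual command line, and further EAL options are omitted here):
+
+.. code-block:: console
+
+    $ testpmd -l 0,1,2,3,4 -- --nb-cores=4 --rxq=2 --txq=2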
+
Executing Packet Forwarding tests
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. code-block:: console
VSWITCHD_DPDK_ARGS = ['-c', '0x4', '-n', '4', '--socket-mem 1024,0']
+ VSWITCHD_DPDK_CONFIG = {
+ 'dpdk-init' : 'true',
+ 'dpdk-lcore-mask' : '0x4',
+ 'dpdk-socket-mem' : '1024,0',
+ }
+
+Note: The option VSWITCHD_DPDK_ARGS is used for vswitchd versions that
+support the --dpdk parameter. With recent vswitchd versions, the option
+VSWITCHD_DPDK_CONFIG is used instead to configure vswitchd via ovs-vsctl
+calls.
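+
+The ovs-vsctl calls generated from the dictionary above would be equivalent
+to the following (an illustration; the exact invocation may differ):
+
+.. code-block:: console
+
+    $ ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true
+    $ ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-lcore-mask=0x4
+    $ ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-socket-mem=1024,0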
+
More information
^^^^^^^^^^^^^^^^