.. This work is licensed under a Creative Commons Attribution 4.0 International License.
.. http://creativecommons.org/licenses/by/4.0
.. (c) OPNFV, Intel Corporation, AT&T and others.

List of vswitchperf testcases
-----------------------------

Performance testcases
^^^^^^^^^^^^^^^^^^^^^

============================= ====================================================================
Testcase Name                 Description
============================= ====================================================================
phy2phy_tput                  LTD.Throughput.RFC2544.PacketLossRatio
phy2phy_forwarding            LTD.Forwarding.RFC2889.MaxForwardingRate
phy2phy_learning              LTD.AddrLearning.RFC2889.AddrLearningRate
phy2phy_caching               LTD.AddrCaching.RFC2889.AddrCachingCapacity
back2back                     LTD.Throughput.RFC2544.BackToBackFrames
phy2phy_tput_mod_vlan         LTD.Throughput.RFC2544.PacketLossRatioFrameModification
phy2phy_cont                  Phy2Phy Continuous Stream
pvp_cont                      PVP Continuous Stream
pvvp_cont                     PVVP Continuous Stream
pvpv_cont                     Two VMs in parallel with Continuous Stream
phy2phy_scalability           LTD.Scalability.Flows.RFC2544.0PacketLoss
pvp_tput                      LTD.Throughput.RFC2544.PacketLossRatio
pvp_back2back                 LTD.Throughput.RFC2544.BackToBackFrames
pvvp_tput                     LTD.Throughput.RFC2544.PacketLossRatio
pvvp_back2back                LTD.Throughput.RFC2544.BackToBackFrames
phy2phy_cpu_load              LTD.CPU.RFC2544.0PacketLoss
phy2phy_mem_load              LTD.Memory.RFC2544.0PacketLoss
phy2phy_tput_vpp              VPP: LTD.Throughput.RFC2544.PacketLossRatio
phy2phy_cont_vpp              VPP: Phy2Phy Continuous Stream
phy2phy_back2back_vpp         VPP: LTD.Throughput.RFC2544.BackToBackFrames
pvp_tput_vpp                  VPP: LTD.Throughput.RFC2544.PacketLossRatio
pvp_cont_vpp                  VPP: PVP Continuous Stream
pvp_back2back_vpp             VPP: LTD.Throughput.RFC2544.BackToBackFrames
pvvp_tput_vpp                 VPP: LTD.Throughput.RFC2544.PacketLossRatio
pvvp_cont_vpp                 VPP: PVVP Continuous Stream
pvvp_back2back_vpp            VPP: LTD.Throughput.RFC2544.BackToBackFrames
============================= ====================================================================

The list of performance testcases above can be obtained by executing:

.. code-block:: console

    $ ./vsperf --list

Integration testcases
^^^^^^^^^^^^^^^^^^^^^

====================================== ========================================================================================
Testcase Name                          Description
====================================== ========================================================================================
vswitch_vports_add_del_flow            vSwitch - configure switch with vports, add and delete flow
vswitch_add_del_flows                  vSwitch - add and delete flows
vswitch_p2p_tput                       vSwitch - configure switch and execute RFC2544 throughput test
vswitch_p2p_back2back                  vSwitch - configure switch and execute RFC2544 back2back test
vswitch_p2p_cont                       vSwitch - configure switch and execute RFC2544 continuous stream test
vswitch_pvp                            vSwitch - configure switch and one vnf
vswitch_vports_pvp                     vSwitch - configure switch with vports and one vnf
vswitch_pvp_tput                       vSwitch - configure switch, vnf and execute RFC2544 throughput test
vswitch_pvp_back2back                  vSwitch - configure switch, vnf and execute RFC2544 back2back test
vswitch_pvp_cont                       vSwitch - configure switch, vnf and execute RFC2544 continuous stream test
vswitch_pvp_all                        vSwitch - configure switch, vnf and execute all test types
vswitch_pvvp                           vSwitch - configure switch and two vnfs
vswitch_pvvp_tput                      vSwitch - configure switch, two chained vnfs and execute RFC2544 throughput test
vswitch_pvvp_back2back                 vSwitch - configure switch, two chained vnfs and execute RFC2544 back2back test
vswitch_pvvp_cont                      vSwitch - configure switch, two chained vnfs and execute RFC2544 continuous stream test
vswitch_pvvp_all                       vSwitch - configure switch, two chained vnfs and execute all test types
vswitch_p4vp_tput                      4 chained vnfs, execute RFC2544 throughput test, deployment pvvp4
vswitch_p4vp_back2back                 4 chained vnfs, execute RFC2544 back2back test, deployment pvvp4
vswitch_p4vp_cont                      4 chained vnfs, execute RFC2544 continuous stream test, deployment pvvp4
vswitch_p4vp_all                       4 chained vnfs, execute RFC2544 throughput tests, deployment pvvp4
2pvp_udp_dest_flows                    RFC2544 Continuous TC with 2 Parallel VMs, flows on UDP Dest Port, deployment pvpv2
4pvp_udp_dest_flows                    RFC2544 Continuous TC with 4 Parallel VMs, flows on UDP Dest Port, deployment pvpv4
6pvp_udp_dest_flows                    RFC2544 Continuous TC with 6 Parallel VMs, flows on UDP Dest Port, deployment pvpv6
vhost_numa_awareness                   vSwitch DPDK - verify that PMD threads are served by the same NUMA slot as QEMU instances
ixnet_pvp_tput_1nic                    PVP scenario with 1 port towards IXIA
vswitch_vports_add_del_connection_vpp  VPP: vSwitch - configure switch with vports, add and delete connection
p2p_l3_multi_IP_ovs                    OVS: P2P L3 multistream with unique flow for each IP stream
p2p_l3_multi_IP_mask_ovs               OVS: P2P L3 multistream with 1 flow for /8 net mask
pvp_l3_multi_IP_mask_ovs               OVS: PVP L3 multistream with 1 flow for /8 net mask
pvvp_l3_multi_IP_mask_ovs              OVS: PVVP L3 multistream with 1 flow for /8 net mask
p2p_l4_multi_PORT_ovs                  OVS: P2P L4 multistream with unique flow for each IP stream
p2p_l4_multi_PORT_mask_ovs             OVS: P2P L4 multistream with 1 flow for /8 net and port mask
pvp_l4_multi_PORT_mask_ovs             OVS: PVP L4 multistream flows for /8 net and port mask
pvvp_l4_multi_PORT_mask_ovs            OVS: PVVP L4 multistream with flows for /8 net and port mask
p2p_l3_multi_IP_arp_vpp                VPP: P2P L3 multistream with unique ARP entry for each IP stream
p2p_l3_multi_IP_mask_vpp               VPP: P2P L3 multistream with 1 route for /8 net mask
p2p_l3_multi_IP_routes_vpp             VPP: P2P L3 multistream with unique route for each IP stream
pvp_l3_multi_IP_mask_vpp               VPP: PVP L3 multistream with route for /8 netmask
pvvp_l3_multi_IP_mask_vpp              VPP: PVVP L3 multistream with route for /8 netmask
p2p_l4_multi_PORT_arp_vpp              VPP: P2P L4 multistream with unique ARP entry for each IP stream and port check
p2p_l4_multi_PORT_mask_vpp             VPP: P2P L4 multistream with 1 route for /8 net mask and port check
p2p_l4_multi_PORT_routes_vpp           VPP: P2P L4 multistream with unique route for each IP stream and port check
pvp_l4_multi_PORT_mask_vpp             VPP: PVP L4 multistream with route for /8 net and port mask
pvvp_l4_multi_PORT_mask_vpp            VPP: PVVP L4 multistream with route for /8 net and port mask
vxlan_multi_IP_mask_ovs                OVS: VxLAN L3 multistream
vxlan_multi_IP_arp_vpp                 VPP: VxLAN L3 multistream with unique ARP entry for each IP stream
vxlan_multi_IP_mask_vpp                VPP: VxLAN L3 multistream with 1 route for /8 netmask
====================================== ========================================================================================

The list of integration testcases above can be obtained by executing:

.. code-block:: console

    $ ./vsperf --integration --list

OVS/DPDK Regression TestCases
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

These regression tests verify several DPDK features used internally by Open vSwitch.
They can be used to verify the performance and correct functionality of upcoming DPDK
and OVS releases and release candidates.

These tests are part of the integration testcases and must be executed with the
``--integration`` CLI parameter.

Example of execution of all OVS/DPDK regression tests:

.. code-block:: console

    $ ./vsperf --integration --tests ovsdpdk_

Testcases are defined in the file ``conf/integration/01b_dpdk_regression_tests.conf``.
This file contains a set of configuration options with the prefix ``OVSDPDK_``. These
parameters can be used to customize the regression tests and will override some of the
standard VSPERF configuration options. It is recommended to check the OVSDPDK
configuration parameters and to modify them in accordance with the VSPERF configuration.

At least the following parameters should be examined. Their values shall ensure that
DPDK and QEMU threads are pinned to CPU cores of the same NUMA slot where the tested
NICs are connected.

.. code-block:: python

    _OVSDPDK_1st_PMD_CORE
    _OVSDPDK_2nd_PMD_CORE
    _OVSDPDK_GUEST_5_CORES
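
For example, on a host where the tested NICs are attached to NUMA node 0, these
parameters could be overridden in a custom configuration file as sketched below. The
core numbers and the guest core format are illustrative assumptions only; check your
topology first (e.g. with ``numactl --hardware`` or ``lscpu``) and follow your VSPERF
guest core configuration.

.. code-block:: python

    # Illustrative values only: pick cores from the NUMA node that hosts
    # the NICs under test (verify with "numactl --hardware" or "lscpu").
    _OVSDPDK_1st_PMD_CORE = 4
    _OVSDPDK_2nd_PMD_CORE = 5
    # Cores used by the QEMU guest; keep them on the same NUMA node.
    # The value format below is an assumption for illustration.
    _OVSDPDK_GUEST_5_CORES = '6,7,8,9,10'
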

DPDK NIC Support
++++++++++++++++

A set of performance tests to verify support of DPDK accelerated network interface
cards. Testcases use the standard physical to physical network scenario with several
vSwitch and traffic configurations, which include one and two PMD threads, uni- and
bidirectional traffic, and RFC2544 Continuous or RFC2544 Throughput with 0% packet
loss traffic types.

======================================== ======================================================================================
Testcase Name                            Description
======================================== ======================================================================================
ovsdpdk_nic_p2p_single_pmd_unidir_cont   P2P with a single PMD in OVS and unidirectional Continuous traffic.
ovsdpdk_nic_p2p_single_pmd_bidir_cont    P2P with a single PMD in OVS and bidirectional Continuous traffic.
ovsdpdk_nic_p2p_two_pmd_bidir_cont       P2P with two PMDs in OVS and bidirectional Continuous traffic.
ovsdpdk_nic_p2p_single_pmd_unidir_tput   P2P with a single PMD in OVS and unidirectional Throughput traffic.
ovsdpdk_nic_p2p_single_pmd_bidir_tput    P2P with a single PMD in OVS and bidirectional Throughput traffic.
ovsdpdk_nic_p2p_two_pmd_bidir_tput       P2P with two PMDs in OVS and bidirectional Throughput traffic.
======================================== ======================================================================================

DPDK Hotplug Support
++++++++++++++++++++

A set of functional tests to verify DPDK hotplug support. The tests verify that it is
possible to use a port which was not bound to the DPDK driver during vSwitch startup.
There is also a test which verifies the possibility to detach a port from the DPDK
driver. However, support for manual detachment of a port from DPDK has been removed
from recent OVS versions, so this testcase is expected to fail.

======================================== ======================================================================================
Testcase Name                            Description
======================================== ======================================================================================
ovsdpdk_hotplug_attach                   Ensure successful port-add after binding a device to igb_uio after
                                         ovs-vswitchd is launched.
ovsdpdk_hotplug_detach                   Same as ovsdpdk_hotplug_attach, but delete and detach the device
                                         after the hotplug. Note: support of netdev-dpdk/detach has been
                                         removed from OVS, so this testcase will fail with recent OVS/DPDK
                                         versions.
======================================== ======================================================================================

RX Checksum Support
+++++++++++++++++++

A set of functional tests for verification of RX checksum calculation for tunneled
traffic. Open vSwitch enables RX checksum offloading by default if the NIC supports
it. Note that it is not possible to disable or enable RX checksum offloading. In order
to verify correct RX checksum calculation in software, the user has to execute these
testcases on a NIC without HW offloading capabilities.

Testcases utilize the existing overlay physical to physical (op2p) network deployment
implemented in vsperf. This deployment expects that the traffic generator sends
unidirectional tunneled traffic (e.g. vxlan), and Open vSwitch performs the
decapsulation and sends the data back to the traffic generator via the second port.
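
As a reference for what the software validation computes: the IPv4 and UDP header
checksums are both the 16-bit ones' complement of the ones' complement sum of the
covered data (RFC 1071). A minimal Python sketch of this calculation (illustrative,
not part of vsperf):

.. code-block:: python

    def internet_checksum(data: bytes) -> int:
        """RFC 1071 ones' complement checksum over 16-bit words."""
        if len(data) % 2:                # pad odd-length input with a zero byte
            data += b'\x00'
        total = 0
        for i in range(0, len(data), 2):
            total += (data[i] << 8) | data[i + 1]
            total = (total & 0xFFFF) + (total >> 16)   # fold carry bits
        return ~total & 0xFFFF

    # IPv4 header with its checksum field zeroed: the computed value is the
    # checksum 0xB861 that the original header carries.
    hdr = bytes.fromhex('450000730000400040110000c0a80001c0a800c7')
    assert internet_checksum(hdr) == 0xb861
    # Validation: a header that includes its own checksum sums to zero.
    full = bytes.fromhex('45000073000040004011b861c0a80001c0a800c7')
    assert internet_checksum(full) == 0
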

======================================== ======================================================================================
Testcase Name                            Description
======================================== ======================================================================================
ovsdpdk_checksum_l3                      Test verifies RX IP header checksum (offloading) validation for
                                         tunneled traffic.
ovsdpdk_checksum_l4                      Test verifies RX UDP header checksum (offloading) validation for
                                         tunneled traffic.
======================================== ======================================================================================

Flow Control Support
++++++++++++++++++++

A set of functional testcases for the validation of flow control support in Open
vSwitch with DPDK support. If flow control is enabled in both OVS and the traffic
generator, and the network endpoint (OVS or TGEN) is not able to process incoming
data, it detects an RX buffer overflow and sends an ethernet pause frame (as defined
in 802.3x) to the TX side. This mechanism ensures that the TX side slows down traffic
transmission, so no data is lost at the RX side.

The introduced testcases use a physical to physical scenario to forward data between
the traffic generator ports. It is expected that the processing of small frames in OVS
is slower than line rate. This means that with flow control disabled, the traffic
generator will report a frame loss. On the other hand, with flow control enabled,
there should be 0% frame loss reported by the traffic generator.
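
For manual experiments outside of vsperf, flow control on a DPDK physical port is
configured through OVSDB interface options; a minimal sketch, assuming a port named
``dpdk0`` (option names as documented for OVS with DPDK):

.. code-block:: console

    $ ovs-vsctl set Interface dpdk0 options:rx-flow-ctrl=true
    $ ovs-vsctl set Interface dpdk0 options:tx-flow-ctrl=true
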

======================================== ======================================================================================
Testcase Name                            Description
======================================== ======================================================================================
ovsdpdk_flow_ctrl_rx                     Test the rx flow control functionality of DPDK PHY ports.
ovsdpdk_flow_ctrl_rx_dynamic             Change the rx flow control support at run time and ensure the system
                                         honors the change.
======================================== ======================================================================================

Multiqueue Support
++++++++++++++++++

A set of functional testcases for validation of multiqueue support for both physical
and vHost User DPDK ports. Testcases utilize P2P and PVP network deployments and the
native support for multiqueue configuration available in VSPERF.
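
Outside of VSPERF's native support, the equivalent manual configuration uses the
``n_rxq`` interface option and, optionally, an explicit PMD/RXQ affinity; a sketch
assuming a physical port ``dpdk0`` and illustrative core numbers:

.. code-block:: console

    $ ovs-vsctl set Interface dpdk0 options:n_rxq=2
    $ ovs-vsctl set Interface dpdk0 other_config:pmd-rxq-affinity="0:2,1:4"
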

======================================== ======================================================================================
Testcase Name                            Description
======================================== ======================================================================================
ovsdpdk_mq_p2p_rxqs                      Setup rxqs on NIC port.
ovsdpdk_mq_p2p_rxqs_same_core_affinity   Affinitize rxqs to the same core.
ovsdpdk_mq_p2p_rxqs_multi_core_affinity  Affinitize rxqs to separate cores.
ovsdpdk_mq_pvp_rxqs                      Setup rxqs on vhost user port.
ovsdpdk_mq_pvp_rxqs_linux_bridge         Confirm traffic received over vhost RXQs with Linux virtio device in
                                         guest.
ovsdpdk_mq_pvp_rxqs_testpmd              Confirm traffic received over vhost RXQs with DPDK device in guest.
======================================== ======================================================================================

Vhost User
++++++++++

A set of functional testcases for validation of vHost User Client and vHost User
Server modes in OVS.

**NOTE:** Vhost User Server mode is deprecated and it will be removed from OVS
in a future release.
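
In client mode, OVS connects to a socket created by QEMU and the socket path is given
by the ``vhost-server-path`` option; a sketch assuming a bridge ``br0``, a
hypothetical port name and a hypothetical socket path:

.. code-block:: console

    $ ovs-vsctl add-port br0 vhostclient0 -- set Interface vhostclient0 \
          type=dpdkvhostuserclient \
          options:vhost-server-path=/tmp/vhostclient0.sock
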

======================================== ======================================================================================
Testcase Name                            Description
======================================== ======================================================================================
ovsdpdk_vhostuser_client                 Test vhost-user client mode.
ovsdpdk_vhostuser_client_reconnect       Test vhost-user client mode reconnect feature.
ovsdpdk_vhostuser_server                 Test vhost-user server mode.
ovsdpdk_vhostuser_sock_dir               Verify functionality of the vhost-sock-dir flag.
======================================== ======================================================================================

Virtual Devices Support
+++++++++++++++++++++++

A set of functional testcases for verification of correct functionality of virtual
devices in OVS.

======================================== ======================================================================================
Testcase Name                            Description
======================================== ======================================================================================
ovsdpdk_vdev_add_null_pmd                Test addition of a port using the null DPDK PMD driver.
ovsdpdk_vdev_del_null_pmd                Test deletion of a port using the null DPDK PMD driver.
ovsdpdk_vdev_add_af_packet_pmd           Test addition of a port using the af_packet DPDK PMD driver.
ovsdpdk_vdev_del_af_packet_pmd           Test deletion of a port using the af_packet DPDK PMD driver.
======================================== ======================================================================================

NUMA Support
++++++++++++

A functional testcase for validation of the NUMA awareness feature in OVS.

======================================== ======================================================================================
Testcase Name                            Description
======================================== ======================================================================================
ovsdpdk_numa                             Test vhost-user NUMA support. Vhost-user PMD threads should migrate
                                         to the same NUMA slot where QEMU is executed.
======================================== ======================================================================================

Jumbo Frame Support
+++++++++++++++++++

A set of functional testcases for verification of jumbo frame support in OVS.
Testcases utilize P2P and PVP network deployments and the native support for jumbo
frames available in VSPERF.
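
Jumbo frame support on a DPDK port is controlled by the ``mtu_request`` column of the
Interface table; a sketch assuming a physical port ``dpdk0``:

.. code-block:: console

    $ ovs-vsctl set Interface dpdk0 mtu_request=9000
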

============================================ ==================================================================================
Testcase Name                                Description
============================================ ==================================================================================
ovsdpdk_jumbo_increase_mtu_phy_port_ovsdb    Ensure that the increased MTU for a DPDK physical port is updated in
                                             the ovsdb.
ovsdpdk_jumbo_increase_mtu_vport_ovsdb       Ensure that the increased MTU for a DPDK vhost-user port is updated
                                             in the ovsdb.
ovsdpdk_jumbo_reduce_mtu_phy_port_ovsdb      Ensure that the reduced MTU for a DPDK physical port is updated in
                                             the ovsdb.
ovsdpdk_jumbo_reduce_mtu_vport_ovsdb         Ensure that the reduced MTU for a DPDK vhost-user port is updated in
                                             the ovsdb.
ovsdpdk_jumbo_increase_mtu_phy_port_datapath Ensure that the MTU for a DPDK physical port is updated in the
                                             datapath itself when increased to a valid value.
ovsdpdk_jumbo_increase_mtu_vport_datapath    Ensure that the MTU for a DPDK vhost-user port is updated in the
                                             datapath itself when increased to a valid value.
ovsdpdk_jumbo_reduce_mtu_phy_port_datapath   Ensure that the MTU for a DPDK physical port is updated in the
                                             datapath itself when decreased to a valid value.
ovsdpdk_jumbo_reduce_mtu_vport_datapath      Ensure that the MTU for a DPDK vhost-user port is updated in the
                                             datapath itself when decreased to a valid value.
ovsdpdk_jumbo_mtu_upper_bound_phy_port       Verify that the upper bound limit is enforced for OvS DPDK Phy ports.
ovsdpdk_jumbo_mtu_upper_bound_vport          Verify that the upper bound limit is enforced for OvS DPDK vhost-user
                                             ports.
ovsdpdk_jumbo_mtu_lower_bound_phy_port       Verify that the lower bound limit is enforced for OvS DPDK Phy ports.
ovsdpdk_jumbo_mtu_lower_bound_vport          Verify that the lower bound limit is enforced for OvS DPDK vhost-user
                                             ports.
ovsdpdk_jumbo_p2p                            Ensure that jumbo frames are received, processed and forwarded
                                             correctly by DPDK physical ports.
ovsdpdk_jumbo_pvp                            Ensure that jumbo frames are received, processed and forwarded
                                             correctly by DPDK vhost-user ports.
ovsdpdk_jumbo_p2p_upper_bound                Ensure that jumbo frames above the configured Rx port's MTU are not
                                             accepted.
============================================ ==================================================================================

Rate Limiting
+++++++++++++

A set of functional testcases for validation of rate limiting support. This feature
allows the configuration of an ingress policer for both physical and vHost User DPDK
ports.

**NOTE:** The desired maximum rate is specified in kilobits per second and it defines
the rate of the payload only.
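
Ingress policing is configured directly on the interface; a sketch limiting a
hypothetical port ``dpdk0`` to roughly 10 Mbit/s of payload (rate in kbps, burst
in kb):

.. code-block:: console

    $ ovs-vsctl set Interface dpdk0 ingress_policing_rate=10000 \
                                    ingress_policing_burst=1000
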

======================================== ======================================================================================
Testcase Name                            Description
======================================== ======================================================================================
ovsdpdk_rate_create_phy_port             Ensure a rate limiting interface can be created on a physical DPDK
                                         port.
ovsdpdk_rate_delete_phy_port             Ensure a rate limiting interface can be destroyed on a physical DPDK
                                         port.
ovsdpdk_rate_create_vport                Ensure a rate limiting interface can be created on a vhost-user port.
ovsdpdk_rate_delete_vport                Ensure a rate limiting interface can be destroyed on a vhost-user
                                         port.
ovsdpdk_rate_no_policing                 Ensure that when a user attempts to create a rate limiting interface
                                         but omits the policing rate argument, no rate limiter is created.
ovsdpdk_rate_no_burst                    Ensure that when a user attempts to create a rate limiting interface
                                         but omits the policing burst argument, a rate limiter is created.
ovsdpdk_rate_p2p                         Ensure that when a user creates a rate limiting physical interface,
                                         the traffic is limited to the specified policer rate in a p2p setup.
ovsdpdk_rate_pvp                         Ensure that when a user creates a rate limiting vHost User interface,
                                         the traffic is limited to the specified policer rate in a pvp setup.
ovsdpdk_rate_p2p_multi_pkt_sizes         Ensure that rate limiting works for various frame sizes.
======================================== ======================================================================================

Quality of Service
++++++++++++++++++

A set of functional testcases for validation of QoS support. This feature allows the
configuration of an egress policer for both physical and vHost User DPDK ports.

**NOTE:** The desired maximum rate is specified in bytes per second and it defines
the rate of the payload only.
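
Egress policing is configured through a QoS record of type ``egress-policer``, with
``cir`` in bytes per second and ``cbs`` in bytes; a sketch for a hypothetical port
``dpdk0``:

.. code-block:: console

    $ ovs-vsctl set port dpdk0 qos=@qos0 -- \
          --id=@qos0 create qos type=egress-policer \
          other-config:cir=46000000 other-config:cbs=2048
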

======================================== ======================================================================================
Testcase Name                            Description
======================================== ======================================================================================
ovsdpdk_qos_create_phy_port              Ensure a QoS policy can be created on a physical DPDK port.
ovsdpdk_qos_delete_phy_port              Ensure an existing QoS policy can be destroyed on a physical DPDK
                                         port.
ovsdpdk_qos_create_vport                 Ensure a QoS policy can be created on a virtual vhost user port.
ovsdpdk_qos_delete_vport                 Ensure an existing QoS policy can be destroyed on a vhost user port.
ovsdpdk_qos_create_no_cir                Ensure that a QoS policy cannot be created if the egress policer cir
                                         argument is missing.
ovsdpdk_qos_create_no_cbs                Ensure that a QoS policy cannot be created if the egress policer cbs
                                         argument is missing.
ovsdpdk_qos_p2p                          In a p2p setup, ensure when a QoS egress policer is created that the
                                         traffic is limited to the specified rate.
ovsdpdk_qos_pvp                          In a pvp setup, ensure when a QoS egress policer is created that the
                                         traffic is limited to the specified rate.
======================================== ======================================================================================

Custom Statistics
+++++++++++++++++

A set of functional testcases for validation of the Custom Statistics support in OVS.
This feature allows Custom Statistics to be accessed by VSPERF.

These testcases require DPDK v17.11, the latest Open vSwitch (v2.9.90) and the IxNet
traffic generator.
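
Custom statistics are exposed through the ``statistics`` column of the Interface
table and can also be inspected manually; a sketch for a hypothetical port ``dpdk0``:

.. code-block:: console

    $ ovs-vsctl get Interface dpdk0 statistics
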

======================================== ======================================================================================
Testcase Name                            Description
======================================== ======================================================================================
ovsdpdk_custstat_check                   Test if custom statistics are supported.
ovsdpdk_custstat_rx_error                Test bad ethernet CRC counter 'rx_crc_errors' exposed by custom
                                         statistics.
======================================== ======================================================================================

T-Rex in VM TestCases
^^^^^^^^^^^^^^^^^^^^^

A set of functional testcases which use T-Rex running in a VM as a traffic generator.
These testcases require a VM image with the T-Rex server installed. An example of such
an image is a vloop-vnf image with T-Rex, available for download at:

http://artifacts.opnfv.org/vswitchperf/vnf/vloop-vnf-ubuntu-16.04_trex_20180209.qcow2

This image can be used for both the T-Rex VM and the loopback VM in ``vm2vm`` testcases.

**NOTE:** The performance of T-Rex running inside a VM is lower compared to T-Rex
execution on bare metal. The user should calibrate the VM's maximum FPS capability to
ensure this limitation is understood.

======================================== ======================================================================================
Testcase Name                            Description
======================================== ======================================================================================
trex_vm_cont                             T-Rex VM - execute RFC2544 Continuous Stream from T-Rex VM and loop
                                         it back through Open vSwitch.
trex_vm_tput                             T-Rex VM - execute RFC2544 Throughput from T-Rex VM and loop it back
                                         through Open vSwitch.
trex_vm2vm_cont                          T-Rex VM2VM - execute RFC2544 Continuous Stream from T-Rex VM and
                                         loop it back through 2nd VM.
trex_vm2vm_tput                          T-Rex VM2VM - execute RFC2544 Throughput from T-Rex VM and loop it
                                         back through 2nd VM.
======================================== ======================================================================================