.. This work is licensed under a Creative Commons Attribution 4.0 International License.
.. http://creativecommons.org/licenses/by/4.0
About Packet Forwarding
-----------------------

Packet Forwarding is a test suite of KVM4NFV. These latency tests measure the time taken by a
**Packet** generated by the traffic generator to travel from the originating device through the
network to the destination device. Packet Forwarding is implemented using the test framework
provided by the OPNFV VSWITCHPERF project and an ``IXIA Traffic Generator``.
+-----------------------------+---------------------------------------------------+
| **Release**                 | **Features**                                      |
+=============================+===================================================+
|                             | - Packet Forwarding is not part of Colorado       |
| Colorado                    |   release of KVM4NFV                              |
+-----------------------------+---------------------------------------------------+
|                             | - Packet Forwarding is a testcase in KVM4NFV      |
|                             | - Implements three scenarios (Host/Guest/SRIOV)   |
|                             |   as part of testing in KVM4NFV                   |
| Danube                      | - Uses automated test framework of OPNFV          |
|                             |   VSWITCHPERF software (PVP/PVVP)                 |
|                             | - Works with IXIA Traffic Generator               |
+-----------------------------+---------------------------------------------------+
|                             | - Test cases involving multiple guests (PVVP/PVPV)|
| Euphrates                   | - Implemented Yardstick Grafana dashboard to      |
|                             |   publish results of packet forwarding test cases |
+-----------------------------+---------------------------------------------------+
VSPerf is an OPNFV testing project.
VSPerf will develop a generic and architecture agnostic vSwitch testing framework and associated
tests, which will serve as a basis for validating the suitability of different vSwitch
implementations in a Telco NFV deployment environment. The output of this project will be utilized
by the OPNFV Performance and Test group and its associated projects, as part of OPNFV Platform and
VNF level testing and validation.

For the complete VSPERF documentation, see `link.`_

.. _link.: http://artifacts.opnfv.org/vswitchperf/danube/index.html
Installation
------------

Guidelines for installing `VSPERF`_.

.. _VSPERF: http://artifacts.opnfv.org/vswitchperf/colorado/configguide/index.html
Supported Operating Systems
~~~~~~~~~~~~~~~~~~~~~~~~~~~

Supported vSwitches
~~~~~~~~~~~~~~~~~~~

The vSwitch must support OpenFlow 1.3 or greater.

* OVS (built from source).
* OVS with DPDK (built from source).
The test suite requires Python 3.3 and relies on a number of other
packages. These need to be installed for the test suite to function.

Installation of the required packages, preparation of the Python 3 virtual
environment and compilation of OVS, DPDK and QEMU is performed by the
script **systems/build_base_machine.sh**. It should be executed under the
user account which will be used for vsperf execution.

**Please Note:** Password-less sudo access must be configured for the given user before the script
is executed.

Execution of the installation script:

.. code-block:: console

    $ cd systems
    $ ./build_base_machine.sh
The script **build_base_machine.sh** will install all the vsperf dependencies
in terms of system packages, Python 3.x and required Python modules.
In case of CentOS 7 it will install Python 3.3 from an additional repository
provided by Software Collections (`a link`_). In case of RedHat 7 it will
install Python 3.4 as an alternate installation in /usr/local/bin. The installation
script will also use `virtualenv`_ to create a vsperf virtual environment,
which is isolated from the default Python environment. This environment will
reside in a directory called **vsperfenv** in $HOME.
You will need to activate the virtual environment every time you start a
new shell session. Its activation is specific to your OS.

For running testcases, VSPERF is installed on Intel pod1-node2, which runs the CentOS
operating system. Only the VSPERF installation on CentOS is discussed here.
For installation steps on other operating systems please refer to `here`_.

.. _here: http://artifacts.opnfv.org/vswitchperf/colorado/configguide/index.html
To avoid file permission errors and Python version issues, use virtualenv to create an isolated
environment with Python 3. The required Python 3 packages can be found in the `requirements.txt` file
in the root of the test suite. They can be installed in your virtual environment like so:

.. code-block:: console

    scl enable python33 bash
    # Create virtual environment
    virtualenv vsperfenv
    cd vsperfenv
    source bin/activate
    pip install -r requirements.txt
You need to activate the virtual environment every time you start a new shell session.
To activate, simply run:

.. code-block:: console

    scl enable python33 bash
    cd vsperfenv
    source bin/activate
Working Behind a Proxy
~~~~~~~~~~~~~~~~~~~~~~

If you're behind a proxy, you'll likely want to configure this before running any of the above.
For example:

.. code-block:: console

    export http_proxy="http://<username>:<password>@<proxy>:<port>/";
    export https_proxy="https://<username>:<password>@<proxy>:<port>/";
    export ftp_proxy="ftp://<username>:<password>@<proxy>:<port>/";
    export socks_proxy="socks://<username>:<password>@<proxy>:<port>/";

.. _a link: http://www.softwarecollections.org/en/scls/rhscl/python33/
.. _virtualenv: https://virtualenv.readthedocs.org/en/latest/

For other OS specific activation instructions, click `this link`_:

.. _this link:
   http://artifacts.opnfv.org/vswitchperf/colorado/configguide/installation.html#other-requirements
Traffic-Generators
------------------

VSPERF supports many traffic generators. For configuring VSPERF to work with the available traffic
generator, go through `this`_.

.. _this: http://artifacts.opnfv.org/vswitchperf/colorado/configguide/trafficgen.html

VSPERF supports the following traffic generators:

* Dummy (DEFAULT): Allows you to use your own external traffic generator.
* IXIA (IxNet and IxOS)

To see the list of traffic generators from the cli:

.. code-block:: console

    $ ./vsperf --list-trafficgens

This guide provides the details of how to install
and configure the various traffic generators.

As KVM4NFV uses only the IXIA traffic generator, it is discussed here. For complete documentation
regarding traffic generators, please follow this `link`_.

.. _link: https://gerrit.opnfv.org/gerrit/gitweb?p=vswitchperf.git;a=blob;f=docs/configguide/trafficgen.rst;h=85fc35b886d30db3b92a6b7dcce7ca742b70cbdc;hb=HEAD
Hardware Requirements
~~~~~~~~~~~~~~~~~~~~~

VSPERF requires the following hardware to run tests: an IXIA traffic generator (IxNetwork), a machine
that runs the IXIA client software, and a CentOS Linux release 7.1.1503 (Core) host.
Follow the installation instructions below to install the IXIA software.

On the CentOS 7 system
~~~~~~~~~~~~~~~~~~~~~~

You need to install IxNetworkTclClient$(VER_NUM)Linux.bin.tgz.

On the IXIA client software system
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Find the IxNetwork TCL server app:

- (Start -> All Programs -> IXIA -> IxNetwork -> IxNetwork_$(VER_NUM) -> IxNetwork TCL Server)
- Right click on IxNetwork TCL Server, select properties
- Under the shortcut tab, in the Target dialogue box, make sure there is the argument "-tclport xxxx",
  where xxxx is your port number (take note of this port number, you will need it for the
  10_custom.conf file).

.. figure:: images/IXIA1.png

- Hit OK and start the TCL Server application.
There are several configuration options specific to the IxNetworks traffic generator
from IXIA. It is essential to set them correctly before VSPERF is executed
for the first time.

Detailed description of the options follows:
* TRAFFICGEN_IXNET_MACHINE - IP address of the server where IxNetwork TCL Server is running
* TRAFFICGEN_IXNET_PORT - PORT where IxNetwork TCL Server is accepting connections from
  the IxNetwork client
* TRAFFICGEN_IXNET_USER - username which will be used during communication with IxNetwork
  TCL Server and the IXIA chassis
* TRAFFICGEN_IXIA_HOST - IP address of the IXIA traffic generator chassis
* TRAFFICGEN_IXIA_CARD - identification of the card with dedicated ports at the IXIA chassis
* TRAFFICGEN_IXIA_PORT1 - identification of the first dedicated port at TRAFFICGEN_IXIA_CARD
  at the IXIA chassis; VSPERF uses two separate ports for traffic generation. In case of
  unidirectional traffic, it is essential to correctly connect the 1st IXIA port to the 1st NIC
  at the DUT, i.e. to the first PCI handle from the WHITELIST_NICS list. Otherwise traffic may not
  be able to pass through the vSwitch.
* TRAFFICGEN_IXIA_PORT2 - identification of the second dedicated port at TRAFFICGEN_IXIA_CARD
  at the IXIA chassis; VSPERF uses two separate ports for traffic generation. In case of
  unidirectional traffic, it is essential to correctly connect the 2nd IXIA port to the 2nd NIC
  at the DUT, i.e. to the second PCI handle from the WHITELIST_NICS list. Otherwise traffic may not
  be able to pass through the vSwitch.
* TRAFFICGEN_IXNET_LIB_PATH - path to the DUT specific installation of the IxNetwork TCL API
* TRAFFICGEN_IXNET_TCL_SCRIPT - name of the TCL script which VSPERF will use for
  communication with the IXIA TCL server
* TRAFFICGEN_IXNET_TESTER_RESULT_DIR - folder accessible from the IxNetwork TCL server,
  where test results are stored, e.g. ``c:/ixia_results``; see test-results-share_
* TRAFFICGEN_IXNET_DUT_RESULT_DIR - directory accessible from the DUT, where test
  results from the IxNetwork TCL server are stored, e.g. ``/mnt/ixia_results``; see
  test-results-share_
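Taken together, these options end up in the custom configuration file. The following is a minimal
sketch with purely illustrative values; the IP addresses, port, username, paths and chassis
coordinates must match your own IxNetwork TCL Server and IXIA setup:

.. code-block:: python

    # Illustrative values only; adjust to your IxNetwork TCL Server and IXIA chassis
    TRAFFICGEN_IXNET_MACHINE = '10.10.120.6'     # IxNetwork TCL Server IP
    TRAFFICGEN_IXNET_PORT = '9127'               # the -tclport value noted earlier
    TRAFFICGEN_IXNET_USER = 'vsperf_user'
    TRAFFICGEN_IXIA_HOST = '10.10.120.10'        # IXIA chassis IP
    TRAFFICGEN_IXIA_CARD = '1'
    TRAFFICGEN_IXIA_PORT1 = '1'
    TRAFFICGEN_IXIA_PORT2 = '2'
    TRAFFICGEN_IXNET_LIB_PATH = '/opt/ixnet/ixnetwork/lib/IxTclNetwork'
    TRAFFICGEN_IXNET_TCL_SCRIPT = 'ixnetrfc2544.tcl'
    TRAFFICGEN_IXNET_TESTER_RESULT_DIR = 'c:/ixia_results'
    TRAFFICGEN_IXNET_DUT_RESULT_DIR = '/mnt/ixia_results'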
.. _test-results-share:

Test Results Share
~~~~~~~~~~~~~~~~~~

VSPERF is not able to retrieve test results via the TCL API directly. Instead, all test
results are stored on the IxNetwork TCL server. Results are stored in the folder defined by
the ``TRAFFICGEN_IXNET_TESTER_RESULT_DIR`` configuration parameter. The content of this
folder must be shared (e.g. via the samba protocol) between the TCL Server and the DUT, where
VSPERF is executed. VSPERF expects test results to be available in the directory
configured by the ``TRAFFICGEN_IXNET_DUT_RESULT_DIR`` configuration parameter.
Example of sharing configuration:

* Create a new folder at the IxNetwork TCL server machine, e.g. ``c:\ixia_results``
* Modify the sharing options of the ``ixia_results`` folder to share it with everybody
* Create a new directory at the DUT, where the shared directory with results
  will be mounted, e.g. ``/mnt/ixia_results``
* Update your custom VSPERF configuration file as follows:

  .. code-block:: python

      TRAFFICGEN_IXNET_TESTER_RESULT_DIR = 'c:/ixia_results'
      TRAFFICGEN_IXNET_DUT_RESULT_DIR = '/mnt/ixia_results'

  Note: It is essential to use forward slashes '/' also in the path
  configured by the ``TRAFFICGEN_IXNET_TESTER_RESULT_DIR`` parameter.
* Install the cifs-utils package,

  e.g. at an rpm based Linux distribution:

  .. code-block:: console

      yum install cifs-utils

* Mount the shared directory, so VSPERF can access test results,

  e.g. by executing the mount command below or by adding a matching record into ``/etc/fstab``:

  .. code-block:: console

      mount -t cifs //_TCL_SERVER_IP_OR_FQDN_/ixia_results /mnt/ixia_results \
        -o file_mode=0777,dir_mode=0777,nounix
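For reference, an equivalent ``/etc/fstab`` record might look like this sketch (same share,
mount point and options as the command above):

.. code-block:: console

    //_TCL_SERVER_IP_OR_FQDN_/ixia_results  /mnt/ixia_results  cifs  file_mode=0777,dir_mode=0777,nounix  0  0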
It is recommended to verify that any new file created inside the ``c:/ixia_results`` folder
is visible at the DUT inside the ``/mnt/ixia_results`` directory.
Cloning and building src dependencies
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

In order to run VSPERF, you will need to download DPDK and OVS. You can do this manually and build
them in a preferred location, or you could use vswitchperf/src. The vswitchperf/src directory
contains makefiles that will allow you to clone and build the libraries that VSPERF depends on,
such as DPDK and OVS. To clone and build, see the sketch below.
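A minimal sketch, assuming the default makefile targets provided in ``vswitchperf/src``:

.. code-block:: console

    $ cd src
    $ make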
To delete a src subdirectory and its contents to allow you to re-clone, see the sketch below.
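A sketch, assuming the makefiles provide a ``cleanse`` target for this purpose:

.. code-block:: console

    $ make cleanse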
Configure the `./conf/10_custom.conf` file
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The supplied `10_custom.conf` file must be modified, as it contains configuration items for which
there are no reasonable default values.

The configuration items that can be added are not limited to the initial contents. Any configuration
item mentioned in any .conf file in the `./conf` directory can be added, and that item's default
will be overridden by the custom configuration value.
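For instance, a custom file might override defaults like these (a sketch with hypothetical values;
both option names appear in the supplied ``./conf`` files):

.. code-block:: python

    # Hypothetical overrides; any option from the files in ./conf can appear here
    VSWITCH = 'OvsDpdkVhost'                           # vSwitch implementation under test
    WHITELIST_NICS = ['0000:03:00.0', '0000:03:00.1']  # DUT NICs dedicated to testing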
Using a custom settings file
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Alternatively, a custom settings file can be passed to `vsperf` via the `--conf-file` argument.

.. code-block:: console

    $ ./vsperf --conf-file <path_to_settings_py> ...

Note that configuration passed in via the environment (`--load-env`) or via another command line
argument will override both the default and your custom configuration files. This
"priority hierarchy" can be described like so (1 = max priority):

1. Command line arguments
2. Environment variables
3. Configuration file(s)
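For example, a parameter given on the command line takes precedence over the same parameter set in
a configuration file; a sketch using the ``rfc2544_duration`` test parameter shown later in this
guide:

.. code-block:: console

    # The CLI value (10 s) overrides any duration configured in user_settings.py
    $ ./vsperf --conf-file user_settings.py --test-params "rfc2544_duration=10"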
vloop_vnf
~~~~~~~~~

VSPERF uses a VM image called vloop_vnf for looping traffic in the deployment
scenarios involving VMs. The image can be downloaded from
`<http://artifacts.opnfv.org/>`__.

Please see the installation instructions for information on :ref:`vloop-vnf`.
l2fwd Kernel Module
~~~~~~~~~~~~~~~~~~~

A kernel module that provides OSI Layer 2 IPv4 termination or forwarding with
support for Destination Network Address Translation (DNAT) for both the MAC and
IP addresses. l2fwd can be found in <vswitchperf_dir>/src/l2fwd.
Executing tests
~~~~~~~~~~~~~~~

Before running any tests make sure you have root permissions by adding the following line to the
``/etc/sudoers`` file:

.. code-block:: console

    username ALL=(ALL) NOPASSWD: ALL

username in the example above should be replaced with a real username.

To list the available tests:

.. code-block:: console

    $ ./vsperf --list-tests

To run a group of tests, for example all tests with a name containing
"RFC2544":

.. code-block:: console

    $ ./vsperf --conf-file=user_settings.py --tests="RFC2544"

To run all tests:

.. code-block:: console

    $ ./vsperf --conf-file=user_settings.py

Some tests allow for configurable parameters, including test duration (in seconds) as well as
packet sizes (in bytes):

.. code-block:: console

    $ ./vsperf --conf-file user_settings.py \
        --test-params "rfc2544_duration=10;packet_sizes=128"

For all available options, check out the help dialog:
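.. code-block:: console

    $ ./vsperf --help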
Available tests in VSPERF include:

* phy2phy_tput_mod_vlan
* phy2phy_scalability
VSPERF modes of operation
-------------------------

VSPERF can be run in different modes. By default it will configure the vSwitch,
the traffic generator and the VNF. However, it can be used just for configuration
and execution of the traffic generator. Another option is execution of all
components except the traffic generator itself.

The mode of operation is driven by the configuration parameter -m or --mode:

.. code-block:: console

    -m MODE, --mode MODE    vsperf mode of operation;
        "normal" - execute vSwitch, VNF and traffic generator
        "trafficgen" - execute only traffic generator
        "trafficgen-off" - execute vSwitch and VNF
        "trafficgen-pause" - execute vSwitch and VNF but wait before traffic transmission

In case VSPERF is executed in "trafficgen" mode, the configuration
of the traffic generator can be modified through the ``TRAFFIC`` dictionary passed to the
``--test-params`` option. It is not needed to specify all values of the ``TRAFFIC``
dictionary; it is sufficient to specify only the values which should be changed.
Detailed notes on the ``TRAFFIC`` dictionary can be found at :ref:`configuration-of-traffic-dictionary`.

Example of execution of VSPERF in "trafficgen" mode:

.. code-block:: console

    $ ./vsperf -m trafficgen --trafficgen IxNet --conf-file vsperf.conf \
        --test-params "TRAFFIC={'traffic_type':'rfc2544_continuous','bidir':'False','framerate':60}"
Packet Forwarding Test Scenarios
--------------------------------

KVM4NFV currently implements three scenarios as part of testing:

* Host Scenario
* Guest Scenario
* SRIOV Scenario

Packet Forwarding Host Scenario
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Here the host/DUT has VSPERF installed and is properly configured to use the IXIA traffic generator
by providing the IXIA card, ports and lib paths along with the IP address;
please refer to the figure below.

.. figure:: images/Host_Scenario.png
   :name: Host_Scenario
Packet Forwarding Guest Scenario (PXP Deployment)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Here the guest is a virtual machine (VM) launched using the vloop_vnf image provided by the VSPERF
project on the host/DUT using QEMU. In this latency test, the time taken by the frame/packet to
travel from the originating device through a network involving a guest to the destination device is
calculated. The resulting latency values define the performance of the installed kernel.

.. figure:: images/Guest_Scenario.png
   :name: Guest_Scenario
Every testcase uses one of the supported deployment scenarios to set up the test environment.
The controller responsible for a given scenario configures flows in the vSwitch to route
traffic among physical interfaces connected to the traffic generator and virtual
machines. VSPERF supports several deployments, including the PXP deployment, which can
set up various scenarios with multiple VMs.

These scenarios are realized by the VswitchControllerPXP class, which can configure and
execute a given number of VMs in serial or parallel configurations. Every VM can be
configured with just one or an even number of interfaces. In case a VM has more than
2 interfaces, traffic is properly routed among pairs of interfaces.

Example of traffic routing for a VM with 4 NICs in serial configuration:

.. code-block:: console
             +------------------------------------------+
             |  +---------------+    +---------------+  |
             |  |  Application  |    |  Application  |  |
             |  +---------------+    +---------------+  |
             |      ^       |            ^       |      |
             |      |       v            |       v      |  VM
             |  +---------------+    +---------------+  |
             |  | logical ports |    | logical ports |  |
             |  |   0       1   |    |   2       3   |  |
             +--+---------------+----+---------------+--+
                    ^       :            ^       :
                    |       |            |       |
                    :       v            :       v
    +-----------+---------------+----+---------------+----------+
    | vSwitch   |   0       1   |    |   2       3   |          |
    |           | logical ports |    | logical ports |          |
    | previous  +---------------+    +---------------+   next   |
    | VM or PHY     ^       |            ^       |    VM or PHY |
    |   port  ------+       +------------+       +--->  port    |
    +-----------------------------------------------------------+
It is also possible to define a different number of interfaces for each VM to better
simulate real scenarios.

The number of VMs involved in the test and the type of their connection is defined
by the deployment name as follows:

* ``pvvp[number]`` - configures a scenario with VMs connected in series with
  an optional ``number`` of VMs. In case ``number`` is not specified, then
  2 VMs will be used.

Example of 2 VMs in a serial configuration:

.. code-block:: console
    +----------------------+  +----------------------+
    |        1st VM        |  |        2nd VM        |
    |   +---------------+  |  |   +---------------+  |
    |   |  Application  |  |  |   |  Application  |  |
    |   +---------------+  |  |   +---------------+  |
    |       ^       |      |  |       ^       |      |
    |       |       v      |  |       |       v      |
    |   +---------------+  |  |   +---------------+  |
    |   | logical ports |  |  |   | logical ports |  |
    |   |   0       1   |  |  |   |   0       1   |  |
    +---+---------------+--+  +---+---------------+--+
            ^       :                 ^       :
            |       |                 |       |
            :       v                 :       v
    +---+---------------+---------+---------------+--+
    |   |   0       1   |         |   0       1   |  |
    |   | logical ports | vSwitch | logical ports |  |
    |   +---------------+         +---------------+  |
    |       ^       |                 ^       |      |
    |       |       +-----------------+       v      |
    |   +----------------------------------------+   |
    |   |             physical ports             |   |
    |   |   0                                 1  |   |
    +---+----------------------------------------+---+
            ^                                 :
            |                                 |
            :                                 v
    +------------------------------------------------+
    |               traffic generator                |
    +------------------------------------------------+
* ``pvpv[number]`` - configures a scenario with VMs connected in parallel with
  an optional ``number`` of VMs. In case ``number`` is not specified, then
  2 VMs will be used. The multistream feature is used to route traffic to particular
  VMs (or NIC pairs of every VM). It means that VSPERF will enable the multistream
  feature and set the number of streams to the number of VMs and their NIC
  pairs. Traffic will be dispatched based on the Stream Type, i.e. by UDP port,
  IP address or MAC address.

Example of 2 VMs in a parallel configuration, where traffic is dispatched
based on the UDP port:

.. code-block:: console
    +----------------------+  +----------------------+
    |        1st VM        |  |        2nd VM        |
    |   +---------------+  |  |   +---------------+  |
    |   |  Application  |  |  |   |  Application  |  |
    |   +---------------+  |  |   +---------------+  |
    |       ^       |      |  |       ^       |      |
    |       |       v      |  |       |       v      |
    |   +---------------+  |  |   +---------------+  |
    |   | logical ports |  |  |   | logical ports |  |
    |   |   0       1   |  |  |   |   0       1   |  |
    +---+---------------+--+  +---+---------------+--+
            ^       :                 ^       :
            |       |                 |       |
            :       v                 :       v
    +---+---------------+---------+---------------+--+
    |   |   0       1   |         |   0       1   |  |
    |   | logical ports | vSwitch | logical ports |  |
    |   +---------------+         +---------------+  |
    |       ^       |                 ^       |      |
    |       :       +-----------------:-------+      |
    |       :.........................:       |      |
    |   +----------------------------------------+   |
    |   |             physical ports             |   |
    |   |   0                                 1  |   |
    +---+----------------------------------------+---+
            ^                                 :
            |                                 |
            :                                 v
    +------------------------------------------------+
    |               traffic generator                |
    +------------------------------------------------+
The PXP deployment is backward compatible with the PVP deployment, where ``pvp`` is
an alias for ``pvvp1`` and it executes just one VM.

The number of interfaces used by VMs is defined by the configuration option
``GUEST_NICS_NR``. In case more than one pair of interfaces is defined, then:

* for the ``pvvp`` (serial) scenario, every NIC pair is connected in series
  before the connection to the next VM is created
* for the ``pvpv`` (parallel) scenario, every NIC pair is directly connected
  to the physical ports and a unique traffic stream is assigned to it
Examples:

* Deployment ``pvvp10`` will start 10 VMs and connect them in series
* Deployment ``pvpv4`` will start 4 VMs and connect them in parallel
* Deployment ``pvpv1`` with GUEST_NICS_NR = [4] will start 1 VM with
  4 interfaces, and every NIC pair is directly connected to the
  physical ports
* Deployment ``pvvp`` with GUEST_NICS_NR = [2, 4] will start 2 VMs;
  the 1st VM will have 2 interfaces and the 2nd VM 4 interfaces. These interfaces
  will be connected in series, i.e. traffic will flow as follows:
  PHY1 -> VM1_1 -> VM1_2 -> VM2_1 -> VM2_2 -> VM2_3 -> VM2_4 -> PHY2
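For instance, the last example above corresponds to a configuration like the following sketch
(values taken from the example itself):

.. code-block:: python

    # Two VMs in series; the 1st VM gets 2 NICs, the 2nd VM gets 4 NICs
    GUEST_NICS_NR = [2, 4]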
Note: In case that only 1 or more than 2 NICs are configured for a VM,
then ``testpmd`` should be used as the forwarding application inside the VM,
as it is able to forward traffic between multiple VM NIC pairs.

Note: In case of ``linux_bridge``, all NICs are connected to the same
bridge inside the VM.
Packet Forwarding SRIOV Scenario
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

In this test, the packet generated at the IXIA is forwarded to the guest VM launched on the host by
implementing an SR-IOV interface at the NIC level of the host, i.e. the DUT. The time taken by the
packet to travel through the network to its destination, the IXIA traffic generator, is calculated
and published as a test result for this scenario.

SRIOV-support_ is given below; it details how to use SR-IOV.

.. figure:: images/SRIOV_Scenario.png
   :name: SRIOV_Scenario
Using vfio_pci with DPDK
~~~~~~~~~~~~~~~~~~~~~~~~

To use vfio with DPDK instead of igb_uio, add the following parameter to your custom configuration
file:

.. code-block:: python

    PATHS['dpdk']['src']['modules'] = ['uio', 'vfio-pci']

**NOTE:** In case DPDK is installed from a binary package, please
set ``PATHS['dpdk']['bin']['modules']`` instead.

**NOTE:** Please ensure that Intel VT-d is enabled in BIOS.

**NOTE:** Please ensure your boot/grub parameters include
the following:

.. code-block:: console

    iommu=pt intel_iommu=on
To check that IOMMU is enabled on your platform:

.. code-block:: console

    $ dmesg | grep -e IOMMU
    [    0.000000] Intel-IOMMU: enabled
    [    0.139882] dmar: IOMMU 0: reg_base_addr fbffe000 ver 1:0 cap d2078c106f0466 ecap f020de
    [    0.139888] dmar: IOMMU 1: reg_base_addr ebffc000 ver 1:0 cap d2078c106f0466 ecap f020de
    [    0.139893] IOAPIC id 2 under DRHD base 0xfbffe000 IOMMU 0
    [    0.139894] IOAPIC id 0 under DRHD base 0xebffc000 IOMMU 1
    [    0.139895] IOAPIC id 1 under DRHD base 0xebffc000 IOMMU 1
    [    3.335744] IOMMU: dmar0 using Queued invalidation
    [    3.335746] IOMMU: dmar1 using Queued invalidation
.. _SRIOV-support:

Using SRIOV support
~~~~~~~~~~~~~~~~~~~

To use virtual functions of a NIC with SRIOV support, use the extended form
of the NIC PCI slot definition:

.. code-block:: python

    WHITELIST_NICS = ['0000:03:00.0|vf0', '0000:03:00.1|vf3']

Where ``vf`` is an indication of virtual function usage and the following
number defines the VF to be used. In case VF usage is detected,
vswitchperf will enable SRIOV support for the given card and it will
detect the PCI slot numbers of the selected VFs.

So in the example above, one VF will be configured for NIC '0000:03:00.0'
and four VFs will be configured for NIC '0000:03:00.1'. Vswitchperf
will detect the PCI addresses of the selected VFs and it will use them during
testcase execution.

At the end of vswitchperf execution, SRIOV support will be disabled.
SRIOV support is generic and it can be used in different testing scenarios. For example:

* vSwitch tests with or without DPDK support, to verify the impact
  of VF usage on vSwitch performance
* tests without a vSwitch, where traffic is forwarded directly
  between VF interfaces by a packet forwarder (e.g. the testpmd application)
* tests without a vSwitch, where a VM accesses VF interfaces directly
  by PCI-passthrough to measure raw VM throughput performance.
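For illustration, a vSwitch test over the VFs whitelisted above could be launched as follows
(a sketch; ``phy2phy_tput_mod_vlan`` is one of the tests listed earlier and the conf path is a
placeholder):

.. code-block:: console

    $ ./vsperf --conf-file=<path_to_custom_conf>/10_custom.conf phy2phy_tput_mod_vlan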
Using QEMU with PCI passthrough support
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Raw virtual machine throughput performance can be measured by executing a PVP
test with direct access to NICs by PCI passthrough. To execute a VM with direct
access to PCI devices, enable vfio-pci. In order to use virtual functions,
SRIOV-support_ must be enabled.

Execution of a test with PCI passthrough with the vSwitch disabled:

.. code-block:: console

    $ ./vsperf --conf-file=<path_to_custom_conf>/10_custom.conf \
        --vswitch none --vnf QemuPciPassthrough pvp_tput

Any of the supported guest loopback applications can be used inside a VM with
PCI passthrough support.

Note: QEMU with PCI passthrough support can be used only with the PVP test
deployment.
Guest Core and Thread Binding
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

VSPERF provides options to achieve better performance by guest core binding and
guest vCPU thread binding as well. Core binding is to bind all the QEMU threads.
Thread binding is to bind the housekeeping threads to some CPU and the vCPU threads to
some other CPU; this helps to reduce the noise from the QEMU housekeeping threads.

For example:

.. code-block:: python

    GUEST_CORE_BINDING = [('#EVAL(6+2*#VMINDEX)', '#EVAL(7+2*#VMINDEX)')]

**NOTE** By default GUEST_THREAD_BINDING will be none, which means the same as
GUEST_CORE_BINDING, i.e. the vCPU threads are sharing the physical CPUs with
the housekeeping threads. Better performance using vCPU thread binding can be
achieved by enabling affinity in the custom configuration file.

For example, if an environment requires cores 28 and 29 to be core bound, and
cores 30 and 31 used for guest thread binding, to achieve better performance:

.. code-block:: python

    VNF_AFFINITIZATION_ON = True
    GUEST_CORE_BINDING = [('28', '29')]
    GUEST_THREAD_BINDING = [('30', '31')]
QEMU defaults to a compatible subset of performance-enhancing CPU features.
To pass all available host processor features to the guest:

.. code-block:: python

    GUEST_CPU_OPTIONS = ['host,migratable=off']

**NOTE** To enhance the performance, CPU features such as the TSC deadline timer for the guest,
the guest PMU and the invariant TSC can be provided in the custom configuration file.
Selection of loopback application for tests with VMs
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

To select the loopback application which will forward packets inside VMs,
the following parameter should be configured:

.. code-block:: python

    GUEST_LOOPBACK = ['testpmd']

or use the ``--test-params`` CLI argument:

.. code-block:: console

    $ ./vsperf --conf-file=<path_to_custom_conf>/10_custom.conf \
        --test-params "GUEST_LOOPBACK=['testpmd']"

Supported loopback applications are:

.. code-block:: console

    'testpmd'       - testpmd from dpdk will be built and used
    'l2fwd'         - l2fwd module provided by Huawei will be built and used
    'linux_bridge'  - linux bridge will be configured
    'buildin'       - nothing will be configured by vsperf; the VM image must
                      ensure traffic forwarding between its interfaces
A guest loopback application must be configured, otherwise traffic
will not be forwarded by the VM and testcases with VM-related deployments
will fail. The guest loopback application is set to 'testpmd' by default.

**NOTE:** In case that only 1 or more than 2 NICs are configured for a VM,
then 'testpmd' should be used, as it is able to forward traffic between
multiple VM NIC pairs.

**NOTE:** In case of linux_bridge, all guest NICs are connected to the same
bridge inside the guest.
Results
-------

The results for the packet forwarding test cases are uploaded to artifacts and
also published on the Yardstick Grafana dashboard.
The links for the same can be found below:

http://artifacts.opnfv.org/kvmfornfv.html
http://testresults.opnfv.org/KVMFORNFV-Packet-Forwarding