.. This work is licensed under a Creative Commons Attribution 4.0 International License.

.. http://creativecommons.org/licenses/by/4.0
About Packet Forwarding
-----------------------

Packet Forwarding is a test suite of KVM4NFV. These latency tests measure the time taken by a
**Packet** generated by the traffic generator to travel from the originating device through the
network to the destination device. Packet Forwarding is implemented using the test framework
of the OPNFV VSWITCHPERF project and an ``IXIA Traffic Generator``.
+-----------------------------+---------------------------------------------------+
| **Release**                 | **Features**                                      |
+=============================+===================================================+
|                             | - Packet Forwarding is not part of the Colorado   |
| Colorado                    |   release of KVM4NFV                              |
+-----------------------------+---------------------------------------------------+
|                             | - Packet Forwarding is a testcase in KVM4NFV      |
|                             | - Implements three scenarios (Host/Guest/SRIOV)   |
|                             |   as part of testing in KVM4NFV                   |
| Danube                      | - Uses automated test framework of OPNFV          |
|                             |   VSWITCHPERF software (PVP/PVVP)                 |
|                             | - Works with IXIA Traffic Generator               |
+-----------------------------+---------------------------------------------------+
VSPerf is an OPNFV testing project.
VSPerf will develop a generic and architecture-agnostic vSwitch testing framework and associated
tests that will serve as a basis for validating the suitability of different vSwitch
implementations in a Telco NFV deployment environment. The output of this project will be utilized
by the OPNFV Performance and Test group and its associated projects, as part of OPNFV Platform and
VNF level testing and validation.
For the complete VSPERF documentation, see `link.`_

.. _link.: http://artifacts.opnfv.org/vswitchperf/danube/index.html

Guidelines for installing `VSPERF`_.

.. _VSPERF: http://artifacts.opnfv.org/vswitchperf/colorado/configguide/index.html
Supported Operating Systems
~~~~~~~~~~~~~~~~~~~~~~~~~~~
Supported vSwitches
~~~~~~~~~~~~~~~~~~~

The vSwitch must support OpenFlow 1.3 or greater.

* OVS (built from source).
* OVS with DPDK (built from source).
The test suite requires Python 3.3 and relies on a number of other
packages. These need to be installed for the test suite to function.

Installation of the required packages, preparation of the Python 3 virtual
environment and compilation of OVS, DPDK and QEMU are performed by the
script **systems/build_base_machine.sh**. It should be executed under the
user account which will be used for vsperf execution.

**Please Note:** Password-less sudo access must be configured for the given user before the script is executed.
Execution of the installation script:

.. code-block:: console

    $ ./build_base_machine.sh

The script **build_base_machine.sh** will install all the vsperf dependencies
in terms of system packages, Python 3.x and required Python modules.
In case of CentOS 7 it will install Python 3.3 from an additional repository
provided by Software Collections (`a link`_). In case of RedHat 7 it will
install Python 3.4 as an alternate installation in /usr/local/bin. The installation
script will also use `virtualenv`_ to create a vsperf virtual environment,
which is isolated from the default Python environment. This environment will
reside in a directory called **vsperfenv** in $HOME.
You will need to activate the virtual environment every time you start a
new shell session. Its activation is specific to your OS.

For running testcases, VSPERF is installed on Intel pod1-node2, which runs the CentOS
operating system. Only VSPERF installation on CentOS is discussed here.
For installation steps on other operating systems please refer to `here`_.

.. _here: http://artifacts.opnfv.org/vswitchperf/colorado/configguide/index.html
To avoid file permission errors and Python version issues, use virtualenv to create an isolated environment with Python 3.
The required Python 3 packages can be found in the `requirements.txt` file in the root of the test suite.
They can be installed in your virtual environment like so:

.. code-block:: console

    # Enable the python33 software collection
    scl enable python33 bash
    # Create virtual environment
    virtualenv vsperfenv
    cd vsperfenv
    source bin/activate
    pip install -r requirements.txt

You need to activate the virtual environment every time you start a new shell session.
To activate, simply run:

.. code-block:: console

    scl enable python33 bash
    cd vsperfenv
    source bin/activate
Working Behind a Proxy
~~~~~~~~~~~~~~~~~~~~~~

If you're behind a proxy, you'll likely want to configure this before running any of the above. For example:

.. code-block:: console

    export http_proxy="http://<username>:<password>@<proxy>:<port>/";
    export https_proxy="https://<username>:<password>@<proxy>:<port>/";
    export ftp_proxy="ftp://<username>:<password>@<proxy>:<port>/";
    export socks_proxy="socks://<username>:<password>@<proxy>:<port>/";
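Since missing proxy settings typically surface later as opaque download failures, a quick pre-flight check can help. The snippet below is an illustrative sketch (not part of VSPERF) that reports which of the above variables are unset:

.. code-block:: python

    import os

    # Proxy variables exported above; an empty or missing value means unconfigured.
    PROXY_VARS = ["http_proxy", "https_proxy", "ftp_proxy", "socks_proxy"]

    def missing_proxy_vars(environ=os.environ):
        """Return the proxy variables that are not set in the given environment."""
        return [var for var in PROXY_VARS if not environ.get(var)]

    if __name__ == "__main__":
        missing = missing_proxy_vars()
        if missing:
            print("Warning: unset proxy variables: " + ", ".join(missing))
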
.. _a link: http://www.softwarecollections.org/en/scls/rhscl/python33/
.. _virtualenv: https://virtualenv.readthedocs.org/en/latest/

For other OS-specific activation steps, see `this link`_:

.. _this link: http://artifacts.opnfv.org/vswitchperf/colorado/configguide/installation.html#other-requirements

VSPERF supports many traffic generators. For configuring VSPERF to work with the available traffic generator, go through `this`_.

.. _this: http://artifacts.opnfv.org/vswitchperf/colorado/configguide/trafficgen.html
VSPERF supports the following traffic generators:

* Dummy (DEFAULT): Allows you to use your own external traffic generator.
* IXIA (IxNet and IxOS)

To see the list of traffic generators from the cli:

.. code-block:: console

    $ ./vsperf --list-trafficgens
This guide provides the details of how to install
and configure the various traffic generators.

As KVM4NFV uses only the IXIA traffic generator, it is discussed here. For complete documentation regarding traffic generators please follow this `link`_.

.. _link: https://gerrit.opnfv.org/gerrit/gitweb?p=vswitchperf.git;a=blob;f=docs/configguide/trafficgen.rst;h=85fc35b886d30db3b92a6b7dcce7ca742b70cbdc;hb=HEAD

Hardware Requirements
~~~~~~~~~~~~~~~~~~~~~

VSPERF requires the following hardware to run tests: an IXIA traffic generator (IxNetwork), a machine that
runs the IXIA client software, and a CentOS Linux release 7.1.1503 (Core) host.
Follow the installation instructions to install.

On the CentOS 7 system
~~~~~~~~~~~~~~~~~~~~~~

You need to install IxNetworkTclClient$(VER_NUM)Linux.bin.tgz.

On the IXIA client software system
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Find the IxNetwork TCL server app (start -> All Programs -> IXIA -> IxNetwork -> IxNetwork_$(VER_NUM) -> IxNetwork TCL Server)

- Right click on IxNetwork TCL Server, select properties
- Under the shortcut tab in the Target dialogue box make sure there is the argument "-tclport xxxx"
  where xxxx is your port number (take note of this port number, you will need it for the 10_custom.conf file).

.. figure:: images/IXIA1.png

- Hit Ok and start the TCL server application
There are several configuration options specific to the IxNetworks traffic generator
from IXIA. It is essential to set them correctly before VSPERF is executed.

A detailed description of the options follows:
* TRAFFICGEN_IXNET_MACHINE - IP address of the server where IxNetwork TCL Server is running
* TRAFFICGEN_IXNET_PORT - PORT where IxNetwork TCL Server is accepting connections
* TRAFFICGEN_IXNET_USER - username which will be used during communication with IxNetwork
  TCL Server and IXIA chassis
* TRAFFICGEN_IXIA_HOST - IP address of the IXIA traffic generator chassis
* TRAFFICGEN_IXIA_CARD - identification of the card with dedicated ports at the IXIA chassis
* TRAFFICGEN_IXIA_PORT1 - identification of the first dedicated port at TRAFFICGEN_IXIA_CARD
  at the IXIA chassis; VSPERF uses two separated ports for traffic generation. In case of
  unidirectional traffic, it is essential to correctly connect the 1st IXIA port to the 1st NIC
  at the DUT, i.e. to the first PCI handle from the WHITELIST_NICS list. Otherwise traffic may not
  be able to pass through the vSwitch.
* TRAFFICGEN_IXIA_PORT2 - identification of the second dedicated port at TRAFFICGEN_IXIA_CARD
  at the IXIA chassis; VSPERF uses two separated ports for traffic generation. In case of
  unidirectional traffic, it is essential to correctly connect the 2nd IXIA port to the 2nd NIC
  at the DUT, i.e. to the second PCI handle from the WHITELIST_NICS list. Otherwise traffic may not
  be able to pass through the vSwitch.
* TRAFFICGEN_IXNET_LIB_PATH - path to the DUT-specific installation of the IxNetwork TCL API
* TRAFFICGEN_IXNET_TCL_SCRIPT - name of the TCL script which VSPERF will use for
  communication with the IXIA TCL server
* TRAFFICGEN_IXNET_TESTER_RESULT_DIR - folder accessible from the IxNetwork TCL server,
  where test results are stored, e.g. ``c:/ixia_results``; see test-results-share_
* TRAFFICGEN_IXNET_DUT_RESULT_DIR - directory accessible from the DUT, where test
  results from the IxNetwork TCL server are stored, e.g. ``/mnt/ixia_results``; see
  test-results-share_
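For illustration, a fragment of a custom configuration file setting these options might look as follows. Every value below is a hypothetical placeholder, not a working setup; replace each one with the details of your own IXIA installation:

.. code-block:: python

    # Hypothetical example values -- replace with your own IXIA setup details.
    TRAFFICGEN_IXNET_MACHINE = '10.10.120.6'    # host running IxNetwork TCL Server
    TRAFFICGEN_IXNET_PORT = '8009'              # the '-tclport' value noted earlier
    TRAFFICGEN_IXNET_USER = 'vsperf_user'
    TRAFFICGEN_IXIA_HOST = '10.10.120.10'       # IXIA chassis IP
    TRAFFICGEN_IXIA_CARD = '1'
    TRAFFICGEN_IXIA_PORT1 = '1'
    TRAFFICGEN_IXIA_PORT2 = '2'
    TRAFFICGEN_IXNET_LIB_PATH = '/opt/ixia/lib/IxTclNetwork'
    TRAFFICGEN_IXNET_TCL_SCRIPT = 'ixnetrfc2544.tcl'
    TRAFFICGEN_IXNET_TESTER_RESULT_DIR = 'c:/ixia_results'
    TRAFFICGEN_IXNET_DUT_RESULT_DIR = '/mnt/ixia_results'
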
.. _test-results-share:

Test results share
~~~~~~~~~~~~~~~~~~
VSPERF is not able to retrieve test results via the TCL API directly. Instead, all test
results are stored at the IxNetwork TCL server. Results are stored in the folder defined by
the ``TRAFFICGEN_IXNET_TESTER_RESULT_DIR`` configuration parameter. The content of this
folder must be shared (e.g. via the samba protocol) between the TCL Server and the DUT, where
VSPERF is executed. VSPERF expects that test results will be available in the directory
configured by the ``TRAFFICGEN_IXNET_DUT_RESULT_DIR`` configuration parameter.
Example of sharing configuration:

* Create a new folder at the IxNetwork TCL server machine, e.g. ``c:\ixia_results``
* Modify sharing options of the ``ixia_results`` folder to share it with everybody
* Create a new directory at the DUT, where the shared directory with results
  will be mounted, e.g. ``/mnt/ixia_results``
* Update your custom VSPERF configuration file as follows:

  .. code-block:: python

      TRAFFICGEN_IXNET_TESTER_RESULT_DIR = 'c:/ixia_results'
      TRAFFICGEN_IXNET_DUT_RESULT_DIR = '/mnt/ixia_results'

  Note: It is essential to use slashes '/' also in the path
  configured by the ``TRAFFICGEN_IXNET_TESTER_RESULT_DIR`` parameter.
* Install the cifs-utils package,

  e.g. on an rpm-based Linux distribution:

  .. code-block:: console

      yum install cifs-utils

* Mount the shared directory, so VSPERF can access test results,

  e.g. by adding a new record into ``/etc/fstab``

  .. code-block:: console

      mount -t cifs //_TCL_SERVER_IP_OR_FQDN_/ixia_results /mnt/ixia_results \
            -o file_mode=0777,dir_mode=0777,nounix

It is recommended to verify that any new file inserted into the ``c:/ixia_results`` folder
is visible at the DUT inside the ``/mnt/ixia_results`` directory.
Cloning and building src dependencies
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

In order to run VSPERF, you will need to download DPDK and OVS. You can do this manually and build
them in a preferred location, or you could use vswitchperf/src. The vswitchperf/src directory
contains makefiles that will allow you to clone and build the libraries that VSPERF depends on,
such as DPDK and OVS. To clone and build simply:

.. code-block:: console

    $ cd src
    $ make

To delete a src subdirectory and its contents to allow you to re-clone, simply use the corresponding make target in the same directory.
Configure the `./conf/10_custom.conf` file
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The supplied `10_custom.conf` file must be modified, as it contains configuration items for which there are no reasonable default values.

The configuration items that can be added are not limited to the initial contents. Any configuration item
mentioned in any .conf file in the `./conf` directory can be added and that item will be overridden by the custom
configuration file.
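To make that override behavior concrete, here is a minimal sketch, not VSPERF's actual loader, of how per-item overriding between plain-Python .conf files works; later files simply win item by item:

.. code-block:: python

    def load_settings(conf_files):
        """Merge .conf files in order; items in later files override earlier ones.
        Illustrative sketch only -- VSPERF's real loader lives in its conf package."""
        settings = {}
        for path in conf_files:
            namespace = {}
            with open(path) as handle:
                exec(handle.read(), namespace)  # .conf files are plain Python
            # Treat upper-case names as configuration items.
            settings.update({k: v for k, v in namespace.items() if k.isupper()})
        return settings
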
Using a custom settings file
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Alternatively, a custom settings file can be passed to `vsperf` via the `--conf-file` argument.

.. code-block:: console

    ./vsperf --conf-file <path_to_settings_py> ...
Note that configuration passed in via the environment (`--load-env`) or via another command line
argument will override both the default and your custom configuration files. This
"priority hierarchy" can be described like so (1 = max priority):

1. Command line arguments
2. Environment variables
3. Configuration file(s)
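The hierarchy can be sketched as a simple lookup chain (illustrative only; the function and argument names are not taken from the VSPERF code base):

.. code-block:: python

    def effective_value(name, cli_args, environ, config):
        """Resolve one setting: CLI beats environment beats configuration files."""
        if name in cli_args:
            return cli_args[name]
        if name in environ:
            return environ[name]
        return config.get(name)
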
VSPERF uses a VM image called vloop_vnf for looping traffic in the deployment
scenarios involving VMs. The image can be downloaded from
`<http://artifacts.opnfv.org/>`__.

Please see the installation instructions for information on :ref:`vloop-vnf`.
l2fwd Kernel Module
~~~~~~~~~~~~~~~~~~~

A Kernel Module that provides OSI Layer 2 IPv4 termination or forwarding with
support for Destination Network Address Translation (DNAT) for both the MAC and
IP addresses. l2fwd can be found in <vswitchperf_dir>/src/l2fwd
Before running any tests make sure you have root permissions by adding the following line to /etc/sudoers:

.. code-block:: console

    username ALL=(ALL) NOPASSWD: ALL

username in the example above should be replaced with a real username.

To list the available tests:

.. code-block:: console

    $ ./vsperf --list-tests
To run a group of tests, for example all tests with a name containing
'RFC2544':

.. code-block:: console

    $ ./vsperf --conf-file=user_settings.py --tests="RFC2544"

To run all tests:

.. code-block:: console

    $ ./vsperf --conf-file=user_settings.py
Some tests allow for configurable parameters, including test duration (in seconds) as well as packet sizes (in bytes).

.. code-block:: console

    $ ./vsperf --conf-file user_settings.py
        --test-params "rfc2544_duration=10;packet_sizes=128"
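The parameter string is a semicolon-separated list of key=value pairs. A minimal parser sketch (illustrative only, not VSPERF's implementation) shows the expected shape:

.. code-block:: python

    def parse_test_params(param_string):
        """Parse 'key1=val1;key2=val2' into a dict of string values."""
        params = {}
        for pair in param_string.split(";"):
            if not pair.strip():
                continue
            key, _, value = pair.partition("=")
            params[key.strip()] = value.strip()
        return params
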
For all available options, check out the help dialog:

.. code-block:: console

    $ ./vsperf --help
Available tests in VSPERF include:

* phy2phy_tput_mod_vlan
* phy2phy_scalability
VSPERF modes of operation
-------------------------

VSPERF can be run in different modes. By default it will configure the vSwitch,
the traffic generator and the VNF. However, it can be used just for configuration
and execution of the traffic generator. Another option is execution of all
components except the traffic generator itself.
The mode of operation is driven by the configuration parameter -m or --mode:

.. code-block:: console

    -m MODE, --mode MODE  vsperf mode of operation;
                          "normal" - execute vSwitch, VNF and traffic generator
                          "trafficgen" - execute only traffic generator
                          "trafficgen-off" - execute vSwitch and VNF
                          "trafficgen-pause" - execute vSwitch and VNF but wait before traffic transmission
In case VSPERF is executed in "trafficgen" mode, the configuration
of the traffic generator can be modified through the ``TRAFFIC`` dictionary passed to the
``--test-params`` option. It is not needed to specify all values of the ``TRAFFIC``
dictionary. It is sufficient to specify only the values which should be changed.
A detailed description of the ``TRAFFIC`` dictionary can be found at :ref:`configuration-of-traffic-dictionary`.
Example of execution of VSPERF in "trafficgen" mode:

.. code-block:: console

    $ ./vsperf -m trafficgen --trafficgen IxNet --conf-file vsperf.conf \
        --test-params "TRAFFIC={'traffic_type':'rfc2544_continuous','bidir':'False','framerate':60}"
Packet Forwarding Test Scenarios
--------------------------------

KVM4NFV currently implements three scenarios as part of testing:

* Host
* Guest
* SRIOV
Packet Forwarding Host Scenario
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Here the host DUT has VSPERF installed and is properly configured to use the IXIA traffic generator
by providing the IXIA card, ports and lib paths along with the IP address.
Please refer to figure.2.

.. figure:: images/Host_Scenario.png
Packet Forwarding Guest Scenario
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Here the guest is a Virtual Machine (VM) launched by using vloop_vnf, provided by the vsperf project,
on the host/DUT using Qemu. In this latency test, the time taken by the frame/packet to travel from the
originating device through the network involving a guest to the destination device is calculated.
The resulting latency values will define the performance of the installed kernel.

.. figure:: images/Guest_Scenario.png
   :name: Guest_Scenario
Packet Forwarding SRIOV Scenario
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

In this test the packet generated at the IXIA is forwarded to the Guest VM launched on the Host by
implementing an SR-IOV interface at the NIC level of the host, i.e. the DUT. The time taken by the packet to
travel through the network to the destination, the IXIA traffic generator, is calculated and
published as a test result for this scenario.

SRIOV-support_ is given below; it details how to use SR-IOV.

.. figure:: images/SRIOV_Scenario.png
   :name: SRIOV_Scenario
Using vfio_pci with DPDK
~~~~~~~~~~~~~~~~~~~~~~~~

To use vfio with DPDK instead of igb_uio, add the following parameter into your custom configuration
file:

.. code-block:: python

    PATHS['dpdk']['src']['modules'] = ['uio', 'vfio-pci']

**NOTE:** In case DPDK is installed from a binary package, please
set ``PATHS['dpdk']['bin']['modules']`` instead.
**NOTE:** Please ensure that Intel VT-d is enabled in BIOS.

**NOTE:** Please ensure your boot/grub parameters include:

.. code-block:: console

    iommu=pt intel_iommu=on
To check that IOMMU is enabled on your platform:

.. code-block:: console

    $ dmesg | grep -e IOMMU
    [    0.000000] Intel-IOMMU: enabled
    [    0.139882] dmar: IOMMU 0: reg_base_addr fbffe000 ver 1:0 cap d2078c106f0466 ecap f020de
    [    0.139888] dmar: IOMMU 1: reg_base_addr ebffc000 ver 1:0 cap d2078c106f0466 ecap f020de
    [    0.139893] IOAPIC id 2 under DRHD base  0xfbffe000 IOMMU 0
    [    0.139894] IOAPIC id 0 under DRHD base  0xebffc000 IOMMU 1
    [    0.139895] IOAPIC id 1 under DRHD base  0xebffc000 IOMMU 1
    [    3.335744] IOMMU: dmar0 using Queued invalidation
    [    3.335746] IOMMU: dmar1 using Queued invalidation
.. _SRIOV-support:

Using SRIOV support
~~~~~~~~~~~~~~~~~~~

To use virtual functions of a NIC with SRIOV support, use the extended form
of the NIC PCI slot definition:

.. code-block:: python

    WHITELIST_NICS = ['0000:03:00.0|vf0', '0000:03:00.1|vf3']

Where ``vf`` is an indication of virtual function usage and the following
number defines the VF to be used. In case VF usage is detected,
vswitchperf will enable SRIOV support for the given card and it will
detect the PCI slot numbers of the selected VFs.
So in the example above, one VF will be configured for NIC '0000:03:00.0'
and four VFs will be configured for NIC '0000:03:00.1'. Vswitchperf
will detect the PCI addresses of the selected VFs and will use them during
testing.
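The extended notation itself is easy to parse. As a sketch (not vswitchperf's actual code), splitting an entry into its PCI address and optional VF index looks like this:

.. code-block:: python

    def split_vf_notation(nic):
        """Split '0000:03:00.0|vf0' into ('0000:03:00.0', 0);
        a plain PCI address returns (address, None)."""
        pci, sep, vf = nic.partition("|")
        if not sep:
            return pci, None
        if not vf.startswith("vf"):
            raise ValueError("expected 'vf<number>' after '|': %s" % nic)
        return pci, int(vf[2:])
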
At the end of vswitchperf execution, SRIOV support will be disabled.

SRIOV support is generic and can be used in different testing scenarios.
For example:

* vSwitch tests with or without DPDK support, to verify the impact
  of VF usage on vSwitch performance
* tests without a vSwitch, where traffic is forwarded directly
  between VF interfaces by a packet forwarder (e.g. the testpmd application)
* tests without a vSwitch, where the VM accesses VF interfaces directly
  by PCI passthrough to measure raw VM throughput performance.
Using QEMU with PCI passthrough support
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Raw virtual machine throughput performance can be measured by execution of a PVP
test with direct access to NICs by PCI passthrough. To execute the VM with direct
access to PCI devices, enable vfio-pci. In order to use virtual functions,
SRIOV-support_ must be enabled.

Execution of the test with PCI passthrough with vswitch disabled:

.. code-block:: console

    $ ./vsperf --conf-file=<path_to_custom_conf>/10_custom.conf \
        --vswitch none --vnf QemuPciPassthrough pvp_tput

Any of the supported guest-loopback-applications can be used inside the VM with
PCI passthrough support.

Note: Qemu with PCI passthrough support can be used only with the PVP test.
The results for the packet forwarding test cases are uploaded to artifacts.
The link for the same can be found below:

http://artifacts.opnfv.org/kvmfornfv.html