.. This work is licensed under a Creative Commons Attribution 4.0 International License.

.. http://creativecommons.org/licenses/by/4.0

=======================
About Packet Forwarding
=======================

Packet Forwarding is a test suite of KVMFORNFV which is used to measure the total time taken by a
**Packet** generated by the traffic generator to return from the Guest/Host as per the implemented
scenario. Packet Forwarding is implemented using the VSWITCHPERF/``VSPERF software of OPNFV`` and an
``IXIA Traffic Generator``.

+-----------------------------+---------------------------------------------------+
| **Release**                 | **Features**                                      |
+=============================+===================================================+
|                             | - Packet Forwarding is not part of the Colorado   |
| Colorado                    |   release of KVMFORNFV                            |
+-----------------------------+---------------------------------------------------+
|                             | - Packet Forwarding is a testcase in KVMFORNFV    |
|                             | - Implements three scenarios (Host/Guest/SRIOV)   |
|                             |   as part of testing in KVMFORNFV                 |
| Danube                      | - Uses available testcases of OPNFV's VSWITCHPERF |
|                             |   software (PVP/PVVP)                             |
|                             | - Works with the IXIA Traffic Generator           |
+-----------------------------+---------------------------------------------------+

VSPerf is an OPNFV testing project.
VSPerf will develop a generic and architecture-agnostic vSwitch testing framework and associated
tests that will serve as a basis for validating the suitability of different vSwitch
implementations in a Telco NFV deployment environment. The output of this project will be utilized
by the OPNFV Performance and Test group and its associated projects, as part of OPNFV Platform and
VNF level testing and validation.

For complete VSPERF documentation, see the `VSPERF documentation`_.

.. _VSPERF documentation: http://artifacts.opnfv.org/vswitchperf/colorado/index.html

Guidelines for installing `VSPERF`_.

.. _VSPERF: http://artifacts.opnfv.org/vswitchperf/colorado/configguide/index.html

Supported Operating Systems
---------------------------

Supported vSwitches
-------------------
The vSwitch must support OpenFlow 1.3 or greater.

* OVS (built from source).
* OVS with DPDK (built from source).

The test suite requires Python 3.3 and relies on a number of other
packages. These need to be installed for the test suite to function.

Installation of the required packages, preparation of the Python 3 virtual
environment and compilation of OVS, DPDK and QEMU are performed by the
script **systems/build_base_machine.sh**. It should be executed under
the user account which will be used for vsperf execution.

**Please Note:** Password-less sudo access must be configured for the given user
before the script is executed.

Execution of the installation script:

.. code:: bash

    $ cd systems
    $ ./build_base_machine.sh

The script **build_base_machine.sh** will install all the vsperf dependencies
in terms of system packages, Python 3.x and required Python modules.
In case of CentOS 7 it will install Python 3.3 from an additional repository
provided by Software Collections (`a link`_). In case of RedHat 7 it will
install Python 3.4 as an alternate installation in /usr/local/bin. The installation
script will also use `virtualenv`_ to create a vsperf virtual environment,
which is isolated from the default Python environment. This environment will
reside in a directory called **vsperfenv** in $HOME.

You will need to activate the virtual environment every time you start a
new shell session. Its activation is specific to your OS.

For running testcases, VSPERF is installed on Intel pod1-node2, which runs the CentOS
operating system. Only the VSPERF installation on CentOS is discussed here.
For installation steps on other operating systems please refer to `here`_.

.. _here: http://artifacts.opnfv.org/vswitchperf/colorado/configguide/index.html

To avoid file permission errors and Python version issues, use virtualenv to create an
isolated environment with Python 3. The required Python 3 packages can be found in the
``requirements.txt`` file in the root of the test suite. They can be installed in your
virtual environment like so:

.. code-block:: console

    $ scl enable python33 bash
    # Create virtual environment
    $ virtualenv vsperfenv
    $ cd vsperfenv
    $ source bin/activate
    $ pip install -r requirements.txt

You need to activate the virtual environment every time you start a new shell session.
To activate, simply run:

.. code-block:: console

    $ scl enable python33 bash
    $ cd $HOME/vsperfenv
    $ source bin/activate

Working Behind a Proxy
----------------------

If you're behind a proxy, you'll likely want to configure this before running any of the above.
For example:

.. code-block:: console

    export http_proxy=proxy.mycompany.com:123
    export https_proxy=proxy.mycompany.com:123

.. _a link: http://www.softwarecollections.org/en/scls/rhscl/python33/
.. _virtualenv: https://virtualenv.readthedocs.org/en/latest/

For other OS-specific activation steps, see `this link`_.

.. _this link: http://artifacts.opnfv.org/vswitchperf/colorado/configguide/installation.html#other-requirements

VSPERF supports many traffic generators. For configuring VSPERF to work with the available
traffic generator, go through `this`_.

.. _this: http://artifacts.opnfv.org/vswitchperf/colorado/configguide/trafficgen.html

VSPERF supports the following traffic generators:

* Dummy (DEFAULT): Allows you to use your own external traffic generator.
* IXIA (IxNet and IxOS)

To see the list of traffic generators from the CLI:

.. code-block:: console

    $ ./vsperf --list-trafficgens

This guide provides the details of how to install
and configure the various traffic generators.

As KVM4NFV uses only the IXIA traffic generator, only IXIA is discussed here. For complete
documentation regarding traffic generators please follow this `link`_.

.. _link: https://gerrit.opnfv.org/gerrit/gitweb?p=vswitchperf.git;a=blob;f=docs/configguide/trafficgen.rst;h=85fc35b886d30db3b92a6b7dcce7ca742b70cbdc;hb=HEAD

=====================
Hardware Requirements
=====================
VSPERF requires the following hardware to run tests: an IXIA traffic generator (IxNetwork),
a machine that runs the IXIA client software, and a CentOS Linux release 7.1.1503 (Core) host.

Follow the [installation instructions] to install.

On the CentOS 7 system
----------------------
You need to install IxNetworkTclClient$(VER_NUM)Linux.bin.tgz.

On the IXIA client software system
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
- Find the IxNetwork TCL server app
  (start -> All Programs -> IXIA -> IxNetwork -> IxNetwork_$(VER_NUM) -> IxNetwork TCL Server)
- Right click on IxNetwork TCL Server and select properties
- Under the shortcut tab, in the Target dialogue box, make sure there is the argument
  "-tclport xxxx" where xxxx is your port number (take note of this port number, as you will
  need it for the 10_custom.conf file).

.. Figure:: ../images/IXIA1.png

- Hit Ok and start the TCL server application.

There are several configuration options specific to the IxNetwork traffic generator
from IXIA. It is essential to set them correctly before VSPERF is executed.

A detailed description of the options follows:

* TRAFFICGEN_IXNET_MACHINE - IP address of the server where IxNetwork TCL Server is running
* TRAFFICGEN_IXNET_PORT - PORT where IxNetwork TCL Server is accepting connections
* TRAFFICGEN_IXNET_USER - username which will be used during communication with IxNetwork
  TCL Server and the IXIA chassis
* TRAFFICGEN_IXIA_HOST - IP address of the IXIA traffic generator chassis
* TRAFFICGEN_IXIA_CARD - identification of the card with dedicated ports at the IXIA chassis
* TRAFFICGEN_IXIA_PORT1 - identification of the first dedicated port at TRAFFICGEN_IXIA_CARD
  at the IXIA chassis; VSPERF uses two separate ports for traffic generation. In case of
  unidirectional traffic, it is essential to correctly connect the 1st IXIA port to the 1st NIC
  at the DUT, i.e. to the first PCI handle from the WHITELIST_NICS list. Otherwise traffic may
  not be able to pass through the vSwitch.
* TRAFFICGEN_IXIA_PORT2 - identification of the second dedicated port at TRAFFICGEN_IXIA_CARD
  at the IXIA chassis; VSPERF uses two separate ports for traffic generation. In case of
  unidirectional traffic, it is essential to correctly connect the 2nd IXIA port to the 2nd NIC
  at the DUT, i.e. to the second PCI handle from the WHITELIST_NICS list. Otherwise traffic may
  not be able to pass through the vSwitch.
* TRAFFICGEN_IXNET_LIB_PATH - path to the DUT-specific installation of the IxNetwork TCL API
* TRAFFICGEN_IXNET_TCL_SCRIPT - name of the TCL script which VSPERF will use for
  communication with the IXIA TCL server
* TRAFFICGEN_IXNET_TESTER_RESULT_DIR - folder accessible from IxNetwork TCL server,
  where test results are stored, e.g. ``c:/ixia_results``; see test-results-share_
* TRAFFICGEN_IXNET_DUT_RESULT_DIR - directory accessible from the DUT, where test
  results from IxNetwork TCL server are stored, e.g. ``/mnt/ixia_results``; see
  test-results-share_

.. _test-results-share:

Test results share
~~~~~~~~~~~~~~~~~~

VSPERF is not able to retrieve test results via the TCL API directly. Instead, all test
results are stored at the IxNetwork TCL server. Results are stored in the folder defined by
the ``TRAFFICGEN_IXNET_TESTER_RESULT_DIR`` configuration parameter. The content of this
folder must be shared (e.g. via the samba protocol) between the TCL Server and the DUT,
where VSPERF is executed. VSPERF expects that test results will be available in the
directory configured by the ``TRAFFICGEN_IXNET_DUT_RESULT_DIR`` configuration parameter.

Example of sharing configuration:

* Create a new folder at the IxNetwork TCL server machine, e.g. ``c:\ixia_results``
* Modify the sharing options of the ``ixia_results`` folder to share it with everybody
* Create a new directory at the DUT, where the shared directory with results
  will be mounted, e.g. ``/mnt/ixia_results``
* Update your custom VSPERF configuration file as follows:

  .. code-block:: python

      TRAFFICGEN_IXNET_TESTER_RESULT_DIR = 'c:/ixia_results'
      TRAFFICGEN_IXNET_DUT_RESULT_DIR = '/mnt/ixia_results'

  Note: It is essential to use forward slashes '/' also in the path
  configured by the ``TRAFFICGEN_IXNET_TESTER_RESULT_DIR`` parameter.
* Install the cifs-utils package,

  e.g. on an RPM-based Linux distribution:

  .. code-block:: console

      yum install cifs-utils
* Mount the shared directory so VSPERF can access test results,

  e.g. by running the mount command manually or by adding a new record into ``/etc/fstab``:

  .. code-block:: console

      mount -t cifs //_TCL_SERVER_IP_OR_FQDN_/ixia_results /mnt/ixia_results \
        -o file_mode=0777,dir_mode=0777,nounix

It is recommended to verify that any new file inserted into the ``c:/ixia_results`` folder
is visible at the DUT inside the ``/mnt/ixia_results`` directory.

Cloning and building src dependencies
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In order to run VSPERF, you will need to download DPDK and OVS. You can do this manually and
build them in a preferred location, or you could use vswitchperf/src. The vswitchperf/src
directory contains makefiles that will allow you to clone and build the libraries that VSPERF
depends on, such as DPDK and OVS. To clone and build simply:
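The command block for this step is not preserved in this copy of the guide. As a sketch,
assuming the standard vswitchperf layout where the top-level makefile in ``src`` clones and
builds all dependencies:

.. code-block:: console

    $ cd src
    $ make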

To delete a src subdirectory and its contents to allow you to re-clone, simply use:
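The corresponding command block is missing here; based on the makefiles shipped in
``vswitchperf/src``, the cleanup target is expected to look like the following (the exact
target name may differ in your checkout):

.. code-block:: console

    $ cd src
    $ make clobber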

Configure the ``./conf/10_custom.conf`` file
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The supplied ``10_custom.conf`` file must be modified, as it contains configuration items for
which there are no reasonable default values.

The configuration items that can be added are not limited to the initial contents. Any
configuration item mentioned in any .conf file in the ``./conf`` directory can be added, and
that item will be overridden by the custom configuration value.

Using a custom settings file
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Alternatively, a custom settings file can be passed to `vsperf` via the `--conf-file` argument.

.. code-block:: console

    $ ./vsperf --conf-file <path_to_settings_py> ...

Note that configuration passed in via the environment (`--load-env`) or via another command line
argument will override both the default and your custom configuration files. This
"priority hierarchy" can be described like so (1 = max priority):

1. Command line arguments
2. Environment variables
3. Configuration file(s)

Before running any tests, make sure you have passwordless sudo access by adding the following
line to /etc/sudoers:

.. code-block:: console

    username ALL=(ALL) NOPASSWD: ALL

The username in the example above should be replaced with a real username.

To list the available tests:

.. code-block:: console

    $ ./vsperf --list-tests

To run a group of tests, for example all tests with a name containing 'RFC2544':

.. code-block:: console

    $ ./vsperf --conf-file=user_settings.py --tests="RFC2544"

To run all tests:

.. code-block:: console

    $ ./vsperf --conf-file=user_settings.py

Some tests allow for configurable parameters, including test duration (in seconds) as well as
packet sizes (in bytes).

.. code-block:: console

    $ ./vsperf --conf-file user_settings.py \
        --test-param "rfc2544_duration=10;packet_sizes=128"

For all available options, check out the help dialog:
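The help invocation itself is not shown in this copy; with the standard vsperf command-line
interface it is:

.. code-block:: console

    $ ./vsperf --help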

Available tests in VSPERF include:

* phy2phy_tput_mod_vlan
* phy2phy_scalability

VSPERF modes of operation
-------------------------

VSPERF can be run in different modes. By default it will configure the vSwitch,
traffic generator and VNF. However, it can be used just for configuration
and execution of the traffic generator. Another option is execution of all
components except the traffic generator itself.

The mode of operation is driven by the configuration parameter -m or --mode:

.. code-block:: console

    -m MODE, --mode MODE  vsperf mode of operation;
        "normal" - execute vSwitch, VNF and traffic generator
        "trafficgen" - execute only traffic generator
        "trafficgen-off" - execute vSwitch and VNF
        "trafficgen-pause" - execute vSwitch and VNF but wait before traffic transmission

In case VSPERF is executed in "trafficgen" mode, the configuration
of the traffic generator can be modified through the ``TRAFFIC`` dictionary passed to the
``--test-params`` option. It is not needed to specify all values of the ``TRAFFIC``
dictionary. It is sufficient to specify only the values which should be changed.
A detailed description of the ``TRAFFIC`` dictionary can be found at
:ref:`configuration-of-traffic-dictionary`.

Example of execution of VSPERF in "trafficgen" mode:

.. code-block:: console

    $ ./vsperf -m trafficgen --trafficgen IxNet --conf-file vsperf.conf \
      --test-params "TRAFFIC={'traffic_type':'rfc2544_continuous','bidir':'False','framerate':60}"

================================
Packet Forwarding Test Scenarios
================================
KVMFORNFV currently implements three scenarios as part of testing:

* Host scenario
* Guest scenario
* SRIOV scenario

Packet Forwarding Host Scenario
-------------------------------
Here the Host is NODE-2. It has VSPERF installed and is properly configured to use the IXIA
traffic generator by providing the IXIA card, ports and lib paths along with the IP.
Please refer to figure.2.

.. Figure:: ../images/Host_Scenario.png

Packet Forwarding Guest Scenario
--------------------------------
Here the guest is a Virtual Machine (VM) launched by using a modified CentOS image (provided
by vsperf) on Node-2 (Host) using QEMU. In this scenario, the packet is initially forwarded
to the Host, which then forwards it to the launched guest. The time taken by the packet to
reach the IXIA traffic generator via Host and Guest is calculated and published as a test
result of this scenario.

.. Figure:: ../images/Guest_Scenario.png

Packet Forwarding SRIOV Scenario
--------------------------------
Unlike the packet-forwarding-to-Guest-via-Host scenario, here the packet generated at the IXIA
is directly forwarded to the Guest VM launched on the Host by implementing an SR-IOV interface
at the NIC level of the Host, i.e., Node-2. The time taken by the packet to reach the IXIA
traffic generator is calculated and published as a test result for this scenario.
SRIOV-support_ below details how to use SR-IOV.

.. Figure:: ../images/SRIOV_Scenario.png

Using vfio_pci with DPDK
------------------------

To use vfio with DPDK instead of igb_uio, add the following parameter into your custom
configuration file:

.. code-block:: python

    PATHS['dpdk']['src']['modules'] = ['uio', 'vfio-pci']

**NOTE:** In case DPDK is installed from a binary package, please
set ``PATHS['dpdk']['bin']['modules']`` instead.

**NOTE:** Please ensure that Intel VT-d is enabled in the BIOS.

**NOTE:** Please ensure your boot/grub parameters include:

.. code-block:: console

    iommu=pt intel_iommu=on
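On the CentOS 7 host used here, these parameters are typically applied by appending them to
``GRUB_CMDLINE_LINUX`` in ``/etc/default/grub`` and regenerating the grub configuration; a
sketch (the grub.cfg path is the CentOS 7 BIOS default, adjust for EFI systems):

.. code-block:: console

    # Append 'iommu=pt intel_iommu=on' to GRUB_CMDLINE_LINUX in /etc/default/grub, then:
    $ sudo grub2-mkconfig -o /boot/grub2/grub.cfg
    $ sudo reboot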

To check that IOMMU is enabled on your platform:

.. code-block:: console

    $ dmesg | grep -e IOMMU
    [ 0.000000] Intel-IOMMU: enabled
    [ 0.139882] dmar: IOMMU 0: reg_base_addr fbffe000 ver 1:0 cap d2078c106f0466 ecap f020de
    [ 0.139888] dmar: IOMMU 1: reg_base_addr ebffc000 ver 1:0 cap d2078c106f0466 ecap f020de
    [ 0.139893] IOAPIC id 2 under DRHD base 0xfbffe000 IOMMU 0
    [ 0.139894] IOAPIC id 0 under DRHD base 0xebffc000 IOMMU 1
    [ 0.139895] IOAPIC id 1 under DRHD base 0xebffc000 IOMMU 1
    [ 3.335744] IOMMU: dmar0 using Queued invalidation
    [ 3.335746] IOMMU: dmar1 using Queued invalidation

.. _SRIOV-support:

Using SRIOV support
-------------------
To use the virtual functions of a NIC with SRIOV support, use the extended form
of the NIC PCI slot definition:

.. code-block:: python

    WHITELIST_NICS = ['0000:03:00.0|vf0', '0000:03:00.1|vf3']

Where ``vf`` is an indication of virtual function usage and the following
number defines the VF to be used. In case VF usage is detected,
vswitchperf will enable SRIOV support for the given card and it will
detect the PCI slot numbers of the selected VFs.

So in the example above, one VF will be configured for NIC '0000:03:00.0'
and four VFs will be configured for NIC '0000:03:00.1'. Vswitchperf
will detect the PCI addresses of the selected VFs and will use them during
test execution.

At the end of vswitchperf execution, SRIOV support will be disabled.
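Vswitchperf enables and disables the VFs automatically, but it can be useful to verify
SR-IOV capability by hand. A rough sketch using the standard sysfs interface (the PCI
address below is only the example from above; substitute your own device):

.. code-block:: console

    # Create 4 virtual functions on the example NIC and list them:
    $ echo 4 | sudo tee /sys/bus/pci/devices/0000:03:00.0/sriov_numvfs
    $ lspci | grep -i "Virtual Function"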

SRIOV support is generic and can be used in different testing scenarios. For example:

* vSwitch tests with or without DPDK support, to verify the impact
  of VF usage on vSwitch performance
* tests without a vSwitch, where traffic is forwarded directly
  between VF interfaces by a packet forwarder (e.g. the testpmd application)
* tests without a vSwitch, where the VM accesses VF interfaces directly
  by PCI passthrough, to measure raw VM throughput performance.