.. This work is licensed under a Creative Commons Attribution 4.0 International License.
.. http://creativecommons.org/licenses/by/4.0
.. (c) OPNFV, Intel Corporation, AT&T and others.

.. _vsperf-installation:
Downloading vswitchperf
-----------------------
Vswitchperf can be downloaded from its official git repository, which is
hosted by OPNFV. It is necessary to install ``git`` at your DUT before downloading
vswitchperf. Installation of ``git`` is specific to the packaging system used by
the Linux OS installed at the DUT.

Example of installation of the git package and its dependencies:

* in case of OS based on RedHat Linux:

  .. code-block:: console

     sudo yum install git

* in case of Ubuntu or Debian:

  .. code-block:: console

     sudo apt-get install git

After ``git`` is successfully installed at the DUT, vswitchperf can be downloaded
as follows:

.. code-block:: console

   git clone http://git.opnfv.org/vswitchperf

The last command will create a directory ``vswitchperf`` with a local copy of the
vswitchperf repository.
Supported Operating Systems
---------------------------

* Fedora 24 (kernel 4.8 requires DPDK 16.11 and newer)
* Fedora 25 (kernel 4.9 requires DPDK 16.11 and newer)
* RedHat 7.2 Enterprise Linux
* RedHat 7.3 Enterprise Linux
* Ubuntu 16.10 (kernel 4.8 requires DPDK 16.11 and newer)
Supported vSwitches
-------------------

The vSwitch must support Open Flow 1.3 or greater.

* Open vSwitch with DPDK support
* TestPMD application from DPDK (supports p2p and pvp scenarios)

Supported Hypervisors
---------------------

* Qemu version 2.3 or greater (version 2.5.0 is recommended)
Supported VNFs
--------------

In theory, it is possible to use any VNF image which is compatible
with the supported hypervisor. However, such a VNF must ensure that the
appropriate number of network interfaces is configured and that traffic is
properly forwarded among them. New vswitchperf users are recommended to start
with the official vloop-vnf_ image, which is maintained by the vswitchperf
community.
The official VM image is called vloop-vnf and it is available for free download
from the OPNFV artifactory. This image is based on the Linux Ubuntu distribution
and it supports several applications for traffic forwarding.

The vloop-vnf image can be downloaded to the DUT, for example by ``wget``:

.. code-block:: console

   wget http://artifacts.opnfv.org/vswitchperf/vnf/vloop-vnf-ubuntu-14.04_20160823.qcow2

**NOTE:** In case ``wget`` is not installed at your DUT, you can install it at an
RPM based system by ``sudo yum install wget`` or at a DEB based system by
``sudo apt-get install wget``.

Changelog of vloop-vnf:
* `vloop-vnf-ubuntu-14.04_20160823`_

  * only 1 NIC is configured by default to speed up boot with 1 NIC setup
  * security updates applied

* `vloop-vnf-ubuntu-14.04_20160804`_

  * Linux kernel 4.4.0 installed
  * libnuma-dev installed
  * security updates applied

* `vloop-vnf-ubuntu-14.04_20160303`_

  * snmpd service is disabled by default to avoid error messages during VM boot
  * security updates applied

* `vloop-vnf-ubuntu-14.04_20151216`_

  * version with development tools required for build of DPDK and l2fwd
.. _vsperf-installation-script:

The test suite requires Python 3.3 or newer and relies on a number of other
system and python packages. These need to be installed for the test suite
to run.

Installation of the required packages, preparation of the Python 3 virtual
environment and compilation of OVS, DPDK and QEMU are performed by the
script **systems/build_base_machine.sh**. It should be executed under the
user account which will be used for vsperf execution.

**NOTE:** Password-less sudo access must be configured for the given
user account before the script is executed.

.. code-block:: console

   $ cd systems
   $ ./build_base_machine.sh

**NOTE:** You don't need to go into any of the systems subdirectories;
simply run the top level **build_base_machine.sh** and your OS will be detected
automatically.
The script **build_base_machine.sh** will install all the vsperf dependencies
in terms of system packages, Python 3.x and required Python modules.
In case of CentOS 7 or RHEL it will install Python 3.3 from an additional
repository provided by Software Collections (`a link`_). The installation script
will also use `virtualenv`_ to create a vsperf virtual environment, which is
isolated from the default Python environment. This environment will reside in a
directory called **vsperfenv** in $HOME. It ensures that the system wide Python
installation is not modified or broken by the VSPERF installation. The complete
list of Python packages installed inside the virtualenv can be found in the file
``requirements.txt``, which is located at the vswitchperf repository.

**NOTE:** For RHEL 7.3 Enterprise and CentOS 7.3, OVS Vanilla is not
built from upstream source due to kernel incompatibilities. Please see the
instructions in the vswitchperf_design document for details on configuring
OVS Vanilla for binary package usage.
.. _vpp-installation:

VPP installation is now included as part of the VSPerf installation scripts.

In case of an error message about a missing file, such as
"Couldn't open file /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7", you can resolve this
issue by simply downloading the file:

.. code-block:: console

   $ wget https://dl.fedoraproject.org/pub/epel/RPM-GPG-KEY-EPEL-7
You will need to activate the virtual environment every time you start a
new shell session. Its activation is specific to your OS:

* in case of CentOS 7 and RHEL:

  .. code-block:: console

     $ scl enable python33 bash
     $ source $HOME/vsperfenv/bin/activate

* in case of Fedora and Ubuntu:

  .. code-block:: console

     $ source $HOME/vsperfenv/bin/activate

After the virtual environment is configured, VSPERF can be used. For example:

.. code-block:: console

   (vsperfenv) $ cd vswitchperf
   (vsperfenv) $ ./vsperf --help

In case environment activation fails with an error, e.g. when running:

.. code-block:: console

   $ source $HOME/vsperfenv/bin/activate

then check what type of shell you are using:

.. code-block:: console

   $ echo $SHELL

See what scripts are available in $HOME/vsperfenv/bin:

.. code-block:: console

   $ ls $HOME/vsperfenv/bin/
   activate  activate.csh  activate.fish  activate_this.py

and source the appropriate script:

.. code-block:: console

   $ source bin/activate.csh
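The shell check above can be wrapped in a small helper. The sketch below is
illustrative only: the function name and the default ``$HOME/vsperfenv``
location are assumptions, not part of vsperf itself.

```shell
#!/bin/sh
# Pick the right virtualenv activation script for the current shell.
# Illustrative sketch only; assumes the default $HOME/vsperfenv location.
pick_activate() {
    venv_bin="$1"   # path to the virtualenv bin directory
    shell_name="$2" # e.g. the output of: basename "$SHELL"
    case "$shell_name" in
        csh|tcsh) echo "$venv_bin/activate.csh" ;;
        fish)     echo "$venv_bin/activate.fish" ;;
        *)        echo "$venv_bin/activate" ;;   # bash, sh, zsh, ...
    esac
}

# Example: print the script a csh user should source
pick_activate "$HOME/vsperfenv/bin" "csh"
```

A csh user would then run ``source`` on the printed path instead of the plain
``activate`` script.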
Working Behind a Proxy
======================

If you're behind a proxy, you'll likely want to configure this before
running any of the above. For example:

.. code-block:: console

   export http_proxy=proxy.mycompany.com:123
   export https_proxy=proxy.mycompany.com:123
.. _a link: http://www.softwarecollections.org/en/scls/rhscl/python33/
.. _virtualenv: https://virtualenv.readthedocs.org/en/latest/
.. _vloop-vnf-ubuntu-14.04_20160823: http://artifacts.opnfv.org/vswitchperf/vnf/vloop-vnf-ubuntu-14.04_20160823.qcow2
.. _vloop-vnf-ubuntu-14.04_20160804: http://artifacts.opnfv.org/vswitchperf/vnf/vloop-vnf-ubuntu-14.04_20160804.qcow2
.. _vloop-vnf-ubuntu-14.04_20160303: http://artifacts.opnfv.org/vswitchperf/vnf/vloop-vnf-ubuntu-14.04_20160303.qcow2
.. _vloop-vnf-ubuntu-14.04_20151216: http://artifacts.opnfv.org/vswitchperf/vnf/vloop-vnf-ubuntu-14.04_20151216.qcow2

Bind Tools
----------
VSPerf supports the default DPDK bind tool, but also supports driverctl. The
driverctl tool is a newer tool that allows driver bindings to be
persistent across reboots. The driverctl tool is not provided by VSPerf, but can
be downloaded from upstream sources. Once installed, set the bind tool to
driverctl to allow VSPERF to correctly bind cards for DPDK tests:

.. code-block:: python

   PATHS['dpdk']['src']['bind-tool'] = 'driverctl'
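driverctl itself is typically driven through its ``set-override`` and
``unset-override`` subcommands. The sketch below is a dry run that only prints
the commands it would execute, so nothing is actually bound; the PCI address is
a hypothetical example value.

```shell
#!/bin/sh
# Dry-run sketch: print the driverctl commands that would bind a NIC
# to a userspace driver for DPDK and later restore the kernel driver.
show_bind_commands() {
    pci="$1"     # PCI address of the NIC, e.g. 0000:05:00.0 (example value)
    driver="$2"  # userspace driver, e.g. vfio-pci
    echo "driverctl set-override $pci $driver"   # persistent across reboots
    echo "driverctl unset-override $pci"         # revert to kernel driver
}

show_bind_commands "0000:05:00.0" "vfio-pci"
```

On a real DUT the printed ``set-override`` command would be run (as root) once
per DPDK NIC before starting tests.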
Hugepage Configuration
----------------------

Systems running vsperf with either dpdk and/or tests with guests must configure
hugepage amounts to support running these configurations. It is recommended
to configure 1GB hugepages as the pagesize.

The amount of hugepages needed depends on your configuration files in vsperf.
Each guest image requires 2048 MB by default, according to the default settings
in the ``04_vnf.conf`` file.

.. code-block:: python

   GUEST_MEMORY = ['2048']

The dpdk startup parameters also require an amount of hugepages, depending on
your configuration in the ``02_vswitch.conf`` file.

.. code-block:: python

   DPDK_SOCKET_MEM = ['1024', '0']

**NOTE:** Option ``DPDK_SOCKET_MEM`` is used by all vSwitches with DPDK support,
i.e. Open vSwitch, VPP and TestPMD.
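As a rough worked example of how these defaults add up with 1GB hugepages (a
sketch only; the figures simply mirror the defaults quoted above, and your own
configuration may differ):

```shell
#!/bin/sh
# Rough hugepage budget for the default settings above, using 1GB pages:
# one guest at 2048 MB plus DPDK socket memory of 1024 MB on socket 0.
guest_mb=2048        # from GUEST_MEMORY
dpdk_socket0_mb=1024 # from DPDK_SOCKET_MEM
page_mb=1024         # 1GB hugepages

total_mb=$((guest_mb + dpdk_socket0_mb))
pages=$(( (total_mb + page_mb - 1) / page_mb ))  # round up to whole pages
echo "Need at least $pages x 1GB hugepages ($total_mb MB)"
```

So the stock configuration needs at least three 1GB hugepages, before any
extra headroom is added.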
VSPerf will verify that the hugepage amounts are free before executing test
environments. In case the hugepage amounts are not free, test initialization
will fail and testing will stop.

**NOTE:** In some instances, on a test failure dpdk resources may not
release hugepages used in the dpdk configuration. It is recommended to configure
a few extra hugepages to prevent a false detection by VSPerf that not enough
free hugepages are available to execute the test environment. Normally dpdk will
reuse previously allocated hugepages upon initialization.
Depending on your OS selection, configuration of hugepages may vary. Please
refer to your OS documentation to set hugepages correctly. It is recommended to
set the required amount of hugepages to be allocated by default on reboots.

Information on hugepage requirements for dpdk can be found at
http://dpdk.org/doc/guides/linux_gsg/sys_reqs.html

You can review your hugepage amounts by executing the following command:

.. code-block:: console

   cat /proc/meminfo | grep Huge

If no hugepages are available, vsperf will try to allocate some automatically.
Allocation is controlled by the ``HUGEPAGE_RAM_ALLOCATION`` configuration
parameter in the ``02_vswitch.conf`` file. The default is 2GB, resulting in
either two 1GB hugepages or 1024 2MB hugepages.
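One common way to have hugepages allocated by default on reboots is via kernel
boot parameters. The fragment below is an illustrative example for GRUB based
systems; the page count is a placeholder, so consult your OS documentation for
the exact procedure.

```shell
# Example /etc/default/grub fragment (illustrative values): pre-allocate
# 8 x 1GB hugepages at boot and make 1GB the default hugepage size.
GRUB_CMDLINE_LINUX="... default_hugepagesz=1G hugepagesz=1G hugepages=8"
# Then regenerate the grub config and reboot, e.g. on RPM based systems:
#   grub2-mkconfig -o /boot/grub2/grub.cfg
```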
Tuning Considerations
---------------------

With the large number of tuning guides available online on how to properly
tune a DUT, it becomes difficult to achieve consistent numbers for DPDK testing.
VSPerf recommends a simple approach that has been tested by different companies
to achieve proper CPU isolation.

The idea behind CPU isolation when running DPDK based tests is to achieve as few
interruptions to a PMD process as possible. There is now a utility available on
most Linux systems to achieve proper CPU isolation with very little effort and
customization. The tool is called tuned-adm and is most likely installed by
default on the Linux DUT.

VSPerf recommends the latest tuned-adm package, which can be downloaded from the
following link:

http://www.tuned-project.org/2017/04/27/tuned-2-8-0-released/

Follow the instructions to install the latest tuned-adm onto your system.
Current RHEL customers should already have the most current version; you just
need to install the cpu-partitioning profile:

.. code-block:: console

   yum install -y tuned-profiles-cpu-partitioning.noarch
Proper CPU isolation starts with knowing which NUMA node your NIC is installed
on. You can identify this by checking the output of the following command:

.. code-block:: console

   cat /sys/class/net/<NIC NAME>/device/numa_node

You can then use utilities such as lscpu or cpu_layout.py, which is located in
the src dpdk area of VSPerf. These tools will show the CPU layout of which
cores/hyperthreads are located on the same NUMA node.
Determine which CPUs/hyperthreads will be used for PMD threads and for the VCPUs
of VNFs. Then modify /etc/tuned/cpu-partitioning-variables.conf and add the
CPUs into the isolated_cores variable in some form of x-y or x,y,z or x-y,z,
etc. Then apply the profile:

.. code-block:: console

   tuned-adm profile cpu-partitioning

After applying the profile, reboot your system.
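For illustration, if cores 2-19 were chosen for PMD threads and VNF VCPUs (a
hypothetical layout; substitute your own isolated core list), the variables
file would contain:

```shell
# /etc/tuned/cpu-partitioning-variables.conf (illustrative core list)
isolated_cores=2-19
```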
After rebooting the DUT, you can verify the profile is active by running:

.. code-block:: console

   tuned-adm active

Now you should have proper CPU isolation active and can achieve consistent
results with DPDK based tests.
The last consideration is that, when running TestPMD inside of a VNF, it may
make sense to enable enough cores to run a PMD thread on a separate core/HT. To
achieve this, set the number of VCPUs to 3 and enable enough nb-cores in the
TestPMD config. You can modify these options in the conf files:

.. code-block:: python

   GUEST_TESTPMD_PARAMS = ['-l 0,1,2 -n 4 --socket-mem 512 -- '
                           '--burst=64 -i --txqflags=0xf00 '
                           '--disable-hw-vlan --nb-cores=2']

Verify you set the VCPU core locations appropriately on the same NUMA node as
your PMD mask for OVS-DPDK.
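As an illustrative sketch of such pinning (the core numbers are placeholders,
and the ``GUEST_CORE_BINDING`` parameter name should be verified against your
vsperf conf files), the three guest VCPUs could be bound to cores on the NIC's
NUMA node:

```python
# Illustrative sketch: pin the 3 guest VCPUs to cores 4, 5 and 6.
# Placeholder core numbers; pick cores on the same NUMA node as your
# NIC and as the cores in your OVS-DPDK PMD mask.
GUEST_CORE_BINDING = [('4', '5', '6')]
```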