.. This work is licensed under a Creative Commons Attribution 4.0 International
.. License.
.. http://creativecommons.org/licenses/by/4.0
.. (c) OPNFV, Intel Corporation and others.
The device under test (DUT) consists of a system with the following:

* A single or dual processor and PCH chip, except for System on Chip (SoC) cases
* DRAM memory size and frequency (normally single DIMM per channel)
* Specific Intel Network Interface Cards (NICs)
* BIOS settings, noting those that differ from the default settings
* DPDK build configuration settings, and commands used for tests

Connected to the DUT is an IXIA* or a software traffic generator such as pktgen
or TRex, a simulation platform that generates packet traffic to the DUT ports
and measures throughput/latency on the tester side.
Below are the supported/tested (:term:`VNF`) deployment types.

.. image:: images/deploy_type.png
   :alt: SampleVNF supported topology
Hardware & Software Ingredients
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

+-----------+------------------+
| Item      | Description      |
+-----------+------------------+
| OS        | Ubuntu 16.04 LTS |
+-----------+------------------+
| kernel    | 4.4.0-34-generic |
+-----------+------------------+
Boot and BIOS settings:

+------------------+---------------------------------------------------+
| Boot settings    | default_hugepagesz=1G hugepagesz=1G hugepages=16  |
|                  | hugepagesz=2M hugepages=2048 isolcpus=1-11,22-33  |
|                  | nohz_full=1-11,22-33 rcu_nocbs=1-11,22-33         |
|                  | Note: nohz_full and rcu_nocbs disable Linux*      |
|                  | kernel timer interrupts, which is important       |
|                  | for performance.                                  |
+------------------+---------------------------------------------------+
| BIOS             | CPU Power and Performance Policy <Performance>    |
|                  | CPU C-state Disabled                              |
|                  | CPU P-state Disabled                              |
|                  | Enhanced Intel® Speedstep® Tech Disabled          |
|                  | Hyper-Threading Technology (If supported) Enabled |
|                  | Virtualization Technology Enabled                 |
|                  | Coherency Enabled                                 |
|                  | Turbo Boost Disabled                              |
+------------------+---------------------------------------------------+
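After a reboot, the boot settings above can be checked against /proc/cmdline. The following is a small sketch; the check_cmdline helper is illustrative only and on the DUT you would feed it "$(cat /proc/cmdline)":

```shell
#!/bin/sh
# Verify that a kernel command line contains the tuning options from the
# table above. check_cmdline is a hypothetical helper for illustration;
# on the DUT you would call: check_cmdline "$(cat /proc/cmdline)"
check_cmdline() {
    cmdline="$1"
    for opt in default_hugepagesz=1G hugepages=16 isolcpus=1-11,22-33 \
               nohz_full=1-11,22-33 rcu_nocbs=1-11,22-33; do
        case "$cmdline" in
            *"$opt"*) ;;                          # option present
            *) echo "missing: $opt"; return 1 ;;  # option absent
        esac
    done
    echo "all expected boot options present"
}

check_cmdline "default_hugepagesz=1G hugepagesz=1G hugepages=16 hugepagesz=2M hugepages=2048 isolcpus=1-11,22-33 nohz_full=1-11,22-33 rcu_nocbs=1-11,22-33"
```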
Network Topology for testing VNFs
---------------------------------
The Ethernet cables should be connected between the traffic generator and the
VNF server (BM, SR-IOV or OVS) setup, based on the test profile.

The connectivity could be:

1) Single port pair: one pair of ports is used for traffic

   e.g. Single port pair: link0 and link1 of the VNF are used

   ::

      TG:port 0 <------> VNF:Port 0
      TG:port 1 <------> VNF:Port 1

2) Multi port pair: more than one pair of ports is used for traffic

   e.g. Two port pairs: link0, link1, link2 and link3 of the VNF are used

   ::

      TG:port 0 <------> VNF:Port 0
      TG:port 1 <------> VNF:Port 1
      TG:port 2 <------> VNF:Port 2
      TG:port 3 <------> VNF:Port 3

   For correlated traffic, use the below configuration:

   ::

      TG_1:port 0 <------> VNF:Port 0
      VNF:Port 1  <------> TG_2:port 0 (UDP Replay)

   (TG_2 (UDP_Replay) reflects all the traffic back on the given port.)
Refer to http://fast.dpdk.org/doc/pdf-guides/ to set up the DUT for the VNF to
run.

* Standalone Virtualization - PHY-VM-PHY

  * SRIOV: refer to the below link to set up SR-IOV:
    https://software.intel.com/en-us/articles/using-sr-iov-to-share-an-ethernet-port-among-multiple-vms

  * OVS_DPDK: refer to the below links to set up OVS-DPDK:
    http://docs.openvswitch.org/en/latest/intro/install/general/
    http://docs.openvswitch.org/en/latest/intro/install/dpdk/

* Openstack: use any OPNFV installer to deploy OpenStack.
Setup Traffic generator
-----------------------

Step 0: Preparing the hardware connection

   Connect the traffic generator and the VNF system back to back, as shown in
   the previous section:

   ::

      TRex port 0 ↔ (VNF Port 0) ↔ (VNF Port 1) ↔ TRex port 1

Step 1: Setting up the traffic generator (TRex)

TRex Software preparations
^^^^^^^^^^^^^^^^^^^^^^^^^^
* Install the OS (bare-metal Linux, not a VM!)
* Obtain the latest TRex package: wget https://trex-tgn.cisco.com/trex/release/latest
* Untar the package: tar -xzf latest
* Change directory to the unzipped TRex directory
* Create the config file using the command: sudo python dpdk_setup_ports.py -i
  (in the case of Ubuntu 16, python3 is needed).
  See the "config creation" paragraph for detailed step-by-step instructions.
  (Refer to https://trex-tgn.cisco.com/trex/doc/trex_stateless_bench.html)
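The preparation steps above can be sketched as a shell session. This is a sketch only: the download URL comes from the list above, the network-touching steps are left commented so they can be reviewed before running, and the extracted directory name varies by TRex release:

```shell
#!/bin/sh
# Sketch of the TRex preparation steps above (assumes bare-metal Linux
# with wget and tar available; run the last step as root on real hardware).
TREX_URL="https://trex-tgn.cisco.com/trex/release/latest"
echo "TRex package URL: $TREX_URL"
# wget --no-cache "$TREX_URL" -O latest        # obtain the latest TRex package
# tar -xzf latest                              # untar the package
# TREX_DIR=$(tar -tzf latest | head -1 | cut -d/ -f1)
# cd "$TREX_DIR"
# sudo python dpdk_setup_ports.py -i           # interactive config creation
```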
Step 2: Procedure to build SampleVNFs

a) Clone the SampleVNF project repository: git clone https://git.opnfv.org/samplevnf

b) Build all the VNFs

* Interactive options:

  ::

     ./tools/vnf_build.sh -i

  Follow the steps on the screen from option [1] to [9] and select option [9]
  to build the VNFs. It will automatically download the selected DPDK version
  and any required patches, set up everything and build the VNFs.

  Following are the options for setup:

  ::

     ----------------------------------------------------------
      Step 1: Environment setup.
     ----------------------------------------------------------
     [1] Check OS and network connection
     [2] Select DPDK RTE version

     ----------------------------------------------------------
      Step 2: Download and Install
     ----------------------------------------------------------
     [3] Agree to download
     [4] Download packages
     [5] Download DPDK zip
     [6] Build and Install DPDK
     [8] Download civetweb

     ----------------------------------------------------------
      Step 3: Build VNFs
     ----------------------------------------------------------
     [9] Build all VNFs (vACL, vCGNAPT, vFW, UDP_Replay, DPPD-PROX)
* Non-Interactive options:

  ::

     ./tools/vnf_build.sh -s -d=<dpdk version eg 17.02>

Manual steps (optional):

1) Download a supported DPDK version from dpdk.org:
   http://dpdk.org/browse/dpdk/snapshot/dpdk-$DPDK_RTE_VER.zip
   Unzip dpdk-$DPDK_RTE_VER.zip and apply the DPDK patches only in the case of
   DPDK 16.04 (not required for other DPDK versions).

   ::

      make config T=x86_64-native-linuxapp-gcc O=x86_64-native-linuxapp-gcc
      cd x86_64-native-linuxapp-gcc
      make
2) Download civetweb version 1.9 from the following link:
   https://sourceforge.net/projects/civetweb/files/1.9/CivetWeb_V1.9.zip

   ::

      unzip CivetWeb_V1.9.zip
      mv civetweb-master civetweb
3) Set up huge pages:

   For 1G/2M hugepage sizes, for example 1G pages, the size must be specified
   explicitly and can also optionally be set as the default hugepage size for
   the system. For example, to reserve 8G of hugepage memory in the form of
   eight 1G pages, the following options should be passed to the kernel:
   default_hugepagesz=1G hugepagesz=1G hugepages=8 hugepagesz=2M hugepages=2048
4) Add this to the /etc/default/grub configuration file:
   Append "default_hugepagesz=1G hugepagesz=1G hugepages=8 hugepagesz=2M hugepages=2048"
   to the GRUB_CMDLINE_LINUX entry.
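The grub edit above can be sketched as a small script. This is a sketch only: GRUB_FILE defaults here to a local sample file so it can be tried safely, and the update-grub step is an assumption for an Ubuntu-style system; on the DUT you would set GRUB_FILE=/etc/default/grub and review the file before rebooting:

```shell
#!/bin/sh
# Append the hugepage options from step 4) to GRUB_CMDLINE_LINUX (sketch).
# GRUB_FILE defaults to a local sample file for safe experimentation;
# on a real system point it at /etc/default/grub instead.
GRUB_FILE="${GRUB_FILE:-./grub.sample}"
HUGEPAGE_OPTS="default_hugepagesz=1G hugepagesz=1G hugepages=8 hugepagesz=2M hugepages=2048"

# Create a sample file when trying this outside a real system.
[ -f "$GRUB_FILE" ] || printf 'GRUB_CMDLINE_LINUX=""\n' > "$GRUB_FILE"

# Insert the options just before the closing quote of GRUB_CMDLINE_LINUX.
sed -i "s/^GRUB_CMDLINE_LINUX=\"\(.*\)\"/GRUB_CMDLINE_LINUX=\"\1 $HUGEPAGE_OPTS\"/" "$GRUB_FILE"
grep '^GRUB_CMDLINE_LINUX=' "$GRUB_FILE"
# sudo update-grub   # then reboot for the new command line to take effect
```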
5) Set up environment variables:

   ::

      export RTE_SDK=<samplevnf>/dpdk
      export RTE_TARGET=x86_64-native-linuxapp-gcc
      export VNF_CORE=<samplevnf>

   or use ./tools/setenv.sh

6) Build the VNFs:

   ::

      cd <samplevnf>
      make

   or build an individual VNF from its directory under <samplevnf>/VNFs/.

The vFW executable will be created at the following location:
<samplevnf>/VNFs/vFW/build/vFW
Virtual Firewall - How to run
-----------------------------

Step 3: Bind the datapath ports to DPDK

a) Bind the ports to DPDK

   For DPDK versions 17.xx:

   ::

      1) cd <samplevnf>/dpdk
      2) ./usertools/dpdk-devbind.py --status   <-- List the network devices
      3) ./usertools/dpdk-devbind.py -b igb_uio <PCI Port 0> <PCI Port 1>

   More details: http://dpdk.org/doc/guides-17.05/linux_gsg/build_dpdk.html#binding-and-unbinding-network-ports-to-from-the-kernel-modules
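Binding with -b igb_uio assumes the igb_uio kernel module is already loaded. A sketch of locating and loading it; the paths are assumptions based on the build steps earlier in this guide and should be adjusted to your environment:

```shell
#!/bin/sh
# Load the DPDK igb_uio driver before binding ports (sketch).
# RTE_SDK/RTE_TARGET defaults below are assumptions; override them to
# match where DPDK was actually built on your system.
RTE_SDK="${RTE_SDK:-/root/samplevnf/dpdk}"
RTE_TARGET="${RTE_TARGET:-x86_64-native-linuxapp-gcc}"
IGB_UIO_KO="$RTE_SDK/$RTE_TARGET/kmod/igb_uio.ko"
echo "igb_uio module expected at: $IGB_UIO_KO"
# sudo modprobe uio            # generic userspace I/O framework first
# sudo insmod "$IGB_UIO_KO"    # then the DPDK igb_uio driver
```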
b) Prepare the script to enable the VNF to route the packets

   ::

      cd <samplevnf>/VNFs/vFW/config

   Open VFW_SWLB_SinglePortPair_script.tc and replace the bold items based on
   your setting:

   ::

      link 0 config <VNF port 0 IP eg 202.16.100.10> 8
      link 1 config <VNF port 1 IP eg 172.16.40.10> 8

      ; routeadd <net/host> <port #> <ipv4 nhip address in decimal> <Mask>
      routeadd net 0 <traffic generator port 0 IP eg 202.16.100.20> 0xff000000
      routeadd net 1 <traffic generator port 1 IP eg 172.16.40.20> 0xff000000

      ; IPv4 static ARP; disable if dynamic ARP is enabled.
      p 1 arpadd 0 <traffic generator port 0 IP eg 202.16.100.20> <traffic generator port 0 MAC>
      p 1 arpadd 1 <traffic generator port 1 IP eg 172.16.40.20> <traffic generator port 1 MAC>

      p action add 0 accept
      p action add 1 accept

      p action add 0 conntrack
      p action add 1 conntrack
      p action add 2 conntrack
      p action add 3 conntrack

      p vfw add 1 <traffic generator port 0 IP eg 202.16.100.20> 8 <traffic generator port 1 IP eg 172.16.40.20> 8 0 65535 67 69 0 0 2
      p vfw add 2 <traffic generator port 0 IP eg 202.16.100.20> 8 <traffic generator port 1 IP eg 172.16.40.20> 8 0 65535 0 65535 0 0 1
      p vfw add 2 <traffic generator port 1 IP eg 172.16.40.20> 8 <traffic generator port 0 IP eg 202.16.100.20> 8 0 65535 0 65535 0 0 0
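In the routeadd lines above, the mask is given in hex: 0xff000000 corresponds to the 8-bit (/8) prefix used in the link config lines. A small shell sketch of the prefix-to-mask conversion; prefix_to_mask is an illustrative helper, not part of the VNF config syntax:

```shell
#!/bin/sh
# Convert an IPv4 prefix length to the hex netmask used by routeadd.
# prefix_to_mask is a hypothetical helper for illustration only.
prefix_to_mask() {
    prefix="$1"
    # Shift a 32-bit all-ones value left, keeping only the low 32 bits.
    mask=$(( (0xffffffff << (32 - prefix)) & 0xffffffff ))
    printf '0x%08x\n' "$mask"
}

prefix_to_mask 8    # /8, as used in the config above -> 0xff000000
prefix_to_mask 24   # /24 -> 0xffffff00
```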
c) Run the below command to launch the VNF. Please make sure both the hugepages
   and the ports to be used are bound to DPDK.

   ::

      cd <samplevnf>/VNFs/vFW/
      ./build/vFW -p 0x3 -f ./config/VFW_SWLB_SinglePortPair_4Thread.cfg -s ./config/VFW_SWLB_SinglePortPair_script.tc
Step 4: Run the test using the traffic generator

   On the traffic generator system:

   ::

      cd <trex eg v2.28/stl>

   Update bench.py to generate the traffic:

   ::

      class STLBench(object):
          ip_range['src'] = {'start': '<traffic generator port 0 IP eg 202.16.100.20>', 'end': '<traffic generator port 0 IP eg 202.16.100.20>'}
          ip_range['dst'] = {'start': '<traffic generator port 1 IP eg 172.16.40.20>', 'end': '<traffic generator port 1 IP eg 172.16.40.20>'}

   Run the TRex server: sudo ./t-rex-64 -i -c 7

   In another shell, run the TRex console: trex-console
   (The console can be run from another computer with the -s argument; use
   --help for more info. Other options for the TRex client are automation or
   the GUI.)

   In the console, run the "tui" command, and then send the traffic with
   commands like:

   ::

      start -f stl/bench.py -m 50% --port 0 3 -t size=590,vm=var1

   For more details refer to https://trex-tgn.cisco.com/trex/doc/trex_stateless_bench.html
Virtual Access Control List - How to run
----------------------------------------

Step 3: Bind the datapath ports to DPDK

a) Bind the ports to DPDK

   For DPDK versions 17.xx:

   ::

      1) cd <samplevnf>/dpdk
      2) ./usertools/dpdk-devbind.py --status   <-- List the network devices
      3) ./usertools/dpdk-devbind.py -b igb_uio <PCI Port 0> <PCI Port 1>

   More details: http://dpdk.org/doc/guides-17.05/linux_gsg/build_dpdk.html#binding-and-unbinding-network-ports-to-from-the-kernel-modules
b) Prepare the script to enable the VNF to route the packets

   ::

      cd <samplevnf>/VNFs/vACL/config

   Open IPv4_swlb_acl.tc and replace the bold items based on your setting:

   ::

      link 0 config <VNF port 0 IP eg 202.16.100.10> 8
      link 1 config <VNF port 1 IP eg 172.16.40.10> 8

      ; routeadd <net/host> <port #> <ipv4 nhip address in decimal> <Mask>
      routeadd net 0 <traffic generator port 0 IP eg 202.16.100.20> 0xff000000
      routeadd net 1 <traffic generator port 1 IP eg 172.16.40.20> 0xff000000

      ; IPv4 static ARP; disable if dynamic ARP is enabled.
      p 1 arpadd 0 <traffic generator port 0 IP eg 202.16.100.20> <traffic generator port 0 MAC>
      p 1 arpadd 1 <traffic generator port 1 IP eg 172.16.40.20> <traffic generator port 1 MAC>

      p action add 0 accept
      p action add 1 accept

      p action add 0 conntrack
      p action add 1 conntrack
      p action add 2 conntrack
      p action add 3 conntrack

      p acl add 1 <traffic generator port 0 IP eg 202.16.100.20> 8 <traffic generator port 1 IP eg 172.16.40.20> 8 0 65535 67 69 0 0 2
      p acl add 2 <traffic generator port 0 IP eg 202.16.100.20> 8 <traffic generator port 1 IP eg 172.16.40.20> 8 0 65535 0 65535 0 0 1
      p acl add 2 <traffic generator port 1 IP eg 172.16.40.20> 8 <traffic generator port 0 IP eg 202.16.100.20> 8 0 65535 0 65535 0 0 0
c) Run the below command to launch the VNF. Please make sure both the hugepages
   and the ports to be used are bound to DPDK.

   ::

      cd <samplevnf>/VNFs/vACL/
      ./build/vACL -p 0x3 -f ./config/IPv4_swlb_acl_1LB_1t.cfg -s ./config/IPv4_swlb_acl.tc
Step 4: Run the test using the traffic generator

   On the traffic generator system:

   ::

      cd <trex eg v2.28/stl>

   Update bench.py to generate the traffic:

   ::

      class STLBench(object):
          ip_range['src'] = {'start': '<traffic generator port 0 IP eg 202.16.100.20>', 'end': '<traffic generator port 0 IP eg 202.16.100.20>'}
          ip_range['dst'] = {'start': '<traffic generator port 1 IP eg 172.16.40.20>', 'end': '<traffic generator port 1 IP eg 172.16.40.20>'}

   Run the TRex server: sudo ./t-rex-64 -i -c 7

   In another shell, run the TRex console: trex-console
   (The console can be run from another computer with the -s argument; use
   --help for more info. Other options for the TRex client are automation or
   the GUI.)

   In the console, run the "tui" command, and then send the traffic with
   commands like:

   ::

      start -f stl/bench.py -m 50% --port 0 3 -t size=590,vm=var1

   For more details refer to https://trex-tgn.cisco.com/trex/doc/trex_stateless_bench.html
Virtual CGNAPT - How to run
---------------------------

Step 3: Bind the datapath ports to DPDK

a) Bind the ports to DPDK

   For DPDK versions 17.xx:

   ::

      1) cd <samplevnf>/dpdk
      2) ./usertools/dpdk-devbind.py --status   <-- List the network devices
      3) ./usertools/dpdk-devbind.py -b igb_uio <PCI Port 0> <PCI Port 1>

   More details: http://dpdk.org/doc/guides-17.05/linux_gsg/build_dpdk.html#binding-and-unbinding-network-ports-to-from-the-kernel-modules
b) Prepare the script to enable the VNF to route the packets

   ::

      cd <samplevnf>/VNFs/vCGNAPT/config

   Open sample_swlb_2port_2WT.tc and replace the bold items based on your
   setting:

   ::

      link 0 config <VNF port 0 IP eg 202.16.100.10> 8
      link 1 config <VNF port 1 IP eg 172.16.40.10> 8

      ; uncomment to enable static NAPT
      ;p <cgnapt pipeline id> entry addm <prv_ipv4/6> <prv_port> <pub_ip> <pub_port> <phy_port> <ttl> <no_of_entries> <end_prv_port> <end_pub_port>
      ;p 5 entry addm 202.16.100.20 1234 152.16.40.10 1 0 500 65535 1234 65535

      ; routeadd <net/host> <port #> <ipv4 nhip address in decimal> <Mask>
      routeadd net 0 <traffic generator port 0 IP eg 202.16.100.20> 0xff000000
      routeadd net 1 <traffic generator port 1 IP eg 172.16.40.20> 0xff000000

      ; IPv4 static ARP; disable if dynamic ARP is enabled.
      p 1 arpadd 0 <traffic generator port 0 IP eg 202.16.100.20> <traffic generator port 0 MAC>
      p 1 arpadd 1 <traffic generator port 1 IP eg 172.16.40.20> <traffic generator port 1 MAC>

   For dynamic CGNAPT, please use UDP_Replay as one of the traffic generators:

   ::

      (TG1) (port 0) --> (port 0) VNF (CGNAPT) (port 1) --> (port 0) (UDP_Replay)
c) Run the below command to launch the VNF. Please make sure both the hugepages
   and the ports to be used are bound to DPDK.

   ::

      cd <samplevnf>/VNFs/vCGNAPT/
      ./build/vCGNAPT -p 0x3 -f ./config/sample_swlb_2port_2WT.cfg -s ./config/sample_swlb_2port_2WT.tc
Step 4: Run the test using the traffic generator

   On the traffic generator system:

   ::

      cd <trex eg v2.28/stl>

   Update bench.py to generate the traffic:

   ::

      class STLBench(object):
          ip_range['src'] = {'start': '<traffic generator port 0 IP eg 202.16.100.20>', 'end': '<traffic generator port 0 IP eg 202.16.100.20>'}
          ip_range['dst'] = {'start': '<traffic generator port 1 IP eg 172.16.40.20>', 'end': '<public ip eg 152.16.40.10>'}

   Run the TRex server: sudo ./t-rex-64 -i -c 7

   In another shell, run the TRex console: trex-console
   (The console can be run from another computer with the -s argument; use
   --help for more info. Other options for the TRex client are automation or
   the GUI.)

   In the console, run the "tui" command, and then send the traffic with
   commands like:

   ::

      start -f stl/bench.py -m 50% --port 0 3 -t size=590,vm=var1

   For more details refer to https://trex-tgn.cisco.com/trex/doc/trex_stateless_bench.html
UDP_Replay - How to run
-----------------------

Step 3: Bind the datapath ports to DPDK

a) Bind the ports to DPDK

   For DPDK versions 17.xx:

   ::

      1) cd <samplevnf>/dpdk
      2) ./usertools/dpdk-devbind.py --status   <-- List the network devices
      3) ./usertools/dpdk-devbind.py -b igb_uio <PCI Port 0> <PCI Port 1>

   More details: http://dpdk.org/doc/guides-17.05/linux_gsg/build_dpdk.html#binding-and-unbinding-network-ports-to-from-the-kernel-modules
b) Run the below command to launch the VNF. Please make sure both the hugepages
   and the ports to be used are bound to DPDK.

   ::

      cd <samplevnf>/VNFs/UDP_Replay/
      ./build/UDP_Replay -c 0x7 -n 4 -w <pci> -w <pci> -- --no-hw-csum -p <portmask> --config='(port, queue, cpucore)'

   e.g.

   ::

      ./build/UDP_Replay -c 0x7 -n 4 -w 0000:07:00.0 -w 0000:07:00.1 -- --no-hw-csum -p 0x3 --config='(0, 0, 1)(1, 0, 2)'
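In the example above, -p 0x3 is a bitmask selecting ports 0 and 1. A shell sketch of how such a portmask is derived from a list of port indices; ports_to_mask is an illustrative helper, not part of the UDP_Replay command line:

```shell
#!/bin/sh
# Build the DPDK -p portmask from a list of port indices.
# ports_to_mask is a hypothetical helper for illustration only.
ports_to_mask() {
    mask=0
    for port in "$@"; do
        mask=$(( mask | (1 << port) ))   # set the bit for this port
    done
    printf '0x%x\n' "$mask"
}

ports_to_mask 0 1      # ports 0 and 1 -> 0x3, as in the example above
ports_to_mask 0 1 2 3  # four ports -> 0xf
```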
Step 4: Run the test using the traffic generator

   On the traffic generator system:

   ::

      cd <trex eg v2.28/stl>

   Update bench.py to generate the traffic:

   ::

      class STLBench(object):
          ip_range['src'] = {'start': '<traffic generator port 0 IP eg 202.16.100.20>', 'end': '<traffic generator port 0 IP eg 202.16.100.20>'}
          ip_range['dst'] = {'start': '<traffic generator port 1 IP eg 172.16.40.20>', 'end': '<public ip eg 152.16.40.10>'}

   Run the TRex server: sudo ./t-rex-64 -i -c 7

   In another shell, run the TRex console: trex-console
   (The console can be run from another computer with the -s argument; use
   --help for more info. Other options for the TRex client are automation or
   the GUI.)

   In the console, run the "tui" command, and then send the traffic with
   commands like:

   ::

      start -f stl/bench.py -m 50% --port 0 3 -t size=590,vm=var1

   For more details refer to https://trex-tgn.cisco.com/trex/doc/trex_stateless_bench.html
PROX - Packet pROcessing eXecution engine
-----------------------------------------

This is PROX, the Packet pROcessing eXecution engine, part of Intel(R)
Data Plane Performance Demonstrators, formerly known as DPPD-BNG.
PROX is a DPDK-based application implementing Telco use-cases such as
a simplified BRAS/BNG and a light-weight AFTR. It also allows configuring
finer-grained network functions like QoS, routing and load-balancing.
Compiling and running this application
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

This application supports DPDK 16.04, 16.11, 17.02 and 17.05.
The following commands assume that the following variables have been set:

::

   export RTE_SDK=/path/to/dpdk
   export RTE_TARGET=x86_64-native-linuxapp-gcc

Example: DPDK 17.05 installation
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

* git clone http://dpdk.org/git/dpdk
* cd dpdk
* git checkout v17.05
* make install T=$RTE_TARGET

The Makefile with this application expects RTE_SDK to point to the
root directory of DPDK (e.g. export RTE_SDK=/root/dpdk). If RTE_TARGET
has not been set, x86_64-native-linuxapp-gcc will be assumed.
After DPDK has been set up, run make from the directory where you have
extracted this application. A build directory will be created
containing the PROX executable. The usage of the application is shown
below. Note that this application assumes that all required ports have
been bound to the DPDK-provided igb_uio driver. Refer to the "Getting
Started Guide - DPDK" document for more details.
::

   Usage: ./build/prox [-f CONFIG_FILE] [-l LOG_FILE] [-p] [-o DISPLAY] [-v] [-a|-e] [-m|-s|-i] [-n] [-w DEF] [-q] [-k] [-d] [-z] [-r VAL] [-u] [-t]
        -f CONFIG_FILE : configuration file to load, ./prox.cfg by default
        -l LOG_FILE : log file name, ./prox.log by default
        -p : include PID in log file name if default log file is used
        -o DISPLAY : set display to use, can be 'curses' (default), 'cli' or 'none'
        -v verbosity : initial logging verbosity
        -a : autostart all cores (by default)
        -e : don't autostart
        -n : create NULL devices instead of using PCI devices, useful together with -i
        -m : list supported task modes and exit
        -s : check configuration file syntax and exit
        -i : check initialization sequence and exit
        -u : listen on UDS /tmp/prox.sock
        -t : listen on TCP port 8474
        -q : pass argument to Lua interpreter, useful to define variables
        -w : define variable using syntax varname=value
             takes precedence over variables defined in CONFIG_FILE
        -k : log statistics to file "stats_dump" in current directory
        -d : run as daemon, the parent process will block until PROX is initialized
        -z : ignore CPU topology, implies -i
        -r : change initial screen refresh rate; if set to a value lower than
             0.001 seconds, screen refreshing will be disabled
While applications using DPDK typically rely on the core mask and the
number of channels being specified on the command line, this
application is configured using a .cfg file. The core mask and number
of channels are derived from this config. For example, to run the
application from the source directory, execute:

::

   user@target:~$ ./build/prox -f ./config/nop.cfg
Provided example configurations
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
PROX can be configured either as the SUT (System Under Test) or as the
Traffic Generator. Some example configuration files are provided, both
in the config directory to run PROX as a SUT, and in the gen directory
to run it as a Traffic Generator.
A quick description of these example configurations is provided below.
Additional details are provided in the example configuration files.

Basic configurations, mostly used as sanity checks:

- config/nop.cfg
- config/nop-rings.cfg
Simplified BNG (Border Network Gateway) configurations, using different
numbers of ports, with and without QoS, running on the host or in a VM:

- config/bng-4ports.cfg
- config/bng-8ports.cfg
- config/bng-qos-4ports.cfg
- config/bng-qos-8ports.cfg
- config/bng-1q-4ports.cfg
- config/bng-ovs-usv-4ports.cfg
- config/bng-no-cpu-topology-4ports.cfg
- gen/bng-4ports-gen.cfg
- gen/bng-8ports-gen.cfg
- gen/bng-ovs-usv-4ports-gen.cfg

Light-weight AFTR configurations:

- gen/lw_aftr-gen.cfg