.. This work is licensed under a Creative Commons Attribution 4.0 International
.. License.
.. http://creativecommons.org/licenses/by/4.0
.. (c) OPNFV, Intel Corporation and others.
The device under test (DUT) consists of a system with the following:

* A single or dual processor and PCH chip, except for System on Chip (SoC) cases
* DRAM memory size and frequency (normally single DIMM per channel)
* Specific Intel Network Interface Cards (NICs)
* BIOS settings, noting those that differ from the basic settings
* DPDK build configuration settings, and the commands used for tests

Connected to the DUT is an IXIA* or a software traffic generator such as pktgen
or TRex, a simulation platform that generates packet traffic to the DUT ports
and measures throughput/latency on the tester side.

Below are the supported/tested :term:`VNF` deployment types.
.. image:: images/deploy_type.png
   :alt: SampleVNF supported topology
Hardware & Software Ingredients
-------------------------------

+-----------+------------------+
| Item      | Description      |
+===========+==================+
| OS        | Ubuntu 16.04 LTS |
+-----------+------------------+
| Kernel    | 4.4.0-34-generic |
+-----------+------------------+
Boot and BIOS settings
^^^^^^^^^^^^^^^^^^^^^^

+------------------+---------------------------------------------------+
| Boot settings    | default_hugepagesz=1G hugepagesz=1G hugepages=16  |
|                  | hugepagesz=2M hugepages=2048 isolcpus=1-11,22-33  |
|                  | nohz_full=1-11,22-33 rcu_nocbs=1-11,22-33         |
|                  | Note: nohz_full and rcu_nocbs disable Linux*      |
|                  | kernel interrupts on the isolated cores, which    |
|                  | is important for stable performance               |
+------------------+---------------------------------------------------+
| BIOS             | CPU Power and Performance Policy <Performance>    |
|                  | CPU C-state Disabled                              |
|                  | CPU P-state Disabled                              |
|                  | Enhanced Intel® Speedstep® Tech Disabled          |
|                  | Hyper-Threading Technology (if supported) Enabled |
|                  | Virtualization Technology Enabled                 |
|                  | Coherency Enabled                                 |
|                  | Turbo Boost Disabled                              |
+------------------+---------------------------------------------------+
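The isolcpus, nohz_full and rcu_nocbs values above all use the same Linux CPU-list syntax (e.g. ``1-11,22-33``). A small illustrative parser (ours, not part of SampleVNF) shows exactly which cores those settings keep free of kernel housekeeping:

```python
# Illustrative parser (ours, not part of SampleVNF) for the Linux CPU-list
# syntax used by the isolcpus=, nohz_full= and rcu_nocbs= boot options above.

def parse_cpu_list(spec: str):
    """'1-11,22-33' -> [1, 2, ..., 11, 22, ..., 33]"""
    cores = []
    for part in spec.split(","):
        if "-" in part:
            lo, hi = part.split("-")
            cores.extend(range(int(lo), int(hi) + 1))
        else:
            cores.append(int(part))
    return cores

isolated = parse_cpu_list("1-11,22-33")
print(len(isolated))  # -> 23 cores isolated for DPDK threads
```

Comparing this list against the cores assigned in the VNF config files helps confirm that the data-plane threads really run on isolated cores.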
Network Topology for testing VNFs
---------------------------------
The Ethernet cables should be connected between the traffic generator and the
VNF server (BM, SRIOV or OVS) setup, based on the test profile.

The connectivity could be:

1) Single port pair: one pair of ports used for traffic

   e.g. link0 and link1 of the VNF are used::

     TG:port 0 <------> VNF:Port 0
     TG:port 1 <------> VNF:Port 1

2) Multi port pair: more than one pair of ports used for traffic

   e.g. link0, link1, link2 and link3 of the VNF are used::

     TG:port 0 <------> VNF:Port 0
     TG:port 1 <------> VNF:Port 1
     TG:port 2 <------> VNF:Port 2
     TG:port 3 <------> VNF:Port 3

For correlated traffic, use the below configuration::

  TG_1:port 0 <------> VNF:Port 0
  VNF:Port 1  <------> TG_2:port 0 (UDP Replay)

(TG_2 (UDP_Replay) reflects all the traffic on the given port.)
Refer to http://fast.dpdk.org/doc/pdf-guides/ to set up the DUT for the VNF to run.

* Standalone Virtualization - PHY-VM-PHY

  Refer to the below link to set up SR-IOV:
  https://software.intel.com/en-us/articles/using-sr-iov-to-share-an-ethernet-port-among-multiple-vms

  Refer to the below links to set up OVS-DPDK:
  http://docs.openvswitch.org/en/latest/intro/install/general/
  http://docs.openvswitch.org/en/latest/intro/install/dpdk/

Use any OPNFV installer to deploy OpenStack.
Setup Traffic generator
-----------------------

Step 0: Preparing the hardware connection

  Connect the traffic generator and the VNF system back to back as shown in
  the previous section::

    TRex port 0 ↔ (VNF Port 0) ↔ (VNF Port 1) ↔ TRex port 1

Step 1: Setting up the traffic generator (TRex)

TRex Software preparations
^^^^^^^^^^^^^^^^^^^^^^^^^^
* Install the OS (bare metal Linux, not a VM!)
* Obtain the latest TRex package::

    wget https://trex-tgn.cisco.com/trex/release/latest

* Untar the package::

    tar -xzf latest

* Change to the extracted TRex directory
* Create the config file using the command::

    sudo python dpdk_setup_ports.py -i

  On Ubuntu 16, python3 is needed instead.
  See the config creation paragraph for a detailed step-by-step guide.
  (Refer: https://trex-tgn.cisco.com/trex/doc/trex_stateless_bench.html)
Step 2: Procedure to build SampleVNFs

a) Clone the samplevnf project repository::

     git clone https://git.opnfv.org/samplevnf

   * Interactive options::

       ./tools/vnf_build.sh -i

     Follow the steps on the screen from option [1] -> [9] and select option
     [8] to build the VNFs. It will automatically download the selected DPDK
     version and any required patches, set up everything and build the VNFs.

     Following are the options for setup::

       ----------------------------------------------------------
       Step 1: Environment setup.
       ----------------------------------------------------------
       [1] Check OS and network connection
       [2] Select DPDK RTE version

       ----------------------------------------------------------
       Step 2: Download and Install
       ----------------------------------------------------------
       [3] Agree to download
       [4] Download packages
       [5] Download DPDK zip
       [6] Build and Install DPDK

       ----------------------------------------------------------
       Step 3: Build VNFs
       ----------------------------------------------------------
       [8] Build all VNFs (vACL, vCGNAPT, vFW, UDP_Replay, DPPD-PROX)

   * Non-interactive options::

       ./tools/vnf_build.sh -s -d=<dpdk version e.g. 17.02>
b) Manual build steps:

   1) Download the supported DPDK version from dpdk.org::

        http://dpdk.org/browse/dpdk/snapshot/dpdk-$DPDK_RTE_VER.zip

      Unzip dpdk-$DPDK_RTE_VER.zip and apply the dpdk patches only in case of
      16.04 (not required for other DPDK versions).

   2) Configure DPDK::

        make config T=x86_64-native-linuxapp-gcc O=x86_64-native-linuxapp-gcc
        cd x86_64-native-linuxapp-gcc

      For 1G/2M hugepage sizes, for example 1G pages, the size must be
      specified explicitly and can also optionally be set as the default
      hugepage size for the system. For example, to reserve 8G of hugepage
      memory in the form of eight 1G pages, the following options should be
      passed to the kernel::

        default_hugepagesz=1G hugepagesz=1G hugepages=8 hugepagesz=2M hugepages=2048

   3) Add the hugepage options to the /etc/default/grub configuration file by
      appending "default_hugepagesz=1G hugepagesz=1G hugepages=8 hugepagesz=2M
      hugepages=2048" to the GRUB_CMDLINE_LINUX entry.
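To double-check arithmetic like the eight 1G pages plus 2048 2M pages example above, the reserved hugepage memory can be totalled directly from the kernel options (the helper below is ours, for illustration only):

```python
# Illustrative helper (ours, not part of SampleVNF): total hugepage memory
# reserved by the hugepagesz=/hugepages= kernel options described above.
# Each hugepages= count applies to the most recent hugepagesz= size.

def hugepage_gib(cmdline: str) -> float:
    units = {"M": 1 << 20, "G": 1 << 30}
    total, size = 0, None
    for tok in cmdline.split():
        if tok.startswith("hugepagesz="):
            val = tok.split("=", 1)[1]              # e.g. "1G" or "2M"
            size = int(val[:-1]) * units[val[-1]]
        elif tok.startswith("hugepages=") and size is not None:
            total += size * int(tok.split("=", 1)[1])
    return total / (1 << 30)

opts = "default_hugepagesz=1G hugepagesz=1G hugepages=8 hugepagesz=2M hugepages=2048"
print(hugepage_gib(opts))  # -> 12.0 (8 GiB of 1G pages + 4 GiB of 2M pages)
```

Make sure the DUT actually has this much free memory per NUMA node, or the reservation will silently fall short.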
   4) Set up the environment variables::

        export RTE_SDK=<samplevnf>/dpdk
        export RTE_TARGET=x86_64-native-linuxapp-gcc
        export VNF_CORE=<samplevnf>

      or use ./tools/setenv.sh

   5) Build all VNFs, or build individual VNFs from their respective
      directories.

   The vFW executable will be created at the following location::

     <samplevnf>/VNFs/vFW/build/vFW
Virtual Firewall - How to run
-----------------------------

Step 3: Bind the datapath ports to DPDK

a) Bind the ports to DPDK (for DPDK versions 17.xx)::

     1) cd <samplevnf>/dpdk
     2) ./usertools/dpdk-devbind.py --status   <-- List the network devices
     3) ./usertools/dpdk-devbind.py -b igb_uio <PCI Port 0> <PCI Port 1>

   More details: http://dpdk.org/doc/guides-17.05/linux_gsg/build_dpdk.html#binding-and-unbinding-network-ports-to-from-the-kernel-modules

b) Prepare the script to enable the VNF to route the packets::

     cd <samplevnf>/VNFs/vFW/config

   Open VFW_SWLB_SinglePortPair_script.tc and replace the bold items based on
   your setting::

     link 0 config <VNF port 0 IP e.g. 202.16.100.10> 8
     link 1 config <VNF port 1 IP e.g. 172.16.40.10> 8

     ; routeadd <net/host> <port #> <ipv4 nhip address in decimal> <Mask>
     routeadd net 0 <traffic generator port 0 IP e.g. 202.16.100.20> 0xff000000
     routeadd net 1 <traffic generator port 1 IP e.g. 172.16.40.20> 0xff000000

     ; IPv4 static ARP; disable if dynamic ARP is enabled.
     p 1 arpadd 0 <traffic generator port 0 IP e.g. 202.16.100.20> <traffic generator port 0 MAC>
     p 1 arpadd 1 <traffic generator port 1 IP e.g. 172.16.40.20> <traffic generator port 1 MAC>
     p action add 0 accept
     p action add 1 accept
     p action add 0 conntrack
     p action add 1 conntrack
     p action add 2 conntrack
     p action add 3 conntrack

     p vfw add 1 <traffic generator port 0 IP e.g. 202.16.100.20> 8 <traffic generator port 1 IP e.g. 172.16.40.20> 8 0 65535 67 69 0 0 2
     p vfw add 2 <traffic generator port 0 IP e.g. 202.16.100.20> 8 <traffic generator port 1 IP e.g. 172.16.40.20> 8 0 65535 0 65535 0 0 1
     p vfw add 2 <traffic generator port 1 IP e.g. 172.16.40.20> 8 <traffic generator port 0 IP e.g. 202.16.100.20> 8 0 65535 0 65535 0 0 0
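As a sanity check on the addressing above: the routeadd mask 0xff000000 is a /8 prefix, so each traffic generator IP must share its network with the VNF port IP it is reached through. A small illustrative check (ours, not part of SampleVNF):

```python
# Illustrative check (ours, not part of SampleVNF): the routeadd mask
# 0xff000000 corresponds to a /8 prefix, so each example traffic generator
# IP must land in the same network as the matching VNF port IP.
import ipaddress

def same_network(ip_a: str, ip_b: str, mask: int) -> bool:
    """True if both addresses fall in the same network under `mask`."""
    prefix = bin(mask).count("1")                 # 0xff000000 -> /8
    net = ipaddress.ip_network(f"{ip_a}/{prefix}", strict=False)
    return ipaddress.ip_address(ip_b) in net

# VNF port IPs vs. the example traffic generator IPs from the script above
print(same_network("202.16.100.10", "202.16.100.20", 0xff000000))  # link 0
print(same_network("172.16.40.10", "172.16.40.20", 0xff000000))    # link 1
```

If either check fails with your own addresses, the VNF will not find a route back to the traffic generator.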
c) Run the below commands to launch the VNF. Please make sure hugepages are
   configured and the ports to be used are bound to DPDK::

     cd <samplevnf>/VNFs/vFW/
     ./build/vFW -p 0x3 -f ./config/VFW_SWLB_SinglePortPair_4Thread.cfg -s ./config/VFW_SWLB_SinglePortPair_script.tc
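The -p 0x3 argument is a hexadecimal port bitmask: bit N enables DPDK port N, so 0x3 enables ports 0 and 1. A small illustration of how such a mask is formed (the helper name is ours, not part of SampleVNF):

```python
# Illustrative helper (ours, not part of SampleVNF): build the hexadecimal
# port mask passed to the VNF via -p. Bit N of the mask enables DPDK port N.

def port_mask(ports):
    mask = 0
    for p in ports:
        mask |= 1 << p
    return mask

print(hex(port_mask([0, 1])))   # -> 0x3, as used in the command above
print(hex(port_mask([0, 2])))   # -> 0x5, i.e. ports 0 and 2 only
```

The same convention applies to the core mask arguments (-c) used elsewhere in this guide.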
Step 4: Run the test using the traffic generator

   On the traffic generator system::

     cd <trex e.g. v2.28/stl>

   Update bench.py to generate the traffic::

     class STLBench(object):
         ip_range['src'] = {'start': '<traffic generator port 0 IP e.g. 202.16.100.20>', 'end': '<traffic generator port 0 IP e.g. 202.16.100.20>'}
         ip_range['dst'] = {'start': '<traffic generator port 1 IP e.g. 172.16.40.20>', 'end': '<traffic generator port 1 IP e.g. 172.16.40.20>'}

   Run the TRex server::

     sudo ./t-rex-64 -i -c 7

   In another shell run the TRex console::

     trex-console

   The console can be run from another computer with the -s argument; use
   --help for more info. Other options for the TRex client are automation or
   the GUI. In the console, run the "tui" command, and then send the traffic
   with commands like::

     start -f stl/bench.py -m 50% --port 0 3 -t size=590,vm=var1

   For more details refer to: https://trex-tgn.cisco.com/trex/doc/trex_stateless_bench.html
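The ip_range edit above can be tried out stand-alone before loading it into TRex. The class below is a reduced stand-in for the STLBench profile in TRex's stl/bench.py (the real one builds full stream objects), with the example addresses filled in:

```python
# Reduced stand-in for the STLBench profile in TRex's stl/bench.py
# (illustration only; the real class builds STLStream objects). It shows
# the ip_range fields the guide asks you to edit, using the example IPs.

class STLBench(object):
    ip_range = {}
    ip_range['src'] = {'start': '202.16.100.20', 'end': '202.16.100.20'}
    ip_range['dst'] = {'start': '172.16.40.20', 'end': '172.16.40.20'}

bench = STLBench()
# A fixed range (start == end) means every generated packet carries the
# same source/destination address, matching the static ARP entries above.
print(bench.ip_range['src']['start'])  # -> 202.16.100.20
```

When start and end differ, TRex sweeps the range, which would require dynamic ARP or wider static entries on the VNF side.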
Virtual Access Control List - How to run
----------------------------------------

Step 3: Bind the datapath ports to DPDK

a) Bind the ports to DPDK (for DPDK versions 17.xx)::

     1) cd <samplevnf>/dpdk
     2) ./usertools/dpdk-devbind.py --status   <-- List the network devices
     3) ./usertools/dpdk-devbind.py -b igb_uio <PCI Port 0> <PCI Port 1>

   More details: http://dpdk.org/doc/guides-17.05/linux_gsg/build_dpdk.html#binding-and-unbinding-network-ports-to-from-the-kernel-modules

b) Prepare the script to enable the VNF to route the packets::

     cd <samplevnf>/VNFs/vACL/config

   Open IPv4_swlb_acl.tc and replace the bold items based on your setting::

     link 0 config <VNF port 0 IP e.g. 202.16.100.10> 8
     link 1 config <VNF port 1 IP e.g. 172.16.40.10> 8

     ; routeadd <net/host> <port #> <ipv4 nhip address in decimal> <Mask>
     routeadd net 0 <traffic generator port 0 IP e.g. 202.16.100.20> 0xff000000
     routeadd net 1 <traffic generator port 1 IP e.g. 172.16.40.20> 0xff000000

     ; IPv4 static ARP; disable if dynamic ARP is enabled.
     p 1 arpadd 0 <traffic generator port 0 IP e.g. 202.16.100.20> <traffic generator port 0 MAC>
     p 1 arpadd 1 <traffic generator port 1 IP e.g. 172.16.40.20> <traffic generator port 1 MAC>
     p action add 0 accept
     p action add 1 accept
     p action add 0 conntrack
     p action add 1 conntrack
     p action add 2 conntrack
     p action add 3 conntrack

     p acl add 1 <traffic generator port 0 IP e.g. 202.16.100.20> 8 <traffic generator port 1 IP e.g. 172.16.40.20> 8 0 65535 67 69 0 0 2
     p acl add 2 <traffic generator port 0 IP e.g. 202.16.100.20> 8 <traffic generator port 1 IP e.g. 172.16.40.20> 8 0 65535 0 65535 0 0 1
     p acl add 2 <traffic generator port 1 IP e.g. 172.16.40.20> 8 <traffic generator port 0 IP e.g. 202.16.100.20> 8 0 65535 0 65535 0 0 0

c) Run the below commands to launch the VNF. Please make sure hugepages are
   configured and the ports to be used are bound to DPDK::

     cd <samplevnf>/VNFs/vACL/
     ./build/vACL -p 0x3 -f ./config/IPv4_swlb_acl_1LB_1t.cfg -s ./config/IPv4_swlb_acl.tc

Step 4: Run the test using the traffic generator

   On the traffic generator system::

     cd <trex e.g. v2.28/stl>

   Update bench.py to generate the traffic::

     class STLBench(object):
         ip_range['src'] = {'start': '<traffic generator port 0 IP e.g. 202.16.100.20>', 'end': '<traffic generator port 0 IP e.g. 202.16.100.20>'}
         ip_range['dst'] = {'start': '<traffic generator port 1 IP e.g. 172.16.40.20>', 'end': '<traffic generator port 1 IP e.g. 172.16.40.20>'}

   Run the TRex server::

     sudo ./t-rex-64 -i -c 7

   In another shell run the TRex console::

     trex-console

   The console can be run from another computer with the -s argument; use
   --help for more info. Other options for the TRex client are automation or
   the GUI. In the console, run the "tui" command, and then send the traffic
   with commands like::

     start -f stl/bench.py -m 50% --port 0 3 -t size=590,vm=var1

   For more details refer to: https://trex-tgn.cisco.com/trex/doc/trex_stateless_bench.html
Virtual CGNAPT - How to run
---------------------------
Step 3: Bind the datapath ports to DPDK

a) Bind the ports to DPDK (for DPDK versions 17.xx)::

     1) cd <samplevnf>/dpdk
     2) ./usertools/dpdk-devbind.py --status   <-- List the network devices
     3) ./usertools/dpdk-devbind.py -b igb_uio <PCI Port 0> <PCI Port 1>

   More details: http://dpdk.org/doc/guides-17.05/linux_gsg/build_dpdk.html#binding-and-unbinding-network-ports-to-from-the-kernel-modules

b) Prepare the script to enable the VNF to route the packets::

     cd <samplevnf>/VNFs/vCGNAPT/config

   Open sample_swlb_2port_2WT.tc and replace the bold items based on your
   setting::

     link 0 config <VNF port 0 IP e.g. 202.16.100.10> 8
     link 1 config <VNF port 1 IP e.g. 172.16.40.10> 8

     ; uncomment to enable static NAPT
     ;p <cgnapt pipeline id> entry addm <prv_ipv4/6> <prv_port> <pub_ip> <pub_port> <phy_port> <ttl> <no_of_entries> <end_prv_port> <end_pub_port>
     ;p 5 entry addm 202.16.100.20 1234 152.16.40.10 1 0 500 65535 1234 65535

     ; routeadd <net/host> <port #> <ipv4 nhip address in decimal> <Mask>
     routeadd net 0 <traffic generator port 0 IP e.g. 202.16.100.20> 0xff000000
     routeadd net 1 <traffic generator port 1 IP e.g. 172.16.40.20> 0xff000000

     ; IPv4 static ARP; disable if dynamic ARP is enabled.
     p 1 arpadd 0 <traffic generator port 0 IP e.g. 202.16.100.20> <traffic generator port 0 MAC>
     p 1 arpadd 1 <traffic generator port 1 IP e.g. 172.16.40.20> <traffic generator port 1 MAC>

   For dynamic CGNAPT, please use UDP_Replay as one of the traffic
   generators::

     (TG1) (port 0) --> (port 0) VNF (CGNAPT) (Port 1) --> (port 0) (UDP_Replay)
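The static NAPT "entry addm" line above maps a private (IP, port) pair to a public (IP, port) pair. A minimal dictionary-based sketch of that translation (illustration only; the real vCGNAPT pipeline also handles timeouts, port ranges and ICMP):

```python
# Illustrative sketch (ours, not part of SampleVNF) of the static NAPT
# mapping set up by the commented "entry addm" example above:
#   private (202.16.100.20, 1234)  ->  public (152.16.40.10, 1)

napt_table = {
    # (prv_ipv4, prv_port)   ->  (pub_ip, pub_port)
    ("202.16.100.20", 1234): ("152.16.40.10", 1),
}

def translate(src_ip, src_port):
    """Rewrite the source of an outbound packet, if a mapping exists."""
    return napt_table.get((src_ip, src_port), (src_ip, src_port))

print(translate("202.16.100.20", 1234))  # -> ('152.16.40.10', 1)
print(translate("10.0.0.1", 80))         # no entry: left unchanged
```

This is why the traffic generator's destination range for CGNAPT tests ends at the public IP rather than a private one.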
c) Run the below commands to launch the VNF. Please make sure hugepages are
   configured and the ports to be used are bound to DPDK::

     cd <samplevnf>/VNFs/vCGNAPT/
     ./build/vCGNAPT -p 0x3 -f ./config/sample_swlb_2port_2WT.cfg -s ./config/sample_swlb_2port_2WT.tc

Step 4: Run the test using the traffic generator

   On the traffic generator system::

     cd <trex e.g. v2.28/stl>

   Update bench.py to generate the traffic::

     class STLBench(object):
         ip_range['src'] = {'start': '<traffic generator port 0 IP e.g. 202.16.100.20>', 'end': '<traffic generator port 0 IP e.g. 202.16.100.20>'}
         ip_range['dst'] = {'start': '<traffic generator port 1 IP e.g. 172.16.40.20>', 'end': '<public ip e.g. 152.16.40.10>'}

   Run the TRex server::

     sudo ./t-rex-64 -i -c 7

   In another shell run the TRex console::

     trex-console

   The console can be run from another computer with the -s argument; use
   --help for more info. Other options for the TRex client are automation or
   the GUI. In the console, run the "tui" command, and then send the traffic
   with commands like::

     start -f stl/bench.py -m 50% --port 0 3 -t size=590,vm=var1

   For more details refer to: https://trex-tgn.cisco.com/trex/doc/trex_stateless_bench.html
UDP_Replay - How to run
-----------------------

Step 3: Bind the datapath ports to DPDK

a) Bind the ports to DPDK (for DPDK versions 17.xx)::

     1) cd <samplevnf>/dpdk
     2) ./usertools/dpdk-devbind.py --status   <-- List the network devices
     3) ./usertools/dpdk-devbind.py -b igb_uio <PCI Port 0> <PCI Port 1>

   More details: http://dpdk.org/doc/guides-17.05/linux_gsg/build_dpdk.html#binding-and-unbinding-network-ports-to-from-the-kernel-modules

b) Run the below commands to launch the VNF. Please make sure hugepages are
   configured and the ports to be used are bound to DPDK::

     cd <samplevnf>/VNFs/UDP_Replay/
     ./build/UDP_Replay -c 0x7 -n 4 -w <pci> -w <pci> -- --no-hw-csum -p <portmask> --config='(port, queue, cpucore)'

   e.g.::

     ./build/UDP_Replay -c 0x7 -n 4 -w 0000:07:00.0 -w 0000:07:00.1 -- --no-hw-csum -p 0x3 --config='(0, 0, 1)(1, 0, 2)'
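The --config argument assigns each (port, queue) pair to a CPU core. A small parser (ours, for illustration) shows how the example string decomposes:

```python
# Illustrative parser (ours, not part of UDP_Replay): decompose the
# --config='(port, queue, cpucore)' argument into its tuples.
import re

def parse_config(arg: str):
    """'(0, 0, 1)(1, 0, 2)' -> [(0, 0, 1), (1, 0, 2)]"""
    return [tuple(int(n) for n in group.split(","))
            for group in re.findall(r"\(([^)]*)\)", arg)]

lcores = parse_config("(0, 0, 1)(1, 0, 2)")
print(lcores)  # port 0/queue 0 on core 1, port 1/queue 0 on core 2
# Note: with -c 0x7 cores 0-2 are enabled, so cores 1 and 2 are valid here.
```

Every core referenced in --config must be enabled by the -c core mask, or the application will fail at startup.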
Step 4: Run the test using the traffic generator

   On the traffic generator system::

     cd <trex e.g. v2.28/stl>

   Update bench.py to generate the traffic::

     class STLBench(object):
         ip_range['src'] = {'start': '<traffic generator port 0 IP e.g. 202.16.100.20>', 'end': '<traffic generator port 0 IP e.g. 202.16.100.20>'}
         ip_range['dst'] = {'start': '<traffic generator port 1 IP e.g. 172.16.40.20>', 'end': '<public ip e.g. 152.16.40.10>'}

   Run the TRex server::

     sudo ./t-rex-64 -i -c 7

   In another shell run the TRex console::

     trex-console

   The console can be run from another computer with the -s argument; use
   --help for more info. Other options for the TRex client are automation or
   the GUI. In the console, run the "tui" command, and then send the traffic
   with commands like::

     start -f stl/bench.py -m 50% --port 0 3 -t size=590,vm=var1

   For more details refer to: https://trex-tgn.cisco.com/trex/doc/trex_stateless_bench.html
DPPD-PROX - How to run
----------------------

This is PROX, the Packet pROcessing eXecution engine, part of Intel(R)
Data Plane Performance Demonstrators, formerly known as DPPD-BNG.
PROX is a DPDK-based application implementing Telco use-cases such as
a simplified BRAS/BNG and a light-weight AFTR. It also allows configuring
finer-grained network functions like QoS, routing and load-balancing.
Compiling and running this application
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
This application supports DPDK 16.04, 16.11, 17.02 and 17.05.
The following commands assume that these variables have been set::

  export RTE_SDK=/path/to/dpdk
  export RTE_TARGET=x86_64-native-linuxapp-gcc

Example: DPDK 17.05 installation
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
* git clone http://dpdk.org/git/dpdk
* git checkout v17.05
* make install T=$RTE_TARGET

The Makefile with this application expects RTE_SDK to point to the
root directory of DPDK (e.g. export RTE_SDK=/root/dpdk). If RTE_TARGET
has not been set, x86_64-native-linuxapp-gcc will be assumed.
After DPDK has been set up, run make from the directory where you have
extracted this application. A build directory will be created containing the
PROX executable. The usage of the application is shown below. Note that this
application assumes that all required ports have been bound to the DPDK
provided igb_uio driver. Refer to the "Getting Started Guide - DPDK" document
for more details.
::

  Usage: ./build/prox [-f CONFIG_FILE] [-l LOG_FILE] [-p] [-o DISPLAY] [-v] [-a|-e] [-m|-s|-i] [-n] [-w DEF] [-q] [-k] [-d] [-z] [-r VAL] [-u] [-t]
        -f CONFIG_FILE : configuration file to load, ./prox.cfg by default
        -l LOG_FILE : log file name, ./prox.log by default
        -p : include PID in log file name if default log file is used
        -o DISPLAY : set display to use, can be 'curses' (default), 'cli' or 'none'
        -v verbosity : initial logging verbosity
        -a : autostart all cores (by default)
        -n : create NULL devices instead of using PCI devices, useful together with -i
        -m : list supported task modes and exit
        -s : check configuration file syntax and exit
        -i : check initialization sequence and exit
        -u : listen on UDS /tmp/prox.sock
        -t : listen on TCP port 8474
        -q : pass argument to Lua interpreter, useful to define variables
        -w : define variable using syntax varname=value
             takes precedence over variables defined in CONFIG_FILE
        -k : log statistics to file "stats_dump" in current directory
        -d : run as daemon, the parent process will block until PROX is not initialized
        -z : ignore CPU topology, implies -i
        -r : change initial screen refresh rate. If set to a value lower than
             0.001 seconds, screen refreshing will be disabled
While applications using DPDK typically rely on the core mask and the number
of channels to be specified on the command line, this application is
configured using a .cfg file. The core mask and number of channels are
derived from this config. For example, to run the application from the
source directory execute::

  user@target:~$ ./build/prox -f ./config/nop.cfg
Provided example configurations
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
PROX can be configured either as the SUT (System Under Test) or as the
Traffic Generator. Some example configuration files are provided, both
in the config directory to run PROX as a SUT, and in the gen directory
to run it as a Traffic Generator.
A quick description of these example configurations is provided below.
Additional details are provided in the example configuration files.

Basic configurations, mostly used as sanity checks:

- config/nop-rings.cfg

Simplified BNG (Border Network Gateway) configurations, using a different
number of ports, with and without QoS, running on the host or in a VM:

- config/bng-4ports.cfg
- config/bng-8ports.cfg
- config/bng-qos-4ports.cfg
- config/bng-qos-8ports.cfg
- config/bng-1q-4ports.cfg
- config/bng-ovs-usv-4ports.cfg
- config/bng-no-cpu-topology-4ports.cfg
- gen/bng-4ports-gen.cfg
- gen/bng-8ports-gen.cfg
- gen/bng-ovs-usv-4ports-gen.cfg

Light-weight AFTR configurations:

- gen/lw_aftr-gen.cfg