.. This work is licensed under a Creative Commons Attribution 4.0 International
.. License.
.. http://creativecommons.org/licenses/by/4.0
.. (c) OPNFV, Intel Corporation and others.
The device under test (DUT) consists of a system with the following:

* A single or dual processor and PCH chip, except for System on Chip (SoC) cases
* DRAM memory size and frequency (normally single DIMM per channel)
* Specific Intel Network Interface Cards (NICs)
* BIOS settings, noting those updated from the basic settings
* DPDK build configuration settings, and commands used for tests

Connected to the DUT is an IXIA* or a software traffic generator like pktgen
or TRex, a simulation platform to generate packet traffic to the DUT ports
and determine the throughput/latency at the tester side.
Below are the supported/tested :term:`VNF` deployment types.

.. image:: images/deploy_type.png
   :alt: SampleVNF supported topology
Hardware & Software Ingredients
-------------------------------

.. code-block:: console

   +-----------+------------------+
   | Item      | Description      |
   +-----------+------------------+
   | OS        | Ubuntu 16.04 LTS |
   +-----------+------------------+
   | kernel    | 4.4.0-34-generic |
   +-----------+------------------+
Boot and BIOS settings::

   +------------------+---------------------------------------------------+
   | Boot settings    | default_hugepagesz=1G hugepagesz=1G hugepages=16  |
   |                  | hugepagesz=2M hugepages=2048 isolcpus=1-11,22-33  |
   |                  | nohz_full=1-11,22-33 rcu_nocbs=1-11,22-33         |
   |                  | Note: nohz_full and rcu_nocbs are used to disable |
   |                  | Linux* kernel interrupts                          |
   +------------------+---------------------------------------------------+
   | BIOS             | CPU Power and Performance Policy <Performance>    |
   |                  | CPU C-state Disabled                              |
   |                  | CPU P-state Disabled                              |
   |                  | Enhanced Intel® Speedstep® Tech Disabled          |
   |                  | Hyper-Threading Technology (If supported) Enabled |
   |                  | Virtualization Technology Enabled                 |
   |                  | Coherency Enabled                                 |
   |                  | Turbo Boost Disabled                              |
   +------------------+---------------------------------------------------+
Network Topology for testing VNFs
---------------------------------

The Ethernet cables should be connected between the traffic generator and the
VNF server (BM, SRIOV or OVS) setup, based on the test profile.

The connectivity could be:

1. Single port pair: one pair of ports used for traffic

   e.g. link0 and link1 of the VNF are used::

      TG:port 0 ------ VNF:Port 0
      TG:port 1 ------ VNF:Port 1

2. Multi port pair: more than one pair of ports used for traffic

   e.g. link0, link1, link2 and link3 of the VNF are used::

      TG:port 0 ------ VNF:Port 0
      TG:port 1 ------ VNF:Port 1
      TG:port 2 ------ VNF:Port 2
      TG:port 3 ------ VNF:Port 3
To set up the DUT for running VNFs, refer: http://fast.dpdk.org/doc/pdf-guides/

* Standalone Virtualization - PHY-VM-PHY

  * SRIOV: refer to the link below to set up SR-IOV

    https://software.intel.com/en-us/articles/using-sr-iov-to-share-an-ethernet-port-among-multiple-vms

  * OVS/OVS-DPDK: refer to the links below to set up OVS/OVS-DPDK

    http://docs.openvswitch.org/en/latest/intro/install/general/
    http://docs.openvswitch.org/en/latest/intro/install/dpdk/

* Openstack: use an OPNFV installer to deploy OpenStack.
Setup Traffic generator
-----------------------

Step 0: Preparing hardware connection::

   Connect the traffic generator and the VNF system back to back, as shown
   in the previous section.
   TRex port 0 ↔ (VNF Port 0) ↔ (VNF Port 1) ↔ TRex port 1

Step 1: Setting up the traffic generator (TRex)::

   (Refer: https://trex-tgn.cisco.com/trex/doc/trex_stateless_bench.html)

TRex Software preparations
^^^^^^^^^^^^^^^^^^^^^^^^^^

a. Install the OS (bare metal Linux, not a VM!)
b. Obtain the latest TRex package: wget https://trex-tgn.cisco.com/trex/release/latest
c. Untar the package: tar -xzf latest
d. Change directory to the unzipped TRex directory
e. Create the config file using: sudo python dpdk_setup_ports.py -i
   (On Ubuntu 16, python3 is needed.)
   See the config creation paragraph for detailed step-by-step instructions.
Step 2: Procedure to build SampleVNFs::

   a) Clone the sampleVNF project repository: git clone https://git.opnfv.org/samplevnf

   * Interactive options:

     ./tools/vnf_build.sh -i

     Follow the steps on the screen from option [1] to [9] and select option [8]
     to build the VNFs. It will automatically download the selected DPDK version
     and any required patches, set everything up and build the VNFs.

     Following are the options for setup:
        ----------------------------------------------------------
        Step 1: Environment setup.
        ----------------------------------------------------------
        [1] Check OS and network connection
        [2] Select DPDK RTE version

        ----------------------------------------------------------
        Step 2: Download and Install
        ----------------------------------------------------------
        [3] Agree to download
        [4] Download packages
        [5] Download DPDK zip
        [6] Build and Install DPDK

        ----------------------------------------------------------
        Step 3: Build VNFs
        ----------------------------------------------------------
        [8] Build all VNFs (vACL, vCGNAPT, vFW, UDP_Replay, DPPD-PROX)

   * Non-interactive options:

     ./tools/vnf_build.sh -s -d=<dpdk version eg 17.02>
   1. Download the supported DPDK version from dpdk.org:
      http://dpdk.org/browse/dpdk/snapshot/dpdk-$DPDK_RTE_VER.zip
      unzip dpdk-$DPDK_RTE_VER.zip and apply the DPDK patches only in case of
      DPDK 16.04 (not required for other DPDK versions)

      make config T=x86_64-native-linuxapp-gcc O=x86_64-native-linuxapp-gcc
      cd x86_64-native-linuxapp-gcc
      make

   2. Set up hugepages.
      For 1G/2M hugepage sizes, for example 1G pages, the size must be
      specified explicitly and can also be optionally set as the default
      hugepage size for the system. For example, to reserve 8G of hugepage
      memory in the form of eight 1G pages, the following options should be
      passed to the kernel: default_hugepagesz=1G hugepagesz=1G hugepages=8
      hugepagesz=2M hugepages=2048

   3. Add this to the /etc/default/grub configuration file:
      Append "default_hugepagesz=1G hugepagesz=1G hugepages=8 hugepagesz=2M hugepages=2048"
      to the GRUB_CMDLINE_LINUX entry.

   4. Set up the environment variables:
      export RTE_SDK=<samplevnf>/dpdk
      export RTE_TARGET=x86_64-native-linuxapp-gcc
      export VNF_CORE=<samplevnf>
      or use ./tools/setenv.sh

   5. Build the VNFs (all of them, or individual VNFs).

      The vFW executable will be created at the following location:
      <samplevnf>/VNFs/vFW/build/vFW
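As a sanity check on the hugepage numbers quoted in steps 2 and 3 above (eight
1G pages plus 2048 2M pages), the total reserved memory can be computed; this
is plain arithmetic over the values from the kernel options, nothing more:

```python
# Totals for the example kernel options quoted above:
# default_hugepagesz=1G hugepagesz=1G hugepages=8 hugepagesz=2M hugepages=2048
one_gig_pages_mb = 8 * 1024      # eight 1G pages  -> 8192 MB
two_meg_pages_mb = 2048 * 2      # 2048 2M pages   -> 4096 MB

# Total hugepage memory the kernel will reserve at boot
total_mb = one_gig_pages_mb + two_meg_pages_mb
print(f"total hugepage memory: {total_mb} MB")  # -> 12288 MB
```

Remember to leave enough non-hugepage memory for the OS itself when sizing these values.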
Virtual Firewall - How to run
-----------------------------

Step 3: Bind the datapath ports to DPDK::

   a. Bind the ports to DPDK.
      For DPDK versions 17.xx:
      1. cd <samplevnf>/dpdk
      2. ./usertools/dpdk-devbind.py --status   <-- lists the network devices
      3. ./usertools/dpdk-devbind.py -b igb_uio <PCI Port 0> <PCI Port 1>
      More details: http://dpdk.org/doc/guides-17.05/linux_gsg/build_dpdk.html#binding-and-unbinding-network-ports-to-from-the-kernel-modules

   b. Prepare the script to enable the VNF to route the packets.
      cd <samplevnf>/VNFs/vFW/config
      Open -> VFW_SWLB_SinglePortPair_script.tc. Replace the bold items based
      on your setting.

      link 0 config <VNF port 0 IP eg 202.16.100.10> 8
      link 0 up
      link 1 config <VNF port 1 IP eg 172.16.40.10> 8
      link 1 up

      ; routeadd <port #> <ipv4 nhip address in decimal> <Mask>
      routeadd 0 <traffic generator port 0 IP eg 202.16.100.20> 0xff000000
      routeadd 1 <traffic generator port 1 IP eg 172.16.40.20> 0xff000000

      ; IPv4 static ARP; disable if dynamic ARP is enabled.
      p 1 arpadd 0 <traffic generator port 0 IP eg 202.16.100.20> <traffic generator port 0 MAC>
      p 1 arpadd 1 <traffic generator port 1 IP eg 172.16.40.20> <traffic generator port 1 MAC>

      p action add 0 accept
      p action add 1 accept

      p action add 0 conntrack
      p action add 1 conntrack
      p action add 2 conntrack
      p action add 3 conntrack

      p vfw add 1 <traffic generator port 0 IP eg 202.16.100.20> 8 <traffic generator port 1 IP eg 172.16.40.20> 8 0 65535 67 69 0 0 2
      p vfw add 2 <traffic generator port 0 IP eg 202.16.100.20> 8 <traffic generator port 1 IP eg 172.16.40.20> 8 0 65535 0 65535 0 0 1
      p vfw add 2 <traffic generator port 1 IP eg 172.16.40.20> 8 <traffic generator port 0 IP eg 202.16.100.20> 8 0 65535 0 65535 0 0 0

   c. Run the command below to launch the VNF. Please make sure hugepages are
      configured and the ports to be used are bound to DPDK.
      cd <samplevnf>/VNFs/vFW/
      ./build/vFW -p 0x3 -f ./config/VFW_SWLB_SinglePortPair_4Thread.cfg -s ./config/VFW_SWLB_SinglePortPair_script.tc
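The routeadd entries in the script above use the mask 0xff000000, i.e. a /8
network mask. As an illustration (not part of the VNF itself), the network a
next-hop IP falls into under that mask can be derived with Python's standard
ipaddress module, using the example IP from the script:

```python
import ipaddress

# 0xff000000 is the /8 mask used by the routeadd entries above
mask = 0xFF000000
nhip = ipaddress.IPv4Address("202.16.100.20")   # example TG port 0 IP

# Apply the mask to find the matching route's network address
network = ipaddress.IPv4Address(int(nhip) & mask)
print(network)  # -> 202.0.0.0
```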
Step 4: Run the test using the traffic generator::

   On the traffic generator system:
   cd <trex eg v2.28/stl>
   Update bench.py to generate the traffic.

   class STLBench(object):
       ip_range['src'] = {'start': '<traffic generator port 0 IP eg 202.16.100.20>', 'end': '<traffic generator port 0 IP eg 202.16.100.20>'}
       ip_range['dst'] = {'start': '<traffic generator port 1 IP eg 172.16.40.20>', 'end': '<traffic generator port 1 IP eg 172.16.40.20>'}

   Run the TRex server: sudo ./t-rex-64 -i -c 7
   In another shell, run the TRex console: trex-console
   (The console can be run from another computer with the -s argument; use
   --help for more info. Other options for the TRex client are automation or the GUI.)
   In the console, run the "tui" command, and then send the traffic with commands like:
   start -f stl/bench.py -m 50% --port 0 3 -t size=590,vm=var1

   For more details refer: https://trex-tgn.cisco.com/trex/doc/trex_stateless_bench.html
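The bench.py edit described above pins both the source and destination ranges
to a single address each. A minimal sketch with the example IPs substituted;
the ip_range name mirrors the bench.py snippet above, everything else here is
illustrative:

```python
# Example ip_range edit for bench.py, with the sample IPs filled in.
# A single-address range means every generated packet uses the same src/dst IP.
ip_range = {}
ip_range['src'] = {'start': '202.16.100.20', 'end': '202.16.100.20'}
ip_range['dst'] = {'start': '172.16.40.20', 'end': '172.16.40.20'}

print(ip_range['src']['start'], '->', ip_range['dst']['start'])
```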
Virtual Access Control list - How to run
----------------------------------------

Step 3: Bind the datapath ports to DPDK::

   a. Bind the ports to DPDK.
      For DPDK versions 17.xx:
      1. cd <samplevnf>/dpdk
      2. ./usertools/dpdk-devbind.py --status   <-- lists the network devices
      3. ./usertools/dpdk-devbind.py -b igb_uio <PCI Port 0> <PCI Port 1>
      More details: http://dpdk.org/doc/guides-17.05/linux_gsg/build_dpdk.html#binding-and-unbinding-network-ports-to-from-the-kernel-modules

   b. Prepare the script to enable the VNF to route the packets.
      cd <samplevnf>/VNFs/vACL/config
      Open -> IPv4_swlb_acl.tc. Replace the bold items based on your setting.

      link 0 config <VNF port 0 IP eg 202.16.100.10> 8
      link 0 up
      link 1 config <VNF port 1 IP eg 172.16.40.10> 8
      link 1 up

      ; routeadd <port #> <ipv4 nhip address in decimal> <Mask>
      routeadd 0 <traffic generator port 0 IP eg 202.16.100.20> 0xff000000
      routeadd 1 <traffic generator port 1 IP eg 172.16.40.20> 0xff000000

      ; IPv4 static ARP; disable if dynamic ARP is enabled.
      p 1 arpadd 0 <traffic generator port 0 IP eg 202.16.100.20> <traffic generator port 0 MAC>
      p 1 arpadd 1 <traffic generator port 1 IP eg 172.16.40.20> <traffic generator port 1 MAC>

      p action add 0 accept
      p action add 1 accept

      p action add 0 conntrack
      p action add 1 conntrack
      p action add 2 conntrack
      p action add 3 conntrack

      p acl add 1 <traffic generator port 0 IP eg 202.16.100.20> 8 <traffic generator port 1 IP eg 172.16.40.20> 8 0 65535 67 69 0 0 2
      p acl add 2 <traffic generator port 0 IP eg 202.16.100.20> 8 <traffic generator port 1 IP eg 172.16.40.20> 8 0 65535 0 65535 0 0 1
      p acl add 2 <traffic generator port 1 IP eg 172.16.40.20> 8 <traffic generator port 0 IP eg 202.16.100.20> 8 0 65535 0 65535 0 0 0

   c. Run the command below to launch the VNF. Please make sure hugepages are
      configured and the ports to be used are bound to DPDK.
      cd <samplevnf>/VNFs/vACL/
      ./build/vACL -p 0x3 -f ./config/IPv4_swlb_acl_1LB_1t.cfg -s ./config/IPv4_swlb_acl.tc

Step 4: Run the test using the traffic generator::

   On the traffic generator system:
   cd <trex eg v2.28/stl>
   Update bench.py to generate the traffic.

   class STLBench(object):
       ip_range['src'] = {'start': '<traffic generator port 0 IP eg 202.16.100.20>', 'end': '<traffic generator port 0 IP eg 202.16.100.20>'}
       ip_range['dst'] = {'start': '<traffic generator port 1 IP eg 172.16.40.20>', 'end': '<traffic generator port 1 IP eg 172.16.40.20>'}

   Run the TRex server: sudo ./t-rex-64 -i -c 7
   In another shell, run the TRex console: trex-console
   (The console can be run from another computer with the -s argument; use
   --help for more info. Other options for the TRex client are automation or the GUI.)
   In the console, run the "tui" command, and then send the traffic with commands like:
   start -f stl/bench.py -m 50% --port 0 3 -t size=590,vm=var1

   For more details refer: https://trex-tgn.cisco.com/trex/doc/trex_stateless_bench.html
vCGNAPT - How to run
--------------------

Step 3: Bind the datapath ports to DPDK::

   a. Bind the ports to DPDK.
      For DPDK versions 17.xx:
      1. cd <samplevnf>/dpdk
      2. ./usertools/dpdk-devbind.py --status   <-- lists the network devices
      3. ./usertools/dpdk-devbind.py -b igb_uio <PCI Port 0> <PCI Port 1>
      More details: http://dpdk.org/doc/guides-17.05/linux_gsg/build_dpdk.html#binding-and-unbinding-network-ports-to-from-the-kernel-modules

   b. Prepare the script to enable the VNF to route the packets.
      cd <samplevnf>/VNFs/vCGNAPT/config
      Open -> sample_swlb_2port_2WT.tc. Replace the bold items based on your setting.

      link 0 config <VNF port 0 IP eg 202.16.100.10> 8
      link 0 up
      link 1 config <VNF port 1 IP eg 172.16.40.10> 8
      link 1 up

      ; uncomment to enable static NAPT
      ;p <cgnapt pipeline id> entry addm <prv_ipv4/6> <prv_port> <pub_ip> <pub_port> <phy_port> <ttl> <no_of_entries> <end_prv_port> <end_pub_port>
      ;p 5 entry addm 202.16.100.20 1234 152.16.40.10 1 0 500 65535 1234 65535

      ; routeadd <port #> <ipv4 nhip address in decimal> <Mask>
      routeadd 0 <traffic generator port 0 IP eg 202.16.100.20> 0xff000000
      routeadd 1 <traffic generator port 1 IP eg 172.16.40.20> 0xff000000

      ; IPv4 static ARP; disable if dynamic ARP is enabled.
      p 1 arpadd 0 <traffic generator port 0 IP eg 202.16.100.20> <traffic generator port 0 MAC>
      p 1 arpadd 1 <traffic generator port 1 IP eg 172.16.40.20> <traffic generator port 1 MAC>

      For dynamic CGNAPT, please use UDP_Replay as one of the traffic generators:
      (TG1) (port 0) --> (port 0) VNF (CGNAPT) (Port 1) --> (port 0) (UDP_Replay)

   c. Run the command below to launch the VNF. Please make sure hugepages are
      configured and the ports to be used are bound to DPDK.
      cd <samplevnf>/VNFs/vCGNAPT/
      ./build/vCGNAPT -p 0x3 -f ./config/sample_swlb_2port_2WT.cfg -s ./config/sample_swlb_2port_2WT.tc

Step 4: Run the test using the traffic generator::

   On the traffic generator system:
   cd <trex eg v2.28/stl>
   Update bench.py to generate the traffic.

   class STLBench(object):
       ip_range['src'] = {'start': '<traffic generator port 0 IP eg 202.16.100.20>', 'end': '<traffic generator port 0 IP eg 202.16.100.20>'}
       ip_range['dst'] = {'start': '<traffic generator port 1 IP eg 172.16.40.20>', 'end': '<public ip e.g 152.16.40.10>'}

   Run the TRex server: sudo ./t-rex-64 -i -c 7
   In another shell, run the TRex console: trex-console
   (The console can be run from another computer with the -s argument; use
   --help for more info. Other options for the TRex client are automation or the GUI.)
   In the console, run the "tui" command, and then send the traffic with commands like:
   start -f stl/bench.py -m 50% --port 0 3 -t size=590,vm=var1

   For more details refer: https://trex-tgn.cisco.com/trex/doc/trex_stateless_bench.html
UDP_Replay - How to run
-----------------------

Step 3: Bind the datapath ports to DPDK::

   a. Bind the ports to DPDK.
      For DPDK versions 17.xx:
      1. cd <samplevnf>/dpdk
      2. ./usertools/dpdk-devbind.py --status   <-- lists the network devices
      3. ./usertools/dpdk-devbind.py -b igb_uio <PCI Port 0> <PCI Port 1>
      More details: http://dpdk.org/doc/guides-17.05/linux_gsg/build_dpdk.html#binding-and-unbinding-network-ports-to-from-the-kernel-modules

   b. Run the command below to launch the VNF. Please make sure hugepages are
      configured and the ports to be used are bound to DPDK.
      cd <samplevnf>/VNFs/UDP_Replay/
      cmd: ./build/UDP_Replay -c 0x7 -n 4 -w <pci> -w <pci> -- --no-hw-csum -p <portmask> --config='(port, queue, cpucore)'
      e.g. ./build/UDP_Replay -c 0x7 -n 4 -w 0000:07:00.0 -w 0000:07:00.1 -- --no-hw-csum -p 0x3 --config='(0, 0, 1)(1, 0, 2)'
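In the example above, -c 0x7 and -p 0x3 are bitmasks (cores and ports), and
--config maps (port, queue, cpucore) tuples. As an illustration of how those
arguments decode (the helper names here are hypothetical, not part of
UDP_Replay):

```python
import re

def ports_from_mask(portmask: int, max_ports: int = 8):
    """Return the DPDK port ids enabled by a port bitmask (e.g. -p 0x3)."""
    return [i for i in range(max_ports) if portmask & (1 << i)]

def parse_config(cfg: str):
    """Parse a --config string like '(0, 0, 1)(1, 0, 2)' into
    (port, queue, cpucore) tuples."""
    return [tuple(int(x) for x in grp.split(','))
            for grp in re.findall(r'\(([^)]*)\)', cfg)]

print(ports_from_mask(0x3))                # -> [0, 1], matching -p 0x3
print(parse_config('(0, 0, 1)(1, 0, 2)'))  # -> [(0, 0, 1), (1, 0, 2)]
```

So -p 0x3 enables ports 0 and 1, each served by queue 0 on CPU cores 1 and 2 respectively.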
Step 4: Run the test using the traffic generator::

   On the traffic generator system:
   cd <trex eg v2.28/stl>
   Update bench.py to generate the traffic.

   class STLBench(object):
       ip_range['src'] = {'start': '<traffic generator port 0 IP eg 202.16.100.20>', 'end': '<traffic generator port 0 IP eg 202.16.100.20>'}
       ip_range['dst'] = {'start': '<traffic generator port 1 IP eg 172.16.40.20>', 'end': '<public ip e.g 152.16.40.10>'}

   Run the TRex server: sudo ./t-rex-64 -i -c 7
   In another shell, run the TRex console: trex-console
   (The console can be run from another computer with the -s argument; use
   --help for more info. Other options for the TRex client are automation or the GUI.)
   In the console, run the "tui" command, and then send the traffic with commands like:
   start -f stl/bench.py -m 50% --port 0 3 -t size=590,vm=var1

   For more details refer: https://trex-tgn.cisco.com/trex/doc/trex_stateless_bench.html
DPPD-PROX
---------

This is PROX, the Packet pROcessing eXecution engine, part of Intel(R)
Data Plane Performance Demonstrators, formerly known as DPPD-BNG.
PROX is a DPDK-based application implementing Telco use-cases such as
a simplified BRAS/BNG and light-weight AFTR. It also allows configuring
finer-grained network functions like QoS, routing and load-balancing.
Compiling and running this application
--------------------------------------

This application supports DPDK 16.04, 16.11, 17.02 and 17.05.
The following commands assume that the following variables have been set::

   export RTE_SDK=/path/to/dpdk
   export RTE_TARGET=x86_64-native-linuxapp-gcc

Example: DPDK 17.05 installation
--------------------------------
::

   git clone http://dpdk.org/git/dpdk
   cd dpdk
   git checkout v17.05
   make install T=$RTE_TARGET
The Makefile with this application expects RTE_SDK to point to the
root directory of DPDK (e.g. export RTE_SDK=/root/dpdk). If RTE_TARGET
has not been set, x86_64-native-linuxapp-gcc will be assumed.

After DPDK has been set up, run make from the directory where you have
extracted this application. A build directory will be created
containing the PROX executable. The usage of the application is shown
below. Note that this application assumes that all required ports have
been bound to the DPDK provided igb_uio driver. Refer to the "Getting
Started Guide - DPDK" document for more details.
::

   Usage: ./build/prox [-f CONFIG_FILE] [-l LOG_FILE] [-p] [-o DISPLAY] [-v] [-a|-e] \
          [-m|-s|-i] [-n] [-w DEF] [-q] [-k] [-d] [-z] [-r VAL] [-u] [-t]
   -f CONFIG_FILE : configuration file to load, ./prox.cfg by default
   -l LOG_FILE : log file name, ./prox.log by default
   -p : include PID in log file name if default log file is used
   -o DISPLAY : set display to use, can be 'curses' (default), 'cli' or 'none'
   -v verbosity : initial logging verbosity
   -a : autostart all cores (by default)
   -e : don't autostart
   -n : create NULL devices instead of using PCI devices, useful together with -i
   -m : list supported task modes and exit
   -s : check configuration file syntax and exit
   -i : check initialization sequence and exit
   -u : listen on UDS /tmp/prox.sock
   -t : listen on TCP port 8474
   -q : pass argument to Lua interpreter, useful to define variables
   -w : define variable using syntax varname=value
        takes precedence over variables defined in CONFIG_FILE
   -k : log statistics to file "stats_dump" in current directory
   -d : run as daemon, the parent process will block until PROX is initialized
   -z : ignore CPU topology, implies -i
   -r : change initial screen refresh rate; if set to a value lower than
        0.001 seconds, screen refreshing will be disabled
While applications using DPDK typically rely on the core mask and the
number of channels being specified on the command line, this
application is configured using a .cfg file. The core mask and number
of channels are derived from this config. For example, to run the
application from the source directory, execute::

   user@target:~$ ./build/prox -f ./config/nop.cfg
Provided example configurations
-------------------------------

PROX can be configured either as the SUT (System Under Test) or as the
Traffic Generator. Some example configuration files are provided, both
in the config directory to run PROX as a SUT, and in the gen directory
to run it as a Traffic Generator.
A quick description of these example configurations is provided below.
Additional details are provided in the example configuration files.

Basic configurations, mostly used as sanity checks:

- config/nop.cfg
- config/nop-rings.cfg

Simplified BNG (Border Network Gateway) configurations, using different
numbers of ports, with and without QoS, running on the host or in a VM:

- config/bng-4ports.cfg
- config/bng-8ports.cfg
- config/bng-qos-4ports.cfg
- config/bng-qos-8ports.cfg
- config/bng-1q-4ports.cfg
- config/bng-ovs-usv-4ports.cfg
- config/bng-no-cpu-topology-4ports.cfg
- gen/bng-4ports-gen.cfg
- gen/bng-8ports-gen.cfg
- gen/bng-ovs-usv-4ports-gen.cfg

Light-weight AFTR configurations:

- gen/lw_aftr-gen.cfg