.. This work is licensed under a Creative Commons Attribution 4.0 International License.
.. http://creativecommons.org/licenses/by/4.0
.. (c) OPNFV, Intel Corporation and others.
The device under test (DUT) consists of a system with the following:

* A single or dual processor and PCH chip, except for System on Chip (SoC) cases
* DRAM memory size and frequency (normally single DIMM per channel)
* Specific Intel Network Interface Cards (NICs)
* BIOS settings, noting those that have been updated from the basic settings
* DPDK build configuration settings, and commands used for tests

Connected to the DUT is an IXIA* or a software traffic generator such as pktgen or
TRex, used to generate packet traffic to the DUT ports and to determine the
throughput/latency at the tester side.
Below are the supported/tested :term:`VNF` deployment types.

.. image:: images/deploy_type.png
   :alt: SampleVNF supported topology

Hardware & Software Ingredients
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+-----------+------------------+
| Item      | Description      |
+-----------+------------------+
| OS        | Ubuntu 16.04 LTS |
+-----------+------------------+
| kernel    | 4.4.0-34-generic |
+-----------+------------------+
Boot and BIOS settings:

+------------------+---------------------------------------------------+
| Boot settings    | default_hugepagesz=1G hugepagesz=1G hugepages=16  |
|                  | hugepagesz=2M hugepages=2048 isolcpus=1-11,22-33  |
|                  | nohz_full=1-11,22-33 rcu_nocbs=1-11,22-33         |
|                  | Note: nohz_full and rcu_nocbs are used to disable |
|                  | Linux* kernel interrupts on the isolated cores    |
+------------------+---------------------------------------------------+
| BIOS             | CPU Power and Performance Policy <Performance>    |
|                  | CPU C-state Disabled                              |
|                  | CPU P-state Disabled                              |
|                  | Enhanced Intel® Speedstep® Tech Disabled          |
|                  | Hyper-Threading Technology (If supported) Enable  |
|                  | Virtualization Technology Enable                  |
|                  | Coherency Enable                                  |
|                  | Turbo Boost Disabled                              |
+------------------+---------------------------------------------------+
Network Topology for testing VNFs
---------------------------------
The Ethernet cables should be connected between the traffic generator and the VNF
server (BM, SRIOV or OVS) setup, based on the test profile.
The connectivity could be

1) Single port pair : one pair of ports used for traffic

   e.g. Single port pair: link0 and link1 of the VNF are used::

     TG:port 0 <------> VNF:Port 0
     TG:port 1 <------> VNF:Port 1

   For correlated traffic, use the below configuration::

     TG_1:port 0 <------> VNF:Port 0
     VNF:Port 1  <------> TG_2:port 0 (UDP Replay)

   (TG_2 (UDP_Replay) reflects all the traffic on the given port)

2) Multi port pair : more than one pair of ports used for traffic

   e.g. Two port pairs: link 0, link 1, link 2 and link 3 of the VNF are used::

     TG:port 0 <------> VNF:Port 0
     TG:port 1 <------> VNF:Port 1
     TG:port 2 <------> VNF:Port 2
     TG:port 3 <------> VNF:Port 3

   For correlated traffic, use the below configuration::

     TG_1:port 0 <------> VNF:Port 0
     VNF:Port 1  <------> TG_2:port 0 (UDP Replay)
     TG_1:port 1 <------> VNF:Port 2
     VNF:Port 3  <------> TG_2:port 1 (UDP Replay)

   (TG_2 (UDP_Replay) reflects all the traffic on the given port)
Refer to http://fast.dpdk.org/doc/pdf-guides/ to set up the DUT for the VNF to run.

* Standalone Virtualization - PHY-VM-PHY

  * SRIOV: refer to the below link to set up SR-IOV
    https://software.intel.com/en-us/articles/using-sr-iov-to-share-an-ethernet-port-among-multiple-vms

  * OVS-DPDK: refer to the below links to set up OVS-DPDK
    http://docs.openvswitch.org/en/latest/intro/install/general/
    http://docs.openvswitch.org/en/latest/intro/install/dpdk/

* Openstack: use any OPNFV installer to deploy OpenStack.
Setup Traffic generator
-----------------------

Step 0: Preparing hardware connection

    Connect the traffic generator and the VNF system back to back, as shown in the previous section.

    TRex port 0 ↔ (VNF Port 0) ↔ (VNF Port 1) ↔ TRex port 1

Step 1: Setting up Traffic generator (TRex)

TRex Software preparations
**************************
* Install the OS (bare-metal Linux, not a VM!)
* Obtain the latest TRex package: wget https://trex-tgn.cisco.com/trex/release/latest
* Untar the package: tar -xzf latest
* Change directory to the unzipped TRex directory
* Create the config file using the command: sudo python dpdk_setup_ports.py -i
  (on Ubuntu 16, python3 is needed).
  See the config creation paragraph for a detailed step-by-step guide, and the
  consolidated example after this list.
  (Refer: https://trex-tgn.cisco.com/trex/doc/trex_stateless_bench.html)
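The above preparation steps, consolidated into one illustrative shell sequence
(the extracted directory name depends on the downloaded release; v2.28 is used
here only as an example)::

  wget https://trex-tgn.cisco.com/trex/release/latest
  tar -xzf latest
  cd v2.28
  sudo python dpdk_setup_ports.py -i    # use python3 on Ubuntu 16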
Step 2: Procedure to build SampleVNFs

a) Clone the SampleVNF project repository - git clone https://git.opnfv.org/samplevnf

* Interactive options::

    ./tools/vnf_build.sh -i

  Follow the steps on the screen from option [1] to [9] and select option [9] to build the VNFs.
  It will automatically download the selected DPDK version and any required patches, set everything up and build the VNFs.

  Following are the options for setup::

    ----------------------------------------------------------
     Step 1: Environment setup.
    ----------------------------------------------------------
    [1] Check OS and network connection
    [2] Select DPDK RTE version

    ----------------------------------------------------------
     Step 2: Download and Install
    ----------------------------------------------------------
    [3] Agree to download
    [4] Download packages
    [5] Download DPDK zip
    [6] Build and Install DPDK
    [8] Download civetweb

    ----------------------------------------------------------
     Step 3: Build VNFs
    ----------------------------------------------------------
    [9] Build all VNFs (vACL, vCGNAPT, vFW, UDP_Replay, DPPD-PROX)
* Non-Interactive options::

    ./tools/vnf_build.sh -s -d=<dpdk version eg 17.02>

  If the system is behind a proxy::

    ./tools/vnf_build.sh -s -d=<dpdk version eg 17.02> -p=<proxy>
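  For example, a non-interactive build against DPDK 17.02 behind a corporate
  proxy might look like the following (the proxy URL is only a placeholder)::

    ./tools/vnf_build.sh -s -d=17.02 -p=http://proxy.example.com:8080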
Alternatively, the following manual steps can be used:

1) Download the supported DPDK version from dpdk.org::

     http://dpdk.org/browse/dpdk/snapshot/dpdk-$DPDK_RTE_VER.zip

   unzip dpdk-$DPDK_RTE_VER.zip and apply the DPDK patches only in case of 16.04
   (not required for other DPDK versions), then::

     make config T=x86_64-native-linuxapp-gcc O=x86_64-native-linuxapp-gcc
     cd x86_64-native-linuxapp-gcc

2) Download civetweb version 1.9 from the following link::

     https://sourceforge.net/projects/civetweb/files/1.9/CivetWeb_V1.9.zip

   unzip CivetWeb_V1.9.zip
   mv civetweb-master civetweb
3) Setup hugepages

   For 1G/2M hugepage sizes, for example 1G pages, the size must be
   specified explicitly and can also optionally be set as the
   default hugepage size for the system. For example, to reserve 8G
   of hugepage memory in the form of eight 1G pages, the following
   options should be passed to the kernel:
   default_hugepagesz=1G hugepagesz=1G hugepages=8 hugepagesz=2M hugepages=2048

4) Go to the /etc/default/grub configuration file and append
   "default_hugepagesz=1G hugepagesz=1G hugepages=8 hugepagesz=2M hugepages=2048"
   to the GRUB_CMDLINE_LINUX entry, as sketched below.
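   A minimal sketch of this GRUB change, assuming a Debian/Ubuntu system where
   update-grub regenerates the boot configuration::

     # in /etc/default/grub, extend the GRUB_CMDLINE_LINUX entry, e.g.:
     GRUB_CMDLINE_LINUX="default_hugepagesz=1G hugepagesz=1G hugepages=8 hugepagesz=2M hugepages=2048"

     # then regenerate the GRUB configuration and reboot so the reservation takes effect
     sudo update-grub
     sudo reboot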
5) Setup Environment Variables::

     export RTE_SDK=<samplevnf>/dpdk
     export RTE_TARGET=x86_64-native-linuxapp-gcc
     export VNF_CORE=<samplevnf>

   or use ./tools/setenv.sh

6) Build the VNFs, either all together or individually.

   The vFW executable will be created at the following location::

     <samplevnf>/VNFs/vFW/build/vFW
Virtual Firewall - How to run
-----------------------------

Step 3: Bind the datapath ports to DPDK

a) Bind the ports to DPDK (a concrete example with sample PCI addresses is shown below)

   For DPDK versions 17.xx::

     1) cd <samplevnf>/dpdk
     2) ./usertools/dpdk-devbind.py --status       <-- lists the network devices
     3) ./usertools/dpdk-devbind.py -b igb_uio <PCI Port 0> <PCI Port 1>

   .. _More details: http://dpdk.org/doc/guides-17.05/linux_gsg/build_dpdk.html#binding-and-unbinding-network-ports-to-from-the-kernel-modules
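   For example, assuming DPDK was built as described in Step 2 and the two
   datapath ports appear as 0000:07:00.0 and 0000:07:00.1 in the --status
   listing (the PCI addresses on your system will differ)::

     sudo modprobe uio
     sudo insmod <samplevnf>/dpdk/x86_64-native-linuxapp-gcc/kmod/igb_uio.ko
     ./usertools/dpdk-devbind.py -b igb_uio 0000:07:00.0 0000:07:00.1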
b) Prepare the script to enable the VNF to route the packets (a filled-in
   example using the sample addresses from this guide is shown after the
   script)::

     cd <samplevnf>/VNFs/vFW/config
     Open -> VFW_SWLB_SinglePortPair_script.tc. Replace the bold items based on your setting.

     link 0 config <VNF port 0 IP eg 202.16.100.10> 8
     link 1 config <VNF port 1 IP eg 172.16.40.10> 8

     ; routeadd <net/host> <port #> <ipv4 nhip address in decimal> <Mask>
     routeadd net 0 <traffic generator port 0 IP eg 202.16.100.20> 0xff000000
     routeadd net 1 <traffic generator port 1 IP eg 172.16.40.20> 0xff000000

     ; IPv4 static ARP; disable if dynamic arp is enabled.
     p 1 arpadd 0 <traffic generator port 0 IP eg 202.16.100.20> <traffic generator port 0 MAC>
     p 1 arpadd 1 <traffic generator port 1 IP eg 172.16.40.20> <traffic generator port 1 MAC>

     p action add 0 accept
     p action add 1 accept

     p action add 0 conntrack
     p action add 1 conntrack
     p action add 2 conntrack
     p action add 3 conntrack

     p vfw add 1 <traffic generator port 0 IP eg 202.16.100.20> 8 <traffic generator port 1 IP eg 172.16.40.20> 8 0 65535 67 69 0 0 2
     p vfw add 2 <traffic generator port 0 IP eg 202.16.100.20> 8 <traffic generator port 1 IP eg 172.16.40.20> 8 0 65535 0 65535 0 0 1
     p vfw add 2 <traffic generator port 1 IP eg 172.16.40.20> 8 <traffic generator port 0 IP eg 202.16.100.20> 8 0 65535 0 65535 0 0 0
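   With the example addresses used throughout this guide substituted in, the
   routing and ARP lines of the script would look as follows (the MAC
   addresses are placeholders for your traffic generator ports)::

     routeadd net 0 202.16.100.20 0xff000000
     routeadd net 1 172.16.40.20 0xff000000
     p 1 arpadd 0 202.16.100.20 00:00:00:00:00:01
     p 1 arpadd 1 172.16.40.20 00:00:00:00:00:02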
c) Run the below command to launch the VNF. Please make sure hugepages are set up and the ports to be used are bound to DPDK::

     cd <samplevnf>/VNFs/vFW/
     ./build/vFW -p 0x3 -f ./config/VFW_SWLB_SinglePortPair_4Thread.cfg -s ./config/VFW_SWLB_SinglePortPair_script.tc
Step 4: Run the test using the traffic generator

  On the traffic generator system::

    cd <trex eg v2.28>/stl
    Update bench.py to generate the traffic.

    class STLBench(object):
        ip_range['src'] = {'start': '<traffic generator port 0 IP eg 202.16.100.20>', 'end': '<traffic generator port 0 IP eg 202.16.100.20>'}
        ip_range['dst'] = {'start': '<traffic generator port 1 IP eg 172.16.40.20>', 'end': '<traffic generator port 1 IP eg 172.16.40.20>'}

  Run the TRex server: sudo ./t-rex-64 -i -c 7
  In another shell run the TRex console: trex-console
  The console can be run from another computer with the -s argument; use --help for more info.
  Other options for the TRex client are automation or the GUI.
  In the console, run the "tui" command, and then send the traffic with commands like::

    start -f stl/bench.py -m 50% --port 0 3 -t size=590,vm=var1

  (A consolidated example of this run sequence is shown below.)
  For more details refer to: https://trex-tgn.cisco.com/trex/doc/trex_stateless_bench.html
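  For reference, the full run sequence on the traffic generator side, using the
  example values above (illustrative only; adjust the rate, ports and frame
  size to your test plan)::

    # terminal 1 - start the TRex server from the TRex root directory
    sudo ./t-rex-64 -i -c 7

    # terminal 2 - connect the console, open the tui and start traffic
    trex-console
    tui
    start -f stl/bench.py -m 50% --port 0 3 -t size=590,vm=var1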
Virtual Access Control List - How to run
----------------------------------------

Step 3: Bind the datapath ports to DPDK

a) Bind the ports to DPDK

   For DPDK versions 17.xx::

     1) cd <samplevnf>/dpdk
     2) ./usertools/dpdk-devbind.py --status       <-- lists the network devices
     3) ./usertools/dpdk-devbind.py -b igb_uio <PCI Port 0> <PCI Port 1>

   .. _More details: http://dpdk.org/doc/guides-17.05/linux_gsg/build_dpdk.html#binding-and-unbinding-network-ports-to-from-the-kernel-modules
b) Prepare the script to enable the VNF to route the packets::

     cd <samplevnf>/VNFs/vACL/config
     Open -> IPv4_swlb_acl.tc. Replace the bold items based on your setting.

     link 0 config <VNF port 0 IP eg 202.16.100.10> 8
     link 1 config <VNF port 1 IP eg 172.16.40.10> 8

     ; routeadd <net/host> <port #> <ipv4 nhip address in decimal> <Mask>
     routeadd net 0 <traffic generator port 0 IP eg 202.16.100.20> 0xff000000
     routeadd net 1 <traffic generator port 1 IP eg 172.16.40.20> 0xff000000

     ; IPv4 static ARP; disable if dynamic arp is enabled.
     p 1 arpadd 0 <traffic generator port 0 IP eg 202.16.100.20> <traffic generator port 0 MAC>
     p 1 arpadd 1 <traffic generator port 1 IP eg 172.16.40.20> <traffic generator port 1 MAC>

     p action add 0 accept
     p action add 1 accept

     p action add 0 conntrack
     p action add 1 conntrack
     p action add 2 conntrack
     p action add 3 conntrack

     p acl add 1 <traffic generator port 0 IP eg 202.16.100.20> 8 <traffic generator port 1 IP eg 172.16.40.20> 8 0 65535 67 69 0 0 2
     p acl add 2 <traffic generator port 0 IP eg 202.16.100.20> 8 <traffic generator port 1 IP eg 172.16.40.20> 8 0 65535 0 65535 0 0 1
     p acl add 2 <traffic generator port 1 IP eg 172.16.40.20> 8 <traffic generator port 0 IP eg 202.16.100.20> 8 0 65535 0 65535 0 0 0
c) Run the below command to launch the VNF. Please make sure hugepages are set up and the ports to be used are bound to DPDK::

     cd <samplevnf>/VNFs/vACL/
     ./build/vACL -p 0x3 -f ./config/IPv4_swlb_acl_1LB_1t.cfg -s ./config/IPv4_swlb_acl.tc
Step 4: Run the test using the traffic generator

  On the traffic generator system::

    cd <trex eg v2.28>/stl
    Update bench.py to generate the traffic.

    class STLBench(object):
        ip_range['src'] = {'start': '<traffic generator port 0 IP eg 202.16.100.20>', 'end': '<traffic generator port 0 IP eg 202.16.100.20>'}
        ip_range['dst'] = {'start': '<traffic generator port 1 IP eg 172.16.40.20>', 'end': '<traffic generator port 1 IP eg 172.16.40.20>'}

  Run the TRex server: sudo ./t-rex-64 -i -c 7
  In another shell run the TRex console: trex-console
  The console can be run from another computer with the -s argument; use --help for more info.
  Other options for the TRex client are automation or the GUI.
  In the console, run the "tui" command, and then send the traffic with commands like::

    start -f stl/bench.py -m 50% --port 0 3 -t size=590,vm=var1

  For more details refer to: https://trex-tgn.cisco.com/trex/doc/trex_stateless_bench.html
Virtual CGNAPT - How to run
---------------------------

Step 3: Bind the datapath ports to DPDK

a) Bind the ports to DPDK

   For DPDK versions 17.xx::

     1) cd <samplevnf>/dpdk
     2) ./usertools/dpdk-devbind.py --status       <-- lists the network devices
     3) ./usertools/dpdk-devbind.py -b igb_uio <PCI Port 0> <PCI Port 1>

   .. _More details: http://dpdk.org/doc/guides-17.05/linux_gsg/build_dpdk.html#binding-and-unbinding-network-ports-to-from-the-kernel-modules
b) Prepare the script to enable the VNF to route the packets::

     cd <samplevnf>/VNFs/vCGNAPT/config
     Open -> sample_swlb_2port_2WT.tc. Replace the bold items based on your setting.

     link 0 config <VNF port 0 IP eg 202.16.100.10> 8
     link 1 config <VNF port 1 IP eg 172.16.40.10> 8

     ; uncomment to enable static NAPT
     ;p <cgnapt pipeline id> entry addm <prv_ipv4/6> <prv_port> <pub_ip> <pub_port> <phy_port> <ttl> <no_of_entries> <end_prv_port> <end_pub_port>
     ;p 5 entry addm 202.16.100.20 1234 152.16.40.10 1 0 500 65535 1234 65535

     ; routeadd <net/host> <port #> <ipv4 nhip address in decimal> <Mask>
     routeadd net 0 <traffic generator port 0 IP eg 202.16.100.20> 0xff000000
     routeadd net 1 <traffic generator port 1 IP eg 172.16.40.20> 0xff000000

     ; IPv4 static ARP; disable if dynamic arp is enabled.
     p 1 arpadd 0 <traffic generator port 0 IP eg 202.16.100.20> <traffic generator port 0 MAC>
     p 1 arpadd 1 <traffic generator port 1 IP eg 172.16.40.20> <traffic generator port 1 MAC>

   For dynamic CGNAPT, please use UDP_Replay as one of the traffic generators::

     (TG1) (port 0) --> (port 0) VNF (CGNAPT) (Port 1) --> (port 0) (UDP_Replay)
c) Run the below command to launch the VNF. Please make sure hugepages are set up and the ports to be used are bound to DPDK::

     cd <samplevnf>/VNFs/vCGNAPT/
     ./build/vCGNAPT -p 0x3 -f ./config/sample_swlb_2port_2WT.cfg -s ./config/sample_swlb_2port_2WT.tc

d) Run UDP_Replay to reflect the traffic on the public side::

     cmd: ./build/UDP_Replay -c 0x7 -n 4 -w <pci> -w <pci> -- --no-hw-csum -p <portmask> --config='(port, queue, cpucore)'
     e.g. ./build/UDP_Replay -c 0x7 -n 4 -w 0000:07:00.0 -w 0000:07:00.1 -- --no-hw-csum -p 0x3 --config='(0, 0, 1)(1, 0, 2)'
Step 4: Run the test using the traffic generator

  On the traffic generator system::

    cd <trex eg v2.28>/stl
    Update bench.py to generate the traffic.

    class STLBench(object):
        ip_range['src'] = {'start': '<traffic generator port 0 IP eg 202.16.100.20>', 'end': '<traffic generator port 0 IP eg 202.16.100.20>'}
        ip_range['dst'] = {'start': '<traffic generator port 1 IP eg 172.16.40.20>', 'end': '<public ip e.g 152.16.40.10>'}

  Run the TRex server: sudo ./t-rex-64 -i -c 7
  In another shell run the TRex console: trex-console
  The console can be run from another computer with the -s argument; use --help for more info.
  Other options for the TRex client are automation or the GUI.
  In the console, run the "tui" command, and then send the traffic with commands like::

    start -f stl/bench.py -m 50% --port 0 3 -t size=590,vm=var1

  For more details refer to: https://trex-tgn.cisco.com/trex/doc/trex_stateless_bench.html
UDP_Replay - How to run
-----------------------

Step 3: Bind the datapath ports to DPDK

a) Bind the ports to DPDK

   For DPDK versions 17.xx::

     1) cd <samplevnf>/dpdk
     2) ./usertools/dpdk-devbind.py --status       <-- lists the network devices
     3) ./usertools/dpdk-devbind.py -b igb_uio <PCI Port 0> <PCI Port 1>

   .. _More details: http://dpdk.org/doc/guides-17.05/linux_gsg/build_dpdk.html#binding-and-unbinding-network-ports-to-from-the-kernel-modules
b) Run the below command to launch the VNF. Please make sure hugepages are set up and the ports to be used are bound to DPDK::

     cd <samplevnf>/VNFs/UDP_Replay/
     cmd: ./build/UDP_Replay -c 0x7 -n 4 -w <pci> -w <pci> -- --no-hw-csum -p <portmask> --config='(port, queue, cpucore)'
     e.g. ./build/UDP_Replay -c 0x7 -n 4 -w 0000:07:00.0 -w 0000:07:00.1 -- --no-hw-csum -p 0x3 --config='(0, 0, 1)(1, 0, 2)'
Step 4: Run the test using the traffic generator

  On the traffic generator system::

    cd <trex eg v2.28>/stl
    Update bench.py to generate the traffic.

    class STLBench(object):
        ip_range['src'] = {'start': '<traffic generator port 0 IP eg 202.16.100.20>', 'end': '<traffic generator port 0 IP eg 202.16.100.20>'}
        ip_range['dst'] = {'start': '<traffic generator port 1 IP eg 172.16.40.20>', 'end': '<public ip e.g 152.16.40.10>'}

  Run the TRex server: sudo ./t-rex-64 -i -c 7
  In another shell run the TRex console: trex-console
  The console can be run from another computer with the -s argument; use --help for more info.
  Other options for the TRex client are automation or the GUI.
  In the console, run the "tui" command, and then send the traffic with commands like::

    start -f stl/bench.py -m 50% --port 0 3 -t size=590,vm=var1

  For more details refer to: https://trex-tgn.cisco.com/trex/doc/trex_stateless_bench.html
DPPD-PROX - How to run
----------------------

This is PROX, the Packet pROcessing eXecution engine, part of Intel(R)
Data Plane Performance Demonstrators, formerly known as DPPD-BNG.
PROX is a DPDK-based application implementing Telco use-cases such as
a simplified BRAS/BNG and lightweight AFTR. It also allows configuring
finer grained network functions like QoS, routing and load balancing.

Compiling and running this application
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

This application supports DPDK 16.04, 16.11, 17.02 and 17.05.
The following commands assume that the following variables have been set::

  export RTE_SDK=/path/to/dpdk
  export RTE_TARGET=x86_64-native-linuxapp-gcc
Example: DPDK 17.05 installation
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

* git clone http://dpdk.org/git/dpdk
* git checkout v17.05
* make install T=$RTE_TARGET
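The above steps consolidated into a runnable sequence (the cd into the cloned
directory is implied above; adjust the tag to the DPDK version you need)::

  git clone http://dpdk.org/git/dpdk
  cd dpdk
  git checkout v17.05
  make install T=$RTE_TARGET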
The Makefile of this application expects RTE_SDK to point to the
root directory of DPDK (e.g. export RTE_SDK=/root/dpdk). If RTE_TARGET
has not been set, x86_64-native-linuxapp-gcc will be assumed.

After DPDK has been set up, run make from the directory where you have
extracted this application. A build directory will be created
containing the PROX executable. The usage of the application is shown
below. Note that this application assumes that all required ports have
been bound to the DPDK provided igb_uio driver. Refer to the "Getting
Started Guide - DPDK" document for more details.
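A minimal sketch of the build step, assuming DPDK was installed as shown above
and the PROX sources are in the current directory::

  export RTE_SDK=/root/dpdk
  export RTE_TARGET=x86_64-native-linuxapp-gcc
  make
  # the executable is created as ./build/prox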
::

  Usage: ./build/prox [-f CONFIG_FILE] [-l LOG_FILE] [-p] [-o DISPLAY] [-v] [-a|-e] [-m|-s|-i] [-n] [-w DEF] [-q] [-k] [-d] [-z] [-r VAL] [-u] [-t]
        -f CONFIG_FILE : configuration file to load, ./prox.cfg by default
        -l LOG_FILE : log file name, ./prox.log by default
        -p : include PID in log file name if default log file is used
        -o DISPLAY : set display to use, can be 'curses' (default), 'cli' or 'none'
        -v verbosity : initial logging verbosity
        -a : autostart all cores (by default)
        -n : create NULL devices instead of using PCI devices, useful together with -i
        -m : list supported task modes and exit
        -s : check configuration file syntax and exit
        -i : check initialization sequence and exit
        -u : listen on UDS /tmp/prox.sock
        -t : listen on TCP port 8474
        -q : pass argument to Lua interpreter, useful to define variables
        -w : define variable using syntax varname=value
             takes precedence over variables defined in CONFIG_FILE
        -k : log statistics to file "stats_dump" in current directory
        -d : run as daemon, the parent process will block until PROX is not initialized
        -z : ignore CPU topology, implies -i
        -r : change initial screen refresh rate; if set to a value lower than 0.001 seconds,
             screen refreshing will be disabled
While applications using DPDK typically rely on the core mask and the
number of channels being specified on the command line, this
application is configured using a .cfg file. The core mask and number
of channels are derived from this config. For example, to run the
application from the source directory execute::

  user@target:~$ ./build/prox -f ./config/nop.cfg
Provided example configurations
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
PROX can be configured either as the SUT (System Under Test) or as the
Traffic Generator. Some example configuration files are provided, both
in the config directory to run PROX as a SUT, and in the gen directory
to run it as a Traffic Generator.
A quick description of these example configurations is provided below.
Additional details are provided in the example configuration files.

Basic configurations, mostly used as sanity checks:

- config/nop-rings.cfg

Simplified BNG (Border Network Gateway) configurations, using a different
number of ports, with and without QoS, running on the host or in a VM:

- config/bng-4ports.cfg
- config/bng-8ports.cfg
- config/bng-qos-4ports.cfg
- config/bng-qos-8ports.cfg
- config/bng-1q-4ports.cfg
- config/bng-ovs-usv-4ports.cfg
- config/bng-no-cpu-topology-4ports.cfg
- gen/bng-4ports-gen.cfg
- gen/bng-8ports-gen.cfg
- gen/bng-ovs-usv-4ports-gen.cfg

Light-weight AFTR configurations:

- gen/lw_aftr-gen.cfg