.. This work is licensed under a Creative Commons Attribution 4.0 International
.. License.
.. http://creativecommons.org/licenses/by/4.0
.. (c) OPNFV, 2016-2018 Intel Corporation.

Yardstick - NSB Testing - Operation
===================================

Abstract
--------

NSB test configuration and OpenStack setup requirements

OpenStack Network Configuration
-------------------------------

NSB requires certain OpenStack deployment configurations.
For optimal VNF characterization using external traffic generators, NSB
requires provider/external networks.

The VNFs require a clear L2 connection to the external network in order to
generate realistic traffic from multiple address ranges and ports.

In order to prevent Neutron from filtering traffic we have to disable Neutron
Port Security. We also disable DHCP on the data ports because we are binding
the ports to DPDK and do not need DHCP addresses. We also disable gateways
because multiple default gateways can prevent SSH access to the VNF from the
floating IP. We only want a gateway on the mgmt network.

.. code-block:: yaml

   uplink_0:
     cidr: '10.1.0.0/24'
     gateway_ip: 'null'
     port_security_enabled: False
     enable_dhcp: 'false'

By default Heat will attach every node to every Neutron network that is
created. For scale-out tests we do not want to attach every node to every
network.

For each node you can specify which ports are on which network using the
``network_ports`` dictionary.

In this example we have ``TRex xe0 <-> xe0 VNF xe1 <-> xe0 UDP_Replay``

.. code-block:: yaml

   vnf_0:
     floating_ip: true
     network_ports:
       mgmt:
         - mgmt
       uplink_0:
         - xe0
       downlink_0:
         - xe1
   tg_0:
     floating_ip: true
     network_ports:
       mgmt:
         - mgmt
       uplink_0:
         - xe0
       # Trex always needs two ports
       uplink_1:
         - xe1
   tg_1:
     floating_ip: true
     network_ports:
       mgmt:
         - mgmt
       downlink_0:
         - xe0

Availability zone
^^^^^^^^^^^^^^^^^

The configuration of the availability zone is required in cases where the
exact compute host or group of compute hosts needs to be specified for the
SampleVNF or traffic generator in the Heat test case. If this is the case,
please follow the instructions below.

.. _`Create a host aggregate`:

1. Create a host aggregate in OpenStack and add the available compute hosts
   into the aggregate group.

.. note:: Change the ``<AZ_NAME>`` (availability zone name), ``<AGG_NAME>``
   (host aggregate name) and ``<HOST>`` (host name of one of the compute
   nodes) in the commands below.

.. code-block:: console

   # create host aggregate
   openstack aggregate create --zone <AZ_NAME> --property availability_zone=<AZ_NAME> <AGG_NAME>
   # show available hosts
   openstack compute service list --service nova-compute
   # add selected host into the host aggregate
   openstack aggregate add host <AGG_NAME> <HOST>

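To confirm the aggregate was created and the host added, the aggregate can be
inspected (an illustrative check; the output depends on the deployment):

.. code-block:: console

   # verify the aggregate, its availability zone and member hosts
   openstack aggregate show <AGG_NAME>
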
2. To specify the OpenStack location (the exact compute host or group of
   hosts) of the SampleVNF or traffic generator in the Heat test case, the
   ``availability_zone`` server configuration option should be used. For
   example:

.. note:: The ``<AZ_NAME>`` (availability zone name) should be changed according
   to the name used during the host aggregate creation steps above.

.. code-block:: yaml

   context:
     name: yardstick
     image: yardstick-samplevnfs
     ...
     servers:
       vnf_0:
         ...
         availability_zone: <AZ_NAME>
       tg_0:
         ...
         availability_zone: <AZ_NAME>

There are two examples of SampleVNF scale-out test cases which use the
availability zone feature to specify the exact location of scaled VNFs and
traffic generators.

.. code-block:: console

   <repo>/samples/vnf_samples/nsut/prox/tc_prox_heat_context_l2fwd_multiflow-2-scale-out.yaml
   <repo>/samples/vnf_samples/nsut/vfw/tc_heat_rfc2544_ipv4_1rule_1flow_64B_trex_scale_out.yaml

.. note:: This section describes the PROX scale-out test case, but the same
   procedure is used for the vFW test case.

1. Before running the scale-out test case, make sure the host aggregates are
   configured in the OpenStack environment. To check this, run the following
   command:

.. code-block:: console

   # show configured host aggregates (example)
   openstack aggregate list
   +----+------+-------------------+
   | ID | Name | Availability Zone |
   +----+------+-------------------+
   | 4  | agg0 | AZ_NAME_0         |
   | 5  | agg1 | AZ_NAME_1         |
   +----+------+-------------------+

2. If no host aggregates are configured, please use `steps above`__ to
   configure them.

__ `Create a host aggregate`_

3. Run the SampleVNF PROX scale-out test case, specifying the availability
   zone of each VNF and traffic generator as task arguments.

.. note:: The ``az_0`` and ``az_1`` should be changed according to the host
   aggregates created in OpenStack.

.. code-block:: console

   yardstick -d task start \
   <repo>/samples/vnf_samples/nsut/prox/tc_prox_heat_context_l2fwd_multiflow-2-scale-out.yaml \
     --task-args='{
       "num_vnfs": 4, "availability_zone": {
         "vnf_0": "az_0", "tg_0": "az_1",
         "vnf_1": "az_0", "tg_1": "az_1",
         "vnf_2": "az_0", "tg_2": "az_1",
         "vnf_3": "az_0", "tg_3": "az_1"
       }
     }'

``num_vnfs`` specifies how many VNFs are going to be deployed in the
``heat`` contexts. ``vnf_X`` and ``tg_X`` arguments configure the
availability zone where the VNF and traffic generator are going to be
deployed.

Collecting NFVi KPIs
--------------------

NSB can collect KPIs from collectd. We have support for various plugins
enabled by the Barometer project.

The default yardstick-samplevnf image has collectd installed. This allows KPIs
to be collected from the VNF.

Collecting KPIs from the NFVi is more complicated and requires manual setup.
We assume that collectd is not installed on the compute nodes.

To collect KPIs from the NFVi compute nodes:

* install collectd on the compute nodes
* create a ``pod.yaml`` for the compute nodes
* enable specific plugins depending on the vswitch and DPDK

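As a minimal illustration of the first step (the package name and service
commands are assumptions for an Ubuntu compute node; the Barometer project
also provides richer collectd builds):

.. code-block:: console

   # install and start collectd on a compute node (Ubuntu packages assumed)
   sudo apt-get update
   sudo apt-get install -y collectd
   sudo systemctl enable --now collectd
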
Example ``pod.yaml`` section for a Compute node running collectd (addresses
and credentials are placeholders):

.. code-block:: yaml

   nodes:
     -
       name: "compute-1"
       role: Compute
       ip: "10.1.2.3"
       user: "root"
       password: ""
       collectd:
         interval: 5
         plugins:
           # for OVS KPIs enable the ovs_stats plugin and point it
           # at the OVS DB socket
           ovs_stats:
             ovs_socket_path: /var/run/openvswitch/db.sock

Scale-Up
--------

VNF performance data with scale-up

* Helps to determine the optimal number of cores to specify in the Virtual
  Machine template creation or VNF configuration
* Helps in comparison between different VNF vendor offerings
* A better scale-up index indicates better performance scalability of a
  given VNF

For VNF scale-up tests we increase the number of VNF worker threads. In the
case of VNFs we also need to increase the number of VCPUs and memory allocated
to the VNF.

An example scale-up Heat testcase is:

.. literalinclude:: /../samples/vnf_samples/nsut/vfw/tc_heat_rfc2544_ipv4_1rule_1flow_64B_trex_scale-up.yaml

This testcase template requires specifying the number of VCPUs, Memory and
Ports. We set the VCPUs and memory using the ``--task-args`` option:

.. code-block:: console

   yardstick task start --task-args='{"mem": 10480, "vcpus": 4, "vports": 2}' \
   samples/vnf_samples/nsut/vfw/tc_heat_rfc2544_ipv4_1rule_1flow_64B_trex_scale-up.yaml

In order to support ports scale-up, traffic and topology templates need to be
used in the test case.

An example topology template is:

.. literalinclude:: /../samples/vnf_samples/nsut/vfw/vfw-tg-topology-scale-up.yaml

This template has ``vports`` as an argument. To pass this argument it needs to
be configured in the ``extra_args`` scenario definition. Please note that more
arguments can be defined in that section. All of them will be passed to the
topology and traffic profile templates.

.. code-block:: yaml

   schema: yardstick:task:0.1
   scenarios:
   - type: NSPerf
     traffic_profile: ../../traffic_profiles/ipv4_throughput-scale-up.yaml
     extra_args:
       vports: {{ vports }}
     topology: vfw-tg-topology-scale-up.yaml

An example traffic profile template is:

.. literalinclude:: /../samples/vnf_samples/traffic_profiles/ipv4_throughput-scale-up.yaml

There is an option to provide a predefined config for SampleVNFs. The path to
the config file may be specified in the ``vnf_config`` scenario section.

.. code-block:: yaml

   rules: acl_1rule.yaml
   vnf_config: {lb_config: 'SW', file: vfw_vnf_pipeline_cores_4_ports_2_lb_1_sw.conf}

Baremetal
^^^^^^^^^

1. Follow the above traffic generator section to set it up.
2. Edit the number of threads in
   ``<repo>/samples/vnf_samples/nsut/vfw/tc_baremetal_rfc2544_ipv4_1rule_1flow_64B_trex_scale_up.yaml``
   e.g. 6 threads for the given VNF:

.. code-block:: yaml

   schema: yardstick:task:0.1
   scenarios:
   {% for worker_thread in [1, 2, 3, 4, 5, 6] %}
   - type: NSPerf
     traffic_profile: ../../traffic_profiles/ipv4_throughput.yaml
     topology: vfw-tg-topology.yaml
     nodes:
       tg__0: trafficgen_1.yardstick
       vnf__0: vnf.yardstick
     options:
       flow:
         src_ip: [{'tg__0': 'xe0'}]
         dst_ip: [{'tg__0': 'xe1'}]
       rfc2544:
         allowed_drop_rate: 0.0001 - 0.0001
       vnf__0:
         rules: acl_1rule.yaml
         vnf_config: {lb_config: 'HW', lb_count: 1, worker_config: '1C/1T', worker_threads: {{worker_thread}}}
   {% endfor %}
   context:
     type: Node
     name: yardstick
     file: /etc/yardstick/nodes/pod.yaml

Scale-Out
---------

VNF performance data with scale-out helps

* in capacity planning to meet the given network node requirements
* in comparison between different VNF vendor offerings
* a better scale-out index provides flexibility in meeting future capacity
  requirements

Standalone
^^^^^^^^^^

Scale-out is not supported on Baremetal.

1. Follow the above traffic generator section to set it up.
2. Generate the test case for standalone virtualization using Ansible scripts:

   .. code-block:: console

      cd <repo>/ansible
      trex: standalone_ovs_scale_out_trex_test.yaml or standalone_sriov_scale_out_trex_test.yaml
      ixia: standalone_ovs_scale_out_ixia_test.yaml or standalone_sriov_scale_out_ixia_test.yaml
      ixia_correlated: standalone_ovs_scale_out_ixia_correlated_test.yaml or standalone_sriov_scale_out_ixia_correlated_test.yaml

   Update the ``ovs_dpdk`` or ``sriov`` Ansible scripts above to reflect the
   setup.

.. code-block:: console

   <repo>/samples/vnf_samples/nsut/tc_sriov_vfw_udp_ixia_correlated_scale_out-1.yaml
   <repo>/samples/vnf_samples/nsut/tc_sriov_vfw_udp_ixia_correlated_scale_out-2.yaml

Heat
^^^^

There are sample scale-out all-VM Heat tests. These tests only use VMs and
don't use external traffic.

The tests use UDP_Replay and correlated traffic.

.. code-block:: console

   <repo>/samples/vnf_samples/nsut/cgnapt/tc_heat_rfc2544_ipv4_1flow_64B_trex_correlated_scale_4.yaml

To run the test you need to increase the OpenStack CPU, Memory and Port
quotas.

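As an illustration, the quotas could be raised with the OpenStack CLI (the
project name and limit values are placeholders to be adapted to the scale of
the test):

.. code-block:: console

   # raise CPU, memory and port quotas for the project used by Yardstick
   openstack quota set --cores 64 --ram 204800 --ports 64 <PROJECT>
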
Traffic Generator tuning
------------------------

The TRex traffic generator can be set up to use multiple threads per core;
this is useful for multiqueue testing.

TRex does not automatically enable multiple threads because we currently
cannot detect the number of queues on a device.

To enable multiple queues, set the ``queues_per_port`` value in the traffic
generator VNF options section:

.. code-block:: yaml

   scenarios:
   - type: NSPerf
     nodes:
       tg__0: tg_0.yardstick
     options:
       tg_0:
         queues_per_port: 2

Standalone configuration
------------------------

NSB supports certain Standalone deployment configurations.
Standalone supports provisioning a VM in a standalone virtualised environment
using kvm/qemu. There are two types of Standalone contexts available: OVS-DPDK
and SRIOV. OVS-DPDK uses an OVS network with DPDK drivers. SRIOV enables
network traffic to bypass the software switch layer of the Hyper-V stack.

Emulated machine type
^^^^^^^^^^^^^^^^^^^^^

For better performance test results of an emulated VM spawned by the Yardstick
SA context (OvS-DPDK/SRIOV), it may be important to control the emulated
machine type used by the QEMU emulator. This attribute can be configured via
the TC definition in the ``contexts`` section under the ``extra_specs``
configuration.

.. code-block:: yaml

   contexts:
     - name: yardstick
       type: StandaloneSriov
       ...
       extra_specs:
         machine_type: pc-i440fx-bionic

Here ``machine_type`` can be set to one of the emulated machine types
supported by QEMU running on the SUT platform. To get the full list of
supported emulated machine types, the following command can be used on the
target SUT host:

.. code-block:: console

   # qemu-system-x86_64 -machine ?

By default, the ``machine_type`` option is set to ``pc-i440fx-xenial``, which
is suitable for running the Ubuntu 16.04 VM image. So, if this type is not
supported by the target platform, or another VM image is used for the
stand alone (SA) context VM (e.g. a ``bionic`` image for Ubuntu 18.04), this
configuration should be changed accordingly.

Standalone with OVS-DPDK
^^^^^^^^^^^^^^^^^^^^^^^^

The SampleVNF image is spawned in a VM on a baremetal server.
OVS with DPDK is installed on the baremetal server.

.. note:: Ubuntu 17.10 requires DPDK v.17.05 or higher; DPDK v.17.05 requires
   OVS v.2.8.0.

Default values for OVS-DPDK:

* pmd_cpu_mask: "0x6"

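These defaults can be overridden per test case in the OVS-DPDK context
definition; a sketch assuming the ``StandaloneOvsDpdk`` context schema
(version numbers and mask values are illustrative):

.. code-block:: yaml

   contexts:
     - type: StandaloneOvsDpdk
       name: yardstick
       ovs_properties:
         version:
           ovs: 2.8.0
           dpdk: 17.05.2
         pmd_threads: 2
         queues: 4
         # cores used by the PMD threads, as a hex CPU mask
         pmd_cpu_mask: "0x6"
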
Sample test case file
^^^^^^^^^^^^^^^^^^^^^

1. Prepare the SampleVNF image and copy it to ``flavor/images``.
2. Prepare context files for TREX and SampleVNF under ``contexts/file``.
3. Add a bridge named ``br-int`` to the baremetal where the SampleVNF image is
   deployed.
4. Modify ``networks/phy_port`` according to the baremetal setup.

.. literalinclude:: /../samples/vnf_samples/nsut/acl/tc_ovs_rfc2544_ipv4_1rule_1flow_64B_trex.yaml

Preparing test run of vEPC test case
------------------------------------

The provided vEPC test cases are examples of emulation of vEPC infrastructure
components, such as UE, eNodeB, MME, SGW, PGW.

Location of vEPC test cases: ``samples/vnf_samples/nsut/vepc/``.

Before running a specific vEPC test case using NSB, some preconfiguration
needs to be done.

Update Spirent Landslide TG configuration in pod file
=====================================================

Examples of ``pod.yaml`` files could be found in
:file:`etc/yardstick/nodes/standalone`.
The name of the related pod file could be checked in the context section of
the NSB test case file.

The ``pod.yaml`` related to the vEPC test case uses some sub-structures that
hold the details of accessing the Spirent Landslide traffic generator.
These subsections and the changes to be done in the provided example pod file
are described below.

1. ``tas_manager``: data under this key holds the information required to
   access the Landslide TAS (Test Administration Server) and perform needed
   configurations on it.

   * ``ip``: IP address of the TAS Manager node; should be updated according
     to the test environment
   * ``super_user``: superuser name; could be retrieved from Landslide
     documentation
   * ``super_user_password``: superuser password; could be retrieved from
     Landslide documentation
   * ``cfguser_password``: password of the predefined user named 'cfguser';
     the default password could be retrieved from Landslide documentation
   * ``test_user``: username to be used during the test run as a Landslide
     library name; to be defined by the test run operator
   * ``test_user_password``: password of the test user; to be defined by the
     test run operator
   * ``proto``: *http* or *https*; to be defined by the test run operator
   * ``license``: Landslide license number installed on the TAS

2. The ``config`` section holds information about test servers (TSs) and
   systems under test (SUTs). Data is represented as a list of entries.
   Each such entry contains:

   * ``test_server``: this subsection represents data related to test server
     configuration, such as:

     * ``name``: test server name; a unique custom name to be defined by the
       test operator
     * ``role``: this value is used as a key to bind a specific Test Server
       and TestCase; should be set to one of the test types supported by the
       TAS license
     * ``ip``: Test Server IP address
     * ``thread_model``: parameter related to Test Server performance mode.
       The value should be one of the following: "Legacy" | "Max" | "Fireball".
       Refer to Landslide documentation for details.
     * ``phySubnets``: a structure used to specify IP range reservations on
       specific network interfaces of the related Test Server. Structure
       fields are:

       * ``base``: start of IP address range
       * ``mask``: IP range mask in CIDR format
       * ``name``: network interface name, e.g. *eth1*
       * ``numIps``: size of IP address range

   * ``preResolvedArpAddress``: a structure used to specify the range of IP
     addresses for which the ARP responses will be emulated

     * ``StartingAddress``: IP address specifying the start of the IP address
       range
     * ``NumNodes``: size of the IP address range

   * ``suts``: a structure that contains definitions of each specific SUT
     (represents a vEPC component). The SUT structure contains the following
     key/value pairs:

     * ``name``: unique custom string specifying the SUT name
     * ``role``: string value corresponding with an SUT role specified in the
       session profile (test session template) file
     * ``managementIp``: SUT management IP address
     * ``phy``: network interface name, e.g. *eth1*
     * ``ip``: vEPC component IP address used in the test case topology
     * ``nextHop``: next hop IP address, to allow for vEPC inter-node
       communication

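As a schematic summary, a ``pod.yaml`` fragment assembled only from the fields
described above might look as follows (all names, addresses and the exact
nesting are illustrative placeholders, not a verbatim template):

.. code-block:: yaml

   tas_manager:
     ip: "192.168.10.10"
     super_user: "sl_admin"          # see Landslide documentation
     super_user_password: "****"
     cfguser_password: "****"
     test_user: "yardstick"
     test_user_password: "****"
     proto: "https"
     license: "12345"
   config:
     - test_server:
         name: "ts-1"
         role: "SGW_Node"            # must match a TAS-licensed test type
         ip: "192.168.10.20"
         thread_model: "Legacy"
         phySubnets:
           - base: "10.10.1.1"
             mask: "/24"
             name: "eth1"
             numIps: 20
       suts:
         - name: "mme-1"
           role: "MME"
           managementIp: "192.168.10.30"
           phy: "eth1"
           ip: "10.10.1.30"
           nextHop: "10.10.1.1"
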
Update NSB test case definitions
================================

The NSB test case file designated for vEPC testing contains an example of a
specific test scenario configuration.
The test operator may change these definitions as required for the use case
that needs to be tested.
Specifically, the following subsections of the vEPC test case (section
**scenarios**) could be changed:

1. Subsection ``options``: contains custom parameters used for vEPC testing

   * subsection ``dmf``: may contain one or more parameters specified in the
     ``traffic_profile`` template file
   * subsection ``test_cases``: contains re-definitions of parameters
     specified in the ``session_profile`` template file

.. note:: All parameters in ``session_profile`` whose value is a placeholder
   need to be re-defined to construct a valid test session.

2. Subsection ``runner``: specifies the test duration and the interval of
   TG and VNF side KPI polling. For more details, refer to
   :doc:`03-architecture`.

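For instance, a ``Duration`` runner polling KPIs every 5 seconds over a 60
second run could be sketched as follows (the runner type and values are
illustrative of the standard Yardstick runner schema):

.. code-block:: yaml

   runner:
     type: Duration
     duration: 60   # total test duration, seconds
     interval: 5    # KPI polling interval, seconds
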