.. This work is licensed under a Creative Commons Attribution 4.0 International
.. License.
.. http://creativecommons.org/licenses/by/4.0
.. (c) OPNFV, 2016-2018 Intel Corporation.

.. Convention for heading levels in Yardstick documentation:

   =======  Heading 0 (reserved for the title in a document)
   -------  Heading 1
   ^^^^^^^  Heading 2

   Avoid deeper levels because they do not render well.

Yardstick - NSB Testing - Operation
===================================
NSB test configuration and OpenStack setup requirements

OpenStack Network Configuration
-------------------------------

NSB requires certain OpenStack deployment configurations.
For optimal VNF characterization using external traffic generators, NSB
requires provider/external networks.

The VNFs require a direct L2 connection to the external network in order to
generate realistic traffic from multiple address ranges and ports.
In order to prevent Neutron from filtering traffic, we have to disable Neutron
Port Security. We also disable DHCP on the data ports because we bind these
ports to DPDK and do not need DHCP addresses. We also disable gateways because
multiple default gateways can prevent SSH access to the VNF from the floating
IP. We only want a gateway on the management (mgmt) network:

.. code-block:: yaml

   port_security_enabled: False
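A network definition in the Heat context that applies all three settings could
be sketched as follows; the network name and CIDR are illustrative placeholders
and must be adapted to your deployment:

.. code-block:: yaml

   uplink_0:
     cidr: '10.1.0.0/24'           # example CIDR, adjust to your deployment
     gateway_ip: 'null'            # no default gateway on the data network
     port_security_enabled: False  # let generated traffic through unfiltered
     enable_dhcp: 'false'          # ports are bound to DPDK, no DHCP needed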
By default Heat will attach every node to every Neutron network that is
created. For scale-out tests we do not want to attach every node to every
network.

For each node you can specify which ports are on which network using the
``network_ports`` dictionary.

In this example we have ``TRex xe0 <-> xe0 VNF xe1 <-> xe0 UDP_Replay``:

.. code-block:: yaml

   # Trex always needs two ports
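A ``network_ports`` section for this topology could be sketched as below; the
node and network names are illustrative and must match those used elsewhere in
your test case:

.. code-block:: yaml

   vnf_0:
     network_ports:
       mgmt:
         - mgmt
       uplink_0:
         - xe0
       downlink_0:
         - xe1
   tg_0:
     network_ports:
       mgmt:
         - mgmt
       uplink_0:
         - xe0
       # Trex always needs two ports
       uplink_1:
         - xe1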
The configuration of the availability zone is required in cases where the
location of an exact compute host or group of compute hosts needs to be
specified for a :term:`SampleVNF` or traffic generator in the Heat test case.
If this is the case, please follow the instructions below.
.. _`Create a host aggregate`:

1. Create a host aggregate in OpenStack and add the available compute hosts
   to the aggregate group.

   .. note:: Change the ``<AZ_NAME>`` (availability zone name), ``<AGG_NAME>``
      (host aggregate name) and ``<HOST>`` (host name of one of the compute
      nodes) in the commands below.

   .. code-block:: console

      # create host aggregate
      openstack aggregate create --zone <AZ_NAME> \
        --property availability_zone=<AZ_NAME> <AGG_NAME>
      # show available hosts
      openstack compute service list --service nova-compute
      # add selected host into the host aggregate
      openstack aggregate add host <AGG_NAME> <HOST>
2. To specify the OpenStack location (the exact compute host or group of
   hosts) of the SampleVNF or traffic generator in the Heat test case, the
   ``availability_zone`` server configuration option should be used. For
   example:

   .. note:: The ``<AZ_NAME>`` (availability zone name) should be changed
      according to the name used during the host aggregate creation steps
      above.

   .. code-block:: yaml

      context:
        name: yardstick
        image: yardstick-samplevnfs
        ...
        servers:
          vnf_0:
            ...
            availability_zone: <AZ_NAME>
            ...
          tg_0:
            ...
            availability_zone: <AZ_NAME>
            ...
There are two examples of SampleVNF scale-out test cases which use the
``availability zone`` feature to specify the exact location of scaled VNFs and
traffic generators:

.. code-block:: console

   <repo>/samples/vnf_samples/nsut/prox/tc_prox_heat_context_l2fwd_multiflow-2-scale-out.yaml
   <repo>/samples/vnf_samples/nsut/vfw/tc_heat_rfc2544_ipv4_1rule_1flow_64B_trex_scale_out.yaml

.. note:: This section describes the PROX scale-out test case, but the same
   procedure is used for the vFW test case.
1. Before running the scale-out test case, make sure the host aggregates are
   configured in the OpenStack environment. To check this, run the following
   command:

   .. code-block:: console

      # show configured host aggregates (example)
      openstack aggregate list
      +----+------+-------------------+
      | ID | Name | Availability Zone |
      +----+------+-------------------+
      |  4 | agg0 | AZ_NAME_0         |
      |  5 | agg1 | AZ_NAME_1         |
      +----+------+-------------------+
2. If no host aggregates are configured, please follow the instructions to
   `Create a host aggregate`_.

3. Run the SampleVNF PROX scale-out test case, specifying the
   ``availability zone`` of each VNF and traffic generator as task arguments.

   .. note:: ``az_0`` and ``az_1`` should be changed according to the host
      aggregates created in OpenStack.
   .. code-block:: console

      yardstick -d task start \
      <repo>/samples/vnf_samples/nsut/prox/tc_prox_heat_context_l2fwd_multiflow-2-scale-out.yaml \
      --task-args='{
        "num_vnfs": 4, "availability_zone": {
          "vnf_0": "az_0", "tg_0": "az_1",
          "vnf_1": "az_0", "tg_1": "az_1",
          "vnf_2": "az_0", "tg_2": "az_1",
          "vnf_3": "az_0", "tg_3": "az_1"
        }
      }'

``num_vnfs`` specifies how many VNFs are going to be deployed in the ``heat``
contexts. The ``vnf_X`` and ``tg_X`` arguments configure the availability zone
where each VNF and traffic generator is going to be deployed.
Collectd KPIs
-------------

NSB can collect KPIs from collectd. We have support for various plugins
enabled by the :term:`Barometer` project.

The default yardstick-samplevnf image has collectd installed. This allows
KPIs to be collected from the VNF.

Collecting KPIs from the NFVi is more complicated and requires manual setup.
We assume that collectd is not installed on the compute nodes.

To collect KPIs from the NFVi compute nodes:

* install collectd on the compute nodes
* create a ``pod.yaml`` for the compute nodes
* enable specific plugins depending on the vswitch and DPDK

Example ``pod.yaml`` section for a compute node running collectd:
.. code-block:: yaml

   ovs_socket_path: /var/run/openvswitch/db.sock
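A fuller ``pod.yaml`` entry might look like the sketch below; the node name,
IP address, credentials and plugin selection are illustrative and must be
adapted to your environment:

.. code-block:: yaml

   nodes:
     -
       name: compute-0          # illustrative node name
       role: Compute
       ip: 10.1.2.3             # management IP of the compute node
       user: root
       password: ""
       collectd:
         interval: 5            # polling interval in seconds
         plugins:
           virt: {}             # libvirt domain statistics
           intel_pmu: {}
           ovs_stats:
             # path to the OVS database socket
             ovs_socket_path: /var/run/openvswitch/db.sock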
Scale-Up
--------

VNF performance data with scale-up:

* Helps to figure out the optimal number of cores to specify in the Virtual
  Machine template creation or VNF
* Helps in comparison between different VNF vendor offerings
* A better scale-up index indicates better performance scalability of a
  particular VNF

Heat
^^^^

For VNF scale-up tests we increase the number of VNF worker threads. In the
case of VNFs we also need to increase the number of VCPUs and memory allocated
to the VNF.
An example scale-up Heat test case is:

.. literalinclude:: /../samples/vnf_samples/nsut/vfw/tc_heat_rfc2544_ipv4_1rule_1flow_64B_trex_scale-up.yaml

This test case template requires specifying the number of VCPUs, memory and
ports. We set the VCPUs and memory using the ``--task-args`` option:

.. code-block:: console

   yardstick task start --task-args='{"mem": 10480, "vcpus": 4, "vports": 2}' \
     samples/vnf_samples/nsut/vfw/tc_heat_rfc2544_ipv4_1rule_1flow_64B_trex_scale-up.yaml
In order to support ports scale-up, traffic and topology templates need to be
used in the test case.

An example topology template is:

.. literalinclude:: /../samples/vnf_samples/nsut/vfw/vfw-tg-topology-scale-up.yaml

This template has ``vports`` as an argument. To pass this argument it needs to
be configured in the ``extra_args`` scenario definition. Please note that more
arguments can be defined in that section. All of them will be passed to the
topology and traffic profile templates.

For example:

.. code-block:: yaml

   schema: yardstick:task:0.1
   scenarios:
   - type: NSPerf
     traffic_profile: ../../traffic_profiles/ipv4_throughput-scale-up.yaml
     extra_args:
       vports: {{ vports }}
     topology: vfw-tg-topology-scale-up.yaml
An example traffic profile template is:

.. literalinclude:: /../samples/vnf_samples/traffic_profiles/ipv4_throughput-scale-up.yaml

There is an option to provide a predefined config for SampleVNFs. The path to
the config file may be specified in the ``vnf_config`` scenario section:

.. code-block:: yaml

   vnf__0:
     rules: acl_1rule.yaml
     vnf_config: {lb_config: 'SW', file: vfw_vnf_pipeline_cores_4_ports_2_lb_1_sw.conf}
Baremetal
^^^^^^^^^

1. Follow the traffic generator section above for setup.
2. Edit the number of worker threads in
   ``<repo>/samples/vnf_samples/nsut/vfw/tc_baremetal_rfc2544_ipv4_1rule_1flow_64B_trex_scale_up.yaml``,
   e.g. 6 threads for the given VNF:
.. code-block:: yaml

   schema: yardstick:task:0.1
   scenarios:
   {% for worker_thread in [1, 2, 3, 4, 5, 6] %}
   - type: NSPerf
     traffic_profile: ../../traffic_profiles/ipv4_throughput.yaml
     topology: vfw-tg-topology.yaml
     nodes:
       tg__0: trafficgen_1.yardstick
       vnf__0: vnf.yardstick
     options:
       ...
       flow:
         src_ip: [{'tg__0': 'xe0'}]
         dst_ip: [{'tg__0': 'xe1'}]
       ...
       rfc2544:
         allowed_drop_rate: 0.0001 - 0.0001
       vnf__0:
         rules: acl_1rule.yaml
         vnf_config: {lb_config: 'HW', lb_count: 1, worker_config: '1C/1T', worker_threads: {{worker_thread}}}
   {% endfor %}
   context:
     ...
     file: /etc/yardstick/nodes/pod.yaml
Scale-Out
---------

VNF performance data with scale-out helps in:

* capacity planning to meet the given network node requirements
* comparison between different VNF vendor offerings
* meeting future capacity requirements: a better scale-out index provides
  more flexibility

Standalone
^^^^^^^^^^

Scale-out is not supported on baremetal.
1. Follow the traffic generator section above for setup.
2. Generate the test case for standalone virtualization using the Ansible
   scripts:

   .. code-block:: console

      trex: standalone_ovs_scale_out_trex_test.yaml or standalone_sriov_scale_out_trex_test.yaml
      ixia: standalone_ovs_scale_out_ixia_test.yaml or standalone_sriov_scale_out_ixia_test.yaml
      ixia_correlated: standalone_ovs_scale_out_ixia_correlated_test.yaml or standalone_sriov_scale_out_ixia_correlated_test.yaml

   Update the ovs_dpdk or sriov details in the Ansible scripts above to
   reflect the setup.

3. Run the test:

   .. code-block:: console

      <repo>/samples/vnf_samples/nsut/tc_sriov_vfw_udp_ixia_correlated_scale_out-1.yaml
      <repo>/samples/vnf_samples/nsut/tc_sriov_vfw_udp_ixia_correlated_scale_out-2.yaml
Heat
^^^^

There are sample scale-out all-VM Heat tests. These tests only use VMs and
do not use external traffic.

The tests use UDP_Replay and correlated traffic:

.. code-block:: console

   <repo>/samples/vnf_samples/nsut/cgnapt/tc_heat_rfc2544_ipv4_1flow_64B_trex_correlated_scale_4.yaml

To run the test you need to increase the OpenStack CPU, memory and port
quotas.
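Assuming the ``openstack`` CLI and admin credentials are available, the quotas
could be raised along these lines; the numbers and project name are
illustrative placeholders, size them for your scale-out factor:

.. code-block:: console

   # illustrative values: size cores/RAM/ports for the number of VMs deployed
   openstack quota set --cores 64 --ram 262144 --ports 180 <PROJECT_NAME>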
Traffic Generator tuning
------------------------

The TRex traffic generator can be set up to use multiple threads per core;
this is useful for multiqueue testing.

TRex does not automatically enable multiple threads because we currently
cannot detect the number of queues on a device.

To enable multiple queues, set the ``queues_per_port`` value in the TG VNF
options section:

.. code-block:: yaml

   scenarios:
     - type: NSPerf
       nodes:
         tg__0: tg_0.yardstick

       options:
         tg_0:
           queues_per_port: 2
Standalone configuration
------------------------

NSB supports certain standalone deployment configurations. Standalone supports
provisioning a VM in a standalone virtualized environment using kvm/qemu.
There are two types of standalone contexts available: OVS-DPDK and SRIOV.
OVS-DPDK uses an OVS network with DPDK drivers. SRIOV enables network traffic
to bypass the software switch layer of the virtualization stack.
Emulated machine type
^^^^^^^^^^^^^^^^^^^^^

For better performance test results of an emulated VM spawned by the Yardstick
SA context (OVS-DPDK/SRIOV), it may be important to control the emulated
machine type used by the QEMU emulator. This attribute can be configured via
the TC definition in the ``contexts`` section under the ``extra_specs``
configuration. For example:

.. code-block:: yaml

   contexts:
      - type: StandaloneSriov
        ...
        extra_specs:
          machine_type: pc-i440fx-bionic

Where ``machine_type`` can be set to one of the emulated machine types
supported by QEMU running on the SUT platform. To get the full list of
supported emulated machine types, the following command can be used on the
target SUT host:

.. code-block:: console

   # qemu-system-x86_64 -machine ?

By default, the ``machine_type`` option is set to ``pc-i440fx-xenial``, which
is suitable for running the Ubuntu 16.04 VM image. So, if this type is not
supported by the target platform, or another VM image is used for the
standalone (SA) context VM (e.g. a ``bionic`` image for Ubuntu 18.04), this
configuration should be changed accordingly.
Standalone with OVS-DPDK
^^^^^^^^^^^^^^^^^^^^^^^^

The SampleVNF image is spawned in a VM on a baremetal server.
OVS with DPDK is installed on the baremetal server.

.. note:: Ubuntu 17.10 requires DPDK v.17.05 and higher; DPDK v.17.05 requires
   OVS v.2.8.0.

Default values for OVS-DPDK:

* pmd_cpu_mask: "0x6"
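These defaults can be overridden in the ``ovs_properties`` section of the
standalone OVS-DPDK context definition; the version numbers and mask below are
illustrative assumptions, not required values:

.. code-block:: yaml

   contexts:
      - type: StandaloneOvsDpdk
        ...
        ovs_properties:
          version:
            ovs: 2.8.0          # illustrative OVS/DPDK versions
            dpdk: 17.05.2
          pmd_cpu_mask: "0x6"   # override the default PMD CPU mask here
          queues: 4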
Sample test case file
^^^^^^^^^^^^^^^^^^^^^

1. Prepare the SampleVNF image and copy it to ``flavor/images``.
2. Prepare context files for TRex and SampleVNF under ``contexts/file``.
3. Add a bridge named ``br-int`` to the baremetal host where the SampleVNF
   image is deployed.
4. Modify ``networks/phy_port`` according to the baremetal setup.
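Step 3 can be done with ``ovs-vsctl`` on the baremetal host; the
``datapath_type`` setting assumes a DPDK-enabled OVS build:

.. code-block:: console

   # create the integration bridge used by the standalone context
   ovs-vsctl --may-exist add-br br-int
   # for OVS-DPDK deployments the bridge needs the userspace datapath
   ovs-vsctl set bridge br-int datapath_type=netdev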
.. literalinclude:: /../samples/vnf_samples/nsut/acl/tc_ovs_rfc2544_ipv4_1rule_1flow_64B_trex.yaml
Preparing test run of vEPC test case
------------------------------------

The provided vEPC test cases are examples of emulation of vEPC infrastructure
components, such as UE, eNodeB, MME, SGW, PGW.

Location of vEPC test cases: ``samples/vnf_samples/nsut/vepc/``.

Before running a specific vEPC test case using NSB, some preconfiguration
needs to be done.

Update Spirent Landslide TG configuration in pod file
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Examples of ``pod.yaml`` files can be found in
:file:`etc/yardstick/nodes/standalone`.
The name of the related pod file can be checked in the context section of the
NSB test case file.

The ``pod.yaml`` related to the vEPC test case uses some sub-structures that
hold the details of accessing the Spirent Landslide traffic generator.
These subsections and the changes to be done in the provided example pod file
are described below.
1. ``tas_manager``: data under this key holds the information required to
   access the Landslide TAS (Test Administration Server) and perform needed
   configurations on it:

   * ``ip``: IP address of the TAS Manager node; should be updated according
     to the test environment
   * ``super_user``: superuser name; can be retrieved from the Landslide
     documentation
   * ``super_user_password``: superuser password; can be retrieved from the
     Landslide documentation
   * ``cfguser_password``: password of the predefined user named ``cfguser``;
     the default password can be retrieved from the Landslide documentation
   * ``test_user``: username to be used during the test run as a Landslide
     library name; to be defined by the test run operator
   * ``test_user_password``: password of the test user; to be defined by the
     test run operator
   * ``proto``: *http* or *https*; to be defined by the test run operator
   * ``license``: Landslide license number installed on the TAS
2. The ``config`` section holds information about test servers (TSs) and
   systems under test (SUTs). Data is represented as a list of entries.
   Each such entry contains:

   * ``test_server``: this subsection represents data related to test server
     configuration, such as:

     * ``name``: test server name; unique custom name to be defined by the
       test operator
     * ``role``: this value is used as a key to bind a specific test server
       and test case; should be set to one of the test types supported by the
       TAS license
     * ``ip``: test server IP address
     * ``thread_model``: parameter related to the test server performance
       mode. The value should be one of the following: "Legacy" | "Max" |
       "Fireball". Refer to the Landslide documentation for details.
     * ``phySubnets``: a structure used to specify IP range reservations on
       specific network interfaces of the related test server. Structure
       fields are:

       * ``base``: start of the IP address range
       * ``mask``: IP range mask in CIDR format
       * ``name``: network interface name, e.g. *eth1*
       * ``numIps``: size of the IP address range

   * ``preResolvedArpAddress``: a structure used to specify the range of IP
     addresses for which ARP responses will be emulated:

     * ``StartingAddress``: IP address specifying the start of the IP address
       range
     * ``NumNodes``: size of the IP address range
   * ``suts``: a structure that contains definitions of each specific SUT
     (represents a vEPC component). The SUT structure contains the following
     key/value pairs:

     * ``name``: unique custom string specifying the SUT name
     * ``role``: string value corresponding to an SUT role specified in the
       session profile (test session template) file
     * ``managementIp``: SUT management IP address
     * ``phy``: network interface name, e.g. *eth1*
     * ``ip``: vEPC component IP address used in the test case topology
     * ``nextHop``: next hop IP address, to allow for vEPC inter-node
       communication
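Put together, the relevant part of a ``pod.yaml`` might be sketched as
follows; the exact nesting should be checked against the provided example pod
files, and all names, addresses and role values are placeholders to be
replaced per the descriptions above:

.. code-block:: yaml

   tas_manager:
     ip: <TAS_IP>                  # placeholder values throughout
     super_user: <SUPER_USER>
     super_user_password: <SUPER_USER_PWD>
     cfguser_password: <CFGUSER_PWD>
     test_user: <TEST_USER>
     test_user_password: <TEST_USER_PWD>
     proto: https
     license: <LICENSE_NUMBER>
   config:
     - test_server:
         name: ts1
         role: <TEST_TYPE>
         ip: <TS_IP>
         thread_model: Legacy
         phySubnets:
           - base: 10.0.0.1
             mask: /24
             name: eth1
             numIps: 20
       preResolvedArpAddress:
         - StartingAddress: 10.0.0.100
           NumNodes: 50
       suts:
         - name: mme1
           role: <SUT_ROLE>
           managementIp: <SUT_MGMT_IP>
           phy: eth1
           ip: <SUT_IP>
           nextHop: <NEXT_HOP_IP>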
Update NSB test case definitions
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The NSB test case file designated for vEPC testing contains an example of a
specific test scenario configuration.
The test operator may change these definitions as required for the use case
being tested.
Specifically, the following subsections of the vEPC test case (section
**scenarios**) could be updated:

1. Subsection ``options``: contains custom parameters used for vEPC testing:

   * subsection ``dmf``: may contain one or more parameters specified in the
     ``traffic_profile`` template file
   * subsection ``test_cases``: contains re-definitions of parameters
     specified in the ``session_profile`` template file

   .. note:: All parameters in ``session_profile`` whose value is a
      placeholder need to be re-defined to construct a valid test session.

2. Subsection ``runner``: specifies the test duration and the interval of TG
   and VNF side KPI polling. For more details, refer to :doc:`03-architecture`.
Preparing test run of vPE test case
-----------------------------------

The vPE (Provider Edge Router) is a :term:`VNF` approximation serving as an
edge router. The vPE is approximated using the ``ip_pipeline`` DPDK
application.

.. image:: images/vPE_Diagram.png
   :alt: NSB vPE Diagram

The ``vpe_config`` file must be passed as it is not auto-generated.
The ``vpe_script`` defines the rules applied to each of the pipelines. This
can be auto-generated, or a file can be passed using the ``script_file``
option in ``vnf_config`` as shown below. The ``full_tm_profile_file`` option
must be used if a traffic manager is defined in ``vpe_config``.

.. code-block:: yaml

   vnf_config: { file: './vpe_config/vpe_config_2_ports',
                 action_bulk_file: './vpe_config/action_bulk_512.txt',
                 full_tm_profile_file: './vpe_config/full_tm_profile_10G.cfg',
                 script_file: './vpe_config/vpe_script_sample' }

Test cases for vPE can be found in the ``vnf_samples/nsut/vpe`` directory.
A test case can be started with the following command as an example:

.. code-block:: console

   yardstick task start /yardstick/samples/vnf_samples/nsut/vpe/tc_baremetal_rfc2544_ipv4_1flow_64B_ixia.yaml