.. code-block:: ini
- cat ./ansible/yardstick-install-inventory.ini
+ cat ./ansible/install-inventory.ini
[jumphost]
localhost ansible_connection=local
ansible_user=root
ansible_pass=root
+.. note::
+
+ Passwordless SSH access needs to be configured for all the nodes defined in
+ the ``install-inventory.ini`` file.
+ If you want to use password authentication, you need to install ``sshpass``:
+
+ .. code-block:: console
+
+ sudo -EH apt-get install sshpass
To execute an installation for a Bare-Metal or a Standalone context:
setup. Refer to chapter :doc:`04-installation` for more on docker
**Install Yardstick using Docker (recommended)**
+Another way to execute an installation for a Bare-Metal or a Standalone context
+is to use the Ansible script ``install.yaml``. Refer to chapter
+:doc:`04-installation` for more details.
+
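+For example, a minimal invocation might look like the following (a sketch that
+assumes the inventory and playbook live in the ``ansible`` directory of your
+Yardstick checkout; adjust paths and extra variables to your environment):
+
+.. code-block:: console
+
+ cd ansible
+ sudo -EH ansible-playbook -i install-inventory.ini install.yaml
+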
System Topology:
================
SR-IOV Pre-requisites
^^^^^^^^^^^^^^^^^^^^^
-On Host:
- a) Create a bridge for VM to connect to external network
+On the host where the VM is created:
+ a) Create and configure a bridge named ``br-int`` for the VM to connect to the
+ external network. Currently this can be done using a VXLAN tunnel.
+
+ Execute the following on the host where the VM is created:
.. code-block:: console
+ ip link add type vxlan remote <Jumphost IP> local <DUT IP> id <ID: 10> dstport 4789
brctl addbr br-int
- brctl addif br-int <interface_name> #This interface is connected to internet
+ brctl addif br-int vxlan0
+ ip link set dev vxlan0 up
+ ip addr add <IP#1, like: 172.20.2.1/24> dev br-int
+ ip link set dev br-int up
+
+ .. note:: It may be necessary to add extra rules to iptables to forward traffic.
+
+ .. code-block:: console
+
+ iptables -A FORWARD -i br-int -s <network ip address>/<netmask> -j ACCEPT
+ iptables -A FORWARD -o br-int -d <network ip address>/<netmask> -j ACCEPT
+
+ Execute the following on the jump host:
+
+ .. code-block:: console
+
+ ip link add type vxlan remote <DUT IP> local <Jumphost IP> id <ID: 10> dstport 4789
+ ip addr add <IP#2, like: 172.20.2.2/24> dev vxlan0
+ ip link set dev vxlan0 up
+
+ .. note:: The host and the jump host are different bare-metal servers.
+
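+ To verify the tunnel (an illustrative check, using the example addresses
+ above), the ``br-int`` address on the host should now be reachable from
+ the jump host:
+
+ .. code-block:: console
+
+ ping -c 3 <IP#1, like: 172.20.2.1>
+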
+ b) Modify the test case management CIDR.
+ The IP addresses IP#1, IP#2 and the CIDR must be in the same network; with
+ the example addresses above, the CIDR would lie in ``172.20.2.0/24``.
+
+ .. code-block:: yaml
- b) Build guest image for VNF to run.
+ servers:
+ vnf:
+ network_ports:
+ mgmt:
+ cidr: '1.1.1.7/24'
+
+ c) Build a guest image for the VNF to run.
Most of the sample test cases in Yardstick are using a guest image called
- ``yardstick-image`` which deviates from an Ubuntu Cloud Server image
- Yardstick has a tool for building this custom image with samplevnf.
+ ``yardstick-nsb-image`` which deviates from an Ubuntu Cloud Server image.
+ Yardstick has a tool for building this custom image with SampleVNF.
It is necessary to have ``sudo`` rights to use this tool.
Also you may need to install several additional packages to use this tool, by
OVS-DPDK Pre-requisites
^^^^^^^^^^^^^^^^^^^^^^^
-On Host:
- a) Create a bridge for VM to connect to external network
+On the host where the VM is created:
+ a) Create and configure a bridge named ``br-int`` for the VM to connect to the
+ external network. Currently this can be done using a VXLAN tunnel.
+
+ Execute the following on the host where the VM is created:
.. code-block:: console
+ ip link add type vxlan remote <Jumphost IP> local <DUT IP> id <ID: 10> dstport 4789
brctl addbr br-int
- brctl addif br-int <interface_name> #This interface is connected to internet
+ brctl addif br-int vxlan0
+ ip link set dev vxlan0 up
+ ip addr add <IP#1, like: 172.20.2.1/24> dev br-int
+ ip link set dev br-int up
+
+ .. note:: It may be necessary to add extra rules to iptables to forward traffic.
+
+ .. code-block:: console
+
+ iptables -A FORWARD -i br-int -s <network ip address>/<netmask> -j ACCEPT
+ iptables -A FORWARD -o br-int -d <network ip address>/<netmask> -j ACCEPT
+
+ Execute the following on the jump host:
+
+ .. code-block:: console
+
+ ip link add type vxlan remote <DUT IP> local <Jumphost IP> id <ID: 10> dstport 4789
+ ip addr add <IP#2, like: 172.20.2.2/24> dev vxlan0
+ ip link set dev vxlan0 up
+
+ .. note:: The host and the jump host are different bare-metal servers.
+
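+ As a quick check (illustrative), confirm on the host that ``vxlan0`` is
+ attached to ``br-int`` and that the jump host end of the tunnel is
+ reachable:
+
+ .. code-block:: console
+
+ brctl show br-int
+ ping -c 3 <IP#2, like: 172.20.2.2>
+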
+ b) Modify the test case management CIDR.
+ The IP addresses IP#1, IP#2 and the CIDR must be in the same network.
- b) Build guest image for VNF to run.
+ .. code-block:: yaml
+
+ servers:
+ vnf:
+ network_ports:
+ mgmt:
+ cidr: '1.1.1.7/24'
+
+ c) Build a guest image for the VNF to run.
Most of the sample test cases in Yardstick are using a guest image called
- ``yardstick-image`` which deviates from an Ubuntu Cloud Server image
- Yardstick has a tool for building this custom image with samplevnf.
+ ``yardstick-nsb-image`` which deviates from an Ubuntu Cloud Server image.
+ Yardstick has a tool for building this custom image with SampleVNF.
It is necessary to have ``sudo`` rights to use this tool.
Also you may need to install several additional packages to use this tool, by
Setup SR-IOV ports on the host:
-.. note:: The ``enp24s0f0``, ``enp24s0f0`` are physical function (PF) interfaces
+.. note:: The ``enp24s0f0``, ``enp24s0f1`` are physical function (PF) interfaces
on a host and ``enp24s0f3`` is a public interface used in OpenStack, so the
interface names should be changed according to the HW environment used for
testing.
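+As an illustration only (the PF names below follow the note above and depend
+on your hardware), virtual functions can typically be created through sysfs:
+
+.. code-block:: console
+
+ echo 2 > /sys/class/net/enp24s0f0/device/sriov_numvfs
+ echo 2 > /sys/class/net/enp24s0f1/device/sriov_numvfs
+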
Config ``pod_ixia.yaml``
- .. code-block:: yaml
-
- nodes:
- -
- name: trafficgen_1
- role: IxNet
- ip: 1.2.1.1 #ixia machine ip
- user: user
- password: r00t
- key_filename: /root/.ssh/id_rsa
- tg_config:
- ixchassis: "1.2.1.7" #ixia chassis ip
- tcl_port: "8009" # tcl server port
- lib_path: "/opt/ixia/ixos-api/8.01.0.2/lib/ixTcl1.0"
- root_dir: "/opt/ixia/ixos-api/8.01.0.2/"
- py_bin_path: "/opt/ixia/ixload/8.01.106.3/bin/"
- py_lib_path: "/opt/ixia/ixnetwork/8.01.1029.14/lib/PythonApi"
- dut_result_dir: "/mnt/ixia"
- version: 8.1
- interfaces:
- xe0: # logical name from topology.yaml and vnfd.yaml
- vpci: "2:5" # Card:port
- driver: "none"
- dpdk_port_num: 0
- local_ip: "152.16.100.20"
- netmask: "255.255.0.0"
- local_mac: "00:98:10:64:14:00"
- xe1: # logical name from topology.yaml and vnfd.yaml
- vpci: "2:6" # [(Card, port)]
- driver: "none"
- dpdk_port_num: 1
- local_ip: "152.40.40.20"
- netmask: "255.255.0.0"
- local_mac: "00:98:28:28:14:00"
+ .. literalinclude:: code/pod_ixia.yaml
+ :language: yaml
For SR-IOV/OVS-DPDK pod files, please refer to the Standalone Virtualization section above for OVS-DPDK/SR-IOV configuration.
IxNetwork
---------
-1. Software needed: ``IxNetworkAPI<ixnetwork verson>Linux64.bin.tgz``
- (Download from ixia support site)
- Install - ``IxNetworkAPI<ixnetwork verson>Linux64.bin.tgz``
-2. Update pod_ixia.yaml file with ixia details.
+IxNetwork testcases use the IxNetwork API Python Bindings module, which is
+installed as part of the project requirements.
+
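+A quick way to confirm that the bindings are importable in your environment
+(assuming the module name ``IxNetwork``, as pulled in by the project
+requirements) is:
+
+.. code-block:: console
+
+ python -c "import IxNetwork"
+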
+1. Update the ``pod_ixia.yaml`` file with the Ixia details.
.. code-block:: console
Config pod_ixia.yaml
- .. code-block:: yaml
-
- nodes:
- -
- name: trafficgen_1
- role: IxNet
- ip: 1.2.1.1 #ixia machine ip
- user: user
- password: r00t
- key_filename: /root/.ssh/id_rsa
- tg_config:
- ixchassis: "1.2.1.7" #ixia chassis ip
- tcl_port: "8009" # tcl server port
- lib_path: "/opt/ixia/ixos-api/8.01.0.2/lib/ixTcl1.0"
- root_dir: "/opt/ixia/ixos-api/8.01.0.2/"
- py_bin_path: "/opt/ixia/ixload/8.01.106.3/bin/"
- py_lib_path: "/opt/ixia/ixnetwork/8.01.1029.14/lib/PythonApi"
- dut_result_dir: "/mnt/ixia"
- version: 8.1
- interfaces:
- xe0: # logical name from topology.yaml and vnfd.yaml
- vpci: "2:5" # Card:port
- driver: "none"
- dpdk_port_num: 0
- local_ip: "152.16.100.20"
- netmask: "255.255.0.0"
- local_mac: "00:98:10:64:14:00"
- xe1: # logical name from topology.yaml and vnfd.yaml
- vpci: "2:6" # [(Card, port)]
- driver: "none"
- dpdk_port_num: 1
- local_ip: "152.40.40.20"
- netmask: "255.255.0.0"
- local_mac: "00:98:28:28:14:00"
+ .. literalinclude:: code/pod_ixia.yaml
+ :language: yaml
For SR-IOV/OVS-DPDK pod files, please refer to the Standalone Virtualization section above for OVS-DPDK/SR-IOV configuration.
-3. Start IxNetwork TCL Server
+2. Start IxNetwork TCL Server
You will also need to configure the IxNetwork machine to start the IXIA
IxNetworkTclServer. This can be started like so:
``Start->Programs->Ixia->IxNetwork->IxNetwork 7.21.893.14 GA->IxNetworkTclServer``
(or ``IxNetworkApiServer``)
-4. Execute testcase in samplevnf folder e.g.
+3. Execute the testcase in the samplevnf folder, e.g.
``<repo>/samples/vnf_samples/nsut/vfw/tc_baremetal_rfc2544_ipv4_1rule_1flow_64B_ixia.yaml``
-
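+
+For example, such a testcase might be launched with the Yardstick CLI
+(assuming Yardstick is installed and ``pod_ixia.yaml`` is in place):
+
+.. code-block:: console
+
+ yardstick task start samples/vnf_samples/nsut/vfw/tc_baremetal_rfc2544_ipv4_1rule_1flow_64B_ixia.yaml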