This section gives some guidelines about how to troubleshoot the test cases
owned by Functest.
**IMPORTANT**: As in the previous section, the steps defined below must be
executed inside the Functest Docker container and after sourcing the OpenStack
credentials::

   . $creds

vPing common
^^^^^^^^^^^^
For both vPing test cases (**vPing_ssh** and **vPing_userdata**), the first steps are
similar:

 * Create Glance image
 * Create Network
 * Create Security Group
 * Create instances

After these actions, the test cases differ and will be explained in their
respective section.

These test cases can be run inside the container as follows::

   $repos_dir/functest/docker/run_tests.sh -t vping_ssh
   $repos_dir/functest/docker/run_tests.sh -t vping_userdata

The **run_tests.sh** script internally calls the corresponding vPing scripts,
located in
*$repos_dir/functest/testcases/vPing/CI/libraries/vPing_ssh.py* and
*$repos_dir/functest/testcases/vPing/CI/libraries/vPing_userdata.py*, with the
appropriate flags.
After finishing the test execution, the corresponding script will remove all
created resources in OpenStack (image, instances, network and security group).
When troubleshooting, it is sometimes useful to keep those resources in case the
test fails and manual testing is needed. This can be achieved by adding the
flag *-n*::

   $repos_dir/functest/docker/run_tests.sh -n -t vping_ssh

neutron security-group-rule-create sg-test --direction egress --ethertype IPv4 --protocol tcp --port-range-min 80 --port-range-max 80 --remote-ip-prefix 0.0.0.0/0
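The egress rule above only opens outgoing HTTP. When reproducing the vPing
scenario by hand, the security group also needs ingress rules for ICMP (the
ping itself) and SSH (used by vPing_ssh); a sketch, reusing the *sg-test*
group name from the example above::

   # allow incoming ICMP echo requests (needed for the ping)
   neutron security-group-rule-create sg-test --direction ingress --protocol icmp
   # allow incoming SSH connections (needed by vPing_ssh)
   neutron security-group-rule-create sg-test --direction ingress --protocol tcp --port-range-min 22 --port-range-max 22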
The next step is to create the instances. The image used is located in
*/home/opnfv/functest/data/cirros-0.3.4-x86_64-disk.img* and a Glance image is
created with the name **functest-vping**. If booting the instances fails (i.e.
the status is not **ACTIVE**), you can check why it failed by doing::

   nova list

It might show some messages about the booting failure. To try that manually::

   nova boot --flavor 2 --image functest-vping --nic net-id=<NET_ID> nova-test

This will spawn a VM using the network previously created manually.
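If the instance ends up in **ERROR** state instead, the *fault* field of the
instance details usually contains the reason for the failure; a sketch, using
the hypothetical instance name from the example above::

   # show full instance details, including the fault message on failure
   nova show nova-test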
In all the OPNFV scenarios tested in CI, the previous actions have never been a
problem. Further possible problems are explained in the following sections.
vPing_SSH
^^^^^^^^^
This test case creates a floating IP on the external network and assigns it to
the second instance **opnfv-vping-2**. The purpose of this is to establish
a SSH connection to that instance and SCP a script that will ping the first
instance.
This script is located in the repository under
*$repos_dir/functest/testcases/vPing/CI/libraries/ping.sh* and takes an IP as
a parameter. When the SCP is completed, the test will do an SSH call to that
script. If the connection fails, the following error is displayed::

   vPing_ssh- ERROR - Cannot establish connection to IP xxx.xxx.xxx.xxx. Aborting
If this is displayed, stop the test or wait for it to finish (if you have used the flag
*-n* in **run_tests.sh** explained previously) so that the test does not clean
the OpenStack resources. It means that the container cannot reach the public
IP assigned to the instance **opnfv-vping-2**. There are many possible reasons, and
they really depend on the chosen scenario. For most of the ODL-L3 and ONOS scenarios
this has been noticed and it is a known limitation.
First, make sure that the instance **opnfv-vping-2** succeeded in getting an IP from
the DHCP agent. It can be checked by doing::

   nova console-log opnfv-vping-2

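With the default CirrOS image, a successful DHCP request normally leaves a
lease message in the console log, so a quick check is (a sketch; the exact
wording of the log line depends on the image)::

   # look for evidence that a DHCP lease was obtained
   nova console-log opnfv-vping-2 | grep -i lease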
Further troubleshooting is out of scope of this document, as it might be due to
problems with the SDN controller. Contact the installer team members or send an
email to the corresponding OPNFV mailing list for more information.
vPing_userdata
^^^^^^^^^^^^^^

This test case does not create any floating IP nor establish an SSH
connection. Instead, it uses the nova-metadata service when creating an instance
to pass the same script as before (*ping.sh*), but as one-line text. This script
will be executed automatically when the second instance **opnfv-vping-2** is booted.
The only known reason for this test to fail is lack of support for cloud-init
(the nova-metadata service). Check the console of the instance::

   nova console-log opnfv-vping-2

If this text or similar is shown::

   checking http://169.254.169.254/2009-04-04/instance-id
   failed 1/20: up 1.13. request failed
   failed to read iid from metadata. tried 20

it means that the instance failed to read from the metadata service. Contact
the Functest or installer teams for more information.
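The metadata service can also be probed manually from inside the instance
(e.g. through its console), using the same URL that appears in the log above;
a sketch, assuming a wget-like tool is available in the image::

   # should print the instance id if the metadata service is reachable
   wget -qO- http://169.254.169.254/2009-04-04/instance-id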
NOTE: Cloud-init is not supported on scenarios dealing with ONOS and the tests
have been excluded from CI in those scenarios.
Tempest
^^^^^^^

In the upstream OpenStack CI all the Tempest test cases are supposed to pass.
If some test cases fail in an OPNFV deployment, the reason is very probably one
of the following:

+-----------------------------+------------------------------------------------+
| Error                       | Details                                        |
+=============================+================================================+
| Resources required for test | Such resources could be e.g. an external       |
| case execution are missing  | network and access to the management subnet    |
|                             | (adminURL) from the Functest docker container. |
+-----------------------------+------------------------------------------------+
| OpenStack components or     | Check running services in the controller and   |
| services are missing or not | compute nodes (e.g. with "systemctl" or        |
| configured properly         | "service" commands). Configuration parameters  |
|                             | can be verified from related .conf files       |
|                             | located under /etc/<component> directories.    |
+-----------------------------+------------------------------------------------+
| Some resources required for | The tempest.conf file, automatically generated |
| test case execution are     | by Rally in Functest, does not contain all the |
| missing                     | needed parameters or some parameters are not   |
|                             | set properly.                                  |
|                             | The tempest.conf file is located in /home/opnfv|
|                             | /.rally/tempest/for-deployment-<UUID> in the   |
|                             | Functest container.                            |
|                             | Use the "rally deployment list" command to     |
|                             | check the UUID of the current deployment.      |
+-----------------------------+------------------------------------------------+
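Translating the last row of the table into commands, the generated
tempest.conf can be located and inspected inside the Functest container as
follows (a sketch; replace <UUID> with the identifier reported for your
deployment)::

   # find the UUID of the current Rally deployment
   rally deployment list
   # inspect the Tempest configuration generated for it
   cat /home/opnfv/.rally/tempest/for-deployment-<UUID>/tempest.conf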
When some Tempest test case fails, captured traceback and possibly also related
* vm
To know more about what those scenarios are doing, they are defined in:
*$repos_dir/functest/testcases/VIM/OpenStack/CI/rally_cert/scenario*. For more info about
Rally scenario definition please refer to the Rally official documentation.
If the flag *all* is specified, it will run all the scenarios one by one. Please
ONOS
^^^^
Please refer to the ONOS documentation.
Feature
-------
vIMS
^^^^

vIMS deployment may fail for several reasons; the most frequent ones are
described in the following table:
+-----------------------------------+------------------------------------+
| Error                             | Comments                           |
+===================================+====================================+
| Keystone admin API not reachable  | Impossible to create vIMS user and |
|                                   | tenant                             |
+-----------------------------------+------------------------------------+
| SSH connection issue between the  | If vPing test fails, vIMS test will|
| Test container and the VM         | fail...                            |
+-----------------------------------+------------------------------------+
| No Internet access from the VM    | The VMs of the VNF must have an    |
|                                   | external access to Internet        |
+-----------------------------------+------------------------------------+
| No access to OpenStack API from   | Orchestrator can be installed but  |
| the VM                            | the vIMS VNF installation fails    |
+-----------------------------------+------------------------------------+
Promise
^^^^^^^
Please refer to the Promise documentation.


SDNVPN
^^^^^^

Please refer to the SDNVPN documentation.