.. This work is licensed under a Creative Commons Attribution 4.0 International License.
.. http://creativecommons.org/licenses/by/4.0

This section gives some guidelines about how to troubleshoot the test cases.

**IMPORTANT**: As in the previous section, the steps described below must be
executed inside the Functest Docker container, after sourcing the OpenStack
credentials::

    source /home/opnfv/functest/conf/openstack.creds

This section covers the test cases related to the VIM (healthcheck, vping_ssh,
vping_userdata, tempest_smoke_serial, tempest_full_parallel, rally_sanity and
rally_full).

For both vPing test cases (**vPing_ssh** and **vPing_userdata**), the first steps
are similar:

* Create Image
* Create Network
* Create Security Group
* Create Instance

After these actions, the test cases differ and will be explained in their
respective sections below.

These test cases can be run inside the container, using the Functest CLI, as follows::

    $ functest testcase run vping_ssh
    $ functest testcase run vping_userdata
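
If you are unsure about the exact test case names available in your
environment, the CLI should be able to list them; a quick sanity check,
assuming the same Colorado CLI verbs as above::

    $ functest testcase list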

The Functest CLI is designed to route the call to the corresponding internal
Python scripts, located at
*$repos_dir/functest/testcases/vPing/CI/libraries/vPing_ssh.py* and
*$repos_dir/functest/testcases/vPing/CI/libraries/vPing_userdata.py*.

#. In this Colorado Functest User Guide, the use of the Functest CLI is
   emphasized. The Functest CLI replaces the earlier Bash shell script
   interface.

#. There is one difference between the Functest CLI based test case
   execution and the earlier Bash shell script based execution, which is
   relevant to point out in troubleshooting scenarios:

The Functest CLI does **not yet** support the option to suppress
clean-up of the generated OpenStack resources following the execution
of a test case.

Explanation: After finishing the test execution, the corresponding
script will remove, by default, all the resources created in OpenStack
(image, instances, network and security group). When troubleshooting,
it is sometimes advisable to keep those resources in case the test
fails and manual testing is needed.

It is actually still possible to invoke test execution with suppression
of OpenStack resource clean-up; however, this requires invocation of a
**specific Python script:** '/home/opnfv/repos/functest/ci/run_test.py'.
The `OPNFV Functest Developer Guide`_ provides guidance on the use of that
Python script in such troubleshooting cases.
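
The exact options of that script are release dependent, so the safest first
step is to print its help text from inside the container; this assumes only
that the script exposes the standard Python argument parser::

    python /home/opnfv/repos/functest/ci/run_test.py -h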

Some of the common errors that can appear in this test case are::

    vPing_ssh- ERROR - There has been a problem when creating the neutron network....

This means that there have been some problems with Neutron, even before creating the
instances. Try to manually create a Neutron network and a subnet to see if that works.
The debug messages will also help to see when it failed (subnet and router creation).
Example of Neutron commands (using the 10.6.0.0/24 range as an example)::

    neutron net-create net-test
    neutron subnet-create --name subnet-test --allocation-pool start=10.6.0.2,end=10.6.0.100 \
    --gateway 10.6.0.254 net-test 10.6.0.0/24
    neutron router-create test_router
    neutron router-interface-add <ROUTER_ID> subnet-test
    neutron router-gateway-set <ROUTER_ID> <EXT_NET_NAME>
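
To verify that everything was created as expected, the standard Neutron
listing commands can be used::

    neutron net-list
    neutron subnet-list
    neutron router-list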

Another related error can occur while creating the Security Groups for the instances::

    vPing_ssh- ERROR - Failed to create the security group...

In this case, proceed to create it manually. These are some hints::

    neutron security-group-create sg-test
    neutron security-group-rule-create sg-test --direction ingress --protocol icmp \
    --remote-ip-prefix 0.0.0.0/0
    neutron security-group-rule-create sg-test --direction ingress --ethertype IPv4 \
    --protocol tcp --port-range-min 80 --port-range-max 80 --remote-ip-prefix 0.0.0.0/0
    neutron security-group-rule-create sg-test --direction egress --ethertype IPv4 \
    --protocol tcp --port-range-min 80 --port-range-max 80 --remote-ip-prefix 0.0.0.0/0
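
Note that **vPing_ssh** also needs to reach the instance over SSH. If you plan
to reproduce that part manually, add an ingress rule for TCP port 22,
following the same command pattern as above::

    neutron security-group-rule-create sg-test --direction ingress --ethertype IPv4 \
    --protocol tcp --port-range-min 22 --port-range-max 22 --remote-ip-prefix 0.0.0.0/0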

The next step is to create the instances. The image used is located at
*/home/opnfv/functest/data/cirros-0.3.4-x86_64-disk.img* and a Glance image is created
with the name **functest-vping**. If booting the instances fails (i.e. the status
is not **ACTIVE**), you can check why it failed by doing::

    nova show <INSTANCE_ID>
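
It is also worth confirming that the Glance image was created correctly::

    glance image-list | grep functest-vping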

It might show some messages about the booting failure. To try that manually::

    nova boot --flavor m1.small --image functest-vping --nic net-id=<NET_ID> nova-test

This will spawn a VM using the network created manually before.
In all the scenarios tested in OPNFV CI, the previous actions have never been a
problem. Further possible problems are explained in the following sections.

This test case creates a floating IP on the external network and assigns it to
the second instance, **opnfv-vping-2**. The purpose of this is to establish
an SSH connection to that instance and SCP a script that will ping the first
instance. This script is located in the repository under
*$repos_dir/functest/testcases/OpenStack/vPing/ping.sh* and takes an IP as
a parameter. When the SCP is completed, the test will do an SSH call to that script
inside the second instance. Some problems can happen here::

    vPing_ssh- ERROR - Cannot establish connection to IP xxx.xxx.xxx.xxx. Aborting

If this is displayed, stop the test or wait for it to finish, if you have used
the special method of test invocation with suppression of OpenStack resource
clean-up, as explained earlier. It means that the container cannot reach the
Public/External IP assigned to the instance **opnfv-vping-2**. There are many
possible reasons, and they really depend on the chosen scenario. For most of
the ODL-L3 and ONOS scenarios this has been noticed and it is a known
limitation.

First, make sure that the instance **opnfv-vping-2** succeeded in getting an IP
from the DHCP agent. It can be checked by doing::

    nova console-log opnfv-vping-2

If the messages *Sending discover* and *No lease, failing* are shown, it probably
means that the Neutron dhcp-agent failed to assign an IP or was not responding at
all. At this point it does not make sense to try to ping the floating IP.
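
In that situation it helps to check whether the Neutron agents, and the DHCP
agent in particular, are alive::

    neutron agent-list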

If the instance got an IP properly, try to ping the VM manually from the container::

    ping <FLOATING_IP>

If the ping does not return anything, try to ping from the host where the Docker
container is running. If that solves the problem, check the iptables rules, because
there might be some rules rejecting ICMP or TCP traffic coming/going from/to the
container.

At this point, if the ping does not work either, try to reproduce the test
manually with the steps described above in the vPing common section, with the
following addition::

    neutron floatingip-create <EXT_NET_NAME>
    nova floating-ip-associate nova-test <FLOATING_IP>
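
The assigned address can then be double-checked with::

    neutron floatingip-list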

Further troubleshooting is out of the scope of this document, as it might be due to
problems with the SDN controller. Contact the installer team members or send an
email to the corresponding OPNFV mailing list for more information.

This test case does not create a floating IP nor establish an SSH
connection. Instead, it uses the nova metadata service when creating an instance
to pass the same script as before (ping.sh), but as one-line text. This script
is executed automatically when the second instance **opnfv-vping-2** is booted.
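
A simplified sketch of the same mechanism uses nova's standard user-data
option; note that the automated test injects the script with the target IP
already substituted, and the instance name used here is just illustrative::

    nova boot --flavor m1.small --image functest-vping --nic net-id=<NET_ID> \
    --user-data ping.sh nova-test-userdata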

The only known reason for this test to fail is the lack of support for
cloud-init (the nova metadata service). Check the console of the instance::

    nova console-log opnfv-vping-2

If this text or similar is shown::

    checking http://169.254.169.254/2009-04-04/instance-id
    failed 1/20: up 1.13. request failed
    failed 2/20: up 13.18. request failed
    failed 3/20: up 25.20. request failed
    failed 4/20: up 37.23. request failed
    failed 5/20: up 49.25. request failed
    failed 6/20: up 61.27. request failed
    failed 7/20: up 73.29. request failed
    failed 8/20: up 85.32. request failed
    failed 9/20: up 97.34. request failed
    failed 10/20: up 109.36. request failed
    failed 11/20: up 121.38. request failed
    failed 12/20: up 133.40. request failed
    failed 13/20: up 145.43. request failed
    failed 14/20: up 157.45. request failed
    failed 15/20: up 169.48. request failed
    failed 16/20: up 181.50. request failed
    failed 17/20: up 193.52. request failed
    failed 18/20: up 205.54. request failed
    failed 19/20: up 217.56. request failed
    failed 20/20: up 229.58. request failed
    failed to read iid from metadata. tried 20

it means that the instance failed to read from the metadata service. Contact
the Functest or installer teams for more information.
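
One basic check, performed on a controller node rather than from the Functest
container, is whether the Neutron metadata agent is up; the exact service name
may differ between installers::

    systemctl status neutron-metadata-agent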

NOTE: Cloud-init is not supported on scenarios dealing with ONOS, and these tests
have been excluded from CI in those scenarios.

In the upstream OpenStack CI all the Tempest test cases are supposed to pass.
If some test cases fail in an OPNFV deployment, the reason is very probably one
of the following:

+-----------------------------+-----------------------------------------------------+
| Error cause                 | Details                                             |
+=============================+=====================================================+
| Resources required for test | Such resources could be e.g. an external network    |
| case execution are missing  | and access to the management subnet (adminURL) from |
|                             | the Functest Docker container.                      |
+-----------------------------+-----------------------------------------------------+
| OpenStack components or     | Check running services in the controller and compute|
| services are missing or not | nodes (e.g. with "systemctl" or "service" commands).|
| configured properly         | Configuration parameters can be verified from the   |
|                             | related .conf files located under the               |
|                             | '/etc/<component>' directory.                       |
+-----------------------------+-----------------------------------------------------+
| Some resources required for | The tempest.conf file, automatically generated by   |
| executing test cases are    | Rally in Functest, does not contain all the needed  |
| missing                     | parameters or some parameters are not set properly. |
|                             | The tempest.conf file is located in the directory   |
|                             | '/home/opnfv/.rally/tempest/for-deployment-<UUID>'  |
|                             | in the Functest Docker container. Use the "rally    |
|                             | deployment list" command in order to check the UUID |
|                             | of the current deployment.                          |
+-----------------------------+-----------------------------------------------------+
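
For example, to locate the generated tempest.conf for the current deployment::

    rally deployment list
    ls /home/opnfv/.rally/tempest/for-deployment-<UUID>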

When some Tempest test case fails, the captured traceback and possibly also the
related REST API requests/responses are output to the console. More detailed debug
information can be found in the tempest.log file stored in the related Rally
deployment folder.
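
A quick way to extract the relevant entries from that log is a plain grep::

    grep -i error /home/opnfv/.rally/tempest/for-deployment-<UUID>/tempest.log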

The same error causes that were mentioned above for the Tempest test cases may
lead to errors in Rally as well.

It is possible to run only one Rally scenario, instead of the whole suite.
To do that, call the alternative Python script as follows::

    python $repos_dir/functest/testcases/OpenStack/rally/run_rally-cert.py -h
    usage: run_rally-cert.py [-h] [-d] [-r] [-s] [-v] [-n] test_name

    positional arguments:
      test_name      Module name to be tested. Possible values are : [
                     authenticate | glance | cinder | heat | keystone | neutron |
                     nova | quotas | requests | vm | all ] The 'all' value
                     performs all possible test scenarios

    optional arguments:
      -h, --help     show this help message and exit
      -d, --debug    Debug mode
      -r, --report   Create json result file
      -s, --smoke    Smoke test mode
      -v, --verbose  Print verbose info about the progress
      -n, --noclean  Don't clean the created resources for this test.

For example, to run the Glance scenario with debug information::

    python $repos_dir/functest/testcases/OpenStack/rally/run_rally-cert.py -d glance

Possible scenarios are:

* authenticate
* glance
* cinder
* heat
* keystone
* neutron
* nova
* quotas
* requests
* vm

To know more about what those scenarios do, see their definitions in the directory
*$repos_dir/functest/testcases/OpenStack/rally/scenario*.
For more information about Rally scenario definition, please refer to the official
Rally documentation. `[3]`_

If the *all* value is specified, all the scenarios will be run one by one. Please
note that this might take some time (~1.5 hours), with the Nova scenario alone
taking around 1 hour to complete.

To check any possible problems with Rally, the logs are stored under
*/home/opnfv/functest/results/rally/* in the Functest Docker container.
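
For instance, a quick scan for failures across all the stored scenario logs::

    grep -ri fail /home/opnfv/functest/results/rally/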

Two versions are supported in Colorado, depending on the scenario.

The upstream test suites have not been adapted, so depending on your
configuration you may get 15 or 18 tests passed out of 18. The 3 failing
test cases fail due to a wrong return code.

Please refer to the ONOS documentation: `ONOSFW User Guide`_.

Please refer to the Doctor documentation: `Doctor User Guide`_.

Please refer to the Promise documentation: `Promise User Guide`_.

Please refer to the SDNVPN documentation: `SDNVPN User Guide`_.

vIMS deployment may fail for several reasons; the most frequent ones are
described in the following table:

+-----------------------------------+------------------------------------+
| Error                             | Comments                           |
+===================================+====================================+
| Keystone admin API not reachable  | Impossible to create vIMS user and |
|                                   | tenant                             |
+-----------------------------------+------------------------------------+
| Impossible to retrieve admin role | Impossible to create vIMS user and |
|                                   | tenant                             |
+-----------------------------------+------------------------------------+
| Error when uploading image from   | Impossible to deploy VNF           |
| OpenStack to glance               |                                    |
+-----------------------------------+------------------------------------+
| Cinder quota cannot be updated    | Default quotas are not sufficient; |
|                                   | they are adapted in the script     |
+-----------------------------------+------------------------------------+
| Impossible to create a volume     | VNF cannot be deployed             |
+-----------------------------------+------------------------------------+
| SSH connection issue between the  | If the vPing test fails, the vIMS  |
| Test Docker container and the VM  | test will fail too                 |
+-----------------------------------+------------------------------------+
| No Internet access from the VM    | The VMs of the VNF must have       |
|                                   | external access to the Internet    |
+-----------------------------------+------------------------------------+
| No access to OpenStack API from   | The orchestrator can be installed  |
| the VM                            | but the vIMS VNF installation fails|
+-----------------------------------+------------------------------------+
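
Some of these causes can be checked quickly from the Functest container with
the standard clients, e.g. whether the Keystone endpoints are reachable and
what the current Cinder quotas look like (<TENANT_ID> is a placeholder)::

    openstack endpoint list
    cinder quota-show <TENANT_ID>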

.. _`OPNFV Functest Developer Guide`: http://artifacts.opnfv.org/functest/docs/devguide/#