.. This work is licensed under a Creative Commons Attribution 4.0 International License.
.. SPDX-License-Identifier: CC-BY-4.0
******************************
OPNFV FUNCTEST developer guide
******************************
Functest is a project dealing with functional testing.
Functest produces its own internal test cases but can also be considered
as a framework to support feature and VNF onboarding project testing.
Functest developed a TestAPI and defined a test collection framework
that can be used by any OPNFV project.
Therefore there are many ways to contribute to Functest. You can:

* Develop new internal test cases
* Integrate the tests from your feature project
* Develop the framework to ease the integration of external test cases
* Develop the API / Test collection framework
* Develop dashboards or automatic reporting portals
This document describes how, as a developer, you may interact with the
Functest project. The first section details the main working areas of
the project. The second part is a list of "How to" questions to help you
join the Functest family, whatever your field of interest is.
========================
Functest developer areas
========================

Functest High level architecture
================================
Functest is a project delivering a test container dedicated to OPNFV.
It includes the tools, the scripts and the test scenarios.
Functest can be described as follows::

  +----------------------+
  |                      |
  |   +--------------+   |
  |   |    Tools     |   |      +-------------------+
  |   |   Scripts    |   +------+       OPNFV       |
  |   |  Scenarios   |   |      | System Under Test |
  |   |              |   +------+                   |
  |   +--------------+   |      |    Management     |
  |                      |      +-------------------+
  |   Functest Docker    |
  |                      |
  +----------------------+
Functest internal test cases
============================

The internal test cases in Danube include:

* api_check
* connection_check
* snaps_health_check
* vping_ssh
* vping_userdata
* tempest_smoke_serial
* tempest_full_parallel
* rally_sanity
* rally_full
* snaps_smoke
By internal, we mean that these particular test cases have been
developed and/or integrated by Functest contributors and the associated
code is hosted in the Functest repository.
An internal case can be fully developed or a simple integration of
upstream suites (e.g. Tempest/Rally developed in OpenStack are just
integrated in Functest).
The structure of this repository is detailed in `[1]`_.
The main internal test cases are in the opnfv_tests subfolder of the
repository; the internal test cases are:

* openstack: api_check, connection_check, snaps_health_check, vping_ssh, vping_userdata, tempest_*, rally_*, snaps_smoke

If you want to create a new test case you will have to create a new
folder under the testcases directory.
Functest external test cases
============================

The external test cases are inherited from other OPNFV projects,
especially the feature projects.

The external test cases include, among others, promise, doctor,
security_scan, copper, bgpvpn and sfc_odl.
The code to run these test cases may be directly in the repository of
the project. We also have a **features** sub-directory under the
opnfv_tests directory that may be used (it can be useful if you want to
reuse Functest libraries).
Functest framework
==================

Functest can be considered as a framework.
Functest is released as a Docker image, including tools, scripts and a
CLI to prepare the environment and run tests.
It simplifies the integration of external test suites in the CI
pipeline and provides commodity tools to collect and display results.
Since Colorado, test categories also known as tiers have been created
to group similar tests, provide consistent sub-lists and, at the end,
optimize test duration for CI (see the How To section).

The definition of the tiers has been agreed by the testing working group.
Functest abstraction classes
============================

In order to harmonize test integration, 3 abstraction classes have been
introduced in Danube:

* testcase: base for any test case
* feature: abstraction for feature projects
* vnf_base: abstraction for VNF onboarding

The goal is to unify the way tests are run from Functest.

feature and vnf_base inherit from testcase::
            +-----------------------------------------+
            |                                         |
            |                testcase                 |
            |                                         |
            |           - publish_report()            |
            |           - check_criteria()            |
            |                                         |
            +-----------------------------------------+
                   |                        |
                   V                        V
      +--------------------+     +--------------------------+
      |                    |     |                          |
      |      feature       |     |         vnf_base         |
      |                    |     |                          |
      |  - prepare()       |     |  - prepare()             |
      |  - execute()       |     |  - deploy_orchestrator() |
      |  - post()          |     |  - deploy_vnf()          |
      |  - parse_results() |     |  - test_vnf()            |
      |                    |     |                          |
      +--------------------+     +--------------------------+
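As an illustration, the sketch below shows how a feature project test
might plug into these abstractions. The module path, class name and
return convention are assumptions for illustration; only the method
names come from the classes above.

.. code-block:: python

    # A minimal sketch, assuming the feature abstraction is importable
    # as below; module path and class names are illustrative.
    from functest.core import feature_base  # assumed module path


    class MyFeatureTest(feature_base.Feature):  # hypothetical class name
        """Wrap a feature project suite in the Functest feature class."""

        def execute(self):
            # Run the feature project's own suite here; the surrounding
            # framework turns the status into PASS/FAIL afterwards.
            return 0  # 0 means success in this sketch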
Functest util classes
=====================

In order to simplify the creation of test cases, Functest develops some
functions that can be used by any feature or internal test case.
Several features are supported such as logger, configuration management
and OpenStack capabilities (snapshot, clean, tacker, ...).
These functions can be found under <repo>/functest/utils and can be
described as follows::

  functest/utils/
  |-- functest_logger.py
  |-- functest_utils.py
  |-- openstack_clean.py
  |-- openstack_snapshot.py
  |-- openstack_tacker.py
  `-- openstack_utils.py
Note that for OpenStack, keystone v3 is now deployed by default by
compass, fuel and joid in Danube. All installers still support keystone
v2, which is deprecated.
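For instance, a test case running inside the Functest container could
reuse the logger and the utility helpers as sketched below. The Logger
wrapper API shown here is an assumption; only the module names come
from the tree above.

.. code-block:: python

    # A minimal sketch, assuming these modules are importable inside the
    # Functest container and functest_logger exposes a Logger wrapper.
    import functest.utils.functest_logger as ft_logger
    import functest.utils.functest_utils as ft_utils

    logger = ft_logger.Logger("my_test").getLogger()  # assumed API
    installer = ft_utils.get_installer_type(logger)   # reads INSTALLER_TYPE
    logger.info("Running on installer: %s" % installer)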
Test collection framework
=========================

The OPNFV testing group created a test collection database to collect
the test results from CI:

http://testresults.opnfv.org/test/swagger/spec.html

Authentication: opnfv/api@opnfv
Any test project running on any lab integrated in CI can push the
results to this database.
This database can be used to see the evolution of the tests and compare
the results versus the installers, the scenarios or the labs.
The test result management can be summarized as follows::

   +-------------+    +-------------+    +-------------+
   |             |    |             |    |             |
   |    Test     |    |    Test     |    |    Test     |
   |  Project #1 |    |  Project #2 |    |  Project #N |
   |             |    |             |    |             |
   +-------------+    +-------------+    +-------------+
          |                  |                  |
          V                  V                  V
        +-----------------------------------------+
        |                                         |
        |         Test Rest API front end         |
        |    http://testresults.opnfv.org/test    |
        |                                         |
        +-----------------------------------------+
            A                    |
            |                    V
            |        +-------------------------+
            |        |                         |
            |        |     Test Results DB     |
            |        |        Mongo DB         |
            |        |                         |
            |        +-------------------------+
            |
   +----------------------+
   |                      |
   |    Test Dashboard    |
   |                      |
   +----------------------+
The TestAPI is used to declare pods, projects, test cases and test
results. Pods are the pods used to run the tests.
The results pushed in the database are related to pods, projects and
cases. If you try to push results of a test done on a non-referenced
pod, the API will return an error message.
An additional dashboard method was added in the Brahmaputra release to
post-process the raw results (deprecated in Colorado).

The data model is very basic: 5 objects are created, among them pods,
projects, test cases and results.
The code of the API is hosted in the releng repository `[6]`_.
The static documentation of the API can be found at `[17]`_.
The TestAPI has been dockerized and may be installed locally in your
lab. See `[15]`_ for details.
The deployment of the TestAPI has been automated.
A jenkins job manages:

* the unit tests of the TestAPI
* the creation of a new docker file
* the deployment of the new TestAPI
* the archive of the old TestAPI
* the backup of the Mongo DB
TestAPI Authorization
~~~~~~~~~~~~~~~~~~~~~

PUT/DELETE/POST operations of the TestAPI now require token based
authorization. The token needs to be added in the request using the
header 'X-Auth-Token' for access to the database:

.. code-block:: python

    headers['X-Auth-Token']

The value of the header, i.e. the token, can be accessed in the jenkins
environment variable *TestApiToken*. The token value is added as a
masked password:

.. code-block:: python

    headers['X-Auth-Token'] = os.environ.get('TestApiToken')

The above example is in Python. Token based authentication has been
added so that only CI pods running jenkins jobs can access the database.
Please note that currently token authorization is implemented but is
not yet enabled.
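Putting the snippets together, a complete authenticated POST could look
like the following sketch; the payload is purely illustrative and the
URL reuses the /results route shown in the push_results_to_db example
further below.

.. code-block:: python

    # A minimal sketch of an authenticated POST to the TestAPI.
    import json
    import os

    import requests

    url = "http://testresults.opnfv.org/test/api/v1/results"
    headers = {'Content-Type': 'application/json',
               'X-Auth-Token': os.environ.get('TestApiToken')}
    params = {"project_name": "functest",
              "case_name": "vping_ssh"}  # illustrative payload
    r = requests.post(url, data=json.dumps(params), headers=headers)
    print(r.status_code)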
Automatic reporting
===================

An automatic reporting page has been created in order to provide a
consistent view of the scenarios.
On this page, each scenario is evaluated according to test criteria.
The code for the automatic reporting is available at `[8]`_.

The results are collected from the centralized database every day, per
scenario. A score is calculated based on the results from the last 10
days. This score is the sum of the single test scores. Each test case
has a success criteria reflected in the criteria field of the results.
Considering an instance of a scenario os-odl_l2-nofeature-ha, the score
is the sum of the scores of all the runnable tests from the categories
(tiers healthcheck, smoke and features) corresponding to this scenario.
+---------------------+---------+---------+---------+---------+
| Test                | Apex    | Compass | Fuel    | Joid    |
+=====================+=========+=========+=========+=========+
| vPing_ssh           | X       | X       | X       | X       |
+---------------------+---------+---------+---------+---------+
| vPing_userdata      | X       | X       | X       | X       |
+---------------------+---------+---------+---------+---------+
| tempest_smoke_serial| X       | X       | X       | X       |
+---------------------+---------+---------+---------+---------+
| rally_sanity        | X       | X       | X       | X       |
+---------------------+---------+---------+---------+---------+
| odl                 | X       | X       | X       | X       |
+---------------------+---------+---------+---------+---------+
| promise             |         |         | X       | X       |
+---------------------+---------+---------+---------+---------+
| doctor              | X       |         | X       |         |
+---------------------+---------+---------+---------+---------+
| security_scan       | X       |         |         |         |
+---------------------+---------+---------+---------+---------+
| copper              | X       |         |         | X       |
+---------------------+---------+---------+---------+---------+

src: Colorado (see the release notes for the latest matrix version)
All the test cases listed in the table are runnable on
os-odl_l2-nofeature scenarios.
If no result is available or if all the results are failed, the test
case gets 0 points.
If it was successful at least once but not during the last 4 runs, the
case gets 1 point (it worked once).
If at least 3 of the last 4 runs were successful, the case gets 2 points.
If the last 4 runs of the test are successful, the case gets 3 points.
In the example above, the target score for fuel/os-odl_l2-nofeature-ha
is 3 points multiplied by the number of runnable test cases.
The scenario is validated per installer when we get 3 points for all
the individual test cases (e.g. 18/18 for 6 test cases).
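This scoring rule can be summarized by the following sketch; the
function name and the input format are illustrative.

.. code-block:: python

    # A minimal sketch of the scoring rule above; the input is the list
    # of the last 4 run results for one test case, True meaning success.
    def score_test_case(last_four_runs):
        if not any(last_four_runs):
            return 0  # no result or all runs failed
        if all(last_four_runs):
            return 3  # the last 4 runs succeeded
        if sum(last_four_runs) >= 3:
            return 2  # at least 3 of the last 4 runs succeeded
        return 1      # it worked at least once

    print(score_test_case([True, False, True, True]))  # 2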
Please note that complex or long duration tests are not considered for
the scoring. The success criteria are not always easy to define and may
require specific hardware configurations. These results however provide
a good level of trust in the scenario.
A web page is automatically generated every day to display the status.
This page can be found at `[9]`_. For the status, click on the Status
menu; you may also get feedback for the vims and tempest_smoke_serial
test cases.
Any validated scenario is stored in a local file on the web server. In
fact, as we are using a sliding window to get results, it may happen
that a successful scenario is no longer run (because it is considered
stable) and then the number of iterations (4 needed) would not be
sufficient to get the green status.
Please note that other test cases (e.g. sfc_odl, bgpvpn) also need ODL
configuration addons and, as a consequence, specific scenarios. They
are not considered as runnable on the generic odl_l2 scenario.
Dashboard
=========

The dashboard is used to provide a consistent view of the results
collected in CI.
The results shown on the dashboard are post-processed from the
database, which only contains raw results.

In Brahmaputra, we created a basic dashboard.
Since Colorado, it was decided to adopt the ELK framework: Mongo DB
results are extracted to feed an Elasticsearch database (`[7]`_).

A script was developed to build the Elasticsearch data set. This script
can be found in `[16]`_.
For the next versions, it was decided to integrate the Bitergia
dashboard. Bitergia already provides a dashboard for code and
infrastructure. A new Test tab will be added; the data set will be
built by consuming the TestAPI.
=======
How TOs
=======

The installation and configuration of the Functest docker image is
described in `[1]`_.
The procedure to start tests is described in `[2]`_.
How can I contribute to Functest?
=================================

If you are already a contributor to any OPNFV project, you can
contribute to Functest. If you are totally new to OPNFV, you must first
create your Linux Foundation account, then contact us in order to
declare you in the repository database.
We distinguish 2 levels of contributors:

* the standard contributor can push patches and vote +1/0/-1 on any Functest patch
* the committer can vote -2/-1/0/+1/+2 and merge patches

Functest committers are promoted by the Functest contributors.
Where can I find some help to start?
====================================

This guide is made for you. You can also have a look at the project
wiki page `[10]`_. There are references on documentation, video
tutorials, tips...

You can also directly contact us by mail with the [Functest] prefix in
the title at opnfv-tech-discuss@lists.opnfv.org or on the IRC channel
#opnfv-functest.
What kind of testing do you do in Functest?
===========================================

Functest focuses on functional testing. The results must be PASS or
FAIL. We do not deal with performance and/or qualification tests.
We consider OPNFV as a black box and execute our tests from the
jumphost according to the Pharos reference technical architecture.
Upstream test suites are integrated (Rally/Tempest/ODL/ONOS, ...).
If needed, Functest may temporarily bootstrap testing activities if
they are identified but not yet covered by an existing testing project
(e.g. security_scan before the creation of the security repository).
How are test constraints defined?
=================================

Test constraints are defined according to 2 parameters:

* the scenario (DEPLOY_SCENARIO environment variable)
* the installer (INSTALLER_TYPE environment variable)

A scenario is a formal description of the system under test.
The rules to define a scenario are described in `[4]`_.

These 2 constraints are considered to determine whether the test is
runnable or not (e.g. no need to run the onos suite on an odl scenario).
In the test declaration for CI, the test owner shall indicate these 2
constraints. The file testcases.yaml `[5]`_ must be patched in git to
include new test cases. A more elaborate system based on templates is
planned for future releases.
For each dependency, it is possible to define a regex::

    name: promise
    criteria: 'success_rate == 100%'
    description: >-
        Test suite from Promise project.
    dependencies:
        installer: '(fuel)|(joid)'
        scenario: ''
In the example above, it means that the promise test will be runnable
only with the joid or fuel installers, on any scenario.
The vims dependencies mean any installer, and exclude onos and
odl-with-bgpvpn scenarios::

    name: vims
    criteria: 'status == "PASS"'
    description: >-
        This test case deploys an OpenSource vIMS solution from Clearwater
        using the Cloudify orchestrator. It also runs some signaling traffic.
    dependencies:
        installer: ''
        scenario: '(ocl)|(nosdn)|^(os-odl)((?!bgpvpn).)*$'
How to write and check constraint regex?
========================================

Regexes are standard regexes; you can have a look at `[11]`_.

You can also easily test your regex via an online regex checker such as
`[12]`_. Put your scenario in the TEST STRING window (e.g.
os-odl_l3-ovs-ha), put your regex in the REGULAR EXPRESSION window, and
then you can test your rule.
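You can do the same check with plain Python, as in this small sketch
reusing the vims scenario regex from the previous section.

.. code-block:: python

    # A quick sanity check of a scenario constraint with the standard
    # re module.
    import re

    scenario_regex = '^(os-odl)((?!bgpvpn).)*$'  # excludes bgpvpn scenarios

    print(bool(re.search(scenario_regex, 'os-odl_l3-ovs-ha')))     # True
    print(bool(re.search(scenario_regex, 'os-odl_l2-bgpvpn-ha')))  # False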
How to know which test I can run?
=================================

You can use the API `[13]`_. The static declaration is in git `[5]`_.

If you are in a Functest docker container (assuming that the
environment has been prepared), just use the CLI.

You can get the list per test case or per tier::

  # functest testcase list
  ...
  tempest_full_parallel
  ...

  # functest tier list
  - smoke:
        ['vping_ssh', 'vping_userdata', 'tempest_smoke_serial', 'rally_sanity']
  - features:
        ['doctor', 'security_scan']
  - components:
        ['tempest_full_parallel', 'rally_full']
How to manually start Functest tests?
=====================================

Assuming that you are connected on the jumphost and that the system is
"Pharos compliant", i.e. the technical architecture is compatible with
the one defined in the Pharos project::

  # docker pull opnfv/functest:latest
  # envs="-e INSTALLER_TYPE=fuel -e INSTALLER_IP=10.20.0.2 -e DEPLOY_SCENARIO=os-odl_l2-nofeature-ha -e CI_DEBUG=true"
  # sudo docker run --privileged=true -id ${envs} opnfv/functest:latest /bin/bash

Then you must connect to the docker container and source the
credentials::

  # docker ps (copy the id)
  # docker exec -ti <container_id> bash
You must first check if the environment is ready::

  # functest env status
  Functest environment ready to run tests.

If not ready, prepare the env by launching::

  # functest env prepare
  Functest environment ready to run tests.
Once the Functest env is ready, you can use the CLI to start tests.

You can run tests per test case or per tier::

  # functest testcase run <case name>
  # functest tier run <tier name>

e.g.::

  # functest testcase run tempest_smoke_serial
  # functest tier run features

If you want to run all the tests you can type::

  # functest testcase run all

If you want to run all the tiers (which at the end is the same as
running all the test cases) you can type::

  # functest tier run all
How to declare my tests in Functest?
====================================

If you want to add new internal test cases, you can submit a patch
under the testcases directory of the Functest repository.

For feature test integration, the code can be kept in your own
repository. The Functest files to be modified are:

* functest/docker/Dockerfile: get your code into the Functest container
* functest/ci/testcases.yaml: reference your test and its associated constraints
Dockerfile
----------

This file lists the repositories (internal or external) to be cloned in
the Functest container. You can also add external packages::

  RUN git clone https://gerrit.opnfv.org/gerrit/<your project> ${REPOS_DIR}/<your project>
testcases.yaml
--------------

All the test cases that must be run from CI / CLI must be declared in
functest/ci/testcases.yaml. This file is used to get the constraints
related to the test::

    name: <my_super_test_case>
    criteria: <not used yet in Colorado, could be 'PASS' or 'rate > 90%'>
    description: >-
        <the description of your super test suite>
    dependencies:
        installer: regex related to the installer e.g. 'fuel', '(apex)|(joid)'
        scenario: regex related to the scenario e.g. 'ovs*no-ha'
You must declare your test case in one of the categories (tiers).

If you are integrating test suites from a feature project, the default
category is **features**.
How to select my list of tests for CI?
======================================

Functest can be run automatically from CI; a jenkins job is usually
called after an OPNFV fresh installation.
By default we try to run all the possible tests (see `[14]`_, called
from the Functest jenkins job)::

  cmd="python ${FUNCTEST_REPO_DIR}/ci/run_tests.py -t all ${flags}"
Each case can be configured as a daily and/or weekly task.
Weekly tasks are used for long duration or experimental tests.
Daily tasks correspond to the minimum set of test suites to validate a
scenario.

When executing run_tests.py, a check based on the jenkins build tag is
performed to detect whether it is a daily and/or a weekly test.
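The detection can be sketched as follows; the exact environment
variable and tag format used by run_tests.py are assumptions here.

.. code-block:: python

    # A rough sketch, assuming the jenkins build tag is exposed through
    # the BUILD_TAG environment variable and may contain 'weekly'.
    import os

    build_tag = os.environ.get('BUILD_TAG', '')
    run_weekly_suites = 'weekly' in build_tag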
In your CI you can customize the list of tests you want to run by case
or by tier; just change the line::

  cmd="python ${FUNCTEST_REPO_DIR}/ci/run_tests.py -t <whatever you want> ${flags}"

e.g.::

  cmd="python ${FUNCTEST_REPO_DIR}/ci/run_tests.py -t healthcheck,smoke ${flags}"
This command will run all the test cases of the first 2 tiers, i.e.
healthcheck, connection_check, api_check, vping_ssh, vping_userdata,
snaps_smoke, tempest_smoke_serial and rally_sanity.
How to push your results into the Test Database?
================================================

The test database is used to collect test results. By default it is
enabled only for CI tests from Production CI pods.

The architecture and associated API are described in the previous
chapter.
If you want to push your results from CI, you just have to call the API
at the end of your script.
You can also reuse a python function defined in functest_utils.py::

    import json

    import requests

    def push_results_to_db(db_url, case_name, logger, pod_name, version,
                           payload):
        """
        POST results to the Result target DB
        """
        url = db_url + "/results"
        installer = get_installer_type(logger)
        params = {"project_name": "functest", "case_name": case_name,
                  "pod_name": pod_name, "installer": installer,
                  "version": version, "details": payload}
        headers = {'Content-Type': 'application/json'}
        try:
            r = requests.post(url, data=json.dumps(params), headers=headers)
            logger.debug(r)
            return True
        except Exception as e:
            print("Error [push_results_to_db('%s', '%s', '%s', '%s', '%s')]: %s"
                  % (db_url, case_name, pod_name, version, payload, e))
            return False
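A hypothetical call, assuming the TestAPI base URL and illustrative pod
and version values, would then be::

    push_results_to_db("http://testresults.opnfv.org/test/api/v1",
                       "my_case", logger, "my_pod", "danube",
                       {"status": "PASS"})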
Where can I find the documentation on the test API?
===================================================

http://artifacts.opnfv.org/releng/docs/testapi.html
How to exclude Tempest case from default Tempest smoke suite?
=============================================================

The Tempest default smoke suite deals with 165 test cases.
Since Colorado the success criteria is 100%, i.e. if one test fails,
the success criteria is not matched for the scenario.

It is necessary to exclude some test cases that are expected to fail
due to known upstream bugs (see the release notes).

A file has been created for such operations:
https://git.opnfv.org/cgit/functest/tree/functest/opnfv_tests/openstack/tempest/custom_tests/blacklist.txt.
It can be described as follows::

    -
        scenarios:
            - os-odl_l2-bgpvpn-ha
            - os-odl_l2-bgpvpn-noha
        tests:
            - tempest.api.compute.servers.test_create_server.ServersTestJSON.test_list_servers
            - tempest.api.compute.servers.test_create_server.ServersTestJSON.test_verify_server_details
            - tempest.api.compute.servers.test_create_server.ServersTestManualDisk.test_list_servers
            - tempest.api.compute.servers.test_create_server.ServersTestManualDisk.test_verify_server_details
            - tempest.api.compute.servers.test_server_actions.ServerActionsTestJSON.test_reboot_server_hard
            - tempest.scenario.test_network_basic_ops.TestNetworkBasicOps.test_network_basic_ops
            - tempest.scenario.test_server_basic_ops.TestServerBasicOps.test_server_basic_ops
            - tempest.scenario.test_volume_boot_pattern.TestVolumeBootPattern.test_volume_boot_pattern
            - tempest.scenario.test_volume_boot_pattern.TestVolumeBootPatternV2.test_volume_boot_pattern
Please note that each exclusion must be justified: the goal is not to
exclude test cases just because they do not pass. Several scenarios
reached the 100% criteria. The patch submitted to exclude cases is
therefore expected to indicate the reasons for the exclusion.
How do I know the Functest status of a scenario?
================================================

A Functest automatic reporting page is generated daily.
This page is dynamically created through a cron job and is based on the
results stored in the Test DB.
You can access this reporting page at: http://testresults.opnfv.org/reporting

See https://wiki.opnfv.org/pages/viewpage.action?pageId=6828617 for
details.
I have tests, to which category should I declare them?
======================================================

CATEGORIES/TIERS description:

+----------------+-------------------------------------------------------------+
| Healthcheck    | Simple OpenStack healthcheck test case that validates the   |
|                | basic operations in OpenStack                               |
+----------------+-------------------------------------------------------------+
| Smoke          | Set of smoke test cases/suites to validate the most common  |
|                | OpenStack and SDN Controller operations                     |
+----------------+-------------------------------------------------------------+
| Features       | Test cases that validate a specific feature on top of OPNFV.|
|                | Those come from feature projects and need a bit of support  |
|                | for integration                                             |
+----------------+-------------------------------------------------------------+
| Components     | Advanced OpenStack tests: full Tempest, full Rally          |
+----------------+-------------------------------------------------------------+
| Performance    | Out of Functest scope                                       |
+----------------+-------------------------------------------------------------+
| VNF            | Test cases related to deploying an open source VNF,         |
|                | including an orchestrator                                   |
+----------------+-------------------------------------------------------------+
The main ambiguity could be between features and VNF.
In fact sometimes you have to spawn VMs to demonstrate the capabilities
of the feature you introduced.
We recommend declaring your test in the features category.

The VNF category is really dedicated to tests including:

* the creation of resources
* the deployment of an orchestrator/VNFM
* the deployment of the VNF
* the test of the VNF

The goal is not to study a particular feature on the infrastructure but
to have a whole end-to-end test of a VNF automatically deployed in CI.
Moreover, VNFs are run in weekly jobs (once a week); feature tests are
run in daily jobs and are used to compute the scenario score.
How does Functest handle logs?
==============================

Functest deals with internal and external testcases. Each testcase can
generate logs.

Since Colorado it is possible to push the logs to the artifact server.
A new script (https://git.opnfv.org/releng/tree/utils/push-test-logs.sh)
has been created for this purpose.

When called, and assuming that the POD is authorized to push the logs
to artifacts, the script will push all the results or logs locally
stored under /home/opnfv/functest/results/.

If the POD is not connected to CI, logs are not pushed. But in both
cases, logs are stored in /home/opnfv/functest/results in the
container. Projects are encouraged to push their logs here.
Since Colorado it is also easy for a feature project to integrate this
by adding the log file as the output_file parameter when calling
execute_command from the functest_utils library::

  ret_val = functest_utils.execute_command(cmd, output_file=log_file)
How does Functest deal with VNF onboarding?
===========================================

VNF onboarding has been introduced in Brahmaputra through the
automation of a clearwater vIMS deployed thanks to the Cloudify
orchestrator.

This automation has been described at the OpenStack summit in
Barcelona: https://youtu.be/Jr4nG74glmY

The goal of Functest consists in testing OPNFV from a functional
perspective: the NFVI and/or the features developed in OPNFV. Feature
test suites are provided by the feature projects. Functest just
simplifies the integration of the suite into the CI and gives a
consolidated view of the tests per scenario.

Functest does not develop VNFs.

Functest does not test any MANO stack.
OPNFV projects dealing with VNF onboarding
------------------------------------------

Testing VNFs is not the main goal, however it gives interesting and
realistic feedback on OPNFV as a Telco cloud.

Onboarding VNFs also allows testing a full stack: orchestrator + VNF.

Functest is VNF and MANO stack agnostic.

An internship has been initiated to reference the Open Source VNFs: the
Intern Project Open Source VNF catalog.
New projects dealing with orchestrators or VNFs are candidates for
Danube.

The 2 projects dealing with orchestration are:

* orchestra (Openbaton)
* opera (Open-O)

The Models project addresses various goals for promoting availability
and convergence of information and/or data models related to NFV
service/VNF management, as being defined in standards (SDOs) and as
developed in open source projects.
Functest VNF onboarding
-----------------------

In order to simplify VNF onboarding, a new abstraction class has been
developed in Functest.

This class is based on vnf_base and can be described as follows::

  +------------+          +--------------+
  | test_base  |--------->|   vnf_base   |
  +------------+          +--------------+
                              |_ prepare
                              |_ deploy_orchestrator (optional)
                              |_ deploy_vnf
                              |_ test_vnf

Several methods are declared in vnf_base:

* prepare
* deploy_orchestrator
* deploy_vnf
* test_vnf

deploy_vnf and test_vnf are mandatory.

prepare will create a user and a project.
How to declare your orchestrator/VNF?
-------------------------------------

1) You must declare your testcase in the file
   <Functest repo>/functest/ci/testcases.yaml.

2) You can specify some configuration parameters in
   config_functest.yaml.

3) You must implement your test.

Create your own VnfOnboarding file.

You must create your entry point through a python class as referenced
in the configuration file, e.g. aaa leads to the creation of the file
<Functest repo>/functest/opnfv_tests/vnf/aaa/aaa.py.

The class shall inherit vnf_base.
You must implement the methods deploy_vnf() and test_vnf() and may
implement deploy_orchestrator().

You can call the code from your repo (but you need to add the repo in
Functest if it is not yet referenced).

So far we consider the test as PASS if deploy_vnf and test_vnf are PASS
(see the example in aaa).
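A minimal sketch of such a file is shown below; the module path and the
base class name are assumptions, only the mandatory methods come from
the text above.

.. code-block:: python

    # A minimal sketch of <Functest repo>/functest/opnfv_tests/vnf/aaa/aaa.py;
    # the module path and base class name are assumed, not authoritative.
    from functest.core import vnf_base  # assumed module path


    class AaaVnf(vnf_base.VnfOnBoardingBase):  # assumed base class name
        """Deploy and test a hypothetical 'aaa' VNF."""

        def deploy_vnf(self):
            # Deploy the VNF, typically through the orchestrator deployed
            # in deploy_orchestrator(); return a PASS/FAIL-able status.
            return 0

        def test_vnf(self):
            # Run functional traffic/tests against the deployed VNF.
            return 0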
==========
References
==========

_`[1]`: http://artifacts.opnfv.org/functest/docs/configguide/index.html Functest configuration guide

_`[2]`: http://artifacts.opnfv.org/functest/docs/userguide/index.html Functest user guide

_`[3]`: https://wiki.opnfv.org/opnfv_test_dashboard Brahmaputra dashboard

_`[4]`: https://wiki.opnfv.org/display/INF/CI+Scenario+Naming

_`[5]`: https://git.opnfv.org/cgit/functest/tree/ci/testcases.yaml

_`[6]`: https://git.opnfv.org/cgit/releng/tree/utils/test/result_collection_api

_`[7]`: https://git.opnfv.org/cgit/releng/tree/utils/test/scripts

_`[8]`: https://git.opnfv.org/cgit/releng/tree/utils/test/reporting/functest

_`[9]`: http://testresults.opnfv.org/reporting/

_`[10]`: https://wiki.opnfv.org/opnfv_functional_testing

_`[11]`: https://docs.python.org/2/howto/regex.html

_`[12]`: https://regex101.com/

_`[13]`: http://testresults.opnfv.org/test/api/v1/projects/functest/cases

_`[14]`: https://git.opnfv.org/cgit/releng/tree/jjb/functest/functest-daily.sh

_`[15]`: https://git.opnfv.org/cgit/releng/tree/utils/test/result_collection_api/README.rst

_`[16]`: https://git.opnfv.org/cgit/releng/tree/utils/test/scripts/mongo_to_elasticsearch.py

_`[17]`: http://artifacts.opnfv.org/releng/docs/testapi.html
OPNFV main site: http://www.opnfv.org

OPNFV functional test page: https://wiki.opnfv.org/opnfv_functional_testing

IRC support chan: #opnfv-functest

_`OpenRC`: http://docs.openstack.org/user-guide/common/cli_set_environment_variables_using_openstack_rc.html

_`Rally installation procedure`: https://rally.readthedocs.org/en/latest/tutorial/step_0_installation.html

_`config_functest.yaml`: https://git.opnfv.org/cgit/functest/tree/functest/ci/config_functest.yaml