.. This work is licensed under a Creative Commons Attribution 4.0 International License.
.. SPDX-License-Identifier: CC-BY-4.0
======================
OPNFV Testing Overview
======================
Introduction
============
Testing is one of the key activities in OPNFV and includes unit, feature,
component and system level testing for development, automated deployment,
performance characterization and stress testing.

Test projects are dedicated to providing frameworks, tooling and test-cases categorized as
functional, performance or compliance testing. Test projects fulfill different roles such as
verifying VIM functionality, benchmarking components and platforms or analyzing
measured KPIs for OPNFV release scenarios.

Feature projects also provide their own test suites that either run independently or within a
test project.

This document gives an overview of the testing performed
by individual OPNFV projects and provides links to project specific documentation.

The OPNFV Testing Ecosystem
===========================

The OPNFV testing projects are represented in the following diagram:

.. figure:: ../../images/OPNFV_testing_working_group.png
:align: center
:alt: Overview of OPNFV Testing projects

+----------------+---------------------------------------------------------+
| Project | Description |
+================+=========================================================+
| Bottlenecks    | This project aims to find system bottlenecks by testing |
| | and verifying OPNFV infrastructure in a staging |
| | environment before committing it to a production |
| | environment. Instead of debugging a deployment in |
|                | production, it enables an automated staging             |
|                | analysis of the OPNFV infrastructure.                   |
+----------------+---------------------------------------------------------+
| CPerf | SDN Controller benchmarks and performance testing, |
|                | applicable to controllers in general. Collaboration of  |
|                | upstream controller testing experts, external test tool |
|                | developers and the standards community. The project     |
|                | primarily contributes to upstream/external tooling,     |
|                | then adds jobs to run those tools on OPNFV's            |
|                | infrastructure.                                         |
+----------------+---------------------------------------------------------+
| Dovetail | This project intends to define and provide a set of |
|                | OPNFV related validation criteria/tests that will       |
|                | provide input for the OPNFV Compliance Verification     |
|                | Program. The Dovetail project is executed with the      |
|                | guidance and oversight of the Compliance and            |
|                | Certification (C&C) committee and works to secure the   |
|                | goals of the C&C committee for each release. The        |
|                | project intends to incrementally define qualification   |
|                | criteria that establish the foundations of how one is   |
|                | able to measure the ability to utilize the OPNFV        |
|                | platform, how the platform itself should behave, and    |
|                | how applications may be deployed on the platform.       |
+----------------+---------------------------------------------------------+
| Functest | This project deals with the functional testing of the |
| | VIM and NFVI. It leverages several upstream test suites |
|                | and can be used by any feature project to launch        |
|                | feature test suites in CI/CD.                           |
| | The project is used for scenario validation. |
+----------------+---------------------------------------------------------+
| NFVbench       | NFVbench is a compact and self-contained data plane     |
|                | performance measurement tool for OpenStack based NFVI   |
|                | platforms. It is agnostic of the NFVI distribution,     |
|                | Neutron networking implementation and hardware.         |
|                | It runs on any Linux server with a DPDK compliant       |
|                | NIC connected to the NFVI platform data plane and       |
|                | bundles a highly efficient software traffic generator.  |
|                | It provides a fully automated measurement of the most   |
|                | common packet paths at any level of scale and load      |
|                | using RFC 2544. It is available as a Docker container   |
|                | with simple command line and REST interfaces.           |
|                | It is easy to use as it takes care of most of the       |
|                | guesswork generally associated with data plane          |
|                | benchmarking, and can run in any lab or in production   |
|                | environments.                                           |
+----------------+---------------------------------------------------------+
| QTIP           | QTIP, the project for "Platform Performance             |
|                | Benchmarking" in OPNFV, aims to provide users with a    |
|                | simple indicator for performance, supported by          |
|                | comprehensive testing data and a transparent            |
|                | calculation formula. It provides a platform for         |
|                | performance benchmarking which helps users to build     |
|                | indicators by themselves with ease.                     |
+----------------+---------------------------------------------------------+
| StorPerf       | The purpose of this project is to provide a tool to     |
| | measure block and object storage performance in an NFVI.|
| | When complemented with a characterization of typical VF |
| | storage performance requirements, it can provide |
| | pass/fail thresholds for test, staging, and production |
| | NFVI environments. |
+----------------+---------------------------------------------------------+
| VSPERF         | VSPERF is an OPNFV project that provides an automated   |
|                | test-framework and comprehensive test suite based on    |
|                | Industry Test Specifications for measuring NFVI         |
|                | data-plane performance. The data-path includes switching|
|                | technologies with physical and virtual network          |
|                | interfaces. The VSPERF architecture is switch and       |
|                | traffic generator agnostic and test cases can be easily |
|                | customized. Software versions and configurations        |
|                | including the vSwitch (OVS or VPP) as well as the       |
|                | network topology are controlled by VSPERF (independent  |
|                | of OpenStack). VSPERF is used as a development tool for |
|                | optimizing switching technologies, qualification of     |
|                | packet processing components and for pre-deployment     |
|                | evaluation of the NFV platform data-path.               |
+----------------+---------------------------------------------------------+
| Yardstick      | The goal of the project is to verify the infrastructure |
|                | compliance when running VNF applications. NFV use cases |
|                | implemented in the Yardstick framework are run against  |
|                | the deployed platform to verify it.                     |
+----------------+---------------------------------------------------------+

===============================
Testing Working Group Resources
===============================

Test Results Collection Framework
=================================

Any test project that runs in the global OPNFV lab infrastructure and is
integrated with OPNFV CI can push test results to the community Test Database
using a common Test API. This database can be used to track the evolution of
testing and analyse test runs to compare results across installers, scenarios
and between technically and geographically diverse hardware environments.

Results from the database are used to generate a dashboard with the current
test status for each testing project. Please note that you can also deploy the
Test Database and Test API locally in your own environment.

Overall Test Architecture
-------------------------

The management of test results can be summarized as follows::

   +-------------+    +-------------+    +-------------+
   | Test        |    | Test        |    | Test        |
   | project 1   |    | project 2   |    | project N   |
   +------+------+    +------+------+    +------+------+
          |                  |                  |
          +------------------+------------------+
                             |
                      +------+------+
                      |  Test API   |
                      +------+------+
                             |
                      +------+------+
                      |   Test DB   |
                      +------+------+
                             |
             +---------------+----------------+
             |                                |
   +---------+------------+      +------------+---------+
   |  Testing Dashboards  |      |  Test Landing page   |
   +----------------------+      +----------------------+

The Test Database
-----------------

A Mongo DB database was introduced for the Brahmaputra release.
The following collections are declared in this database:

 * pods: the list of pods used for production CI
* projects: the list of projects providing test cases
 * test cases: the test cases related to a given project
* results: the results of the test cases
* scenarios: the OPNFV scenarios tested in CI

This database can be used by any project through the Test API.
Please note that projects may also use additional databases. The Test
Database is mainly used to collect CI test results and generate scenario
trust indicators. The Test Database is also cloned for OPNFV Plugfests in
order to provide a private datastore only accessible to Plugfest participants.

Test API description
--------------------

The Test API is used to declare pods, projects, test cases and test results.
Pods correspond to a cluster of machines (3 controller and 2 compute nodes in
HA mode) used to run the tests; they are defined in the Pharos project.
The results pushed to the database are related to pods, projects and test
cases. Trying to push results generated from a non-referenced pod will return
an error message from the Test API.

The data model is very basic; 5 objects are available:

 * Pods
 * Projects
 * Test cases
 * Results
 * Scenarios
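
As an illustration, a result document can be assembled as below. This is a
minimal sketch, not the authoritative schema: the field names follow the
result records collected in CI (``case_name``, ``pod_name``, ``installer``,
``scenario``, ``criteria``, ``build_tag``, ...), but the exact schema and the
endpoint to POST to should be checked against the Test API documentation.

```python
# Sketch: assembling a Test API result document. The field names follow the
# result records collected in CI; the base URL below is an assumption and
# should be checked against the Test API documentation.
import json
from datetime import datetime, timezone

TEST_API = "http://testresults.opnfv.org/test/api/v1"  # assumed base URL


def build_result(case_name, project_name, pod_name, installer, version,
                 scenario, criteria, build_tag):
    """Build the JSON body of a test result to push to the Test API."""
    now = datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M:%S")
    return {
        "case_name": case_name,
        "project_name": project_name,
        "pod_name": pod_name,        # must reference a declared pod
        "installer": installer,      # e.g. apex, compass, fuel, joid
        "version": version,          # e.g. master, colorado
        "scenario": scenario,
        "criteria": criteria,        # PASS or FAILED
        "build_tag": build_tag,
        "start_date": now,
        "stop_date": now,
    }


result = build_result("vping_ssh", "functest", "intel-pod1", "fuel",
                      "master", "os-nosdn-nofeature-ha", "PASS",
                      "jenkins-functest-fuel-baremetal-daily-master-108")
print(json.dumps(result, indent=2))
```

The body would then be POSTed to the results endpoint with the appropriate
headers; see the authorization section below.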

For detailed information, please go to
http://artifacts.opnfv.org/releng/docs/testapi.html

The code of the Test API is hosted in the releng-testresults repository
`[TST2]`_. The static documentation of the Test API can be found at `[TST3]`_.
The Test API has been dockerized and may be installed locally in your lab.

The deployment of the Test API has been automated.
A Jenkins job manages:

 * the unit tests of the Test API
 * the creation of a new Docker file
 * the deployment of the new Test API
 * the archiving of the old Test API
 * the backup of the Mongo DB

Test API Authorization
----------------------

PUT/DELETE/POST operations of the Test API now require token-based
authorization. The token needs to be added to the request using the
'X-Auth-Token' header for access to the database, e.g.::

    headers['X-Auth-Token']

The value of the header, i.e. the token, can be accessed through the Jenkins
environment variable *TestApiToken*. The token value is added as a masked
password.

.. code-block:: python

    headers['X-Auth-Token'] = os.environ.get('TestApiToken')

The above example is in Python. Token-based authentication has been added so
that only CI pods running Jenkins jobs can access the database. Please note
that token authorization is currently implemented but not yet enabled.
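
Putting the pieces together, a CI client might build its request headers from
the environment as sketched below. Only the 'X-Auth-Token' header name and the
*TestApiToken* variable come from the description above; the helper function
itself is illustrative.

```python
# Sketch of a client-side helper. The 'X-Auth-Token' header name and the
# TestApiToken Jenkins variable are described above; this helper itself is
# illustrative, not part of the Test API.
import os


def auth_headers():
    """Build HTTP headers for write operations against the Test API."""
    headers = {"Content-Type": "application/json"}
    token = os.environ.get("TestApiToken")  # masked password set by Jenkins
    if token:
        headers["X-Auth-Token"] = token
    return headers
```

Outside a Jenkins job the variable is unset and the header is simply omitted,
which is consistent with read-only (GET) access not requiring a token.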

Test Project Reporting
======================

The reporting page for the test projects is
http://testresults.opnfv.org/reporting/

.. figure:: ../../images/reporting_page.png
   :align: center
   :alt: Testing group reporting page

This page provides reporting per OPNFV release and per testing project.

.. figure:: ../../images/reportingMaster.png
   :align: center
   :alt: Testing group Euphrates reporting page

An evolution of the reporting page is planned to unify test reporting by
creating a landing page that shows the scenario status at a glance (this
information was previously consolidated manually on a wiki page). The landing
page will be displayed per scenario and show:

* the status of the deployment
 * the score from each test suite; there is no overall score, as scoring
   is determined by each test project
* a trust indicator

Test Case Catalog
=================

Until the Colorado release, each testing project managed the list of its own
test cases. This made it very hard to get a global view of the available test
cases from the different test projects. A common view was possible through the
API, but it was not very user friendly.

Test cases per project may be listed by calling::

   http://testresults.opnfv.org/test/api/v1/projects/<project_name>/cases

with project_name one of: bottlenecks, functest, qtip, storperf, vsperf,
yardstick.
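
The case list can also be fetched programmatically. The sketch below only
assumes the URL pattern shown above; the layout of the JSON response is an
assumption and should be checked against the Test API documentation.

```python
# Sketch: listing the test cases of a project through the Test API.
# Only the URL pattern documented above is assumed; the JSON layout of
# the response should be verified against the Test API docs.
import json
from urllib.request import urlopen

BASE_URL = "http://testresults.opnfv.org/test/api/v1"


def cases_url(project_name):
    """Build the case-listing URL for a given test project."""
    return "{}/projects/{}/cases".format(BASE_URL, project_name)


def list_cases(project_name):
    """Fetch and decode the JSON case list (requires network access)."""
    with urlopen(cases_url(project_name)) as resp:
        return json.load(resp)


print(cases_url("functest"))
# -> http://testresults.opnfv.org/test/api/v1/projects/functest/cases
```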

A test case catalog has now been realized `[TST4]`_. Roll the mouse over a
project, click to get its list of test cases, and then click on a case to get
more details.

.. figure:: ../../images/TestcaseCatalog.png
   :align: center
   :alt: Testing group test case catalog

Test Dashboards
===============

The Test Dashboard provides a consistent view of the results collected in CI.
The results shown on the dashboard are post-processed from the Test Database,
which only contains raw results.
The dashboard can be used in addition to the reporting page (high level view)
to create specific graphs according to what the test owner wants to show.

In Brahmaputra, a basic dashboard was created in Functest.
In Colorado, Yardstick used Grafana (time based graphs) and ELK (complex
graphs).
Since Danube, the OPNFV testing community has adopted the ELK framework and
uses Bitergia for creating highly flexible dashboards `[TST5]`_.

.. figure:: ../../images/DashboardBitergia.png
   :align: center
   :alt: Testing group Bitergia dashboard


.. include:: ./energy-monitoring.rst


OPNFV Test Group Information
============================

For more information or to participate in the OPNFV test community please see
the following:

 * wiki: https://wiki.opnfv.org/testing
 * mailing list: test-wg@lists.opnfv.org
 * IRC channel: #opnfv-testperf
 * weekly meeting (https://wiki.opnfv.org/display/meetings/TestPerf):

   * Usual time: Every Thursday 15:00-16:00 UTC / 7:00-8:00 PST
   * APAC time: 2nd Wednesday of the month 8:00-9:00 UTC

=======================
Reference Documentation
=======================

+----------------+---------------------------------------------------------+
| Project | Documentation links |
+================+=========================================================+
| Bottlenecks    | https://wiki.opnfv.org/display/bottlenecks/Bottlenecks  |
+----------------+---------------------------------------------------------+
| CPerf          | https://wiki.opnfv.org/display/cperf                    |
+----------------+---------------------------------------------------------+
| Dovetail       | https://wiki.opnfv.org/display/dovetail                 |
+----------------+---------------------------------------------------------+
| Functest | https://wiki.opnfv.org/display/functest/ |
+----------------+---------------------------------------------------------+
| NFVbench       | https://wiki.opnfv.org/display/nfvbench/                |
+----------------+---------------------------------------------------------+
| QTIP           | https://wiki.opnfv.org/display/qtip                     |
+----------------+---------------------------------------------------------+
| StorPerf       | https://wiki.opnfv.org/display/storperf/Storperf        |
+----------------+---------------------------------------------------------+
| VSPERF         | https://wiki.opnfv.org/display/vsperf                   |
+----------------+---------------------------------------------------------+
| Yardstick | https://wiki.opnfv.org/display/yardstick/Yardstick |
+----------------+---------------------------------------------------------+


`[TST1]`_: OPNFV web site

`[TST2]`_: Test API code repository in releng-testresults

`[TST3]`_: Test API autogenerated documentation

`[TST4]`_: Test case catalog

`[TST5]`_: Testing group dashboard

.. _`[TST1]`: http://www.opnfv.org
.. _`[TST2]`: https://git.opnfv.org/releng-testresults
.. _`[TST3]`: http://artifacts.opnfv.org/releng/docs/testapi.html
.. _`[TST4]`: http://testresults.opnfv.org/testing/index.html#!/select/visual
.. _`[TST5]`: https://opnfv.biterg.io:443/goto/283dba93ca18e95964f852c63af1d1ba