-.. This work is licensed under a Creative Commons Attribution 4.0 International
-.. License.
-.. http://creativecommons.org/licenses/by/4.0
-.. (c) OPNFV, Ericsson AB and others.
+=======
+License
+=======
+The OPNFV Colorado release notes for Yardstick
+are licensed under a Creative Commons Attribution 4.0 International License.
+You should have received a copy of the license along with this document.
+If not, see <http://creativecommons.org/licenses/by/4.0/>.
-============================================
+The *Yardstick framework*, the *Yardstick test cases* and the *ApexLake*
+experimental framework are open-source software, licensed under the terms of
+the Apache License, Version 2.0.
+
+=========================================
OPNFV Colorado Release Note for Yardstick
-============================================
+=========================================
.. toctree::
:maxdepth: 2
This document describes the release notes of the Yardstick project.
-License
-=======
-
-OPNFV Colorado release note for Yardstick Docs
-are licensed under a Creative Commons Attribution 4.0 International License.
-You should have received a copy of the license along with this.
-If not, see <http://creativecommons.org/licenses/by/4.0/>.
-
-The *Yardstick framework*, the *Yardstick test cases* and the *ApexLake*
-experimental framework are opensource software, licensed under the terms of the
-Apache License, Version 2.0.
-
-
Version History
===============
independent.
-Summary
-=======
+OPNFV Colorado Release
+======================
This Colorado release provides *Yardstick* as a framework for NFVI testing
and OPNFV feature testing, automated in the OPNFV CI pipeline, including:
Deliverables
============
+Documents
+---------
+
+ - User Guide: http://artifacts.opnfv.org/yardstick/colorado/docs/userguide/index.html
+
+ - Test Results: http://artifacts.opnfv.org/yardstick/colorado/docs/results/overview.html
+
+
Software Deliverables
---------------------
This is the third tracked release of Yardstick. It is based on the following
upstream versions:
+- ONOS Goldeneye
+
- OpenStack Mitaka
- OpenDaylight Beryllium
verified scenarios and limitations
-Reason for Version
-==================
-* TODO *
-
Feature additions
-----------------
-* TODO *
+ - Yardstick plugin
+
+
+Scenario Matrix
+===============
+
+For Colorado 1.0, Yardstick was tested on the following scenarios:
+
++-------------------------+---------+---------+---------+---------+
+| Scenario | Apex | Compass | Fuel | Joid |
++=========================+=========+=========+=========+=========+
+| os-nosdn-nofeature-noha | | | | X |
++-------------------------+---------+---------+---------+---------+
+| os-nosdn-nofeature-ha | X | | X | X |
++-------------------------+---------+---------+---------+---------+
+| os-odl_l2-nofeature-ha | X | X | X | X |
++-------------------------+---------+---------+---------+---------+
+| os-odl_l2-nofeature-noha| | X | | |
++-------------------------+---------+---------+---------+---------+
+| os-odl_l3-nofeature-ha  | X       | X       | X       |         |
++-------------------------+---------+---------+---------+---------+
+| os-onos-sfc-ha | X | | X | X |
++-------------------------+---------+---------+---------+---------+
+| os-onos-nofeature-ha | X | | X | X |
++-------------------------+---------+---------+---------+---------+
+| os-onos-nofeature-noha | | X | | |
++-------------------------+---------+---------+---------+---------+
+| os-odl_l2-sfc-ha | | | X | |
++-------------------------+---------+---------+---------+---------+
+| os-odl_l2-sfc-noha | X | X | | |
++-------------------------+---------+---------+---------+---------+
+| os-odl_l2-bgpvpn-ha | X | | X | |
++-------------------------+---------+---------+---------+---------+
+| os-odl_l2-bgpvpn-noha | | X | | |
++-------------------------+---------+---------+---------+---------+
+| os-nosdn-kvm-ha | | | X | |
++-------------------------+---------+---------+---------+---------+
+| os-nosdn-kvm-noha | | X | | |
++-------------------------+---------+---------+---------+---------+
+| os-nosdn-ovs-ha | | | | |
++-------------------------+---------+---------+---------+---------+
+| os-nosdn-ovs-noha | X | X | | |
++-------------------------+---------+---------+---------+---------+
+| os-ocl-nofeature-ha | | | | |
++-------------------------+---------+---------+---------+---------+
+| os-nosdn-lxd-ha | | | | X |
++-------------------------+---------+---------+---------+---------+
+| os-nosdn-lxd-noha | | | | X |
++-------------------------+---------+---------+---------+---------+
+| os-odl_l2-fdio-noha | X | | | |
++-------------------------+---------+---------+---------+---------+
+
+
+Test results
+============
+
+Test results are available in:
+
+ - Jenkins logs on CI: https://build.opnfv.org/ci/view/yardstick/
+
+The reporting pages can be found at:
+
+ * apex: http://testresults.opnfv.org/reporting/yardstick/release/colorado/index-status-apex.html
+ * compass: http://testresults.opnfv.org/reporting/yardstick/release/colorado/index-status-compass.html
+ * fuel: http://testresults.opnfv.org/reporting/yardstick/release/colorado/index-status-fuel.html
+ * joid: http://testresults.opnfv.org/reporting/yardstick/release/colorado/index-status-joid.html
+
+You can get additional details from the test logs on http://artifacts.opnfv.org/.
+Since no search engine is available on the OPNFV artifact web site, you must
+first retrieve the identifier of the POD on which the tests were executed (see
+the *pod* field in any of the results), then click on that POD and look for
+the date of the test run you are interested in.
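+As a sketch of the manual lookup described above: assuming a result record
+shaped like the JSON produced for Yardstick CI runs (the field names
+``pod_name`` and ``start_date`` and the sample record here are illustrative
+assumptions, not a documented schema), the POD identifier and test date you
+need can be extracted as follows:

```python
# Sketch only: the record layout below is an assumed example for
# illustration, not the authoritative OPNFV test-results schema.

def locate_test_run(result):
    """Return the POD identifier and test date from one result record.

    These two values are what you need in order to browse the logs
    manually on http://artifacts.opnfv.org/.
    """
    pod = result["pod_name"]      # the 'pod' field mentioned above (assumed name)
    date = result["start_date"]   # when the test was executed (assumed name)
    return pod, date

# Hypothetical example record mimicking a Yardstick CI result:
sample = {
    "project_name": "yardstick",
    "case_name": "TC002",
    "pod_name": "ericsson-pod2",
    "start_date": "2016-09-22",
}

pod, date = locate_test_run(sample)
print(pod, date)  # prints: ericsson-pod2 2016-09-22
```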
-Corrected Faults
-----------------
-* TODO *
Known Issues/Faults
-------------------
- - IPv6 support
- Boot up VM failed in joid-os-nosdn-lxd-ha and joid-os-nosdn-lxd-noha scenarios
- Yardstick CI job timeout in fuel-os-onos-nofeature-ha scenario
- - SSH timeout in apex-os-onos-sfc-ha, apex-os-onos-nofeature-ha and apex-os-odl_l3-nofeature-ha scenarios
+ - SSH timeout in apex-os-onos-sfc-ha, apex-os-onos-nofeature-ha scenarios
+ - Floating IP not supported in apex-os-odl_l3-nofeature-ha scenario
- Scp /home/stack/overcloudrc failed in apex-os-nosdn-ovs-noha and apex-os-odl_l2-sfc-noha scenarios
.. note:: Faults not related to the *Yardstick* framework are addressed in the
   corresponding scenario release notes.
+Corrected Faults
+----------------
+* TODO *
+
+
Colorado known restrictions/issues
==================================
+-----------+-----------+----------------------------------------------+
| Installer | Scenario | Issue |
+===========+===========+==============================================+
-| any | *-bgpvpn | floating ips not supported. Some Test cases |
+| any       | *-bgpvpn  | Floating IPs not supported. Some test cases  |
|           |           | related to floating IPs are excluded.        |
+-----------+-----------+----------------------------------------------+
-* TODO *
-
-
-Test results
-============
-
-Test results are available in:
-
- - jenkins logs on CI: https://build.opnfv.org/ci/view/yardstick/
+| any | odl_l3-* | Some test cases related to using floating IP |
+| | | addresses fail because of a known ODL bug. |
+| | | https://jira.opnfv.org/browse/APEX-112 |
++-----------+-----------+----------------------------------------------+
+| apex | *-fdio | Due to late integration, fdio scenarios' |
+| | | test suite file is not provided. |
++-----------+-----------+----------------------------------------------+
+| joid | *-lxd | In the LXD scenarios, nova-lxd does not |
+|           |           | support qcow2 images.                        |
+| | | https://jira.opnfv.org/browse/YARDSTICK-325 |
++-----------+-----------+----------------------------------------------+
Open JIRA tickets
+------------------+-----------------------------------------------+
| JIRA | Description |
+==================+===============================================+
-+------------------+-----------------------------------------------+
-+------------------+-----------------------------------------------+
-+------------------+-----------------------------------------------+
-+------------------+-----------------------------------------------+
+| `YARDSTICK-325`_ | Add image format support for LXD scenario     |
+| | |
+------------------+-----------------------------------------------+
- Yardstick IRC channel: #opnfv-yardstick
+.. _`YARDSTICK-325` : https://jira.opnfv.org/browse/YARDSTICK-325
+
+++ /dev/null
-.. This work is licensed under a Creative Commons Attribution 4.0 International
-.. License.
-.. http://creativecommons.org/licenses/by/4.0
-
-
-============================================
-Test Results for apex-os-odl_l2-nofeature-ha
-============================================
-
-.. toctree::
- :maxdepth: 2
-
-
-Details
-=======
-
-.. _Dashboard: http://130.211.154.108/grafana/dashboard/db/yardstick-main
-.. _POD1: https://wiki.opnfv.org/pharos_rls_b_labs
-
-Overview of test results
-------------------------
-
-See Dashboard_ for viewing test result metrics for each respective test case.
-
-All of the test case results below are based on scenario test runs on the
-LF POD1, between February 19 and February 24.
-
-TC002
------
-
-The round-trip-time (RTT) between 2 VMs on different blades is measured using
-ping.
-
-The results for the observed period show minimum 0.37ms, maximum 0.49ms,
-average 0.45ms.
-SLA set to 10 ms, only used as a reference; no value has yet been defined by
-OPNFV.
-
-TC005
------
-
-The IO read bandwidth for the observed period show average between 124KB/s and
-129 KB/s, with a minimum 372KB/s and maximum 448KB/s.
-
-SLA set to 400KB/s, only used as a reference; no value has yet been defined by
-OPNFV.
-
-TC010
------
-
-The measurements for memory latency for various sizes and strides are shown in
-Dashboard_. For 48MB, the minimum is 22.75 and maximum 30.77 ns.
-
-SLA set to 30 ns, only used as a reference; no value has yet been defined by
-OPNFV.
-
-TC011
------
-
-Packet delay variation between 2 VMs on different blades is measured using
-Iperf3.
-
-The mimimum packet delay variation measured is 2.5us and the maximum 8.6us.
-
-TC012
------
-
-See Dashboard_ for results.
-
-SLA set to 15 GB/s, only used as a reference, no value has yet been defined by
-OPNFV.
-
-TC014
------
-
-The Unixbench processor single and parallel speed scores show scores between
-3625 and 3660.
-
-No SLA set.
-
-TC037
------
-
-See Dashboard_ for results.
-
-Detailed test results
----------------------
-
-The scenario was run on LF POD1_ with:
-Apex
-ODL Beryllium
-
-
-Rationale for decisions
------------------------
-
-Pass
-
-Tests were successfully executed and metrics collected.
-No SLA was verified. To be decided on in next release of OPNFV.
-
-Conclusions and recommendations
--------------------------------
-
-Execute tests over a longer period of time, with time reference to versions of
-components, for allowing better understanding of the behavior of the system.
+++ /dev/null
-.. This work is licensed under a Creative Commons Attribution 4.0 International
-.. License.
-.. http://creativecommons.org/licenses/by/4.0
-
-
-======================================
-Test Results for apex-os-odl_l2-sfc-ha
-======================================
-
-.. toctree::
- :maxdepth: 2
-
-
-Details
-=======
-
-.. after this doc is filled, remove all comments and include the scenario in
-.. results.rst by removing the comment on the file name.
-
-
-Overview of test results
-------------------------
-
-.. general on metrics collected, number of iterations
-
-Detailed test results
----------------------
-
-.. info on lab, installer, scenario
-
-Rationale for decisions
------------------------
-.. result analysis, pass/fail
-
-Conclusions and recommendations
--------------------------------
-
-.. did the expected behavior occured?
+++ /dev/null
-.. This work is licensed under a Creative Commons Attribution 4.0 International
-.. License.
-.. http://creativecommons.org/licenses/by/4.0
-
-
-============================================
-Test Results for apex-os-odl_l3-nofeature-ha
-============================================
-
-.. toctree::
- :maxdepth: 2
-
-
-Details
-=======
-
-.. after this doc is filled, remove all comments and include the scenario in
-.. results.rst by removing the comment on the file name.
-
-
-Overview of test results
-------------------------
-
-.. general on metrics collected, number of iterations
-
-Detailed test results
----------------------
-
-.. info on lab, installer, scenario
-
-Rationale for decisions
------------------------
-.. result analysis, pass/fail
-
-Conclusions and recommendations
--------------------------------
-
-.. did the expected behavior occured?
+++ /dev/null
-.. This work is licensed under a Creative Commons Attribution 4.0 International
-.. License.
-.. http://creativecommons.org/licenses/by/4.0
-
-
-==========================================
-Test Results for apex-os-onos-nofeature-ha
-==========================================
-
-.. toctree::
- :maxdepth: 2
-
-
-Details
-=======
-
-.. after this doc is filled, remove all comments and include the scenario in
-.. results.rst by removing the comment on the file name.
-
-
-Overview of test results
-------------------------
-
-.. general on metrics collected, number of iterations
-
-Detailed test results
----------------------
-
-.. info on lab, installer, scenario
-
-Rationale for decisions
------------------------
-.. result analysis, pass/fail
-
-Conclusions and recommendations
--------------------------------
-
-.. did the expected behavior occured?
+++ /dev/null
-.. This work is licensed under a Creative Commons Attribution 4.0 International
-.. License.
-.. http://creativecommons.org/licenses/by/4.0
-
-
-==============================================
-Test Results for compass-os-nosdn-nofeature-ha
-==============================================
-
-.. toctree::
- :maxdepth: 2
-
-
-Details
-=======
-
-.. _Grafana: http://130.211.154.108/grafana/dashboard/db/yardstick-main
-.. _SC_POD: https://wiki.opnfv.org/pharos?&#community_test_labs
-
-Overview of test results
-------------------------
-
-See Grafana_ for viewing test result metrics for each respective test case. It
-is possible to chose which specific scenarios to look at, and then to zoom in
-on the details of each run test scenario as well.
-
-All of the test case results below are based on 5 consecutive scenario test
-runs, each run on the Huawei SC_POD_ between February 13 and 18 in 2016. The
-best would be to have more runs to draw better conclusions from, but these are
-the only runs available at the time of OPNFV R2 release
-
-TC002
------
-The round-trip-time (RTT) between 2 VMs on different blades is measured using
-ping. The measurements are on average varying between 1.95 and 2.23 ms
-with a first 2 - 3.27 ms RTT spike in the beginning of each run (This could be
-because of normal ARP handling).SLA set to 10 ms. The SLA value is used as a
-reference, it has not been defined by OPNFV.
-
-TC005
------
-The IO read bandwidth look similar between different test runs, with an
-average at approx. 145-162 MB/s. Within each run the results vary much,
-minimum 2MB/s and maximum 712MB/s on the totality.
-SLA set to 400KB/s. The SLA value is used as a reference, it has not been
-defined by OPNFV.
-
-TC010
------
-The measurements for memory latency are consistent among test runs and results
-in approx. 1.2 ns. The variations between runs are similar, between
-1.215 and 1.278 ns. SLA set to 30 ns. The SLA value is used as
-a reference, it has not been defined by OPNFV.
-
-TC011
------
-For this scenario no results are available to report on. Probable reason is
-an integer/floating point issue regarding how InfluxDB is populated with
-result data from the test runs.
-
-TC012
------
-The average measurements for memory bandwidth are consistent among most of the
-different test runs at 12.98 - 16.73 GB/s. The last test run averages at
-16.67 GB/s. Within each run the results vary, with minimal BW of 16.59
-GB/s and maximum of 16.71 GB/s of the totality.
-SLA set to 15 GB/s. The SLA value is used as a reference, it has not been
-defined by OPNFV.
-
-TC014
------
-The Unixbench processor single and parallel speed scores show similar results
-at approx. 3000. The runs vary between scores 2499 and 3105.
-No SLA set.
-
-TC027
------
-The round-trip-time (RTT) between VM1 with ipv6 router on different blades is
-measured using ping6. The measurements are consistent at approx. 4 ms.
-SLA set to 30 ms.The SLA value is used as a reference, it has not been
-defined by OPNFV.
-
-TC037
------
-The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
-on different blades are measured when increasing the amount of UDP flows sent
-between the VMs using pktgen as packet generator tool.
-
-Round trip times and packet throughput between VMs are typically affected by
-the amount of flows set up and result in higher RTT and less PPS
-throughput.
-
-When running with less than 10000 flows the results are flat and consistent.
-RTT is then approx. 30 ms and the number of PPS remains flat at approx.
-230000 PPS. Beyond approx. 10000 flows and up to 1000000 (one million) there
-is an even drop in RTT and PPS performance, eventually ending up at approx.
-105-113 ms and 100000 PPS respectively.
-
-TC040
------
-test purpose is to verify the function of Yang-to-Tosca in Parse, and this test
-case is a weekly task, so it was triggered by manually, the result whether the
-output is same with expected outcome is success
-No SLA set.
-
-Detailed test results
----------------------
-
-The scenario was run on Huawei SC_POD_ with:
-Compass 1.0
-OpenStack Liberty
-OVS 2.4.0
-
-No SDN controller installed
-
-Rationale for decisions
------------------------
-Pass
-
-Tests were successfully executed and metrics collects (apart from TC011_).
-No SLA was verified. To be decided on in next release of OPNFV.
-
-Conclusions and recommendations
--------------------------------
-
-The pktgen test configuration has a relatively large base effect on RTT in
-TC037 compared to TC002, where there is no background load at all (30 ms
-compared to 1 ms or less, which is more than a 3000 percentage different
-in RTT results). The larger amounts of flows in TC037 generate worse
-RTT results, in the magnitude of several hundreds of milliseconds. It would
-be interesting to also make and compare all these measurements to completely
-(optimized) bare metal machines running native Linux with all other relevant
-tools available, e.g. lmbench, pktgen etc.
+++ /dev/null
-.. This work is licensed under a Creative Commons Attribution 4.0 International
-.. License.
-.. http://creativecommons.org/licenses/by/4.0
-
-
-===============================================
-Test Results for compass-os-odl_l2-nofeature-ha
-===============================================
-
-.. toctree::
- :maxdepth: 2
-
-
-Details
-=======
-
-.. _Dashboard: http://130.211.154.108/grafana/dashboard/db/yardstick-main
-.. _Sclara: https://wiki.opnfv.org/pharos_rls_b_labs
-
-
-Overview of test results
-------------------------
-
-See Dashboard_ for viewing test result metrics for each respective test case.
-
-All of the test case results below are based on scenario test runs on the
-Huawei Sclara_.
-
-TC002
------
-
-See Dashboard_ for results.
-SLA set to 10 ms, only used as a reference; no value has yet been defined by
-OPNFV.
-
-TC005
------
-
-See Dashboard_ for results.
-SLA set to 400KB/s, only used as a reference; no value has yet been defined by
-OPNFV.
-
-TC010
------
-
-See Dashboard_ for results.
-SLA set to 30ns, only used as a reference; no value has yet been defined by
-OPNFV.
-
-TC011
------
-
-See Dashboard_ for results.
-
-
-TC012
------
-
-See Dashboard_ for results.
-SLA set to 15 GB/s, only used as a reference; no value has yet been defined by
-OPNFV.
-
-
-TC014
------
-
-See Dashboard_ for results.
-No SLA set.
-
-
-TC037
------
-
-See Dashboard_ for results.
-
-
-Detailed test results
----------------------
-
-The scenario was run on Huawei Sclara_ POD with:
-Compass
-ODL Beryllium
-
-Rationale for decisions
------------------------
-
-Pass
-
-Tests were successfully executed and metrics collected.
-No SLA was verified. To be decided on in next release of OPNFV.
-
-
-Conclusions and recommendations
--------------------------------
-
-Execute tests over a longer period of time, with time reference to versions of
-components, for allowing better understanding of the behavior of the system.
+++ /dev/null
-.. This work is licensed under a Creative Commons Attribution 4.0 International
-.. License.
-.. http://creativecommons.org/licenses/by/4.0
-
-
-=============================================
-Test Results for compass-os-onos-nofeature-ha
-=============================================
-
-.. toctree::
- :maxdepth: 2
-
-
-Details
-=======
-
-.. _Dashboard: http://130.211.154.108/grafana/dashboard/db/yardstick-main
-.. _Sclara: https://wiki.opnfv.org/pharos_rls_b_labs
-
-
-verview of test results
-------------------------
-
-See Dashboard_ for viewing test result metrics for each respective test case.
-
-All of the test case results below are based on scenario test runs on the
-Huawei Sclara_.
-
-TC002
------
-
-See Dashboard_ for results.
-SLA set to 10 ms, only used as a reference; no value has yet been defined by
-OPNFV.
-
-TC005
------
-
-See Dashboard_ for results.
-SLA set to 400KB/s, only used as a reference; no value has yet been defined by
-OPNFV.
-
-TC010
------
-
-See Dashboard_ for results.
-SLA set to 30ns, only used as a reference; no value has yet been defined by
-OPNFV.
-
-TC011
------
-
-See Dashboard_ for results.
-
-
-TC012
------
-
-See Dashboard_ for results.
-SLA set to 15 GB/s, only used as a reference; no value has yet been defined by
-OPNFV.
-
-
-TC014
------
-
-See Dashboard_ for results.
-No SLA set.
-
-
-TC037
------
-
-See Dashboard_ for results.
-
-
-Detailed test results
----------------------
-
-The scenario was run on Huawei Sclara_ POD with:
-Compass
-ONOS
-
-Rationale for decisions
------------------------
-
-Pass
-
-Tests were successfully executed and metrics collected.
-No SLA was verified. To be decided on in next release of OPNFV.
-
-
-Conclusions and recommendations
--------------------------------
-
-Execute tests over a longer period of time, with time reference to versions of
-components, for allowing better understanding of the behavior of the system.
+++ /dev/null
-.. This work is licensed under a Creative Commons Attribution 4.0 International
-.. License.
-.. http://creativecommons.org/licenses/by/4.0
-
-
-===========================================
-Test Results for fuel-os-nosdn-nofeature-ha
-===========================================
-
-.. toctree::
- :maxdepth: 2
-
-
-Details
-=======
-
-.. _Grafana: http://130.211.154.108/grafana/dashboard/db/yardstick-main
-.. _POD2: https://wiki.opnfv.org/pharos?&#community_test_labs
-
-Overview of test results
-------------------------
-
-See Grafana_ for viewing test result metrics for each respective test case. It
-is possible to chose which specific scenarios to look at, and then to zoom in
-on the details of each run test scenario as well.
-
-All of the test case results below are based on 5 consecutive scenario test
-runs, each run on the Ericsson POD2_ between February 13 and 18 in 2016. The
-best would be to have more runs to draw better conclusions from, but these are
-the only runs available at the time of OPNFV R2 release.
-
-TC002
------
-The round-trip-time (RTT) between 2 VMs on different blades is measured using
-ping. The measurements are on average varying between 0.5 and 1.1 ms
-with a first 2 - 2.5 ms RTT spike in the beginning of each run (This could be
-because of normal ARP handling). The 2 last runs are very similar in their
-results. But, to be able to draw any further conclusions more runs should be
-made. There is one measurement taken on February 16 that does not have the
-first RTT spike, and less variations to the RTT. The reason for this is
-unknown. There is a discussion on another test measurement made Feb. 16 in
-TC037_.
-SLA set to 10 ms. The SLA value is used as a reference, it has not
-been defined by OPNFV.
-
-TC005
------
-The IO read bandwidth look similar between different test runs, with an
-average at approx. 160-170 MB/s. Within each run the results vary much,
-minimum 2 MB/s and maximum 630 MB/s on the totality. Most runs have a
-minimum of 3 MB/s (one run at 2 MB/s). The maximum BW varies much more in
-absolute numbers, between 566 and 630 MB/s.
-SLA set to 400 MB/s. The SLA value is used as a reference, it has not been
-defined by OPNFV.
-
-TC010
------
-The measurements for memory latency are consistent among test runs and results
-in approx. 1.2 ns. The variations between runs are similar, between
-1.215 and 1.219 ns. One exception is February 16, where the varation is
-greater, between 1.22 and 1.28 ns. SLA set to 30 ns. The SLA value is used as
-a reference, it has not been defined by OPNFV.
-
-TC011
------
-For this scenario no results are available to report on. Probable reason is
-an integer/floating point issue regarding how InfluxDB is populated with
-result data from the test runs.
-
-TC012
------
-The average measurements for memory bandwidth are consistent among most of the
-different test runs at 17.2 - 17.3 GB/s. The very first test run averages at
-17.7 GB/s. Within each run the results vary, with a minimal BW of 15.4
-GB/s and maximum of 18.2 GB/s of the totality.
-SLA set to 15 GB/s. The SLA value is used as a reference, it has not been
-defined by OPNFV.
-
-TC014
------
-The Unixbench processor single and parallel speed scores show similar results
-at approx. 3200. The runs vary between scores 3160 and 3240.
-No SLA set.
-
-TC037
------
-The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
-on different blades are measured when increasing the amount of UDP flows sent
-between the VMs using pktgen as packet generator tool.
-
-Round trip times and packet throughput between VMs are typically affected by
-the amount of flows set up and result in higher RTT and less PPS
-throughput.
-
-When running with less than 10000 flows the results are flat and consistent.
-RTT is then approx. 30 ms and the number of PPS remains flat at approx.
-250000 PPS. Beyond approx. 10000 flows and up to 1000000 (one million) there
-is an even drop in RTT and PPS performance, eventually ending up at approx.
-150-250 ms and 40000 PPS respectively.
-
-There is one measurement made February 16 that has slightly worse results
-compared to the other 4 measurements. The reason for this is unknown. For
-instance anyone being logged onto the POD can be of relevance for such a
-disturbance.
-
-Detailed test results
----------------------
-The scenario was run on Ericsson POD2_ with:
-Fuel 8.0
-OpenStack Liberty
-OVS 2.3.1
-
-No SDN controller installed
-
-Rationale for decisions
------------------------
-Pass
-
-Tests were successfully executed and metrics collects (apart from TC011_).
-No SLA was verified. To be decided on in next release of OPNFV.
-
-Conclusions and recommendations
--------------------------------
-The pktgen test configuration has a relatively large base effect on RTT in
-TC037 compared to TC002, where there is no background load at all (30 ms
-compared to 1 ms or less, which is more than a 3000 percentage different
-in RTT results). The larger amounts of flows in TC037 generate worse
-RTT results, in the magnitude of several hundreds of milliseconds. It would
-be interesting to also make and compare all these measurements to completely
-(optimized) bare metal machines running native Linux with all other relevant
-tools available, e.g. lmbench, pktgen etc.
+++ /dev/null
-.. This work is licensed under a Creative Commons Attribution 4.0 International
-.. License.
-.. http://creativecommons.org/licenses/by/4.0
-
-
-=====================================
-Test Results for fuel-os-nosdn-ovs-ha
-=====================================
-
-.. toctree::
- :maxdepth: 2
-
-
-Details
-=======
-
-.. after this doc is filled, remove all comments and include the scenario in
-.. results.rst by removing the comment on the file name.
-
-
-Overview of test results
-------------------------
-
-.. general on metrics collected, number of iterations
-
-Detailed test results
----------------------
-
-.. info on lab, installer, scenario
-
-Rationale for decisions
------------------------
-.. result analysis, pass/fail
-
-Conclusions and recommendations
--------------------------------
-
-.. did the expected behavior occured?
+++ /dev/null
-.. This work is licensed under a Creative Commons Attribution 4.0 International
-.. License.
-.. http://creativecommons.org/licenses/by/4.0
-
-
-==========================================
-Test Results for fuel-os-onos-nofeature-ha
-==========================================
-
-.. toctree::
- :maxdepth: 2
-
-
-Details
-=======
-
-.. _Grafana: http://130.211.154.108/grafana/dashboard/db/yardstick-main
-.. _POD2: https://wiki.opnfv.org/pharos?&#community_test_labs
-
-Overview of test results
-------------------------
-
-See Grafana_ for viewing test result metrics for each respective test case. It
-is possible to chose which specific scenarios to look at, and then to zoom in
-on the details of each run test scenario as well.
-
-All of the test case results below are based on 7 scenario test
-runs, each run on the Ericsson POD2_ between February 13 and 21 in 2016. Test
-case TC011_ is not reported on due to an InfluxDB issue.
-The best would be to have more runs to draw better conclusions from, but these
-are the only runs available at the time of OPNFV R2 release.
-
-TC002
------
-The round-trip-time (RTT) between 2 VMs on different blades is measured using
-ping. The majority (5) of the test run measurements result in an average
-between 0.4 and 0.5 ms. The other 2 dates stick out with an RTT average of 0.9
-to 1 ms.
-The majority of the runs start with a 1 - 1.5 ms RTT spike (This could be
-because of normal ARP handling). One test run has a greater RTT spike of 4 ms,
-which is the same one with the 1 ms RTT average. The other runs have no similar
-spike at all. To be able to draw conclusions more runs should be made.
-SLA set to 10 ms. The SLA value is used as a reference, it has not
-been defined by OPNFV.
-
-TC005
------
-The IO read bandwidth looks similar between different dates, with an
-average between approx. 170 and 185 MB/s. Within each test run the results
-vary, with a minimum of 2 MB/s and maximum of 690MB/s on the totality. Most
-runs have a minimum BW of 3 MB/s (one run at 2 MB/s). The maximum BW varies
-more in absolute numbers between the dates, between 560 and 690 MB/s.
-SLA set to 400 MB/s. The SLA value is used as a reference, it has not been
-defined by OPNFV.
-
-TC010
------
-The measurements for memory latency are similar between test dates and result
-in a little less average than 1.22 ns. The variations within each test run are
-similar, between 1.213 and 1.226 ns. One exception is the first date, where the
-average is 1.223 and varies between 1.215 and 1.275 ns.
-SLA set to 30 ns. The SLA value is used as a reference, it has not been defined
-by OPNFV.
-
-TC011
------
-For this scenario no results are available to report on. Reason is an
-integer/floating point issue regarding how InfluxDB is populated with
-result data from the test runs. The issue was fixed but not in time to produce
-input for this report.
-
-TC012
------
-Between test dates the average measurements for memory bandwidth vary between
-17.1 and 18.1 GB/s. Within each test run the results vary more, with a minimal
-BW of 15.5 GB/s and maximum of 18.2 GB/s on the totality.
-SLA set to 15 GB/s. The SLA value is used as a reference, it has not been
-defined by OPNFV.
-
-TC014
------
-The Unixbench processor test run results vary between scores 3100 and 3260,
-one result each date. The average score on the total is 3170.
-No SLA set.
-
-TC037
------
-The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
-on different blades are measured when increasing the amount of UDP flows sent
-between the VMs using pktgen as packet generator tool.
-
-Round trip times and packet throughput between VMs can typically be affected by
-the amount of flows set up and result in higher RTT and less PPS throughput.
-
-There seems to be mainly two result types. One type a high and flatter
-PPS throughput not very much affected by the number of flows. Here also the
-average RTT is stable around 13 ms throughout all the test runs.
-
-The second type starts with a slightly lower PPS in the beginning than type
-one, and decreases even further when passing approx. 10000 flows. Here also the
-average RTT tends to start at approx. 15 ms ending with an average of 17 to 18
-ms with the maximum amount of flows running.
-
-Result type one can with the maximum amount of flows have a greater PPS than
-the second type with the minimum amount of flows.
-
-For result type one the average PPS throughput in the different runs varies
-between 399000 and 447000 PPS. The total amount of packets in each test run
-is between approx. 7000000 and 10200000 packets.
-The second result type has a PPS average of between 602000 and 621000 PPS and a
-total packet amount between 10900000 and 13500000 packets.
-
-There are lost packets reported in many of the test runs. There is no observed
-correlation between the amount of flows and the amount of lost packets.
-The lost amount of packets normally range between 100 and 1000 per test run,
-but there are spikes in the range of 10000 lost packets as well, and even
-more in a rare cases. Some case is in the range of one million lost packets.
-
-Detailed test results
----------------------
-The scenario was run on Ericsson POD2_ with:
-Fuel 8.0
-OpenStack Liberty
-OpenVirtualSwitch 2.3.1
-OpenNetworkOperatingSystem Drake
-
-Rationale for decisions
------------------------
-Pass
-
-Tests were successfully executed and metrics collected.
-No SLA was verified. To be decided on in next release of OPNFV.
-
-Conclusions and recommendations
--------------------------------
-The pktgen test configuration has a relatively large base effect on RTT in
-TC037 compared to TC002, where there is no background load at all. Approx.
-15 ms compared to approx. 0.5 ms, which is more than a 3000 percentage
-difference in RTT results.
-Especially RTT and throughput come out with better results than for instance
-the *fuel-os-nosdn-nofeature-ha* scenario does. The reason for this should
-probably be further analyzed and understood. Also of interest could be
-to make further analyzes to find patterns and reasons for lost traffic.
-Also of interest could be to see why there are variations in some test cases,
-especially visible in TC037.
-
+++ /dev/null
-.. This work is licensed under a Creative Commons Attribution 4.0 International
-.. License.
-.. http://creativecommons.org/licenses/by/4.0
-
-
-===========================================
-Test Results for joid-os-nosdn-nofeature-ha
-===========================================
-
-.. toctree::
- :maxdepth: 2
-
-
-Details
-=======
-
-.. after this doc is filled, remove all comments and include the scenario in
-.. results.rst by removing the comment on the file name.
-
-
-Overview of test results
-------------------------
-
-.. general on metrics collected, number of iterations
-
-Detailed test results
----------------------
-
-.. info on lab, installer, scenario
-
-Rationale for decisions
------------------------
-.. result analysis, pass/fail
-
-Conclusions and recommendations
--------------------------------
-
-.. did the expected behavior occured?
+++ /dev/null
-.. This work is licensed under a Creative Commons Attribution 4.0 International
-.. License.
-.. http://creativecommons.org/licenses/by/4.0
-
-
-=============================================
-Test Results for joid-os-nosdn-nofeature-noha
-=============================================
-
-.. toctree::
- :maxdepth: 2
-
-
-Details
-=======
-
-.. after this doc is filled, remove all comments and include the scenario in
-.. results.rst by removing the comment on the file name.
-
-
-Overview of test results
-------------------------
-
-.. general on metrics collected, number of iterations
-
-Detailed test results
----------------------
-
-.. info on lab, installer, scenario
-
-Rationale for decisions
------------------------
-.. result analysis, pass/fail
-
-Conclusions and recommendations
--------------------------------
-
-.. did the expected behavior occured?
-
+++ /dev/null
-.. This work is licensed under a Creative Commons Attribution 4.0 International
-.. License.
-.. http://creativecommons.org/licenses/by/4.0
-
-
-============================================
-Test Results for joid-os-odl_l2-nofeature-ha
-============================================
-
-.. toctree::
- :maxdepth: 2
-
-
-Details
-=======
-
-.. _Dashboard: http://130.211.154.108/grafana/dashboard/db/yardstick-main
-.. _POD2: https://wiki.opnfv.org/pharos_rls_b_labs
-
-
-Overview of test results
-------------------------
-
-See Dashboard_ for viewing test result metrics for each respective test case.
-
-All of the test case results below are based on scenario test runs on the
-Orange POD2, between February 23 and February 24.
-
-TC002
------
-
-See Dashboard_ for results.
-SLA set to 10 ms, only used as a reference; no value has yet been defined by
-OPNFV.
-
-TC005
------
-
-See Dashboard_ for results.
-SLA set to 400KB/s, only used as a reference; no value has yet been defined by
-OPNFV.
-
-TC010
------
-
-Not executed, missing in the test suite used in the POD during the observed
-period.
-
-TC011
------
-
-Not executed, missing in the test suite used in the POD during the observed
-period.
-
-
-TC012
------
-
-Not executed, missing in the test suite used in the POD during the observed
-period.
-
-
-TC014
------
-
-Not executed, missing in the test suite used in the POD during the observed
-period.
-
-
-TC037
------
-
-See Dashboard_ for results.
-
-
-Detailed test results
----------------------
-
-The scenario was run on Orange POD2_ with:
-Joid
-ODL Beryllium
-
-Rationale for decisions
------------------------
-
-Pass
-
-Most tests were successfully executed and metrics collected, the non-execution
-of above-mentioned tests was due to test cases missing in the Jenkins Job used
-in the POD, during the observed period.
-No SLA was verified. To be decided on in next release of OPNFV.
-
-Conclusions and recommendations
--------------------------------
-
-Execute tests over a longer period of time, with time reference to versions of
-components, for allowing better understanding of the behavior of the system.
+++ /dev/null
-.. This work is licensed under a Creative Commons Attribution 4.0 International
-.. License.
-.. http://creativecommons.org/licenses/by/4.0
-
-
-==========================================
-Test Results for joid-os-onos-nofeature-ha
-==========================================
-
-.. toctree::
- :maxdepth: 2
-
-
-Details
-=======
-
-.. after this doc is filled, remove all comments and include the scenario in
-.. results.rst by removing the comment on the file name.
-
-
-Overview of test results
-------------------------
-
-.. general on metrics collected, number of iterations
-
-Detailed test results
----------------------
-
-.. info on lab, installer, scenario
-
-Rationale for decisions
------------------------
-.. result analysis, pass/fail
-
-Conclusions and recommendations
--------------------------------
-
-.. did the expected behavior occured?
:maxdepth: 2
+apex
+====
+
+.. _Grafana: http://testresults.opnfv.org/grafana/dashboard/db/yardstick-main
+.. _POD1: https://wiki.opnfv.org/pharos?&#community_test_labs
+
+Overview of test results
+------------------------
+
+See Grafana_ for viewing test result metrics for each respective test case. It
+is possible to choose which specific scenarios to look at, and then to zoom in
+on the details of each test scenario run as well.
+
+All of the test case results below are based on 4 scenario test runs, each run
+on the LF POD1_ between September 14 and 17 in 2016.
+
+TC002
+-----
+The round-trip time (RTT) between 2 VMs on different blades is measured using
+ping. Most test run measurements average between 0.49 ms and 0.60 ms. Only one
+test run reached a maximum RTT spike of 0.93 ms, while the smallest network
+latency, 0.33 ms, was obtained on Sep. 14th.
+The SLA is set to 10 ms. The SLA value is used as a reference; it has not been
+defined by OPNFV.
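For reference, per-run RTT figures like the above come from the ping summary
line. A minimal extraction sketch, assuming the standard iputils summary
format; the sample line and its values are illustrative, not data from the
actual runs:

```shell
# Illustrative iputils-style ping summary line (values are made up,
# not taken from the actual test runs).
line='rtt min/avg/max/mdev = 0.330/0.540/0.930/0.120 ms'

# Split on '=', '/' and spaces; field 7 is then the average RTT in ms.
avg=$(printf '%s\n' "$line" | awk -F'[=/ ]+' '{print $7}')
echo "avg RTT: ${avg} ms"
```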
+
+TC005
+-----
+The IO read bandwidth refers to the storage throughput and is measured by fio.
+The greatest IO read bandwidth of the four runs is 416 MB/s. The IO read
+bandwidth of all four runs looks similar, with an average between 128 and
+131 MB/s. One of the runs has a minimum BW of 497 KB/s. The read bandwidth SLA
+is set to 400 MB/s; it is used as a reference and has not been defined by
+OPNFV.
+
+The storage IOPS results of the four runs are similar to each other. The IO
+reads per second of the four test runs average around 1k per second, while the
+minimum result is only 45 reads per second.
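A bandwidth figure in KB/s can be related to the MB/s values above as follows;
a small sketch converting a fio-style KB/s field to MB/s, where the sample
line is hypothetical, not real fio output from these runs:

```shell
# Hypothetical fio summary fragment (not output from the actual runs).
fio_line='read : io=4096MB, bw=131072KB/s, iops=1024, runt=32000msec'

# Extract the KB/s figure and convert to MB/s (1 MB = 1024 KB here).
bw_kb=$(printf '%s\n' "$fio_line" | sed -n 's/.*bw=\([0-9]*\)KB\/s.*/\1/p')
bw_mb=$((bw_kb / 1024))
echo "read BW: ${bw_mb} MB/s"
```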
+
+TC010
+-----
+The tool used to measure memory read latency is lmbench, a suite of micro
+benchmarks intended to measure basic operating system and hardware system
+metrics. The memory read latency of the four runs is between 1.0859 ns and
+1.0869 ns on average. The variations within each test run differ: some runs
+vary over a larger range while others change little. For example, the largest
+variation is on September 14th, where the memory read latency ranges from
+1.086 ns to 1.091 ns.
+The SLA is set to 30 ns. The SLA value is used as a reference; it has not been
+defined by OPNFV.
+
+TC011
+-----
+Packet delay variation between 2 VMs on different blades is measured using
+Iperf3. On the first two test runs the reported packet delay variation varies
+between 0.0037 and 0.0740 ms, with an average delay variation between
+0.0096 ms and 0.0321 ms. On the second date the delay variation varies between
+0.0063 and 0.0096 ms, with an average delay variation of 0.0124 - 0.0141 ms.
+
+TC012
+-----
+Lmbench is also used to measure the memory read and write bandwidth, using
+bw_mem to obtain the results. Among the four test runs, three of the memory
+bandwidth results follow almost the same trend within a narrow range, and the
+average result is 19.88 GB/s. The SLA is set to 15 GB/s. The SLA value is used
+as a reference; it has not been defined by OPNFV.
+
+TC014
+-----
+Unixbench is used to evaluate the IaaS processing speed with regard to the
+scores of single CPU running and parallel running. It can be seen from the
+dashboard that the processing test results vary from scores 3754k to 3831k,
+with only one result per date. No SLA set.
+
+TC037
+-----
+The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
+on different blades are measured when increasing the amount of UDP flows sent
+between the VMs using pktgen as packet generator tool.
+
+Round trip times and packet throughput between VMs can typically be affected by
+the amount of flows set up and result in higher RTT and less PPS throughput.
+
+The mean packet throughput of the four test runs is between 307.3 kpps and
+447.1 kpps, of which the result of the third run is the highest. The RTT
+results of all the test runs stay flat at approx. 15 ms. The PPS results are
+clearly not as consistent as the RTT results.
+
+The number of flows in the four test runs is 240k on average, and the PPS
+results fluctuate somewhat, with the largest packet throughput at 418.1 kpps
+and the minimum throughput at 326.5 kpps.
+
+There are no packet receive errors in the four runs, but there are still lost
+packets in all the test runs. The RTT values obtained by ping in the four runs
+have a similar average value of approx. 15 ms.
+
+CPU load is measured by mpstat, and the CPU load of the four test runs looks
+similar, with the minimum value and the peak of CPU load between 0 percent and
+nine percent respectively. The highest value is obtained on Sep. 1, with a CPU
+load of nine percent. On the whole, the CPU load is very low, since the
+average value is quite small.
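The mean throughput values above are averages over each run; a minimal sketch
of the underlying arithmetic, using illustrative packet counts and durations
rather than the actual run data:

```shell
# Illustrative totals (not the actual run data): packets sent by
# pktgen and test duration in seconds for one run.
packets=9000000
duration=20

# Average throughput in packets per second and in kpps.
pps=$((packets / duration))
kpps=$((pps / 1000))
echo "${pps} pps (${kpps} kpps)"
```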
+
+TC069
+-----
+With the block size changing from 1 kb to 512 kb, the memory write bandwidth
+within each test run tends to first become larger and then smaller, ranging
+from 28.2 GB/s to 29.5 GB/s and then down to 29.2 GB/s on average. Since the
+test id is one, only the INT memory write bandwidth is tested. On the whole,
+when the block size is 2 kb or 16 kb, the memory write bandwidth looks similar,
+with a minimal BW of 25.8 GB/s and a peak value of 28.3 GB/s. As the block
+size becomes larger, the memory write bandwidth tends to decrease.
+The SLA is set to 7 GB/s. The SLA value is used as a reference; it has not
+been defined by OPNFV.
+
+TC070
+-----
+The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
+on different blades are measured when increasing the amount of UDP flows sent
+between the VMs using pktgen as packet generator tool.
+
+Round trip times and packet throughput between VMs can typically be affected by
+the amount of flows set up and result in higher RTT and less PPS throughput.
+
+The network latency is measured by ping, and the results of the four test runs
+look similar to each other. Within these test runs, the maximum RTT can reach
+39 ms and the average RTT is usually approx. 15 ms. The latencies measured on
+Sep. 1 and Sep. 8 have a peak of 39 ms. On the whole, the average RTTs of the
+four runs stay flat and the network latency is relatively short.
+
+Memory utilization is measured by free, which can display the amount of free
+and used memory in the system. The largest amount of used memory is 267 MiB
+across the four runs, which use 257 MiB on average. The mean free memory of
+the four test runs follows a similar trend to the mean used memory, changing
+from 233 MiB to 241 MiB.
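The used-memory figures above come from the free tool; a minimal sketch of
extracting the used column from free-style output, where the sample numbers
are made up for illustration:

```shell
# Illustrative 'free -m' style output (numbers are made up, not the
# actual run data).
free_out='              total        used        free      shared
Mem:          16000         257       15743          12'

# Column 3 of the Mem: row is the used memory in MiB.
used=$(printf '%s\n' "$free_out" | awk '/^Mem:/ {print $3}')
echo "used: ${used} MiB"
```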
+
+Packet throughput and packet loss can be measured by pktgen, a tool for
+generating traffic loads for network experiments. The mean packet throughput
+of the four test runs varies considerably, ranging from 305.3 kpps to
+447.1 kpps. The average number of flows in these tests is 240000, and each run
+has a minimum of 2 flows and a maximum of 1.001 Mil flows. The corresponding
+average packet throughput is between 354.4 kpps and 381.8 kpps. Within each of
+the four test runs, the packet throughput does not grow as the number of flows
+becomes larger.
+
+TC071
+-----
+The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
+on different blades are measured when increasing the amount of UDP flows sent
+between the VMs using pktgen as packet generator tool.
+
+Round trip times and packet throughput between VMs can typically be affected by
+the amount of flows set up and result in higher RTT and less PPS throughput.
+
+The network latency is measured by ping, and the results of the four test runs
+look similar to each other. Within each test run, the maximum RTT is only 42
+ms and the average RTT is usually approx. 15 ms. On the whole, the average
+RTTs of the four runs stay stable and the network latency is relatively small.
+
+Cache utilization is measured by cachestat, which can display the size of the
+cache and buffers in the system. Cache utilization statistics are collected
+during the UDP flows sent between the VMs using pktgen as the packet generator
+tool. The largest cache size is 212 MiB, the same for all four runs, and the
+smallest cache size is 75 MiB. On the whole, the average cache sizes of the
+four runs look the same, between 197 MiB and 211 MiB. Meanwhile, the trend of
+the buffer size stays flat, with a minimum value of 7 MiB, a maximum value of
+8 MiB and an average value of about 7.9 MiB.
+
+Packet throughput can be measured by pktgen, a tool for generating traffic
+loads for network experiments. The mean packet throughput of the four test
+runs differs from 354.4 kpps to 381.8 kpps. The average number of flows in
+these tests is 240k, and each run has a minimum of 2 flows and a maximum of
+1.001 Mil flows. The corresponding packet throughput varies between 305.3 kpps
+and 447.1 kpps. Within each of the four test runs, the packet throughput does
+not grow as the number of flows becomes larger.
+
+TC072
+-----
+The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
+on different blades are measured when increasing the amount of UDP flows sent
+between the VMs using pktgen as packet generator tool.
+
+Round trip times and packet throughput between VMs can typically be affected by
+the amount of flows set up and result in higher RTT and less PPS throughput.
+
+The RTT results are similar throughout the different test dates and runs,
+between 0 ms and 42 ms with an average latency of less than 15 ms. The PPS
+results are not as consistent as the RTT results, as the mean packet
+throughput of the four runs differs from 354.4 kpps to 381.8 kpps.
+
+Network utilization is measured by sar, the system activity reporter, which
+can display the average statistics for the time since the system was started.
+Network utilization statistics are collected during the UDP flows sent between
+the VMs using pktgen as the packet generator tool. The largest total number of
+packets transmitted per second looks similar for three test runs, whose values
+vary widely from 10 pps to 501 kpps, while the results of the remaining test
+run stay stable with an average of 10 packets transmitted per second. The
+total number of packets received per second of the four runs looks similar,
+with a wide range of 2 pps to 815 kpps.
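The per-second packet counts above come from sar's per-device statistics; a
minimal sketch of reading the rxpck/s and txpck/s columns from a sar-style
average line, where the sample values are illustrative, not run data:

```shell
# Illustrative 'sar -n DEV' average line (values are made up):
# Average: IFACE rxpck/s txpck/s rxkB/s txkB/s ...
sar_line='Average:    eth0  815000.00  501000.00  120000.00  90000.00'

# Fields 3 and 4 are packets received/transmitted per second.
rx=$(printf '%s\n' "$sar_line" | awk '{print $3}')
tx=$(printf '%s\n' "$sar_line" | awk '{print $4}')
echo "rx=${rx} pps tx=${tx} pps"
```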
+
+In some test runs, when running with fewer than approx. 251000 flows the PPS
+throughput is normally flatter than when running with more flows, after which
+the PPS throughput decreases. For the other test runs there is however no
+significant change in the PPS throughput when the number of flows is
+increased. In some test runs the PPS with 251000 flows is also greater than
+in other test runs with only 2 flows.
+
+There are lost packets reported in most of the test runs. There is no observed
+correlation between the amount of flows and the amount of lost packets.
+The number of lost packets normally varies considerably between test runs.
+
+Detailed test results
+---------------------
+The scenario was run on LF POD1_ with:
+Apex
+OpenStack Mitaka
+OpenVirtualSwitch 2.5.90
+OpenDayLight Beryllium
+
+Rationale for decisions
+-----------------------
+Pass
+
+Conclusions and recommendations
+-------------------------------
+Tests were successfully executed and metrics collected.
+No SLA was verified. To be decided on in next release of OPNFV.
+
+
+
fuel
====
is possible to chose which specific scenarios to look at, and then to zoom in
on the details of each run test scenario as well.
-All of the test case results below are based on 4 scenario test
-runs, each run on the Ericsson POD2_ or LF POD2_ between August 25 and 29 in
-2016.
+All of the test case results below are based on 4 scenario test runs, each run
+on the Ericsson POD2_ or LF POD2_ between August 25 and 29 in 2016.
TC002
-----
measured by fio and the greatest IO read bandwidth of the four runs is 183.65
MB/s. The IO read bandwidth of the three runs looks similar, with an average
between 62.9 and 64.3 MB/s, except one on Sep. 1, for its maximum storage
-throughput is only 159.1 MB/s. One of the runs has a minimum BW of 685 KM/s and
+throughput is only 159.1 MB/s. One of the runs has a minimum BW of 685 KB/s and
other has a maximum BW of 183.6 MB/s. The SLA of read bandwidth sets to be
400 MB/s, which is used as a reference, and it has not been defined by OPNFV.
-------------------------------
Tests were successfully executed and metrics collected.
No SLA was verified. To be decided on in next release of OPNFV.
+
.. toctree::
:maxdepth: 1
- os-odl_l2-nofeature-ha.rst
os-nosdn-nofeature-ha.rst
- os-nosdn-kvm-ha.rst
- os-odl_l2-bgpvpn-ha.rst
os-nosdn-nofeature-noha.rst
+ os-odl_l2-nofeature-ha.rst
+ os-odl_l2-bgpvpn-ha.rst
+ os-odl_l2-sfc-ha.rst
+ os-nosdn-kvm-ha.rst
os-onos-nofeature-h.rst
os-onos-sfc-ha.rst
- os-odl_l2-sfc-ha.rst
Test results of executed tests are available in Dashboard_ and logs in Jenkins_.
::
yardstick task start samples/ping-template.yaml
- --task-args'{"packetsize":"200"}'
+ --task-args '{"packetsize":"200"}'
2.Refer to a file that specifies the argument values (JSON/YAML):