.. This work is licensed under a Creative Commons Attribution 4.0 International
.. License.
.. http://creativecommons.org/licenses/by/4.0
.. (c) 2016 Huawei Technologies Co.,Ltd and others

This chapter describes the Yardstick framework software architecture. We will
introduce it from the Use-Case View, Logical View, Process View and Deployment
View, along with more technical details.

Overview
========

Yardstick is mainly written in Python, and test configurations are made
in YAML. Documentation is written in reStructuredText format, i.e. .rst
files. Yardstick is inspired by Rally. Yardstick is intended to run on a
computer with access and credentials to a cloud. The test case is described
in a configuration file given as an argument.

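For example, assuming the Yardstick CLI is installed and one of the shipped
sample test cases is used, a task could be started like this::

  yardstick task start samples/ping.yaml
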
How it works: the benchmark task configuration file is parsed and converted
into an internal model. The context part of the model is converted into a Heat
template and deployed into a stack. Each scenario is run using a runner, either
serially or in parallel. Each runner runs in its own subprocess, executing
commands in a VM using SSH. The output of each scenario is written as JSON
records to a file, to InfluxDB or to an HTTP server. We use InfluxDB as the
backend, and the test results are visualized with Grafana.

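As an illustration, a minimal sketch of a task configuration file is shown
below; the image, flavor, server and network names are example values, and the
exact options depend on the scenario type used (here Ping)::

  schema: "yardstick:task:0.1"

  scenarios:
  -
    type: Ping
    options:
      packetsize: 200
    host: athena.demo
    target: ares.demo
    runner:
      type: Duration
      duration: 60
      interval: 1

  context:
    name: demo
    image: cirros-0.3.3
    flavor: m1.tiny
    user: cirros
    servers:
      athena:
      ares:
    networks:
      test:
        cidr: '10.0.1.0/24'
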
Concepts
========

**Benchmark** - assess the relative performance of something.

**Benchmark configuration file** - describes a single test case in YAML
format.

**Context** - The set of Cloud resources used by a scenario, such as user
names, image names, affinity rules and network configurations. A context is
converted into a simplified Heat template, which is used to deploy onto the
OpenStack environment.

**Data** - Output produced by running a benchmark, written to a file in JSON
format.

**Runner** - Logic that determines how a test scenario is run and reported,
for example the number of test iterations, input value stepping and test
duration. Predefined runner types exist for reuse, see `Runner types`_.

**Scenario** - Type/class of measurement, for example Ping, Pktgen, Iperf,
LmBench, ...

**SLA** - Relates to what result boundary a test case must meet to pass. For
example a latency limit, or an amount or ratio of lost packets, and so on.
The action taken based on the :term:`SLA` can be configured, either to just
log the violation (monitor) or to stop further testing (assert). The
:term:`SLA` criteria are set in the benchmark configuration file and
evaluated by the runner.

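A minimal sketch of how an SLA section could look inside a scenario
definition, here limiting the round-trip time of a Ping scenario (the
threshold is an example value)::

  sla:
    max_rtt: 10       # pass while the measured RTT stays below 10 ms
    action: monitor   # only log violations; "assert" stops further testing
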
Runner types
============

Several predefined runner types exist to choose between when designing a test
scenario:

**Arithmetic:**
Every test run arithmetically steps the specified input value(s) in the
test scenario, adding a value to the previous input value. It is also possible
to combine several input values for the same test case in different
combinations.

Snippet of an Arithmetic runner configuration (the iterator name and values
are examples)::

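  runner:
    type: Arithmetic
    iterators:
    -
      name: stride   # example: the scenario option to step
      start: 64
      stop: 128
      step: 64
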
**Duration:**
The test runs for a specific period of time before completing.

Snippet of a Duration runner configuration::

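  runner:
    type: Duration
    duration: 30   # seconds (example value)
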
**Sequence:**
The test changes a specified input value of the scenario. The input values
for the sequence are specified in a list in the benchmark configuration file.

Snippet of a Sequence runner configuration (the list values are examples)::

  runner:
    type: Sequence
    scenario_option_name: packetsize
    sequence:
    - 100
    - 200
    - 250

**Iteration:**
Tests are run a specified number of times before completing.

Snippet of an Iteration runner configuration::

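  runner:
    type: Iteration
    iterations: 2   # example value
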
Use-Case View
=============

The Yardstick Use-Case View shows two kinds of users. One is the Tester, who
runs tests in the cloud; the other is the User, who is more concerned with the
test results.

Testers run a single test case or a test case suite to verify infrastructure
compliance or to benchmark their own infrastructure performance. Test results
are stored by the dispatcher module; three kinds of store methods (file,
influxdb and http) can be configured. Detailed information about scenarios
and runners can be queried by testers with the CLI.

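For example, the available scenarios and runners can be listed and inspected
from the CLI (a sketch of the intended usage; the scenario name is an
example)::

  yardstick scenario list
  yardstick scenario show Ping
  yardstick runner list
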
Users can check the test results in four ways.

If the dispatcher module is configured as file (the default), there are two
ways to check the test results. One is to read the results from yardstick.out
(default path: /tmp/yardstick.out); the other is to get a plot of the test
results, which is shown when users execute the command "yardstick-plot".

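A sketch of the intended usage, assuming the default output file and an
example output directory::

  yardstick-plot -i /tmp/yardstick.out -o /tmp/plots/
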
If the dispatcher module is configured as influxdb, users can check the test
results on Grafana, which is most commonly used for visualizing time series
data.

If the dispatcher module is configured as http, users can check the test
results on the OPNFV testing dashboard, which uses MongoDB as its backend.

.. image:: images/Use_case.png
   :alt: Yardstick Use-Case View

Process View (Test execution flow)
==================================

Yardstick Directory structure
=============================

**yardstick/** - Yardstick main directory.

*ci/* - Used for continuous integration of Yardstick at different PODs and
with support for different installers.

*docs/* - All documentation is stored here, such as configuration guides,
user guides and Yardstick descriptions.

*etc/* - Used for test cases requiring specific POD configurations.

*samples/* - Test case samples are stored here; samples for most scenarios
and features can be found in this directory.

*tests/* - Here both Yardstick internal tests (*functional/* and *unit/*) as
well as the test cases run to verify the NFVI (*opnfv/*) are stored. The
configurations of what to run daily and weekly at the different PODs are
also located here.

*tools/* - Contains tools to build the image for VMs deployed by Heat, for
example how to build the yardstick-trusty-server image with the different
tools that are needed from within the image.

*vTC/* - Contains the files for running the virtual Traffic Classifier tests.

*yardstick/* - Contains the internals of Yardstick: Runners, Scenarios,
Contexts, CLI parsing, keys, plotting tools, dispatcher and so on.