Yardstick is a project dealing with performance testing. Yardstick produces its
own test cases but can also be considered a framework to support feature
project testing.

Yardstick developed a test API that can be used by any OPNFV project. Therefore
there are many ways to contribute to Yardstick. You can:
* Develop new test cases
* Develop Yardstick API / framework
* Develop Yardstick grafana dashboards and Yardstick reporting page
* Write Yardstick documentation
This developer guide describes how to interact with the Yardstick project.
The first section details the main working areas of the project. The second
part is a list of "How to" guides to help you join the Yardstick family,
whatever your field of interest is.
Where can I find some help to start?
------------------------------------
.. _`user guide`: http://artifacts.opnfv.org/yardstick/danube/1.0/docs/stesting_user_userguide/index.html
.. _`wiki page`: https://wiki.opnfv.org/display/yardstick/
This guide is made for you. You can have a look at the `user guide`_.
There are also references on documentation, video tutorials, and tips on the
project `wiki page`_. You can also contact us directly by email, with the
[Yardstick] prefix in the subject, at opnfv-tech-discuss@lists.opnfv.org, or
on the IRC channel #opnfv-yardstick.
Yardstick developer areas
=========================
Yardstick can be considered a framework. Yardstick is released as a Docker
image, including tools, scripts and a CLI to prepare the environment and run
tests. It simplifies the integration of external test suites in the CI
pipeline and provides commodity tools to collect and display results.
Since Danube, test categories, also known as tiers, have been created to group
similar tests, provide consistent sub-lists and, ultimately, optimize test
duration for CI (see the How To section).

The definition of the tiers has been agreed by the testing working group.
The installation and configuration of Yardstick are described in the `user guide`_.
How to work with test cases?
----------------------------
**Sample Test cases**

Yardstick provides many sample test cases, which are located in the ``samples``
directory of the repository.
Sample test cases are designed with the following goals:

1. Help users better understand Yardstick features (including new features and
   new test capabilities).

2. Help developers debug their new features and test cases before they are
   officially released.

3. Help other developers understand and verify a new patch before it is merged.

So developers should upload a sample test case as well when submitting a patch
that introduces a new Yardstick test case or feature.
**OPNFV Release Test cases**
OPNFV Release test cases are located in the ``tests/opnfv/test_cases``
directory of the repository. These test cases are run by OPNFV CI jobs, which
means they should be more mature than sample test cases. OPNFV scenario owners
can select related test cases and add them to the test suites that represent
their scenarios.
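A test suite is itself a small YAML file listing the test cases that make up a
scenario. As a sketch only (the suite name and the selected test case files
below are illustrative, not taken from a real suite)::

  ---
  schema: "yardstick:suite:0.1"

  name: "os-nosdn-nofeature-ha"
  test_cases_dir: "tests/opnfv/test_cases/"
  test_cases:
  -
    file_name: opnfv_yardstick_tc002.yaml
  -
    file_name: opnfv_yardstick_tc005.yaml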
**Test case Description File**
This section introduces the meaning of the test case description file.
We will use ``ping.yaml`` as an example to show how to understand a test case
description file. In this YAML file, you can easily see that it consists of
two sections: one is ``scenarios``, the other is ``context``::
  # Sample benchmark task config file
  # measure network latency using ping

  schema: "yardstick:task:0.1"

  {% set provider = provider or none %}
  {% set physical_network = physical_network or 'physnet1' %}
  {% set segmentation_id = segmentation_id or none %}

  scenarios:
  # ...

  context:
  # ...
    image: yardstick-image
    flavor: yardstick-flavor
  # ...
      policy: "availability"
  # ...
    {% if provider == "vlan" %}
    provider: {{provider}}
    physical_network: {{physical_network}}
    {% if segmentation_id %}
    segmentation_id: {{segmentation_id}}
    {% endif %}
    {% endif %}
The ``context`` section describes the pre-conditions of the test. As
``ping.yaml`` shows, you can configure the image, flavor, name, affinity and
network of the test VMs (servers); with this section, you get a prepared
environment for testing. Yardstick will automatically set up the stack
described in this section. In fact, Yardstick converts this section to a Heat
template and sets up the VMs with the heat-client (Yardstick can also convert
this section to a Kubernetes template to set up containers).
Two test VMs (athena and ares) are configured by the keyword ``servers``.
``flavor`` determines how many vCPUs and how much memory the test VMs get.
``yardstick-flavor`` is a basic flavor that is automatically created when you
run the command ``yardstick env prepare``; it is "1 vCPU, 1G RAM, 3G disk".
``image`` is the image name of the test VMs. If you use cirros-0.3.5, you need
to fill the username of this image into ``user``. The ``policy`` for placement
of the test VMs has two values (``affinity`` and ``availability``), where
``availability`` means anti-affinity. In the ``networks`` section, you can
configure which provider network and ``physical_network`` the test VMs use.
You may need to configure ``segmentation_id`` when your network type is VLAN.
Moreover, you can configure your own flavor, as below, and Yardstick will set
up the stack for you (the ``vcpus``, ``ram`` and ``disk`` values here are
illustrative)::

  flavor:
    name: yardstick-new-flavor
    vcpus: 1
    ram: 512
    disk: 3
Besides the default Heat stack, Yardstick also allows you to set up two other
types of context: "Node" and "Kubernetes".
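As a sketch only (the context name and the node file path below are
hypothetical and depend on your installation), a bare-metal "Node" context
could be declared like this::

  context:
    type: Node
    name: LF
    file: /etc/yardstick/nodes/pod.yaml

The referenced pod file describes the nodes (IP addresses, roles, credentials)
that Yardstick should use instead of booting VMs through Heat.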
The ``scenarios`` section describes the test steps; you can orchestrate
complex test sequences by combining scenarios.

Each scenario performs one test step. In one scenario, you can configure the
type of scenario (operation), the runner type and the SLA of the scenario.

For TC002 we only have one step: ping from the host VM to the target VM. This
step also involves some detailed operations (such as ssh to the VM, ping from
VM1 to VM2, getting the latency, verifying the SLA, reporting the result).

If you want to see the detailed implementation, check the corresponding
scenario.py file. For the Ping scenario, you can find it in the Yardstick repo
at ``yardstick/benchmark/scenarios/networking/ping.py``.

After you select the type of scenario (such as Ping), you select a runner
type. There are four types of runner; usually we use ``Iteration`` and
``Duration``, and the default is ``Iteration``.
For ``Iteration``, you can specify the number of iterations and the interval
between iterations.
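A minimal runner section for this case, using the values described below (a
sketch of the standard runner keys)::

  runner:
    type: Iteration
    iterations: 10
    interval: 1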
This means Yardstick will run the Ping test 10 times, with an interval of one
second between iterations.
For ``Duration``, you can specify the duration of this scenario and the
interval of each ping test.
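The corresponding runner section, again using the values described below (a
sketch of the standard runner keys)::

  runner:
    type: Duration
    duration: 60
    interval: 10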
This means Yardstick will run the ping test in a loop until the total time of
the scenario reaches 60 seconds, with an interval of ten seconds between each
loop.
The SLA is the pass/fail criterion of the scenario; it depends on the
scenario, and different scenarios can have different SLA metrics.
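For a Ping scenario, for instance, the SLA can bound the round-trip time. A
sketch only (the ``max_rtt`` threshold and the ``monitor`` action here are
illustrative values, not prescribed by this guide)::

  sla:
    max_rtt: 10
    action: monitor

With ``action: monitor``, an SLA violation is recorded in the results without
aborting the test run.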
**How to write a new test case**
Yardstick already provides a library of test steps; that is, it provides many
scenario types.

Basically, what you need to do is orchestrate scenarios from this library.

Here we will show two cases: one is how to write a simple test case, the other
is how to write a quite complex test case.
Write a new simple test case

First, imagine a basic test case description as below.
+-----------------------------------------------------------------------------+
|Storage Performance                                                          |
+--------------+--------------------------------------------------------------+
|metric        | IOPS (Average IOs performed per second),                     |
|              | Throughput (Average disk read/write bandwidth rate),         |
|              | Latency (Average disk read/write latency)                    |
+--------------+--------------------------------------------------------------+
|test purpose  | The purpose of TC005 is to evaluate the IaaS storage         |
|              | performance with regards to IOPS, throughput and latency.    |
+--------------+--------------------------------------------------------------+
|test          | fio test is invoked in a host VM on a compute blade, a job   |
|description   | file as well as parameters are passed to fio and fio will    |
|              | start doing what the job file tells it to do.                |
+--------------+--------------------------------------------------------------+
|configuration | file: opnfv_yardstick_tc005.yaml                             |
|              |                                                              |
|              | IO types is set to read, write, randwrite, randread, rw.     |
|              | IO block size is set to 4KB, 64KB, 1024KB.                   |
|              | fio is run for each IO type and IO block size scheme,        |
|              | each iteration runs for 30 seconds (10 for ramp time, 20 for |
|              | measurement).                                                |
|              |                                                              |
|              | For SLA, minimum read/write iops is set to 100,              |
|              | minimum read/write throughput is set to 400 KB/s,            |
|              | and maximum read/write latency is set to 20000 usec.         |
+--------------+--------------------------------------------------------------+
|applicability | This test case can be configured with different:             |
|              |                                                              |
|              |  * IO block size;                                            |
|              |  * test duration.                                            |
|              |                                                              |
|              | Default values exist.                                        |
|              |                                                              |
|              | SLA is optional. The SLA in this test case serves as an      |
|              | example. Considerably higher throughput and lower latency    |
|              | are expected. However, to cover most configurations, both    |
|              | baremetal and fully virtualized ones, this value should be   |
|              | possible to achieve and acceptable for black box testing.    |
|              | Many heavy IO applications start to suffer badly if the      |
|              | read/write bandwidths are lower than this.                   |
+--------------+--------------------------------------------------------------+
|pre-test      | The test case image needs to be installed into Glance        |
|conditions    | with fio included in it.                                     |
|              |                                                              |
|              | No POD specific requirements have been identified.           |
+--------------+--------------------------------------------------------------+
|test sequence | description and expected result                              |
+--------------+--------------------------------------------------------------+
|step 1        | A host VM with fio installed is booted.                      |
+--------------+--------------------------------------------------------------+
|step 2        | Yardstick is connected with the host VM by using ssh.        |
|              | 'fio_benchmark' bash script is copied from Jump Host to      |
|              | the host VM via the ssh tunnel.                              |
+--------------+--------------------------------------------------------------+
|step 3        | 'fio_benchmark' script is invoked. Simulated IO operations   |
|              | are started. IOPS, disk read/write bandwidth and latency are |
|              | recorded and checked against the SLA. Logs are produced and  |
|              | stored.                                                      |
|              |                                                              |
|              | Result: Logs are stored.                                     |
+--------------+--------------------------------------------------------------+
|step 4        | The host VM is deleted.                                      |
+--------------+--------------------------------------------------------------+
|test verdict  | Fails only if SLA is not passed, or if there is a test case  |
|              | execution problem.                                           |
+--------------+--------------------------------------------------------------+
How can I contribute to Yardstick?
----------------------------------
If you are already a contributor to any OPNFV project, you can contribute to
Yardstick. If you are totally new to OPNFV, you must first create your Linux
Foundation account, then contact us so that we can add you to the repository.
We distinguish two levels of contributors:

* the standard contributor, who can push patches and vote +1/0/-1 on any Yardstick patch
* the committer, who can vote -2/-1/0/+1/+2 and merge

Yardstick committers are promoted by the Yardstick contributors.
Gerrit & JIRA introduction
^^^^^^^^^^^^^^^^^^^^^^^^^^
.. _Gerrit: https://www.gerritcodereview.com/
.. _`OPNFV Gerrit`: http://gerrit.opnfv.org/
.. _link: https://identity.linuxfoundation.org/
.. _JIRA: https://jira.opnfv.org/secure/Dashboard.jspa
OPNFV uses Gerrit_ for web-based code review and repository management for the
Git version control system. You can access `OPNFV Gerrit`_. Please note that
you need a Linux Foundation ID in order to use OPNFV Gerrit. You can get one
from this link_.
OPNFV uses JIRA_ for issue management. An important principle of change
management is to have two-way traceability between issue management
(i.e. JIRA_) and the code repository (via Gerrit_). In this way, individual
commits can be traced to JIRA issues, and we also know which commits were used
to resolve a JIRA issue.
If you want to contribute to Yardstick, you can pick an issue from Yardstick's
JIRA dashboard or create your own issue and submit it to JIRA.
Install Git and Git-review
^^^^^^^^^^^^^^^^^^^^^^^^^^
Installing and configuring Git and git-review is necessary in order to submit
code to Gerrit. The `Getting to the code <https://wiki.opnfv.org/display/DEV/Developer+Getting+Started>`_ page will provide you with some help for that.
Verify your patch locally before submitting
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Once you finish a patch, you can submit it to Gerrit for code review. A
developer sending a new patch to Gerrit will trigger the patch verify job on
the Jenkins CI. The Yardstick patch verify job includes a Python pylint check,
unit tests and a code coverage test. Before you submit your patch, it is
recommended to run the patch verification in your local environment first.
Open a terminal window and change to the project's directory using the ``cd``
command. Assume that ``YARDSTICK_REPO_DIR`` is the path to the Yardstick
project folder on your computer::

   cd $YARDSTICK_REPO_DIR

The same verification is used in CI, and it is also available through the CLI.
Submit the code with Git
^^^^^^^^^^^^^^^^^^^^^^^^
Tell Git which files you would like to take into account for the next commit.
This is called 'staging' the files, by placing them into the staging area,
using the ``git add`` command (or the synonym ``git stage`` command)::

   git add $YARDSTICK_REPO_DIR/samples/sample.yaml
Alternatively, you can choose to stage all files that have been modified (that
is, the files you have worked on) since the last time you generated a commit,
by using the ``-a`` argument::
Git won't let you push (upload) any code to Gerrit if you haven't pulled the
latest changes first. So the next step is to pull (download) the latest
changes made to the project by other collaborators using the ``pull`` command::

   git pull
Now that you have the latest version of the project and you have staged the
files you wish to push, it is time to actually commit your work to your local
repository::

   git commit --signoff -m "Title of change"

The commit message should describe at a high level what was done. There is a
lot of documentation in the code, so you do not need to repeat it here.
.. _`this document`: http://chris.beams.io/posts/git-commit/

The message that is required for the commit should follow a specific set of
rules. This practice standardizes the description messages attached to the
commits and, eventually, makes navigating among them easier.

`This document`_ is very clear and useful to get started with that.
Push the code to Gerrit for review
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Now that the code has been committed into your local Git repository, the
following step is to push it online to Gerrit for it to be reviewed. The
command we will use is ``git review``::

   git review
This will automatically push your local commit to Gerrit. You can add
Yardstick committers and contributors to review your code.
.. image:: images/review.PNG
   :alt: Gerrit for code review
You can find a list of Yardstick people `here <https://wiki.opnfv.org/display/yardstick/People>`_,
or use the ``yardstick-reviewers`` and ``yardstick-committers`` groups in Gerrit.
Modify the code under review in Gerrit
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
At the same time the code is being reviewed in Gerrit, you may need to edit it
to make some changes and then send it back for review. The following steps go
through the procedure.

Once you have modified/edited your code files under your IDE, you will have to
stage them. The ``git status`` command is very helpful at this point as it
provides an overview of Git's current state::

   git status
The output of the command provides us with the files that have been modified
after the latest commit.
You can now stage the files that have been modified as part of the Gerrit code
review edition/modification/improvement using the ``git add`` command. It is
now time to commit the newly modified files, but the objective here is not to
create a new commit; we simply want to inject the new changes into the
previous commit. You can achieve that with the ``--amend`` option on the
``git commit`` command::

   git commit --amend
If the commit was successful, the ``git status`` command should no longer list
the updated files as about to be committed.
The final step consists of pushing the newly modified commit to Gerrit::

   git review
Plugins
^^^^^^^

For information about Yardstick plugins, refer to the chapter
**Installing a plug-in into Yardstick** in the `user guide`_.