..
   Licensed under the Apache License, Version 2.0 (the "License"); you may
   not use this file except in compliance with the License. You may obtain
   a copy of the License at

      http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
   WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
   License for the specific language governing permissions and limitations
   under the License.
.. Convention for heading levels in Yardstick documentation:

   =======  Heading 0 (reserved for the title in a document)
   -------  Heading 1
   ~~~~~~~  Heading 2
   +++++++  Heading 3
   '''''''  Heading 4

   Avoid deeper levels because they do not render well.
Yardstick is a project dealing with performance testing. Yardstick produces
its own test cases but can also be considered as a framework to support feature
project testing.

Yardstick developed a test API that can be used by any OPNFV project. Therefore
there are many ways to contribute to Yardstick.
You can:

* Develop new test cases
* Develop Yardstick API / framework
* Develop Yardstick Grafana dashboards and the Yardstick reporting page
* Write Yardstick documentation
This developer guide describes how to interact with the Yardstick project.
The first section details the main working areas of the project. The second
part is a list of "how to" guides to help you join the Yardstick family,
whatever your field of interest is.
Where can I find some help to start?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. _`user guide`: http://artifacts.opnfv.org/yardstick/danube/1.0/docs/stesting_user_userguide/index.html
.. _`wiki page`: https://wiki.opnfv.org/display/yardstick/

This guide is made for you. You can have a look at the `user guide`_.
There are also references on documentation, video tutorials and tips on the
project `wiki page`_. You can also contact us directly by mail, with the
[Yardstick] prefix in the subject, at opnfv-tech-discuss@lists.opnfv.org, or
on the IRC channel #opnfv-yardstick.
Yardstick developer areas
-------------------------

Yardstick can be considered as a framework. Yardstick is released as a Docker
image, including tools, scripts and a CLI to prepare the environment and run
tests. It simplifies the integration of external test suites in CI pipelines
and provides commodity tools to collect and display results.

Since Danube, test categories (also known as tiers) have been created to group
similar tests, provide consistent sub-lists and, in the end, optimize test
duration for CI (see the How To section).

The definition of the tiers has been agreed by the testing working group.

The installation and configuration of Yardstick are described in the
`user guide`_.
How to work with test cases?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Sample Test cases
+++++++++++++++++

Yardstick provides many sample test cases, which are located in the
``samples`` directory of the repository.

Sample test cases are designed with the following goals:

1. Helping users better understand Yardstick features (including new features
   and new test case capabilities);

2. Helping developers debug a new feature or test case before it is
   submitted for review;

3. Helping other developers understand and verify a new patch before the
   code is merged.

Developers should upload their sample test cases as well when they are
uploading a new patch which is about a new Yardstick test case or feature.
OPNFV Release Test cases
++++++++++++++++++++++++

OPNFV Release test cases are located in ``yardstick/tests/opnfv/test_cases``.
These test cases are run by OPNFV CI jobs, which means these test cases should
be more mature than sample test cases.
OPNFV scenario owners can select related test cases and add them into the test
suites which represent their scenario.
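
A test suite is itself a small ``yaml`` file listing the test case files to
run. The following sketch illustrates the suite schema; the suite name and the
selected test cases are illustrative, not a shipped suite::

  schema: "yardstick:suite:0.1"

  name: "my-scenario-suite"
  test_cases_dir: "tests/opnfv/test_cases/"
  test_cases:
  -
    file_name: opnfv_yardstick_tc002.yaml
  -
    file_name: opnfv_yardstick_tc005.yaml

When CI runs the suite, each listed test case file is executed in order.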
Test case Description File
++++++++++++++++++++++++++

This section introduces the meaning of the test case description file.
We will use ``ping.yaml`` as an example to show how to understand the test
case description file.

This ``yaml`` file consists of two sections. One is ``scenarios``, the other
is ``contexts``.
The ``ping.yaml`` sample looks as follows::

  # Sample benchmark task config file
  # measure network latency using ping

  schema: "yardstick:task:0.1"

  {% set provider = provider or none %}
  {% set physical_network = physical_network or 'physnet1' %}
  {% set segmentation_id = segmentation_id or none %}

  scenarios:
  -
    type: Ping
    options:
      packetsize: 200
    host: athena.demo
    target: ares.demo

    runner:
      type: Duration
      duration: 60
      interval: 1

    sla:
      max_rtt: 10
      action: monitor

  context:
    name: demo
    image: yardstick-image
    flavor: yardstick-flavor
    user: ubuntu

    placement_groups:
      pgrp1:
        policy: "availability"

    servers:
      athena:
        floating_ip: true
        placement: "pgrp1"
      ares:
        placement: "pgrp1"

    networks:
      test:
        cidr: '10.0.1.0/24'
        {% if provider == "vlan" %}
        provider: {{provider}}
        physical_network: {{physical_network}}
        {% if segmentation_id %}
        segmentation_id: {{segmentation_id}}
        {% endif %}
        {% endif %}
The ``contexts`` section is the description of the pre-conditions of the
testing. As ``ping.yaml`` shows, you can configure the image, flavor, name,
affinity and network of the test VMs (servers); with this section, you get a
pre-conditioned environment for testing.
Yardstick will automatically set up the stack described in this section.
Yardstick converts this section to a Heat template and sets up the VMs with
the heat-client (Yardstick can also convert this section to a Kubernetes
template to set up containers).

In the example above, two test VMs (athena and ares) are configured in the
``servers`` section.
``flavor`` determines how many vCPUs and how much memory the test VMs get.
``yardstick-flavor`` is a basic flavor which is automatically created when you
run the command ``yardstick env prepare``; it provides
``1 vCPU, 1G RAM, 3G disk``.
``image`` is the image name of the test VMs. If you use ``cirros.3.5.0``, you
need to fill the username of this image into ``user``.
The ``policy`` of placement of the test VMs has two values (``affinity`` and
``availability``), where ``availability`` means anti-affinity.
In the ``networks`` section, you can configure which ``provider`` network and
``physical_network`` you want the test VMs to use.
You may need to configure ``segmentation_id`` when your network is a VLAN.
Moreover, you can configure your own specific flavor as below and Yardstick
will set up the stack for you::

  flavor:
    name: yardstick-new-flavor
    vcpus: 2     # example values for the new flavor
    ram: 2048
    disk: 4
Besides the default ``Heat`` context, Yardstick also allows you to set up two
other types of context, ``Node`` and ``Kubernetes``, by changing the context
``type``. For example::

  context:
    type: Kubernetes
    name: k8s
The ``scenarios`` section is the description of the testing steps; you can
orchestrate complex testing steps through scenarios.

Each scenario will do one testing step.
In one scenario, you can configure the type of the scenario (operation), the
``runner`` type and the ``sla`` of the scenario.

For TC002, we only have one step, which is Ping from the host VM to the target
VM. In this step, we also have some detailed operations implemented (such as
ssh to the VM, ping from VM1 to VM2, get the latency, verify the SLA, report
the result).

If you want to know the implementation details, you can check the scenario.py
file. For the Ping scenario, you can find it in the Yardstick repo at
``yardstick/yardstick/benchmark/scenarios/networking/ping.py``.
After you select the type of scenario (such as Ping), you will select one type
of ``runner``. There are four types of runner; ``Iteration`` and ``Duration``
are the most commonly used, and the default is ``Iteration``.

For ``Iteration``, you can specify the iteration number and the interval of
each iteration::

  runner:
    type: Iteration
    iterations: 10
    interval: 1

That means Yardstick will repeat the Ping test 10 times and the interval of
each iteration is one second.
For ``Duration``, you can specify the duration of this scenario and the
interval of each ping test::

  runner:
    type: Duration
    duration: 60
    interval: 10

That means Yardstick will run the ping test in a loop until the total time of
this scenario reaches 60s; the interval of each loop is ten seconds.
The ``sla`` section is the criterion of this scenario. This depends on the
scenario; different scenarios can have different SLA metrics.
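
For the Ping scenario, for instance, the SLA bounds the allowed round-trip
time. A minimal sketch (``max_rtt`` is the metric used by the ping scenario;
the threshold below is illustrative)::

  sla:
    max_rtt: 10        # maximum allowed average round-trip time, in ms
    action: monitor    # "monitor" only reports violations; "assert" aborts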
How to write a new test case
++++++++++++++++++++++++++++

Yardstick already provides a library of testing steps (i.e. different types of
scenarios).

Basically, what you need to do is to orchestrate the scenario from the library.

Here, we will show two cases: one is how to write a simple test case, the
other is how to write a quite complex test case.

Write a new simple test case
''''''''''''''''''''''''''''

First, you can imagine a basic test case description as below.
+-----------------------------------------------------------------------------+
|Storage Performance                                                          |
|                                                                             |
+--------------+--------------------------------------------------------------+
|metric        | IOPS (Average IOs performed per second),                     |
|              | Throughput (Average disk read/write bandwidth rate),         |
|              | Latency (Average disk read/write latency)                    |
|              |                                                              |
+--------------+--------------------------------------------------------------+
|test purpose  | The purpose of TC005 is to evaluate the IaaS storage         |
|              | performance with regards to IOPS, throughput and latency.    |
|              |                                                              |
+--------------+--------------------------------------------------------------+
|test          | fio test is invoked in a host VM on a compute blade, a job   |
|description   | file as well as parameters are passed to fio and fio will    |
|              | start doing what the job file tells it to do.                |
|              |                                                              |
+--------------+--------------------------------------------------------------+
|configuration | file: opnfv_yardstick_tc005.yaml                             |
|              |                                                              |
|              | IO types is set to read, write, randwrite, randread, rw.     |
|              | IO block size is set to 4KB, 64KB, 1024KB.                   |
|              | fio is run for each IO type and IO block size scheme,        |
|              | each iteration runs for 30 seconds (10 for ramp time, 20 for |
|              | runtime).                                                    |
|              |                                                              |
|              | For SLA, minimum read/write iops is set to 100,              |
|              | minimum read/write throughput is set to 400 KB/s,            |
|              | and maximum read/write latency is set to 20000 usec.         |
|              |                                                              |
+--------------+--------------------------------------------------------------+
|applicability | This test case can be configured with different:             |
|              |                                                              |
|              |  * IO block size;                                            |
|              |  * test duration.                                            |
|              |                                                              |
|              | Default values exist.                                        |
|              |                                                              |
|              | SLA is optional. The SLA in this test case serves as an      |
|              | example. Considerably higher throughput and lower latency    |
|              | are expected. However, to cover most configurations, both    |
|              | baremetal and fully virtualized ones, this value should be   |
|              | possible to achieve and acceptable for black box testing.    |
|              | Many heavy IO applications start to suffer badly if the      |
|              | read/write bandwidths are lower than this.                   |
|              |                                                              |
+--------------+--------------------------------------------------------------+
|pre-test      | The test case image needs to be installed into Glance        |
|conditions    | with fio included in it.                                     |
|              |                                                              |
|              | No POD specific requirements have been identified.           |
|              |                                                              |
+--------------+--------------------------------------------------------------+
|test sequence | description and expected result                              |
|              |                                                              |
+--------------+--------------------------------------------------------------+
|step 1        | A host VM with fio installed is booted.                      |
|              |                                                              |
+--------------+--------------------------------------------------------------+
|step 2        | Yardstick is connected with the host VM by using ssh.        |
|              | 'fio_benchmark' bash script is copied from the Jump Host to  |
|              | the host VM via the ssh tunnel.                              |
|              |                                                              |
+--------------+--------------------------------------------------------------+
|step 3        | 'fio_benchmark' script is invoked. Simulated IO operations   |
|              | are started. IOPS, disk read/write bandwidth and latency are |
|              | recorded and checked against the SLA. Logs are produced and  |
|              | stored.                                                      |
|              |                                                              |
|              | Result: Logs are stored.                                     |
|              |                                                              |
+--------------+--------------------------------------------------------------+
|step 4        | The host VM is deleted.                                      |
|              |                                                              |
+--------------+--------------------------------------------------------------+
|test verdict  | Fails only if SLA is not passed, or if there is a test case  |
|              | execution problem.                                           |
|              |                                                              |
+--------------+--------------------------------------------------------------+
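
The description above maps almost directly onto a task file. The following
sketch shows how the TC005 configuration could be expressed; the option and
SLA names follow the ``Fio`` scenario, but the exact values, file names and
server names here are illustrative rather than a verbatim copy of
``opnfv_yardstick_tc005.yaml``::

  schema: "yardstick:task:0.1"

  scenarios:
  -
    type: Fio
    options:
      filename: /home/ubuntu/data.raw
      bs: 4k           # IO block size (4KB, 64KB, 1024KB in the table above)
      rw: write        # IO type (read, write, randwrite, randread, rw)
      ramp_time: 10    # seconds of ramp-up before measuring
      duration: 20     # seconds of measured runtime
    host: fio.demo
    runner:
      type: Iteration
      iterations: 1
      interval: 1
    sla:
      write_bw: 400      # minimum write throughput, KB/s
      write_iops: 100    # minimum write IOPS
      write_lat: 20000   # maximum write latency, usec
      action: monitor

  context:
    name: demo
    image: yardstick-image
    flavor: yardstick-flavor
    user: ubuntu
    servers:
      fio:
        floating_ip: true

Each combination of IO type and block size from the configuration row would be
a separate scenario entry (or a templated parameter) in the real task file.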
How can I contribute to Yardstick?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

If you are already a contributor to any OPNFV project, you can contribute to
Yardstick. If you are totally new to OPNFV, you must first create your Linux
Foundation account, then contact us in order to declare you in the repository
database.

We distinguish two levels of contributors:

* the standard contributor can push patches and vote +1/0/-1 on any Yardstick
  patch
* the committer can vote -2/-1/0/+1/+2 and merge

Yardstick committers are promoted by the Yardstick contributors.
Gerrit & JIRA introduction
++++++++++++++++++++++++++

.. _Gerrit: https://www.gerritcodereview.com/
.. _`OPNFV Gerrit`: http://gerrit.opnfv.org/
.. _link: https://identity.linuxfoundation.org/
.. _JIRA: https://jira.opnfv.org/secure/Dashboard.jspa

OPNFV uses Gerrit_ for web-based code review and repository management for the
Git Version Control System. You can access `OPNFV Gerrit`_. Please note that
you need to have a Linux Foundation ID in order to use OPNFV Gerrit. You can
get one from this link_.

OPNFV uses JIRA_ for issue management. An important principle of change
management is to have two-way traceability between issue management
(i.e. JIRA_) and the code repository (via Gerrit_). In this way, individual
commits can be traced to JIRA issues and we also know which commits were used
to resolve a JIRA issue.

If you want to contribute to Yardstick, you can pick an issue from Yardstick's
JIRA dashboard or you can create your own issue and submit it to JIRA.
Install Git and git-review
+++++++++++++++++++++++++++

Installing and configuring Git and git-review is necessary in order to submit
code to Gerrit. The
`Getting to the code <https://wiki.opnfv.org/display/DEV/Developer+Getting+Started>`_
page will provide you with some help for that.
Verify your patch locally before submitting
+++++++++++++++++++++++++++++++++++++++++++

Once you finish a patch, you can submit it to Gerrit for code review. A
developer sending a new patch to Gerrit will trigger the patch verify job on
the Jenkins CI. The Yardstick patch verify job includes a Python pylint check,
unit tests and a code coverage test. Before you submit your patch, it is
recommended to run the patch verification in your local environment first.

Open a terminal window and set the project's directory to the working
directory using the ``cd`` command. Assume that ``YARDSTICK_REPO_DIR`` is the
path to the Yardstick project folder on your computer::

  cd $YARDSTICK_REPO_DIR

Then run the verification checks; the same checks are used in CI but can also
be run from the CLI.
Submit the code with Git
++++++++++++++++++++++++

Tell Git which files you would like to take into account for the next commit.
This is called 'staging' the files, by placing them into the staging area,
using the ``git add`` command (or the synonym ``git stage`` command)::

  git add $YARDSTICK_REPO_DIR/samples/sample.yaml

Alternatively, you can choose to stage all files that have been modified (that
is, the files you have worked on) since the last time you generated a commit,
by using the ``-A`` argument::

  git add -A
Git won't let you push (upload) any code to Gerrit if you haven't pulled the
latest changes first. So the next step is to pull (download) the latest
changes made to the project by other collaborators, using the ``pull``
command::

  git pull
Now that you have the latest version of the project and you have staged the
files you wish to push, it is time to actually commit your work to your local
Git repository::

  git commit --signoff -m "Title of change

  Text of change that describes in high level what was done. There is a lot of
  documentation in code so you do not need to repeat it here."

.. _`this document`: http://chris.beams.io/posts/git-commit/

The message that is required for the commit should follow a specific set of
rules. This practice allows us to standardize the description messages
attached to the commits and, eventually, to navigate among them more easily.
`This document`_ happens to be very clear and useful to get started with that.
Push the code to Gerrit for review
++++++++++++++++++++++++++++++++++

Now that the code has been committed into your local Git repository, the
following step is to push it online to Gerrit for it to be reviewed. The
command we will use is ``git review``::

  git review

This will automatically push your local commit into Gerrit. You can add
Yardstick committers and contributors to review your code.

.. image:: images/review.PNG
   :alt: Gerrit for code review

You can find a list of Yardstick people
`here <https://wiki.opnfv.org/display/yardstick/People>`_, or use the
``yardstick-reviewers`` and ``yardstick-committers`` groups in Gerrit.
Modify the code under review in Gerrit
++++++++++++++++++++++++++++++++++++++

While the code is being reviewed in Gerrit, you may need to edit it to make
some changes and then send it back for review. The following steps go through
the procedure.

Once you have modified/edited your code files under your IDE, you will have to
stage them. The ``git status`` command is very helpful at this point as it
provides an overview of Git's current state::

  git status

This command lists the files that have been modified since the last commit.

You can now stage the files that have been modified as part of the Gerrit code
review addition/modification/improvement using the ``git add`` command. It is
now time to commit the newly modified files, but the objective here is not to
create a new commit; we simply want to inject the new changes into the
previous commit. You can achieve that with the ``--amend`` option on the
``git commit`` command::

  git commit --amend

If the commit was successful, the ``git status`` command should not return the
updated files as about to be committed.

The final step consists of pushing the newly modified commit to Gerrit::

  git review
For information about Yardstick plugins, refer to the chapter
**Installing a plug-in into Yardstick** in the `user guide`_.