1 ============================================
2 Contributing to Ceph: A Guide for Developers
3 ============================================
7 :License: Creative Commons Attribution-ShareAlike (CC BY-SA)
9 .. note:: The old (pre-2016) developer documentation has been moved to :doc:`/dev/index-old`.
17 This guide has two aims. First, it should lower the barrier to entry for
18 software developers who wish to get involved in the Ceph project. Second,
19 it should serve as a reference for Ceph developers.
21 We assume that readers are already familiar with Ceph (the distributed
22 object store and file system designed to provide excellent performance,
23 reliability and scalability). If not, please refer to the `project website`_
24 and especially the `publications list`_.
26 .. _`project website`: http://ceph.com
27 .. _`publications list`: https://ceph.com/resources/publications/
29 Since this document is to be consumed by developers, who are assumed to
30 have Internet access, topics covered elsewhere, either within the Ceph
31 documentation or elsewhere on the web, are treated by linking. If you
32 notice that a link is broken or if you know of a better link, please
33 `report it as a bug`_.
35 .. _`report it as a bug`: http://tracker.ceph.com/projects/ceph/issues/new
This chapter presents essential information that every Ceph developer needs
to know.
46 The Ceph project is led by Sage Weil. In addition, each major project
47 component has its own lead. The following table shows all the leads and
48 their nicks on `GitHub`_:
50 .. _github: https://github.com/
52 ========= ================ =============
53 Scope Lead GitHub nick
54 ========= ================ =============
55 Ceph Sage Weil liewegas
56 RADOS Samuel Just athanatos
57 RGW Yehuda Sadeh yehudasa
58 RBD Jason Dillaman dillaman
59 CephFS Patrick Donnelly batrick
60 Build/Ops Ken Dreyer ktdreyer
61 ========= ================ =============
The Ceph-specific acronyms in the table are explained in :doc:`/architecture`.
69 See the `History chapter of the Wikipedia article`_.
71 .. _`History chapter of the Wikipedia article`: https://en.wikipedia.org/wiki/Ceph_%28software%29#History
76 Ceph is free software.
78 Unless stated otherwise, the Ceph source code is distributed under the terms of
79 the LGPL2.1. For full details, see `the file COPYING in the top-level
80 directory of the source-code tree`_.
82 .. _`the file COPYING in the top-level directory of the source-code tree`:
83 https://github.com/ceph/ceph/blob/master/COPYING
85 Source code repositories
86 ------------------------
88 The source code of Ceph lives on `GitHub`_ in a number of repositories below
89 the `Ceph "organization"`_.
91 .. _`Ceph "organization"`: https://github.com/ceph
93 To make a meaningful contribution to the project as a developer, a working
94 knowledge of git_ is essential.
96 .. _git: https://git-scm.com/documentation
98 Although the `Ceph "organization"`_ includes several software repositories,
99 this document covers only one: https://github.com/ceph/ceph.
101 Redmine issue tracker
102 ---------------------
104 Although `GitHub`_ is used for code, Ceph-related issues (Bugs, Features,
105 Backports, Documentation, etc.) are tracked at http://tracker.ceph.com,
106 which is powered by `Redmine`_.
108 .. _Redmine: http://www.redmine.org
110 The tracker has a Ceph project with a number of subprojects loosely
111 corresponding to the various architectural components (see
112 :doc:`/architecture`).
114 Mere `registration`_ in the tracker automatically grants permissions
115 sufficient to open new issues and comment on existing ones.
117 .. _registration: http://tracker.ceph.com/account/register
119 To report a bug or propose a new feature, `jump to the Ceph project`_ and
120 click on `New issue`_.
122 .. _`jump to the Ceph project`: http://tracker.ceph.com/projects/ceph
123 .. _`New issue`: http://tracker.ceph.com/projects/ceph/issues/new
128 Ceph development email discussions take place on the mailing list
129 ``ceph-devel@vger.kernel.org``. The list is open to all. Subscribe by
sending a message to ``majordomo@vger.kernel.org`` with the line::

    subscribe ceph-devel
134 in the body of the message.
136 There are also `other Ceph-related mailing lists`_.
138 .. _`other Ceph-related mailing lists`: https://ceph.com/irc/
143 In addition to mailing lists, the Ceph community also communicates in real
144 time using `Internet Relay Chat`_.
146 .. _`Internet Relay Chat`: http://www.irchelp.org/
148 See https://ceph.com/irc/ for how to set up your IRC
149 client and a list of channels.
The canonical instructions for submitting patches are contained in
`the file CONTRIBUTING.rst in the top-level directory of the source-code
tree`_. There may be some overlap between this guide and that file.
158 .. _`the file CONTRIBUTING.rst in the top-level directory of the source-code tree`:
159 https://github.com/ceph/ceph/blob/master/CONTRIBUTING.rst
161 All newcomers are encouraged to read that file carefully.
166 See instructions at :doc:`/install/build-ceph`.
168 Using ccache to speed up local builds
169 -------------------------------------
Rebuilds of the Ceph source tree can benefit significantly from the use of
`ccache`_. When switching between branches, one often sees build failures on
older branches, mostly caused by stale build artifacts; such rebuilds benefit
from ccache in particular. To start from a fully clean source tree, run::

    # Note: the following will nuke everything in the source tree that
    # isn't tracked by git, so make sure to back up any log files or
    # configuration options first.

    $ git clean -fdx; git submodule foreach git clean -fdx
ccache is available as a package in most distros. To build Ceph with ccache, run::
187 $ cmake -DWITH_CCACHE=ON ..
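For example, a full out-of-tree configure-and-build sequence with ccache
enabled might look like this (the directory name and job count are
illustrative)::

    $ mkdir build && cd build
    $ cmake -DWITH_CCACHE=ON ..
    $ make -j$(nproc)
    $ ccache -s    # inspect cache statistics afterwards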
ccache can also be used to speed up all builds on the system. For more
details, refer to the `run modes`_ section of the ccache manual. The default
settings of ``ccache`` can be displayed with ``ccache -s``.
.. note:: It is recommended to override ``max_size``, the size of the cache,
   from its default of 10G to a larger value, such as 25G. Refer to the
   `configuration`_ section of the ccache manual.
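For instance, assuming a default ccache setup, the cache size can be raised
with a single command (25G is just an illustrative value)::

    $ ccache -M 25G   # raises max_size in the ccache configuration
    $ ccache -s       # verify the new limit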
197 .. _`ccache`: https://ccache.samba.org/
198 .. _`run modes`: https://ccache.samba.org/manual.html#_run_modes
199 .. _`configuration`: https://ccache.samba.org/manual.html#_configuration
201 Development-mode cluster
202 ------------------------
204 See :doc:`/dev/quick_guide`.
209 All bugfixes should be merged to the ``master`` branch before being backported.
210 To flag a bugfix for backporting, make sure it has a `tracker issue`_
211 associated with it and set the ``Backport`` field to a comma-separated list of
212 previous releases (e.g. "hammer,jewel") that you think need the backport.
213 The rest (including the actual backporting) will be taken care of by the
214 `Stable Releases and Backports`_ team.
216 .. _`tracker issue`: http://tracker.ceph.com/
217 .. _`Stable Releases and Backports`: http://tracker.ceph.com/projects/ceph-releases/wiki
219 Guidance for use of cluster log
220 -------------------------------
222 If your patches emit messages to the Ceph cluster log, please consult
223 this guidance: :doc:`/dev/logging`.
What is merged where and when?
==============================
229 Commits are merged into branches according to criteria that change
230 during the lifecycle of a Ceph release. This chapter is the inventory
231 of what can be merged in which branch at a given point in time.
233 Development releases (i.e. x.0.z)
234 ---------------------------------
245 Features are merged to the master branch. Bug fixes should be merged
246 to the corresponding named branch (e.g. "jewel" for 10.0.z, "kraken"
247 for 11.0.z, etc.). However, this is not mandatory - bug fixes can be
248 merged to the master branch as well, since the master branch is
249 periodically merged to the named branch during the development
250 releases phase. In either case, if the bugfix is important it can also
251 be flagged for backport to one or more previous stable releases.
After the stable release candidates of the previous release enter
257 phase 2 (see below). For example: the "jewel" named branch was
258 created when the infernalis release candidates entered phase 2. From
259 this point on, master was no longer associated with infernalis. As
260 soon as the named branch of the next stable release is created, master
261 starts getting periodically merged into it.
266 * The branch of the stable release is merged periodically into master.
* The master branch is merged periodically into the branch of the
  stable release.
269 * The master is merged into the branch of the stable release
270 immediately after each development x.0.z release.
272 Stable release candidates (i.e. x.1.z) phase 1
273 ----------------------------------------------
The branch of the stable release (e.g. "jewel" for 10.1.z, "kraken"
for 11.1.z, etc.) or master. Bug fixes should be merged to the named
285 branch corresponding to the stable release candidate (e.g. "jewel" for
286 10.1.z) or to master. During this phase, all commits to master will be
287 merged to the named branch, and vice versa. In other words, it makes
288 no difference whether a commit is merged to the named branch or to
289 master - it will make it into the next release candidate either way.
294 After the first stable release candidate is published, i.e. after the
295 x.1.0 tag is set in the release branch.
300 * The branch of the stable release is merged periodically into master.
* The master branch is merged periodically into the branch of the
  stable release.
303 * The master is merged into the branch of the stable release
304 immediately after each x.1.z release candidate.
306 Stable release candidates (i.e. x.1.z) phase 2
307 ----------------------------------------------
The branch of the stable release (e.g. "jewel" for 10.1.z, "kraken"
for 11.1.z, etc.). During this phase, all commits to the named branch
319 will be merged into master. Cherry-picking to the named branch during
320 release candidate phase 2 is done manually since the official
backporting process only begins when the release is pronounced stable.
327 After Sage Weil decides it is time for phase 2 to happen.
332 * The branch of the stable release is merged periodically into master.
334 Stable releases (i.e. x.2.z)
335 ----------------------------
* features are sometimes accepted
342 * commits should be cherry-picked from master when possible
343 * commits that are not cherry-picked from master must be about a bug unique to the stable release
344 * see also `the backport HOWTO`_
346 .. _`the backport HOWTO`:
347 http://tracker.ceph.com/projects/ceph-releases/wiki/HOWTO#HOWTO
352 The branch of the stable release (hammer for 0.94.x, infernalis for 9.2.x, etc.)
357 After the stable release is published, i.e. after the "vx.2.0" tag is
358 set in the release branch.
368 See `Redmine issue tracker`_ for a brief introduction to the Ceph Issue Tracker.
370 Ceph developers use the issue tracker to
1. keep track of issues - bugs, fix requests, feature requests, backport
   requests, etc., and
375 2. communicate with other developers and keep them informed as work
376 on the issues progresses.
378 Issue tracker conventions
379 -------------------------
381 When you start working on an existing issue, it's nice to let the other
382 developers know this - to avoid duplication of labor. Typically, this is
383 done by changing the :code:`Assignee` field (to yourself) and changing the
384 :code:`Status` to *In progress*. Newcomers to the Ceph community typically do not
385 have sufficient privileges to update these fields, however: they can
386 simply update the issue with a brief note.
388 .. table:: Meanings of some commonly used statuses
================ ===========================================
Status           Meaning
392 ================ ===========================================
394 In Progress Somebody is working on it
395 Need Review Pull request is open with a fix
396 Pending Backport Fix has been merged, backport(s) pending
397 Resolved Fix and backports (if any) have been merged
398 ================ ===========================================
The following chart illustrates the basic development workflow::

      Upstream Code                       Your Local Environment

     /----------\        git clone           /-------------\
     |   Ceph   | -------------------------> | ceph/master |
     \----------/                            \-------------/
                                                    |
                                                    | git branch fix_1
                                                    v
     /----------------\  git commit --amend  /-------------\
     |   make check   |--------------------> | ceph/fix_1  |
     | ceph--qa--suite|                      \-------------/
     |     review     |       git commit            |
     \----------------/                             v
     /--------------\                        /-------------\
     |    github    |<---------------------- | ceph/fix_1  |
     | pull request |        git push        \-------------/
     \--------------/
430 Below we present an explanation of this chart. The explanation is written
431 with the assumption that you, the reader, are a beginning developer who
has an idea for a bugfix, but does not know exactly how to proceed.
437 Before you start, you should know the `Issue tracker`_ number of the bug
you intend to fix. If there is no tracker issue, now is the time to create
one.
441 The tracker is there to explain the issue (bug) to your fellow Ceph
442 developers and keep them informed as you make progress toward resolution.
443 To this end, then, provide a descriptive title as well as sufficient
444 information and details in the description.
446 If you have sufficient tracker permissions, assign the bug to yourself by
447 changing the ``Assignee`` field. If your tracker permissions have not yet
448 been elevated, simply add a comment to the issue with a short message like
449 "I am working on this issue".
This section, and the ones that follow, correspond to the nodes in the
chart above.
457 The upstream code lives in https://github.com/ceph/ceph.git, which is
458 sometimes referred to as the "upstream repo", or simply "upstream". As the
459 chart illustrates, we will make a local copy of this code, modify it, test
our modifications, and submit the modifications back to the upstream repo
for review.
463 A local copy of the upstream code is made by
465 1. forking the upstream repo on GitHub, and
466 2. cloning your fork to make a local working copy
See `the GitHub documentation
469 <https://help.github.com/articles/fork-a-repo/#platform-linux>`_ for
470 detailed instructions on forking. In short, if your GitHub username is
471 "mygithubaccount", your fork of the upstream repo will show up at
472 https://github.com/mygithubaccount/ceph. Once you have created your fork,
you clone it by doing::
477 $ git clone https://github.com/mygithubaccount/ceph
479 While it is possible to clone the upstream repo directly, in this case you
must fork it first. Forking is what enables us to open a GitHub pull
request.
483 For more information on using GitHub, refer to `GitHub Help
484 <https://help.github.com/>`_.
489 In the local environment created in the previous step, you now have a
490 copy of the ``master`` branch in ``remotes/origin/master``. Since the fork
491 (https://github.com/mygithubaccount/ceph.git) is frozen in time and the
492 upstream repo (https://github.com/ceph/ceph.git, typically abbreviated to
493 ``ceph/ceph.git``) is updated frequently by other developers, you will need
494 to sync your fork periodically. To do this, first add the upstream repo as
495 a "remote" and fetch it::
    $ git remote add ceph https://github.com/ceph/ceph.git
    $ git fetch ceph
500 Fetching downloads all objects (commits, branches) that were added since
501 the last sync. After running these commands, all the branches from
502 ``ceph/ceph.git`` are downloaded to the local git repo as
503 ``remotes/ceph/$BRANCH_NAME`` and can be referenced as
504 ``ceph/$BRANCH_NAME`` in certain git commands.
506 For example, your local ``master`` branch can be reset to the upstream Ceph
507 ``master`` branch by doing::
510 $ git checkout master
511 $ git reset --hard ceph/master
Finally, the ``master`` branch of your fork can then be synced to upstream
master::
516 $ git push -u origin master
Next, create a branch for the bugfix::
525 $ git checkout master
526 $ git checkout -b fix_1
527 $ git push -u origin fix_1
529 This creates a ``fix_1`` branch locally and in our GitHub fork. At this
530 point, the ``fix_1`` branch is identical to the ``master`` branch, but not
531 for long! You are now ready to modify the code.
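Before editing, it can help to confirm which branch is checked out and what
it tracks; a quick sanity check might look like this (output will vary)::

    $ git branch -vv      # fix_1 should be current and track origin/fix_1
    $ git log --oneline -1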
536 At this point, change the status of the tracker issue to "In progress" to
537 communicate to the other Ceph developers that you have begun working on a
538 fix. If you don't have permission to change that field, your comment that
539 you are working on the issue is sufficient.
541 Possibly, your fix is very simple and requires only minimal testing.
542 More likely, it will be an iterative process involving trial and error, not
543 to mention skill. An explanation of how to fix bugs is beyond the
544 scope of this document. Instead, we focus on the mechanics of the process
545 in the context of the Ceph project.
For a detailed discussion of the tools available for validating your bugfixes,
548 see the `Testing`_ chapter.
550 For now, let us just assume that you have finished work on the bugfix and
551 that you have tested it and believe it works. Commit the changes to your local
branch using the ``--signoff`` option::

    $ git commit -as
556 and push the changes to your fork::
558 $ git push origin fix_1
563 The next step is to open a GitHub pull request. The purpose of this step is
564 to make your bugfix available to the community of Ceph developers. They
565 will review it and may do additional testing on it.
567 In short, this is the point where you "go public" with your modifications.
568 Psychologically, you should be prepared to receive suggestions and
constructive criticism. Don't worry! In our experience, the Ceph project is
a friendly place.
572 If you are uncertain how to use pull requests, you may read
573 `this GitHub pull request tutorial`_.
575 .. _`this GitHub pull request tutorial`:
576 https://help.github.com/articles/using-pull-requests/
578 For some ideas on what constitutes a "good" pull request, see
579 the `Git Commit Good Practice`_ article at the `OpenStack Project Wiki`_.
581 .. _`Git Commit Good Practice`: https://wiki.openstack.org/wiki/GitCommitMessages
582 .. _`OpenStack Project Wiki`: https://wiki.openstack.org/wiki/Main_Page
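As a rough sketch, a commit message following those guidelines might look
like this (the component prefix, issue number, and author are illustrative,
not taken from a real commit)::

    osd: fix off-by-one in scrub range bounds

    The end of the scrub range was treated as inclusive, causing one
    extra object to be scrubbed. Make the upper bound exclusive.

    Fixes: http://tracker.ceph.com/issues/99999
    Signed-off-by: Jane Developer <jane@example.com>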
584 Once your pull request (PR) is opened, update the `Issue tracker`_ by
585 adding a comment to the bug pointing the other developers to your PR. The
586 update can be as simple as::
588 *PR*: https://github.com/ceph/ceph/pull/$NUMBER_OF_YOUR_PULL_REQUEST
590 Automated PR validation
591 -----------------------
593 When your PR hits GitHub, the Ceph project's `Continuous Integration (CI)
594 <https://en.wikipedia.org/wiki/Continuous_integration>`_
595 infrastructure will test it automatically. At the time of this writing
596 (March 2016), the automated CI testing included a test to check that the
commits in the PR are properly signed (see `Submitting patches`_) and a
`make check`_ test.
600 The latter, `make check`_, builds the PR and runs it through a battery of
601 tests. These tests run on machines operated by the Ceph Continuous
602 Integration (CI) team. When the tests complete, the result will be shown
603 on GitHub in the pull request itself.
605 You can (and should) also test your modifications before you open a PR.
606 Refer to the `Testing`_ chapter for details.
608 Notes on PR make check test
609 ^^^^^^^^^^^^^^^^^^^^^^^^^^^
611 The GitHub `make check`_ test is driven by a Jenkins instance.
613 Jenkins merges the PR branch into the latest version of the base branch before
614 starting the build, so you don't have to rebase the PR to pick up any fixes.
616 You can trigger the PR tests at any time by adding a comment to the PR - the
617 comment should contain the string "test this please". Since a human subscribed
618 to the PR might interpret that as a request for him or her to test the PR, it's
619 good to write the request as "Jenkins, test this please".
621 The `make check`_ log is the place to go if there is a failure and you're not
622 sure what caused it. To reach it, first click on "details" (next to the `make
623 check`_ test in the PR) to get into the Jenkins web GUI, and then click on
624 "Console Output" (on the left).
626 Jenkins is set up to grep the log for strings known to have been associated
627 with `make check`_ failures in the past. However, there is no guarantee that
628 the strings are associated with any given `make check`_ failure. You have to
629 dig into the log to be sure.
631 Integration tests AKA ceph-qa-suite
632 -----------------------------------
634 Since Ceph is a complex beast, it may also be necessary to test your fix to
635 see how it behaves on real clusters running either on real or virtual
636 hardware. Tests designed for this purpose live in the `ceph/qa
637 sub-directory`_ and are run via the `teuthology framework`_.
639 .. _`ceph/qa sub-directory`: https://github.com/ceph/ceph/tree/master/qa/
640 .. _`teuthology repository`: https://github.com/ceph/teuthology
641 .. _`teuthology framework`: https://github.com/ceph/teuthology
643 If you have access to an OpenStack tenant, you are encouraged to run the
644 integration tests yourself using `ceph-workbench ceph-qa-suite`_,
645 and to post the test results to the PR.
647 .. _`ceph-workbench ceph-qa-suite`: http://ceph-workbench.readthedocs.org/
649 The Ceph community has access to the `Sepia lab
650 <http://ceph.github.io/sepia/>`_ where integration tests can be run on
651 real hardware. Other developers may add tags like "needs-qa" to your PR.
652 This allows PRs that need testing to be merged into a single branch and
653 tested all at the same time. Since teuthology suites can take hours
654 (even days in some cases) to run, this can save a lot of time.
656 Integration testing is discussed in more detail in the `Testing`_ chapter.
661 Once your bugfix has been thoroughly tested, or even during this process,
662 it will be subjected to code review by other developers. This typically
663 takes the form of correspondence in the PR itself, but can be supplemented
664 by discussions on `IRC`_ and the `Mailing list`_.
669 While your PR is going through `Testing`_ and `Code review`_, you can
670 modify it at any time by editing files in your local branch.
672 After the changes are committed locally (to the ``fix_1`` branch in our
673 example), they need to be pushed to GitHub so they appear in the PR.
675 Modifying the PR is done by adding commits to the ``fix_1`` branch upon
676 which it is based, often followed by rebasing to modify the branch's git
677 history. See `this tutorial
678 <https://www.atlassian.com/git/tutorials/rewriting-history>`_ for a good
679 introduction to rebasing. When you are done with your modifications, you
680 will need to force push your branch with:
684 $ git push --force origin fix_1
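For instance, a common pattern for cleaning up review fixups before the
force push is an interactive rebase against the upstream branch (a sketch;
this assumes the ``ceph`` remote added earlier)::

    $ git fetch ceph
    $ git rebase -i ceph/master   # mark fixup commits as "squash" or "fixup"
    $ git push --force origin fix_1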
The bugfixing process culminates when one of the project leads decides to
merge your PR.
692 When this happens, it is a signal for you (or the lead who merged the PR)
693 to change the `Issue tracker`_ status to "Resolved". Some issues may be
694 flagged for backporting, in which case the status should be changed to
695 "Pending Backport" (see the `Backporting`_ chapter for details).
701 Ceph has two types of tests: `make check`_ tests and integration tests.
The former are run via `GNU Make <https://www.gnu.org/software/make/>`_,
703 and the latter are run via the `teuthology framework`_. The following two
704 chapters examine the `make check`_ and integration tests in detail.
711 After compiling Ceph, the `make check`_ command can be used to run the
712 code through a battery of tests covering various aspects of Ceph. For
713 inclusion in `make check`_, a test must:
715 * bind ports that do not conflict with other tests
716 * not require root access
717 * not require more than one machine to run
718 * complete within a few minutes
720 While it is possible to run `make check`_ directly, it can be tricky to
721 correctly set up your environment. Fortunately, a script is provided to
make it easier to run `make check`_ on your code. It can be run from the
723 top-level directory of the Ceph source tree by doing::
725 $ ./run-make-check.sh
727 You will need a minimum of 8GB of RAM and 32GB of free disk space for this
728 command to complete successfully on x86_64 (other architectures may have
729 different constraints). Depending on your hardware, it can take from 20
730 minutes to three hours to complete, but it's worth the wait.
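If you only need to re-run a single test afterwards, a cmake build tree also
allows something like the following (the test name is illustrative)::

    $ cd build
    $ ctest -R unittest_str_map --output-on-failure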
735 1. Unlike the various Ceph daemons and ``ceph-fuse``, the `make check`_ tests
736 are linked against the default memory allocator (glibc) unless explicitly
linked against something else. This enables tools like valgrind to be used
in the tests.
740 Testing - integration tests
741 ===========================
743 When a test requires multiple machines, root access or lasts for a
744 longer time (for example, to simulate a realistic Ceph deployment), it
745 is deemed to be an integration test. Integration tests are organized into
746 "suites", which are defined in the `ceph/qa sub-directory`_ and run with
747 the ``teuthology-suite`` command.
749 The ``teuthology-suite`` command is part of the `teuthology framework`_.
750 In the sections that follow we attempt to provide a detailed introduction
751 to that framework from the perspective of a beginning Ceph developer.
753 Teuthology consumes packages
754 ----------------------------
756 It may take some time to understand the significance of this fact, but it
757 is `very` significant. It means that automated tests can be conducted on
758 multiple platforms using the same packages (RPM, DEB) that can be
759 installed on any machine running those platforms.
761 Teuthology has a `list of platforms that it supports
762 <https://github.com/ceph/ceph/tree/master/qa/distros/supported>`_ (as
763 of March 2016 the list consisted of "CentOS 7.2" and "Ubuntu 14.04"). It
764 expects to be provided pre-built Ceph packages for these platforms.
765 Teuthology deploys these platforms on machines (bare-metal or
766 cloud-provisioned), installs the packages on them, and deploys Ceph
767 clusters on them - all as called for by the test.
772 A number of integration tests are run on a regular basis in the `Sepia
773 lab`_ against the official Ceph repositories (on the ``master`` development
774 branch and the stable branches). Traditionally, these tests are called "the
775 nightlies" because the Ceph core developers used to live and work in
776 the same time zone and from their perspective the tests were run overnight.
778 The results of the nightlies are published at http://pulpito.ceph.com/ and
779 http://pulpito.ovh.sepia.ceph.com:8081/. The developer nick shows in the
780 test results URL and in the first column of the Pulpito dashboard. The
781 results are also reported on the `ceph-qa mailing list
782 <https://ceph.com/irc/>`_ for analysis.
787 The ``suites`` directory of the `ceph/qa sub-directory`_ contains
788 all the integration tests, for all the Ceph components.
790 `ceph-deploy <https://github.com/ceph/ceph/tree/master/qa/suites/ceph-deploy>`_
791 install a Ceph cluster with ``ceph-deploy`` (`ceph-deploy man page`_)
793 `ceph-disk <https://github.com/ceph/ceph/tree/master/qa/suites/ceph-disk>`_
794 verify init scripts (upstart etc.) and udev integration with
795 ``ceph-disk`` (`ceph-disk man page`_), with and without `dmcrypt
796 <https://gitlab.com/cryptsetup/cryptsetup/wikis/DMCrypt>`_ support.
798 `dummy <https://github.com/ceph/ceph/tree/master/qa/suites/dummy>`_
799 get a machine, do nothing and return success (commonly used to
800 verify the integration testing infrastructure works as expected)
`fs <https://github.com/ceph/ceph/tree/master/qa/suites/fs>`_
   test CephFS
805 `kcephfs <https://github.com/ceph/ceph/tree/master/qa/suites/kcephfs>`_
806 test the CephFS kernel module
808 `krbd <https://github.com/ceph/ceph/tree/master/qa/suites/krbd>`_
809 test the RBD kernel module
811 `powercycle <https://github.com/ceph/ceph/tree/master/qa/suites/powercycle>`_
verify the Ceph cluster behaves when machines are powered off and on again
815 `rados <https://github.com/ceph/ceph/tree/master/qa/suites/rados>`_
run Ceph clusters including OSDs and MONs, under various conditions of stress
819 `rbd <https://github.com/ceph/ceph/tree/master/qa/suites/rbd>`_
820 run RBD tests using actual Ceph clusters, with and without qemu
822 `rgw <https://github.com/ceph/ceph/tree/master/qa/suites/rgw>`_
823 run RGW tests using actual Ceph clusters
825 `smoke <https://github.com/ceph/ceph/tree/master/qa/suites/smoke>`_
826 run tests that exercise the Ceph API with an actual Ceph cluster
828 `teuthology <https://github.com/ceph/ceph/tree/master/qa/suites/teuthology>`_
829 verify that teuthology can run integration tests, with and without OpenStack
831 `upgrade <https://github.com/ceph/ceph/tree/master/qa/suites/upgrade>`_
832 for various versions of Ceph, verify that upgrades can happen
833 without disrupting an ongoing workload
835 .. _`ceph-deploy man page`: ../../man/8/ceph-deploy
836 .. _`ceph-disk man page`: ../../man/8/ceph-disk
838 teuthology-describe-tests
839 -------------------------
841 In February 2016, a new feature called ``teuthology-describe-tests`` was
842 added to the `teuthology framework`_ to facilitate documentation and better
843 understanding of integration tests (`feature announcement
844 <http://article.gmane.org/gmane.comp.file-systems.ceph.devel/29287>`_).
846 The upshot is that tests can be documented by embedding ``meta:``
847 annotations in the yaml files used to define the tests. The results can be
848 seen in the `ceph-qa-suite wiki
849 <http://tracker.ceph.com/projects/ceph-qa-suite/wiki/>`_.
851 Since this is a new feature, many yaml files have yet to be annotated.
852 Developers are encouraged to improve the documentation, in terms of both
853 coverage and quality.
855 How integration tests are run
856 -----------------------------
858 Given that - as a new Ceph developer - you will typically not have access
859 to the `Sepia lab`_, you may rightly ask how you can run the integration
860 tests in your own environment.
862 One option is to set up a teuthology cluster on bare metal. Though this is
863 a non-trivial task, it `is` possible. Here are `some notes
864 <http://docs.ceph.com/teuthology/docs/LAB_SETUP.html>`_ to get you started
865 if you decide to go this route.
867 If you have access to an OpenStack tenant, you have another option: the
868 `teuthology framework`_ has an OpenStack backend, which is documented `here
869 <https://github.com/dachary/teuthology/tree/openstack#openstack-backend>`__.
870 This OpenStack backend can build packages from a given git commit or
871 branch, provision VMs, install the packages and run integration tests
872 on those VMs. This process is controlled using a tool called
873 `ceph-workbench ceph-qa-suite`_. This tool also automates publishing of
874 test results at http://teuthology-logs.public.ceph.com.
876 Running integration tests on your code contributions and publishing the
877 results allows reviewers to verify that changes to the code base do not
878 cause regressions, or to analyze test failures when they do occur.
880 Every teuthology cluster, whether bare-metal or cloud-provisioned, has a
so-called "teuthology machine" from which test suites are triggered using the
882 ``teuthology-suite`` command.
884 A detailed and up-to-date description of each `teuthology-suite`_ option is
885 available by running the following command on the teuthology machine::
887 $ teuthology-suite --help
889 .. _teuthology-suite: http://docs.ceph.com/teuthology/docs/teuthology.suite.html
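As a sketch, a typical invocation scheduling a small run from the teuthology
machine might look like this (the suite, priority, and email address are
illustrative)::

    $ teuthology-suite --suite dummy \
          --ceph master \
          --machine-type openstack \
          --priority 101 \
          --email you@example.com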
891 How integration tests are defined
892 ---------------------------------
894 Integration tests are defined by yaml files found in the ``suites``
895 subdirectory of the `ceph/qa sub-directory`_ and implemented by python
896 code found in the ``tasks`` subdirectory. Some tests ("standalone tests")
897 are defined in a single yaml file, while other tests are defined by a
directory tree containing yaml files that are combined, at runtime, into a
larger yaml file.
901 Reading a standalone test
902 -------------------------
904 Let us first examine a standalone test, or "singleton".
906 Here is a commented example using the integration test
907 `rados/singleton/all/admin-socket.yaml
<https://github.com/ceph/ceph/blob/master/qa/suites/rados/singleton/all/admin-socket.yaml>`_::

    roles:
    - - mon.a
      - osd.0
      - osd.1
    tasks:
    - install:
    - ceph:
    - admin_socket:
        osd.0:
          version:
          git_version:
          help:
          config show:
          config set filestore_dump_file /tmp/foo:
          config get filestore_dump_file:
928 The ``roles`` array determines the composition of the cluster (how
929 many MONs, OSDs, etc.) on which this test is designed to run, as well
930 as how these roles will be distributed over the machines in the
931 testing cluster. In this case, there is only one element in the
932 top-level array: therefore, only one machine is allocated to the
933 test. The nested array declares that this machine shall run a MON with
934 id ``a`` (that is the ``mon.a`` in the list of roles) and two OSDs
935 (``osd.0`` and ``osd.1``).
937 The body of the test is in the ``tasks`` array: each element is
938 evaluated in order, causing the corresponding python file found in the
939 ``tasks`` subdirectory of the `teuthology repository`_ or
940 `ceph/qa sub-directory`_ to be run. "Running" in this case means calling
941 the ``task()`` function defined in that file.
943 In this case, the `install
944 <https://github.com/ceph/teuthology/blob/master/teuthology/task/install/__init__.py>`_
945 task comes first. It installs the Ceph packages on each machine (as
946 defined by the ``roles`` array). A full description of the ``install``
947 task is `found in the python file
948 <https://github.com/ceph/teuthology/blob/master/teuthology/task/install/__init__.py>`_
949 (search for "def task").
951 The ``ceph`` task, which is documented `here
952 <https://github.com/ceph/ceph/blob/master/qa/tasks/ceph.py>`__ (again,
953 search for "def task"), starts OSDs and MONs (and possibly MDSs as well)
954 as required by the ``roles`` array. In this example, it will start one MON
955 (``mon.a``) and two OSDs (``osd.0`` and ``osd.1``), all on the same
machine. Control moves to the next task when the Ceph cluster reaches a
healthy state.
959 The next task is ``admin_socket`` (`source code
960 <https://github.com/ceph/ceph/blob/master/qa/tasks/admin_socket.py>`_).
961 The parameter of the ``admin_socket`` task (and any other task) is a
962 structure which is interpreted as documented in the task. In this example
963 the parameter is a set of commands to be sent to the admin socket of
``osd.0``. The task verifies that each of them returns on success (i.e.
exit code zero).
967 This test can be run with::
969 $ teuthology-suite --suite rados/singleton/all/admin-socket.yaml fs/ext4.yaml
974 Each test has a "test description", which is similar to a directory path,
975 but not the same. In the case of a standalone test, like the one in
976 `Reading a standalone test`_, the test description is identical to the
977 relative path (starting from the ``suites/`` directory of the
978 `ceph/qa sub-directory`_) of the yaml file defining the test.
980 Much more commonly, tests are defined not by a single yaml file, but by a
981 `directory tree of yaml files`. At runtime, the tree is walked and all yaml
982 files (facets) are combined into larger yaml "programs" that define the
983 tests. A full listing of the yaml defining the test is included at the
984 beginning of every test log.
986 In these cases, the description of each test consists of the
987 subdirectory under `suites/
988 <https://github.com/ceph/ceph/tree/master/qa/suites>`_ containing the
989 yaml facets, followed by an expression in curly braces (``{}``) consisting of
a list of yaml facets in order of concatenation. For instance the test
description::
993 ceph-disk/basic/{distros/centos_7.0.yaml tasks/ceph-disk.yaml}
995 signifies the concatenation of two files:
997 * ceph-disk/basic/distros/centos_7.0.yaml
998 * ceph-disk/basic/tasks/ceph-disk.yaml
1000 How are tests built from directories?
1001 -------------------------------------
1003 As noted in the previous section, most tests are not defined in a single
1004 yaml file, but rather as a `combination` of files collected from a
1005 directory tree within the ``suites/`` subdirectory of the `ceph/qa sub-directory`_.
1007 The set of all tests defined by a given subdirectory of ``suites/`` is
1008 called an "integration test suite", or a "teuthology suite".
1010 Combination of yaml facets is controlled by special files (``%`` and
1011 ``+``) that are placed within the directory tree and can be thought of as
1012 operators. The ``%`` file is the "convolution" operator and ``+``
1013 signifies concatenation.
1015 Convolution operator
1016 --------------------
1018 The convolution operator, implemented as an empty file called ``%``, tells
1019 teuthology to construct a test matrix from yaml facets found in
1020 subdirectories below the directory containing the operator.
1022 For example, the `ceph-disk suite
1023 <https://github.com/ceph/ceph/tree/jewel/qa/suites/ceph-disk/>`_ is
1024 defined by the ``suites/ceph-disk/`` tree, which consists of the files and
1025 subdirectories in the following structure::
    directory: ceph-disk/basic
        file: %
        directory: distros
            file: centos_7.0.yaml
            file: ubuntu_14.04.yaml
        directory: tasks
            file: ceph-disk.yaml
1035 This is interpreted as a 2x1 matrix consisting of two tests:
1037 1. ceph-disk/basic/{distros/centos_7.0.yaml tasks/ceph-disk.yaml}
1038 2. ceph-disk/basic/{distros/ubuntu_14.04.yaml tasks/ceph-disk.yaml}
1040 i.e. the concatenation of centos_7.0.yaml and ceph-disk.yaml and
1041 the concatenation of ubuntu_14.04.yaml and ceph-disk.yaml, respectively.
1042 In human terms, this means that the task found in ``ceph-disk.yaml`` is
1043 intended to run on both CentOS 7.0 and Ubuntu 14.04.
Without the special file percent (``%``), the ``ceph-disk`` tree would be
interpreted as
1046 three standalone tests:
1048 * ceph-disk/basic/distros/centos_7.0.yaml
1049 * ceph-disk/basic/distros/ubuntu_14.04.yaml
1050 * ceph-disk/basic/tasks/ceph-disk.yaml
1052 (which would of course be wrong in this case).
1054 Referring to the `ceph/qa sub-directory`_, you will notice that the
1055 ``centos_7.0.yaml`` and ``ubuntu_14.04.yaml`` files in the
1056 ``suites/ceph-disk/basic/distros/`` directory are implemented as symlinks.
1057 By using symlinks instead of copying, a single file can appear in multiple
1058 suites. This eases the maintenance of the test framework as a whole.
1060 All the tests generated from the ``suites/ceph-disk/`` directory tree
1061 (also known as the "ceph-disk suite") can be run with::
1063 $ teuthology-suite --suite ceph-disk
1065 An individual test from the `ceph-disk suite`_ can be run by adding the
1066 ``--filter`` option::
1068 $ teuthology-suite \
1069 --suite ceph-disk/basic \
1070 --filter 'ceph-disk/basic/{distros/ubuntu_14.04.yaml tasks/ceph-disk.yaml}'
.. note:: To run a standalone test like the one in `Reading a standalone
1073 test`_, ``--suite`` alone is sufficient. If you want to run a single
1074 test from a suite that is defined as a directory tree, ``--suite`` must
1075 be combined with ``--filter``. This is because the ``--suite`` option
1076 understands POSIX relative paths only.
1078 Concatenation operator
1079 ----------------------
1081 For even greater flexibility in sharing yaml files between suites, the
1082 special file plus (``+``) can be used to concatenate files within a
1083 directory. For instance, consider the `suites/rbd/thrash
1084 <https://github.com/ceph/ceph/tree/master/qa/suites/rbd/thrash>`_
    directory: rbd/thrash
        file: %
        directory: clusters
            file: +
            file: fixed-2.yaml
            file: openstack.yaml
        directory: workloads
            file: rbd_api_tests_copy_on_read.yaml
            file: rbd_api_tests.yaml
1097 This creates two tests:
1099 * rbd/thrash/{clusters/fixed-2.yaml clusters/openstack.yaml workloads/rbd_api_tests_copy_on_read.yaml}
1100 * rbd/thrash/{clusters/fixed-2.yaml clusters/openstack.yaml workloads/rbd_api_tests.yaml}
1102 Because the ``clusters/`` subdirectory contains the special file plus
1103 (``+``), all the other files in that subdirectory (``fixed-2.yaml`` and
1104 ``openstack.yaml`` in this case) are concatenated together
1105 and treated as a single file. Without the special file plus, they would
1106 have been convolved with the files from the workloads directory to create
1109 * rbd/thrash/{clusters/openstack.yaml workloads/rbd_api_tests_copy_on_read.yaml}
1110 * rbd/thrash/{clusters/openstack.yaml workloads/rbd_api_tests.yaml}
1111 * rbd/thrash/{clusters/fixed-2.yaml workloads/rbd_api_tests_copy_on_read.yaml}
1112 * rbd/thrash/{clusters/fixed-2.yaml workloads/rbd_api_tests.yaml}
1114 The ``clusters/fixed-2.yaml`` file is shared among many suites to
1115 define the following ``roles``::
    roles:
    - [mon.a, mon.c, osd.0, osd.1, osd.2, client.0]
    - [mon.b, osd.3, osd.4, osd.5, client.1]
The ``rbd/thrash`` suite as defined above, consisting of two tests, can be
run with::
1124 $ teuthology-suite --suite rbd/thrash
1126 A single test from the rbd/thrash suite can be run by adding the
1127 ``--filter`` option::
1129 $ teuthology-suite \
1130 --suite rbd/thrash \
1131 --filter 'rbd/thrash/{clusters/fixed-2.yaml clusters/openstack.yaml workloads/rbd_api_tests_copy_on_read.yaml}'
1133 Filtering tests by their description
1134 ------------------------------------
1136 When a few jobs fail and need to be run again, the ``--filter`` option
1137 can be used to select tests with a matching description. For instance, if the
1138 ``rados`` suite fails the `all/peer.yaml <https://github.com/ceph/ceph/blob/master/qa/suites/rados/singleton/all/peer.yaml>`_ test, the following will only run the tests that contain this file::
1140 teuthology-suite --suite rados --filter all/peer.yaml
1142 The ``--filter-out`` option does the opposite (it matches tests that do
`not` contain a given string), and can be combined with the ``--filter``
option.
1146 Both ``--filter`` and ``--filter-out`` take a comma-separated list of strings (which
1147 means the comma character is implicitly forbidden in filenames found in the
1148 `ceph/qa sub-directory`_). For instance::
1150 teuthology-suite --suite rados --filter all/peer.yaml,all/rest-api.yaml
1152 will run tests that contain either
`all/peer.yaml <https://github.com/ceph/ceph/blob/master/qa/suites/rados/singleton/all/peer.yaml>`_
or
1155 `all/rest-api.yaml <https://github.com/ceph/ceph/blob/master/qa/suites/rados/singleton/all/rest-api.yaml>`_
1157 Each string is looked up anywhere in the test description and has to
1158 be an exact match: they are not regular expressions.
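For example, to re-run the ``rados`` suite while skipping every test whose
description mentions ``ext4``, one might write (the string is
illustrative)::

    teuthology-suite --suite rados --filter-out ext4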
1160 Reducing the number of tests
1161 ----------------------------
1163 The ``rados`` suite generates thousands of tests out of a few hundred
1164 files. This happens because teuthology constructs test matrices from
1165 subdirectories wherever it encounters a file named ``%``. For instance,
1166 all tests in the `rados/basic suite
1167 <https://github.com/ceph/ceph/tree/master/qa/suites/rados/basic>`_
1168 run with different messenger types: ``simple``, ``async`` and
``random``, because they are combined (via the special file ``%``) with
all the files found in the `msgr directory
<https://github.com/ceph/ceph/tree/master/qa/suites/rados/basic/msgr>`_.
1173 All integration tests are required to be run before a Ceph release is published.
1174 When merely verifying whether a contribution can be merged without
1175 risking a trivial regression, it is enough to run a subset. The ``--subset`` option can be used to
1176 reduce the number of tests that are triggered. For instance::
1178 teuthology-suite --suite rados --subset 0/4000
1180 will run as few tests as possible. The tradeoff in this case is that
not all combinations of test variations will be run,
1182 but no matter how small a ratio is provided in the ``--subset``,
1183 teuthology will still ensure that all files in the suite are in at
1184 least one test. Understanding the actual logic that drives this
1185 requires reading the teuthology source code.
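The first number selects which slice of the suite to run, so the full suite
can also be scheduled as several smaller runs; for instance (a sketch)::

    teuthology-suite --suite rados --subset 0/4
    teuthology-suite --suite rados --subset 1/4
    teuthology-suite --suite rados --subset 2/4
    teuthology-suite --suite rados --subset 3/4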
1187 The ``--limit`` option only runs the first ``N`` tests in the suite:
this is rarely useful, however, because there is no way to control which
tests will be selected.
1191 Testing in the cloud
1192 ====================
In this chapter, we will explain in detail how to use an OpenStack
1195 tenant as an environment for Ceph integration testing.
1197 Assumptions and caveat
1198 ----------------------
1202 1. you are the only person using the tenant
1203 2. you have the credentials
1204 3. the tenant supports the ``nova`` and ``cinder`` APIs
1206 Caveat: be aware that, as of this writing (July 2016), testing in
1207 OpenStack clouds is a new feature. Things may not work as advertised.
1208 If you run into trouble, ask for help on `IRC`_ or the `Mailing list`_, or
1209 open a bug report at the `ceph-workbench bug tracker`_.
1211 .. _`ceph-workbench bug tracker`: http://ceph-workbench.dachary.org/root/ceph-workbench/issues
1216 If you have not tried to use ``ceph-workbench`` with this tenant before,
1217 proceed to the next step.
To start with a clean slate, log in to your tenant via the Horizon dashboard and:
1221 * terminate the ``teuthology`` and ``packages-repository`` instances, if any
1222 * delete the ``teuthology`` and ``teuthology-worker`` security groups, if any
1223 * delete the ``teuthology`` and ``teuthology-myself`` key pairs, if any
1225 Also do the above if you ever get key-related errors ("invalid key", etc.) when
1226 trying to schedule suites.
1228 Getting ceph-workbench
1229 ----------------------
1231 Since testing in the cloud is done using the `ceph-workbench
1232 ceph-qa-suite`_ tool, you will need to install that first. It is designed
1233 to be installed via Docker, so if you don't have Docker running on your
1234 development machine, take care of that first. You can follow `the official
tutorial <https://docs.docker.com/engine/installation/>`_ to install it
if you have not done so already.
1238 Once Docker is up and running, install ``ceph-workbench`` by following the
1239 `Installation instructions in the ceph-workbench documentation
1240 <http://ceph-workbench.readthedocs.org/en/latest/#installation>`_.
1242 Linking ceph-workbench with your OpenStack tenant
1243 -------------------------------------------------
1245 Before you can trigger your first teuthology suite, you will need to link
1246 ``ceph-workbench`` with your OpenStack account.
First, download an ``openrc.sh`` file by clicking on the "Download OpenStack
1249 RC File" button, which can be found in the "API Access" tab of the "Access
1250 & Security" dialog of the OpenStack Horizon dashboard.
1252 Second, create a ``~/.ceph-workbench`` directory, set its permissions to
1253 700, and move the ``openrc.sh`` file into it. Make sure that the filename
1254 is exactly ``~/.ceph-workbench/openrc.sh``.
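The sequence might look like this (the download location is illustrative)::

    $ mkdir -m 700 ~/.ceph-workbench
    $ mv ~/Downloads/openrc.sh ~/.ceph-workbench/openrc.sh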
1256 Third, edit the file so it does not ask for your OpenStack password
interactively. Comment out the relevant lines and replace them with
something like::
1260 export OS_PASSWORD="aiVeth0aejee3eep8rogho3eep7Pha6ek"
1262 When `ceph-workbench ceph-qa-suite`_ connects to your OpenStack tenant for
the first time, it will generate two keypairs: ``teuthology-myself`` and
``teuthology``.
1266 .. If this is not the first time you have tried to use
1267 .. `ceph-workbench ceph-qa-suite`_ with this tenant, make sure to delete any
1268 .. stale keypairs with these names!
You are now ready to take your OpenStack teuthology setup for a test
drive::
1276 $ ceph-workbench ceph-qa-suite --suite dummy
1278 Be forewarned that the first run of `ceph-workbench ceph-qa-suite`_ on a
1279 pristine tenant will take a long time to complete because it downloads a VM
1280 image and during this time the command may not produce any output.
1282 The images are cached in OpenStack, so they are only downloaded once.
1283 Subsequent runs of the same command will complete faster.
Although the ``dummy`` suite does not run any tests, in all other respects it
behaves just like a teuthology suite and produces some of the same artifacts.
1289 The last bit of output should look something like this::
1291 pulpito web interface: http://149.202.168.201:8081/
1292 ssh access : ssh -i /home/smithfarm/.ceph-workbench/teuthology-myself.pem ubuntu@149.202.168.201 # logs in /usr/share/nginx/html
1294 What this means is that `ceph-workbench ceph-qa-suite`_ triggered the test
1295 suite run. It does not mean that the suite run has completed. To monitor
1296 progress of the run, check the Pulpito web interface URL periodically, or
if you are impatient, ssh to the teuthology machine using the ssh command
reported above and run::
1300 $ tail -f /var/log/teuthology.*
The ``/usr/share/nginx/html`` directory contains the complete logs of the
1303 test suite. If we had provided the ``--upload`` option to the
1304 `ceph-workbench ceph-qa-suite`_ command, these logs would have been
1305 uploaded to http://teuthology-logs.public.ceph.com.
1307 Run a standalone test
1308 ---------------------
1310 The standalone test explained in `Reading a standalone test`_ can be run
1311 with the following command::
1313 $ ceph-workbench ceph-qa-suite --suite rados/singleton/all/admin-socket.yaml
1315 This will run the suite shown on the current ``master`` branch of
1316 ``ceph/ceph.git``. You can specify a different branch with the ``--ceph``
1317 option, and even a different git repo with the ``--ceph-git-url`` option. (Run
1318 ``ceph-workbench ceph-qa-suite --help`` for an up-to-date list of available
The first run of a suite will also take a long time, because Ceph packages
have to be built first. Again, the packages so built are cached, and
`ceph-workbench ceph-qa-suite`_ will not build identical packages a second
time.
1326 Interrupt a running suite
1327 -------------------------
1329 Teuthology suites take time to run. From time to time one may wish to
1330 interrupt a running suite. One obvious way to do this is::
1332 ceph-workbench ceph-qa-suite --teardown
1334 This destroys all VMs created by `ceph-workbench ceph-qa-suite`_ and
1335 returns the OpenStack tenant to a "clean slate".
1337 Sometimes you may wish to interrupt the running suite, but keep the logs,
1338 the teuthology VM, the packages-repository VM, etc. To do this, you can
1339 ``ssh`` to the teuthology VM (using the ``ssh access`` command reported
when you triggered the suite -- see `Run the dummy suite`_) and, once
logged in, run::
1343 sudo /etc/init.d/teuthology restart
1345 This will keep the teuthology machine, the logs and the packages-repository
1346 instance but nuke everything else.
1348 Upload logs to archive server
1349 -----------------------------
1351 Since the teuthology instance in OpenStack is only semi-permanent, with limited
1352 space for storing logs, ``teuthology-openstack`` provides an ``--upload``
1353 option which, if included in the ``ceph-workbench ceph-qa-suite`` command,
1354 will cause logs from all failed jobs to be uploaded to the log archive server
1355 maintained by the Ceph project. The logs will appear at the URL::
1357 http://teuthology-logs.public.ceph.com/$RUN
1359 where ``$RUN`` is the name of the run. It will be a string like this::
1361 ubuntu-2016-07-23_16:08:12-rados-hammer-backports---basic-openstack
Even if you don't provide the ``--upload`` option, however, all the logs can
1364 still be found on the teuthology machine in the directory
1365 ``/usr/share/nginx/html``.
1367 Provision VMs ad hoc
1368 --------------------
1370 From the teuthology VM, it is possible to provision machines on an "ad hoc"
1371 basis, to use however you like. The magic incantation is::
1373 teuthology-lock --lock-many $NUMBER_OF_MACHINES \
1374 --os-type $OPERATING_SYSTEM \
1375 --os-version $OS_VERSION \
1376 --machine-type openstack \
1377 --owner $EMAIL_ADDRESS
1379 The command must be issued from the ``~/teuthology`` directory. The possible
values for ``OPERATING_SYSTEM`` and ``OS_VERSION`` can be found by examining
1381 the contents of the directory ``teuthology/openstack/``. For example::
1383 teuthology-lock --lock-many 1 --os-type ubuntu --os-version 16.04 \
1384 --machine-type openstack --owner foo@example.com
1386 When you are finished with the machine, find it in the list of machines::
1388 openstack server list
1390 to determine the name or ID, and then terminate it with::
1392 openstack server delete $NAME_OR_ID
1394 Deploy a cluster for manual testing
1395 -----------------------------------
1397 The `teuthology framework`_ and `ceph-workbench ceph-qa-suite`_ are
1398 versatile tools that automatically provision Ceph clusters in the cloud and
1399 run various tests on them in an automated fashion. This enables a single
1400 engineer, in a matter of hours, to perform thousands of tests that would
1401 keep dozens of human testers occupied for days or weeks if conducted
1404 However, there are times when the automated tests do not cover a particular
1405 scenario and manual testing is desired. It turns out that it is simple to
1406 adapt a test to stop and wait after the Ceph installation phase, and the
1407 engineer can then ssh into the running cluster. Simply add the following
snippet in the desired place within the test YAML and schedule a run with
the modified test::

    - exec:
        client.0:
          - sleep 1000000000 # forever
(Make sure you have a ``client.0`` defined in your ``roles`` stanza or adapt
the snippet accordingly.)
The same effect can be achieved using the ``interactive`` task::

    - interactive:
1424 By following the test log, you can determine when the test cluster has entered
1425 the "sleep forever" condition. At that point, you can ssh to the teuthology
1426 machine and from there to one of the target VMs (OpenStack) or teuthology
worker machines (Sepia) where the test cluster is running.
1429 The VMs (or "instances" in OpenStack terminology) created by
1430 `ceph-workbench ceph-qa-suite`_ are named as follows:
1432 ``teuthology`` - the teuthology machine
1434 ``packages-repository`` - VM where packages are stored
1436 ``ceph-*`` - VM where packages are built
1438 ``target*`` - machines where tests are run
1440 The VMs named ``target*`` are used by tests. If you are monitoring the
1441 teuthology log for a given test, the hostnames of these target machines can
be found by searching for the string ``Locked targets``::
1444 2016-03-20T11:39:06.166 INFO:teuthology.task.internal:Locked targets:
1445 target149202171058.teuthology: null
1446 target149202171059.teuthology: null
1448 The IP addresses of the target machines can be found by running ``openstack
1449 server list`` on the teuthology machine, but the target VM hostnames (e.g.
``target149202171058.teuthology``) are resolvable within the teuthology
cluster.
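For example, from the teuthology VM one could log into a target with (the
hostname is taken from the log excerpt above, and the user is the same
``ubuntu`` user shown in the earlier ``ssh access`` line)::

    $ ssh ubuntu@target149202171058.teuthology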
1454 Testing - how to run s3-tests locally
1455 =====================================
1457 RGW code can be tested by building Ceph locally from source, starting a vstart
1458 cluster, and running the "s3-tests" suite against it.
1460 The following instructions should work on jewel and above.
1465 Refer to :doc:`/install/build-ceph`.
1467 You can do step 2 separately while it is building.
1472 When the build completes, and still in the top-level directory of the git
1473 clone where you built Ceph, do the following, for cmake builds::
    cd build
    RGW=1 ../vstart.sh -n
1478 This will produce a lot of output as the vstart cluster is started up. At the
1479 end you should see a message like::
1481 started. stop.sh to stop. see out/* (e.g. 'tail -f out/????') for debug output.
1483 This means the cluster is running.
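To confirm the cluster is responding, one can query its status from the
``build`` directory (a sketch; vstart writes its configuration next to the
binaries, so the plain command works from there)::

    $ ./bin/ceph -s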
1486 Step 3 - run s3-tests
1487 ---------------------
To run the s3-tests suite, do the following::
1491 $ ../qa/workunits/rgw/run-s3tests.sh
1496 .. Building RPM packages
1497 .. ---------------------
1499 .. Ceph is regularly built and packaged for a number of major Linux
1500 .. distributions. At the time of this writing, these included CentOS, Debian,
1501 .. Fedora, openSUSE, and Ubuntu.
1506 .. Ceph is a collection of components built on top of RADOS and provide
1507 .. services (RBD, RGW, CephFS) and APIs (S3, Swift, POSIX) for the user to
1508 .. store and retrieve data.
1510 .. See :doc:`/architecture` for an overview of Ceph architecture. The
1511 .. following sections treat each of the major architectural components
1512 .. in more detail, with links to code and tests.
1514 .. FIXME The following are just stubs. These need to be developed into
1515 .. detailed descriptions of the various high-level components (RADOS, RGW,
1516 .. etc.) with breakdowns of their respective subcomponents.
1518 .. FIXME Later, in the Testing chapter I would like to take another look
1519 .. at these components/subcomponents with a focus on how they are tested.
1524 .. RADOS stands for "Reliable, Autonomic Distributed Object Store". In a Ceph
1525 .. cluster, all data are stored in objects, and RADOS is the component responsible
1528 .. RADOS itself can be further broken down into Monitors, Object Storage Daemons
1529 .. (OSDs), and client APIs (librados). Monitors and OSDs are introduced at
1530 .. :doc:`/start/intro`. The client library is explained at
1531 .. :doc:`/rados/api/index`.
1536 .. RGW stands for RADOS Gateway. Using the embedded HTTP server civetweb_ or
1537 .. Apache FastCGI, RGW provides a REST interface to RADOS objects.
1539 .. .. _civetweb: https://github.com/civetweb/civetweb
1541 .. A more thorough introduction to RGW can be found at :doc:`/radosgw/index`.
1546 .. RBD stands for RADOS Block Device. It enables a Ceph cluster to store disk
1547 .. images, and includes in-kernel code enabling RBD images to be mounted.
1549 .. To delve further into RBD, see :doc:`/rbd/rbd`.
1554 .. CephFS is a distributed file system that enables a Ceph cluster to be used as a NAS.
1556 .. File system metadata is managed by Meta Data Server (MDS) daemons. The Ceph
1557 .. file system is explained in more detail at :doc:`/cephfs/index`.