Ceph is growing rapidly. As firms deploy Ceph, technical terms such as
"RADOS", "RBD", "RGW", and so forth require corresponding marketing terms
that explain what each component does. The terms in this glossary are
intended to complement the existing technical terminology.

Sometimes more than one term applies to a definition. Generally, the first
term is the one consistent with Ceph's marketing, and secondary terms are
either technical terms or legacy ways of referring to Ceph systems.

.. glossary::

    Ceph Project
        The aggregate term for the people, software, mission, and
        infrastructure of Ceph.

    cephx
        The Ceph authentication protocol. Cephx operates like Kerberos, but
        it has no single point of failure.
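
        A minimal sketch of authenticating to a cluster with cephx through
        the ``rados`` Python binding; the configuration path, keyring path,
        and client name are illustrative assumptions, not requirements:

        .. code-block:: python

           import rados

           # Placeholder paths and client name; adjust for your deployment.
           cluster = rados.Rados(
               conffile='/etc/ceph/ceph.conf',
               rados_id='admin',
               conf=dict(keyring='/etc/ceph/ceph.client.admin.keyring'))
           cluster.connect()  # the cephx handshake happens here when auth is on
           print(cluster.get_fsid())
           cluster.shutdown()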

    Ceph
    Ceph Platform
        All Ceph software, which includes any piece of code hosted at
        `http://github.com/ceph`_.

    Ceph System
    Ceph Stack
        A collection of two or more components of Ceph.

    Ceph Node
    Node
    Host
        Any single machine or server in a Ceph System.

    Ceph Storage Cluster
    Ceph Object Store
    RADOS
    RADOS Cluster
    Reliable Autonomic Distributed Object Store
        The core set of storage software that stores the user's data
        (MON+OSD).
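
        A minimal sketch of storing and retrieving an object in a storage
        cluster with the ``rados`` Python binding; the pool name ``data``
        is an assumption and must already exist:

        .. code-block:: python

           import rados

           cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
           cluster.connect()

           ioctx = cluster.open_ioctx('data')       # assumed pool name
           ioctx.write_full('hello-object', b'hello from RADOS')
           print(ioctx.read('hello-object'))

           ioctx.close()
           cluster.shutdown()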

    Ceph Cluster Map
    cluster map
        The set of maps comprising the monitor map, OSD map, PG map, MDS
        map, and CRUSH map. See `Cluster Map`_ for details.

    Ceph Object Storage
        The object storage "product", service, or capabilities, which
        consist essentially of a Ceph Storage Cluster and a Ceph Object
        Gateway.

    Ceph Object Gateway
    RADOS Gateway
    RGW
        The S3/Swift gateway component of Ceph.
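
        A minimal sketch of talking to the gateway through its S3 API with
        ``boto3``; the endpoint, credentials, and bucket name are
        placeholders for a running RGW instance:

        .. code-block:: python

           import boto3

           # All values below are assumptions about a local deployment.
           s3 = boto3.client(
               's3',
               endpoint_url='http://rgw.example.com:7480',
               aws_access_key_id='ACCESS_KEY',
               aws_secret_access_key='SECRET_KEY')
           s3.create_bucket(Bucket='demo-bucket')
           s3.put_object(Bucket='demo-bucket', Key='hello.txt',
                         Body=b'served by RGW')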

    Ceph Block Device
    RBD
        The block storage component of Ceph.

    Ceph Block Storage
        The block storage "product", service, or capabilities when used in
        conjunction with ``librbd``, a hypervisor such as QEMU or Xen, and
        a hypervisor abstraction layer such as ``libvirt``.
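
        A minimal sketch of creating and writing to a block image with the
        ``rbd`` Python binding (which wraps ``librbd``); the pool and image
        names are assumptions:

        .. code-block:: python

           import rados
           import rbd

           cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
           cluster.connect()
           ioctx = cluster.open_ioctx('rbd')        # assumed pool name

           rbd.RBD().create(ioctx, 'demo-image', 4 * 1024**3)  # 4 GiB
           image = rbd.Image(ioctx, 'demo-image')
           try:
               image.write(b'first bytes of the image', 0)
           finally:
               image.close()

           ioctx.close()
           cluster.shutdown()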

    Ceph Filesystem
    CephFS
    Ceph FS
        The POSIX filesystem components of Ceph.
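
        A minimal sketch of creating a file through the ``cephfs`` Python
        binding; it assumes a cluster with an active MDS and a configured
        client keyring, and the file path is a placeholder:

        .. code-block:: python

           import cephfs

           fs = cephfs.LibCephFS(conffile='/etc/ceph/ceph.conf')
           fs.mount()

           # 'w' opens for writing, creating the file if needed.
           fd = fs.open('/greeting.txt', 'w', 0o644)
           fs.write(fd, b'hello from CephFS', 0)
           fs.close(fd)

           fs.unmount()
           fs.shutdown()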

    Cloud Platforms
    Cloud Stacks
        Third-party cloud provisioning platforms such as OpenStack,
        CloudStack, OpenNebula, and Proxmox.

    Object Storage Device
    OSD
        A physical or logical storage unit (*e.g.*, LUN). Ceph users
        sometimes use the term "OSD" to refer to the
        :term:`Ceph OSD Daemon`, though the proper term is "Ceph OSD".

    Ceph OSD Daemon
    Ceph OSD Daemons
    Ceph OSD
        The Ceph OSD software, which interacts with a logical disk
        (:term:`OSD`). Ceph users sometimes use the term "OSD" to refer to
        the "Ceph OSD Daemon", though the proper term is "Ceph OSD".

    OSD id
        The integer that defines an OSD. It is generated by the monitors
        as part of the creation of a new OSD.

    OSD fsid
        A unique identifier that further improves the uniqueness of an OSD.
        It is found in the OSD path, in a file called ``osd_fsid``. The
        term ``fsid`` is used interchangeably with ``uuid``.

    OSD uuid
        Just like the OSD fsid, this is the OSD's unique identifier; it is
        used interchangeably with ``fsid``.

    bluestore
        OSD BlueStore is a new back end for OSD daemons (Kraken and newer
        releases). Unlike :term:`filestore`, it stores objects directly on
        the block device, without any file system interface.

    filestore
        A back end for OSD daemons that requires a journal and writes
        objects as files on a filesystem.

    Ceph Monitor
    MON
        The Ceph monitor software.

    Ceph Manager
    MGR
        The Ceph manager software, which collects all the state from the
        whole cluster in one place.

    Ceph Metadata Server
    MDS
        The Ceph metadata software.

    Ceph Clients
    Ceph Client
        The collection of Ceph components that can access a Ceph Storage
        Cluster. These include the Ceph Object Gateway, the Ceph Block
        Device, the Ceph Filesystem, and their corresponding libraries,
        kernel modules, and FUSEs.

    Ceph Kernel Modules
        The collection of kernel modules that can be used to interact with
        the Ceph System (e.g., ``ceph.ko``, ``rbd.ko``).

    Ceph Client Libraries
        The collection of libraries that can be used to interact with
        components of the Ceph System.

    Ceph Release
        Any distinct numbered version of Ceph.

    Ceph Point Release
        Any ad hoc release that includes only bug or security fixes.

    Ceph Interim Release
        A version of Ceph that has not yet been put through quality
        assurance testing but may contain new features.

    Ceph Release Candidate
        A major version of Ceph that has undergone initial quality
        assurance testing and is ready for beta testers.

    Ceph Stable Release
        A major version of Ceph in which all features from the preceding
        interim releases have successfully passed quality assurance
        testing.

    Ceph Test Framework
    Teuthology
        The collection of software that performs scripted tests on Ceph.

    CRUSH
        Controlled Replication Under Scalable Hashing. The algorithm that
        Ceph uses to compute object storage locations.
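
        A toy sketch of the two-step placement idea only, not the real
        CRUSH algorithm: the object name is hashed onto a placement group
        (PG), and the PG is then mapped onto an ordered set of OSDs. Real
        CRUSH walks a weighted, hierarchical cluster map; the second hash
        below merely stands in for that step:

        .. code-block:: python

           import hashlib

           def place(object_name, pg_num, osds, replicas=2):
               # Step 1: hash the object name onto a PG.
               pg = int(hashlib.md5(object_name.encode()).hexdigest(), 16) % pg_num
               # Step 2 (stand-in for CRUSH): rank OSDs by a hash keyed on
               # the PG and keep the first `replicas` of them.
               ranked = sorted(
                   osds,
                   key=lambda osd: hashlib.md5(f'{pg}.{osd}'.encode()).hexdigest())
               return pg, ranked[:replicas]

           print(place('hello-object', pg_num=128, osds=range(6)))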

    CRUSH ruleset
        A set of CRUSH data placement rules that applies to one or more
        pools.

    Pool
    Pools
        Pools are logical partitions for storing objects.
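
        A minimal sketch of creating and listing pools with the ``rados``
        Python binding; the pool name is a placeholder:

        .. code-block:: python

           import rados

           cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
           cluster.connect()

           if not cluster.pool_exists('demo-pool'):   # placeholder name
               cluster.create_pool('demo-pool')
           print(cluster.list_pools())

           cluster.shutdown()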

    systemd oneshot
        A systemd service ``Type`` in which the command defined in
        ``ExecStart`` runs to completion and then exits (the process is not
        intended to daemonize).
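
        A minimal sketch of such a unit; the unit description and command
        are illustrative:

        .. code-block:: ini

           [Unit]
           Description=Example one-shot task

           [Service]
           Type=oneshot
           ExecStart=/usr/bin/true
           # Keep the unit reported as "active" after the command exits.
           RemainAfterExit=yes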

    LVM tags
        Extensible metadata for LVM volumes and groups. They are used to
        store Ceph-specific information about devices and their
        relationship with OSDs.
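
        A brief sketch of inspecting and setting such tags with standard
        LVM tooling; the volume group, logical volume, and tag value are
        placeholders:

        .. code-block:: console

           $ # Show the tags stored on each logical volume.
           $ lvs -o lv_name,lv_tags
           $ # Attach a Ceph-style tag to a placeholder volume by hand.
           $ lvchange --addtag ceph.osd_id=0 vg0/osd-block-0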

.. _http://github.com/ceph: http://github.com/ceph
.. _Cluster Map: ../architecture#cluster-map