==========================
Placement Group Concepts
==========================
When you execute commands like ``ceph -w``, ``ceph osd dump``, and other
commands related to placement groups, Ceph may return values using some
of the following terms:
*Peering*
  The process of bringing all of the OSDs that store a Placement Group
  (PG) into agreement about the state of all of the objects (and their
  metadata) in that PG. Note that agreeing on the state does not mean
  that they all have the latest contents.
*Acting Set*
  The ordered list of OSDs that are (or were, as of some epoch)
  responsible for a particular placement group.
*Up Set*
  The ordered list of OSDs responsible for a particular placement group
  for a particular epoch, according to CRUSH. Normally this is the same
  as the *Acting Set*, except when the *Acting Set* has been explicitly
  overridden via ``pg_temp`` in the OSD map.
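The relationship between the two sets can be sketched as follows. This is
an illustrative model only, not Ceph's actual code: the function name and
the representation of ``pg_temp`` as a plain mapping are invented for the
example.

```python
# Illustrative sketch only: models how the acting set relates to the
# up set and a pg_temp override. Names here are hypothetical, not Ceph APIs.

def acting_set(up_set, pg_temp, pgid):
    """The acting set is the up set unless pg_temp overrides it."""
    return pg_temp.get(pgid, up_set)

# CRUSH says OSDs 3, 1, 4 should hold PG 1.7 (the up set)...
up = [3, 1, 4]

# ...but a pg_temp entry keeps a different acting set in place, for
# example while newly mapped OSDs are still being backfilled.
pg_temp = {"1.7": [0, 1, 4]}

print(acting_set(up, pg_temp, "1.7"))  # overridden: [0, 1, 4]
print(acting_set(up, pg_temp, "2.3"))  # no override: [3, 1, 4]
```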
*Current Interval* or *Past Interval*
  A sequence of OSD map epochs during which the *Acting Set* and *Up
  Set* for a particular placement group do not change.
*Primary*
  The member of the *Acting Set* (by convention, the first) that is
  responsible for coordinating peering, and the only OSD that will
  accept client-initiated writes to objects in a placement group.
*Replica*
  A non-primary OSD in the *Acting Set* for a placement group (one that
  has been recognized as such and *activated* by the primary).
*Stray*
  An OSD that is not a member of the current *Acting Set*, but has not
  yet been told that it can delete its copies of a particular placement
  group.
*Recovery*
  Ensuring that copies of all of the objects in a placement group are on
  all of the OSDs in the *Acting Set*. Once *Peering* has been
  performed, the *Primary* can start accepting write operations, and
  *Recovery* can proceed in the background.
*PG Info*
  Basic metadata about the placement group's creation epoch, the version
  of the most recent write to the placement group, *last epoch started*,
  *last epoch clean*, and the beginning of the *current interval*. Any
  inter-OSD communication about placement groups includes the *PG Info*,
  such that any OSD that knows a placement group exists (or once
  existed) also has a lower bound on *last epoch clean* or *last epoch
  started*.
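The fields listed above can be pictured as a small record. This is a
simplified sketch for orientation only; the field names are invented and
do not match Ceph's actual ``pg_info_t`` structure.

```python
# Illustrative record of the PG Info fields described above.
# Field names are simplified and hypothetical, not Ceph's pg_info_t.
from dataclasses import dataclass

@dataclass
class PGInfo:
    created: int              # OSD map epoch in which the PG was created
    last_update: int          # version of the most recent write to the PG
    last_epoch_started: int
    last_epoch_clean: int
    same_interval_since: int  # first epoch of the current interval

info = PGInfo(created=4, last_update=100,
              last_epoch_started=12, last_epoch_clean=12,
              same_interval_since=12)
print(info.last_epoch_started)  # 12
```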
*PG Log*
  A list of recent updates made to objects in a placement group. Note
  that these logs can be truncated after all OSDs in the *Acting Set*
  have acknowledged the updates up to a certain point.
*Missing Set*
  Each OSD notes update log entries and, if they imply updates to the
  contents of an object, adds that object to a list of needed updates.
  This list is called the *Missing Set* for that ``<OSD,PG>``.
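A rough sketch of that bookkeeping, with invented names and versions
simplified to integers (not Ceph's implementation): replaying the PG log
against the object versions an OSD actually has yields the objects that
still need updates.

```python
# Illustrative sketch only (hypothetical names, versions as plain ints):
# derive a missing set by replaying a PG log against the versions an OSD
# has actually applied.

def build_missing_set(pg_log, have_versions):
    """Return {object: needed_version} for log entries the OSD lacks."""
    missing = {}
    for obj, version in pg_log:               # log entries in order
        if have_versions.get(obj, 0) < version:
            missing[obj] = version            # latest needed version wins
    return missing

log = [("foo", 5), ("bar", 3), ("foo", 6)]
have = {"foo": 5, "bar": 3}                   # OSD never applied foo v6

print(build_missing_set(log, have))           # {'foo': 6}
```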
*Authoritative History*
  A complete and fully ordered set of operations that, if performed,
  would bring an OSD's copy of a placement group up to date.
*Epoch*
  A (monotonically increasing) OSD map version number.
*Last Epoch Started*
  The last epoch at which all nodes in the *Acting Set* for a particular
  placement group agreed on an *Authoritative History*. At this point,
  *Peering* is deemed to have been successful.
*up_thru*
  Before a *Primary* can successfully complete the *Peering* process, it
  must inform a monitor that it is alive through the current OSD map
  *Epoch* by having the monitor set its *up_thru* in the OSD map. This
  helps *Peering* ignore previous *Acting Sets* for which *Peering*
  never completed after certain sequences of failures, such as the
  second interval below:

  - *acting set* = [A,B]
  - *acting set* = [A]
  - *acting set* = [] very shortly after (e.g., simultaneous failure, but staggered detection)
  - *acting set* = [B] (B restarts, A does not)
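To make the role of *up_thru* concrete, here is a simplified sketch
(hypothetical names, not Ceph code) of the check this enables: an
interval could only have gone active, and therefore served writes, if
its primary's recorded *up_thru* reached the epoch at which that
interval began.

```python
# Simplified sketch (not Ceph's implementation): an interval could have
# served writes only if its primary's up_thru reached the epoch at which
# the interval began.

def interval_may_have_been_active(interval_start_epoch, primary_up_thru):
    return primary_up_thru >= interval_start_epoch

# Suppose the second interval above began at epoch 20, but its primary's
# up_thru is still 10: peering never completed there, so that interval
# can safely be ignored during later peering.
print(interval_may_have_been_active(20, 10))  # False
print(interval_may_have_been_active(20, 25))  # True
```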
*Last Epoch Clean*
  The last *Epoch* at which all nodes in the *Acting Set* for a
  particular placement group were completely up to date (both placement
  group logs and object contents). At this point, *recovery* is deemed
  to have been completed.