======================================
 Pool, PG and CRUSH Config Reference
======================================
.. index:: pools; configuration

When you create pools and set the number of placement groups for a pool, Ceph
uses default values unless you specifically override them. **We recommend**
overriding some of the defaults. Specifically, we recommend setting a pool's
replica size and overriding the default number of placement groups. You can
set these values when running `pool`_ commands. You can also override the
defaults by adding new ones to the ``[global]`` section of your Ceph
configuration file.

.. literalinclude:: pool-pg.conf
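
As a minimal sketch of such ``[global]`` overrides (the values below are
illustrative only, not recommendations; the option names are documented in
the reference that follows):

.. code-block:: ini

    [global]
        # Illustrative values only -- size pools for your own cluster.
        osd pool default size = 3
        osd pool default min size = 2
        osd pool default pg num = 128
        osd pool default pgp num = 128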

``mon max pool pg num``

:Description: The maximum number of placement groups per pool.

``mon pg create interval``

:Description: Number of seconds between PG creation in the same
              Ceph OSD Daemon.

``mon pg stuck threshold``

:Description: Number of seconds after which PGs can be considered as
              being stuck.

``mon pg min inactive``

:Description: Issue a ``HEALTH_ERR`` in the cluster log if the number of PGs
              that stay inactive longer than ``mon_pg_stuck_threshold``
              exceeds this setting. A non-positive number disables this
              check, so the cluster never goes into ``HEALTH_ERR`` for this
              reason.

``mon pg warn min per osd``

:Description: Issue a ``HEALTH_WARN`` in the cluster log if the average number
              of PGs per ``in`` OSD is under this number (a non-positive
              number disables this).

``mon pg warn max per osd``

:Description: Issue a ``HEALTH_WARN`` in the cluster log if the average number
              of PGs per ``in`` OSD is above this number (a non-positive
              number disables this).

``mon pg warn min objects``

:Description: Do not warn if the total number of objects in the cluster is
              below this number.

``mon pg warn min pool objects``

:Description: Do not warn on pools whose object number is below this number.

``mon pg check down all threshold``

:Description: Percentage threshold of ``down`` OSDs above which we check all
              PGs for stale ones.

``mon pg warn max object skew``

:Description: Issue a ``HEALTH_WARN`` in the cluster log if the average object
              number of a certain pool is greater than
              ``mon pg warn max object skew`` times the average object number
              of the whole cluster (a non-positive number disables this).
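
For example, with an illustrative skew threshold of ``10``, a pool averaging
1,500 objects would trigger the warning while the cluster-wide average sits at
100 objects, since 1,500 > 10 * 100.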

``mon delta reset interval``

:Description: Seconds of inactivity before we reset the PG delta to 0. We keep
              track of the delta of the used space of each pool so that, for
              example, it is easier to understand the progress of recovery or
              the performance of a cache tier. But if no activity is reported
              for a certain pool, we just reset the history of deltas for that
              pool.

``mon osd max op age``

:Description: Maximum op age before we get concerned (make it a power of 2).
              A ``HEALTH_WARN`` will be issued if a request has been blocked
              longer than this limit.

``osd pg bits``

:Description: Placement group bits per Ceph OSD Daemon.
:Type: 32-bit Integer

``osd pgp bits``

:Description: The number of bits per Ceph OSD Daemon for PGPs.
:Type: 32-bit Integer

``osd crush chooseleaf type``

:Description: The bucket type to use for ``chooseleaf`` in a CRUSH rule. Uses
              ordinal rank rather than name.
:Type: 32-bit Integer
:Default: ``1``. Typically a host containing one or more Ceph OSD Daemons.
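
In the default CRUSH hierarchy, bucket type ``0`` is ``osd`` and type ``1`` is
``host``. As a hedged sketch, a single-node test cluster might lower this so
that replicas are placed across OSDs on the same host:

.. code-block:: ini

    [global]
        # Illustrative override for a one-host test cluster:
        # choose leaves of bucket type 0 ("osd") instead of
        # the default type 1 ("host").
        osd crush chooseleaf type = 0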

``osd crush initial weight``

:Description: The initial CRUSH weight for newly added OSDs in the CRUSH map.
:Default: the size of a newly added OSD, in TB. By default, the initial CRUSH
          weight of a newly added OSD is set to its volume size in TB. See
          `Weighting Bucket Items`_ for details.

``osd pool default crush replicated ruleset``

:Description: The default CRUSH ruleset to use when creating a replicated pool.
:Default: ``CEPH_DEFAULT_CRUSH_REPLICATED_RULESET``, which means "pick a
          ruleset with the lowest numerical ID and use that". This is to
          make pool creation work in the absence of ruleset 0.

``osd pool erasure code stripe unit``

:Description: Sets the default size, in bytes, of a chunk of an object
              stripe for erasure coded pools. Every object of size S
              will be stored as N stripes, with each data chunk
              receiving ``stripe unit`` bytes. Each stripe of ``N *
              stripe unit`` bytes will be encoded/decoded individually.
              This option is overridden by the ``stripe_unit`` setting
              in an erasure code profile.
:Type: Unsigned 32-bit Integer
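
As a worked sketch (the profile values here are assumptions for illustration):
with an erasure code profile that has two data chunks and a stripe unit of
``65536``, each stripe carries ``2 * 65536 = 131072`` bytes of object data, so
a 1 MiB object is stored as 8 stripes. Setting the default might look like:

.. code-block:: ini

    [global]
        # Illustrative: 64 KiB per data chunk of each stripe.
        osd pool erasure code stripe unit = 65536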

``osd pool default size``

:Description: Sets the number of replicas for objects in the pool. The default
              value is the same as
              ``ceph osd pool set {pool-name} size {size}``.
:Type: 32-bit Integer

``osd pool default min size``

:Description: Sets the minimum number of written replicas for objects in the
              pool in order to acknowledge a write operation to the client.
              If this minimum is not met, Ceph will not acknowledge the write
              to the client. This setting ensures a minimum number of replicas
              when operating in ``degraded`` mode.
:Type: 32-bit Integer
:Default: ``0``, which means no particular minimum. If ``0``, the minimum is
          ``size - (size / 2)``.
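
As a worked example of the ``size - (size / 2)`` fallback (the division is
integer division): with ``size = 3``, the effective minimum is ``3 - 1 = 2``;
with ``size = 2``, it is ``2 - 1 = 1``. Setting both explicitly might look
like this (illustrative values, not a recommendation):

.. code-block:: ini

    [global]
        # Illustrative: keep three replicas, acknowledge writes
        # to the client once two replicas are written.
        osd pool default size = 3
        osd pool default min size = 2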

``osd pool default pg num``

:Description: The default number of placement groups for a pool. The default
              value is the same as ``pg_num`` with ``mkpool``.
:Type: 32-bit Integer

``osd pool default pgp num``

:Description: The default number of placement groups for placement for a pool.
              The default value is the same as ``pgp_num`` with ``mkpool``.
              PG and PGP should be equal (for now).
:Type: 32-bit Integer
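
Since PG and PGP should be equal, a hedged sketch of setting both defaults
together (the value ``128`` is illustrative; size placement groups for your
own cluster):

.. code-block:: ini

    [global]
        # Illustrative: keep pg_num and pgp_num in lockstep.
        osd pool default pg num = 128
        osd pool default pgp num = 128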

``osd pool default flags``

:Description: The default flags for new pools.
:Type: 32-bit Integer

``osd max pgls``

:Description: The maximum number of placement groups to list. A client
              requesting a large number can tie up the Ceph OSD Daemon.
:Type: Unsigned 64-bit Integer
:Note: Default should be fine.

``osd min pg log entries``

:Description: The minimum number of placement group logs to maintain
              when trimming log files.
:Type: 32-bit Unsigned Integer

``osd default data pool replay window``

:Description: The time (in seconds) for an OSD to wait for a client to replay
              a request.
:Type: 32-bit Integer

``osd max pg per osd hard ratio``

:Description: The ratio of the number of PGs per OSD allowed by the cluster
              before the OSD refuses to create new PGs. An OSD stops creating
              new PGs if the number of PGs it serves exceeds
              ``osd max pg per osd hard ratio`` \* ``mon max pg per osd``.
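
For illustration only (these values are assumptions, not the shipped
defaults): with ``mon max pg per osd = 250`` and a hard ratio of ``3``, an OSD
would refuse to create new PGs once it serves more than ``250 * 3 = 750``:

.. code-block:: ini

    [global]
        # Illustrative values: the OSD refuses new PGs beyond
        # 250 * 3 = 750 PGs per OSD.
        mon max pg per osd = 250
        osd max pg per osd hard ratio = 3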

.. _pool: ../../operations/pools
.. _Monitoring OSDs and PGs: ../../operations/monitoring-osd-pg#peering
.. _Weighting Bucket Items: ../../operations/crush-map#weightingbucketitems