1 ==========================
2 BlueStore Config Reference
3 ==========================
BlueStore manages either one, two, or (in certain cases) three storage
devices.
11 In the simplest case, BlueStore consumes a single (primary) storage
12 device. The storage device is normally partitioned into two parts:
#. A small partition is formatted with XFS and contains basic metadata
   for the OSD. This *data directory* includes information about the
   OSD (its identifier, which cluster it belongs to, and its private
   keyring).

#. The rest of the device is normally a large partition that is managed
   directly by BlueStore and contains all of the actual data. This
   *primary device* is normally identified by a ``block`` symlink in
   the data directory.
It is also possible to deploy BlueStore across one or two additional devices:
26 * A *WAL device* can be used for BlueStore's internal journal or
27 write-ahead log. It is identified by the ``block.wal`` symlink in
28 the data directory. It is only useful to use a WAL device if the
29 device is faster than the primary device (e.g., when it is on an SSD
30 and the primary device is an HDD).
31 * A *DB device* can be used for storing BlueStore's internal metadata.
32 BlueStore (or rather, the embedded RocksDB) will put as much
33 metadata as it can on the DB device to improve performance. If the
34 DB device fills up, metadata will spill back onto the primary device
35 (where it would have been otherwise). Again, it is only helpful to
36 provision a DB device if it is faster than the primary device.
38 If there is only a small amount of fast storage available (e.g., less
39 than a gigabyte), we recommend using it as a WAL device. If there is
40 more, provisioning a DB device makes more sense. The BlueStore
41 journal will always be placed on the fastest device available, so
42 using a DB device will provide the same benefit that the WAL device
would while *also* allowing additional metadata to be stored there (if
it will fit).
46 A single-device BlueStore OSD can be provisioned with::
   ceph-disk prepare --bluestore <device>
To specify a WAL device and/or DB device, ::

   ceph-disk prepare --bluestore <device> --block.wal <wal-device> --block.db <db-device>
57 The amount of memory consumed by each OSD for BlueStore's cache is
58 determined by the ``bluestore_cache_size`` configuration option. If
59 that config option is not set (i.e., remains at 0), there is a
60 different default value that is used depending on whether an HDD or
61 SSD is used for the primary device (set by the
``bluestore_cache_size_ssd`` and ``bluestore_cache_size_hdd`` config
options).
BlueStore and the rest of the Ceph OSD currently do their best to
stick to the budgeted memory. Note that on top of the configured cache
size, there is also memory consumed by the OSD itself, and some
overhead due to memory fragmentation and other allocator overhead.
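If the per-media defaults do not suit a deployment, the cache budget can be pinned explicitly in ``ceph.conf``. The 2 GB value below is purely illustrative, not a recommendation:

```ini
[osd]
# Illustrative: fix the BlueStore cache at 2 GB for all OSDs,
# overriding the HDD/SSD-specific defaults.
bluestore_cache_size = 2147483648
```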
71 The configured cache memory budget can be used in a few different ways:
* Key/Value metadata (i.e., RocksDB's internal cache)
* BlueStore metadata
* BlueStore data (i.e., recently read or written object data)
77 Cache memory usage is governed by the following options:
78 ``bluestore_cache_meta_ratio``, ``bluestore_cache_kv_ratio``, and
79 ``bluestore_cache_kv_max``. The fraction of the cache devoted to data
80 is 1.0 minus the meta and kv ratios. The memory devoted to kv
81 metadata (the RocksDB cache) is capped by ``bluestore_cache_kv_max``
since our testing indicates there are diminishing returns beyond a
certain point.
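The division of the budget can be sketched as follows. This is a minimal illustration of the ratio-and-cap logic described above; the ratio values passed in are assumptions for the example, not authoritative defaults:

```python
# Sketch of how the BlueStore cache budget is divided. The ratio
# values used below are illustrative assumptions.
def cache_split(cache_size, meta_ratio, kv_ratio, kv_max):
    """Return (kv, meta, data) byte budgets for a given cache size."""
    kv = min(cache_size * kv_ratio, kv_max)  # RocksDB cache, capped
    meta = cache_size * meta_ratio           # BlueStore metadata
    data = cache_size - kv - meta            # remainder caches object data
    return kv, meta, data

# 3 GB budget (the SSD default), hypothetical 1%/99% meta/kv split,
# and a 512 MB kv cap -- the cap wins over the 99% ratio here.
kv, meta, data = cache_split(3 * 1024**3, 0.01, 0.99, 512 * 1024**2)
```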
85 ``bluestore_cache_size``
87 :Description: The amount of memory BlueStore will use for its cache. If zero, ``bluestore_cache_size_hdd`` or ``bluestore_cache_size_ssd`` will be used instead.
92 ``bluestore_cache_size_hdd``
94 :Description: The default amount of memory BlueStore will use for its cache when backed by an HDD.
97 :Default: ``1 * 1024 * 1024 * 1024`` (1 GB)
99 ``bluestore_cache_size_ssd``
101 :Description: The default amount of memory BlueStore will use for its cache when backed by an SSD.
104 :Default: ``3 * 1024 * 1024 * 1024`` (3 GB)
106 ``bluestore_cache_meta_ratio``
108 :Description: The ratio of cache devoted to metadata.
109 :Type: Floating point
113 ``bluestore_cache_kv_ratio``
115 :Description: The ratio of cache devoted to key/value data (rocksdb).
116 :Type: Floating point
120 ``bluestore_cache_kv_max``
122 :Description: The maximum amount of cache devoted to key/value data (rocksdb).
123 :Type: Floating point
:Default: ``512 * 1024 * 1024`` (512 MB)
131 BlueStore checksums all metadata and data written to disk. Metadata
132 checksumming is handled by RocksDB and uses `crc32c`. Data
133 checksumming is done by BlueStore and can make use of `crc32c`,
134 `xxhash32`, or `xxhash64`. The default is `crc32c` and should be
135 suitable for most purposes.
137 Full data checksumming does increase the amount of metadata that
138 BlueStore must store and manage. When possible, e.g., when clients
139 hint that data is written and read sequentially, BlueStore will
140 checksum larger blocks, but in many cases it must store a checksum
141 value (usually 4 bytes) for every 4 kilobyte block of data.
143 It is possible to use a smaller checksum value by truncating the
144 checksum to two or one byte, reducing the metadata overhead. The
145 trade-off is that the probability that a random error will not be
detected is higher with a smaller checksum, going from about one in
four billion with a 32-bit (4 byte) checksum to one in 65,536 for a
16-bit (2 byte) checksum or one in 256 for an 8-bit (1 byte) checksum.
149 The smaller checksum values can be used by selecting `crc32c_16` or
150 `crc32c_8` as the checksum algorithm.
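These odds follow directly from the checksum width: a random corruption collides with an *n*-bit checksum value with probability about 1 in 2^n (assuming checksum values are uniformly distributed). A quick sanity check:

```python
# Approximate odds that a random error escapes detection:
# 1 in 2**n for an n-bit checksum.
for name, bits in [("crc32c", 32), ("crc32c_16", 16), ("crc32c_8", 8)]:
    print(f"{name}: 1 in {2 ** bits:,}")
# crc32c: 1 in 4,294,967,296
# crc32c_16: 1 in 65,536
# crc32c_8: 1 in 256
```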
152 The *checksum algorithm* can be set either via a per-pool
153 ``csum_type`` property or the global config option. For example, ::
   ceph osd pool set <pool-name> csum_type <algorithm>
157 ``bluestore_csum_type``
159 :Description: The default checksum algorithm to use.
162 :Valid Settings: ``none``, ``crc32c``, ``crc32c_16``, ``crc32c_8``, ``xxhash32``, ``xxhash64``
169 BlueStore supports inline compression using `snappy`, `zlib`, or
170 `lz4`. Please note that the `lz4` compression plugin is not
171 distributed in the official release.
173 Whether data in BlueStore is compressed is determined by a combination
174 of the *compression mode* and any hints associated with a write
175 operation. The modes are:
177 * **none**: Never compress data.
* **passive**: Do not compress data unless the write operation has a
  *compressible* hint set.
* **aggressive**: Compress data unless the write operation has an
  *incompressible* hint set.
182 * **force**: Try to compress data no matter what.
184 For more information about the *compressible* and *incompressible* IO
185 hints, see :doc:`/api/librados/#rados_set_alloc_hint`.
187 Note that regardless of the mode, if the size of the data chunk is not
188 reduced sufficiently it will not be used and the original
(uncompressed) data will be stored. For example, if the
``bluestore compression required ratio`` is set to ``.7``, then the
compressed data must be 70% of the size of the original (or smaller).
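The acceptance test described above can be sketched as follows; ``should_store_compressed`` is a hypothetical helper for illustration, not the actual BlueStore code:

```python
# Hypothetical illustration of the required-ratio check: the
# compressed result is kept only if it is at most ratio * original.
def should_store_compressed(orig_size, compressed_size, required_ratio=0.7):
    return compressed_size <= orig_size * required_ratio

print(should_store_compressed(4096, 2800))  # 2800 <= 2867.2 -> True
print(should_store_compressed(4096, 3500))  # 3500 >  2867.2 -> False
```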
193 The *compression mode*, *compression algorithm*, *compression required
194 ratio*, *min blob size*, and *max blob size* can be set either via a
per-pool property or a global config option. Pool properties can be
set with::
   ceph osd pool set <pool-name> compression_algorithm <algorithm>
   ceph osd pool set <pool-name> compression_mode <mode>
   ceph osd pool set <pool-name> compression_required_ratio <ratio>
   ceph osd pool set <pool-name> compression_min_blob_size <size>
   ceph osd pool set <pool-name> compression_max_blob_size <size>
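Equivalently, cluster-wide defaults can be set in ``ceph.conf``; the values below are illustrative only:

```ini
[osd]
# Illustrative global defaults; per-pool properties take precedence.
bluestore compression algorithm = snappy
bluestore compression mode = aggressive
bluestore compression required ratio = .875
```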
204 ``bluestore compression algorithm``
206 :Description: The default compressor to use (if any) if the per-pool property
207 ``compression_algorithm`` is not set. Note that zstd is *not*
208 recommended for bluestore due to high CPU overhead when
209 compressing small amounts of data.
212 :Valid Settings: ``lz4``, ``snappy``, ``zlib``, ``zstd``
215 ``bluestore compression mode``
217 :Description: The default policy for using compression if the per-pool property
218 ``compression_mode`` is not set. ``none`` means never use
219 compression. ``passive`` means use compression when
220 `clients hint`_ that data is compressible. ``aggressive`` means
221 use compression unless clients hint that data is not compressible.
222 ``force`` means use compression under all circumstances even if
223 the clients hint that the data is not compressible.
226 :Valid Settings: ``none``, ``passive``, ``aggressive``, ``force``
229 ``bluestore compression required ratio``
:Description: The ratio of the size of the data chunk after
              compression relative to the original size must be at
              least this small in order to store the compressed
              version.
236 :Type: Floating point
240 ``bluestore compression min blob size``
:Description: Chunks smaller than this are never compressed.
              The per-pool property ``compression_min_blob_size``
              overrides this setting.
246 :Type: Unsigned Integer
250 ``bluestore compression min blob size hdd``
252 :Description: Default value of ``bluestore compression min blob size``
253 for rotational media.
255 :Type: Unsigned Integer
259 ``bluestore compression min blob size ssd``
261 :Description: Default value of ``bluestore compression min blob size``
262 for non-rotational (solid state) media.
264 :Type: Unsigned Integer
268 ``bluestore compression max blob size``
:Description: Chunks larger than this are broken up into smaller blobs of at
              most ``bluestore compression max blob size`` bytes before being
              compressed. The per-pool property ``compression_max_blob_size``
              overrides this setting.
275 :Type: Unsigned Integer
279 ``bluestore compression max blob size hdd``
281 :Description: Default value of ``bluestore compression max blob size``
282 for rotational media.
284 :Type: Unsigned Integer
288 ``bluestore compression max blob size ssd``
290 :Description: Default value of ``bluestore compression max blob size``
291 for non-rotational (solid state) media.
293 :Type: Unsigned Integer
297 .. _clients hint: ../../api/librados/#rados_set_alloc_hint