========================================
ceph-volume -- Ceph OSD deployment tool
========================================

.. program:: ceph-volume
| **ceph-volume** [-h] [--cluster CLUSTER] [--log-level LOG_LEVEL]
| [--log-path LOG_PATH]

| **ceph-volume** **lvm** [ *trigger* | *create* | *activate* | *prepare* ]
:program:`ceph-volume` is a single-purpose command line tool to deploy logical
volumes as OSDs. It aims to maintain an API similar to that of ``ceph-disk``
when preparing, activating, and creating OSDs.
It deviates from ``ceph-disk`` by not interacting with, or relying on, the udev
rules that come installed for Ceph. Those rules allow automatic detection of
previously set up devices that are in turn fed into ``ceph-disk`` to activate
By making use of LVM tags, the ``lvm`` sub-command is able to store and later
re-discover and query devices associated with OSDs so that they can later
Enables a systemd unit that persists the OSD ID and its UUID (also called the
``fsid`` in Ceph CLI tools), so that at boot time the system can determine which
OSD is enabled and needs to be mounted.
ceph-volume lvm activate --filestore <osd id> <osd fsid>
* [-h, --help]  show the help message and exit
* [--bluestore] bluestore objectstore (not yet implemented)
* [--filestore] filestore objectstore (current default)
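For illustration, a concrete invocation might look like the following; the OSD
id and fsid shown here are hypothetical placeholders, and the real values are
the ones recorded when the OSD was prepared:

```shell
# Hypothetical example: activate OSD 0 with a made-up fsid.
# The id and fsid must match those stored when the OSD was prepared.
ceph-volume lvm activate --filestore 0 8715BEB4-15C5-49DE-BA6F-401086EC7B41
```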
Prepares a logical volume to be used as an OSD and journal using a ``filestore`` setup
(``bluestore`` support is planned). It will not create or modify the logical volumes
except for adding extra metadata.
ceph-volume lvm prepare --filestore --data <data lv> --journal <journal device>
* [-h, --help]  show the help message and exit
* [--journal JOURNAL] A volume group name, path to a logical volume, or path to a device
* [--journal-size GB] Size (in GB) of the journal
* [--bluestore] Use the bluestore objectstore (not currently supported)
* [--filestore] Use the filestore objectstore (currently the only supported object store)
* [--osd-id OSD_ID] Reuse an existing OSD id
* [--osd-fsid OSD_FSID] Reuse an existing OSD fsid
* --data A volume group name or a path to a logical volume
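As a sketch (not runnable without LVM and a deployed Ceph cluster), assuming a
hypothetical volume group ``ceph-vg`` and a spare device ``/dev/sdc`` for the
journal, preparing a filestore OSD might look like:

```shell
# Hypothetical names throughout: "ceph-vg", "osd-data", and /dev/sdc are
# placeholders. First create a logical volume for the OSD data using
# standard LVM tooling:
lvcreate -L 100G -n osd-data ceph-vg

# Then prepare it as a filestore OSD, with a raw device as the journal:
ceph-volume lvm prepare --filestore --data ceph-vg/osd-data --journal /dev/sdc
```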
Wraps the two-step process to provision a new OSD (calling ``prepare`` first
and then ``activate``) into a single step. The reason to prefer ``prepare``
followed by ``activate`` is to introduce new OSDs into a cluster gradually,
avoiding large amounts of data being rebalanced.
The single-call process unifies exactly what ``prepare`` and ``activate`` do,
with the convenience of doing it all at once. Flags and general usage are
equivalent to those of the ``prepare`` subcommand.
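In other words, the following two forms are intended to provision the same OSD;
the logical volume and device names here are hypothetical placeholders:

```shell
# Single call (ceph-vg/osd-data and /dev/sdc are made-up names):
ceph-volume lvm create --filestore --data ceph-vg/osd-data --journal /dev/sdc

# Equivalent two-step form, which lets OSDs be introduced gradually:
ceph-volume lvm prepare --filestore --data ceph-vg/osd-data --journal /dev/sdc
ceph-volume lvm activate --filestore <osd id> <osd fsid>
```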
This subcommand is not meant to be used directly. It is called by systemd, and
proxies its input to ``ceph-volume lvm activate`` by parsing the data received
from systemd and detecting the UUID and ID associated with an OSD.
ceph-volume lvm trigger <SYSTEMD-DATA>
The systemd "data" is expected to be in the format of::
The logical volumes associated with the OSD need to have been prepared
previously, so that all needed tags and metadata exist.
Positional arguments:

* <SYSTEMD-DATA> Data from a systemd unit containing the ID and UUID of the OSD.
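As a rough sketch of the parsing involved, assuming (as an illustration, not a
guarantee) that the data takes the shape ``<osd id>-<osd uuid>``, the two parts
can be recovered with plain shell parameter expansion:

```shell
# Illustration only; the "<osd id>-<osd uuid>" layout is an assumption here,
# and the value below is a hypothetical example, not real systemd input.
data="0-8715BEB4-15C5-49DE-BA6F-401086EC7B41"
osd_id="${data%%-*}"     # strip everything from the first dash on: the id
osd_uuid="${data#*-}"    # strip up to and including the first dash: the uuid
echo "id=$osd_id uuid=$osd_uuid"
```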
:program:`ceph-volume` is part of Ceph, a massively scalable, open-source, distributed storage system. Please refer to
the documentation at http://docs.ceph.com/ for more information.
:doc:`ceph-osd <ceph-osd>`\(8),
:doc:`ceph-disk <ceph-disk>`\(8),