.. _ceph-volume-lvm-list:

``list``
========
This subcommand will list any devices (logical and physical) that may be
associated with a Ceph cluster, as long as they contain enough metadata to
allow for that discovery.
Output is grouped by the OSD ID associated with the devices, and unlike
``ceph-disk`` it does not provide any information for devices that aren't
associated with Ceph.
* ``--format`` Allows a ``json`` or ``pretty`` value. Defaults to ``pretty``,
  which will group the device information in a human-readable format.
Full Reporting
--------------
When no positional arguments are used, a full reporting will be presented. This
means that all devices and logical volumes found in the system will be
displayed.
Full ``pretty`` reporting for two OSDs, one with an lv as a journal, and
another one with a physical device, may look similar to::
    # ceph-volume lvm list

    ====== osd.1 =======

      [journal]    /dev/journals/journal1

          journal uuid              C65n7d-B1gy-cqX3-vZKY-ZoE0-IEYM-HnIJzs
          cluster fsid              ce454d91-d748-4751-a318-ff7f7aa18ffd
          osd fsid                  661b24f8-e062-482b-8110-826ffe7f13fa
          data uuid                 SlEgHe-jX1H-QBQk-Sce0-RUls-8KlY-g8HgcZ
          journal device            /dev/journals/journal1
          data device               /dev/test_group/data-lv2

      [data]    /dev/test_group/data-lv2

          journal uuid              C65n7d-B1gy-cqX3-vZKY-ZoE0-IEYM-HnIJzs
          cluster fsid              ce454d91-d748-4751-a318-ff7f7aa18ffd
          osd fsid                  661b24f8-e062-482b-8110-826ffe7f13fa
          data uuid                 SlEgHe-jX1H-QBQk-Sce0-RUls-8KlY-g8HgcZ
          journal device            /dev/journals/journal1
          data device               /dev/test_group/data-lv2

    ====== osd.0 =======

      [data]    /dev/test_group/data-lv1

          journal uuid              cd72bd28-002a-48da-bdf6-d5b993e84f3f
          cluster fsid              ce454d91-d748-4751-a318-ff7f7aa18ffd
          osd fsid                  943949f0-ce37-47ca-a33c-3413d46ee9ec
          data uuid                 TUpfel-Q5ZT-eFph-bdGW-SiNW-l0ag-f5kh00
          journal device            /dev/sdd1
          data device               /dev/test_group/data-lv1

      [journal]    /dev/sdd1

          PARTUUID                  cd72bd28-002a-48da-bdf6-d5b993e84f3f
.. note:: Tags are displayed in a readable format. The ``osd id`` key is stored
          as a ``ceph.osd_id`` tag. For more information on lvm tag conventions
          see :ref:`ceph-volume-lvm-tag-api`
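The way such a tag string maps to the readable keys in the report can be
sketched in a few lines of Python. This is only an illustrative sketch;
``parse_lv_tags`` and ``display_key`` are hypothetical helper names, not part
of ceph-volume:

```python
# Parse an lvm tag string such as "ceph.osd_id=0,ceph.type=data" into a
# dictionary, keeping keys and values verbatim.
def parse_lv_tags(lv_tags):
    tags = {}
    for pair in lv_tags.split(','):
        if not pair:
            continue
        key, _, value = pair.partition('=')
        tags[key] = value
    return tags

# Mirror the readable form used in the report: "ceph.osd_id" -> "osd id".
def display_key(tag_key):
    return tag_key[len('ceph.'):].replace('_', ' ')

tags = parse_lv_tags('ceph.osd_id=0,ceph.type=data')
print(display_key('ceph.osd_id'), tags['ceph.osd_id'])  # osd id 0
```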
Single Reporting
----------------
Single reporting can consume both devices and logical volumes as input
(positional parameters). For logical volumes, it is required to use the group
name as well as the logical volume name.
For example the ``data-lv2`` logical volume, in the ``test_group`` volume
group, can be listed in the following way::
    # ceph-volume lvm list test_group/data-lv2

    ====== osd.1 =======

      [data]    /dev/test_group/data-lv2

          journal uuid              C65n7d-B1gy-cqX3-vZKY-ZoE0-IEYM-HnIJzs
          cluster fsid              ce454d91-d748-4751-a318-ff7f7aa18ffd
          osd fsid                  661b24f8-e062-482b-8110-826ffe7f13fa
          data uuid                 SlEgHe-jX1H-QBQk-Sce0-RUls-8KlY-g8HgcZ
          journal device            /dev/journals/journal1
          data device               /dev/test_group/data-lv2
.. note:: Tags are displayed in a readable format. The ``osd id`` key is stored
          as a ``ceph.osd_id`` tag. For more information on lvm tag conventions
          see :ref:`ceph-volume-lvm-tag-api`
For plain disks, the full path to the device is required. For example, for
a device like ``/dev/sdd1`` it can look like::
    # ceph-volume lvm list /dev/sdd1

    ====== osd.0 =======

      [journal]    /dev/sdd1

          PARTUUID                  cd72bd28-002a-48da-bdf6-d5b993e84f3f
``json`` output
---------------
All output using ``--format=json`` will show everything the system has stored
as metadata for the devices, including tags.

No changes for readability are done with ``json`` reporting, and all
information is presented as-is. Full output as well as single devices can be
reported on.
For brevity, this is how a single logical volume would look with ``json``
output (note how tags aren't modified)::
    # ceph-volume lvm list --format=json test_group/data-lv1
    {
        "0": [
            {
                "lv_name": "data-lv1",
                "lv_path": "/dev/test_group/data-lv1",
                "lv_tags": "ceph.cluster_fsid=ce454d91-d748-4751-a318-ff7f7aa18ffd,ceph.data_device=/dev/test_group/data-lv1,ceph.data_uuid=TUpfel-Q5ZT-eFph-bdGW-SiNW-l0ag-f5kh00,ceph.journal_device=/dev/sdd1,ceph.journal_uuid=cd72bd28-002a-48da-bdf6-d5b993e84f3f,ceph.osd_fsid=943949f0-ce37-47ca-a33c-3413d46ee9ec,ceph.osd_id=0,ceph.type=data",
                "lv_uuid": "TUpfel-Q5ZT-eFph-bdGW-SiNW-l0ag-f5kh00",
                "path": "/dev/test_group/data-lv1",
                "tags": {
                    "ceph.cluster_fsid": "ce454d91-d748-4751-a318-ff7f7aa18ffd",
                    "ceph.data_device": "/dev/test_group/data-lv1",
                    "ceph.data_uuid": "TUpfel-Q5ZT-eFph-bdGW-SiNW-l0ag-f5kh00",
                    "ceph.journal_device": "/dev/sdd1",
                    "ceph.journal_uuid": "cd72bd28-002a-48da-bdf6-d5b993e84f3f",
                    "ceph.osd_fsid": "943949f0-ce37-47ca-a33c-3413d46ee9ec",
                    "ceph.osd_id": "0",
                    "ceph.type": "data"
                },
                "vg_name": "test_group"
            }
        ]
    }
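Because nothing is reformatted, the ``json`` report is straightforward to
consume from a script. A minimal sketch follows; the ``data_devices`` helper is
hypothetical, and on a live system the report text would come from running
``ceph-volume lvm list --format=json``:

```python
import json

def data_devices(report_json):
    # The report maps each OSD id to a list of device entries; each entry
    # carries the stored lvm tags verbatim under the "tags" key.
    devices = {}
    for osd_id, entries in json.loads(report_json).items():
        for entry in entries:
            if entry['tags'].get('ceph.type') == 'data':
                devices[osd_id] = entry['tags']['ceph.data_device']
    return devices

# A trimmed-down sample report, standing in for the real command output.
sample = '{"0": [{"tags": {"ceph.type": "data", "ceph.data_device": "/dev/test_group/data-lv1"}}]}'
print(data_devices(sample))  # {'0': '/dev/test_group/data-lv1'}
```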
Synchronized information
------------------------
Before any listing type, the lvm API is queried to ensure that physical devices
that may be in use haven't changed names. It is possible that non-persistent
devices like ``/dev/sda1`` could change to ``/dev/sdb1``.
The detection is possible because the ``PARTUUID`` is stored as part of the
metadata in the logical volume for the data lv. Even in the case of a journal
that is a physical device, this information is still stored on the data logical
volume associated with it.
If the name is no longer the same (as reported by ``blkid`` when using the
``PARTUUID``), the tag will get updated and the report will use the newly
refreshed information.
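That refresh step can be sketched roughly as follows. The helper names are
hypothetical and the real ceph-volume implementation differs; ``current_device``
also assumes ``blkid`` is available on the host:

```python
import subprocess

def current_device(partuuid):
    # Ask blkid which device currently carries this PARTUUID, roughly:
    #   blkid -t PARTUUID=<uuid> -o device
    out = subprocess.check_output(
        ['blkid', '-t', 'PARTUUID=%s' % partuuid, '-o', 'device'])
    return out.decode().strip()

def needs_refresh(stored_device, reported_device):
    # The stored tag is rewritten only when a non-persistent name changed,
    # e.g. /dev/sda1 became /dev/sdb1.
    return stored_device != reported_device

print(needs_refresh('/dev/sda1', '/dev/sdb1'))  # True
```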