======================================================
 osdmaptool -- ceph osd cluster map manipulation tool
======================================================
.. program:: osdmaptool
Synopsis
========

| **osdmaptool** *mapfilename* [--print] [--createsimple *numosd*
  [--pgbits *bitsperosd*]] [--clobber]
Description
===========

**osdmaptool** is a utility that lets you create, view, and manipulate
OSD cluster maps from the Ceph distributed storage system. Notably, it
lets you extract the embedded CRUSH map or import a new CRUSH map.
Options
=======

.. option:: --print

   will simply make the tool print a plaintext dump of the map, after
   any modifications are made.
.. option:: --clobber

   will allow osdmaptool to overwrite mapfilename if changes are made.
.. option:: --import-crush mapfile

   will load the CRUSH map from mapfile and embed it in the OSD map.
.. option:: --export-crush mapfile

   will extract the CRUSH map from the OSD map and write it to
   mapfile.
.. option:: --createsimple numosd [--pgbits bitsperosd]

   will create a relatively generic OSD map with the numosd devices.
   If --pgbits is specified, the initial placement group counts will
   be set with bitsperosd bits per OSD. That is, the pg_num map
   attribute will be set to numosd shifted by bitsperosd.
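The shift described above can be sketched in a few lines of Python; the
concrete values here (16 OSDs, 6 bits per OSD) are hypothetical and only
illustrate the arithmetic:

```python
# Sketch of the pg_num computation described above, assuming
# hypothetical inputs: --createsimple 16 --pgbits 6.
numosd = 16
bitsperosd = 6

# "numosd shifted by bitsperosd" is a left shift, i.e. numosd * 2**bitsperosd.
pg_num = numosd << bitsperosd
print(pg_num)  # 1024
```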
.. option:: --test-map-pgs [--pool poolid]

   will print out the mappings from placement groups to OSDs.
.. option:: --test-map-pgs-dump [--pool poolid]

   will print out the summary of all placement groups and the mappings
   from them to the mapped OSDs.
Example
=======

To create a simple map with 16 devices::

        osdmaptool --createsimple 16 osdmap --clobber
To view the result::

        osdmaptool --print osdmap
To view the mappings of placement groups for pool 0::

        osdmaptool --test-map-pgs-dump rbd --pool 0

        #osd    count   first   primary c wt    wt
        avg 8 stddev 0 (0x) (expected 2.3094 0.288675x))
The above output demonstrates the following:

#. pool 0 has 8 placement groups, and two tables follow:
#. A table for placement groups. Each row represents a placement group, with columns of:

   * placement group id,
#. A table for all OSDs. Each row represents an OSD, with columns of:

   * count of placement groups mapped to this OSD,
   * count of placement groups where this OSD is the first one in their acting sets,
   * count of placement groups where this OSD is the primary,
   * the CRUSH weight of this OSD, and
   * the weight of this OSD.
#. Looking at the number of placement groups held by the 3 OSDs, we have

   * average, stddev, stddev/average, expected stddev, expected stddev / average

#. The number of placement groups mapping to n OSDs. In this case, all 8 placement
   groups are mapped to 3 different OSDs.
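The "expected" figures in the summary line appear to match a simple
binomial model: each PG-to-OSD placement lands on one of the OSDs with
equal probability. The sketch below assumes 8 placement groups with 3
replicas each (the replica count of 3 is inferred from the three-OSD
acting sets in the example, not stated explicitly):

```python
import math

# Assumed from the example above: 8 PGs, 3 replicas each, 3 OSDs.
pgs, replicas, osds = 8, 3, 3
n = pgs * replicas   # 24 PG-to-OSD placements in total
p = 1 / osds         # probability a given placement lands on a given OSD

# Standard deviation of a binomial(n, p) count of placements per OSD.
expected_stddev = math.sqrt(n * p * (1 - p))
print(round(expected_stddev, 4))        # 2.3094
print(round(expected_stddev / pgs, 6))  # 0.288675  (the "x" ratio)
```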
In a less-balanced cluster, we could have the following output for the
statistics of placement group distribution, whose standard deviation is
1.41421::
        #osd    count   first   primary c wt    wt
        osd.0   33      9       9       0.0145874       1
        osd.1   34      14      14      0.0145874       1
        osd.2   31      7       7       0.0145874       1
        osd.3   31      13      13      0.0145874       1
        osd.4   30      14      14      0.0145874       1
        osd.5   33      7       7       0.0145874       1

        avg 32 stddev 1.41421 (0.0441942x) (expected 5.16398 0.161374x))
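The summary line can be reproduced from the per-OSD counts in the table
above, using the population standard deviation over the six counts:

```python
import math

# Per-OSD placement group counts from the table above.
counts = [33, 34, 31, 31, 30, 33]

avg = sum(counts) / len(counts)
# Population variance: mean squared deviation from the average.
var = sum((c - avg) ** 2 for c in counts) / len(counts)
stddev = math.sqrt(var)

print(avg)                     # 32.0
print(round(stddev, 5))        # 1.41421
print(round(stddev / avg, 7))  # 0.0441942  (the "x" ratio)
```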
Availability
============

**osdmaptool** is part of Ceph, a massively scalable, open-source, distributed storage system. Please
refer to the Ceph documentation at http://ceph.com/docs for more
information.
See also
========

:doc:`ceph <ceph>`\(8),
:doc:`crushtool <crushtool>`\(8),