All Ceph clusters require at least one monitor, and at least as many OSDs as
copies of an object stored on the cluster. Bootstrapping the initial monitor(s)
is the first step in deploying a Ceph Storage Cluster. Monitor deployment also
sets important criteria for the entire cluster, such as the number of replicas
for pools, the number of placement groups per OSD, the heartbeat intervals,
and whether authentication is required. Most of these values are set by
default, so it's useful to know about them when setting up your cluster for
production.

Following the same configuration as `Installation (Quick)`_, we will set up a
cluster with ``node1`` as the monitor node, and ``node2`` and ``node3`` for
OSD nodes. ::

      /------------------\         /----------------\
      |    Admin Node    |         |     node1      |
      |                  +-------->+   (monitor)    |
      \---------+--------/         \----------------/
                |
                |                  /----------------\
                |                  |     node2      |
                +----------------->+    (osd.0)     |
                |                  \----------------/
                |
                |                  /----------------\
                |                  |     node3      |
                +----------------->+    (osd.1)     |
                                   \----------------/

Monitor Bootstrapping
=====================

Bootstrapping a monitor (a Ceph Storage Cluster, in theory) requires a number
of things:

- **Unique Identifier:** The ``fsid`` is a unique identifier for the cluster,
  and stands for File System ID from the days when the Ceph Storage Cluster was
  principally for the Ceph Filesystem. Ceph now supports native interfaces,
  block devices, and object storage gateway interfaces too, so ``fsid`` is a
  bit of a misnomer.

- **Cluster Name:** Ceph clusters have a cluster name, which is a simple string
  without spaces. The default cluster name is ``ceph``, but you may specify
  a different cluster name. Overriding the default cluster name is
  especially useful when you are working with multiple clusters and you need to
  clearly understand which cluster you are working with.

  For example, when you run multiple clusters in a `federated architecture`_,
  the cluster name (e.g., ``us-west``, ``us-east``) identifies the cluster for
  the current CLI session. **Note:** To identify the cluster name on the
  command line interface, specify the Ceph configuration file with the
  cluster name (e.g., ``ceph.conf``, ``us-west.conf``, ``us-east.conf``, etc.).
  Also see CLI usage (``ceph --cluster {cluster-name}``) and the example that
  follows this list.

- **Monitor Name:** Each monitor instance within a cluster has a unique name.
  In common practice, the Ceph Monitor name is the host name (we recommend one
  Ceph Monitor per host, and no commingling of Ceph OSD Daemons with
  Ceph Monitors). You may retrieve the short hostname with ``hostname -s``.

- **Monitor Map:** Bootstrapping the initial monitor(s) requires you to
  generate a monitor map. The monitor map requires the ``fsid``, the cluster
  name (or uses the default), and at least one host name and its IP address.

- **Monitor Keyring:** Monitors communicate with each other via a
  secret key. You must generate a keyring with a monitor secret and provide
  it when bootstrapping the initial monitor(s).

- **Administrator Keyring:** To use the ``ceph`` CLI tools, you must have
  a ``client.admin`` user, so you must generate the admin user and keyring,
  and you must also add the ``client.admin`` user to the monitor keyring.

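For example, to check the health of a hypothetical second cluster named
``us-west`` (whose configuration lives in ``/etc/ceph/us-west.conf``), you
would name the cluster explicitly on the command line::

      ceph --cluster us-west health

Without the ``--cluster`` option, the ``ceph`` tool assumes the default
cluster name ``ceph`` and reads ``/etc/ceph/ceph.conf``.
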
The foregoing requirements do not imply the creation of a Ceph configuration
file. However, as a best practice, we recommend creating a Ceph configuration
file and populating it with the ``fsid``, the ``mon initial members`` and the
``mon host`` settings.

You can get and set all of the monitor settings at runtime as well. However,
a Ceph configuration file need contain only those settings that override the
default values. When you add settings to a Ceph configuration file, these
settings override the default settings. Maintaining those settings in a
Ceph configuration file makes it easier to maintain your cluster.

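As an illustration of runtime access (the option shown is only an example),
once the cluster is up you can query the value a running monitor is using via
its admin socket, and override it without editing the configuration file::

      # run on the monitor host; reads the daemon's admin socket
      sudo ceph daemon mon.node1 config get mon_allow_pool_delete

      # inject a new value into the running monitor
      ceph tell mon.node1 injectargs '--mon_allow_pool_delete=true'
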
The procedure is as follows:

#. Log in to the initial monitor node(s)::

      ssh {hostname}

   For example::

      ssh node1

#. Ensure you have a directory for the Ceph configuration file. By default,
   Ceph uses ``/etc/ceph``. When you install ``ceph``, the installer will
   create the ``/etc/ceph`` directory automatically. ::

      ls /etc/ceph

   **Note:** Deployment tools may remove this directory when purging a
   cluster (e.g., ``ceph-deploy purgedata {node-name}``, ``ceph-deploy purge
   {node-name}``).

#. Create a Ceph configuration file. By default, Ceph uses
   ``ceph.conf``, where ``ceph`` reflects the cluster name. ::

      sudo vim /etc/ceph/ceph.conf

#. Generate a unique ID (i.e., ``fsid``) for your cluster. ::

      uuidgen

#. Add the unique ID to your Ceph configuration file. ::

      fsid = {UUID}

   For example::

      fsid = a7f64266-0894-4f1e-a635-d0aeaca0e993

#. Add the initial monitor(s) to your Ceph configuration file. ::

      mon initial members = {hostname}[,{hostname}]

   For example::

      mon initial members = node1

#. Add the IP address(es) of the initial monitor(s) to your Ceph configuration
   file and save the file. ::

      mon host = {ip-address}[,{ip-address}]

   For example::

      mon host = 192.168.0.1

   **Note:** You may use IPv6 addresses instead of IPv4 addresses, but
   you must set ``ms bind ipv6`` to ``true``. See `Network Configuration
   Reference`_ for details about network configuration.

#. Create a keyring for your cluster and generate a monitor secret key. ::

      ceph-authtool --create-keyring /tmp/ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *'

#. Generate an administrator keyring, generate a ``client.admin`` user and add
   the user to the keyring. ::

      sudo ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring --gen-key -n client.admin --set-uid=0 --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow *' --cap mgr 'allow *'

#. Add the ``client.admin`` key to the ``ceph.mon.keyring``. ::

      sudo ceph-authtool /tmp/ceph.mon.keyring --import-keyring /etc/ceph/ceph.client.admin.keyring

#. Generate a monitor map using the hostname(s), host IP address(es) and the
   FSID. Save it as ``/tmp/monmap``::

      monmaptool --create --add {hostname} {ip-address} --fsid {uuid} /tmp/monmap

   For example::

      monmaptool --create --add node1 192.168.0.1 --fsid a7f64266-0894-4f1e-a635-d0aeaca0e993 /tmp/monmap

#. Create a default data directory (or directories) on the monitor host(s). ::

      sudo mkdir /var/lib/ceph/mon/{cluster-name}-{hostname}

   For example::

      sudo mkdir /var/lib/ceph/mon/ceph-node1

   See `Monitor Config Reference - Data`_ for details.

#. Populate the monitor daemon(s) with the monitor map and keyring. ::

      sudo -u ceph ceph-mon [--cluster {cluster-name}] --mkfs -i {hostname} --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring

   For example::

      sudo -u ceph ceph-mon --mkfs -i node1 --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring

#. Consider settings for a Ceph configuration file. Common settings include
   the following::

      [global]
      fsid = {cluster-id}
      mon initial members = {hostname}[, {hostname}]
      mon host = {ip-address}[, {ip-address}]
      public network = {network}[, {network}]
      cluster network = {network}[, {network}]
      auth cluster required = cephx
      auth service required = cephx
      auth client required = cephx
      osd journal size = {n}
      osd pool default size = {n}  # Write an object n times.
      osd pool default min size = {n}  # Allow writing n copies in a degraded state.
      osd pool default pg num = {n}
      osd pool default pgp num = {n}
      osd crush chooseleaf type = {n}

   In the foregoing example, the ``[global]`` section of the configuration
   might look like this::

      [global]
      fsid = a7f64266-0894-4f1e-a635-d0aeaca0e993
      mon initial members = node1
      mon host = 192.168.0.1
      public network = 192.168.0.0/24
      auth cluster required = cephx
      auth service required = cephx
      auth client required = cephx
      osd journal size = 1024
      osd pool default size = 2
      osd pool default min size = 1
      osd pool default pg num = 333
      osd pool default pgp num = 333
      osd crush chooseleaf type = 1

#. Touch the ``done`` file.

   Mark that the monitor is created and ready to be started::

      sudo touch /var/lib/ceph/mon/ceph-node1/done

#. Start the monitor(s).

   For Ubuntu, use Upstart::

      sudo start ceph-mon id=node1 [cluster={cluster-name}]

   In this case, to allow the daemon to start at each reboot, two empty files
   must exist in the monitor data directory: the ``done`` file created above
   and an ``upstart`` file::

      sudo touch /var/lib/ceph/mon/{cluster-name}-{hostname}/upstart

   For example::

      sudo touch /var/lib/ceph/mon/ceph-node1/upstart

   For Debian/CentOS/RHEL, use sysvinit::

      sudo /etc/init.d/ceph start mon.node1

#. Verify that Ceph created the default pools. ::

      ceph osd lspools

   You should see output like this::

      0 data,1 metadata,2 rbd,

#. Verify that the monitor is running. ::

      ceph -s

   You should see output that the monitor you started is up and running, and
   you should see a health error indicating that placement groups are stuck
   inactive. It should look something like this::

      cluster a7f64266-0894-4f1e-a635-d0aeaca0e993
       health HEALTH_ERR 192 pgs stuck inactive; 192 pgs stuck unclean; no osds
       monmap e1: 1 mons at {node1=192.168.0.1:6789/0}, election epoch 1, quorum 0 node1
       osdmap e1: 0 osds: 0 up, 0 in
        pgmap v2: 192 pgs, 3 pools, 0 bytes data, 0 objects
              0 kB used, 0 kB / 0 kB avail

   **Note:** Once you add OSDs and start them, the placement group health
   errors should disappear. See the next section for details.

Manager daemon configuration
============================

On each node where you run a ceph-mon daemon, you should also set up a
ceph-mgr daemon.

See :ref:`mgr-administrator-guide`.

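As a minimal sketch (the linked guide is authoritative), assuming the default
cluster name ``ceph`` and using the hostname ``node1`` as the mgr id, the
manual steps look roughly like this::

      # create an auth key for the mgr daemon and store it in its data directory
      sudo mkdir /var/lib/ceph/mgr/ceph-node1
      ceph auth get-or-create mgr.node1 mon 'allow profile mgr' osd 'allow *' mds 'allow *' | sudo tee /var/lib/ceph/mgr/ceph-node1/keyring
      sudo chown -R ceph:ceph /var/lib/ceph/mgr/ceph-node1

      # start the manager daemon
      sudo ceph-mgr -i node1
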
Adding OSDs
===========

Once you have your initial monitor(s) running, you should add OSDs. Your cluster
cannot reach an ``active + clean`` state until you have enough OSDs to handle the
number of copies of an object (e.g., ``osd pool default size = 2`` requires at
least two OSDs). After bootstrapping your monitor, your cluster has a default
CRUSH map; however, the CRUSH map doesn't have any Ceph OSD Daemons mapped to
a Ceph Node.

Short Form
----------

Ceph provides the ``ceph-disk`` utility, which can prepare a disk, partition or
directory for use with Ceph. The ``ceph-disk`` utility creates the OSD ID by
incrementing the index. Additionally, ``ceph-disk`` will add the new OSD to the
CRUSH map under the host for you. Execute ``ceph-disk -h`` for CLI details.
The ``ceph-disk`` utility automates the steps of the `Long Form`_ below. To
create the first two OSDs with the short form procedure, execute the following
on ``node2`` and ``node3``:

#. Prepare the OSD. ::

      sudo ceph-disk prepare --cluster {cluster-name} --cluster-uuid {uuid} {data-path} [{journal-path}]

   For example::

      sudo ceph-disk prepare --cluster ceph --cluster-uuid a7f64266-0894-4f1e-a635-d0aeaca0e993 --fs-type ext4 /dev/hdd1

#. Activate the OSD::

      sudo ceph-disk activate {data-path} [--activate-key {path}]

   For example::

      sudo ceph-disk activate /dev/hdd1

   **Note:** Use the ``--activate-key`` argument if you do not have a copy
   of ``/var/lib/ceph/bootstrap-osd/{cluster}.keyring`` on the Ceph Node.

Long Form
---------

Without the benefit of any helper utilities, create an OSD and add it to the
cluster and CRUSH map with the following procedure. To create the first two
OSDs with the long form procedure, execute the following steps for each OSD.

.. note:: This procedure does not describe deployment on top of dm-crypt
   making use of the dm-crypt 'lockbox'.

#. Connect to the OSD host and become root. ::

      ssh {node-name}
      sudo bash

#. Generate a UUID for the OSD. ::

      UUID=$(uuidgen)

#. Generate a cephx key for the OSD. ::

      OSD_SECRET=$(ceph-authtool --gen-print-key)

#. Create the OSD. Note that an OSD ID can be provided as an
   additional argument to ``ceph osd new`` if you need to reuse a
   previously-destroyed OSD id. We assume that the
   ``client.bootstrap-osd`` key is present on the machine. You may
   alternatively execute this command as ``client.admin`` on a
   different host where that key is present. ::

      ID=$(echo "{\"cephx_secret\": \"$OSD_SECRET\"}" | \
         ceph osd new $UUID -i - \
         -n client.bootstrap-osd -k /var/lib/ceph/bootstrap-osd/ceph.keyring)

#. Create the default directory on your new OSD. ::

      mkdir /var/lib/ceph/osd/ceph-$ID

#. If the OSD is for a drive other than the OS drive, prepare it
   for use with Ceph, and mount it to the directory you just created. ::

      mkfs.xfs /dev/{DEV}
      mount /dev/{DEV} /var/lib/ceph/osd/ceph-$ID

#. Write the secret to the OSD keyring file. ::

      ceph-authtool --create-keyring /var/lib/ceph/osd/ceph-$ID/keyring \
           --name osd.$ID --add-key $OSD_SECRET

#. Initialize the OSD data directory and make sure the ``ceph`` user owns it. ::

      ceph-osd -i $ID --mkfs --osd-uuid $UUID
      chown -R ceph:ceph /var/lib/ceph/osd/ceph-$ID

#. After you add an OSD to Ceph, the OSD is in your configuration. However,
   it is not yet running. You must start your new OSD before it can begin
   receiving data.

   For modern systemd distributions::

      systemctl enable ceph-osd@$ID
      systemctl start ceph-osd@$ID

   For example::

      systemctl enable ceph-osd@12
      systemctl start ceph-osd@12

Adding MDS
==========

In the below instructions, ``{id}`` is an arbitrary name, such as the hostname
of the machine.

#. Create the mds data directory. ::

      mkdir -p /var/lib/ceph/mds/{cluster-name}-{id}

#. Create a keyring. ::

      ceph-authtool --create-keyring /var/lib/ceph/mds/{cluster-name}-{id}/keyring --gen-key -n mds.{id}

#. Import the keyring and set caps. ::

      ceph auth add mds.{id} osd "allow rwx" mds "allow" mon "allow profile mds" -i /var/lib/ceph/mds/{cluster}-{id}/keyring

#. Add to ceph.conf. ::

      [mds.{id}]
      host = {id}

#. Start the daemon the manual way. ::

      ceph-mds --cluster {cluster-name} -i {id} -m {mon-hostname}:{mon-port} [-f]

#. Start the daemon the right way (using ceph.conf entry). ::

      service ceph start

#. If starting the daemon fails with this error::

      mds.-1.0 ERROR: failed to authenticate: (22) Invalid argument

   then make sure you do not have a keyring set in ceph.conf in the global
   section; move it to the client section, or add a keyring setting specific
   to this mds daemon. Also verify that you see the same key in the mds data
   directory and in the output of ``ceph auth get mds.{id}``.

#. Now you are ready to `create a Ceph filesystem`_.

Summary
=======

Once you have your monitor and two OSDs up and running, you can watch the
placement groups peer by executing the following::

      ceph -w

To view the tree, execute the following::

      ceph osd tree

You should see output that looks something like this::

      # id	weight	type name	up/down	reweight

To add (or remove) additional monitors, see `Add/Remove Monitors`_.
To add (or remove) additional Ceph OSD Daemons, see `Add/Remove OSDs`_.

.. _federated architecture: ../../radosgw/federated-config
.. _Installation (Quick): ../../start
.. _Add/Remove Monitors: ../../rados/operations/add-or-rm-mons
.. _Add/Remove OSDs: ../../rados/operations/add-or-rm-osds
.. _Network Configuration Reference: ../../rados/configuration/network-config-ref
.. _Monitor Config Reference - Data: ../../rados/configuration/mon-config-ref#data
.. _create a Ceph filesystem: ../../cephfs/createfs