1 =============================
2 Storage Cluster Quick Start
3 =============================
5 If you haven't completed your `Preflight Checklist`_, do that first. This
6 **Quick Start** sets up a :term:`Ceph Storage Cluster` using ``ceph-deploy``
7 on your admin node. Create a three Ceph Node cluster so you can
8 explore Ceph functionality.
10 .. include:: quick-common.rst
12 As a first exercise, create a Ceph Storage Cluster with one Ceph Monitor and three
Ceph OSD Daemons. Once the cluster reaches an ``active + clean`` state, expand it
by adding a Metadata Server, two more Ceph Monitors and Managers, and an RGW instance.
15 For best results, create a directory on your admin node for maintaining the
configuration files and keys that ``ceph-deploy`` generates for your cluster. ::

    mkdir my-cluster
    cd my-cluster
21 The ``ceph-deploy`` utility will output files to the current directory. Ensure you
22 are in this directory when executing ``ceph-deploy``.
24 .. important:: Do not call ``ceph-deploy`` with ``sudo`` or run it as ``root``
25 if you are logged in as a different user, because it will not issue ``sudo``
26 commands needed on the remote host.
Starting over
=============

If at any point you run into trouble and you want to start over, execute
33 the following to purge the Ceph packages, and erase all its data and configuration::
35 ceph-deploy purge {ceph-node} [{ceph-node}]
36 ceph-deploy purgedata {ceph-node} [{ceph-node}]
ceph-deploy forgetkeys
rm ceph.*
40 If you execute ``purge``, you must re-install Ceph. The last ``rm``
41 command removes any files that were written out by ceph-deploy locally
42 during a previous installation.
Create a Cluster
================

On your admin node from the directory you created for holding your
49 configuration details, perform the following steps using ``ceph-deploy``.
51 #. Create the cluster. ::
53 ceph-deploy new {initial-monitor-node(s)}
Specify node(s) as hostname, fqdn or hostname:fqdn. For example::

    ceph-deploy new node1
59 Check the output of ``ceph-deploy`` with ``ls`` and ``cat`` in the
60 current directory. You should see a Ceph configuration file
61 (``ceph.conf``), a monitor secret keyring (``ceph.mon.keyring``),
and a log file for the new cluster. See `ceph-deploy new -h`_ for
additional details.
65 #. If you have more than one network interface, add the ``public network``
66 setting under the ``[global]`` section of your Ceph configuration file.
67 See the `Network Configuration Reference`_ for details. ::
69 public network = {ip-address}/{bits}
For example::

    public network = 10.1.2.0/24

to use IPs in the 10.1.2.0/24 (or 10.1.2.0/255.255.255.0) network.
#. If you are deploying in an IPv6 environment, run the following to add
``ms bind ipv6 = true`` to the copy of ``ceph.conf`` in the local directory::
80 echo ms bind ipv6 = true >> ceph.conf
#. Install Ceph packages. ::
84 ceph-deploy install {ceph-node} [...]
For example::

    ceph-deploy install node1 node2 node3
90 The ``ceph-deploy`` utility will install Ceph on each node.
92 #. Deploy the initial monitor(s) and gather the keys::
94 ceph-deploy mon create-initial
Once you complete the process, your local directory should have the following
keyrings:
99 - ``ceph.client.admin.keyring``
100 - ``ceph.bootstrap-mgr.keyring``
101 - ``ceph.bootstrap-osd.keyring``
102 - ``ceph.bootstrap-mds.keyring``
103 - ``ceph.bootstrap-rgw.keyring``
104 - ``ceph.bootstrap-rbd.keyring``
106 .. note:: If this process fails with a message similar to "Unable to
107 find /etc/ceph/ceph.client.admin.keyring", please ensure that the
IP listed for the monitor node in ``ceph.conf`` is the Public IP, not
the Private IP.
111 #. Use ``ceph-deploy`` to copy the configuration file and admin key to
112 your admin node and your Ceph Nodes so that you can use the ``ceph``
113 CLI without having to specify the monitor address and
114 ``ceph.client.admin.keyring`` each time you execute a command. ::
116 ceph-deploy admin {ceph-node(s)}
For example::

    ceph-deploy admin node1 node2 node3
122 #. Deploy a manager daemon. (Required only for luminous+ builds)::
ceph-deploy mgr create node1
126 #. Add three OSDs. For the purposes of these instructions, we assume you have an
unused disk in each node called ``/dev/vdb``. *Be sure that the device is not
currently in use and does not contain any important data.* ::
129 ceph-deploy osd create {ceph-node}:{device}
For example::

    ceph-deploy osd create node1:vdb node2:vdb node3:vdb
135 #. Check your cluster's health. ::
137 ssh node1 sudo ceph health
139 Your cluster should report ``HEALTH_OK``. You can view a more complete
140 cluster status with::
142 ssh node1 sudo ceph -s
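The status command summarizes the monitors, managers, OSDs and data usage in
one view. For a freshly deployed three-node cluster the output should look
roughly like the following sketch (the fsid, usage figures and counts shown
here are placeholders)::

    cluster:
      id:     {your cluster fsid}
      health: HEALTH_OK

    services:
      mon: 1 daemons, quorum node1
      mgr: node1(active)
      osd: 3 osds: 3 up, 3 in

    data:
      pools:   0 pools, 0 pgs
      objects: 0 objects, 0 bytes
      usage:   ...
      pgs:     ...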
145 Expanding Your Cluster
146 ======================
148 Once you have a basic cluster up and running, the next step is to
expand it. Add a Ceph Metadata Server to ``node1``. Then add a
Ceph Monitor and Ceph Manager to ``node2`` and ``node3`` to improve
reliability and availability.
153 /------------------\ /----------------\
154 | ceph-deploy | | node1 |
155 | Admin Node | | cCCC |
156 | +-------->+ mon.node1 |
160 \---------+--------/ \----------------/
165 +----------------->+ |
173 +----------------->+ |
178 Add a Metadata Server
179 ---------------------
181 To use CephFS, you need at least one metadata server. Execute the following to
182 create a metadata server::
184 ceph-deploy mds create {ceph-node}
For example::

    ceph-deploy mds create node1
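To confirm that the daemon is up, you can query the MDS state from a monitor
node. This is only a quick check; the command below assumes no CephFS
filesystem has been created yet, so the new daemon simply reports as a
standby::

    ssh node1 sudo ceph mds stat

The output should list one MDS, for example ``1 up:standby``.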
Adding Monitors
---------------

A Ceph Storage Cluster requires at least one Ceph Monitor and Ceph
194 Manager to run. For high availability, Ceph Storage Clusters typically
195 run multiple Ceph Monitors so that the failure of a single Ceph
196 Monitor will not bring down the Ceph Storage Cluster. Ceph uses the
Paxos algorithm, which requires a majority of monitors (i.e., greater
198 than *N/2* where *N* is the number of monitors) to form a quorum.
199 Odd numbers of monitors tend to be better, although this is not required.
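As a quick illustration of the majority rule, the figures below follow
directly from the *N/2* condition above::

    monitors (N)    majority (> N/2)    failures tolerated
    3               2                   1
    4               3                   1
    5               3                   2

Adding a fourth monitor raises the quorum size without tolerating any
additional failures, which is why odd numbers are preferred.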
.. tip:: If you did not define the ``public network`` option above, then
the new monitor will not know which IP address to bind to on the
new hosts. You can add this line to your ``ceph.conf`` by editing
it now and then pushing it out to each node with
205 ``ceph-deploy --overwrite-conf config push {ceph-nodes}``.
207 Add two Ceph Monitors to your cluster::
209 ceph-deploy mon add {ceph-nodes}
For example::

    ceph-deploy mon add node2 node3
215 Once you have added your new Ceph Monitors, Ceph will begin synchronizing
the monitors and form a quorum. You can check the quorum status by executing
the following::
219 ceph quorum_status --format json-pretty
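The command prints a JSON document. The fields of most interest are the
quorum membership entries; an abridged sketch of the output might look like
this (values will differ on your cluster)::

    {
        "election_epoch": 10,
        "quorum": [ 0, 1, 2 ],
        "quorum_names": [ "node1", "node2", "node3" ],
        "quorum_leader_name": "node1",
        ...
    }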
222 .. tip:: When you run Ceph with multiple monitors, you SHOULD install and
223 configure NTP on each monitor host. Ensure that the
224 monitors are NTP peers.
Adding Managers
---------------

The Ceph Manager daemons operate in an active/standby pattern. Deploying
230 additional manager daemons ensures that if one daemon or host fails, another
231 one can take over without interrupting service.
233 To deploy additional manager daemons::
235 ceph-deploy mgr create node2 node3
237 You should see the standby managers in the output from::
239 ssh node1 sudo ceph -s
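In the ``services`` section of that status output, the manager line should
look roughly like the following (hostnames will match your own cluster)::

    mgr: node1(active), standbys: node2, node3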
Add an RGW Instance
-------------------

To use the :term:`Ceph Object Gateway` component of Ceph, you must deploy an
instance of :term:`RGW`. Execute the following to create a new instance of
RGW::
249 ceph-deploy rgw create {gateway-node}
For example::

    ceph-deploy rgw create node1
255 By default, the :term:`RGW` instance will listen on port 7480. This can be
changed by editing ``ceph.conf`` on the node running the :term:`RGW` as follows::
261 rgw frontends = civetweb port=80
To use an IPv6 address, use::
268 rgw frontends = civetweb port=[::]:80
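As a quick smoke test (assuming the instance is still listening on the default
port 7480), you can send an unauthenticated request to the gateway from any
host that can reach it::

    curl http://node1:7480

An anonymous request returns a short ``ListAllMyBucketsResult`` XML document,
which confirms that the gateway is answering.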
272 Storing/Retrieving Object Data
273 ==============================
275 To store object data in the Ceph Storage Cluster, a Ceph client must:
#. Set an object name
#. Specify a `pool`_
280 The Ceph Client retrieves the latest cluster map and the CRUSH algorithm
281 calculates how to map the object to a `placement group`_, and then calculates
282 how to assign the placement group to a Ceph OSD Daemon dynamically. To find the
object location, all you need is the object name and the pool name. For
example::
286 ceph osd map {poolname} {object-name}
288 .. topic:: Exercise: Locate an Object
As an exercise, let's create an object. Specify an object name, a path to
291 a test file containing some object data and a pool name using the
292 ``rados put`` command on the command line. For example::
294 echo {Test-data} > testfile.txt
295 ceph osd pool create mytest 8
296 rados put {object-name} {file-path} --pool=mytest
297 rados put test-object-1 testfile.txt --pool=mytest
To verify that the Ceph Storage Cluster stored the object, execute
the following::

    rados -p mytest ls
304 Now, identify the object location::
306 ceph osd map {pool-name} {object-name}
307 ceph osd map mytest test-object-1
309 Ceph should output the object's location. For example::
311 osdmap e537 pool 'mytest' (1) object 'test-object-1' -> pg 1.d1743484 (1.4) -> up [1,0] acting [1,0]
To remove the test object, simply delete it using the ``rados rm``
command. For example::
318 rados rm test-object-1 --pool=mytest
320 To delete the ``mytest`` pool::
322 ceph osd pool rm mytest
324 (For safety reasons you will need to supply additional arguments as
325 prompted; deleting pools destroys data.)
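As a sketch of what those additional arguments look like: the monitors require
the pool name to be given twice together with an explicit confirmation flag,
and pool deletion must be enabled (``mon allow pool delete = true`` in the
monitor configuration) before the command will succeed::

    ceph osd pool rm mytest mytest --yes-i-really-really-mean-it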
327 As the cluster evolves, the object location may change dynamically. One benefit
328 of Ceph's dynamic rebalancing is that Ceph relieves you from having to perform
329 data migration or balancing manually.
332 .. _Preflight Checklist: ../quick-start-preflight
333 .. _Ceph Deploy: ../../rados/deployment
334 .. _ceph-deploy install -h: ../../rados/deployment/ceph-deploy-install
335 .. _ceph-deploy new -h: ../../rados/deployment/ceph-deploy-new
336 .. _ceph-deploy osd: ../../rados/deployment/ceph-deploy-osd
337 .. _Running Ceph with Upstart: ../../rados/operations/operating#running-ceph-with-upstart
338 .. _Running Ceph with sysvinit: ../../rados/operations/operating#running-ceph-with-sysvinit
339 .. _CRUSH Map: ../../rados/operations/crush-map
340 .. _pool: ../../rados/operations/pools
341 .. _placement group: ../../rados/operations/placement-groups
342 .. _Monitoring a Cluster: ../../rados/operations/monitoring
343 .. _Monitoring OSDs and PGs: ../../rados/operations/monitoring-osd-pg
344 .. _Network Configuration Reference: ../../rados/configuration/network-config-ref
345 .. _User Management: ../../rados/operations/user-management