The ``ceph-deploy`` tool operates out of a directory on an admin
:term:`node`. Any host with network connectivity, a modern Python
environment, and SSH (such as Linux) should work.

In the descriptions below, :term:`Node` refers to a single machine.

.. include:: quick-common.rst

Ceph-deploy Setup
=================

Add Ceph repositories to the ``ceph-deploy`` admin node. Then, install
``ceph-deploy``.

Debian/Ubuntu
-------------

For Debian and Ubuntu distributions, perform the following steps:

#. Add the release key::

     wget -q -O- 'https://download.ceph.com/keys/release.asc' | sudo apt-key add -

#. Add the Ceph packages to your repository::

     echo deb https://download.ceph.com/debian/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list

   The above URL contains the latest stable release of Ceph. If you
   would like to select a specific release, use the command below and
   replace ``{ceph-stable-release}`` with a stable Ceph release (e.g.,
   ``luminous``). For example::

     echo deb https://download.ceph.com/debian-{ceph-stable-release}/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list

#. Update your repository and install ``ceph-deploy``::

     sudo apt update
     sudo apt install ceph-deploy
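
As a quick sanity check, confirm that the tool is now on your ``PATH`` (the
exact version string shown will vary by release)::

   ceph-deploy --version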

.. note:: You can also use the EU mirror ``eu.ceph.com`` for downloading your
   packages by replacing ``https://download.ceph.com/`` with
   ``http://eu.ceph.com/``.

RHEL/CentOS
-----------

For CentOS 7, perform the following steps:

#. On Red Hat Enterprise Linux 7, register the target machine with
   ``subscription-manager``, verify your subscriptions, and enable the
   "Extras" repository for package dependencies. For example::

     sudo subscription-manager repos --enable=rhel-7-server-extras-rpms
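
   If you are unsure which repositories are already active, you can list
   them before proceeding (RHEL only; output varies by subscription)::

     sudo subscription-manager repos --list-enabled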

#. Install and enable the Extra Packages for Enterprise Linux (EPEL)
   repository::

     sudo yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm

   Please see the `EPEL wiki`_ page for more information.
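
   To verify that the EPEL repository is now visible to ``yum`` (its repo id
   is normally ``epel``), you can run::

     yum repolist | grep -i epel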

#. Add the Ceph repository to your yum configuration file at
   ``/etc/yum.repos.d/ceph.repo`` with the following command::

     cat << EOF | sudo tee /etc/yum.repos.d/ceph.repo
     [ceph-noarch]
     name=Ceph noarch packages
     baseurl=https://download.ceph.com/rpm/el7/noarch
     enabled=1
     gpgcheck=1
     type=rpm-md
     gpgkey=https://download.ceph.com/keys/release.asc
     EOF

   This will use the latest stable Ceph release. If you would like to install
   a different release, replace ``https://download.ceph.com/rpm/el7/noarch``
   with ``https://download.ceph.com/rpm-{ceph-release}/el7/noarch``, where
   ``{ceph-release}`` is a release name like ``luminous``.

#. Update your repository and install ``ceph-deploy``::

     sudo yum update
     sudo yum install ceph-deploy

.. note:: You can also use the EU mirror ``eu.ceph.com`` for downloading your
   packages by replacing ``https://download.ceph.com/`` with
   ``http://eu.ceph.com/``.

openSUSE
--------

The Ceph project does not currently publish release RPMs for openSUSE, but
a stable version of Ceph is included in the default update repository, so
installing it is just a matter of::

   sudo zypper install ceph
   sudo zypper install ceph-deploy
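
If you would like to see which version the distribution ships before
installing, you can query the package metadata first::

   zypper info ceph-deploy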

If the distro version is out-of-date, open a bug at
https://bugzilla.opensuse.org/index.cgi and possibly try your luck with one of
the following repositories:

#. Hammer::

     https://software.opensuse.org/download.html?project=filesystems%3Aceph%3Ahammer&package=ceph

#. Jewel::

     https://software.opensuse.org/download.html?project=filesystems%3Aceph%3Ajewel&package=ceph

Ceph Node Setup
===============

The admin node must have password-less SSH access to Ceph nodes.
When ``ceph-deploy`` logs in to a Ceph node as a particular user, that
user must have passwordless ``sudo`` privileges.

Install NTP
-----------

We recommend installing NTP on Ceph nodes (especially on Ceph Monitor nodes) to
prevent issues arising from clock drift. See `Clock`_ for details.

On CentOS / RHEL, execute::

   sudo yum install ntp ntpdate ntp-doc

On Debian / Ubuntu, execute::

   sudo apt install ntp

Ensure that you enable the NTP service. Ensure that each Ceph Node uses the
same NTP time server. See `NTP`_ for details.
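
On systemd-based distributions, enabling and starting the service looks like
the following; note that the unit is typically named ``ntpd`` on CentOS/RHEL
and ``ntp`` on Debian/Ubuntu::

   sudo systemctl enable ntpd
   sudo systemctl start ntpd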

Install SSH Server
------------------

For **ALL** Ceph Nodes perform the following steps:

#. Install an SSH server (if necessary) on each Ceph Node::

     sudo apt install openssh-server

   or::

     sudo yum install openssh-server

#. Ensure the SSH server is running on **ALL** Ceph Nodes.
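
One way to verify this is to query the service status on each node; the unit
is named ``sshd`` on CentOS/RHEL and ``ssh`` on Debian/Ubuntu::

   sudo systemctl status sshd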

Create a Ceph Deploy User
-------------------------

The ``ceph-deploy`` utility must log in to a Ceph node as a user
that has passwordless ``sudo`` privileges, because it needs to install
software and configuration files without prompting for passwords.

Recent versions of ``ceph-deploy`` support a ``--username`` option so you can
specify any user that has password-less ``sudo`` (including ``root``, although
this is **NOT** recommended). To use ``ceph-deploy --username {username}``, the
user you specify must have password-less SSH access to the Ceph node, as
``ceph-deploy`` will not prompt you for a password.

We recommend creating a specific user for ``ceph-deploy`` on **ALL** Ceph nodes
in the cluster. Please do **NOT** use "ceph" as the user name. A uniform user
name across the cluster may improve ease of use (not required), but you should
avoid obvious user names, because hackers typically use them with brute force
attacks (e.g., ``root``, ``admin``, ``{productname}``). The following procedure,
substituting ``{username}`` for the user name you define, describes how to
create a user with passwordless ``sudo``.

.. note:: Starting with the `Infernalis release`_ the "ceph" user name is reserved
   for the Ceph daemons. If the "ceph" user already exists on the Ceph nodes,
   removing the user must be done before attempting an upgrade.

#. Create a new user on each Ceph Node. ::

     ssh user@ceph-server
     sudo useradd -d /home/{username} -m {username}
     sudo passwd {username}

#. For the new user you added to each Ceph node, ensure that the user has
   ``sudo`` privileges. ::

     echo "{username} ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/{username}
     sudo chmod 0440 /etc/sudoers.d/{username}
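
To confirm that the privileges took effect, switch to the new user and run a
harmless command under ``sudo``; it should complete without a password
prompt::

   su - {username}
   sudo -n true && echo "passwordless sudo OK"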

Enable Password-less SSH
------------------------

Since ``ceph-deploy`` will not prompt for a password, you must generate
SSH keys on the admin node and distribute the public key to each Ceph
node. ``ceph-deploy`` will attempt to generate the SSH keys for initial
monitors.

#. Generate the SSH keys, but do not use ``sudo`` or the
   ``root`` user. Leave the passphrase empty::

     ssh-keygen

     Generating public/private key pair.
     Enter file in which to save the key (/ceph-admin/.ssh/id_rsa):
     Enter passphrase (empty for no passphrase):
     Enter same passphrase again:
     Your identification has been saved in /ceph-admin/.ssh/id_rsa.
     Your public key has been saved in /ceph-admin/.ssh/id_rsa.pub.

#. Copy the key to each Ceph Node, replacing ``{username}`` with the user name
   you created with `Create a Ceph Deploy User`_. ::

     ssh-copy-id {username}@node1
     ssh-copy-id {username}@node2
     ssh-copy-id {username}@node3
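
   Each node should now accept a key-based login without prompting for a
   password, which you can verify with a one-off remote command::

     ssh {username}@node1 hostname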

#. (Recommended) Modify the ``~/.ssh/config`` file of your ``ceph-deploy``
   admin node so that ``ceph-deploy`` can log in to Ceph nodes as the user you
   created without requiring you to specify ``--username {username}`` each
   time you execute ``ceph-deploy``. This has the added benefit of streamlining
   ``ssh`` and ``scp`` usage. Replace ``{username}`` with the user name you
   created::

     Host node1
        Hostname node1
        User {username}
     Host node2
        Hostname node2
        User {username}
     Host node3
        Hostname node3
        User {username}

Enable Networking On Bootup
---------------------------

Ceph OSDs peer with each other and report to Ceph Monitors over the network.
If networking is ``off`` by default, the Ceph cluster cannot come online
during bootup until you enable networking.

The default configuration on some distributions (e.g., CentOS) has the
networking interface(s) off by default. Ensure that, during boot up, your
network interface(s) turn(s) on so that your Ceph daemons can communicate over
the network. For example, on Red Hat and CentOS, navigate to
``/etc/sysconfig/network-scripts`` and ensure that the ``ifcfg-{iface}`` file
has ``ONBOOT`` set to ``yes``.
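
As an illustration, a minimal ``ifcfg-eth0`` that brings the interface up at
boot might look like the following (the interface name and addressing method
are placeholders that will differ on your hosts)::

   DEVICE=eth0
   BOOTPROTO=dhcp
   ONBOOT=yes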

Ensure Connectivity
-------------------

Ensure connectivity using ``ping`` with short hostnames (``hostname -s``).
Address hostname resolution issues as necessary.

.. note:: Hostnames should resolve to a network IP address, not to the
   loopback IP address (e.g., hostnames should resolve to an IP address other
   than ``127.0.0.1``). If you use your admin node as a Ceph node, you
   should also ensure that it resolves to its hostname and IP address
   (i.e., not its loopback IP address).
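
A quick pair of checks from the admin node (using ``node2`` to stand in for
any of your short hostnames)::

   ping -c 1 node2
   getent hosts node2   # should print a routable address, not 127.0.0.1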

Open Required Ports
-------------------

Ceph Monitors communicate using port ``6789`` by default. Ceph OSDs communicate
in a port range of ``6800:7300`` by default. See the `Network Configuration
Reference`_ for details. Ceph OSDs can use multiple network connections to
communicate with clients, monitors, other OSDs for replication, and other OSDs
for heartbeat traffic.

On some distributions (e.g., RHEL), the default firewall configuration is fairly
strict. You may need to adjust your firewall settings to allow inbound requests
so that clients in your network can communicate with daemons on your Ceph nodes.

For ``firewalld`` on RHEL 7, add the ``ceph-mon`` service for Ceph Monitor
nodes and the ``ceph`` service for Ceph OSDs and MDSs to the public zone and
ensure that you make the settings permanent so that they are enabled on reboot.

For example, on monitors::

   sudo firewall-cmd --zone=public --add-service=ceph-mon --permanent

and on OSDs and MDSs::

   sudo firewall-cmd --zone=public --add-service=ceph --permanent

Once you have finished configuring firewalld with the ``--permanent`` flag, you
can make the changes live immediately without rebooting::

   sudo firewall-cmd --reload
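
To confirm that the services are attached to the zone, list its active
configuration::

   sudo firewall-cmd --zone=public --list-services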

For ``iptables``, add port ``6789`` for Ceph Monitors and ports ``6800:7300``
for Ceph OSDs. For example::

   sudo iptables -A INPUT -i {iface} -p tcp -s {ip-address}/{netmask} --dport 6789 -j ACCEPT
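
A matching rule for the OSD port range uses the same placeholders::

   sudo iptables -A INPUT -i {iface} -p tcp -s {ip-address}/{netmask} --dport 6800:7300 -j ACCEPT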

Once you have finished configuring ``iptables``, ensure that you make the
changes persistent on each node so that they will be in effect when your nodes
reboot. For example::

   /sbin/service iptables save
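
On systems without the legacy ``iptables`` service script, one common
alternative is to write the current rules to the file that is restored at
boot (``/etc/sysconfig/iptables`` on CentOS/RHEL)::

   sudo sh -c 'iptables-save > /etc/sysconfig/iptables'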

TTY
---

On CentOS and RHEL, you may receive an error while trying to execute
``ceph-deploy`` commands. If ``requiretty`` is set by default on your Ceph
nodes, disable it by executing ``sudo visudo`` and locate the ``Defaults
requiretty`` setting. Change it to ``Defaults:{username} !requiretty``
(substituting the user you created with `Create a Ceph Deploy User`_) or
comment it out to ensure that ``ceph-deploy`` can connect as that user.

.. note:: If editing ``/etc/sudoers``, ensure that you use
   ``sudo visudo`` rather than a text editor.

SELinux
-------

On CentOS and RHEL, SELinux is set to ``Enforcing`` by default. To streamline your
installation, we recommend setting SELinux to ``Permissive`` or disabling it
entirely and ensuring that your installation and cluster are working properly
before hardening your configuration. To set SELinux to ``Permissive``, execute the
following::

   sudo setenforce 0

To configure SELinux persistently (recommended if SELinux is an issue), modify
the configuration file at ``/etc/selinux/config``.
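
One common way to do this is to edit the ``SELINUX=`` line in place and then
confirm the current runtime mode::

   sudo sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config
   getenforce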

Priorities/Preferences
----------------------

Ensure that your package manager has priority/preferences packages installed and
enabled. On CentOS, you may need to install EPEL. On RHEL, you may need to
enable optional repositories. ::

   sudo yum install yum-plugin-priorities

For example, on RHEL 7 server, execute the following to install
``yum-plugin-priorities`` and enable the ``rhel-7-server-optional-rpms``
repository::

   sudo yum install yum-plugin-priorities --enablerepo=rhel-7-server-optional-rpms

Summary
=======

This completes the Quick Start Preflight. Proceed to the `Storage Cluster
Quick Start`_.

.. _Storage Cluster Quick Start: ../quick-ceph-deploy
.. _OS Recommendations: ../os-recommendations
.. _Network Configuration Reference: ../../rados/configuration/network-config-ref
.. _Clock: ../../rados/configuration/mon-config-ref#clock
.. _NTP: http://www.ntp.org/
.. _Infernalis release: ../../release-notes/#v9-1-0-infernalis-release-candidate
.. _EPEL wiki: https://fedoraproject.org/wiki/EPEL