Each release of Ceph may have additional steps. Refer to the release-specific
sections in this document and the `release notes`_ document to identify
release-specific procedures for your cluster before using the upgrade
procedures.

You can upgrade daemons in your Ceph cluster while the cluster is online and in
service! Certain types of daemons depend upon others. For example, Ceph
Metadata Servers and Ceph Object Gateways depend upon Ceph Monitors and Ceph
OSD Daemons. We recommend upgrading in this order:

#. Upgrade ``ceph-deploy``.
#. Ceph Monitors
#. Ceph OSD Daemons
#. Ceph Metadata Servers
#. Ceph Object Gateways

As a general rule, we recommend upgrading all the daemons of a specific type
(e.g., all ``ceph-mon`` daemons, all ``ceph-osd`` daemons, etc.) to ensure that
they are all on the same release. We also recommend that you upgrade all the
daemons in your cluster before you try to exercise new functionality in a
release.

The `Upgrade Procedures`_ are relatively simple, but please look at
distribution-specific sections before upgrading. The basic process involves
three steps:

#. Use ``ceph-deploy`` on your admin node to upgrade the packages for
   multiple hosts (using the ``ceph-deploy install`` command), or log in to
   each host and upgrade the Ceph package `manually`_. For example, when
   `Upgrading Monitors`_, the ``ceph-deploy`` syntax might look like this::

      ceph-deploy install --release {release-name} ceph-node1[ ceph-node2]
      ceph-deploy install --release firefly mon1 mon2 mon3

   **Note:** The ``ceph-deploy install`` command will upgrade the packages
   on the specified node(s) from the old release to the release you specify.
   There is no ``ceph-deploy upgrade`` command.

#. Log in to each Ceph node and restart each Ceph daemon.
   See `Operating a Cluster`_ for details.

#. Ensure your cluster is healthy. See `Monitoring a Cluster`_ for details.

.. important:: Once you upgrade a daemon, you cannot downgrade it.

Before upgrading Ceph daemons, upgrade the ``ceph-deploy`` tool. ::

    sudo pip install -U ceph-deploy

Or::

    sudo apt-get install ceph-deploy

Or::

    sudo yum install ceph-deploy python-pushy

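You can confirm which ``ceph-deploy`` version is now installed (the exact
output format varies by release)::

    ceph-deploy --version
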
Argonaut to Bobtail
===================

When upgrading from Argonaut to Bobtail, you need to be aware of several
things:

#. Authentication now defaults to **ON**, but used to default to **OFF**.
#. Monitors use a new internal on-wire protocol.
#. RBD ``format 2`` images require upgrading all OSDs before you use them.

Ensure that you update your package repository paths. For example::

    sudo rm /etc/apt/sources.list.d/ceph.list
    echo deb http://download.ceph.com/debian-bobtail/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list

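Before upgrading packages on a Debian/Ubuntu host, you can check that ``apt``
now sees the Bobtail packages from the new repository (``apt-cache policy`` is
a standard APT command; its output format varies)::

    sudo apt-get update
    apt-cache policy ceph
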
See the following sections for additional details.

Authentication
--------------

The Ceph Bobtail release enables authentication by default. Bobtail also has
finer-grained authentication configuration settings. In previous versions of
Ceph (v0.55 and earlier), you could simply specify::

    auth supported = [cephx | none]

This option still works, but is deprecated. New releases support
``cluster``, ``service`` and ``client`` authentication settings as
follows::

    auth cluster required = [cephx | none] # default cephx
    auth service required = [cephx | none] # default cephx
    auth client required = [cephx | none] # default cephx,none

.. important:: If your cluster does not currently have an ``auth
   supported`` line that enables authentication, you must explicitly
   turn it off in Bobtail using the settings below. ::

      auth cluster required = none
      auth service required = none

   This will disable authentication on the cluster, but still leave
   clients with the default configuration, which allows them to talk to a
   cluster that enables authentication but does not require it.

.. important:: If your cluster already has an ``auth supported`` option defined
   in the configuration file, no changes are necessary.

See `User Management - Backward Compatibility`_ for details.

Monitor On-wire Protocol
------------------------

We recommend upgrading all monitors to Bobtail. A mixture of Bobtail and
Argonaut monitors will not be able to use the new on-wire protocol, as the
protocol requires all monitors to be Bobtail or greater. Upgrading only a
majority of the nodes (e.g., two out of three) may expose the cluster to a
situation where a single additional failure may compromise availability
(because the non-upgraded daemon cannot participate in the new protocol). We
recommend not waiting for an extended period of time between ``ceph-mon``
upgrades.

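During a rolling monitor upgrade, it helps to confirm that each restarted
monitor rejoins the quorum before moving on to the next one. The commands below
are long-standing Ceph CLI calls, though their output format varies by
release::

    ceph mon stat
    ceph quorum_status
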
RBD Images
----------

The Bobtail release supports ``format 2`` images! However, you should not
create or use ``format 2`` RBD images until after all ``ceph-osd`` daemons have
been upgraded. Note that ``format 1`` is still the default. You can use the new
``ceph osd ls`` and ``ceph tell osd.N version`` commands to double-check your
cluster. ``ceph osd ls`` will give a list of all OSD IDs that are part of the
cluster, and you can use that to write a simple shell loop to display all the
OSD version strings::

    for i in $(ceph osd ls); do
        ceph tell osd.${i} version
    done

Argonaut to Cuttlefish
======================

To upgrade your cluster from Argonaut to Cuttlefish, please read this
section, and the sections on upgrading from Argonaut to Bobtail and
upgrading from Bobtail to Cuttlefish carefully. When upgrading from
Argonaut to Cuttlefish, **YOU MUST UPGRADE YOUR MONITORS FROM ARGONAUT
TO BOBTAIL v0.56.5 FIRST!** All other Ceph daemons can upgrade from
Argonaut to Cuttlefish without the intermediate upgrade to Bobtail.

.. important:: Ensure that the repository specified points to Bobtail, not
   Cuttlefish. For example::

      sudo rm /etc/apt/sources.list.d/ceph.list
      echo deb http://download.ceph.com/debian-bobtail/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list

We recommend upgrading all monitors to Bobtail before proceeding with the
upgrade of the monitors to Cuttlefish. A mixture of Bobtail and Argonaut
monitors will not be able to use the new on-wire protocol, as the protocol
requires all monitors to be Bobtail or greater. Upgrading only a majority of the
nodes (e.g., two out of three) may expose the cluster to a situation where a
single additional failure may compromise availability (because the non-upgraded
daemon cannot participate in the new protocol). We recommend not waiting for an
extended period of time between ``ceph-mon`` upgrades. See `Upgrading
Monitors`_ for details.

.. note:: See the `Authentication`_ section and
   `User Management - Backward Compatibility`_ for additional information
   on authentication backward compatibility settings for Bobtail.

Once you complete the upgrade of your monitors from Argonaut to
Bobtail, and have restarted the monitor daemons, you must upgrade the
monitors from Bobtail to Cuttlefish. Ensure that you have a quorum
before beginning this upgrade procedure. Before upgrading, remember to
replace the reference to the Bobtail repository with a reference to
the Cuttlefish repository. For example::

    sudo rm /etc/apt/sources.list.d/ceph.list
    echo deb http://download.ceph.com/debian-cuttlefish/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list

See `Upgrading Monitors`_ for details.

The architecture of the monitors changed significantly from Argonaut to
Cuttlefish. See `Monitor Config Reference`_ and `Joao's blog post`_ for details.
Once you complete the monitor upgrade, you can upgrade the OSD daemons and the
MDS daemons using the generic procedures. See `Upgrading an OSD`_ and
`Upgrading a Metadata Server`_ for details.

Bobtail to Cuttlefish
=====================

Upgrading your cluster from Bobtail to Cuttlefish has a few important
considerations. First, the monitor uses a new architecture, so you should
upgrade the full set of monitors to use Cuttlefish. Second, if you run multiple
metadata servers in a cluster, ensure the metadata servers have unique names.
See the following sections for details.

Replace any ``apt`` reference to older repositories with a reference to the
Cuttlefish repository. For example::

    sudo rm /etc/apt/sources.list.d/ceph.list
    echo deb http://download.ceph.com/debian-cuttlefish/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list

The architecture of the monitors changed significantly from Bobtail to
Cuttlefish. See `Monitor Config Reference`_ and `Joao's blog post`_ for
details. This means that monitors running v0.59 or later do not talk to
pre-v0.59 monitors (Cuttlefish is v0.61). When you upgrade each monitor, it
will convert its local data store to the new format. Once you upgrade a
majority of monitors, the monitors form a quorum using the new protocol and
the old monitors will be blocked until they get upgraded. For this reason, we
recommend upgrading the monitors in immediate succession.

.. important:: Do not run a mixed-version cluster for an extended period.

The monitor now enforces that MDS names be unique. If you have multiple
metadata server daemons that start with the same ID (e.g., ``mds.a``), the
second metadata server will implicitly mark the first metadata server as
``failed``. Multi-MDS configurations with identical names must be adjusted to
give each daemon a unique name. If you run your cluster with one metadata
server, you can disregard this notice for now.

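In classic ``mkcephfs``-style deployments, the daemon names typically come from
the ``[mds.{id}]`` sections of ``ceph.conf``. A minimal sketch of a two-MDS
configuration with unique names, using hypothetical hosts ``mds-host-1`` and
``mds-host-2``, might look like this::

    [mds.a]
            host = mds-host-1

    [mds.b]
            host = mds-host-2
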
The ``ceph-deploy`` tool is now the preferred method of provisioning new
clusters. For existing clusters created with the now-obsolete ``mkcephfs``
tool, there is a migration path to the new tool, documented at
`Transitioning to ceph-deploy`_.

Cuttlefish to Dumpling
======================

When upgrading from Cuttlefish (v0.61-v0.61.7) you may perform a rolling
upgrade. However, there are a few important considerations. First, you must
upgrade the ``ceph`` command line utility, because it has changed
significantly. Second, you must upgrade the full set of monitors to use
Dumpling, because of an internal protocol change.

Replace any reference to older repositories with a reference to the
Dumpling repository. For example, with ``apt`` perform the following::

    sudo rm /etc/apt/sources.list.d/ceph.list
    echo deb http://download.ceph.com/debian-dumpling/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list

With CentOS/Red Hat distributions, remove the old repository. ::

    sudo rm /etc/yum.repos.d/ceph.repo

Then add a new ``ceph.repo`` repository entry with the following contents. ::

    name=Ceph Packages and Backports $basearch
    baseurl=http://download.ceph.com/rpm/el6/$basearch
    gpgkey=https://download.ceph.com/keys/release.asc

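A complete ``ceph.repo`` entry wraps the values above in a ``[ceph]`` section
together with the usual yum fields; the sketch below assumes typical values
for ``enabled`` and ``gpgcheck``::

    [ceph]
    name=Ceph Packages and Backports $basearch
    baseurl=http://download.ceph.com/rpm/el6/$basearch
    enabled=1
    gpgcheck=1
    gpgkey=https://download.ceph.com/keys/release.asc
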
.. note:: Ensure you use the correct URL for your distribution. Check the
   http://download.ceph.com/rpm directory for your distribution.

.. note:: Since you can upgrade using ``ceph-deploy`` you will only need to add
   the repository on Ceph Client nodes where you use the ``ceph`` command line
   interface or the ``ceph-deploy`` tool.

Dumpling to Emperor
===================

When upgrading from Dumpling (v0.67) you may perform a rolling upgrade.

Replace any reference to older repositories with a reference to the
Emperor repository. For example, with ``apt`` perform the following::

    sudo rm /etc/apt/sources.list.d/ceph.list
    echo deb http://download.ceph.com/debian-emperor/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list

With CentOS/Red Hat distributions, remove the old repository. ::

    sudo rm /etc/yum.repos.d/ceph.repo

Then add a new ``ceph.repo`` repository entry with the following contents and
replace ``{distro}`` with your distribution (e.g., ``el6``, ``rhel6``, etc.). ::

    name=Ceph Packages and Backports $basearch
    baseurl=http://download.ceph.com/rpm-emperor/{distro}/$basearch
    gpgkey=https://download.ceph.com/keys/release.asc

.. note:: Ensure you use the correct URL for your distribution. Check the
   http://download.ceph.com/rpm directory for your distribution.

.. note:: Since you can upgrade using ``ceph-deploy`` you will only need to add
   the repository on Ceph Client nodes where you use the ``ceph`` command line
   interface or the ``ceph-deploy`` tool.

In v0.65, the ``ceph`` command line interface (CLI) utility changed
significantly. You will not be able to use the old CLI with Dumpling. This
means that you must upgrade the ``ceph-common`` library on all nodes that
access the Ceph Storage Cluster with the ``ceph`` CLI before upgrading Ceph
daemons. ::

    sudo apt-get update && sudo apt-get install ceph-common

Ensure that you have the latest version (v0.67 or later). If you do not,
you may need to uninstall the package, auto-remove its dependencies, and
reinstall it.

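For example, you might confirm the installed version with ``ceph --version``
and, on a node that only runs the ``ceph`` CLI, purge and reinstall the package
if an older version is still present (a sketch for Debian/Ubuntu)::

    ceph --version
    sudo apt-get purge ceph-common
    sudo apt-get autoremove
    sudo apt-get update && sudo apt-get install ceph-common
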
See `v0.65`_ for details on the new command line interface.

.. _v0.65: http://docs.ceph.com/docs/master/release-notes/#v0-65

Dumpling (v0.67) ``ceph-mon`` daemons have an internal protocol change. This
means that v0.67 daemons cannot talk to v0.66 or older daemons. Once you
upgrade a majority of monitors, the monitors form a quorum using the new
protocol and the old monitors will be blocked until they get upgraded. For this
reason, we recommend upgrading all monitors at once (or in relatively quick
succession) to minimize the possibility of downtime.

.. important:: Do not run a mixed-version cluster for an extended period.

Dumpling to Firefly
===================

If your existing cluster is running a version older than v0.67 Dumpling, please
first upgrade to the latest Dumpling release before upgrading to v0.80 Firefly.

Dumpling (v0.67) ``ceph-mon`` daemons have an internal protocol change. This
means that v0.67 daemons cannot talk to v0.66 or older daemons. Once you
upgrade a majority of monitors, the monitors form a quorum using the new
protocol and the old monitors will be blocked until they get upgraded. For this
reason, we recommend upgrading all monitors at once (or in relatively quick
succession) to minimize the possibility of downtime.

.. important:: Do not run a mixed-version cluster for an extended period.

Ceph Config File Changes
------------------------

We recommend adding the following to the ``[mon]`` section of your
``ceph.conf`` prior to upgrade::

    mon warn on legacy crush tunables = false

This will prevent health warnings due to the use of legacy CRUSH placement.
Although it is possible to rebalance existing data across your cluster, we do
not normally recommend it for production environments as a large amount of data
will move and there is a significant performance impact from the rebalancing.

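If you prefer to adopt the newer CRUSH behavior instead of silencing the
warning, recent releases let you switch the tunables profile with a single
command; expect the data movement described above (``optimal`` is one of
several accepted profiles)::

    ceph osd crush tunables optimal
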
In v0.65, the ``ceph`` command line interface (CLI) utility changed
significantly. You will not be able to use the old CLI with Firefly. This
means that you must upgrade the ``ceph-common`` library on all nodes that
access the Ceph Storage Cluster with the ``ceph`` CLI before upgrading Ceph
daemons.

For Debian/Ubuntu, execute::

    sudo apt-get update && sudo apt-get install ceph-common

For CentOS/RHEL, execute::

    sudo yum install ceph-common

Ensure that you have the latest version. If you do not, you may need to
uninstall the package, auto-remove its dependencies, and reinstall it.

See `v0.65`_ for details on the new command line interface.

.. _v0.65: http://docs.ceph.com/docs/master/release-notes/#v0-65

Replace any reference to older repositories with a reference to the
Firefly repository. For example, with ``apt`` perform the following::

    sudo rm /etc/apt/sources.list.d/ceph.list
    echo deb http://download.ceph.com/debian-firefly/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list

With CentOS/Red Hat distributions, remove the old repository. ::

    sudo rm /etc/yum.repos.d/ceph.repo

Then add a new ``ceph.repo`` repository entry with the following contents and
replace ``{distro}`` with your distribution (e.g., ``el6``, ``rhel6``,
etc.). ::

    name=Ceph Packages and Backports $basearch
    baseurl=http://download.ceph.com/rpm-firefly/{distro}/$basearch
    gpgkey=https://download.ceph.com/keys/release.asc

Upgrade daemons in the following order:

#. **Monitors:** If the ``ceph-mon`` daemons are not restarted prior to the
   ``ceph-osd`` daemons, the monitors will not correctly register their new
   capabilities with the cluster and new features may not be usable until
   the monitors are restarted a second time.

#. **OSDs**

#. **MDSs:** If the ``ceph-mds`` daemon is restarted first, it will wait until
   all OSDs have been upgraded before finishing its startup sequence.

#. **Gateways:** Upgrade ``radosgw`` daemons together. There is a subtle change
   in behavior for multipart uploads that prevents a multipart request that
   was initiated with a new ``radosgw`` from being completed by an old
   ``radosgw``.

.. note:: Make sure you upgrade **ALL** of your Ceph monitors **AND**
   restart them **BEFORE** upgrading and restarting OSDs, MDSs, and gateways!

Emperor to Firefly
==================

If your existing cluster is running a version older than v0.67 Dumpling, please
first upgrade to the latest Dumpling release before upgrading to v0.80 Firefly.
Please refer to `Cuttlefish to Dumpling`_ and the `Firefly release notes`_ for
details. To upgrade from a post-Emperor point release, see the `Firefly release
notes`_ for details.

Ceph Config File Changes
------------------------

We recommend adding the following to the ``[mon]`` section of your
``ceph.conf`` prior to upgrade::

    mon warn on legacy crush tunables = false

This will prevent health warnings due to the use of legacy CRUSH placement.
Although it is possible to rebalance existing data across your cluster, we do
not normally recommend it for production environments as a large amount of data
will move and there is a significant performance impact from the rebalancing.

Replace any reference to older repositories with a reference to the
Firefly repository. For example, with ``apt`` perform the following::

    sudo rm /etc/apt/sources.list.d/ceph.list
    echo deb http://download.ceph.com/debian-firefly/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list

With CentOS/Red Hat distributions, remove the old repository. ::

    sudo rm /etc/yum.repos.d/ceph.repo

Then add a new ``ceph.repo`` repository entry with the following contents, but
replace ``{distro}`` with your distribution (e.g., ``el6``, ``rhel6``,
etc.). ::

    name=Ceph Packages and Backports $basearch
    baseurl=http://download.ceph.com/rpm/{distro}/$basearch
    gpgkey=https://download.ceph.com/keys/release.asc

.. note:: Ensure you use the correct URL for your distribution. Check the
   http://download.ceph.com/rpm directory for your distribution.

.. note:: Since you can upgrade using ``ceph-deploy`` you will only need to add
   the repository on Ceph Client nodes where you use the ``ceph`` command line
   interface or the ``ceph-deploy`` tool.

Upgrade daemons in the following order:

#. **Monitors:** If the ``ceph-mon`` daemons are not restarted prior to the
   ``ceph-osd`` daemons, the monitors will not correctly register their new
   capabilities with the cluster and new features may not be usable until
   the monitors are restarted a second time.

#. **OSDs**

#. **MDSs:** If the ``ceph-mds`` daemon is restarted first, it will wait until
   all OSDs have been upgraded before finishing its startup sequence.

#. **Gateways:** Upgrade ``radosgw`` daemons together. There is a subtle change
   in behavior for multipart uploads that prevents a multipart request that
   was initiated with a new ``radosgw`` from being completed by an old
   ``radosgw``.

Upgrade Procedures
==================

The following sections describe the upgrade process.

.. important:: Each release of Ceph may have some additional steps. Refer to
   the release-specific sections for details **BEFORE** you begin upgrading
   daemons.

Upgrading Monitors
------------------

To upgrade monitors, perform the following steps:

#. Upgrade the Ceph package for each daemon instance.

   You may use ``ceph-deploy`` to address all monitor nodes at once.
   For example::

      ceph-deploy install --release {release-name} ceph-node1[ ceph-node2]
      ceph-deploy install --release hammer mon1 mon2 mon3

   You may also use the package manager for your Linux distribution on
   each individual node. To upgrade packages manually on each Debian/Ubuntu
   host, perform the following steps::

      sudo apt-get update && sudo apt-get install ceph

   On CentOS/Red Hat hosts, perform the following steps::

      sudo yum update && sudo yum install ceph

#. Restart each monitor. For Ubuntu distributions, use::

      sudo restart ceph-mon id={hostname}

   For CentOS/Red Hat/Debian distributions, use::

      sudo /etc/init.d/ceph restart {mon-id}

   For CentOS/Red Hat distributions deployed with ``ceph-deploy``,
   the monitor ID is usually ``mon.{hostname}``.

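   For example, on a monitor host named ``ceph-node1`` (a hypothetical
   hostname), the restart would look like this::

      sudo /etc/init.d/ceph restart mon.ceph-node1
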
#. Ensure each monitor has rejoined the quorum. ::

      ceph mon stat

Ensure that you have completed the upgrade cycle for all of your Ceph Monitors.

Upgrading an OSD
----------------

To upgrade a Ceph OSD Daemon, perform the following steps:

#. Upgrade the Ceph OSD Daemon package.

   You may use ``ceph-deploy`` to address all Ceph OSD Daemon nodes at
   once. For example::

      ceph-deploy install --release {release-name} ceph-node1[ ceph-node2]
      ceph-deploy install --release hammer osd1 osd2 osd3

   You may also use the package manager on each node to upgrade packages
   manually. For Debian/Ubuntu hosts, perform the following steps on each
   host::

      sudo apt-get update && sudo apt-get install ceph

   For CentOS/Red Hat hosts, perform the following steps::

      sudo yum update && sudo yum install ceph

#. Restart the OSD, where ``N`` is the OSD number. For Ubuntu, use::

      sudo restart ceph-osd id=N

   For multiple OSDs on a host, you may restart all of them with Upstart. ::

      sudo restart ceph-osd-all

   For CentOS/Red Hat/Debian distributions, use::

      sudo /etc/init.d/ceph restart osd.N

#. Ensure each upgraded Ceph OSD Daemon has rejoined the cluster::

      ceph osd stat

Ensure that you have completed the upgrade cycle for all of your
Ceph OSD Daemons.

Upgrading a Metadata Server
---------------------------

To upgrade a Ceph Metadata Server, perform the following steps:

#. Upgrade the Ceph Metadata Server package. You may use ``ceph-deploy`` to
   address all Ceph Metadata Server nodes at once, or use the package manager
   on each node. For example::

      ceph-deploy install --release {release-name} ceph-node1
      ceph-deploy install --release hammer mds1

   To upgrade packages manually, perform the following steps on each
   Debian/Ubuntu host. ::

      sudo apt-get update && sudo apt-get install ceph-mds

   Or the following steps on CentOS/Red Hat hosts::

      sudo yum update && sudo yum install ceph-mds

#. Restart the metadata server. For Ubuntu, use::

      sudo restart ceph-mds id={hostname}

   For CentOS/Red Hat/Debian distributions, use::

      sudo /etc/init.d/ceph restart mds.{hostname}

   For clusters deployed with ``ceph-deploy``, the name is usually either
   the name you specified on creation or the hostname.

#. Ensure the metadata server is up and running::

      ceph mds stat

Upgrading a Client
------------------

Once you have upgraded the packages and restarted daemons on your Ceph
cluster, we recommend upgrading ``ceph-common`` and client libraries
(``librbd1`` and ``librados2``) on your client nodes too.

#. Upgrade the package::

      sudo apt-get update && sudo apt-get install ceph-common librados2 librbd1 python-rados python-rbd

#. Ensure that you have the latest version::

      ceph --version

If you do not have the latest version, you may need to uninstall the package,
auto-remove its dependencies, and reinstall it.

Transitioning to ceph-deploy
============================

If you have an existing cluster that you deployed with ``mkcephfs`` (usually
Argonaut or Bobtail releases), you will need to make a few changes to your
configuration to ensure that your cluster will work with ``ceph-deploy``.

You will need to add ``caps mon = "allow *"`` to your monitor keyring if it is
not already in the keyring. By default, the monitor keyring is located under
``/var/lib/ceph/mon/ceph-$id/keyring``. When you have added the ``caps``
setting, your monitor keyring should look something like this::

    [mon.]
            key = AQBJIHhRuHCwDRAAZjBTSJcIBIoGpdOR9ToiyQ==
            caps mon = "allow *"

Adding ``caps mon = "allow *"`` will ease the transition from ``mkcephfs`` to
``ceph-deploy`` by allowing ``ceph-create-keys`` to use the ``mon.`` keyring
file in ``$mon_data`` and get the caps it needs.

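You can edit the keyring file by hand, or use ``ceph-authtool`` to set the cap
on the ``mon.`` entity; a minimal sketch (adjust ``$id`` to your monitor's
ID)::

    sudo ceph-authtool /var/lib/ceph/mon/ceph-$id/keyring -n mon. --cap mon 'allow *'
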
Under the ``/var/lib/ceph`` directory, the ``mon`` and ``osd`` directories need
to use the default paths.

- **OSDs**: The path should be ``/var/lib/ceph/osd/ceph-$id``
- **MON**: The path should be ``/var/lib/ceph/mon/ceph-$id``

Under those directories, the keyring should be in a file named ``keyring``.

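For example, with default paths an OSD with ID ``0`` and a monitor with ID
``mon1`` (both hypothetical IDs) would keep their keyrings at::

    /var/lib/ceph/osd/ceph-0/keyring
    /var/lib/ceph/mon/ceph-mon1/keyring
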
.. _Monitor Config Reference: ../../rados/configuration/mon-config-ref
.. _Joao's blog post: http://ceph.com/dev-notes/cephs-new-monitor-changes
.. _User Management - Backward Compatibility: ../../rados/configuration/auth-config-ref/#backward-compatibility
.. _manually: ../install-storage-cluster/
.. _Operating a Cluster: ../../rados/operations/operating
.. _Monitoring a Cluster: ../../rados/operations/monitoring
.. _Firefly release notes: ../../release-notes/#v0-80-firefly
.. _release notes: ../../release-notes