1 ==========================
2 Monitor Config Reference
3 ==========================
5 Understanding how to configure a :term:`Ceph Monitor` is an important part of
6 building a reliable :term:`Ceph Storage Cluster`. **All Ceph Storage Clusters
7 have at least one monitor**. A monitor configuration usually remains fairly
8 consistent, but you can add, remove or replace a monitor in a cluster. See
`Adding/Removing a Monitor`_ and `Add/Remove a Monitor (ceph-deploy)`_ for
details.
13 .. index:: Ceph Monitor; Paxos
18 Ceph Monitors maintain a "master copy" of the :term:`cluster map`, which means a
19 :term:`Ceph Client` can determine the location of all Ceph Monitors, Ceph OSD
20 Daemons, and Ceph Metadata Servers just by connecting to one Ceph Monitor and
21 retrieving a current cluster map. Before Ceph Clients can read from or write to
22 Ceph OSD Daemons or Ceph Metadata Servers, they must connect to a Ceph Monitor
23 first. With a current copy of the cluster map and the CRUSH algorithm, a Ceph
24 Client can compute the location for any object. The ability to compute object
25 locations allows a Ceph Client to talk directly to Ceph OSD Daemons, which is a
26 very important aspect of Ceph's high scalability and performance. See
27 `Scalability and High Availability`_ for additional details.
29 The primary role of the Ceph Monitor is to maintain a master copy of the cluster
30 map. Ceph Monitors also provide authentication and logging services. Ceph
31 Monitors write all changes in the monitor services to a single Paxos instance,
32 and Paxos writes the changes to a key/value store for strong consistency. Ceph
33 Monitors can query the most recent version of the cluster map during sync
34 operations. Ceph Monitors leverage the key/value store's snapshots and iterators
35 (using leveldb) to perform store-wide synchronization.
.. ditaa:: /-------------\               /-------------\
           |   Monitor   | Write Changes |    Paxos    |
           |    cCCC     +-------------->+    cCCC     |
           +-------------+               \------+------/
           | Monitor Map |                      |
           +-------------+                      | Write Changes
           |   OSD Map   |                      v
           +-------------+               /------+------\
           |   PG Map    |               | Key / Value |
           +-------------+               |    Store    |
           |   MDS Map   |               |    cCCC     |
           +-------------+               \------+------/
           |    cCCC     |                      | Read Changes
           |             |<---------------------+
           \-------------/
.. deprecated:: version 0.58

   In Ceph versions 0.58 and earlier, Ceph Monitors used a Paxos instance for
   each service and stored the map as a file.
65 .. index:: Ceph Monitor; cluster map
70 The cluster map is a composite of maps, including the monitor map, the OSD map,
71 the placement group map and the metadata server map. The cluster map tracks a
72 number of important things: which processes are ``in`` the Ceph Storage Cluster;
which processes that are ``in`` the Ceph Storage Cluster are ``up`` and running
or ``down``; whether the placement groups are ``active`` or ``inactive``, and
``clean`` or in some other state; and other details that reflect the current
state of the cluster, such as the total amount of storage space and the amount
of storage used.
79 When there is a significant change in the state of the cluster--e.g., a Ceph OSD
80 Daemon goes down, a placement group falls into a degraded state, etc.--the
81 cluster map gets updated to reflect the current state of the cluster.
The Ceph Monitor also maintains a history of the prior states of
the cluster. The monitor map, OSD map, placement group map and metadata server
map each maintain a history of their map versions. We call each version an
"epoch."
87 When operating your Ceph Storage Cluster, keeping track of these states is an
88 important part of your system administration duties. See `Monitoring a Cluster`_
89 and `Monitoring OSDs and PGs`_ for additional details.
91 .. index:: high availability; quorum
Our Configuring Ceph section provides a trivial `Ceph configuration file`_
that specifies one monitor in the test cluster. A cluster will run fine with a
98 single monitor; however, **a single monitor is a single-point-of-failure**. To
99 ensure high availability in a production Ceph Storage Cluster, you should run
100 Ceph with multiple monitors so that the failure of a single monitor **WILL NOT**
101 bring down your entire cluster.
103 When a Ceph Storage Cluster runs multiple Ceph Monitors for high availability,
104 Ceph Monitors use `Paxos`_ to establish consensus about the master cluster map.
A consensus requires a majority of the monitors to be running so that they can
establish a quorum and agree on the cluster map (e.g., 1 out of 1; 2 out of 3;
3 out of 5; 4 out of 6; etc.).
109 ``mon force quorum join``
:Description: Force the monitor to join the quorum even if it has previously
              been removed from the map.
115 .. index:: Ceph Monitor; consistency
120 When you add monitor settings to your Ceph configuration file, you need to be
121 aware of some of the architectural aspects of Ceph Monitors. **Ceph imposes
122 strict consistency requirements** for a Ceph monitor when discovering another
Ceph Monitor within the cluster. Whereas Ceph Clients and other Ceph daemons
124 use the Ceph configuration file to discover monitors, monitors discover each
125 other using the monitor map (monmap), not the Ceph configuration file.
127 A Ceph Monitor always refers to the local copy of the monmap when discovering
128 other Ceph Monitors in the Ceph Storage Cluster. Using the monmap instead of the
129 Ceph configuration file avoids errors that could break the cluster (e.g., typos
130 in ``ceph.conf`` when specifying a monitor address or port). Since monitors use
131 monmaps for discovery and they share monmaps with clients and other Ceph
132 daemons, **the monmap provides monitors with a strict guarantee that their
133 consensus is valid.**
135 Strict consistency also applies to updates to the monmap. As with any other
136 updates on the Ceph Monitor, changes to the monmap always run through a
137 distributed consensus algorithm called `Paxos`_. The Ceph Monitors must agree on
138 each update to the monmap, such as adding or removing a Ceph Monitor, to ensure
139 that each monitor in the quorum has the same version of the monmap. Updates to
140 the monmap are incremental so that Ceph Monitors have the latest agreed upon
141 version, and a set of previous versions. Maintaining a history enables a Ceph
142 Monitor that has an older version of the monmap to catch up with the current
143 state of the Ceph Storage Cluster.
145 If Ceph Monitors discovered each other through the Ceph configuration file
146 instead of through the monmap, it would introduce additional risks because the
147 Ceph configuration files are not updated and distributed automatically. Ceph
148 Monitors might inadvertently use an older Ceph configuration file, fail to
149 recognize a Ceph Monitor, fall out of a quorum, or develop a situation where
150 `Paxos`_ is not able to determine the current state of the system accurately.
153 .. index:: Ceph Monitor; bootstrapping monitors
155 Bootstrapping Monitors
156 ----------------------
158 In most configuration and deployment cases, tools that deploy Ceph may help
159 bootstrap the Ceph Monitors by generating a monitor map for you (e.g.,
``ceph-deploy``, etc.). A Ceph Monitor requires a few explicit settings:
163 - **Filesystem ID**: The ``fsid`` is the unique identifier for your
164 object store. Since you can run multiple clusters on the same
165 hardware, you must specify the unique ID of the object store when
166 bootstrapping a monitor. Deployment tools usually do this for you
167 (e.g., ``ceph-deploy`` can call a tool like ``uuidgen``), but you
168 may specify the ``fsid`` manually too.
170 - **Monitor ID**: A monitor ID is a unique ID assigned to each monitor within
171 the cluster. It is an alphanumeric value, and by convention the identifier
172 usually follows an alphabetical increment (e.g., ``a``, ``b``, etc.). This
173 can be set in a Ceph configuration file (e.g., ``[mon.a]``, ``[mon.b]``, etc.),
by a deployment tool, or using the ``ceph`` command line.
176 - **Keys**: The monitor must have secret keys. A deployment tool such as
177 ``ceph-deploy`` usually does this for you, but you may
178 perform this step manually too. See `Monitor Keyrings`_ for details.
180 For additional details on bootstrapping, see `Bootstrapping a Monitor`_.
182 .. index:: Ceph Monitor; configuring monitors
187 To apply configuration settings to the entire cluster, enter the configuration
188 settings under ``[global]``. To apply configuration settings to all monitors in
189 your cluster, enter the configuration settings under ``[mon]``. To apply
190 configuration settings to specific monitors, specify the monitor instance
191 (e.g., ``[mon.a]``). By convention, monitor instance names use alpha notation.
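For example, the same style of settings can be scoped at each of these levels
(the values and host name below are illustrative only)::

    [global]
        mon osd nearfull ratio = .85

    [mon]
        mon data = /var/lib/ceph/mon/$cluster-$id

    [mon.a]
        host = hostname1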
206 Minimum Configuration
207 ---------------------
209 The bare minimum monitor settings for a Ceph monitor via the Ceph configuration
210 file include a hostname and a monitor address for each monitor. You can configure
211 these under ``[mon]`` or under the entry for a specific monitor.
For example::

    [mon]
        mon host = hostname1,hostname2,hostname3
        mon addr = 10.0.0.10:6789,10.0.0.11:6789,10.0.0.12:6789

Or, for a specific monitor::

    [mon.a]
        host = hostname1
        mon addr = 10.0.0.10:6789
226 See the `Network Configuration Reference`_ for details.
228 .. note:: This minimum configuration for monitors assumes that a deployment
229 tool generates the ``fsid`` and the ``mon.`` key for you.
231 Once you deploy a Ceph cluster, you **SHOULD NOT** change the IP address of
232 the monitors. However, if you decide to change the monitor's IP address, you
must follow a specific procedure. See `Changing a Monitor's IP Address`_ for
details.

236 Monitors can also be found by clients using DNS SRV records. See `Monitor lookup through DNS`_ for details.
241 Each Ceph Storage Cluster has a unique identifier (``fsid``). If specified, it
242 usually appears under the ``[global]`` section of the configuration file.
243 Deployment tools usually generate the ``fsid`` and store it in the monitor map,
244 so the value may not appear in a configuration file. The ``fsid`` makes it
245 possible to run daemons for multiple clusters on the same hardware.
``fsid``

:Description: The cluster ID. One per cluster.
:Default: N/A. May be generated by a deployment tool if not specified.

.. note:: Do not set this value if you use a deployment tool that does it for
          you.
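If you do bootstrap manually, the ``fsid`` goes under ``[global]``; the UUID
below is only a placeholder::

    [global]
        fsid = a7f64266-0894-4f1e-a635-d0aeaca0e993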
258 .. index:: Ceph Monitor; initial members
263 We recommend running a production Ceph Storage Cluster with at least three Ceph
264 Monitors to ensure high availability. When you run multiple monitors, you may
265 specify the initial monitors that must be members of the cluster in order to
establish a quorum. This may reduce the time it takes for your cluster to come
online. For example::

    [mon]
        mon initial members = a,b,c
275 ``mon initial members``
277 :Description: The IDs of initial monitors in a cluster during startup. If
278 specified, Ceph requires an odd number of monitors to form an
279 initial quorum (e.g., 3).
284 .. note:: A *majority* of monitors in your cluster must be able to reach
285 each other in order to establish a quorum. You can decrease the initial
286 number of monitors to establish a quorum with this setting.
288 .. index:: Ceph Monitor; data path
293 Ceph provides a default path where Ceph Monitors store data. For optimal
294 performance in a production Ceph Storage Cluster, we recommend running Ceph
Monitors on separate hosts and drives from Ceph OSD Daemons. Because leveldb
uses ``mmap()`` for writing the data, Ceph Monitors flush their data from
memory to disk very often, which can interfere with Ceph OSD Daemon workloads
if the data store is co-located with the OSD Daemons.
300 In Ceph versions 0.58 and earlier, Ceph Monitors store their data in files. This
301 approach allows users to inspect monitor data with common tools like ``ls``
302 and ``cat``. However, it doesn't provide strong consistency.
304 In Ceph versions 0.59 and later, Ceph Monitors store their data as key/value
305 pairs. Ceph Monitors require `ACID`_ transactions. Using a data store prevents
recovering Ceph Monitors from running corrupted versions through Paxos, and it
enables multiple modification operations in one single atomic batch, among
other advantages.
310 Generally, we do not recommend changing the default data location. If you modify
311 the default location, we recommend that you make it uniform across Ceph Monitors
312 by setting it in the ``[mon]`` section of the configuration file.
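For example, to make the location explicit and uniform across monitors (the
path shown is simply the default, for illustration)::

    [mon]
        mon data = /var/lib/ceph/mon/$cluster-$id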
``mon data``

:Description: The monitor's data location.
:Default: ``/var/lib/ceph/mon/$cluster-$id``
322 ``mon data size warn``
324 :Description: Issue a ``HEALTH_WARN`` in cluster log when the monitor's data
325 store goes over 15GB.
:Default: ``15*1024*1024*1024``
330 ``mon data avail warn``
:Description: Issue a ``HEALTH_WARN`` in cluster log when the available disk
              space of the monitor's data store is lower than or equal to this
              percentage.
339 ``mon data avail crit``
:Description: Issue a ``HEALTH_ERR`` in cluster log when the available disk
              space of the monitor's data store is lower than or equal to this
              percentage.
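As a sketch, these data-store warning thresholds can be tuned together under
``[mon]`` (the first value is 20GB expressed in bytes; all values are purely
illustrative)::

    [mon]
        mon data size warn = 21474836480
        mon data avail warn = 30
        mon data avail crit = 5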
348 ``mon warn on cache pools without hit sets``
:Description: Issue a ``HEALTH_WARN`` in cluster log if a cache pool does not
              have the ``hit_set_type`` value set.
              See `hit set type <../operations/pools#hit-set-type>`_ for more
              details.
358 ``mon warn on crush straw calc version zero``
:Description: Issue a ``HEALTH_WARN`` in cluster log if the CRUSH
              ``straw_calc_version`` is zero. See
              `CRUSH map tunables <../operations/crush-map#tunables>`_ for
              details.
368 ``mon warn on legacy crush tunables``
370 :Description: Issue a ``HEALTH_WARN`` in cluster log if
371 CRUSH tunables are too old (older than ``mon_min_crush_required_version``)
376 ``mon crush min required version``
:Description: The minimum tunable profile version required by the cluster. See
              `CRUSH map tunables <../operations/crush-map#tunables>`_ for
              details.

:Default: ``firefly``
386 ``mon warn on osd down out interval zero``
:Description: Issue a ``HEALTH_WARN`` in cluster log if
              ``mon osd down out interval`` is zero. Having this option set to
              zero on the leader acts much like the ``noout`` flag. It's hard
              to figure out what's going wrong with clusters that do not have
              the ``noout`` flag set but act like it just the same, so we
              report a warning in this case.
398 ``mon cache target full warn ratio``
:Description: The position between a pool's ``cache_target_full`` and
              ``target_max_object`` ratios at which we start warning.
406 ``mon health data update interval``
:Description: How often (in seconds) a monitor in the quorum shares its health
              status with its peers. (A negative number disables it.)
414 ``mon health to clog``
416 :Description: Enable sending health summary to cluster log periodically.
421 ``mon health to clog tick interval``
:Description: How often (in seconds) the monitor sends a health summary to the
              cluster log (a non-positive number disables it). If the current
              health summary is empty or identical to the last one, the monitor
              will not send it to the cluster log.
431 ``mon health to clog interval``
:Description: How often (in seconds) the monitor sends a health summary to the
              cluster log (a non-positive number disables it). The monitor will
              always send the summary to the cluster log whether or not the
              summary changes.
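A sketch of how the health-to-clog behaviour might be configured together
(values are illustrative)::

    [mon]
        mon health to clog = true
        mon health to clog interval = 3600
        mon health to clog tick interval = 60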
442 .. index:: Ceph Storage Cluster; capacity planning, Ceph Monitor; capacity planning
447 When a Ceph Storage Cluster gets close to its maximum capacity (i.e., ``mon osd
448 full ratio``), Ceph prevents you from writing to or reading from Ceph OSD
449 Daemons as a safety measure to prevent data loss. Therefore, letting a
450 production Ceph Storage Cluster approach its full ratio is not a good practice,
because it sacrifices high availability. The default full ratio is ``.95``, or
95% of capacity. This is a very aggressive setting for a test cluster with a
small number of OSDs.
.. tip:: When monitoring your cluster, be alert to warnings related to the
   ``nearfull`` ratio. Reaching it means that the failure of one or more OSDs
   could result in a temporary service disruption. Consider adding more OSDs
   to increase storage capacity.
460 A common scenario for test clusters involves a system administrator removing a
461 Ceph OSD Daemon from the Ceph Storage Cluster to watch the cluster rebalance;
462 then, removing another Ceph OSD Daemon, and so on until the Ceph Storage Cluster
463 eventually reaches the full ratio and locks up. We recommend a bit of capacity
464 planning even with a test cluster. Planning enables you to gauge how much spare
465 capacity you will need in order to maintain high availability. Ideally, you want
466 to plan for a series of Ceph OSD Daemon failures where the cluster can recover
467 to an ``active + clean`` state without replacing those Ceph OSD Daemons
468 immediately. You can run a cluster in an ``active + degraded`` state, but this
469 is not ideal for normal operating conditions.
471 The following diagram depicts a simplistic Ceph Storage Cluster containing 33
472 Ceph Nodes with one Ceph OSD Daemon per host, each Ceph OSD Daemon reading from
473 and writing to a 3TB drive. So this exemplary Ceph Storage Cluster has a maximum
474 actual capacity of 99TB. With a ``mon osd full ratio`` of ``0.95``, if the Ceph
475 Storage Cluster falls to 5TB of remaining capacity, the cluster will not allow
476 Ceph Clients to read and write data. So the Ceph Storage Cluster's operating
477 capacity is 95TB, not 99TB.
.. ditaa:: +--------+  +--------+  +--------+  +--------+  +--------+  +--------+
           | Rack 1 |  | Rack 2 |  | Rack 3 |  | Rack 4 |  | Rack 5 |  | Rack 6 |
           | cCCC   |  | cF00   |  | cCCC   |  | cCCC   |  | cCCC   |  | cCCC   |
           +--------+  +--------+  +--------+  +--------+  +--------+  +--------+
           | OSD 1  |  | OSD 7  |  | OSD 13 |  | OSD 19 |  | OSD 25 |  | OSD 31 |
           +--------+  +--------+  +--------+  +--------+  +--------+  +--------+
           | OSD 2  |  | OSD 8  |  | OSD 14 |  | OSD 20 |  | OSD 26 |  | OSD 32 |
           +--------+  +--------+  +--------+  +--------+  +--------+  +--------+
           | OSD 3  |  | OSD 9  |  | OSD 15 |  | OSD 21 |  | OSD 27 |  | OSD 33 |
           +--------+  +--------+  +--------+  +--------+  +--------+  +--------+
           | OSD 4  |  | OSD 10 |  | OSD 16 |  | OSD 22 |  | OSD 28 |  | Spare  |
           +--------+  +--------+  +--------+  +--------+  +--------+  +--------+
           | OSD 5  |  | OSD 11 |  | OSD 17 |  | OSD 23 |  | OSD 29 |  | Spare  |
           +--------+  +--------+  +--------+  +--------+  +--------+  +--------+
           | OSD 6  |  | OSD 12 |  | OSD 18 |  | OSD 24 |  | OSD 30 |  | Spare  |
           +--------+  +--------+  +--------+  +--------+  +--------+  +--------+
498 It is normal in such a cluster for one or two OSDs to fail. A less frequent but
499 reasonable scenario involves a rack's router or power supply failing, which
500 brings down multiple OSDs simultaneously (e.g., OSDs 7-12). In such a scenario,
501 you should still strive for a cluster that can remain operational and achieve an
502 ``active + clean`` state--even if that means adding a few hosts with additional
503 OSDs in short order. If your capacity utilization is too high, you may not lose
504 data, but you could still sacrifice data availability while resolving an outage
505 within a failure domain if capacity utilization of the cluster exceeds the full
506 ratio. For this reason, we recommend at least some rough capacity planning.
508 Identify two numbers for your cluster:
510 #. The number of OSDs.
#. The total capacity of the cluster.
513 If you divide the total capacity of your cluster by the number of OSDs in your
cluster, you will find the mean capacity of an OSD within your cluster.
Consider multiplying that number by the number of OSDs you expect will fail
simultaneously during normal operations (a relatively small number). Finally,
multiply the capacity of the cluster by the full ratio to arrive at a maximum
operating capacity; then, subtract the amount of data on the OSDs you expect
to fail to arrive at a reasonable full ratio. Repeat the foregoing
520 process with a higher number of OSD failures (e.g., a rack of OSDs) to arrive at
521 a reasonable number for a near full ratio.
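As a rough, illustrative walk-through of that arithmetic, using the
hypothetical 33-OSD cluster above and assuming, purely for illustration, that
two OSDs might fail at once::

    Total capacity:               33 OSDs * 3TB       = 99TB
    Mean OSD capacity:            99TB / 33 OSDs      = 3TB
    Maximum operating capacity:   99TB * 0.95         = ~94TB
    Expected simultaneous loss:   2 OSDs * 3TB        = 6TB
    Reasonable full target:       (94TB - 6TB) / 99TB = ~0.89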
These ratios can be set in the Ceph configuration file, for example::

    [global]
        mon osd full ratio = .80
        mon osd backfillfull ratio = .75
        mon osd nearfull ratio = .70
532 ``mon osd full ratio``
:Description: The percentage of disk space used before an OSD is
              considered ``full``.
541 ``mon osd backfillfull ratio``
543 :Description: The percentage of disk space used before an OSD is
544 considered too ``full`` to backfill.
550 ``mon osd nearfull ratio``
552 :Description: The percentage of disk space used before an OSD is
553 considered ``nearfull``.
559 .. tip:: If some OSDs are nearfull, but others have plenty of capacity, you
560 may have a problem with the CRUSH weight for the nearfull OSDs.
567 Ceph monitors know about the cluster by requiring reports from each OSD, and by
568 receiving reports from OSDs about the status of their neighboring OSDs. Ceph
569 provides reasonable default settings for monitor/OSD interaction; however, you
570 may modify them as needed. See `Monitor/OSD Interaction`_ for details.
573 .. index:: Ceph Monitor; leader, Ceph Monitor; provider, Ceph Monitor; requester, Ceph Monitor; synchronization
575 Monitor Store Synchronization
576 -----------------------------
578 When you run a production cluster with multiple monitors (recommended), each
579 monitor checks to see if a neighboring monitor has a more recent version of the
580 cluster map (e.g., a map in a neighboring monitor with one or more epoch numbers
581 higher than the most current epoch in the map of the instant monitor).
582 Periodically, one monitor in the cluster may fall behind the other monitors to
583 the point where it must leave the quorum, synchronize to retrieve the most
584 current information about the cluster, and then rejoin the quorum. For the
585 purposes of synchronization, monitors may assume one of three roles:
587 #. **Leader**: The `Leader` is the first monitor to achieve the most recent
588 Paxos version of the cluster map.
590 #. **Provider**: The `Provider` is a monitor that has the most recent version
591 of the cluster map, but wasn't the first to achieve the most recent version.
593 #. **Requester:** A `Requester` is a monitor that has fallen behind the leader
594 and must synchronize in order to retrieve the most recent information about
595 the cluster before it can rejoin the quorum.
597 These roles enable a leader to delegate synchronization duties to a provider,
598 which prevents synchronization requests from overloading the leader--improving
599 performance. In the following diagram, the requester has learned that it has
600 fallen behind the other monitors. The requester asks the leader to synchronize,
601 and the leader tells the requester to synchronize with a provider.
.. ditaa:: +-----------+          +---------+  +----------+
           | Requester |          |  Leader |  | Provider |
           +-----------+          +---------+  +----------+
                 |                     |            |
                 | Ask to Synchronize  |            |
                 |-------------------->|            |
                 |                     |            |
                 |<--------------------|            |
                 | Tell Requester to   |            |
                 | Sync with Provider  |            |
                 |                     |            |
                 |---------------------+----------->|
                 |<--------------------+------------|
                 | Send Chunk to Requester          |
                 | (repeat as necessary)            |
                 | Requester Acks Chunk to Provider |
                 |---------------------+----------->|
                 |                     |
                 | Sync Complete       |
                 | Notification        |
                 |-------------------->|
                 |                     |
                 |<--------------------|
                 |        Ack          |
                 |                     |
634 Synchronization always occurs when a new monitor joins the cluster. During
635 runtime operations, monitors may receive updates to the cluster map at different
636 times. This means the leader and provider roles may migrate from one monitor to
637 another. If this happens while synchronizing (e.g., a provider falls behind the
638 leader), the provider can terminate synchronization with a requester.
640 Once synchronization is complete, Ceph requires trimming across the cluster.
641 Trimming requires that the placement groups are ``active + clean``.
644 ``mon sync trim timeout``
651 ``mon sync heartbeat timeout``
658 ``mon sync heartbeat interval``
665 ``mon sync backoff timeout``
``mon sync timeout``

:Description: Number of seconds the monitor will wait for the next update
              message from its sync provider before it gives up and bootstraps
              again.
681 ``mon sync max retries``
688 ``mon sync max payload size``
690 :Description: The maximum size for a sync payload (in bytes).
691 :Type: 32-bit Integer
692 :Default: ``1045676``
695 ``paxos max join drift``
:Description: The maximum Paxos iterations before we must first sync the
              monitor data stores. When a monitor finds that its peer is too
              far ahead of it, it will first sync with data stores before
              moving on.
704 ``paxos stash full interval``
:Description: How often (in commits) to stash a full copy of the PaxosService
              state. Currently this setting only affects ``mds``, ``mon``,
              ``auth`` and ``mgr`` PaxosServices.
712 ``paxos propose interval``
:Description: Gather updates for this time interval before proposing
              a map update.
722 :Description: The minimum number of paxos states to keep around
729 :Description: The minimum amount of time to gather updates after a period of
737 :Description: Number of extra proposals tolerated before trimming
744 :Description: The maximum number of extra proposals to trim at a time
749 ``paxos service trim min``
:Description: The minimum number of versions to trigger a trim (0 disables it).
756 ``paxos service trim max``
:Description: The maximum number of versions to trim during a single proposal (0 disables it).
763 ``mon max log epochs``
:Description: The maximum number of log epochs to trim during a single proposal.
770 ``mon max pgmap epochs``
:Description: The maximum number of pgmap epochs to trim during a single proposal.
777 ``mon mds force trim to``
:Description: Force the monitor to trim mdsmaps to this point (0 disables it;
              dangerous, use with care).
785 ``mon osd force trim to``
:Description: Force the monitor to trim osdmaps to this point, even if there
              are PGs not clean at the specified epoch (0 disables it;
              dangerous, use with care).
793 ``mon osd cache size``
:Description: The size of the osdmap cache, so as not to rely on the
              underlying store's cache.
800 ``mon election timeout``
802 :Description: On election proposer, maximum waiting time for all ACKs in seconds.
``mon lease``

:Description: The length (in seconds) of the lease on the monitor's versions.
814 ``mon lease renew interval factor``
:Description: ``mon lease`` \* ``mon lease renew interval factor`` will be the
              interval for the Leader to renew the other monitors' leases. The
              factor should be less than ``1.0``.
823 ``mon lease ack timeout factor``
825 :Description: The Leader will wait ``mon lease`` \* ``mon lease ack timeout factor``
826 for the Providers to acknowledge the lease extension.
831 ``mon accept timeout factor``
833 :Description: The Leader will wait ``mon lease`` \* ``mon accept timeout factor``
834 for the Requester(s) to accept a Paxos update. It is also used
835 during the Paxos recovery phase for similar purposes.
840 ``mon min osdmap epochs``
842 :Description: Minimum number of OSD map epochs to keep at all times.
843 :Type: 32-bit Integer
847 ``mon max pgmap epochs``
849 :Description: Maximum number of PG map epochs the monitor should keep.
850 :Type: 32-bit Integer
854 ``mon max log epochs``
856 :Description: Maximum number of Log epochs the monitor should keep.
857 :Type: 32-bit Integer
862 .. index:: Ceph Monitor; clock
867 Ceph daemons pass critical messages to each other, which must be processed
868 before daemons reach a timeout threshold. If the clocks in Ceph monitors
869 are not synchronized, it can lead to a number of anomalies. For example:
871 - Daemons ignoring received messages (e.g., timestamps outdated)
872 - Timeouts triggered too soon/late when a message wasn't received in time.
874 See `Monitor Store Synchronization`_ for details.
877 .. tip:: You SHOULD install NTP on your Ceph monitor hosts to
878 ensure that the monitor cluster operates with synchronized clocks.
880 Clock drift may still be noticeable with NTP even though the discrepancy is not
881 yet harmful. Ceph's clock drift / clock skew warnings may get triggered even
882 though NTP maintains a reasonable level of synchronization. Increasing your
883 clock drift may be tolerable under such circumstances; however, a number of
884 factors such as workload, network latency, configuring overrides to default
885 timeouts and the `Monitor Store Synchronization`_ settings may influence
886 the level of acceptable clock drift without compromising Paxos guarantees.
Ceph provides the following tunable options to allow you to find
acceptable values.


``clock offset``

:Description: How much to offset the system clock. See ``Clock.cc`` for details.
901 ``mon tick interval``
903 :Description: A monitor's tick interval in seconds.
904 :Type: 32-bit Integer
908 ``mon clock drift allowed``
910 :Description: The clock drift in seconds allowed between monitors.
915 ``mon clock drift warn backoff``
917 :Description: Exponential backoff for clock drift warnings
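For example, a cluster whose NTP synchronization is stable but slightly loose
might tolerate a larger drift before warning (values are illustrative only,
not recommendations)::

    [mon]
        mon clock drift allowed = 0.1
        mon clock drift warn backoff = 10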
922 ``mon timecheck interval``
924 :Description: The time check interval (clock drift check) in seconds
931 ``mon timecheck skew interval``
:Description: The time check interval (clock drift check) in seconds when a
              clock skew is present, for the Leader.
942 ``mon client hunt interval``
944 :Description: The client will try a new monitor every ``N`` seconds until it
945 establishes a connection.
951 ``mon client ping interval``
953 :Description: The client will ping the monitor every ``N`` seconds.
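These are client-side options; as a sketch, they could be placed under
``[global]`` so that clients and daemons alike pick them up (values are
illustrative)::

    [global]
        mon client hunt interval = 3
        mon client ping interval = 10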
958 ``mon client max log entries per message``
:Description: The maximum number of log entries a monitor will generate
              per client message.
``mon client bytes``

:Description: The amount of client message data allowed in memory (in bytes).
970 :Type: 64-bit Integer Unsigned
971 :Default: ``100ul << 20``
Since version v0.94 there has been support for pool flags, which allow or
disallow changes to be made to pools.
Monitors can also disallow removal of pools if configured that way.
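For instance, a cautious cluster might both refuse pool deletion at the
monitor and mark new pools as non-deletable (a sketch, not a recommendation)::

    [mon]
        mon allow pool delete = false

    [global]
        osd pool default flag nodelete = true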
980 ``mon allow pool delete``
:Description: If the monitors should allow pools to be removed, regardless of what the pool flags say.
986 ``osd pool default flag hashpspool``
988 :Description: Set the hashpspool flag on new pools
992 ``osd pool default flag nodelete``
:Description: Set the nodelete flag on new pools, which prevents pools that have this flag set from being removed.
998 ``osd pool default flag nopgchange``
1000 :Description: Set the nopgchange flag on new pools. Does not allow the number of PGs to be changed for a pool.
1004 ``osd pool default flag nosizechange``
:Description: Set the nosizechange flag on new pools. Does not allow the size of a pool to be changed.
1010 For more information about the pool flags see `Pool values`_.
``mon max osd``

:Description: The maximum number of OSDs allowed in the cluster.
1019 :Type: 32-bit Integer
1022 ``mon globalid prealloc``
1024 :Description: The number of global IDs to pre-allocate for clients and daemons in the cluster.
1025 :Type: 32-bit Integer
1028 ``mon subscribe interval``
1030 :Description: The refresh interval (in seconds) for subscriptions. The
1031 subscription mechanism enables obtaining the cluster maps
1032 and log information.
1038 ``mon stat smooth intervals``
1040 :Description: Ceph will smooth statistics over the last ``N`` PG maps.
1045 ``mon probe timeout``
1047 :Description: Number of seconds the monitor will wait to find peers before bootstrapping.
1052 ``mon daemon bytes``
1054 :Description: The message memory cap for metadata server and OSD messages (in bytes).
1055 :Type: 64-bit Integer Unsigned
1056 :Default: ``400ul << 20``
1059 ``mon max log entries per event``
1061 :Description: The maximum number of log entries per event.
1066 ``mon osd prime pg temp``
:Description: Enables or disables priming the PGMap with the previous OSDs
              when an ``out`` OSD comes back into the cluster. With the
              ``true`` setting, clients will continue to use the previous OSDs
              until the PGs on the newly ``in`` OSDs have peered.
1076 ``mon osd prime pg temp max time``
1078 :Description: How much time in seconds the monitor should spend trying to prime the
1079 PGMap when an out OSD comes back into the cluster.
1084 ``mon osd prime pg temp max time estimate``
:Description: Maximum estimate of time spent on each PG before we prime all
              PGs in parallel.
1092 ``mon osd allow primary affinity``
:Description: Allow ``primary_affinity`` to be set in the osdmap.
1099 ``mon osd pool ec fast read``
:Description: Whether to turn on fast read for the pool or not. It will be
              used as the default setting of newly created erasure pools if
              ``fast_read`` is not specified at create time.
1108 ``mon mds skip sanity``
:Description: Skip safety assertions on FSMap (in case of bugs where we want
              to continue anyway). The monitor terminates if the FSMap sanity
              check fails, but we can disable it by enabling this option.
1117 ``mon max mdsmap epochs``
:Description: The maximum number of mdsmap epochs to trim during a single proposal.
1124 ``mon config key max entry size``
:Description: The maximum size of a config-key entry (in bytes).
1131 ``mon scrub interval``
:Description: How often (in seconds) the monitor scrubs its store by comparing
              the stored checksums with the computed ones for all stored
              keys.
1140 ``mon scrub max keys``
1142 :Description: The maximum number of keys to scrub each time.
1147 ``mon compact on start``
:Description: Compact the database used as the Ceph Monitor store on
              ``ceph-mon`` start. A manual compaction helps to shrink the
              monitor database and improve its performance if the regular
              compaction fails to work.
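As a sketch, compaction on startup could be enabled explicitly (illustrative)::

    [mon]
        mon compact on start = true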
1157 ``mon compact on bootstrap``
:Description: Compact the database used as the Ceph Monitor store on
              bootstrap. Monitors start probing each other to create a quorum
              after bootstrap. If a monitor times out before joining the
              quorum, it will start over and bootstrap itself again.
1167 ``mon compact on trim``
1169 :Description: Compact a certain prefix (including paxos) when we trim its old states.
1176 :Description: Number of threads for performing CPU intensive work on monitor.
1181 ``mon osd mapping pgs per chunk``
1183 :Description: We calculate the mapping from placement group to OSDs in chunks.
1184 This option specifies the number of placement groups per chunk.
1189 ``mon osd max split count``
:Description: Largest number of PGs per "involved" OSD to let split create.
              When we increase the ``pg_num`` of a pool, the placement groups
              will be split on all OSDs serving that pool. We want to avoid
              extreme multipliers on PG splits.
1199 ``mon session timeout``
:Description: The monitor will terminate inactive sessions that stay idle
              over this time.
1208 .. _Paxos: http://en.wikipedia.org/wiki/Paxos_(computer_science)
1209 .. _Monitor Keyrings: ../../../dev/mon-bootstrap#secret-keys
1210 .. _Ceph configuration file: ../ceph-conf/#monitors
1211 .. _Network Configuration Reference: ../network-config-ref
1212 .. _Monitor lookup through DNS: ../mon-lookup-dns
1213 .. _ACID: http://en.wikipedia.org/wiki/ACID
1214 .. _Adding/Removing a Monitor: ../../operations/add-or-rm-mons
1215 .. _Add/Remove a Monitor (ceph-deploy): ../../deployment/ceph-deploy-mon
1216 .. _Monitoring a Cluster: ../../operations/monitoring
1217 .. _Monitoring OSDs and PGs: ../../operations/monitoring-osd-pg
1218 .. _Bootstrapping a Monitor: ../../../dev/mon-bootstrap
1219 .. _Changing a Monitor's IP Address: ../../operations/add-or-rm-mons#changing-a-monitor-s-ip-address
1220 .. _Monitor/OSD Interaction: ../mon-osd-interaction
1221 .. _Scalability and High Availability: ../../../architecture#scalability-and-high-availability
1222 .. _Pool values: ../../operations/pools/#set-pool-values