==================================
 ceph -- ceph administration tool
==================================

Synopsis
========
| **ceph** **auth** [ *add* \| *caps* \| *del* \| *export* \| *get* \| *get-key* \| *get-or-create* \| *get-or-create-key* \| *import* \| *list* \| *print-key* \| *print_key* ] ...

| **ceph** **compact**

| **ceph** **config-key** [ *del* \| *exists* \| *get* \| *list* \| *dump* \| *set* ] ...

| **ceph** **daemon** *<name>* \| *<path>* *<command>* ...

| **ceph** **daemonperf** *<name>* \| *<path>* [ *interval* [ *count* ] ]

| **ceph** **df** *{detail}*

| **ceph** **fs** [ *ls* \| *new* \| *reset* \| *rm* ] ...

| **ceph** **health** *{detail}*

| **ceph** **heap** [ *dump* \| *start_profiler* \| *stop_profiler* \| *release* \| *stats* ] ...

| **ceph** **injectargs** *<injectedargs>* [ *<injectedargs>*... ]

| **ceph** **log** *<logtext>* [ *<logtext>*... ]

| **ceph** **mds** [ *compat* \| *deactivate* \| *fail* \| *rm* \| *rmfailed* \| *set_state* \| *stat* \| *tell* ] ...

| **ceph** **mon** [ *add* \| *dump* \| *getmap* \| *remove* \| *stat* ] ...

| **ceph** **mon_status**

| **ceph** **osd** [ *blacklist* \| *blocked-by* \| *create* \| *new* \| *deep-scrub* \| *df* \| *down* \| *dump* \| *erasure-code-profile* \| *find* \| *getcrushmap* \| *getmap* \| *getmaxosd* \| *in* \| *lspools* \| *map* \| *metadata* \| *ok-to-stop* \| *out* \| *pause* \| *perf* \| *pg-temp* \| *force-create-pg* \| *primary-affinity* \| *primary-temp* \| *repair* \| *reweight* \| *reweight-by-pg* \| *rm* \| *destroy* \| *purge* \| *safe-to-destroy* \| *scrub* \| *set* \| *setcrushmap* \| *setmaxosd* \| *stat* \| *tree* \| *unpause* \| *unset* ] ...

| **ceph** **osd** **crush** [ *add* \| *add-bucket* \| *create-or-move* \| *dump* \| *get-tunable* \| *link* \| *move* \| *remove* \| *rename-bucket* \| *reweight* \| *reweight-all* \| *reweight-subtree* \| *rm* \| *rule* \| *set* \| *set-tunable* \| *show-tunables* \| *tunables* \| *unlink* ] ...

| **ceph** **osd** **pool** [ *create* \| *delete* \| *get* \| *get-quota* \| *ls* \| *mksnap* \| *rename* \| *rmsnap* \| *set* \| *set-quota* \| *stats* ] ...

| **ceph** **osd** **tier** [ *add* \| *add-cache* \| *cache-mode* \| *remove* \| *remove-overlay* \| *set-overlay* ] ...

| **ceph** **pg** [ *debug* \| *deep-scrub* \| *dump* \| *dump_json* \| *dump_pools_json* \| *dump_stuck* \| *force_create_pg* \| *getmap* \| *ls* \| *ls-by-osd* \| *ls-by-pool* \| *ls-by-primary* \| *map* \| *repair* \| *scrub* \| *set_full_ratio* \| *set_nearfull_ratio* \| *stat* ] ...

| **ceph** **quorum** [ *enter* \| *exit* ]

| **ceph** **quorum_status**

| **ceph** **report** { *<tags>* [ *<tags>...* ] }

| **ceph** **sync** **force** {--yes-i-really-mean-it} {--i-know-what-i-am-doing}

| **ceph** **tell** *<name (type.id)> <args> [<args>...]*

| **ceph** **version**
Description
===========

:program:`ceph` is a control utility which is used for manual deployment and maintenance
of a Ceph cluster. It provides a diverse set of commands that allow deployment of
monitors, OSDs, and placement groups, MDS management, and overall maintenance and
administration of the cluster.
Commands
========

auth
----

Manage authentication keys. It is used for adding, removing, exporting
or updating authentication keys for a particular entity such as a monitor or
OSD. It uses some additional subcommands.

Subcommand ``add`` adds authentication info for a particular entity from an input
file, or generates a random key if no input is given, along with any caps
specified in the command.

Usage::

    ceph auth add <entity> {<caps> [<caps>...]}
Subcommand ``caps`` updates caps for **name** from caps specified in the command.

Usage::

    ceph auth caps <entity> <caps> [<caps>...]

Subcommand ``del`` deletes all caps for ``name``.

Usage::

    ceph auth del <entity>

Subcommand ``export`` writes keyring for requested entity, or master keyring if
none given.

Usage::

    ceph auth export {<entity>}

Subcommand ``get`` writes keyring file with requested key.

Usage::

    ceph auth get <entity>

Subcommand ``get-key`` displays requested key.

Usage::

    ceph auth get-key <entity>

Subcommand ``get-or-create`` adds authentication info for a particular entity
from an input file, or generates a random key if no input is given, along with
any caps specified in the command.

Usage::

    ceph auth get-or-create <entity> {<caps> [<caps>...]}
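For example, to create a key for a new client with read access to the monitors
and read/write access to a single pool, writing the resulting keyring to a file
(the entity and pool names here are hypothetical)::

    ceph auth get-or-create client.backup mon 'allow r' osd 'allow rw pool=backups' -o /etc/ceph/ceph.client.backup.keyring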
Subcommand ``get-or-create-key`` gets or adds key for ``name`` from system/caps
pairs specified in the command. If key already exists, any given caps must match
the existing caps for that key.

Usage::

    ceph auth get-or-create-key <entity> {<caps> [<caps>...]}

Subcommand ``import`` reads keyring from input file.

Usage::

    ceph auth import

Subcommand ``ls`` lists authentication state.

Usage::

    ceph auth ls

Subcommand ``print-key`` displays requested key.

Usage::

    ceph auth print-key <entity>

Subcommand ``print_key`` displays requested key.

Usage::

    ceph auth print_key <entity>

compact
-------

Causes compaction of the monitor's leveldb storage.

Usage::

    ceph compact
config-key
----------

Manage configuration keys. It uses some additional subcommands.

Subcommand ``del`` deletes configuration key.

Usage::

    ceph config-key del <key>

Subcommand ``exists`` checks for a configuration key's existence.

Usage::

    ceph config-key exists <key>

Subcommand ``get`` gets the configuration key.

Usage::

    ceph config-key get <key>

Subcommand ``list`` lists configuration keys.

Usage::

    ceph config-key list

Subcommand ``dump`` dumps configuration keys and values.

Usage::

    ceph config-key dump

Subcommand ``set`` sets a configuration key to a value.

Usage::

    ceph config-key set <key> {<val>}
daemon
------

Submit admin-socket commands.

Usage::

    ceph daemon {daemon_name|socket_path} {command} ...

Example::

    ceph daemon osd.0 help
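A daemon may equally be addressed by the path of its admin socket; the path
below assumes the default cluster name (``ceph``) and socket location::

    ceph daemon /var/run/ceph/ceph-osd.0.asok config show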
daemonperf
----------

Watch performance counters from a Ceph daemon.

Usage::

    ceph daemonperf {daemon_name|socket_path} [{interval} [{count}]]
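For example, to sample counters from osd.0 every two seconds, ten times::

    ceph daemonperf osd.0 2 10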
df
--

Show cluster's free space status.

Usage::

    ceph df {detail}
features
--------

Show the releases and features of all daemons and clients connected to the
cluster, along with a count of each grouped by the corresponding
features/releases. Each release of Ceph supports a different set of features,
expressed by the features bitmask. New cluster features require that clients
support the feature, or else they are not allowed to connect to the cluster.
As new features or capabilities are enabled after an upgrade, older clients
are prevented from connecting.

Usage::

    ceph features
fs
--

Manage cephfs filesystems. It uses some additional subcommands.

Subcommand ``ls`` lists filesystems.

Usage::

    ceph fs ls

Subcommand ``new`` makes a new filesystem using named pools <metadata> and <data>.

Usage::

    ceph fs new <fs_name> <metadata> <data>
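A minimal sketch of creating a filesystem from scratch, assuming two freshly
created pools (the pool names and PG count are hypothetical)::

    ceph osd pool create cephfs_metadata 64
    ceph osd pool create cephfs_data 64
    ceph fs new cephfs cephfs_metadata cephfs_data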
Subcommand ``reset`` is used for disaster recovery only: reset to a single-MDS map.

Usage::

    ceph fs reset <fs_name> {--yes-i-really-mean-it}

Subcommand ``rm`` disables the named filesystem.

Usage::

    ceph fs rm <fs_name> {--yes-i-really-mean-it}
fsid
----

Show cluster's FSID/UUID.

Usage::

    ceph fsid
health
------

Show cluster's health.

Usage::

    ceph health {detail}
heap
----

Show heap usage info (available only if compiled with tcmalloc).

Usage::

    ceph heap dump|start_profiler|stop_profiler|release|stats
injectargs
----------

Inject configuration arguments into monitor.

Usage::

    ceph injectargs <injected_args> [<injected_args>...]
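For example, to raise the monitor debug level at runtime (the level shown is
illustrative)::

    ceph injectargs '--debug-mon 10'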
log
---

Log supplied text to the monitor log.

Usage::

    ceph log <logtext> [<logtext>...]
mds
---

Manage metadata server configuration and administration. It uses some
additional subcommands.

Subcommand ``compat`` manages compatible features. It uses some additional
subcommands.

Subcommand ``rm_compat`` removes compatible feature.

Usage::

    ceph mds compat rm_compat <int[0-]>

Subcommand ``rm_incompat`` removes incompatible feature.

Usage::

    ceph mds compat rm_incompat <int[0-]>

Subcommand ``show`` shows mds compatibility settings.

Usage::

    ceph mds compat show

Subcommand ``deactivate`` stops mds.

Usage::

    ceph mds deactivate <who>

Subcommand ``fail`` forces mds to status fail.

Usage::

    ceph mds fail <who>

Subcommand ``rm`` removes inactive mds.

Usage::

    ceph mds rm <int[0-]> <name (type.id)>

Subcommand ``rmfailed`` removes failed mds.

Usage::

    ceph mds rmfailed <int[0-]>

Subcommand ``set_state`` sets mds state of <gid> to <numeric-state>.

Usage::

    ceph mds set_state <int[0-]> <int[0-20]>

Subcommand ``stat`` shows MDS status.

Usage::

    ceph mds stat

Subcommand ``tell`` sends command to particular mds.

Usage::

    ceph mds tell <who> <args> [<args>...]
mon
---

Manage monitor configuration and administration. It uses some additional
subcommands.

Subcommand ``add`` adds new monitor named <name> at <addr>.

Usage::

    ceph mon add <name> <IPaddr[:port]>
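For example (the monitor name and address are hypothetical)::

    ceph mon add c 192.168.0.12:6789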
Subcommand ``dump`` dumps formatted monmap (optionally from epoch).

Usage::

    ceph mon dump {<int[0-]>}

Subcommand ``getmap`` gets monmap.

Usage::

    ceph mon getmap {<int[0-]>}

Subcommand ``remove`` removes monitor named <name>.

Usage::

    ceph mon remove <name>

Subcommand ``stat`` summarizes monitor status.

Usage::

    ceph mon stat
mon_status
----------

Reports status of monitors.

Usage::

    ceph mon_status
mgr
---

Ceph manager daemon configuration and management.

Subcommand ``dump`` dumps the latest MgrMap, which describes the active
and standby manager daemons.

Usage::

    ceph mgr dump

Subcommand ``fail`` will mark a manager daemon as failed, removing it
from the manager map. If it is the active manager daemon a standby
will take its place.

Usage::

    ceph mgr fail <name>

Subcommand ``module ls`` will list currently enabled manager modules (plugins).

Usage::

    ceph mgr module ls

Subcommand ``module enable`` will enable a manager module. Available modules are
included in MgrMap and visible via ``mgr dump``.

Usage::

    ceph mgr module enable <module>

Subcommand ``module disable`` will disable an active manager module.

Usage::

    ceph mgr module disable <module>

Subcommand ``metadata`` will report metadata about all manager daemons or, if the
name is specified, a single manager daemon.

Usage::

    ceph mgr metadata [name]

Subcommand ``versions`` will report a count of running daemon versions.

Usage::

    ceph mgr versions

Subcommand ``count-metadata`` will report a count of any daemon metadata field.

Usage::

    ceph mgr count-metadata <field>
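For example, to count manager daemons by the ``ceph_version`` metadata field::

    ceph mgr count-metadata ceph_version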
osd
---

Manage OSD configuration and administration. It uses some additional
subcommands.

Subcommand ``blacklist`` manages blacklisted clients. It uses some additional
subcommands.

Subcommand ``add`` adds <addr> to blacklist (optionally until <expire> seconds
from now).

Usage::

    ceph osd blacklist add <EntityAddr> {<float[0.0-]>}
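For example, to blacklist a client address for ten minutes (the address and
nonce are hypothetical)::

    ceph osd blacklist add 192.168.0.100:0/3710542 600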
Subcommand ``ls`` shows blacklisted clients.

Usage::

    ceph osd blacklist ls

Subcommand ``rm`` removes <addr> from blacklist.

Usage::

    ceph osd blacklist rm <EntityAddr>

Subcommand ``blocked-by`` prints a histogram of which OSDs are blocking their peers.

Usage::

    ceph osd blocked-by

Subcommand ``create`` creates new osd (with optional UUID and ID).

This command is DEPRECATED as of the Luminous release, and will be removed in
a subsequent release. Subcommand ``new`` should be used instead.

Usage::

    ceph osd create {<uuid>} {<id>}
Subcommand ``new`` can be used to create a new OSD or to recreate a previously
destroyed OSD with a specific *id*. The new OSD will have the specified *uuid*,
and the command expects a JSON file containing the base64 cephx key for auth
entity *client.osd.<id>*, as well as an optional base64 cephx key for dm-crypt
lockbox access and a dm-crypt key. Specifying a dm-crypt key requires specifying
the accompanying lockbox cephx key.

Usage::

    ceph osd new {<uuid>} {<id>} -i {<secrets.json>}

The secrets JSON file is optional but if provided, is expected to maintain
a form of the following format::

    {
        "cephx_secret": "AQBWtwhZdBO5ExAAIDyjK2Bh16ZXylmzgYYEjg=="
    }

Or::

    {
        "cephx_secret": "AQBWtwhZdBO5ExAAIDyjK2Bh16ZXylmzgYYEjg==",
        "cephx_lockbox_secret": "AQDNCglZuaeVCRAAYr76PzR1Anh7A0jswkODIQ==",
        "dmcrypt_key": "<dm-crypt key>"
    }
Subcommand ``crush`` is used for CRUSH management. It uses some additional
subcommands.

Subcommand ``add`` adds or updates crushmap position and weight for <name> with
<weight> and location <args>.

Usage::

    ceph osd crush add <osdname (id|osd.id)> <float[0.0-]> <args> [<args>...]
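For example, to place osd.5 with weight 1.0 under a host bucket (the bucket
names are hypothetical)::

    ceph osd crush add osd.5 1.0 root=default host=node3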
Subcommand ``add-bucket`` adds no-parent (probably root) crush bucket <name> of
type <type>.

Usage::

    ceph osd crush add-bucket <name> <type>

Subcommand ``create-or-move`` creates entry or moves existing entry for <name>
<weight> at/to location <args>.

Usage::

    ceph osd crush create-or-move <osdname (id|osd.id)> <float[0.0-]> <args>
    [<args>...]

Subcommand ``dump`` dumps crush map.

Usage::

    ceph osd crush dump

Subcommand ``get-tunable`` gets the crush tunable ``straw_calc_version``.

Usage::

    ceph osd crush get-tunable straw_calc_version

Subcommand ``link`` links existing entry for <name> under location <args>.

Usage::

    ceph osd crush link <name> <args> [<args>...]

Subcommand ``move`` moves existing entry for <name> to location <args>.

Usage::

    ceph osd crush move <name> <args> [<args>...]

Subcommand ``remove`` removes <name> from crush map (everywhere, or just at
<ancestor>).

Usage::

    ceph osd crush remove <name> {<ancestor>}

Subcommand ``rename-bucket`` renames bucket <srcname> to <dstname>.

Usage::

    ceph osd crush rename-bucket <srcname> <dstname>

Subcommand ``reweight`` changes <name>'s weight to <weight> in crush map.

Usage::

    ceph osd crush reweight <name> <float[0.0-]>

Subcommand ``reweight-all`` recalculates the weights for the tree to
ensure they sum correctly.

Usage::

    ceph osd crush reweight-all

Subcommand ``reweight-subtree`` changes all leaf items beneath <name>
to <weight> in crush map.

Usage::

    ceph osd crush reweight-subtree <name> <weight>

Subcommand ``rm`` removes <name> from crush map (everywhere, or just at
<ancestor>).

Usage::

    ceph osd crush rm <name> {<ancestor>}

Subcommand ``rule`` is used for creating crush rules. It uses some additional
subcommands.

Subcommand ``create-erasure`` creates crush rule <name> for erasure coded pool
created with <profile> (default default).

Usage::

    ceph osd crush rule create-erasure <name> {<profile>}
Subcommand ``create-simple`` creates crush rule <name> to start from <root>,
replicate across buckets of type <type>, using a choose mode of <firstn|indep>
(default firstn; indep best for erasure pools).

Usage::

    ceph osd crush rule create-simple <name> <root> <type> {firstn|indep}
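For example, a rule that replicates across hosts under the default root (the
rule name is hypothetical)::

    ceph osd crush rule create-simple replicate_hosts default host firstn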
Subcommand ``dump`` dumps crush rule <name> (default all).

Usage::

    ceph osd crush rule dump {<name>}

Subcommand ``ls`` lists crush rules.

Usage::

    ceph osd crush rule ls

Subcommand ``rm`` removes crush rule <name>.

Usage::

    ceph osd crush rule rm <name>

Subcommand ``set`` used alone, sets crush map from input file.

Usage::

    ceph osd crush set

Subcommand ``set`` with osdname/osd.id updates crushmap position and weight
for <name> to <weight> with location <args>.

Usage::

    ceph osd crush set <osdname (id|osd.id)> <float[0.0-]> <args> [<args>...]

Subcommand ``set-tunable`` sets crush tunable <tunable> to <value>. The only
tunable that can be set is ``straw_calc_version``.

Usage::

    ceph osd crush set-tunable straw_calc_version <value>

Subcommand ``show-tunables`` shows current crush tunables.

Usage::

    ceph osd crush show-tunables

Subcommand ``tree`` shows the crush buckets and items in a tree view.

Usage::

    ceph osd crush tree

Subcommand ``tunables`` sets crush tunables values to <profile>.

Usage::

    ceph osd crush tunables legacy|argonaut|bobtail|firefly|hammer|optimal|default

Subcommand ``unlink`` unlinks <name> from crush map (everywhere, or just at
<ancestor>).

Usage::

    ceph osd crush unlink <name> {<ancestor>}

Subcommand ``df`` shows OSD utilization.

Usage::

    ceph osd df {plain|tree}

Subcommand ``deep-scrub`` initiates deep scrub on specified osd.

Usage::

    ceph osd deep-scrub <who>

Subcommand ``down`` sets osd(s) <id> [<id>...] down.

Usage::

    ceph osd down <ids> [<ids>...]

Subcommand ``dump`` prints summary of OSD map.

Usage::

    ceph osd dump {<int[0-]>}

Subcommand ``erasure-code-profile`` is used for managing the erasure code
profiles. It uses some additional subcommands.

Subcommand ``get`` gets erasure code profile <name>.

Usage::

    ceph osd erasure-code-profile get <name>

Subcommand ``ls`` lists all erasure code profiles.

Usage::

    ceph osd erasure-code-profile ls

Subcommand ``rm`` removes erasure code profile <name>.

Usage::

    ceph osd erasure-code-profile rm <name>

Subcommand ``set`` creates erasure code profile <name> with [<key[=value]> ...]
pairs. Add a ``--force`` at the end to override an existing profile (IT IS RISKY).

Usage::

    ceph osd erasure-code-profile set <name> {<profile> [<profile>...]}
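For example, a 4+2 profile with a host failure domain (the profile name is
hypothetical, and the parameter names assume a Luminous-era release)::

    ceph osd erasure-code-profile set ec42 k=4 m=2 crush-failure-domain=host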
Subcommand ``find`` finds osd <id> in the CRUSH map and shows its location.

Usage::

    ceph osd find <int[0-]>

Subcommand ``getcrushmap`` gets CRUSH map.

Usage::

    ceph osd getcrushmap {<int[0-]>}

Subcommand ``getmap`` gets OSD map.

Usage::

    ceph osd getmap {<int[0-]>}

Subcommand ``getmaxosd`` shows largest OSD id.

Usage::

    ceph osd getmaxosd

Subcommand ``in`` sets osd(s) <id> [<id>...] in.

Usage::

    ceph osd in <ids> [<ids>...]

Subcommand ``lost`` marks osd as permanently lost. THIS DESTROYS DATA IF NO
MORE REPLICAS EXIST, BE CAREFUL.

Usage::

    ceph osd lost <int[0-]> {--yes-i-really-mean-it}

Subcommand ``ls`` shows all OSD ids.

Usage::

    ceph osd ls {<int[0-]>}

Subcommand ``lspools`` lists pools.

Usage::

    ceph osd lspools {<int>}

Subcommand ``map`` finds pg for <object> in <pool>.

Usage::

    ceph osd map <poolname> <objectname>
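For example (the pool and object names are hypothetical)::

    ceph osd map rbd myobject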
Subcommand ``metadata`` fetches metadata for osd <id>.

Usage::

    ceph osd metadata {int[0-]} (default all)

Subcommand ``out`` sets osd(s) <id> [<id>...] out.

Usage::

    ceph osd out <ids> [<ids>...]

Subcommand ``ok-to-stop`` checks whether the list of OSD(s) can be
stopped without immediately making data unavailable. That is, all
data should remain readable and writeable, although data redundancy
may be reduced as some PGs may end up in a degraded (but active)
state. It will return a success code if it is okay to stop the
OSD(s), or an error code and informative message if it is not or if no
conclusion can be drawn at the current time.

Usage::

    ceph osd ok-to-stop <id> [<ids>...]

Subcommand ``pause`` pauses osd.

Usage::

    ceph osd pause

Subcommand ``perf`` prints dump of OSD perf summary stats.

Usage::

    ceph osd perf

Subcommand ``pg-temp`` sets pg_temp mapping pgid:[<id> [<id>...]] (developers
only).

Usage::

    ceph osd pg-temp <pgid> {<id> [<id>...]}

Subcommand ``force-create-pg`` forces creation of pg <pgid>.

Usage::

    ceph osd force-create-pg <pgid>

Subcommand ``pool`` is used for managing data pools. It uses some additional
subcommands.

Subcommand ``create`` creates pool.

Usage::

    ceph osd pool create <poolname> <int[0-]> {<int[0-]>} {replicated|erasure}
    {<erasure_code_profile>} {<ruleset>} {<int>}
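For example, a replicated pool with 128 placement groups (the pool name and PG
count are hypothetical)::

    ceph osd pool create mypool 128 128 replicated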
Subcommand ``delete`` deletes pool.

Usage::

    ceph osd pool delete <poolname> {<poolname>} {--yes-i-really-really-mean-it}

Subcommand ``get`` gets pool parameter <var>.

Usage::

    ceph osd pool get <poolname> size|min_size|crash_replay_interval|pg_num|
    pgp_num|crush_ruleset|auid|write_fadvise_dontneed

Only for tiered pools::

    ceph osd pool get <poolname> hit_set_type|hit_set_period|hit_set_count|hit_set_fpp|
    target_max_objects|target_max_bytes|cache_target_dirty_ratio|cache_target_dirty_high_ratio|
    cache_target_full_ratio|cache_min_flush_age|cache_min_evict_age|
    min_read_recency_for_promote|hit_set_grade_decay_rate|hit_set_search_last_n

Only for erasure coded pools::

    ceph osd pool get <poolname> erasure_code_profile

Use ``all`` to get all pool parameters that apply to the pool's type::

    ceph osd pool get <poolname> all

Subcommand ``get-quota`` obtains object or byte limits for pool.

Usage::

    ceph osd pool get-quota <poolname>

Subcommand ``ls`` lists pools.

Usage::

    ceph osd pool ls {detail}

Subcommand ``mksnap`` makes snapshot <snap> in <pool>.

Usage::

    ceph osd pool mksnap <poolname> <snap>

Subcommand ``rename`` renames <srcpool> to <destpool>.

Usage::

    ceph osd pool rename <poolname> <poolname>

Subcommand ``rmsnap`` removes snapshot <snap> from <pool>.

Usage::

    ceph osd pool rmsnap <poolname> <snap>

Subcommand ``set`` sets pool parameter <var> to <val>.

Usage::

    ceph osd pool set <poolname> size|min_size|crash_replay_interval|pg_num|
    pgp_num|crush_ruleset|hashpspool|nodelete|nopgchange|nosizechange|
    hit_set_type|hit_set_period|hit_set_count|hit_set_fpp|debug_fake_ec_pool|
    target_max_bytes|target_max_objects|cache_target_dirty_ratio|
    cache_target_dirty_high_ratio|
    cache_target_full_ratio|cache_min_flush_age|cache_min_evict_age|auid|
    min_read_recency_for_promote|write_fadvise_dontneed|hit_set_grade_decay_rate|
    hit_set_search_last_n
    <val> {--yes-i-really-mean-it}

Subcommand ``set-quota`` sets object or byte limit on pool.

Usage::

    ceph osd pool set-quota <poolname> max_objects|max_bytes <val>
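For example, to cap a hypothetical pool at roughly 10 GiB (10 * 2^30 =
10737418240 bytes)::

    ceph osd pool set-quota mypool max_bytes 10737418240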
Subcommand ``stats`` obtains stats from all pools, or from specified pool.

Usage::

    ceph osd pool stats {<name>}

Subcommand ``primary-affinity`` adjusts osd primary-affinity from 0.0 <= <weight>
<= 1.0.

Usage::

    ceph osd primary-affinity <osdname (id|osd.id)> <float[0.0-1.0]>

Subcommand ``primary-temp`` sets primary_temp mapping pgid:<id>|-1 (developers
only).

Usage::

    ceph osd primary-temp <pgid> <id>

Subcommand ``repair`` initiates repair on a specified osd.

Usage::

    ceph osd repair <who>

Subcommand ``reweight`` reweights osd to 0.0 < <weight> < 1.0.

Usage::

    ceph osd reweight <int[0-]> <float[0.0-1.0]>

Subcommand ``reweight-by-pg`` reweights OSDs by PG distribution
[overload-percentage-for-consideration, default 120].

Usage::

    ceph osd reweight-by-pg {<int[100-]>} {<poolname> [<poolname>...]}

Subcommand ``reweight-by-utilization`` reweights OSDs by utilization
[overload-percentage-for-consideration, default 120].

Usage::

    ceph osd reweight-by-utilization {<int[100-]>}

Subcommand ``rm`` removes osd(s) <id> [<id>...] from the OSD map.

Usage::

    ceph osd rm <ids> [<ids>...]

Subcommand ``destroy`` marks OSD *id* as *destroyed*, removing its cephx
entity's keys and all of its dm-crypt and daemon-private config key
entries.

This command will not remove the OSD from crush, nor will it remove the
OSD from the OSD map. Instead, once the command successfully completes,
the OSD will show marked as *destroyed*.

In order to mark an OSD as destroyed, the OSD must first be marked as
**lost**.

Usage::

    ceph osd destroy <id> {--yes-i-really-mean-it}

Subcommand ``purge`` performs a combination of ``osd destroy``,
``osd rm`` and ``osd crush remove``.

Usage::

    ceph osd purge <id> {--yes-i-really-mean-it}

Subcommand ``safe-to-destroy`` checks whether it is safe to remove or
destroy an OSD without reducing overall data redundancy or durability.
It will return a success code if it is definitely safe, or an error
code and informative message if it is not or if no conclusion can be
drawn at the current time.

Usage::

    ceph osd safe-to-destroy <id> [<ids>...]
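A cautious removal sketch for a hypothetical osd.7, assuming the preconditions
above are already met (the OSD stopped and marked **lost**): run the check first
and destroy only if it succeeds::

    ceph osd safe-to-destroy 7 && ceph osd destroy 7 --yes-i-really-mean-it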
Subcommand ``scrub`` initiates scrub on specified osd.

Usage::

    ceph osd scrub <who>

Subcommand ``set`` sets <key>.

Usage::

    ceph osd set full|pause|noup|nodown|noout|noin|nobackfill|
    norebalance|norecover|noscrub|nodeep-scrub|notieragent

Subcommand ``setcrushmap`` sets crush map from input file.

Usage::

    ceph osd setcrushmap

Subcommand ``setmaxosd`` sets new maximum osd value.

Usage::

    ceph osd setmaxosd <int[0-]>

Subcommand ``set-require-min-compat-client`` enforces the cluster to be backward
compatible with the specified client version. This subcommand prevents you from
making any changes (e.g., crush tunables, or using new features) that
would violate the current setting. Note that this subcommand will fail if
any connected daemon or client is not compatible with the features offered by
the given <version>. To see the features and releases of all clients connected
to the cluster, please see `ceph features`_.

Usage::

    ceph osd set-require-min-compat-client <version>
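For example, to require that all clients speak at least the Jewel feature set::

    ceph osd set-require-min-compat-client jewel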
Subcommand ``stat`` prints summary of OSD map.

Usage::

    ceph osd stat

Subcommand ``tier`` is used for managing tiers. It uses some additional
subcommands.

Subcommand ``add`` adds the tier <tierpool> (the second one) to base pool <pool>
(the first one).

Usage::

    ceph osd tier add <poolname> <poolname> {--force-nonempty}

Subcommand ``add-cache`` adds a cache <tierpool> (the second one) of size <size>
to existing pool <pool> (the first one).

Usage::

    ceph osd tier add-cache <poolname> <poolname> <int[0-]>

Subcommand ``cache-mode`` specifies the caching mode for cache tier <pool>.

Usage::

    ceph osd tier cache-mode <poolname> none|writeback|forward|readonly|
    readforward|readproxy

Subcommand ``remove`` removes the tier <tierpool> (the second one) from base pool
<pool> (the first one).

Usage::

    ceph osd tier remove <poolname> <poolname>

Subcommand ``remove-overlay`` removes the overlay pool for base pool <pool>.

Usage::

    ceph osd tier remove-overlay <poolname>

Subcommand ``set-overlay`` sets the overlay pool for base pool <pool> to be
<overlaypool>.

Usage::

    ceph osd tier set-overlay <poolname> <poolname>

Subcommand ``tree`` prints OSD tree.

Usage::

    ceph osd tree {<int[0-]>}

Subcommand ``unpause`` unpauses osd.

Usage::

    ceph osd unpause

Subcommand ``unset`` unsets <key>.

Usage::

    ceph osd unset full|pause|noup|nodown|noout|noin|nobackfill|
    norebalance|norecover|noscrub|nodeep-scrub|notieragent
pg
--

It is used for managing the placement groups in OSDs. It uses some
additional subcommands.

Subcommand ``debug`` shows debug info about pgs.

Usage::

    ceph pg debug unfound_objects_exist|degraded_pgs_exist

Subcommand ``deep-scrub`` starts deep-scrub on <pgid>.

Usage::

    ceph pg deep-scrub <pgid>

Subcommand ``dump`` shows human-readable versions of pg map (only 'all' valid
with plain).

Usage::

    ceph pg dump {all|summary|sum|delta|pools|osds|pgs|pgs_brief} [{all|summary|sum|delta|pools|osds|pgs|pgs_brief}...]

Subcommand ``dump_json`` shows human-readable version of pg map in json only.

Usage::

    ceph pg dump_json {all|summary|sum|delta|pools|osds|pgs|pgs_brief} [{all|summary|sum|delta|pools|osds|pgs|pgs_brief}...]

Subcommand ``dump_pools_json`` shows pg pools info in json only.

Usage::

    ceph pg dump_pools_json

Subcommand ``dump_stuck`` shows information about stuck pgs.

Usage::

    ceph pg dump_stuck {inactive|unclean|stale|undersized|degraded [inactive|unclean|stale|undersized|degraded...]}

Subcommand ``getmap`` gets binary pg map to -o/stdout.

Usage::

    ceph pg getmap
Subcommand ``ls`` lists pgs with specific pool, osd, state.

Usage::

    ceph pg ls {<int>} {<state> [<state>...]}

Subcommand ``ls-by-osd`` lists pgs on osd [osd].

Usage::

    ceph pg ls-by-osd <osdname (id|osd.id)> {<int>} {<state> [<state>...]}

Subcommand ``ls-by-pool`` lists pgs with pool = [poolname].

Usage::

    ceph pg ls-by-pool <poolstr> {<int>} {<state> [<state>...]}

Subcommand ``ls-by-primary`` lists pgs with primary = [osd].

Usage::

    ceph pg ls-by-primary <osdname (id|osd.id)> {<int>} {<state> [<state>...]}

In each case ``<state>`` is one of::

    active|clean|down|replay|splitting|scrubbing|scrubq|degraded|
    inconsistent|peering|repair|recovery|backfill_wait|incomplete|
    stale|remapped|deep_scrub|backfill|backfill_toofull|
    recovery_wait|undersized
Subcommand ``map`` shows mapping of pg to osds.

Usage::

    ceph pg map <pgid>

Subcommand ``repair`` starts repair on <pgid>.

Usage::

    ceph pg repair <pgid>

Subcommand ``scrub`` starts scrub on <pgid>.

Usage::

    ceph pg scrub <pgid>

Subcommand ``set_full_ratio`` sets ratio at which pgs are considered full.

Usage::

    ceph pg set_full_ratio <float[0.0-1.0]>

Subcommand ``set_backfillfull_ratio`` sets ratio at which pgs are considered
too full to backfill.

Usage::

    ceph pg set_backfillfull_ratio <float[0.0-1.0]>

Subcommand ``set_nearfull_ratio`` sets ratio at which pgs are considered nearly
full.

Usage::

    ceph pg set_nearfull_ratio <float[0.0-1.0]>

Subcommand ``stat`` shows placement group status.

Usage::

    ceph pg stat
quorum
------

Cause the MON to enter or exit quorum.

Usage::

    ceph quorum enter|exit

Note: this only works on the MON to which the ``ceph`` command is connected.
If you want a specific MON to enter or exit quorum, use this syntax::

    ceph tell mon.<id> quorum enter|exit
quorum_status
-------------

Reports status of monitor quorum.

Usage::

    ceph quorum_status
report
------

Reports full status of cluster, optional title tag strings.

Usage::

    ceph report {<tags> [<tags>...]}
scrub
-----

Scrubs the monitor stores.

Usage::

    ceph scrub
status
------

Shows cluster status.

Usage::

    ceph status
sync force
----------

Forces a sync of and clears the monitor store.

Usage::

    ceph sync force {--yes-i-really-mean-it} {--i-know-what-i-am-doing}
tell
----

Sends a command to a specific daemon.

Usage::

    ceph tell <name (type.id)> <args> [<args>...]

List all available commands::

    ceph tell <name (type.id)> help
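For example, to ask a particular OSD for its version (the id is hypothetical)::

    ceph tell osd.2 version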
version
-------

Show mon daemon version.

Usage::

    ceph version
Options
=======

.. option:: -i infile

   will specify an input file to be passed along as a payload with the
   command to the monitor cluster. This is only used for specific
   monitor commands.

.. option:: -o outfile

   will write any payload returned by the monitor cluster with its
   reply to outfile. Only specific monitor commands (e.g. osd getmap)
   return a payload.

.. option:: -c ceph.conf, --conf=ceph.conf

   Use ceph.conf configuration file instead of the default
   ``/etc/ceph/ceph.conf`` to determine monitor addresses during startup.

.. option:: --id CLIENT_ID, --user CLIENT_ID

   Client id for authentication.

.. option:: --name CLIENT_NAME, -n CLIENT_NAME

   Client name for authentication.

.. option:: --cluster CLUSTER

   Name of the Ceph cluster.

.. option:: --admin-daemon ADMIN_SOCKET, daemon DAEMON_NAME

   Submit admin-socket commands via admin sockets in /var/run/ceph.

.. option:: --admin-socket ADMIN_SOCKET_NOPE

   You probably mean --admin-daemon

.. option:: -s, --status

   Show cluster status.

.. option:: -w, --watch

   Watch live cluster changes.

.. option:: --watch-debug

   Watch debug events.

.. option:: --watch-info

   Watch info events.

.. option:: --watch-sec

   Watch security events.

.. option:: --watch-warn

   Watch warning events.

.. option:: --watch-error

   Watch error events.

.. option:: --version, -v

   Display version.

.. option:: --verbose

   Make verbose.

.. option:: --concise

   Make less verbose.

.. option:: -f {json,json-pretty,xml,xml-pretty,plain}, --format

   Format of output.

.. option:: --connect-timeout CLUSTER_TIMEOUT

   Set a timeout for connecting to the cluster.

.. option:: --no-increasing

   ``--no-increasing`` is off by default, so increasing osd weights is allowed
   by the ``reweight-by-utilization`` and ``test-reweight-by-utilization``
   commands. When this option is used with these commands, they will not
   increase an osd's weight even if the osd is underutilized.
Availability
============

:program:`ceph` is part of Ceph, a massively scalable, open-source, distributed storage system. Please refer to
the Ceph documentation at http://ceph.com/docs for more information.
See also
========

:doc:`ceph-mon <ceph-mon>`\(8),
:doc:`ceph-osd <ceph-osd>`\(8),
:doc:`ceph-mds <ceph-mds>`\(8)