CephFS Administrative commands
==============================

These commands operate on the CephFS filesystems in your Ceph cluster.
Note that by default only one filesystem is permitted: to enable
creation of multiple filesystems use ``ceph fs flag set enable_multiple true``.

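
Once the flag is enabled, a second filesystem can be created. A minimal
sketch, assuming the flag has been set and that pools named
``cephfs2_meta`` and ``cephfs2_data`` (hypothetical names) already exist::

    ceph fs flag set enable_multiple true --yes-i-really-mean-it
    ceph fs new cephfs2 cephfs2_meta cephfs2_data
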
::

    fs new <filesystem name> <metadata pool name> <data pool name>
    fs rm <filesystem name> [--yes-i-really-mean-it]
    fs reset <filesystem name>
    fs get <filesystem name>
    fs set <filesystem name> <var> <val>
    fs add_data_pool <filesystem name> <pool name/id>
    fs rm_data_pool <filesystem name> <pool name/id>

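
For example, a sketch of creating a filesystem from scratch. The pool
names and PG count are hypothetical; in practice you would size the pools
for your cluster::

    ceph osd pool create cephfs_metadata 64
    ceph osd pool create cephfs_data 64
    ceph fs new cephfs cephfs_metadata cephfs_data
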
::

    fs set <fs name> max_file_size <size in bytes>

CephFS has a configurable maximum file size, which is 1TB by default.
You may wish to set this limit higher if you expect to store large files
in CephFS. It is a 64-bit field.

Setting ``max_file_size`` to 0 does not disable the limit. It would
simply limit clients to creating only empty files.

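
For example, to raise the limit to 4TB on a filesystem (``cephfs`` here
is a hypothetical name), first express the new limit in bytes::

    # 4 TB expressed in bytes, the unit that max_file_size expects
    max_file_size=$((4 * 1024 * 1024 * 1024 * 1024))
    echo "$max_file_size"    # prints 4398046511104
    # On a live cluster, the limit would then be applied with:
    #   ceph fs set cephfs max_file_size 4398046511104
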
Maximum file sizes and performance
----------------------------------

CephFS enforces the maximum file size limit at the point of appending to
files or setting their size. It does not affect how anything is stored.

When users create a file of an enormous size (without necessarily
writing any data to it), some operations (such as deletes) cause the MDS
to have to do a large number of operations to check if any of the RADOS
objects within the range that could exist (according to the file size)
really exist.

The ``max_file_size`` setting prevents users from creating files that
appear to be, e.g., exabytes in size, causing load on the MDS as it tries
to enumerate the objects during operations like stats or deletes.

Daemons
-------

These commands act on specific MDS daemons or ranks.

::

    mds fail <gid/name/role>

Mark an MDS daemon as failed. This is equivalent to what the cluster
would do if an MDS daemon had failed to send a message to the mon
for ``mds_beacon_grace`` seconds. If the daemon was active and a suitable
standby is available, using ``mds fail`` will force a failover to the standby.

If the MDS daemon was in reality still running, then using ``mds fail``
will cause the daemon to restart. If it was active and a standby was
available, then the "failed" daemon will return as a standby.

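
For example, failing a rank rather than a named daemon (rank ``0`` here
is hypothetical; any existing rank or daemon name works)::

    ceph mds fail 0
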
::

    mds deactivate <role>

Deactivate an MDS, causing it to flush its entire journal to
backing RADOS objects and close all open client sessions. Deactivating an MDS
is primarily intended for bringing down a rank after reducing the number of
active MDS (``max_mds``). Once the rank is deactivated, the MDS daemon will
rejoin the cluster as a standby.

``<role>`` can take one of three forms::

    <fs_name>:<rank>
    <fs_id>:<rank>
    <rank>

Use ``mds deactivate`` in conjunction with adjustments to ``max_mds`` to
shrink an MDS cluster. See :doc:`/cephfs/multimds`.

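
The shrinking procedure above can be sketched as follows, assuming a
filesystem named ``cephfs`` (hypothetical) currently running two active
MDS ranks::

    ceph fs set cephfs max_mds 1
    ceph mds deactivate cephfs:1
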
::

    tell mds.<daemon name>

::

    mds metadata <gid/name/role>

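
For example, with a hypothetical daemon named ``a``, listing its client
sessions and inspecting its metadata::

    ceph tell mds.a session ls
    ceph mds metadata a
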
Global settings
---------------

::

    fs flag set <flag name> <flag val> [<confirmation string>]

"flag name" must be one of [``enable_multiple``]

Some flags require you to confirm your intentions with ``--yes-i-really-mean-it``
or a similar string that they will prompt you with. Consider these actions
carefully before proceeding; they are placed on especially dangerous activities.

Advanced
--------

These commands are not required in normal operation, and exist
for use in exceptional circumstances. Incorrect use of these
commands may cause serious problems, such as an inaccessible
filesystem.

::

    mds compat rm_incompat

Legacy
------

The ``ceph mds set`` command is the deprecated version of ``ceph fs set``,
from before there was more than one filesystem per cluster. It operates
on whichever filesystem is marked as the default (see ``ceph fs set-default``).

::

    mds dump  # replaced by "fs get"
    mds stop  # replaced by "mds deactivate"
    mds set_max_mds  # replaced by "fs set max_mds"
    mds set  # replaced by "fs set"
    mds cluster_down  # replaced by "fs set cluster_down"
    mds cluster_up  # replaced by "fs set cluster_up"
    mds newfs  # replaced by "fs new"
    mds add_data_pool  # replaced by "fs add_data_pool"
    mds remove_data_pool  # replaced by "fs remove_data_pool"
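
For example, a script using the legacy form would be updated as follows
(``cephfs`` is a hypothetical filesystem name)::

    # old, deprecated form
    ceph mds set_max_mds 2
    # new form, naming the filesystem explicitly
    ceph fs set cephfs max_mds 2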