Experimental Features
=====================

CephFS includes a number of experimental features which are not fully
stabilized or qualified for users to turn on in real deployments. We generally
do our best to clearly demarcate these and fence them off so they cannot be
used by mistake.

Some of these features are closer to being done than others. We describe each
of them below with an approximation of how risky it is and briefly describe
what is required to enable it. Note that doing so will *irrevocably* mark the
maps in the monitor as having once had the flag enabled, to aid debugging and
support processes.

Inline data
-----------
By default, all CephFS file data is stored in RADOS objects. The inline data
feature allows small files (generally smaller than 2KB) to be stored in the
inode and served directly out of the MDS. This may improve small-file
performance but increases load on the MDS. It is not sufficiently tested to be
supported at this time, although failures within it are unlikely to make
non-inlined data inaccessible.

Inline data has always been off by default and requires setting the
``inline_data`` flag; a sketch of the command is shown below.
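
As a rough sketch (the filesystem name ``cephfs`` is only a placeholder here,
and some releases may also require an explicit confirmation option such as
``--yes-i-really-mean-it``), the flag can be turned on with ``ceph fs set``:

::

    # Example only: enable inline data on a filesystem named "cephfs".
    # Some releases may additionally demand a confirmation option.
    ceph fs set cephfs inline_data true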

Mantle: Programmable Metadata Load Balancer
-------------------------------------------

Mantle is a programmable metadata balancer built into the MDS. The idea is to
keep the built-in mechanisms for balancing load (migration, replication,
fragmentation) but implement the balancing policies in Lua. For details, see
:doc:`/cephfs/mantle`.

Snapshots
---------
Like multiple active MDSes, CephFS is designed from the ground up to support
snapshotting of arbitrary directories. There are no known bugs at the time of
writing, but there is insufficient testing to provide stability guarantees, and
every expansion of testing has generally revealed new issues. If you do enable
snapshots and experience failure, manual intervention will be needed.

Snapshots are known not to work properly with multiple filesystems (below) in
some cases. Specifically, if you share a pool between multiple filesystems and
delete a snapshot in one of them, expect to lose snapshotted file data in any
other filesystem using snapshots. See the :doc:`/dev/cephfs-snapshots` page for
more information.

Snapshots are known not to work with multi-MDS filesystems.

Snapshotting was blocked off with the ``allow_new_snaps`` flag prior to the
Firefly release.

Multiple filesystems within a Ceph cluster
------------------------------------------
Code was merged prior to the Jewel release which enables administrators to
create multiple independent CephFS filesystems within a single Ceph cluster.
These independent filesystems have their own set of active MDSes, cluster maps,
and data. However, the feature required extensive changes to data structures
which are not yet fully qualified, and it has security implications which are
not all apparent or resolved.

There are no known bugs, but any failures which do result from having multiple
active filesystems in your cluster will require manual intervention and, so
far, will not have been experienced by anybody else -- knowledgeable help will
be extremely limited. You also probably do not have the security or isolation
guarantees you want or think you have.

Note that snapshots and multiple filesystems are *not* tested in combination
and may not work together; see above.

Multiple filesystems were available starting in the Jewel release candidates
but were protected behind the ``enable_multiple`` flag before the final
release.


Previously experimental features
================================

Directory Fragmentation
-----------------------

Directory fragmentation was considered experimental prior to the *Luminous*
(12.2.x) release. It is now enabled by default on new filesystems. To enable
directory fragmentation on filesystems created with older versions of Ceph, set
the ``allow_dirfrags`` flag on the filesystem (``<fs_name>`` is the name of the
filesystem):

::

    ceph fs set <fs_name> allow_dirfrags true

Multiple active metadata servers
--------------------------------

Prior to the *Luminous* (12.2.x) release, running multiple active metadata
servers within a single filesystem was considered experimental. Creating
multiple active metadata servers is now permitted by default on new
filesystems.

Filesystems created with older versions of Ceph still require explicitly
enabling multiple active metadata servers as follows:

::

    ceph fs set <fs_name> allow_multimds true

Note that the default size of the active MDS cluster (``max_mds``) is still
set to 1 initially; it has to be raised explicitly, as sketched below.
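
As a rough sketch (``<fs_name>`` is a placeholder for the filesystem name, and
the value depends on how many active metadata servers are wanted), the active
MDS count is raised by setting ``max_mds``:

::

    # Example only: allow two active metadata servers in this filesystem.
    ceph fs set <fs_name> max_mds 2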