X-Git-Url: https://gerrit.opnfv.org/gerrit/gitweb?a=blobdiff_plain;f=src%2Fceph%2Fdoc%2Fcephfs%2Findex.rst;fp=src%2Fceph%2Fdoc%2Fcephfs%2Findex.rst;h=0000000000000000000000000000000000000000;hb=7da45d65be36d36b880cc55c5036e96c24b53f00;hp=c63364fd5cb8a82cc55899d0ce009fb41dcae578;hpb=691462d09d0987b47e112d6ee8740375df3c51b2;p=stor4nfv.git

diff --git a/src/ceph/doc/cephfs/index.rst b/src/ceph/doc/cephfs/index.rst
deleted file mode 100644
index c63364f..0000000
--- a/src/ceph/doc/cephfs/index.rst
+++ /dev/null
@@ -1,116 +0,0 @@
-=================
- Ceph Filesystem
-=================
-
-The :term:`Ceph Filesystem` (Ceph FS) is a POSIX-compliant filesystem that uses
-a Ceph Storage Cluster to store its data. The Ceph filesystem uses the same Ceph
-Storage Cluster as Ceph Block Devices, Ceph Object Storage with its S3 and
-Swift APIs, and the native bindings (librados).
-
-.. note:: If you are evaluating CephFS for the first time, please review
-   the best practices for deployment: :doc:`/cephfs/best-practices`
-
-.. ditaa::
-    +-----------------------+  +------------------------+
-    |                       |  |      CephFS FUSE       |
-    |                       |  +------------------------+
-    |                       |
-    |                       |  +------------------------+
-    | CephFS Kernel Object  |  |     CephFS Library     |
-    |                       |  +------------------------+
-    |                       |
-    |                       |  +------------------------+
-    |                       |  |        librados        |
-    +-----------------------+  +------------------------+
-
-    +---------------+ +---------------+ +---------------+
-    |      OSDs     | |      MDSs     | |    Monitors   |
-    +---------------+ +---------------+ +---------------+
-
-
-Using CephFS
-============
-
-Using the Ceph Filesystem requires at least one :term:`Ceph Metadata Server` in
-your Ceph Storage Cluster.
-
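-For example, whether a cluster already satisfies this requirement can be checked
-from any node that holds an admin keyring (a minimal sketch; it assumes the
-``ceph`` CLI is installed and can reach the monitors):
-
-.. code-block:: console
-
-    # Show the status of all metadata servers known to the cluster.
-    $ ceph mds stat
-
-    # List any filesystems that have already been created.
-    $ ceph fs ls
-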
-Step 1: Metadata Server
------------------------
-
-To run the Ceph Filesystem, you must have a running Ceph Storage Cluster with at
-least one :term:`Ceph Metadata Server` running.
-
-.. toctree::
-    :maxdepth: 1
-
-    Add/Remove MDS(s) <../../rados/deployment/ceph-deploy-mds>
-    MDS failover and standby configuration <standby>
-    MDS Configuration Settings <mds-config-ref>
-    Client Configuration Settings <client-config-ref>
-    Journaler Configuration <journaler>
-    Manpage ceph-mds <../../man/8/ceph-mds>
-
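-As a hedged illustration of the first link above, a metadata server is typically
-added with ``ceph-deploy`` from the admin node; the hostname ``mds-node1`` below
-is a placeholder for one of your cluster nodes:
-
-.. code-block:: console
-
-    # Deploy an MDS daemon to the chosen node.
-    $ ceph-deploy mds create mds-node1
-
-    # Confirm that the new daemon has registered with the cluster.
-    $ ceph mds stat
-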
-Step 2: Mount CephFS
---------------------
-
-Once you have a healthy Ceph Storage Cluster with at least
-one Ceph Metadata Server, you may create and mount your Ceph Filesystem.
-Ensure that your client has network connectivity and the proper
-authentication keyring.
-
-.. toctree::
-    :maxdepth: 1
-
-    Create CephFS <createfs>
-    Mount CephFS <kernel>
-    Mount CephFS as FUSE <fuse>
-    Mount CephFS in fstab <fstab>
-    Manpage ceph-fuse <../../man/8/ceph-fuse>
-    Manpage mount.ceph <../../man/8/mount.ceph>
-
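-The pages above cover these steps in detail; as a hedged sketch of the usual flow
-(the pool names, placement-group counts, monitor address, and mount point below
-are placeholders):
-
-.. code-block:: console
-
-    # Create the data and metadata pools, then the filesystem itself.
-    $ ceph osd pool create cephfs_data 64
-    $ ceph osd pool create cephfs_metadata 64
-    $ ceph fs new cephfs cephfs_metadata cephfs_data
-
-    # Mount with the kernel client, authenticating as client.admin.
-    $ sudo mkdir -p /mnt/cephfs
-    $ sudo mount -t ceph 192.168.0.1:6789:/ /mnt/cephfs \
-          -o name=admin,secretfile=/etc/ceph/admin.secret
-
-    # Or mount the same filesystem with the FUSE client instead.
-    $ sudo ceph-fuse -m 192.168.0.1:6789 /mnt/cephfs
-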
-Additional Details
-------------------
-
-.. toctree::
-    :maxdepth: 1
-
-    Deployment best practices <best-practices>
-    Administrative commands <administration>
-    POSIX compatibility <posix>
-    Experimental Features <experimental-features>
-    CephFS Quotas <quota>
-    Using Ceph with Hadoop <hadoop>
-    cephfs-journal-tool <cephfs-journal-tool>
-    File layouts <file-layouts>
-    Client eviction <eviction>
-    Handling full filesystems <full>
-    Health messages <health-messages>
-    Troubleshooting <troubleshooting>
-    Disaster recovery <disaster-recovery>
-    Client authentication <client-auth>
-    Upgrading old filesystems <upgrading>
-    Configuring directory fragmentation <dirfrag>
-    Configuring multiple active MDS daemons <multimds>
-
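-Several of the topics above (quotas and file layouts in particular) are driven
-through extended attributes on a mounted filesystem; a brief, hedged example with
-placeholder paths and sizes:
-
-.. code-block:: console
-
-    # Limit a directory tree to roughly 100 MB of data.
-    $ setfattr -n ceph.quota.max_bytes -v 100000000 /mnt/cephfs/some_dir
-
-    # Inspect the layout (pool, striping) of an existing file.
-    $ getfattr -n ceph.file.layout /mnt/cephfs/some_dir/some_file
-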
-For developers
-==============
-
-.. toctree::
-    :maxdepth: 1
-
-    Client's Capabilities <capabilities>
-    libcephfs <../../api/libcephfs-java/>
-    Mantle <mantle>