=================
 Ceph Filesystem
=================

The :term:`Ceph Filesystem` (Ceph FS) is a POSIX-compliant filesystem that uses
a Ceph Storage Cluster to store its data. The Ceph filesystem uses the same Ceph
Storage Cluster system as Ceph Block Devices, Ceph Object Storage with its S3
and Swift APIs, and the native bindings (librados).

.. note:: If you are evaluating CephFS for the first time, please review
   the best practices for deployment: :doc:`/cephfs/best-practices`

.. ditaa::
            +-----------------------+  +------------------------+
            |                       |  |      CephFS FUSE       |
            |                       |  +------------------------+
            |                       |
            |                       |  +------------------------+
            | CephFS Kernel Object  |  |     CephFS Library     |
            |                       |  +------------------------+
            |                       |
            |                       |  +------------------------+
            |                       |  |        librados        |
            +-----------------------+  +------------------------+

            +---------------+  +---------------+  +---------------+
            |     OSDs      |  |     MDSs      |  |   Monitors    |
            +---------------+  +---------------+  +---------------+


Using CephFS
============

Using the Ceph Filesystem requires at least one :term:`Ceph Metadata Server` in
your Ceph Storage Cluster.

Step 1: Metadata Server
-----------------------

To run the Ceph Filesystem, you must have a running Ceph Storage Cluster with
at least one :term:`Ceph Metadata Server`.

.. toctree::
   :maxdepth: 1

   Add/Remove MDS(s) <../../rados/deployment/ceph-deploy-mds>
   MDS failover and standby configuration
   MDS Configuration Settings
   Client Configuration Settings
   Journaler Configuration
   Manpage ceph-mds <../../man/8/ceph-mds>
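For illustration only, a minimal ``ceph.conf`` fragment tuning one metadata
server setting might look like the following. The daemon name (``mds.a``) and
the 4 GiB cache value are placeholders; see the MDS Configuration Settings
page for the authoritative option list.

.. code-block:: ini

   [mds.a]
   ; Example only: cap this MDS daemon's cache at roughly 4 GiB.
   mds cache memory limit = 4294967296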

Step 2: Mount CephFS
--------------------

Once you have a healthy Ceph Storage Cluster with at least
one Ceph Metadata Server, you may create and mount your Ceph Filesystem.
Ensure that your client has network connectivity and the proper
authentication keyring.

.. toctree::
   :maxdepth: 1

   Create CephFS
   Mount CephFS
   Mount CephFS as FUSE
   Mount CephFS in fstab
   Manpage ceph-fuse <../../man/8/ceph-fuse>
   Manpage mount.ceph <../../man/8/mount.ceph>
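As a sketch of what the fstab approach looks like, a kernel-client entry might
resemble the line below. The monitor address, mount point, and secret-file
path are placeholders for your own values; see the mount.ceph manpage for the
full option set.

.. code-block:: none

   192.168.0.1:6789:/  /mnt/cephfs  ceph  name=admin,secretfile=/etc/ceph/admin.secret,noatime  0  2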

Additional Details
------------------

.. toctree::
   :maxdepth: 1

   Deployment best practices
   Administrative commands
   POSIX compatibility
   Experimental Features
   CephFS Quotas
   Using Ceph with Hadoop
   cephfs-journal-tool
   File layouts
   Client eviction
   Handling full filesystems
   Health messages
   Troubleshooting
   Disaster recovery
   Client authentication
   Upgrading old filesystems
   Configuring directory fragmentation
   Configuring multiple active MDS daemons
For developers
==============

.. toctree::
   :maxdepth: 1

   Client's Capabilities
   libcephfs <../../api/libcephfs-java/>
   Mantle