======================
 Ceph Storage Cluster
======================

The :term:`Ceph Storage Cluster` is the foundation for all Ceph deployments.
Based upon :abbr:`RADOS (Reliable Autonomic Distributed Object Store)`, Ceph
Storage Clusters consist of two types of daemons: a :term:`Ceph OSD Daemon`
(OSD), which stores data as objects on a storage node, and a :term:`Ceph
Monitor` (MON), which maintains a master copy of the cluster map. A Ceph
Storage Cluster may contain thousands of storage nodes. A minimal system has
at least one Ceph Monitor and two Ceph OSD Daemons for data replication.

The Ceph Filesystem, Ceph Object Storage, and Ceph Block Devices read data from
and write data to the Ceph Storage Cluster.
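
For a concrete sense of the monitor's role, the following minimal sketch (an
illustration, not part of the core text) uses the ``rados`` Python binding to
fetch the master copy of the cluster map from the monitors. It assumes a
reachable cluster, a readable ``/etc/ceph/ceph.conf``, and an admin keyring.

.. code-block:: python

   import json
   import rados

   # Connect using the settings and keyring referenced by ceph.conf.
   cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
   cluster.connect()
   try:
       # Ask the monitors for the monitor map (the JSON form of
       # ``ceph mon dump``); the monitors hold the authoritative copy.
       ret, outbuf, outs = cluster.mon_command(
           json.dumps({'prefix': 'mon dump', 'format': 'json'}), b'')
       for mon in json.loads(outbuf)['mons']:
           print(mon['name'], mon['addr'])
   finally:
       cluster.shutdown()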

.. raw:: html

   <h3>Config and Deploy</h3>

Ceph Storage Clusters have a few required settings, but most configuration
settings have default values. A typical deployment uses a deployment tool
to define a cluster and bootstrap a monitor. See `Deployment`_ for details
on ``ceph-deploy``.

.. toctree::
   :maxdepth: 2

   Configuration <configuration/index>
   Deployment <deployment/index>
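
As a rough illustration of required settings versus defaults, the sketch below
builds a client-side configuration in ``librados`` (Python) without reading
``ceph.conf``; the monitor address is a placeholder, not a real cluster.

.. code-block:: python

   import rados

   # Start from an empty configuration instead of /etc/ceph/ceph.conf.
   cluster = rados.Rados()

   # Required: where the monitors are and how to authenticate.
   cluster.conf_set('mon_host', '192.168.0.1')  # placeholder address
   cluster.conf_set('keyring', '/etc/ceph/ceph.client.admin.keyring')

   # Most other settings fall back to built-in defaults.
   print(cluster.conf_get('osd_pool_default_size'))  # e.g. "3"

   # cluster.connect() would be the next step against a real cluster.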

.. raw:: html

   <h3>Operations</h3>

Once you have deployed a Ceph Storage Cluster, you may begin operating
your cluster.

.. toctree::
   :maxdepth: 2

   Operations <operations/index>

.. toctree::
   :maxdepth: 1

   Man Pages <man/index>

.. toctree::
   :hidden:

   troubleshooting/index
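
As a small example of day-to-day operation from a client, the sketch below
reads overall usage and the pool list through the ``rados`` Python binding,
assuming a running cluster and ``/etc/ceph/ceph.conf``.

.. code-block:: python

   import rados

   cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
   cluster.connect()
   try:
       # Cluster-wide usage: kb, kb_used, kb_avail, num_objects.
       stats = cluster.get_cluster_stats()
       print('used {kb_used} of {kb} KB, {num_objects} objects'.format(**stats))
       print('pools:', cluster.list_pools())
   finally:
       cluster.shutdown()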

.. raw:: html

   <h3>APIs</h3>

Most Ceph deployments use `Ceph Block Devices`_, `Ceph Object Storage`_, and/or
the `Ceph Filesystem`_. You may also develop applications that talk directly to
the Ceph Storage Cluster, as sketched below.

.. toctree::
   :maxdepth: 2

   APIs <api/index>
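
The sketch below stores and reads back one object with ``librados`` (Python);
the pool name ``data`` and the object name are illustrative only, and a pool
of that name is assumed to exist.

.. code-block:: python

   import rados

   cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
   cluster.connect()
   try:
       ioctx = cluster.open_ioctx('data')  # I/O context bound to one pool
       try:
           ioctx.write_full('hello-object', b'hello ceph')  # write an object
           print(ioctx.read('hello-object'))                # b'hello ceph'
       finally:
           ioctx.close()
   finally:
       cluster.shutdown()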

.. _Ceph Block Devices: ../rbd/
.. _Ceph Filesystem: ../cephfs/
.. _Ceph Object Storage: ../radosgw/
.. _Deployment: ../rados/deployment/