======================
 Ceph Storage Cluster
======================

The :term:`Ceph Storage Cluster` is the foundation for all Ceph deployments.
Based upon :abbr:`RADOS (Reliable Autonomic Distributed Object Store)`, Ceph
Storage Clusters consist of two types of daemons: a :term:`Ceph OSD Daemon`
(OSD), which stores data as objects on a storage node, and a :term:`Ceph
Monitor` (MON), which maintains a master copy of the cluster map. A Ceph
Storage Cluster may contain thousands of storage nodes. A minimal system has
at least one Ceph Monitor and two Ceph OSD Daemons for data replication.

The Ceph Filesystem, Ceph Object Storage, and Ceph Block Devices read data
from and write data to the Ceph Storage Cluster.
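
As a quick illustration of that division of labor, the sketch below (not part
of the original page) uses the Python ``rados`` binding to connect through a
monitor and read back the aggregate usage reported by the OSDs. It assumes a
reachable cluster, a readable ``/etc/ceph/ceph.conf``, and a client keyring on
the local host.

.. code-block:: python

    # Minimal sketch: connect to a running cluster and read aggregate usage.
    # Assumes /etc/ceph/ceph.conf and an admin keyring exist on this host.
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()   # the monitors supply the cluster map

    stats = cluster.get_cluster_stats()   # totals reported across the OSDs
    print("kB used: {kb_used}, objects: {num_objects}".format(**stats))

    cluster.shutdown()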

.. raw:: html

    <h3>Config and Deploy</h3>

Ceph Storage Clusters have a few required settings, but most configuration
settings have default values. A typical deployment uses a deployment tool
to define a cluster and bootstrap a monitor. See `Deployment`_ for details
on ``ceph-deploy``.
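
As a hedged sketch of how few settings are actually required, the example
below passes the essentials directly to the Python ``rados`` binding instead
of reading a ``ceph.conf``; the monitor address and keyring path are
placeholders, and every option not set here falls back to its default.

.. code-block:: python

    # Sketch only: the monitor address and keyring path are placeholders.
    import rados

    cluster = rados.Rados(
        rados_id='admin',
        conf=dict(
            mon_host='192.168.0.10',                        # where to find a monitor
            keyring='/etc/ceph/ceph.client.admin.keyring',  # how to authenticate
        ))
    cluster.connect()
    print(cluster.get_fsid())   # unique id of the cluster we just reached
    cluster.shutdown()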

.. toctree::
    :maxdepth: 2

    Configuration <configuration/index>
    Deployment <deployment/index>

.. raw:: html

    <h3>Operations</h3>

Once you have deployed a Ceph Storage Cluster, you may begin operating
your cluster.
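
For example, a first operational check is usually the cluster's health. The
hedged sketch below asks the monitors for the same summary that ``ceph
status`` prints, again via the Python ``rados`` binding; the exact layout of
the returned JSON varies between Ceph releases.

.. code-block:: python

    # Sketch: query cluster status through the monitors.
    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()

    ret, outbuf, outs = cluster.mon_command(
        json.dumps({"prefix": "status", "format": "json"}), b'')
    if ret == 0:
        status = json.loads(outbuf.decode('utf-8'))
        print(status.get('health'))   # e.g. a HEALTH_OK summary

    cluster.shutdown()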

.. toctree::
    :maxdepth: 2

    Operations <operations/index>

.. toctree::
    :maxdepth: 1

    Man Pages <man/index>

.. toctree::
    :hidden:

    troubleshooting/index

.. raw:: html

    <h3>APIs</h3>

Most Ceph deployments use `Ceph Block Devices`_, `Ceph Object Storage`_ and/or
the `Ceph Filesystem`_. You may also develop applications that talk directly
to the Ceph Storage Cluster.
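
The sketch below is a minimal example of such an application rather than
anything prescribed on this page: it uses the Python ``rados`` binding to
write and read one object directly, and ``mypool`` is a placeholder for a
pool that already exists in your cluster.

.. code-block:: python

    # Sketch: store and fetch a single object through librados.
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()

    ioctx = cluster.open_ioctx('mypool')   # 'mypool' is a placeholder pool name
    ioctx.write_full('hello-object', b'written via librados')
    print(ioctx.read('hello-object'))      # b'written via librados'

    ioctx.close()
    cluster.shutdown()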

.. toctree::
    :maxdepth: 2

    APIs <api/index>

.. _Ceph Block Devices: ../rbd/
.. _Ceph Filesystem: ../cephfs/
.. _Ceph Object Storage: ../radosgw/
.. _Deployment: ../rados/deployment/