===============
 Intro to Ceph
===============

Whether you want to provide :term:`Ceph Object Storage` and/or
:term:`Ceph Block Device` services to :term:`Cloud Platforms`, deploy
a :term:`Ceph Filesystem`, or use Ceph for another purpose, all
:term:`Ceph Storage Cluster` deployments begin with setting up each
:term:`Ceph Node`, your network, and the Ceph Storage Cluster. A Ceph
Storage Cluster requires at least one Ceph Monitor, Ceph Manager, and
Ceph OSD (Object Storage Daemon). The Ceph Metadata Server is also
required when running Ceph Filesystem clients.

.. ditaa::  +---------------+ +------------+ +------------+ +---------------+
            |      OSDs     | |  Monitors  | |  Managers  | |      MDSs     |
            +---------------+ +------------+ +------------+ +---------------+

- **Monitors**: A :term:`Ceph Monitor` (``ceph-mon``) maintains maps
  of the cluster state, including the monitor map, manager map, the
  OSD map, and the CRUSH map. These maps are critical cluster state
  required for Ceph daemons to coordinate with each other. Monitors
  are also responsible for managing authentication between daemons and
  clients. At least three monitors are normally required for
  redundancy and high availability.

- **Managers**: A :term:`Ceph Manager` daemon (``ceph-mgr``) is
  responsible for keeping track of runtime metrics and the current
  state of the Ceph cluster, including storage utilization, current
  performance metrics, and system load. The Ceph Manager daemons also
  host Python-based plugins to manage and expose Ceph cluster
  information, including a web-based `dashboard`_ and `REST API`_. At
  least two managers are normally required for high availability.

- **Ceph OSDs**: A :term:`Ceph OSD` (object storage daemon,
  ``ceph-osd``) stores data, handles data replication, recovery, and
  rebalancing, and provides some monitoring information to Ceph
  Monitors and Managers by checking other Ceph OSD Daemons for a
  heartbeat. At least three Ceph OSDs are normally required for
  redundancy and high availability.

- **MDSs**: A :term:`Ceph Metadata Server` (MDS, ``ceph-mds``) stores
  metadata on behalf of the :term:`Ceph Filesystem` (i.e., Ceph Block
  Devices and Ceph Object Storage do not use MDS). Ceph Metadata
  Servers allow POSIX file system users to execute basic commands (like
  ``ls``, ``find``, etc.) without placing an enormous burden on the
  Ceph Storage Cluster.

Ceph stores data as objects within logical storage pools. Using the
:term:`CRUSH` algorithm, Ceph calculates which placement group should
contain the object, and further calculates which Ceph OSD Daemon
should store the placement group. The CRUSH algorithm enables the
Ceph Storage Cluster to scale, rebalance, and recover dynamically.

.. _dashboard: ../../mgr/dashboard
.. _REST API: ../../mgr/restful
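
The essential point of the placement calculation described above is that
object locations are *computed*, not looked up in a central table. The
following Python sketch illustrates that idea only; it is **not** Ceph's
actual CRUSH implementation, and the placement group count, replica count,
OSD names, and object name are made-up values.

.. code-block:: python

   # Toy illustration of placement-by-calculation (not the real CRUSH
   # algorithm): both the placement group and its OSDs are derived
   # deterministically from names, so any client can compute an object's
   # location without consulting a lookup table.
   import hashlib

   PG_NUM = 128                                   # assumed PGs in the pool
   OSDS = ["osd.0", "osd.1", "osd.2", "osd.3", "osd.4", "osd.5"]  # assumed cluster
   REPLICAS = 3

   def stable_hash(value: str) -> int:
       """Deterministic hash, identical on every client and every run."""
       return int.from_bytes(hashlib.md5(value.encode()).digest()[:8], "big")

   def object_to_pg(object_name: str, pg_num: int = PG_NUM) -> int:
       """Map an object name to a placement group id."""
       return stable_hash(object_name) % pg_num

   def pg_to_osds(pg_id: int, replicas: int = REPLICAS) -> list:
       """Pick a pseudo-random but deterministic set of OSDs for a PG."""
       ranked = sorted(OSDS, key=lambda osd: stable_hash("{}:{}".format(pg_id, osd)))
       return ranked[:replicas]

   if __name__ == "__main__":
       obj = "my-object"                          # hypothetical object name
       pg = object_to_pg(obj)
       print("{} -> pg {} -> {}".format(obj, pg, pg_to_osds(pg)))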
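
As a concrete example of how a client interacts with the daemons described
above, the ``rados`` Python bindings (packaged as ``python-rados``) bootstrap
from the monitors using a standard ``ceph.conf`` and then perform object I/O
against the OSDs. This is a minimal sketch, assuming a running cluster, a
readable ``/etc/ceph/ceph.conf`` with a valid keyring, and an existing pool
named ``mypool``.

.. code-block:: python

   # Minimal librados client sketch: the client contacts the monitors to
   # obtain the cluster maps, after which object I/O goes to the OSDs.
   # Assumes /etc/ceph/ceph.conf, a valid keyring, and a pool "mypool".
   import rados

   cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
   cluster.connect()                          # contacts the monitors first

   stats = cluster.get_cluster_stats()        # cluster-wide usage statistics
   print("kB used: {kb_used} / {kb} total, {num_objects} objects".format(**stats))

   ioctx = cluster.open_ioctx("mypool")       # I/O context for an existing pool
   try:
       ioctx.write_full("hello-object", b"hello ceph")   # stored on OSDs chosen by CRUSH
       print(ioctx.read("hello-object"))
   finally:
       ioctx.close()
       cluster.shutdown()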

Recommendations
---------------

To begin using Ceph in production, you should review our hardware
recommendations and operating system recommendations.

.. toctree::
   :maxdepth: 2

   Hardware Recommendations <hardware-recommendations>
   OS Recommendations <os-recommendations>

Get Involved
------------

You can get help, or contribute documentation, source code, and bug
reports, by getting involved in the Ceph community.

.. toctree::
   :maxdepth: 2

   get-involved
   documenting-ceph