-rw-r--r--   doc/start/intro.rst | 87 +++++++++++++++++++++++++++++++++++++++
1 file changed, 87 insertions, 0 deletions
diff --git a/doc/start/intro.rst b/doc/start/intro.rst
new file mode 100644
index 00000000000..b04363d9f52
--- /dev/null
+++ b/doc/start/intro.rst
@@ -0,0 +1,87 @@
+===============
+ Intro to Ceph
+===============
+
+Whether you want to provide :term:`Ceph Object Storage` and/or :term:`Ceph
+Block Device` services to :term:`Cloud Platforms`, deploy a :term:`Ceph
+Filesystem`, or use Ceph for another purpose, all :term:`Ceph Storage
+Cluster` deployments begin with setting up each :term:`Ceph Node`, your
+network, and the Ceph Storage Cluster itself. A Ceph Storage Cluster
+requires at least one Ceph Monitor and at least two Ceph OSD Daemons. The
+Ceph Metadata Server is essential when running Ceph Filesystem clients.
+
+.. ditaa::  +---------------+ +---------------+ +---------------+
+            |      OSDs     | |    Monitor    | |      MDS      |
+            +---------------+ +---------------+ +---------------+
+
+- **OSDs**: A :term:`Ceph OSD Daemon` (OSD) stores data, handles data
+  replication, recovery, backfilling, and rebalancing, and provides some
+  monitoring information to Ceph Monitors by checking other Ceph OSD Daemons
+  for a heartbeat. A Ceph Storage Cluster requires at least two Ceph OSD
+  Daemons to achieve an ``active + clean`` state when the cluster makes two
+  copies of your data (Ceph makes two copies by default, but you can adjust
+  the number of replicas; see the pool-size sketch below).
+
+- **Monitors**: A :term:`Ceph Monitor` maintains maps of the cluster state,
+  including the monitor map, the OSD map, the Placement Group (PG) map, and
+  the CRUSH map. Ceph maintains a history of each state change in the Ceph
+  Monitors, Ceph OSD Daemons, and PGs; each version of a map in that history
+  is called an "epoch" (see the map-dump sketch below).
+
+- **MDSs**: A :term:`Ceph Metadata Server` (MDS) stores metadata on behalf of
+  the :term:`Ceph Filesystem` (i.e., Ceph Block Devices and Ceph Object
+  Storage do not use MDS). Ceph Metadata Servers make it feasible for POSIX
+  file system users to execute basic commands like ``ls`` and ``find``
+  without placing an enormous burden on the Ceph Storage Cluster.
+
+Ceph stores a client's data as objects within storage pools. Using the CRUSH
+algorithm, Ceph calculates which placement group should contain the object,
+and further calculates which Ceph OSD Daemon should store the placement
+group. The CRUSH algorithm enables the Ceph Storage Cluster to scale,
+rebalance, and recover dynamically.
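+
+As a quick check that a new cluster meets the daemon requirements described
+above, the ``ceph`` CLI reports overall status, including monitor quorum and
+OSD counts. A minimal sketch; the exact output format varies by Ceph version:
+
+.. code-block:: console
+
+   # Summarize cluster health, monitor quorum, and OSD counts.
+   $ ceph -s
+
+   # Or query individual components:
+   $ ceph mon stat    # monitor quorum status
+   $ ceph osd stat    # number of OSDs that are up and in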
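+
+The replica count that lets placement groups reach ``active + clean`` is a
+per-pool setting. A sketch of inspecting and adjusting it, assuming a pool
+named ``data`` exists (substitute your own pool name):
+
+.. code-block:: console
+
+   # Show how many copies of each object the pool keeps.
+   $ ceph osd pool get data size
+
+   # Keep three copies instead of the default two.
+   $ ceph osd pool set data size 3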
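+
+Each cluster map carries its current epoch, so you can watch the history of
+state changes advance. A sketch using the ``ceph`` CLI:
+
+.. code-block:: console
+
+   # Dump the monitor map; it reports the monmap epoch.
+   $ ceph mon dump
+
+   # Dump the OSD map, which likewise begins with its epoch number.
+   $ ceph osd dump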
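+
+You can also ask the cluster to perform the CRUSH calculation for any object
+name and see where its data would land. A sketch, again assuming a pool named
+``data`` and a hypothetical object name ``foo``:
+
+.. code-block:: console
+
+   $ ceph osd map data foo
+   # Example output (epoch, PG id, and OSD ids will differ on your cluster):
+   # osdmap e42 pool 'data' (0) object 'foo' -> pg 0.7f9ab2c4 (0.4) -> up [1,0] acting [1,0]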
\ No newline at end of file