Diffstat (limited to 'doc/start/index.rst')
-rw-r--r--  doc/start/index.rst  |  46
1 file changed, 24 insertions(+), 22 deletions(-)
diff --git a/doc/start/index.rst b/doc/start/index.rst
index e6e6ed2842b..cee41996627 100644
--- a/doc/start/index.rst
+++ b/doc/start/index.rst
@@ -2,30 +2,32 @@
Getting Started
=================
-Whether you want to provide RESTful object services and/or block devices to a
-cloud solution, deploy a CephFS filesystem or use Ceph for another purpose, all
-Ceph clusters begin with setting up your host computers, network and the Ceph
-Object Store. A Ceph object store cluster has three essential daemons:
+Whether you want to provide :term:`Ceph Object Storage` and/or :term:`Ceph Block
+Device` services to :term:`Cloud Platforms`, deploy a :term:`Ceph Filesystem` or
+use Ceph for another purpose, all :term:`Ceph Storage Cluster` deployments begin
+with setting up each :term:`Ceph Node`, your network and the Ceph Storage
+Cluster. A Ceph Storage Cluster has three essential daemons:
.. ditaa::  +---------------+ +---------------+ +---------------+
            |      OSDs     | |    Monitor    | |      MDS      |
            +---------------+ +---------------+ +---------------+
-- **OSDs**: Object Storage Daemons (OSDs) store data, handle data replication,
- recovery, backfilling, rebalancing, and provide some monitoring information
- to Ceph monitors by checking other OSDs for a heartbeat. A cluster requires
- at least two OSDs to achieve an ``active + clean`` state.
+- **OSDs**: A :term:`Ceph OSD Daemon` (OSD) stores data, handles data
+ replication, recovery, backfilling, rebalancing, and provides some monitoring
+ information to Ceph Monitors by checking other Ceph OSD Daemons for a
+ heartbeat. A Ceph Storage Cluster requires at least two Ceph OSD Daemons to
+ achieve an ``active + clean`` state.
-- **Monitors**: Ceph monitors maintain maps of the cluster state, including
- the monitor map, the OSD map, the Placement Group (PG) map, and the CRUSH
- map. Ceph maintains a history (called an "epoch") of each state change in
- the monitors, OSDs, and PGs.
+- **Monitors**: A :term:`Ceph Monitor` maintains maps of the cluster state,
+ including the monitor map, the OSD map, the Placement Group (PG) map, and the
+ CRUSH map. Ceph maintains a history (called an "epoch") of each state change
+ in the Ceph Monitors, Ceph OSD Daemons, and PGs.
-- **MDSs**: Metadata Servers (MDSs) store metadata on behalf of the CephFS
- filesystem (i.e., Ceph block devices and Ceph gateways do not use MDS).
- Ceph MDS servers make it feasible for POSIX file system users to execute
- basic commands like ``ls``, ``find``, etc. without placing an enormous
- burden on the object store.
+- **MDSs**: A :term:`Ceph Metadata Server` (MDS) stores metadata on behalf of
+  the :term:`Ceph Filesystem` (i.e., Ceph Block Devices and Ceph Object Storage
+  do not use MDS). Ceph Metadata Servers make it feasible for POSIX file system
+  users to execute basic commands like ``ls``, ``find``, etc. without placing
+  an enormous burden on the Ceph Storage Cluster. A brief client-side sketch of
+  talking to a running cluster follows this list.
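
A minimal client-side sketch using the ``rados`` Python bindings (librados) to
connect to a Ceph Storage Cluster and store one object. It assumes the bindings
are installed, that ``/etc/ceph/ceph.conf`` and a client keyring are readable,
and that a pool named ``data`` already exists; the pool and object names are
illustrative only::

    import rados

    # Connect to the cluster described by the local ceph.conf.
    # The Ceph Monitors supply the cluster maps; the OSDs hold the data.
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()

    # Basic capacity and usage figures reported by the cluster.
    print(cluster.get_cluster_stats())

    # Write one object and read it back from the (assumed) 'data' pool.
    # This object path involves Monitors and OSDs only; the MDS is used
    # solely by the Ceph Filesystem.
    ioctx = cluster.open_ioctx('data')
    ioctx.write_full('hello-object', b'Hello, Ceph!')
    print(ioctx.read('hello-object'))

    ioctx.close()
    cluster.shutdown()
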
.. raw:: html
@@ -33,9 +35,9 @@ Object Store. A Ceph object store cluster has three essential daemons:
<style type="text/css">div.body h3{margin:5px 0px 0px 0px;}</style>
<table cellpadding="10"><colgroup><col width="33%"><col width="33%"><col width="33%"></colgroup><tbody valign="top"><tr><td><h3>Step 1: Preflight</h3>
-Client and server machines may require some basic configuration work prior to
-deploying a Ceph cluster. You can also avail yourself of help from the Ceph
-community by getting involved.
+A :term:`Ceph Client` and a :term:`Ceph Node` may require some basic
+configuration work prior to deploying a Ceph Storage Cluster. You can also
+avail yourself of help from the Ceph community by getting involved.
.. toctree::
@@ -59,12 +61,12 @@ deploying a Ceph Storage Cluster.
</td><td><h3>Step 3: Ceph Client(s)</h3>
Most Ceph users don't store objects directly in the Ceph Storage Cluster. They typically use at least one of
-Ceph Block Devices, the Ceph FS filesystem, and Ceph Object Storage.
+Ceph Block Devices, the Ceph Filesystem, and Ceph Object Storage. A short Ceph
+Block Device sketch follows the quick start links below.
.. toctree::
Block Device Quick Start <quick-rbd>
- Ceph FS Quick Start <quick-cephfs>
+ Filesystem Quick Start <quick-cephfs>
Object Storage Quick Start <quick-rgw>
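
As a complement to the quick starts above, a short sketch of the Ceph Block
Device client path using the ``rbd`` Python bindings together with ``rados``.
It assumes the bindings are installed, the cluster is reachable through
``/etc/ceph/ceph.conf``, and a pool named ``rbd`` exists; the image name and
size are illustrative only::

    import rados
    import rbd

    # Connect to the cluster and open an I/O context on the (assumed) pool.
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    ioctx = cluster.open_ioctx('rbd')

    # Create a 1 GiB block device image, then open it for a small I/O test.
    rbd.RBD().create(ioctx, 'demo-image', 1024 ** 3)
    image = rbd.Image(ioctx, 'demo-image')

    data = b'written through the Ceph Block Device interface'
    image.write(data, 0)                  # write at offset 0
    print(image.read(0, len(data)))       # read the same bytes back

    image.close()
    ioctx.close()
    cluster.shutdown()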