author     John Wilkins <john.wilkins@inktank.com>  2013-04-26 14:01:20 -0700
committer  John Wilkins <john.wilkins@inktank.com>  2013-04-26 14:01:20 -0700
commit     9365674036be06e860e05474c27e2e53a2ce4af8 (patch)
tree       0dd656b4f5c31eaf08b2d927f558d9ce9e3d170e
parent     e0c39c1e2139e380977247f1f577be971cc8b6eb (diff)
doc: Added ceph-deploy quick start.
Signed-off-by: John Wilkins <john.wilkins@inktank.com>
-rw-r--r--  doc/start/quick-ceph-deploy.rst  179
1 files changed, 179 insertions, 0 deletions
diff --git a/doc/start/quick-ceph-deploy.rst b/doc/start/quick-ceph-deploy.rst
new file mode 100644
index 00000000000..1835fbf1c0b
--- /dev/null
+++ b/doc/start/quick-ceph-deploy.rst
@@ -0,0 +1,179 @@
+==========================
+ Object Store Quick Start
+==========================
+
+If you haven't completed your `Preflight Checklist`_, do that first. This
+**Quick Start** sets up a two-node demo cluster so you can explore some of the
+object store functionality. It will help you install a minimal Ceph cluster
+on a server host from your admin host using ``ceph-deploy``.
+
+.. ditaa::
+ /----------------\         /----------------\
+ |   Admin Host   |<------->|   Server Host  |
+ |      cCCC      |         |      cCCC      |
+ +----------------+         +----------------+
+ |  Ceph Commands |         |   ceph - mon   |
+ \----------------/         +----------------+
+                            |   ceph - osd   |
+                            +----------------+
+                            |   ceph - mds   |
+                            \----------------/
+
+
+For best results, create a directory on your admin host for maintaining the
+configuration files of your cluster. ::
+
+ mkdir my-cluster
+ cd my-cluster
+
+.. tip:: The ``ceph-deploy`` utility will output files to the
+ current directory.
+
+
+Install Ceph
+============
+
+To install Ceph on your server host, open a command line on your admin host
+and type the following::
+
+ ceph-deploy install {server-name}[,{server-name}]
+ ceph-deploy install --stable cuttlefish ceph-server
+
+Without additional arguments, ``ceph-deploy`` will install the most recent
+stable Ceph package to the host machine. See `ceph-deploy install -h`_ for
+additional details.
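+
+To verify the installation, you can check the installed Ceph version from your
+admin host. For example, assuming passwordless SSH to ``ceph-server`` (as set
+up in the preflight checklist)::
+
+ ssh ceph-server ceph -v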
+
+
+Create a Cluster
+================
+
+To create your cluster, declare its initial monitors, generate a filesystem ID
+(``fsid``), and generate monitor keys by entering the following command at a
+command-line prompt::
+
+ ceph-deploy new {server-name}
+ ceph-deploy new ceph-server
+
+Check the output with ``ls`` and ``cat`` in the current directory. You should
+see a Ceph configuration file, a keyring, and a log file for the new cluster.
+See `ceph-deploy new -h`_ for additional details.
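+
+For example (the exact files are generated by ``ceph-deploy``; a ``ceph.conf``,
+a monitor keyring, and a log file are typical)::
+
+ ls
+ cat ceph.conf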
+
+.. topic:: Single Host Quick Start
+
+ Assuming only one host for your cluster, you will need to change the default
+ ``osd crush chooseleaf type`` setting (it defaults to ``1`` for ``host``) to
+ ``0`` so that Ceph can place object replicas on separate OSDs within the
+ same host. Add the following line to your Ceph configuration file::
+
+ osd crush chooseleaf type = 0
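+
+ For instance, a minimal single-host ``ceph.conf`` might look like the
+ following sketch (an assumption for illustration; your generated file will
+ also contain settings such as the ``fsid`` and monitor addresses)::
+
+  [global]
+  osd crush chooseleaf type = 0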
+
+
+Add a Monitor
+=============
+
+To run a Ceph cluster, you need at least one monitor. When using ``ceph-deploy``,
+the tool enforces a single monitor per host. Execute the following to create
+a monitor::
+
+ ceph-deploy mon create {server-name}
+ ceph-deploy mon create ceph-server
+
+.. tip:: In production environments, we recommend running monitors on hosts
+ that do not run OSDs.
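+
+To check that the new monitor is running, you can query its admin socket on
+the server host. A minimal sketch, assuming the default socket path and a
+monitor ID matching the host name::
+
+ ssh ceph-server sudo ceph --admin-daemon /var/run/ceph/ceph-mon.ceph-server.asok mon_status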
+
+
+Gather Keys
+===========
+
+To deploy additional daemons and provision them with monitor authentication keys
+from your admin host, you must first gather keys from a monitor host. Execute
+the following to gather keys::
+
+ ceph-deploy gatherkeys {mon-server-name}
+ ceph-deploy gatherkeys ceph-server
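+
+Once complete, your local directory should contain the admin and bootstrap
+keyrings (for example ``ceph.client.admin.keyring``,
+``ceph.bootstrap-osd.keyring``, and ``ceph.bootstrap-mds.keyring``)::
+
+ ls *.keyring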
+
+
+Add OSDs
+========
+
+For a cluster's placement groups to reach an ``active + clean`` state, the
+cluster must have at least two OSDs, because Ceph stores two copies of each
+object by default (``osd pool default size`` is ``2``).
+
+Adding OSDs is slightly more involved than other ``ceph-deploy`` commands,
+because an OSD involves both a data store and a journal. The ``ceph-deploy``
+tool can invoke ``ceph-disk-prepare`` to prepare the disk and activate the
+OSD for you.
+
+
+List Disks
+----------
+
+To list the available disk drives on a prospective OSD host, execute the
+following::
+
+ ceph-deploy disk list {osd-server-name}
+ ceph-deploy disk list ceph-server
+
+
+Zap a Disk
+----------
+
+To zap a disk (delete its partition table) in preparation for use with Ceph,
+execute the following::
+
+ ceph-deploy disk zap {osd-server-name}:/path/to/disk
+
+.. important:: This will delete all data on the disk.
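+
+For example, to zap a hypothetical second disk ``/dev/sdb`` on ``ceph-server``::
+
+ ceph-deploy disk zap ceph-server:/dev/sdb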
+
+
+Create OSDs
+-----------
+
+To prepare an OSD disk and activate it, execute the following::
+
+ ceph-deploy osd create {osd-server-name}:/path/to/disk[:/path/to/journal]
+ ceph-deploy osd create {osd-server-name}:/dev/sdb1
+ ceph-deploy osd create {osd-server-name}:/dev/sdb2
+
+You must add a minimum of two OSDs for the placement groups in a cluster to achieve
+an ``active + clean`` state.
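+
+Once the OSDs are running, you can watch the cluster reach an
+``active + clean`` state. A minimal sketch, assuming the ``ceph`` CLI is
+installed on your admin host and that you run it from the directory holding
+the gathered keyring::
+
+ ceph -c ceph.conf -k ceph.client.admin.keyring health
+ ceph -c ceph.conf -k ceph.client.admin.keyring osd tree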
+
+
+Add an MDS
+==========
+
+To use CephFS, you need at least one metadata server. Execute the following to
+create a metadata server::
+
+ ceph-deploy mds create {server-name}
+ ceph-deploy mds create ceph-server
+
+
+.. note:: Ceph currently runs in production with only one metadata server.
+ You may use more, but there is currently no commercial support for clusters
+ with multiple metadata servers.
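+
+To confirm that the metadata server is active, you can check its state with
+the same configuration and keyring as above::
+
+ ceph -c ceph.conf -k ceph.client.admin.keyring mds stat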
+
+
+Summary
+=======
+
+Once you have deployed a Ceph cluster, you can try out some of its
+administration functionality and the object store command line, and then
+proceed to the Quick Start guides for RBD, CephFS, and the Ceph Gateway.
+
+.. topic:: Other ceph-deploy Commands
+
+ To view other ``ceph-deploy`` commands, execute:
+
+ ``ceph-deploy -h``
+
+
+See `Ceph Deploy`_ for additional details.
+
+
+.. _Preflight Checklist: ../quick-start-preflight
+.. _Ceph Deploy: ../../rados/deployment
+.. _ceph-deploy install -h: ../../rados/deployment/ceph-deploy-install
+.. _ceph-deploy new -h: ../../rados/deployment/ceph-deploy-new \ No newline at end of file