author     John Wilkins <john.wilkins@inktank.com>  2013-05-10 09:37:03 -0700
committer  John Wilkins <john.wilkins@inktank.com>  2013-05-10 09:37:03 -0700
commit     723062bbdd24c8b865c1fcf313b0603df80478b2 (patch)
tree       a6dfdbc5c287954fbd1dcfa58d58da76e96563ce /doc
parent     b353da6f682d223ba14812da0fe814eca72ad6f5 (diff)
download   ceph-723062bbdd24c8b865c1fcf313b0603df80478b2.tar.gz
doc: Updated usage syntax. Added links to hardware and manual OSD remove.
Signed-off-by: John Wilkins <john.wilkins@inktank.com>
Diffstat (limited to 'doc')
-rw-r--r--  doc/rados/deployment/ceph-deploy-osd.rst  68
1 file changed, 45 insertions(+), 23 deletions(-)
diff --git a/doc/rados/deployment/ceph-deploy-osd.rst b/doc/rados/deployment/ceph-deploy-osd.rst
index 683dd24d2c5..9b27ac41094 100644
--- a/doc/rados/deployment/ceph-deploy-osd.rst
+++ b/doc/rados/deployment/ceph-deploy-osd.rst
@@ -2,13 +2,15 @@
Add/Remove OSDs
=================
-Adding and removing OSDs may involve a few more steps when compared to adding
-and removing other Ceph daemons. OSDs write data to the disk and to journals. So
-you need to provide paths for the OSD and journal.
+Adding and removing Ceph OSD Daemons may involve a few more steps than adding
+and removing other Ceph daemons. Ceph OSD Daemons write data to the disk and
+to journals, so you need to provide a disk for the OSD and a path to the
+journal partition. This is the most common configuration, but you may
+configure your system to suit your own needs.
By default, ``ceph-deploy`` will create an OSD with the XFS filesystem. You may
-override this by providing a ``--fs-type FS_TYPE`` argument, where ``FS_TYPE``
-is an alternate filesystem such as ``ext4`` or ``btrfs``.
+override the filesystem type by providing a ``--fs-type FS_TYPE`` argument,
+where ``FS_TYPE`` is an alternate filesystem such as ``ext4`` or ``btrfs``.
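For example, preparing an OSD with ``ext4`` instead of the default XFS might
look like this (a minimal sketch; the flag placement and the example disk are
assumptions, not taken from this commit)::
    ceph-deploy osd prepare --fs-type ext4 osdserver1:sdb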
In Ceph v0.60 and later releases, Ceph supports ``dm-crypt`` on-disk encryption.
You may specify the ``--dm-crypt`` argument when preparing an OSD to tell
@@ -16,13 +18,16 @@ You may specify the ``--dm-crypt`` argument when preparing an OSD to tell
``--dmcrypt-key-dir`` argument to specify the location of ``dm-crypt``
encryption keys.
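For instance, preparing an encrypted OSD might look like this (a sketch; the
key directory path is only an example)::
    ceph-deploy osd prepare --dm-crypt --dmcrypt-key-dir /etc/ceph/dmcrypt-keys osdserver1:sdb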
+You should test various drive configurations to gauge their throughput before
+building out a large cluster. See `Data Storage`_ for additional details.
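For a rough check of a candidate drive's sequential write throughput, you
might use ``dd`` with direct I/O (a minimal sketch; the target path and sizes
are only examples)::
    dd if=/dev/zero of=/mnt/osd-test/testfile bs=4M count=256 oflag=direct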
+
List Disks
==========
-To list the disks on a host, execute the following command::
+To list the disks on a node, execute the following command::
- ceph-deploy disk list {host-name [host-name]...}
+ ceph-deploy disk list {node-name [node-name]...}
Zap Disks
@@ -31,65 +36,82 @@ Zap Disks
To zap a disk (delete its partition table) in preparation for use with Ceph,
execute the following::
- ceph-deploy disk zap {osd-server-name}:/path/to/disk
+ ceph-deploy disk zap {osd-server-name}:{disk-name}
+ ceph-deploy disk zap osdserver1:sdb
-.. important:: This will delete all data in the partition.
+.. important:: This will delete all data on the disk.
Prepare OSDs
============
Once you create a cluster, install Ceph packages, and gather keys, you
-may prepare the OSDs and deploy them to the OSD host(s). If you need to
+may prepare the OSDs and deploy them to the OSD node(s). If you need to
identify a disk or zap it prior to preparing it for use as an OSD,
see `List Disks`_ and `Zap Disks`_. ::
- ceph-deploy osd prepare {host-name}:{path/to/disk}[:{path/to/journal}]
- ceph-deploy osd prepare osdserver1:/dev/sdb1:/dev/ssd1
+ ceph-deploy osd prepare {node-name}:{disk}[:{path/to/journal}]
+ ceph-deploy osd prepare osdserver1:sdb:/dev/ssd1
The ``prepare`` command only prepares the OSD. It does not activate it. To
activate a prepared OSD, use the ``activate`` command. See `Activate OSDs`_
for details.
+The foregoing example assumes a disk dedicated to one Ceph OSD Daemon, and
+a path to an SSD journal partition. We recommend storing the journal on
+a separate drive to maximize throughput. You may also dedicate an entire
+drive to the journal (which may be expensive), or place the journal on the
+same disk as the OSD (not recommended, as it impairs performance). In the
+example above, the journal is stored on a partitioned solid state drive.
+
+.. note:: When running multiple Ceph OSD daemons on a single node, and
+ sharing a partitioned journal with each OSD daemon, you should consider
+ the entire node the minimum failure domain for CRUSH purposes, because
+ if the SSD drive fails, all of the Ceph OSD daemons that journal to it
+ will fail too.
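For example, two Ceph OSD Daemons on the same node could each journal to a
separate partition of one SSD (the device names below are hypothetical)::
    ceph-deploy osd prepare osdserver1:sdb:/dev/sdf1
    ceph-deploy osd prepare osdserver1:sdc:/dev/sdf2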
+
Activate OSDs
=============
Once you prepare an OSD you may activate it with the following command. ::
- ceph-deploy osd activate {host-name}:{path/to/disk}[:{path/to/journal}]
+ ceph-deploy osd activate {node-name}:{path/to/disk}[:{path/to/journal}]
ceph-deploy osd activate osdserver1:/dev/sdb1:/dev/ssd1
The ``activate`` command will cause your OSD to come ``up`` and be placed
-``in`` the cluster.
+``in`` the cluster. The ``activate`` command uses the path to the partition
+created when running the ``prepare`` command.
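For example, if ``prepare`` was given the whole disk ``sdb``, activation would
typically reference the data partition it created (the partition name here is
an assumption)::
    ceph-deploy osd activate osdserver1:/dev/sdb1:/dev/ssd1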
Create OSDs
===========
-You may prepare OSDs, deploy them to the OSD host(s) and activate them in one
+You may prepare OSDs, deploy them to the OSD node(s) and activate them in one
step with the ``create`` command. The ``create`` command is a convenience method
for executing the ``prepare`` and ``activate`` commands sequentially. ::
- ceph-deploy osd create {host-name}:{path-to-disk}[:{path/to/journal}]
- ceph-deploy osd create osdserver1:/dev/sdb1:/dev/ssd1
+ ceph-deploy osd create {node-name}:{disk}[:{path/to/journal}]
+ ceph-deploy osd create osdserver1:sdb:/dev/ssd1
.. List OSDs
.. =========
-.. To list the OSDs deployed on a host(s), execute the following command::
+.. To list the OSDs deployed on a node, execute the following command::
-.. ceph-deploy osd list {host-name}
+.. ceph-deploy osd list {node-name}
Destroy OSDs
============
-.. note:: Coming soon.
+.. note:: Coming soon. See `Remove OSDs`_ for manual procedures.
-To destroy an OSD, execute the following command::
+.. To destroy an OSD, execute the following command::
- ceph-deploy osd destroy {host-name}:{path-to-disk}[:{path/to/journal}]
+.. ceph-deploy osd destroy {node-name}:{path-to-disk}[:{path/to/journal}]
-Destroying an OSD will take it ``down`` and ``out`` of the cluster.
+.. Destroying an OSD will take it ``down`` and ``out`` of the cluster.
+.. _Data Storage: ../../../install/hardware-recommendations#data-storage
+.. _Remove OSDs: ../../operations/add-or-rm-osds#removing-osds-manual
\ No newline at end of file