-rw-r--r--  doc/rados/deployment/preflight-checklist.rst |  43
-rw-r--r--  doc/start/quick-ceph-deploy.rst              | 146
-rw-r--r--  doc/start/quick-start-preflight.rst          |  52
3 files changed, 162 insertions(+), 79 deletions(-)
diff --git a/doc/rados/deployment/preflight-checklist.rst b/doc/rados/deployment/preflight-checklist.rst
index e6e06777d73..5b79c23b042 100644
--- a/doc/rados/deployment/preflight-checklist.rst
+++ b/doc/rados/deployment/preflight-checklist.rst
@@ -4,12 +4,12 @@
.. versionadded:: 0.60
-This **Preflight Checklist** will help you prepare an admin host for use with
-``ceph-deploy``, and server hosts for use with passwordless ``ssh`` and
+This **Preflight Checklist** will help you prepare an admin node for use with
+``ceph-deploy``, and server nodes for use with passwordless ``ssh`` and
``sudo``.
Before you can deploy Ceph using ``ceph-deploy``, you need to ensure that you
-have a few things set up first on your admin host and on hosts running Ceph
+have a few things set up first on your admin node and on nodes running Ceph
daemons.
@@ -17,14 +17,14 @@ Install an Operating System
===========================
Install a recent release of Debian or Ubuntu (e.g., 12.04, 12.10) on your
-hosts. For additional details on operating systems or to use other operating
+nodes. For additional details on operating systems, or to use operating
systems other than Debian or Ubuntu, see `OS Recommendations`_.
Install an SSH Server
=====================
-The ``ceph-deploy`` utility requires ``ssh``, so your server host(s) require an
+The ``ceph-deploy`` utility requires ``ssh``, so your server node(s) require an
SSH server. ::
sudo apt-get install openssh-server
@@ -33,7 +33,7 @@ SSH server. ::
Create a User
=============
-Create a user on hosts running Ceph daemons.
+Create a user on nodes running Ceph daemons.
.. tip:: We recommend a username that brute-force attackers won't
   guess easily (e.g., something other than ``root``, ``ceph``, etc.).
@@ -45,7 +45,7 @@ Create a user on hosts running Ceph daemons.
sudo passwd ceph
-``ceph-deploy`` installs packages onto your hosts. This means that
+``ceph-deploy`` installs packages onto your nodes. This means that
the user you create requires passwordless ``sudo`` privileges.
.. note:: We **DO NOT** recommend enabling the ``root`` password
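
For example, one way to grant passwordless ``sudo`` (a sketch; it assumes the
username ``ceph`` and a ``/etc/sudoers.d`` layout)::

   echo "ceph ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/ceph
   sudo chmod 0440 /etc/sudoers.d/ceph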
@@ -61,7 +61,7 @@ To provide full privileges to the user, add the following to
Configure SSH
=============
-Configure your admin machine with password-less SSH access to each host
+Configure your admin node with passwordless SSH access to each node
running Ceph daemons (leave the passphrase empty). ::
ssh-keygen
@@ -72,11 +72,11 @@ running Ceph daemons (leave the passphrase empty). ::
Your identification has been saved in /ceph-client/.ssh/id_rsa.
Your public key has been saved in /ceph-client/.ssh/id_rsa.pub.
-Copy the key to each host running Ceph daemons::
+Copy the key to each node running Ceph daemons::
ssh-copy-id ceph@ceph-server
-Modify your ~/.ssh/config file of your admin host so that it defaults
+Modify the ``~/.ssh/config`` file on your admin node so that it defaults
to logging in as the user you created when no username is specified. ::
Host ceph-server
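   # Hypothetical values; substitute your server's address and the user you created.
   Hostname ceph-server.example.com
   User ceph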
@@ -88,7 +88,7 @@ Install git
===========
To clone the ``ceph-deploy`` repository, you will need to install ``git``
-on your admin host. ::
+on your admin node. ::
sudo apt-get install git
@@ -118,9 +118,22 @@ After you clone the repository, bootstrap ``ceph-deploy``. ::
cd ceph-deploy
./bootstrap
-Add ``ceph-deploy`` to your path so that so that you can execute it without
-remaining in ``ceph-deploy`` directory (e.g., ``/etc/environment``,
-``~/.pam_environment``). Once you have completed this pre-flight checklist, you
-are ready to begin using ``ceph-deploy``.
+Add ``ceph-deploy`` to your path (e.g., ``/etc/environment``,
+``~/.pam_environment``) so that you can execute it without remaining in the
+directory that contains ``ceph-deploy``.
+
+
+Ensure Connectivity
+===================
+
+Ensure that your admin node has connectivity to the network and to your server
+node (e.g., configure ``iptables``, ``ufw``, or other tools that may block
+connections or traffic forwarding so that they allow the traffic you need).
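+
+For example, a minimal connectivity check from the admin node (the hostname is
+illustrative)::
+
+   ping -c 3 ceph-server
+   ssh ceph-server true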
+
+.. tip:: The ``ceph-deploy`` tool is new, so you may encounter issues that
+   lack helpful error messages.
+
+Once you have completed this pre-flight checklist, you are ready to begin using
+``ceph-deploy``.
.. _OS Recommendations: ../../../install/os-recommendations
\ No newline at end of file
diff --git a/doc/start/quick-ceph-deploy.rst b/doc/start/quick-ceph-deploy.rst
index 8662c18b556..612641a9443 100644
--- a/doc/start/quick-ceph-deploy.rst
+++ b/doc/start/quick-ceph-deploy.rst
@@ -5,12 +5,12 @@
If you haven't completed your `Preflight Checklist`_, do that first. This
**Quick Start** sets up a two-node demo cluster so you can explore some of the
object store functionality. This **Quick Start** will help you install a
-minimal Ceph cluster on a server host from your admin host using
+minimal Ceph cluster on a server node from your admin node using
``ceph-deploy``.
.. ditaa::
/----------------\ /----------------\
- | Admin Host |<------->| Server Host |
+ | Admin Node |<------->| Server Node |
| cCCC | | cCCC |
+----------------+ +----------------+
| Ceph Commands | | ceph - mon |
@@ -21,8 +21,8 @@ minimal Ceph cluster on a server host from your admin host using
\----------------/
-For best results, create a directory on your client machine
-for maintaining the configuration of your cluster. ::
+For best results, create a directory on your admin node for maintaining the
+configuration of your cluster. ::
mkdir my-cluster
cd my-cluster
@@ -34,22 +34,22 @@ for maintaining the configuration of your cluster. ::
Create a Cluster
================
-To create your cluster, declare its inital monitors, generate a filesystem ID
+To create your cluster, declare its initial monitors, generate a filesystem ID
(``fsid``) and generate monitor keys by entering the following command at a
command-line prompt::
- ceph-deploy new {server-name}
- ceph-deploy new ceph-server
+ ceph-deploy new {node-name}
+ ceph-deploy new ceph-node
Check the output with ``ls`` and ``cat`` in the current directory. You should
see a Ceph configuration file, a keyring, and a log file for the new cluster.
See `ceph-deploy new -h`_ for additional details.
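
For example, assuming the default cluster name ``ceph``, you can inspect the
generated configuration file::

   ls
   cat ceph.conf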
-.. topic:: Single Host Quick Start
+.. topic:: Single Node Quick Start
- Assuming only one host for your cluster, you will need to modify the default
- ``osd crush chooseleaf type`` setting (it defaults to ``1`` for ``host``) to
- ``0`` so that it will peer with OSDs on the local host. Add the following
+ Assuming only one node for your cluster, you will need to modify the default
+``osd crush chooseleaf type`` setting (it defaults to ``1`` for ``host``) to
+ ``0`` so that it will peer with OSDs on the local node. Add the following
line to your Ceph configuration file::
osd crush chooseleaf type = 0
@@ -58,41 +58,50 @@ See `ceph-deploy new -h`_ for additional details.
Install Ceph
============
-To install Ceph on your server, open a command line on your client
-machine and type the following::
+To install Ceph on your server node, open a command line on your admin
+node and type the following::
- ceph-deploy install {server-name}[,{server-name}]
- ceph-deploy install --stable cuttlefish ceph-server
+ ceph-deploy install {node-name}[,{node-name}]
+ ceph-deploy install --stable cuttlefish ceph-node
Without additional arguments, ``ceph-deploy`` will install the most recent
-stable Ceph package to the host machine. See `ceph-deploy install -h`_ for
+stable Ceph package to the server node. See `ceph-deploy install -h`_ for
additional details.
Add a Monitor
=============
-To run a Ceph cluster, you need at least one monitor. When using ``ceph-deploy``,
-the tool enforces a single monitor per host. Execute the following to create
-a monitor::
+To run a Ceph cluster, you need at least one Ceph Monitor. When using
+``ceph-deploy``, the tool enforces a single Ceph Monitor per node. Execute the
+following to create a Ceph Monitor::
+
+ ceph-deploy mon create {node-name}
+ ceph-deploy mon create ceph-node
+
+.. tip:: In production environments, we recommend running Ceph Monitors on
+ nodes that do not run OSDs.
- ceph-deploy mon create {server-name}
- ceph-deploy mon create ceph-server
-.. tip:: In production environments, we recommend running monitors on hosts
- that do not run OSDs.
Gather Keys
===========
To deploy additional daemons and provision them with monitor authentication keys
-from your admin host, you must first gather keys from a monitor host. Execute
+from your admin node, you must first gather keys from a monitor node. Execute
the following to gather keys::
- ceph-deploy gatherkeys {mon-server-name}
- ceph-deploy gatherkeys ceph-server
+ ceph-deploy gatherkeys {mon-node-name}
+ ceph-deploy gatherkeys ceph-node
+
+Once you have gathered keys, your local directory should contain keyrings named
+``{cluster-name}.client.admin.keyring``,
+``{cluster-name}.bootstrap-osd.keyring``, and
+``{cluster-name}.bootstrap-mds.keyring``. If it does not, you may have a
+problem with your network connection. Ensure that you have these keyrings
+before proceeding further.
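+
+A quick way to confirm the keyrings are present (assuming the default cluster
+name ``ceph``)::
+
+   ls *.keyring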
Add OSDs
========
@@ -110,11 +119,11 @@ activate the OSD for you.
List Disks
----------
-To list the available disk drives on a prospective OSD host, execute the
+To list the available disk drives on a prospective OSD node, execute the
following::
- ceph-deploy disk list {osd-server-name}
- ceph-deploy disk list ceph-server
+ ceph-deploy disk list {osd-node-name}
+ ceph-deploy disk list ceph-node
Zap a Disk
@@ -123,37 +132,84 @@ Zap a Disk
To zap a disk (delete its partition table) in preparation for use with Ceph,
execute the following::
- ceph-deploy disk zap {osd-server-name}:/path/to/disk
- ceph-deploy disk zap ceph-server:/dev/sdb1 ceph-server:/dev/sdb2
+ ceph-deploy disk zap {osd-node-name}:{disk}
+ ceph-deploy disk zap ceph-node:sdb ceph-node:sdc
-.. important:: This will delete all data in the partition.
+.. important:: This will delete all data on the disk.
-Add OSDs
---------
+Multiple OSDs on the OS Disk (Demo Only)
+----------------------------------------
+
+For demonstration purposes, you may wish to add multiple OSDs to the OS disk
+(not recommended for production systems). To run Ceph OSD daemons on the OS
+disk, you must use ``prepare`` and ``activate`` as separate steps. First,
+create a directory for each Ceph OSD daemon. ::
+
+ mkdir /tmp/osd0
+ mkdir /tmp/osd1
+
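+If the OSD node is a different machine from your admin node, you can create
+the directories remotely instead (a sketch; the node name is illustrative)::
+
+   ssh ceph-node 'mkdir /tmp/osd0 /tmp/osd1'
+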
+Then, use ``prepare`` to prepare each directory for use with a Ceph OSD
+Daemon. ::
+
+ ceph-deploy osd prepare {osd-node-name}:/tmp/osd0
+ ceph-deploy osd prepare {osd-node-name}:/tmp/osd1
+
+Finally, use ``activate`` to activate the Ceph OSD Daemons. ::
+
+ ceph-deploy osd activate {osd-node-name}:/tmp/osd0
+ ceph-deploy osd activate {osd-node-name}:/tmp/osd1
+
+.. tip:: You need two OSDs to reach an ``active + clean`` state. You can
+ add one OSD at a time, but OSDs need to communicate with each other
+ for Ceph to run properly. Always use more than one OSD per cluster.
+
+
+Add OSDs on Standalone Disks
+----------------------------
+
+You can add OSDs using ``prepare`` and ``activate`` in two discrete
+steps. To prepare a disk for use with a Ceph OSD Daemon, execute the
+following::
+
+ ceph-deploy osd prepare {osd-node-name}:{osd-disk-name}[:/path/to/journal]
+ ceph-deploy osd prepare ceph-node:sdb
+
+To activate the Ceph OSD Daemon, execute the following::
+
+ ceph-deploy osd activate {osd-node-name}:{osd-partition-name}
+ ceph-deploy osd activate ceph-node:sdb1
+
+
+To prepare an OSD disk and activate it in one step, execute the following::
+
+ ceph-deploy osd create {osd-node-name}:{osd-disk-name}[:/path/to/journal] [{osd-node-name}:{osd-disk-name}[:/path/to/journal]]
+ ceph-deploy osd create ceph-node:sdb:/dev/ssd1 ceph-node:sdc:/dev/ssd2
-To prepare an OSD disk and activate it, execute the following::
- ceph-deploy osd create {osd-server-name}:/path/to/disk[:/path/to/journal] [{osd-server-name}:/path/to/disk[:/path/to/journal]]
- ceph-deploy osd create ceph-server:/dev/sdb1 ceph-server:/dev/sdb2
+.. note:: The journal example assumes you will use a partition on a separate
+ solid state drive (SSD). If you omit a journal drive or partition,
+ ``ceph-deploy`` will create a separate partition for the journal
+ on the same drive. If you have already formatted your disks and created
+ partitions, you may also use partition syntax for your OSD disk.
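+
+For example, if ``sdb`` already has a partition you want to use (a sketch;
+device names are illustrative)::
+
+   ceph-deploy osd create ceph-node:sdb1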
-You must add a minimum of two OSDs for the placement groups in a cluster to achieve
-an ``active + clean`` state.
+You must add a minimum of two OSDs for the placement groups in a cluster to
+achieve an ``active + clean`` state.
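+
+After the OSDs activate, you can watch the cluster reach ``active + clean``
+from the admin node (a sketch; it assumes the keyrings gathered earlier are
+available)::
+
+   ceph health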
Add an MDS
==========
-To use CephFS, you need at least one metadata server. Execute the following to
-create a metadata server::
+To use CephFS, you need at least one metadata node. Execute the following to
+create a metadata node::
- ceph-deploy mds create {server-name}
- ceph-deploy mds create ceph-server
+ ceph-deploy mds create {node-name}
+ ceph-deploy mds create ceph-node
-.. note:: Currently Ceph runs in production with one metadata server only. You
+.. note:: Currently Ceph runs in production with one metadata node only. You
may use more, but there is currently no commercial support for a cluster
- with multiple metadata servers.
+ with multiple metadata nodes.
Summary
diff --git a/doc/start/quick-start-preflight.rst b/doc/start/quick-start-preflight.rst
index fe319493825..bfef248c58a 100644
--- a/doc/start/quick-start-preflight.rst
+++ b/doc/start/quick-start-preflight.rst
@@ -7,33 +7,33 @@
Thank you for trying Ceph! Petabyte-scale data clusters are quite an
undertaking. Before delving deeper into Ceph, we recommend setting up a two-node
demo cluster to explore some of the functionality. This **Preflight Checklist**
-will help you prepare an admin host and a server host for use with
+will help you prepare an admin node and a server node for use with
``ceph-deploy``.
.. ditaa::
/----------------\ /----------------\
- | Admin Host |<------->| Server Host |
+ | Admin Node |<------->| Server Node |
| cCCC | | cCCC |
\----------------/ \----------------/
Before you can deploy Ceph using ``ceph-deploy``, you need to ensure that you
-have a few things set up first on your admin host and on hosts running Ceph
+have a few things set up first on your admin node and on nodes running Ceph
daemons.
Install an Operating System
===========================
-Install a recent release of Debian or Ubuntu (e.g., 12.04, 12.10) on your
-hosts. For additional details on operating systems or to use other operating
+Install a recent release of Debian or Ubuntu (e.g., 12.04, 12.10, 13.04) on your
+nodes. For additional details on operating systems, or to use operating
systems other than Debian or Ubuntu, see `OS Recommendations`_.
Install an SSH Server
=====================
-The ``ceph-deploy`` utility requires ``ssh``, so your server host(s) require an
+The ``ceph-deploy`` utility requires ``ssh``, so your server node(s) require an
SSH server. ::
sudo apt-get install openssh-server
@@ -42,7 +42,7 @@ SSH server. ::
Create a User
=============
-Create a user on hosts running Ceph daemons.
+Create a user on nodes running Ceph daemons.
.. tip:: We recommend a username that brute-force attackers won't
   guess easily (e.g., something other than ``root``, ``ceph``, etc.).
@@ -54,7 +54,7 @@ Create a user on hosts running Ceph daemons.
sudo passwd ceph
-``ceph-deploy`` installs packages onto your hosts. This means that
+``ceph-deploy`` installs packages onto your nodes. This means that
the user you create requires passwordless ``sudo`` privileges.
.. note:: We **DO NOT** recommend enabling the ``root`` password
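
For example, one way to grant passwordless ``sudo`` (a sketch; it assumes the
username ``ceph`` and a ``/etc/sudoers.d`` layout)::

   echo "ceph ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/ceph
   sudo chmod 0440 /etc/sudoers.d/ceph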
@@ -70,7 +70,7 @@ To provide full privileges to the user, add the following to
Configure SSH
=============
-Configure your admin machine with password-less SSH access to each host
+Configure your admin node with passwordless SSH access to each node
running Ceph daemons (leave the passphrase empty). ::
ssh-keygen
@@ -81,11 +81,11 @@ running Ceph daemons (leave the passphrase empty). ::
Your identification has been saved in /ceph-client/.ssh/id_rsa.
Your public key has been saved in /ceph-client/.ssh/id_rsa.pub.
-Copy the key to each host running Ceph daemons::
+Copy the key to each node running Ceph daemons::
ssh-copy-id ceph@ceph-server
-Modify your ~/.ssh/config file of your admin host so that it defaults
+Modify the ``~/.ssh/config`` file on your admin node so that it defaults
to logging in as the user you created when no username is specified. ::
Host ceph-server
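   # Hypothetical values; substitute your server's address and the user you created.
   Hostname ceph-server.example.com
   User ceph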
@@ -97,7 +97,7 @@ Install git
===========
To clone the ``ceph-deploy`` repository, you will need to install ``git``
-on your admin host. ::
+on your admin node. ::
sudo apt-get install git
@@ -112,6 +112,7 @@ To begin working with ``ceph-deploy``, clone its repository. ::
If you do not specify a directory name, ``git clone`` will use the repository
name ``ceph-deploy``.
+
Install python-virtualenv
=========================
@@ -129,20 +130,33 @@ After you clone the repository, bootstrap ``ceph-deploy``. ::
cd ceph-deploy
./bootstrap
-Add ``ceph-deploy`` to your path so that you can execute it without
-remaining in ``ceph-deploy`` directory (e.g., ``/etc/environment``,
-``~/.pam_environment``). Once you have completed this pre-flight checklist, you
-are ready to begin using ``ceph-deploy``.
+Add ``ceph-deploy`` to your path (e.g., ``/etc/environment``,
+``~/.pam_environment``) so that you can execute it without remaining in the
+directory that contains ``ceph-deploy``.
+
+
+Ensure Connectivity
+===================
+
+Ensure that your admin node has connectivity to the network and to your server
+node (e.g., configure ``iptables``, ``ufw``, or other tools that may block
+connections or traffic forwarding so that they allow the traffic you need).
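+
+For example, a minimal connectivity check from the admin node (the hostname is
+illustrative)::
+
+   ping -c 3 ceph-server
+   ssh ceph-server true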
+
+.. tip:: The ``ceph-deploy`` tool is new, so you may encounter issues that
+   lack helpful error messages.
+
+Once you have completed this pre-flight checklist, you are ready to begin using
+``ceph-deploy``.
Summary
=======
-Once you have passwordless ``ssh`` connectivity, passwordless ``sudo``, and
-a bootstrapped ``ceph-deploy``, proceed to the `Object Store Quick Start`_.
+Once you have passwordless ``ssh`` connectivity, passwordless ``sudo``, a
+bootstrapped ``ceph-deploy``, and appropriate connectivity, proceed to the
+`Object Store Quick Start`_.
.. tip:: The ``ceph-deploy`` utility can install Ceph packages on remote
- machines from the admin host!
+ machines from the admin node!
.. _Object Store Quick Start: ../quick-ceph-deploy
.. _OS Recommendations: ../../install/os-recommendations
\ No newline at end of file