author    Tommi Virtanen <tommi.virtanen@dreamhost.com>    2011-09-06 13:19:44 -0700
committer Tommi Virtanen <tommi.virtanen@dreamhost.com>    2011-09-06 13:19:44 -0700
commit    28539ccde8cd42bfbb6acf3c02c87b9b90bdfd0a (patch)
tree      ab0071f4326700a912c7ac76f1a5aa848c6a3f22
parent    fd7a422a554b828c4b670d038a6c959ba4a09252 (diff)
download  ceph-28539ccde8cd42bfbb6acf3c02c87b9b90bdfd0a.tar.gz
doc: Document mkcephfs-style installation.
Signed-off-by: Tommi Virtanen <tommi.virtanen@dreamhost.com>
-rw-r--r--  doc/architecture.rst   |   7
-rw-r--r--  doc/ops/filesystem.rst |   2
-rw-r--r--  doc/ops/install.rst    | 119
-rw-r--r--  doc/ops/mycluster.conf |  37
4 files changed, 145 insertions(+), 20 deletions(-)
diff --git a/doc/architecture.rst b/doc/architecture.rst
index 3afbe6bcc8b..05a3fdfa493 100644
--- a/doc/architecture.rst
+++ b/doc/architecture.rst
@@ -16,6 +16,8 @@ device. Qemu/KVM also has a direct RBD client, that avoids the kernel
overhead.
+.. _monitor:
+
Monitor cluster
===============
@@ -50,6 +52,9 @@ subgroup for an even number.
.. todo:: explain monmap
+.. _rados:
+
+
RADOS
=====
@@ -85,6 +90,8 @@ attributes should work (see :ref:`xattr`).
.. todo:: explain plugins ("classes")
+.. _cephfs:
+
Ceph filesystem
===============
diff --git a/doc/ops/filesystem.rst b/doc/ops/filesystem.rst
index 75b4d67a12c..ad302dff69c 100644
--- a/doc/ops/filesystem.rst
+++ b/doc/ops/filesystem.rst
@@ -5,6 +5,8 @@
.. todo:: Benefits of each, limits on non-btrfs ones, performance data when we have them, etc
+.. _btrfs:
+
Btrfs
-----
diff --git a/doc/ops/install.rst b/doc/ops/install.rst
index 401e3cbab16..22ff4baec9c 100644
--- a/doc/ops/install.rst
+++ b/doc/ops/install.rst
@@ -39,6 +39,12 @@ Installing Ceph using Chef
Installing Ceph using ``mkcephfs``
==================================
+Pick a host that has the Ceph software installed -- it does not have
+to be a part of your cluster, but it does need to have *matching
+versions* of the ``mkcephfs`` command and other Ceph tools
+installed. This will be your `admin host`.
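+
+For example, one quick way to compare versions between the
+`admin host` and a cluster node is something like the following
+(a sketch; it assumes the ``ceph`` binary is already installed on
+both, and ``myserver01`` is a placeholder)::
+
+ ceph -v
+ ssh myserver01 ceph -v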
+
+
Installing the packages
-----------------------
@@ -78,40 +84,113 @@ Run these commands on all nodes::
sudo apt-get install ceph
+.. todo:: For older distributions, you may need to make sure your apt-get can read .bz2 compressed files. This works for Debian Lenny 5.0.3: ``apt-get install bzip2``
+.. todo:: Ponder packages; ceph.deb currently pulls in gceph (ceph.deb
+ Recommends: ceph-client-tools ceph-fuse libceph1 librados2 librbd1
+ btrfs-tools gceph) (other interesting: ceph-client-tools ceph-fuse
+ libceph-dev librados-dev librbd-dev obsync python-ceph radosgw)
+.. todo:: Other operating system support.
+
+Creating a ``ceph.conf`` file
+-----------------------------
+
+On the `admin host`, create a file with a name like
+``mycluster.conf``.
-.. todo:: For older distributions, you may need to make sure your apt-get may read .bz2 compressed files. This works for Debian Lenny 5.0.3:
+Here's a template for a 3-node cluster, where all three machines run a
+:ref:`monitor <monitor>` and an :ref:`object store <rados>`, and the
+first one runs the :ref:`Ceph filesystem daemon <cephfs>`. Replace the
+hostnames and IP addresses with your own, and add/remove hosts as
+appropriate.
- $ apt-get install bzip2
+.. literalinclude:: mycluster.conf
+   :language: ini
-.. todo:: ponder packages
+Note how the ``host`` variables dictate which node runs which
+services. See :doc:`/ops/config` for more information.
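+
+For example, with the template above, ``osd.0`` is placed on
+``myserver01`` simply because its section says so::
+
+ [osd.0]
+     host = myserver01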
- Package: ceph
- Recommends: ceph-client-tools, ceph-fuse, libceph1, librados2, librbd1, btrfs-tools, gceph
+.. todo:: More specific link for host= convention.
- Package: ceph-client-tools
- Package: ceph-fuse
- Package: libceph-dev
- Package: librados-dev
- Package: librbd-dev
- Package: obsync
- Package: python-ceph
- Package: radosgw
+.. todo:: Point to cluster design docs, once they are ready.
+.. todo:: At this point, either use 1 or 3 mons, point to :doc:`grow/mon`
-.. todo:: Other operating system support.
+
+Running ``mkcephfs``
+--------------------
+
-.. todo:: write me
+Verify that you can manage the nodes from the host you intend to run
+``mkcephfs`` on (a combined check is sketched after this list):
+
+- Make sure you can SSH_ from the `admin host` into all the nodes
+  using the short hostnames (``myserver``, not
+  ``myserver.mydept.example.com``), with no user specified
+  [#ssh_config]_.
+- Make sure you can run ``sudo`` without password prompts on all
+  nodes [#sudo]_.
+
+.. _SSH: http://openssh.org/
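+
+One way to check both at once from the `admin host` is a small loop
+like this (a sketch using the placeholder hostnames from above)::
+
+ for host in myserver01 myserver02 myserver03; do
+     ssh $host sudo true || echo "fix access to $host"
+ done
+
+If it finishes without prompting for passwords, you are ready.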
+
+If you are not using :ref:`Btrfs <btrfs>`, enable :ref:`extended
+attributes <xattr>`.
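+
+On ext3/ext4, that typically means mounting the filesystem with the
+``user_xattr`` option, for example via an ``/etc/fstab`` line like
+this (an illustration only; adjust the device and mount point to
+your setup)::
+
+ /dev/sdb1  /srv  ext4  rw,noatime,user_xattr  0  2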
+
+On each node, make sure the directory ``/srv/osd.N`` (with the
+appropriate ``N``) exists, and the right filesystem is mounted. If you
+are not using a separate filesystem for the file store, just run
+``sudo mkdir /srv/osd.N`` (with the right ``N``).
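+
+With the sample ``mycluster.conf`` above, that could look like this
+when run from the `admin host` (a sketch; adjust the hostnames and
+numbers to your layout)::
+
+ ssh myserver01 sudo mkdir -p /srv/osd.0
+ ssh myserver02 sudo mkdir -p /srv/osd.1
+ ssh myserver03 sudo mkdir -p /srv/osd.2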
+
+Then, using the right path to the ``mycluster.conf`` file you prepared
+earlier, run::
+
+ mkcephfs -a -c mycluster.conf -k mycluster.keyring
+
+This will place an `admin key` into ``mycluster.keyring``; it is what
+you will use to manage the cluster. Treat it like a ``root`` password
+to your filesystem.
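+
+For instance, you may want to keep it readable only by you
+(``mkcephfs`` does not do this for you; it is just a suggestion)::
+
+ chmod 600 mycluster.keyring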
+
+.. todo:: Link to explanation of `admin key`.
+
+``mkcephfs`` will SSH into all the nodes and set up Ceph for you.
+
+It does **not** copy the configuration or start the services. Let's
+do that::
+
+ ssh myserver01 sudo tee /etc/ceph/ceph.conf <mycluster.conf
+ ssh myserver02 sudo tee /etc/ceph/ceph.conf <mycluster.conf
+ ssh myserver03 sudo tee /etc/ceph/ceph.conf <mycluster.conf
+ ...
+
+ ssh myserver01 sudo /etc/init.d/ceph start
+ ssh myserver02 sudo /etc/init.d/ceph start
+ ssh myserver03 sudo /etc/init.d/ceph start
+ ...
+
+After a little while, the cluster should come up and reach a healthy
+state. We can check that::
+
+ ceph -k mycluster.keyring -c mycluster.conf health
+ 2011-09-06 12:33:51.561012 mon <- [health]
+ 2011-09-06 12:33:51.562164 mon2 -> 'HEALTH_OK' (0)
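+
+For a fuller report, you can also ask for the overall cluster status
+with the same keyring and configuration file::
+
+ ceph -k mycluster.keyring -c mycluster.conf -s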
+
+.. todo:: Document "healthy"
+
+.. todo:: Improve output.
+
+
+
+.. rubric:: Footnotes
+
+.. [#ssh_config] Something like this in your ``~/.ssh/config`` may
+   help -- unfortunately you need an entry per node::
+
+      Host myserverNN
+          Hostname myserverNN.dept.example.com
+          User ubuntu
-Basically, everything somebody needs to go through to build a new
-cluster when not cheating via vstart or teuthology, but without
-mentioning all the design tradeoffs and options like journaling
-locations or filesystems
+.. [#sudo] The relevant ``sudoers`` syntax looks like this::
-At this point, either use 1 or 3 mons, point to :doc:`grow/mon`
+
+      %admin ALL=(ALL) NOPASSWD:ALL
diff --git a/doc/ops/mycluster.conf b/doc/ops/mycluster.conf
new file mode 100644
index 00000000000..454eca63bfb
--- /dev/null
+++ b/doc/ops/mycluster.conf
@@ -0,0 +1,37 @@
+[global]
+ auth supported = cephx
+ keyring = /etc/ceph/$name.keyring
+
+[mon]
+ mon data = /srv/mon.$id
+
+[mds]
+
+[osd]
+ osd data = /srv/osd.$id
+ osd journal = /srv/osd.$id.journal
+ osd journal size = 1000
+
+[mon.a]
+ host = myserver01
+ mon addr = 10.0.0.101:6789
+
+[mon.b]
+ host = myserver02
+ mon addr = 10.0.0.102:6789
+
+[mon.c]
+ host = myserver03
+ mon addr = 10.0.0.103:6789
+
+[osd.0]
+ host = myserver01
+
+[osd.1]
+ host = myserver02
+
+[osd.2]
+ host = myserver03
+
+[mds.a]
+ host = myserver01