-rw-r--r--  doc/cephfs/index.rst          24
-rw-r--r--  doc/faq.rst                   11
-rw-r--r--  doc/index.rst                  1
-rw-r--r--  doc/rados/index.rst           50
-rw-r--r--  doc/radosgw/index.rst         31
-rw-r--r--  doc/rbd/rbd.rst                6
-rw-r--r--  doc/start/index.rst           46
-rw-r--r--  doc/start/quick-rgw.rst       57
-rw-r--r--  doc/start/rgw.conf             2
-rwxr-xr-x  qa/workunits/rbd/kernel.sh    35
10 files changed, 152 insertions, 111 deletions
diff --git a/doc/cephfs/index.rst b/doc/cephfs/index.rst
index c10651ccb9c..321299a3eaa 100644
--- a/doc/cephfs/index.rst
+++ b/doc/cephfs/index.rst
@@ -1,11 +1,11 @@
-=========
- Ceph FS
-=========
+=================
+ Ceph Filesystem
+=================
-The :term:`Ceph FS` file system is a POSIX-compliant file system that uses a
-Ceph Storage Cluster to store its data. Ceph FS uses the same Ceph Storage
-Cluster system as Ceph Block Devices, Ceph Object Storage with its S3 and Swift
-APIs, or native bindings (librados).
+The :term:`Ceph Filesystem` (Ceph FS) is a POSIX-compliant filesystem that uses
+a Ceph Storage Cluster to store its data. The Ceph filesystem uses the same Ceph
+Storage Cluster system as Ceph Block Devices, Ceph Object Storage with its S3
+and Swift APIs, or native bindings (librados).
.. ditaa::
@@ -26,8 +26,8 @@ APIs, or native bindings (librados).
+---------------+ +---------------+ +---------------+
-Using Ceph FS requires at least one :term:`Ceph Metadata Server` in your
-Ceph Storage Cluster.
+Using the Ceph Filesystem requires at least one :term:`Ceph Metadata Server` in
+your Ceph Storage Cluster.
@@ -36,8 +36,8 @@ Ceph Storage Cluster.
<style type="text/css">div.body h3{margin:5px 0px 0px 0px;}</style>
<table cellpadding="10"><colgroup><col width="33%"><col width="33%"><col width="33%"></colgroup><tbody valign="top"><tr><td><h3>Step 1: Metadata Server</h3>
-To run Ceph FS, you must have a running Ceph Storage Cluster with at least
-one :term:`Ceph Metadata Server` running.
+To run the Ceph Filesystem, you must have a running Ceph Storage Cluster with at
+least one :term:`Ceph Metadata Server` running.
.. toctree::
@@ -53,7 +53,7 @@ one :term:`Ceph Metadata Server` running.
</td><td><h3>Step 2: Mount Ceph FS</h3>
Once you have a healthy Ceph Storage Cluster with at least
-one Ceph Metadata Server, you may mount your Ceph FS filesystem.
+one Ceph Metadata Server, you may mount your Ceph Filesystem.
Ensure that your client has network connectivity and the proper
authentication keyring.
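
For context beyond this hunk: once at least one Ceph Metadata Server is active, a
client with network connectivity and a keyring can mount the filesystem. A minimal
sketch, assuming a monitor at 192.168.0.1 and an admin secret file already copied
to the client::

    # Kernel client (monitor address and keyring path are assumptions):
    sudo mkdir -p /mnt/cephfs
    sudo mount -t ceph 192.168.0.1:6789:/ /mnt/cephfs \
        -o name=admin,secretfile=/etc/ceph/admin.secret

    # Or the FUSE client:
    sudo ceph-fuse -m 192.168.0.1:6789 /mnt/cephfs
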
diff --git a/doc/faq.rst b/doc/faq.rst
deleted file mode 100644
index 0ee32054410..00000000000
--- a/doc/faq.rst
+++ /dev/null
@@ -1,11 +0,0 @@
-============================
- Frequently Asked Questions
-============================
-
-We provide answers to frequently asked questions from the ``ceph-users`` and
-``ceph-devel`` mailing lists, the IRC channel, and on the `Ceph.com`_ blog.
-Ceph FAQs now reside at the `Ceph Wiki`_.
-
-.. _Ceph.com: http://ceph.com
-.. _Ceph Wiki: http://wiki.ceph.com/03FAQs
-
diff --git a/doc/index.rst b/doc/index.rst
index fb6d261008b..ffd4bb60d94 100644
--- a/doc/index.rst
+++ b/doc/index.rst
@@ -46,5 +46,4 @@ cluster to ensure that the storage hosts are running smoothly.
architecture
Development <dev/index>
release-notes
- FAQ <faq>
Glossary <glossary>
diff --git a/doc/rados/index.rst b/doc/rados/index.rst
index f55657154e6..a1ef880b11d 100644
--- a/doc/rados/index.rst
+++ b/doc/rados/index.rst
@@ -1,25 +1,27 @@
-====================
- RADOS Object Store
-====================
-
-Ceph's :abbr:`RADOS (Reliable Autonomic Distributed Object Store)` Object Store
-is the foundation for all Ceph clusters. When you use object store clients such
-as the CephFS filesystem, the RESTful Gateway or Ceph block devices, Ceph reads
-data from and writes data to the object store. Ceph's RADOS Object Stores
-consist of two types of daemons: Object Storage Daemons (OSDs) store data as
-objects on storage nodes; and Monitors maintain a master copy of the cluster
-map. A Ceph cluster may contain thousands of storage nodes. A minimal system
-will have at least two OSDs for data replication.
+======================
+ Ceph Storage Cluster
+======================
+
+The :term:`Ceph Storage Cluster` is the foundation for all Ceph deployments.
+Based upon :abbr:`RADOS (Reliable Autonomic Distributed Object Store)`, Ceph
+Storage Clusters consist of two types of daemons: a :term:`Ceph OSD Daemon`
+(OSD) stores data as objects on a storage node; and a :term:`Ceph Monitor`
+maintains a master copy of the cluster map. A Ceph Storage Cluster may contain
+thousands of storage nodes. A minimal system will have at least one
+Ceph Monitor and two Ceph OSD Daemons for data replication.
+
+The Ceph Filesystem, Ceph Object Storage and Ceph Block Devices read data from
+and write data to the Ceph Storage Cluster.
.. raw:: html
<style type="text/css">div.body h3{margin:5px 0px 0px 0px;}</style>
<table cellpadding="10"><colgroup><col width="33%"><col width="33%"><col width="33%"></colgroup><tbody valign="top"><tr><td><h3>Config and Deploy</h3>
-Once you have installed Ceph packages, you must configure. There are a a few
-required settings, but most configuration settings have default values.
-Following the initial configuration, you must deploy Ceph. Deployment consists
-of creating and initializing data directories, keys, etc.
+Ceph Storage Clusters have a few required settings, but most configuration
+settings have default values. A typical deployment uses a deployment tool
+to define a cluster and bootstrap a monitor. See `Deployment`_ for details
+on ``ceph-deploy``.
.. toctree::
:maxdepth: 2
@@ -31,7 +33,8 @@ of creating and initializing data directories, keys, etc.
</td><td><h3>Operations</h3>
-Once you have a deployed Ceph cluster, you may begin operating your cluster.
+Once you have deployed a Ceph Storage Cluster, you may begin operating
+your cluster.
.. toctree::
:maxdepth: 2
@@ -54,9 +57,9 @@ Once you have a deployed Ceph cluster, you may begin operating your cluster.
</td><td><h3>APIs</h3>
-Most Ceph deployments use Ceph `block devices`_, the `gateway`_ and/or the
-`CephFS filesystem`_. You may also develop applications that talk directly to
-the Ceph object store.
+Most Ceph deployments use `Ceph Block Devices`_, `Ceph Object Storage`_ and/or the
+`Ceph Filesystem`_. You may also develop applications that talk directly to
+the Ceph Storage Cluster.
.. toctree::
:maxdepth: 2
@@ -67,6 +70,7 @@ the Ceph object store.
</td></tr></tbody></table>
-.. _block devices: ../rbd/rbd
-.. _CephFS filesystem: ../cephfs/
-.. _gateway: ../radosgw/
+.. _Ceph Block Devices: ../rbd/rbd
+.. _Ceph Filesystem: ../cephfs/
+.. _Ceph Object Storage: ../radosgw/
+.. _Deployment: ../rados/deployment/
\ No newline at end of file
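
The ``ceph-deploy`` workflow this new text points to looks roughly as follows; a
hedged sketch with hostnames node1..node3 assumed, not a substitute for the
`Deployment`_ documentation::

    ceph-deploy new node1                      # define the cluster; writes ceph.conf
    ceph-deploy install node1 node2 node3      # install Ceph packages on each node
    ceph-deploy mon create node1               # bootstrap the initial monitor
    ceph-deploy gatherkeys node1               # collect the bootstrap keys
    ceph-deploy osd create node2:sdb node3:sdb # prepare and activate one OSD disk per node
    ceph status                                # expect active+clean once the OSDs peer
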
diff --git a/doc/radosgw/index.rst b/doc/radosgw/index.rst
index e90d690cb19..9251b2411e2 100644
--- a/doc/radosgw/index.rst
+++ b/doc/radosgw/index.rst
@@ -1,23 +1,24 @@
-===============
- RADOS Gateway
-===============
+=====================
+ Ceph Object Storage
+=====================
-RADOS Gateway is an object storage interface built on top of ``librados`` to
-provide applications with a RESTful gateway to RADOS clusters. The RADOS Gateway
-supports two interfaces:
+:term:`Ceph Object Storage` is an object storage interface built on top of
+``librgw`` and ``librados`` to provide applications with a RESTful gateway to
+Ceph Storage Clusters. Ceph Object Storage supports two interfaces:
-#. **S3-compatible:** Provides block storage functionality with an interface that
- is compatible with a large subset of the Amazon S3 RESTful API.
+#. **S3-compatible:** Provides object storage functionality with an interface
+ that is compatible with a large subset of the Amazon S3 RESTful API.
-#. **Swift-compatible:** Provides block storage functionality with an interface
+#. **Swift-compatible:** Provides object storage functionality with an interface
that is compatible with a large subset of the OpenStack Swift API.
-RADOS Gateway is a FastCGI module for interacting with ``librados``. Since it
+Ceph Object Storage uses the RADOS Gateway daemon (``radosgw``), which is a
+FastCGI module for interacting with ``librgw`` and ``librados``. Since it
provides interfaces compatible with OpenStack Swift and Amazon S3, RADOS Gateway
-has its own user management. RADOS Gateway can store data in the same RADOS
-cluster used to store data from Ceph FS clients or RADOS block devices.
-The S3 and Swift APIs share a common namespace, so you may write data with
-one API and retrieve it with the other.
+has its own user management. RADOS Gateway can store data in the same Ceph
+Storage Cluster used to store data from Ceph Filesystem clients or Ceph Block
+Device clients. The S3 and Swift APIs share a common namespace, so you may write
+data with one API and retrieve it with the other.
.. ditaa:: +------------------------+ +------------------------+
| S3 compatible API | | Swift compatible API |
@@ -29,7 +30,7 @@ one API and retrieve it with the other.
| OSDs | | Monitors |
+------------------------+ +------------------------+
-.. note:: RADOS Gateway does **NOT** use the CephFS metadata server.
+.. note:: Ceph Object Storage does **NOT** use the Ceph Metadata Server.
.. toctree::
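
As an illustration of the shared S3/Swift namespace described above (user names
here are placeholders, not part of the patch), a single gateway user can carry
both kinds of credentials::

    sudo radosgw-admin user create --uid=johndoe --display-name="John Doe"
    sudo radosgw-admin subuser create --uid=johndoe --subuser=johndoe:swift --access=full
    sudo radosgw-admin key create --subuser=johndoe:swift --key-type=swift

Objects written with the S3 keys of ``johndoe`` can then be read back with the
Swift credentials of ``johndoe:swift``, and vice versa.
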
diff --git a/doc/rbd/rbd.rst b/doc/rbd/rbd.rst
index 2e9e34bf250..896a7ed68ad 100644
--- a/doc/rbd/rbd.rst
+++ b/doc/rbd/rbd.rst
@@ -1,6 +1,6 @@
-===============
- Block Devices
-===============
+===================
+ Ceph Block Device
+===================
A block is a sequence of bytes (for example, a 512-byte block of data).
Block-based storage interfaces are the most common way to store data with
diff --git a/doc/start/index.rst b/doc/start/index.rst
index e6e6ed2842b..cee41996627 100644
--- a/doc/start/index.rst
+++ b/doc/start/index.rst
@@ -2,30 +2,32 @@
Getting Started
=================
-Whether you want to provide RESTful object services and/or block devices to a
-cloud solution, deploy a CephFS filesystem or use Ceph for another purpose, all
-Ceph clusters begin with setting up your host computers, network and the Ceph
-Object Store. A Ceph object store cluster has three essential daemons:
+Whether you want to provide :term:`Ceph Object Storage` and/or :term:`Ceph Block
+Device` services to :term:`Cloud Platforms`, deploy a :term:`Ceph Filesystem` or
+use Ceph for another purpose, all :term:`Ceph Storage Cluster` deployments begin
+with setting up each :term:`Ceph Node`, your network and the Ceph Storage
+Cluster. A Ceph Storage Cluster has three essential daemons:
.. ditaa:: +---------------+ +---------------+ +---------------+
| OSDs | | Monitor | | MDS |
+---------------+ +---------------+ +---------------+
-- **OSDs**: Object Storage Daemons (OSDs) store data, handle data replication,
- recovery, backfilling, rebalancing, and provide some monitoring information
- to Ceph monitors by checking other OSDs for a heartbeat. A cluster requires
- at least two OSDs to achieve an ``active + clean`` state.
+- **OSDs**: A :term:`Ceph OSD Daemon` (OSD) stores data, handles data
+ replication, recovery, backfilling, rebalancing, and provides some monitoring
+ information to Ceph Monitors by checking other Ceph OSD Daemons for a
+ heartbeat. A Ceph Storage Cluster requires at least two Ceph OSD Daemons to
+ achieve an ``active + clean`` state.
-- **Monitors**: Ceph monitors maintain maps of the cluster state, including
- the monitor map, the OSD map, the Placement Group (PG) map, and the CRUSH
- map. Ceph maintains a history (called an "epoch") of each state change in
- the monitors, OSDs, and PGs.
+- **Monitors**: A :term:`Ceph Monitor` maintains maps of the cluster state,
+ including the monitor map, the OSD map, the Placement Group (PG) map, and the
+ CRUSH map. Ceph maintains a history (called an "epoch") of each state change
+ in the Ceph Monitors, Ceph OSD Daemons, and PGs.
-- **MDSs**: Metadata Servers (MDSs) store metadata on behalf of the CephFS
- filesystem (i.e., Ceph block devices and Ceph gateways do not use MDS).
- Ceph MDS servers make it feasible for POSIX file system users to execute
- basic commands like ``ls``, ``find``, etc. without placing an enormous
- burden on the object store.
+- **MDSs**: A :term:`Ceph Metadata Server` (MDS) stores metadata on behalf of
+ the :term:`Ceph Filesystem` (i.e., Ceph Block Devices and Ceph Object Storage
+ do not use MDS). Ceph Metadata Servers make it feasible for POSIX file system
+ users to execute basic commands like ``ls``, ``find``, etc. without placing
+ an enormous burden on the Ceph Storage Cluster.
.. raw:: html
@@ -33,9 +35,9 @@ Object Store. A Ceph object store cluster has three essential daemons:
<style type="text/css">div.body h3{margin:5px 0px 0px 0px;}</style>
<table cellpadding="10"><colgroup><col width="33%"><col width="33%"><col width="33%"></colgroup><tbody valign="top"><tr><td><h3>Step 1: Preflight</h3>
-Client and server machines may require some basic configuration work prior to
-deploying a Ceph cluster. You can also avail yourself of help from the Ceph
-community by getting involved.
+A :term:`Ceph Client` and a :term:`Ceph Node` may require some basic
+configuration work prior to deploying a Ceph Storage Cluster. You can also
+avail yourself of help from the Ceph community by getting involved.
.. toctree::
@@ -59,12 +61,12 @@ deploying a Ceph Storage Cluster.
</td><td><h3>Step 3: Ceph Client(s)</h3>
Most Ceph users don't store objects directly in the Ceph Storage Cluster. They typically use at least one of
-Ceph Block Devices, the Ceph FS filesystem, and Ceph Object Storage.
+Ceph Block Devices, the Ceph Filesystem, and Ceph Object Storage.
.. toctree::
Block Device Quick Start <quick-rbd>
- Ceph FS Quick Start <quick-cephfs>
+ Filesystem Quick Start <quick-cephfs>
Object Storage Quick Start <quick-rgw>
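
The three daemon types listed above can each be checked from an admin host once a
cluster is running; an informal sketch (output formats vary by release)::

    ceph health     # overall status, e.g. HEALTH_OK
    ceph mon stat   # monitor quorum
    ceph osd stat   # number of OSDs that are up and in
    ceph mds stat   # metadata server state (only relevant for the Ceph Filesystem)
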
diff --git a/doc/start/quick-rgw.rst b/doc/start/quick-rgw.rst
index 947409f0bc9..cb329f61c8b 100644
--- a/doc/start/quick-rgw.rst
+++ b/doc/start/quick-rgw.rst
@@ -26,7 +26,7 @@ the following procedure:
sudo a2enmod fastcgi
#. Add a line for the ``ServerName`` in the Apache configuration file
- (e.g., ``/etc/apache2/httpd.conf`` or ``/etc/apache2/apache2.conf).
+ (e.g., ``/etc/apache2/httpd.conf`` or ``/etc/apache2/apache2.conf``).
Provide the fully qualified domain name of the server machine
(e.g., ``hostname -f``). ::
@@ -41,7 +41,7 @@ Install Ceph Object Storage
===========================
Once you have installed and configured Apache and FastCGI, you may install
-Ceph Object Storage. ::
+the Ceph Object Storage daemon (``radosgw``). ::
sudo apt-get install radosgw
@@ -84,7 +84,6 @@ On the admin node, perform the following steps:
ceph-deploy --overwrite-conf config push {hostname}
-
Create a Gateway Configuration File
===================================
@@ -103,6 +102,8 @@ follow the steps below to modify it (on your server node).
#. Replace the ``{email.address}`` entry with the email address for the
server administrator.
+#. Add a ``ServerAlias`` if you wish to use S3-style subdomains.
+
#. Save the contents to the ``/etc/apache2/sites-available`` directory on
the server machine.
@@ -177,7 +178,8 @@ for Apache on the server machine. ::
sudo a2enmod ssl
-Once you enable SSL, you should generate an SSL certificate. ::
+Once you enable SSL, you should use a trusted SSL certificate. For testing,
+you can generate a self-signed (untrusted) certificate using the following::
sudo mkdir /etc/apache2/ssl
sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /etc/apache2/ssl/apache.key -out /etc/apache2/ssl/apache.crt
@@ -187,6 +189,44 @@ Then, restart Apache. ::
service apache2 restart
+Add Wildcard to DNS
+===================
+
+To use Ceph with S3-style subdomains (e.g., ``bucket-name.domain-name.com``),
+you need to add a wildcard to the DNS record of the DNS server you use with the
+``radosgw`` daemon.
+
+.. tip:: The DNS name served by this wildcard must also be specified in the
+   Ceph configuration file with the ``rgw dns name = {hostname}`` setting.
+
+For ``dnsmasq``, consider adding the following ``address`` setting with a dot
+(.) prepended to the host name::
+
+ address=/.{hostname-or-fqdn}/{host-ip-address}
+ address=/.ceph-node/192.168.0.1
+
+For ``bind``, consider adding a wildcard to the DNS record::
+
+ $TTL 604800
+ @ IN SOA ceph-node. root.ceph-node. (
+ 2 ; Serial
+ 604800 ; Refresh
+ 86400 ; Retry
+ 2419200 ; Expire
+ 604800 ) ; Negative Cache TTL
+ ;
+ @ IN NS ceph-node.
+ @ IN A 192.168.122.113
+ * IN CNAME @
+
+Restart your DNS server and ping your server with a subdomain to
+ensure that your Ceph Object Storage ``radosgw`` daemon can process
+subdomain requests. ::
+
+ ping mybucket.{fqdn}
+ ping mybucket.ceph-node
+
+
Restart Services
================
@@ -296,9 +336,16 @@ RGW's ``user:subuser`` tuple maps to the ``tenant:user`` tuple expected by Swift
`RGW Configuration`_ for Keystone integration details.
+Summary
+-------
+
+Once you have completed this Quick Start, you may use the Ceph Object Storage
+tutorials. See the `S3-compatible`_ and `Swift-compatible`_ APIs for details.
.. _Create rgw.conf: ../../radosgw/config/index.html#create-rgw-conf
.. _Ceph Deploy Quick Start: ../quick-ceph-deploy
.. _Ceph Object Storage Manual Install: ../../radosgw/manual-install
-.. _RGW Configuration: ../../radosgw/config
\ No newline at end of file
+.. _RGW Configuration: ../../radosgw/config
+.. _S3-compatible: ../../radosgw/s3
+.. _Swift-compatible: ../../radosgw/swift
\ No newline at end of file
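
Once the wildcard DNS and the ``ServerAlias`` from the steps above are in place, a
quick end-to-end check might look like this (hostnames are assumptions; any S3 or
Swift client works just as well)::

    dig +short anything.ceph-node       # the wildcard should answer for any label
    curl -i http://ceph-node/           # should return XML from radosgw, not the Apache default page
    curl -i http://mybucket.ceph-node/  # exercises the subdomain path through the ServerAlias
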
diff --git a/doc/start/rgw.conf b/doc/start/rgw.conf
index 3e4878834c6..e1bee998631 100644
--- a/doc/start/rgw.conf
+++ b/doc/start/rgw.conf
@@ -4,6 +4,8 @@ FastCgiExternalServer /var/www/s3gw.fcgi -socket /tmp/radosgw.sock
<VirtualHost *:80>
ServerName {fqdn}
+ # Uncomment the line below to add a server alias with *.{fqdn} for S3 subdomains
+ #ServerAlias *.{fqdn}
ServerAdmin {email.address}
DocumentRoot /var/www
RewriteEngine On
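
Not shown in this patch, but implied by the Quick Start: after saving the site file
(assumed here to be ``rgw.conf`` under ``/etc/apache2/sites-available``), it still
has to be enabled and Apache reloaded using the standard Debian/Ubuntu tooling::

    sudo a2ensite rgw.conf
    sudo a2dissite default
    sudo service apache2 reload
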
diff --git a/qa/workunits/rbd/kernel.sh b/qa/workunits/rbd/kernel.sh
index 3786de161ec..5416cc6a5e1 100755
--- a/qa/workunits/rbd/kernel.sh
+++ b/qa/workunits/rbd/kernel.sh
@@ -17,11 +17,16 @@ function get_device_dir {
}
function clean_up {
- rbd unmap /dev/rbd/rbd/testimg1 || true
- rbd unmap /dev/rbd/rbd/testimg1@snap1 || true
+ udevadm settle
+ [ -e /dev/rbd/rbd/testimg1@snap1 ] &&
+ rbd unmap /dev/rbd/rbd/testimg1@snap1 || true
+ if [ -e /dev/rbd/rbd/testimg1 ]; then
+ rbd unmap /dev/rbd/rbd/testimg1 || true
+ rbd snap purge testimg1 || true
+ fi
+ udevadm settle
sudo chown root /sys/bus/rbd/add /sys/bus/rbd/remove
- rbd snap purge testimg1 || true
- rbd rm testimg1 || true
+ rbd ls | grep testimg1 > /dev/null && rbd rm testimg1 || true
sudo rm -f $TMP_FILES
}
@@ -43,39 +48,32 @@ dd if=/dev/zero of=/tmp/img1 count=0 seek=150000
# import
rbd import /tmp/img1 testimg1
rbd map testimg1 --user $CEPH_ID $SECRET_ARGS
+# wait for udev to catch up
+udevadm settle
+
DEV_ID1=$(get_device_dir rbd testimg1 -)
echo "dev_id1 = $DEV_ID1"
cat /sys/bus/rbd/devices/$DEV_ID1/size
cat /sys/bus/rbd/devices/$DEV_ID1/size | grep 76800000
-# wait for udev to catch up
-while test ! -e /dev/rbd/rbd/testimg1
-do
- sleep 1
-done
sudo dd if=/dev/rbd/rbd/testimg1 of=/tmp/img1.export
cmp /tmp/img1 /tmp/img1.export
# snapshot
rbd snap create testimg1 --snap=snap1
-cat /sys/bus/rbd/devices/$DEV_ID1/snap_snap1/snap_size | grep 76800000
rbd map --snap=snap1 testimg1 --user $CEPH_ID $SECRET_ARGS
+# wait for udev to catch up
+udevadm settle
+
DEV_ID2=$(get_device_dir rbd testimg1 snap1)
cat /sys/bus/rbd/devices/$DEV_ID2/size | grep 76800000
-# wait for udev to catch up
-while test ! -e /dev/rbd/rbd/testimg1@snap1
-do
- sleep 1
-done
sudo dd if=/dev/rbd/rbd/testimg1@snap1 of=/tmp/img1.snap1
cmp /tmp/img1 /tmp/img1.snap1
# resize
rbd resize testimg1 --size=40 --allow-shrink
-echo 1 | sudo tee /sys/bus/rbd/devices/$DEV_ID1/refresh
cat /sys/bus/rbd/devices/$DEV_ID1/size | grep 41943040
-echo 1 | sudo tee /sys/bus/rbd/devices/$DEV_ID2/refresh
cat /sys/bus/rbd/devices/$DEV_ID2/size | grep 76800000
sudo dd if=/dev/rbd/rbd/testimg1 of=/tmp/img1.small
@@ -85,9 +83,8 @@ cmp /tmp/img1.trunc /tmp/img1.small
# rollback and check data again
rbd snap rollback --snap=snap1 testimg1
-echo 1 | sudo tee /sys/bus/rbd/devices/$DEV_ID1/refresh
-cat /sys/bus/rbd/devices/$DEV_ID1/snap_snap1/snap_size | grep 76800000
cat /sys/bus/rbd/devices/$DEV_ID1/size | grep 76800000
+cat /sys/bus/rbd/devices/$DEV_ID2/size | grep 76800000
sudo rm -f /tmp/img1.snap1 /tmp/img1.export
sudo dd if=/dev/rbd/rbd/testimg1@snap1 of=/tmp/img1.snap1
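
The change in this workunit, replacing ad-hoc polling loops with ``udevadm settle``,
is the general pattern for waiting on kernel RBD device nodes. Sketched standalone,
with image name and size assumed::

    rbd create testimg1 --size 1024
    sudo rbd map testimg1
    udevadm settle                         # wait for udev to create /dev/rbd/rbd/testimg1
    test -e /dev/rbd/rbd/testimg1
    sudo rbd unmap /dev/rbd/rbd/testimg1
    udevadm settle                         # likewise wait for the node removal to be processed
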