-rw-r--r--  ceph.spec.in                  3
-rw-r--r--  debian/ceph-test.install      1
-rw-r--r--  debian/ceph.install           1
-rw-r--r--  debian/librados-dev.install   1
-rw-r--r--  doc/faq.rst                  35
-rw-r--r--  doc/start/quick-cephfs.rst   13
-rw-r--r--  doc/start/quick-rbd.rst       9
7 files changed, 55 insertions, 8 deletions
diff --git a/ceph.spec.in b/ceph.spec.in
index 4724dbb9e95..a260d32e0f7 100644
--- a/ceph.spec.in
+++ b/ceph.spec.in
@@ -345,6 +345,7 @@ fi
%{_bindir}/rbd
%{_bindir}/ceph-debugpack
%{_bindir}/ceph-coverage
+%{_bindir}/ceph_mon_store_converter
%{_initrddir}/ceph
/sbin/mkcephfs
/sbin/mount.ceph
@@ -417,6 +418,7 @@ fi
%{_includedir}/rados/buffer.h
%{_includedir}/rados/page.h
%{_includedir}/rados/crc32c.h
+%{_includedir}/rados/rados_types.h
%dir %{_includedir}/rbd
%{_includedir}/rbd/librbd.h
%{_includedir}/rbd/librbd.hpp
@@ -570,6 +572,7 @@ fi
%{_bindir}/ceph_test_rados_open_pools_parallel
%{_bindir}/ceph_test_rados_watch_notify
%{_bindir}/ceph_test_signal_handlers
+%{_bindir}/ceph_test_store_tool
%{_bindir}/ceph_test_timers
%{_bindir}/ceph_tpbench
%{_bindir}/ceph_xattr_bench
diff --git a/debian/ceph-test.install b/debian/ceph-test.install
index 1aba361ee9a..63cb379e156 100644
--- a/debian/ceph-test.install
+++ b/debian/ceph-test.install
@@ -56,6 +56,7 @@ usr/bin/ceph_test_rados_list_parallel
usr/bin/ceph_test_rados_open_pools_parallel
usr/bin/ceph_test_rados_watch_notify
usr/bin/ceph_test_signal_handlers
+usr/bin/ceph_test_store_tool
usr/bin/ceph_test_timers
usr/bin/ceph_tpbench
usr/bin/ceph_xattr_bench
diff --git a/debian/ceph.install b/debian/ceph.install
index fb70d9b9380..b942679fd73 100644
--- a/debian/ceph.install
+++ b/debian/ceph.install
@@ -6,6 +6,7 @@ usr/bin/ceph-run
usr/bin/ceph-mon
usr/bin/ceph-osd
usr/bin/ceph-debugpack
+usr/bin/ceph_mon_store_converter
sbin/ceph-disk-prepare usr/sbin/
sbin/ceph-disk-activate usr/sbin/
sbin/ceph-create-keys usr/sbin/
diff --git a/debian/librados-dev.install b/debian/librados-dev.install
index ecc29c7cf36..876382b0a3c 100644
--- a/debian/librados-dev.install
+++ b/debian/librados-dev.install
@@ -6,5 +6,6 @@ usr/include/rados/librados.hpp
usr/include/rados/buffer.h
usr/include/rados/page.h
usr/include/rados/crc32c.h
+usr/include/rados/rados_types.h
usr/bin/librados-config
usr/share/man/man8/librados-config.8
diff --git a/doc/faq.rst b/doc/faq.rst
index e702b22377f..cec5808d7e8 100644
--- a/doc/faq.rst
+++ b/doc/faq.rst
@@ -51,6 +51,40 @@ Ceph also runs on Fedora and Enterprise Linux derivates (RHEL, CentOS) using
You can also download Ceph source `tarballs`_ and build Ceph for your
distribution. See `Installation`_ for details.
+.. _try-ceph:
+
+How Can I Give Ceph a Try?
+==========================
+
+Follow our `Quick Start`_ guides. They will get you up and running quickly
+without requiring deeper knowledge of Ceph. Our `Quick Start`_ guides will also
+help you avoid a few issues related to limited deployments. If you choose to
+stray from the Quick Starts, there are a few things you need to know.
+
+We recommend using at least two hosts, and a recent Linux kernel. In older
+kernels, Ceph can deadlock if you mount CephFS or an RBD device on the same
+host that runs your test Ceph cluster. This is not a Ceph-specific issue; it
+stems from memory pressure and the kernel's need to reclaim memory. Recent
+kernels with an up-to-date ``glibc`` and ``syncfs(2)`` reduce the problem
+considerably, but only a memory pool large enough to absorb incoming requests
+guarantees against the deadlock occurring. Loopback NFS can hit a similar
+problem with kernel buffer cache management when clients run on a cluster
+machine. You can avoid these scenarios entirely by using a separate client
+host, which is also more realistic for deployment scenarios.
+
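As a rough sketch of that layout (assuming a recent kernel on the client and
that the monitor address is known), check the kernel and mount from a machine
that is not part of the cluster: ::

    client$ uname -r                    # verify a recent kernel first
    client$ sudo mkdir /mnt/mycephfs
    client$ sudo mount -t ceph {ip-address-of-monitor}:6789:/ /mnt/mycephfs
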
+We recommend using at least two OSDs and keeping at least two replicas of the
+data. OSDs report failed peers to the monitor and interact with other OSDs when
+replicating data. With only one OSD, no peer can check its heartbeat, and no
+peer can tell it which placement groups it should hold, so a placement group
+can remain stuck "stale" forever. These issues are unlikely to occur in
+production.
+
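For example (a sketch assuming the default ``data`` pool), you can confirm the
OSD count, keep two replicas, and look for stuck placement groups with: ::

    ceph osd stat                  # e.g. "2 osds: 2 up, 2 in"
    ceph osd pool set data size 2  # keep two replicas of each object
    ceph pg dump_stuck stale       # list placement groups stuck "stale"
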
+Finally, the `Quick Start`_ guides are meant to get you up and running quickly.
+To build performant systems, you'll need a drive for each OSD, and you will
+likely benefit from writing the OSD journal to a separate drive from the OSD
+data.
+
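A hypothetical ``ceph.conf`` fragment along those lines, assuming ``/dev/sdb1``
is a spare partition on a different drive than the one holding the OSD data: ::

    [osd.0]
            osd data = /var/lib/ceph/osd/ceph-0
            osd journal = /dev/sdb1
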
How Many OSDs Can I Run per Host?
=================================
@@ -346,3 +380,4 @@ Documentation for the build procedure.
.. _Striping: ../architecture##how-ceph-clients-stripe-data
.. _https://github.com/ceph/ceph/blob/master/doc/faq.rst: https://github.com/ceph/ceph/blob/master/doc/faq.rst
.. _Filesystem Recommendations: ../rados/configuration/filesystem-recommendations
+.. _Quick Start: ../start
\ No newline at end of file
diff --git a/doc/start/quick-cephfs.rst b/doc/start/quick-cephfs.rst
index 6674087f922..5e17c4d39a4 100644
--- a/doc/start/quick-cephfs.rst
+++ b/doc/start/quick-cephfs.rst
@@ -5,8 +5,6 @@
To use this guide, you must have executed the procedures in the `5-minute
Quick Start`_ guide first. Execute this quick start on the client machine.
-.. important:: Mount the CephFS filesystem on the client machine,
- not the cluster machine.
Kernel Driver
=============
@@ -15,7 +13,12 @@ Mount Ceph FS as a kernel driver. ::
sudo mkdir /mnt/mycephfs
sudo mount -t ceph {ip-address-of-monitor}:6789:/ /mnt/mycephfs
-
+
+
+.. note:: Mount the CephFS filesystem on the client machine,
+ not the cluster machine. See `FAQ`_ for details.
+
+
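If ``cephx`` authentication is enabled on the cluster, the mount also needs
credentials; a sketch, assuming the admin key was saved to a hypothetical
``/etc/ceph/admin.secret`` file on the client: ::

    sudo mount -t ceph {ip-address-of-monitor}:6789:/ /mnt/mycephfs \
         -o name=admin,secretfile=/etc/ceph/admin.secret
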
Filesystem in User Space (FUSE)
===============================
@@ -24,6 +27,7 @@ Mount Ceph FS as with FUSE. Replace {username} with your username. ::
sudo mkdir /home/{username}/cephfs
sudo ceph-fuse -m {ip-address-of-monitor}:6789 /home/{username}/cephfs
+
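To detach the FUSE client later, either of the following should work (the
second assumes the ``fusermount`` utility is installed): ::

    sudo umount /home/{username}/cephfs
    fusermount -u /home/{username}/cephfs
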
Additional Information
======================
@@ -33,4 +37,5 @@ details on running CephFS in a production environment.
.. _5-minute Quick Start: ../quick-start
.. _CephFS: ../../cephfs/
-.. _Inktank: http://inktank.com
\ No newline at end of file
+.. _Inktank: http://inktank.com
+.. _FAQ: ../../faq#try-ceph
diff --git a/doc/start/quick-rbd.rst b/doc/start/quick-rbd.rst
index e7561ec9dd6..7300547e5ea 100644
--- a/doc/start/quick-rbd.rst
+++ b/doc/start/quick-rbd.rst
@@ -5,9 +5,6 @@
To use this guide, you must have executed the procedures in the `5-minute
Quick Start`_ guide first. Execute this quick start on the client machine.
-.. important:: Mount the block device on the client machine,
- not the server machine.
-
#. Create a block device image. ::
rbd create foo --size 4096
@@ -29,7 +26,11 @@ Quick Start`_ guide first. Execute this quick start on the client machine.
sudo mkdir /mnt/myrbd
sudo mount /dev/rbd/rbd/foo /mnt/myrbd
+.. note:: Mount the block device on the client machine,
+ not the server machine. See `FAQ`_ for details.
+
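When you are done experimenting, a sketch of the cleanup steps, assuming the
image is still named ``foo`` and mapped at the path shown above: ::

    sudo umount /mnt/myrbd
    sudo rbd unmap /dev/rbd/rbd/foo
    rbd rm foo
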
See `block devices`_ for additional details.
.. _5-minute Quick Start: ../quick-start
-.. _block devices: ../../rbd/rbd
\ No newline at end of file
+.. _block devices: ../../rbd/rbd
+.. _FAQ: ../../faq#try-ceph