===============
 Block Devices
===============

A block is a sequence of bytes (for example, a 512-byte block of data).
Block-based storage interfaces are the most common way to store data with
rotating media such as hard disks, CDs, floppy disks, and even traditional
9-track tape. The ubiquity of block device interfaces makes a virtual block
device an ideal candidate to interact with a mass data storage system like Ceph.

Ceph block devices are thin-provisioned, resizable, and store data striped over
multiple OSDs in a Ceph cluster. They leverage
:abbr:`RADOS (Reliable Autonomic Distributed Object Store)` capabilities such
as snapshotting, replication, and consistency. Ceph's RADOS Block Devices
(RBD) interact with OSDs using kernel modules or the ``librbd`` library.

.. ditaa::  +------------------------+ +------------------------+
            |     Kernel Module      | |        librbd          |
            +------------------------+-+------------------------+
            |                   RADOS Protocol                  |
            +------------------------+-+------------------------+
            |          OSDs          | |        Monitors        |
            +------------------------+ +------------------------+

.. note:: Kernel modules can use Linux page caching. For ``librbd``-based 
   applications, Ceph supports `RBD Caching`_.
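
As a concrete illustration of the ``librbd`` path, here is a minimal sketch
using the Python bindings (see the librbd entry in the table of contents
below). It assumes the bindings and ``/etc/ceph/ceph.conf`` are in place and
that a pool named ``rbd`` already exists; the image name ``myimage`` is only
an example.

.. code-block:: python

    import rados
    import rbd

    # Connect to the cluster (adjust conffile or rados_id for your setup).
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        # Assumes a pool named 'rbd' already exists.
        ioctx = cluster.open_ioctx('rbd')
        try:
            # Create a thin-provisioned 4 GiB image; no space is consumed
            # until data is written.
            rbd.RBD().create(ioctx, 'myimage', 4 * 1024**3)

            # Write a little data; librbd stripes image data over objects
            # distributed across the cluster's OSDs.
            image = rbd.Image(ioctx, 'myimage')
            try:
                image.write(b'hello from librbd', 0)
            finally:
                image.close()
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()

The same image can also be created and mapped from the command line with the
``rbd`` tool; see the Commands and Kernel Modules pages below.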

Ceph's block devices deliver high performance and massive scalability to
`kernel modules`_, to :abbr:`KVM (Kernel-based Virtual Machine)`-based
hypervisors such as `Qemu`_, and to cloud computing systems like `OpenStack`_
and `CloudStack`_ that rely on libvirt and QEMU to integrate with Ceph block
devices. You can use the same cluster to operate the `Ceph RADOS Gateway`_,
the `Ceph FS filesystem`_, and Ceph block devices simultaneously.

.. important:: To use RBD, you must have a running Ceph cluster.
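
For example, a minimal reachability check before using RBD, again a sketch
using the Python bindings and assuming the default ``/etc/ceph/ceph.conf``
location:

.. code-block:: python

    import rados

    # Connect with default settings and print basic cluster statistics.
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        print("fsid:", cluster.get_fsid())
        stats = cluster.get_cluster_stats()
        print("kB used:", stats['kb_used'], "of", stats['kb'])
    finally:
        cluster.shutdown()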

.. toctree::
	:maxdepth: 1

	Commands <rados-rbd-cmds>
	Kernel Modules <rbd-ko>
	Snapshots <rbd-snapshot>
	QEMU <qemu-rbd>
	libvirt <libvirt>
	Cache Settings <rbd-config-ref>
	OpenStack <rbd-openstack>
	CloudStack <rbd-cloudstack>
	Manpage rbd <../../man/8/rbd>
	Manpage rbd-fuse <../../man/8/rbd-fuse>
	Manpage ceph-rbdnamer <../../man/8/ceph-rbdnamer>
	librbd <librbdpy>
	

.. _RBD Caching: ../rbd-config-ref/
.. _kernel modules: ../rbd-ko/
.. _Qemu: ../qemu-rbd/
.. _OpenStack: ../rbd-openstack/
.. _CloudStack: ../rbd-cloudstack/
.. _Ceph RADOS Gateway: ../../radosgw/
.. _Ceph FS filesystem: ../../cephfs/