Storage: RBD


Storage pool type: rbd

Ceph is a distributed object store and file system designed to provide excellent performance, reliability and scalability. RADOS block devices implement feature-rich, block-level storage, and you get the following advantages:

  • thin provisioning

  • resizable volumes

  • distributed and redundant (striped over multiple OSDs)

  • full snapshot and clone capabilities

  • self healing

  • no single point of failure

  • scalable to the exabyte level

  • kernel and user space implementation available

Note: For smaller deployments, it is also possible to run Ceph services directly on your Proxmox VE nodes. Recent hardware has plenty of CPU power and RAM, so running storage services and VMs on the same node is possible.
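For example, a minimal hyper-converged setup can be bootstrapped with the pveceph tool on each node. The following is a sketch assuming PVE 6.x command syntax; the network, device, and pool names are placeholders:

pveceph install                        # install the Ceph packages on this node
pveceph init --network 10.10.10.0/24   # write the initial Ceph configuration
pveceph mon create                     # create a monitor on this node
pveceph osd create /dev/sdb            # turn an empty disk into an OSD
pveceph pool create rbd-pool           # create a pool to hold RBD images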


This backend supports the common storage properties nodes, disable, content, and the following rbd-specific properties:

monhost
List of monitor daemon IPs. Optional, only needed if Ceph is not running on the PVE cluster.

pool
Ceph pool name.

username
RBD user ID. Optional, only needed if Ceph is not running on the PVE cluster. Note that only the user ID should be used; the "client." type prefix must be left out.

krbd
Enforce access to rados block devices through the krbd kernel module. Optional.

Note: Containers will use krbd independently of the option value.
Configuration Example for an external Ceph cluster (/etc/pve/storage.cfg)
rbd: ceph-external
        monhost 10.1.1.20 10.1.1.21 10.1.1.22
        pool ceph-external
        content images
        username admin
Tip: You can use the rbd utility to do low-level management tasks.
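For example (the pool name is the one from the example above; the image name is hypothetical):

rbd ls -p ceph-external                 # list all images in the pool
rbd info ceph-external/vm-100-disk-0    # show size, features, and object layout
rbd du -p ceph-external                 # provisioned vs. actually used space per image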


If you use cephx authentication, you need to copy the keyring from your external Ceph cluster to a Proxmox VE host.

Create the directory /etc/pve/priv/ceph with

mkdir /etc/pve/priv/ceph

Then copy the keyring

scp <cephserver>:/etc/ceph/ceph.client.admin.keyring /etc/pve/priv/ceph/<STORAGE_ID>.keyring

The keyring must be named to match your <STORAGE_ID>. Copying the keyring generally requires root privileges.
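To check that the copied keyring actually grants access, you can point the rbd tool at it explicitly; the monitor address and storage ID below are the ones from the configuration example:

rbd -m 10.1.1.20 --id admin --keyring /etc/pve/priv/ceph/ceph-external.keyring -p ceph-external ls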

If Ceph is installed locally on the PVE cluster, this is done automatically by pveceph or in the GUI.
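Alternatively, an external storage entry like the one above can be created from the command line with pvesm instead of editing storage.cfg by hand:

pvesm add rbd ceph-external --monhost "10.1.1.20 10.1.1.21 10.1.1.22" \
    --pool ceph-external --content images --username admin
pvesm status    # the new storage should now be listed as active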

Storage Features

The rbd backend is a block-level storage and implements full snapshot and clone functionality.
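For RBD-backed guests, snapshots and clones can be driven with the standard guest tools; a minimal sketch, where the VMID 100 and all names are examples:

qm snapshot 100 before-upgrade        # take a thin RBD snapshot of VM 100
qm rollback 100 before-upgrade        # roll the VM back to that snapshot
qm clone 100 101 --name test-clone    # copy VM 100 to a new VM 101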

Table 1. Storage features for backend rbd
Content types    Image formats    Shared    Snapshots    Clones
images rootdir   raw              yes       yes          yes