Storage: RBD

Storage pool type: rbd

Ceph is a distributed object store and file system designed to provide excellent performance, reliability and scalability. RADOS block devices implement feature-rich block-level storage, and you get the following advantages:

  • thin provisioning

  • resizable volumes

  • distributed and redundant (striped over multiple OSDs)

  • full snapshot and clone capabilities

  • self healing

  • no single point of failure

  • scalable to the exabyte level

  • kernel and user space implementation available

Note For smaller deployments, it is also possible to run Ceph services directly on your Proxmox VE nodes. Recent hardware has plenty of CPU power and RAM, so running storage services and VMs on the same node is possible.

Configuration

This backend supports the common storage properties nodes, disable, content, and the following rbd-specific properties:

monhost

List of monitor daemon IPs. Optional, only needed if Ceph is not running on the Proxmox VE cluster.

pool

Ceph pool name.

username

RBD user ID. Optional, only needed if Ceph is not running on the Proxmox VE cluster. Note that only the user ID should be used. The "client." type prefix must be left out.

krbd

Enforce access to RADOS block devices through the krbd kernel module. Optional.

Note Containers will use krbd independent of the option value.
Configuration Example for an external Ceph cluster (/etc/pve/storage.cfg)
rbd: ceph-external
        monhost 10.1.1.20 10.1.1.21 10.1.1.22
        pool ceph-external
        content images
        username admin
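
If you want VM images to go through the kernel module as well, you can additionally set the krbd option described above. A minimal sketch of such an entry; the storage name here is a placeholder:

rbd: ceph-external-krbd
        monhost 10.1.1.20 10.1.1.21 10.1.1.22
        pool ceph-external
        content images
        username admin
        krbd 1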
Tip You can use the rbd utility to do low-level management tasks.
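
For example, to list the images in a pool and show the details of one of them (the pool and image names below are placeholders):

# rbd ls -p ceph-external
# rbd info ceph-external/vm-100-disk-0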

Authentication

Note If Ceph is installed locally on the Proxmox VE cluster, the following is done automatically when adding the storage.

If you use cephx authentication, which is enabled by default, you need to provide the keyring from the external Ceph cluster.

To configure the storage via the CLI, you first need to make the file containing the keyring available. One way is to copy the file from the external Ceph cluster directly to one of the Proxmox VE nodes. The following example will copy it to the /root directory of the node on which we run it:

# scp <external cephserver>:/etc/ceph/ceph.client.admin.keyring /root/rbd.keyring

Then use the pvesm CLI tool to configure the external RBD storage, passing the --keyring parameter, which needs to be a path to the keyring file that you copied. For example:

# pvesm add rbd <name> --monhost "10.1.1.20 10.1.1.21 10.1.1.22" --content images --keyring /root/rbd.keyring
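
Afterwards you can verify that Proxmox VE reaches the storage, for example by querying its status (replace <name> with the storage name chosen above):

# pvesm status --storage <name>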

When configuring an external RBD storage via the GUI, you can copy and paste the keyring into the appropriate field.

The keyring will be stored at

# /etc/pve/priv/ceph/<STORAGE_ID>.keyring
Tip Creating a keyring with only the needed capabilities is recommended when connecting to an external cluster. For further information on Ceph user management, see the Ceph docs. [Ceph User Management]
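
As a sketch, such a restricted user could be created on the external cluster with Ceph's rbd capability profiles; the user name pve and the pool name are placeholders here:

# ceph auth get-or-create client.pve mon 'profile rbd' osd 'profile rbd pool=ceph-external' -o /etc/ceph/ceph.client.pve.keyring

The resulting keyring file can then be copied over and referenced with --keyring as shown above, together with username pve.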

Ceph client configuration (optional)

Connecting to an external Ceph storage doesn’t always allow setting client-specific options in the config DB on the external cluster. You can add a ceph.conf beside the Ceph keyring to change the Ceph client configuration for the storage.

The ceph.conf needs to have the same name as the storage.

# /etc/pve/priv/ceph/<STORAGE_ID>.conf

See the RBD configuration reference [https://docs.ceph.com/en/quincy/rbd/rbd-config-ref/] for possible settings.
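
For instance, a hypothetical <STORAGE_ID>.conf enabling the RBD client cache could look like this (rbd_cache and rbd_cache_size are documented RBD client options; the values are illustrative only):

[client]
        rbd_cache = true
        rbd_cache_size = 67108864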

Note Do not change these settings lightly. Proxmox VE merges the <STORAGE_ID>.conf with the storage configuration.

Storage Features

The rbd backend is a block-level storage and implements full snapshot and clone functionality.

Table 1. Storage features for backend rbd

Content types    Image formats    Shared    Snapshots    Clones
images rootdir   raw              yes       yes          yes

See Also

Storage