Storage: RBD
Storage pool type: rbd
Ceph is a distributed object store and file system designed to provide excellent performance, reliability and scalability. RADOS block devices implement feature-rich, block-level storage, and you get the following advantages:
- thin provisioning
- resizable volumes
- distributed and redundant (striped over multiple OSDs)
- full snapshot and clone capabilities
- self healing
- no single point of failure
- scalable to the exabyte level
- kernel and user space implementation available
For smaller deployments, it is also possible to run Ceph services directly on your Proxmox VE nodes. Recent hardware has plenty of CPU power and RAM, so running storage services and VMs on the same node is possible.
Configuration
This backend supports the common storage properties nodes, disable, content, and the following rbd-specific properties:
monhost
    List of monitor daemon IPs. Optional, only needed if Ceph is not running on the Proxmox VE cluster.
pool
    Ceph pool name.
username
    RBD user ID. Optional, only needed if Ceph is not running on the Proxmox VE cluster. Note that only the user ID should be used; the "client." type prefix must be left out.
krbd
    Enforce access to rados block devices through the krbd kernel module. Optional.
    Containers will use krbd independently of the option value.
Configuration Example (/etc/pve/storage.cfg)
rbd: ceph-external
        monhost 10.1.1.20 10.1.1.21 10.1.1.22
        pool ceph-external
        content images
        username admin
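If the storage should also hold container root disks (content type rootdir), or you want VM images to go through the kernel RBD client as well, the definition can be extended with the krbd option. The following is only a sketch; the storage name ceph-krbd is made up and it reuses the pool from the example above:
rbd: ceph-krbd
        monhost 10.1.1.20 10.1.1.21 10.1.1.22
        pool ceph-external
        content images,rootdir
        krbd 1
        username admin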
You can use the rbd utility to do low-level management tasks.
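For example, to list the images in a pool or inspect a single one, commands along the following lines can be used. This is only a sketch based on the example storage above; the monitor address, user, keyring path, and the image name vm-100-disk-0 all need to be adapted to your setup:
# rbd ls -p ceph-external -m 10.1.1.20 --id admin --keyring /etc/pve/priv/ceph/<STORAGE_ID>.keyring
# rbd info ceph-external/vm-100-disk-0 -m 10.1.1.20 --id admin --keyring /etc/pve/priv/ceph/<STORAGE_ID>.keyring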
Authentication
If Ceph is installed locally on the Proxmox VE cluster, the following is done automatically when adding the storage.
If you use cephx authentication, which is enabled by default, you need to provide the keyring from the external Ceph cluster.
To configure the storage via the CLI, you first need to make the file containing the keyring available. One way is to copy the file from the external Ceph cluster directly to one of the Proxmox VE nodes. The following example will copy it to the /root directory of the node on which we run it:
# scp <external cephserver>:/etc/ceph/ceph.client.admin.keyring /root/rbd.keyring
Then use the pvesm CLI tool to configure the external RBD storage. Use the --keyring parameter, which needs to be a path to the keyring file that you copied. For example:
# pvesm add rbd <name> --monhost "10.1.1.20 10.1.1.21 10.1.1.22" --content images --keyring /root/rbd.keyring
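Afterwards you can verify that the new storage is reachable and list its contents; <name> is the storage ID chosen above. These commands are shown as a sketch, and the output will vary with your setup:
# pvesm status --storage <name>
# pvesm list <name>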
When configuring an external RBD storage via the GUI, you can copy and paste the keyring into the appropriate field.
The keyring will be stored at
# /etc/pve/priv/ceph/<STORAGE_ID>.keyring
Creating a keyring with only the needed capabilities is recommended when connecting to an external cluster. For further information on Ceph user management, see the Ceph docs [Ceph User Management].
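For illustration, a user limited to RBD access on a single pool could be created on the external Ceph cluster roughly as follows, using the standard Ceph rbd capability profiles. The user name client.pve and the pool name are assumptions, not something Proxmox VE requires, and newer Ceph releases may additionally want an mgr 'profile rbd' capability:
# ceph auth get-or-create client.pve mon 'profile rbd' osd 'profile rbd pool=ceph-external' -o /etc/ceph/ceph.client.pve.keyring
The username property of the storage would then be pve (without the client. prefix), and the resulting keyring file is what you pass to --keyring.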
Ceph client configuration (optional)
Connecting to an external Ceph storage doesn’t always allow setting client-specific options in the config DB on the external cluster. You can add a ceph.conf beside the Ceph keyring to change the Ceph client configuration for the storage.
The ceph.conf needs to have the same name as the storage.
# /etc/pve/priv/ceph/<STORAGE_ID>.conf
See the RBD configuration reference [https://docs.ceph.com/en/quincy/rbd/rbd-config-ref/] for possible settings.
Do not change these settings lightly. Proxmox VE merges the <STORAGE_ID>.conf with the storage configuration.
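For example, a <STORAGE_ID>.conf that enables the RBD client cache could look roughly like this; whether these values make sense depends entirely on your workload, so treat it as a sketch rather than a recommendation:
[client]
rbd cache = true
rbd cache size = 67108864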
Storage Features
The rbd backend is a block-level storage and implements full snapshot and clone functionality.
Table 1. Storage features for backend rbd
Content types  | Image formats | Shared | Snapshots | Clones
images rootdir | raw           | yes    | yes       | yes
See Also
Storage