Storage: CephFS
Storage pool type: cephfs
CephFS implements a POSIX-compliant filesystem using a Ceph storage cluster to store its data. As CephFS builds upon Ceph, it shares most of Ceph's properties, including redundancy, scalability, self-healing, and high availability.
Proxmox VE can manage Ceph setups, which makes configuring a CephFS storage easier. As modern hardware offers a lot of processing power and RAM, running storage services and VMs on the same node is possible without a significant performance impact.
To use the CephFS storage plugin, you must replace the stock Debian Ceph client by adding our Ceph repository. Once added, run apt update, followed by apt dist-upgrade, to get the newest packages.
Please ensure that there are no other Ceph repositories configured. Otherwise the installation will fail, or there will be mixed package versions on the node, leading to unexpected behavior.
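For illustration, assuming Ceph Quincy on Debian Bookworm (both release names are assumptions; match the Ceph release and Debian codename to your installation), the repository entry in /etc/apt/sources.list.d/ceph.list could look like this:

deb http://download.proxmox.com/debian/ceph-quincy bookworm no-subscription

Followed by the upgrade itself:

# apt update
# apt dist-upgrade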
Configuration
This backend supports the common storage properties nodes, disable, content, as well as the following cephfs-specific properties:
- fs-name: Name of the Ceph FS.
- monhost: List of monitor daemon addresses. Optional, only needed if Ceph is not running on the Proxmox VE cluster.
- path: The local mount point. Optional, defaults to /mnt/pve/<STORAGE_ID>/.
- username: Ceph user id. Optional, only needed if Ceph is not running on the Proxmox VE cluster, where it defaults to admin.
- subdir: CephFS subdirectory to mount. Optional, defaults to /.
- fuse: Access CephFS through FUSE, instead of the kernel client. Optional, defaults to 0.
Configuration example for an external Ceph cluster (/etc/pve/storage.cfg)

cephfs: cephfs-external
        monhost 10.1.1.20 10.1.1.21 10.1.1.22
        path /mnt/pve/cephfs-external
        content backup
        username admin
        fs-name cephfs
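As a further sketch (the storage name cephfs-fuse and the subdirectory /pve are hypothetical), the optional subdir and fuse properties from the list above could be combined like this:

cephfs: cephfs-fuse
        path /mnt/pve/cephfs-fuse
        content vztmpl,iso
        subdir /pve
        fuse 1
        fs-name cephfs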
Don’t forget to set up the client’s secret key file, if cephx was not disabled.
Authentication
If Ceph is installed locally on the Proxmox VE cluster, the following is done automatically when adding the storage.
If you use cephx authentication, which is enabled by default, you need to provide the secret from the external Ceph cluster.
To configure the storage via the CLI, you first need to make the file containing the secret available. One way is to copy the file from the external Ceph cluster directly to one of the Proxmox VE nodes. The following example will copy it to the /root directory of the node on which we run it:
# scp <external cephserver>:/etc/ceph/cephfs.secret /root/cephfs.secret
Then use the pvesm CLI tool to configure the external CephFS storage, using the --keyring parameter, which needs to be a path to the secret file that you copied. For example:
# pvesm add cephfs <name> --monhost "10.1.1.20 10.1.1.21 10.1.1.22" --content backup --keyring /root/cephfs.secret
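Once the storage has been added, it should appear in the output of pvesm status, which lists all configured storages along with their type, status, and usage:

# pvesm status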
When configuring an external CephFS storage via the GUI, you can copy and paste the secret into the appropriate field.
The secret is only the key itself, as opposed to the keyring used by the rbd backend, which also contains a [client.userid] section.
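To illustrate the difference (the key below is a placeholder, not a real key), a CephFS secret file holds the bare key, whereas an rbd keyring wraps it in a section:

cephfs.secret:
<base64-key>

keyring for the rbd backend:
[client.userid]
        key = <base64-key>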
The secret will be stored at
# /etc/pve/priv/ceph/<STORAGE_ID>.secret
A secret can be received from the Ceph cluster (as Ceph admin) by issuing the command below, where userid is the client ID that has been configured to access the cluster. For further information on Ceph user management, see the Ceph docs [cephusermgmt].
# ceph auth get-key client.userid > cephfs.secret
Storage Features
The cephfs backend is a POSIX-compliant filesystem on top of a Ceph cluster.
Table 1. Storage features for backend cephfs

| Content types | Image formats | Shared | Snapshots | Clones |
|---|---|---|---|---|
| vztmpl iso backup snippets | none | yes | yes[1] | no |
[1] While no known bugs exist, snapshots are not yet guaranteed to be stable, as they lack sufficient testing.
See Also
Storage