Storage: CephFS

Storage pool type: cephfs

CephFS implements a POSIX-compliant filesystem, using a Ceph storage cluster to store its data. As CephFS builds upon Ceph, it shares most of its properties. This includes redundancy, scalability, self-healing, and high availability.

Tip Proxmox VE can manage Ceph setups, which makes configuring a CephFS storage easier. As modern hardware offers plenty of processing power and RAM, running storage services and VMs on the same node is possible without a significant performance impact.

To use the CephFS storage plugin, you must replace the stock Debian Ceph client by adding our Ceph repository. Once added, run apt update followed by apt dist-upgrade to get the newest packages.
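For example, a minimal sketch assuming Proxmox VE on Debian Bookworm with the Ceph Quincy no-subscription repository (adjust the Ceph and Debian release names to your setup):

# echo "deb http://download.proxmox.com/debian/ceph-quincy bookworm no-subscription" > /etc/apt/sources.list.d/ceph.list
# apt update
# apt dist-upgrade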

Warning Please ensure that there are no other Ceph repositories configured. Otherwise the installation will fail or there will be mixed package versions on the node, leading to unexpected behavior.

Configuration

This backend supports the common storage properties nodes, disable, content, as well as the following cephfs-specific properties:

fs-name

Name of the Ceph FS.

monhost

List of monitor daemon addresses. Optional, only needed if Ceph is not running on the Proxmox VE cluster.

path

The local mount point. Optional, defaults to /mnt/pve/<STORAGE_ID>/.

username

Ceph user ID. Optional, only needed if Ceph is not running on the Proxmox VE cluster; it defaults to admin.

subdir

CephFS subdirectory to mount. Optional, defaults to /.

fuse

Access CephFS through FUSE, instead of the kernel client. Optional, defaults to 0.
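These properties can also be changed on an existing storage with the pvesm set command. For example, to switch a hypothetical storage named cephfs-external over to the FUSE client:

# pvesm set cephfs-external --fuse 1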

Configuration example for an external Ceph cluster (/etc/pve/storage.cfg)
cephfs: cephfs-external
        monhost 10.1.1.20 10.1.1.21 10.1.1.22
        path /mnt/pve/cephfs-external
        content backup
        username admin
        fs-name cephfs
Note Don’t forget to set up the client’s secret key file if cephx was not disabled.
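For comparison, a minimal sketch of a storage.cfg entry for a CephFS on a Ceph cluster managed by Proxmox VE itself (hypothetical storage ID cephfs-local; monhost, username, and the secret can be omitted, as they are determined automatically):

cephfs: cephfs-local
        content backup,vztmpl,iso
        fs-name cephfs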

Authentication

Note If Ceph is installed locally on the Proxmox VE cluster, the following is done automatically when adding the storage.

If you use cephx authentication, which is enabled by default, you need to provide the secret from the external Ceph cluster.

To configure the storage via the CLI, you first need to make the file containing the secret available. One way is to copy the file from the external Ceph cluster directly to one of the Proxmox VE nodes. The following example will copy it to the /root directory of the node on which we run it:

# scp <external cephserver>:/etc/ceph/cephfs.secret /root/cephfs.secret

Then use the pvesm CLI tool to configure the external CephFS storage, using the --keyring parameter, which needs to be a path to the secret file that you copied. For example:

# pvesm add cephfs <name> --monhost "10.1.1.20 10.1.1.21 10.1.1.22" --content backup --keyring /root/cephfs.secret

When configuring an external CephFS storage via the GUI, you can copy and paste the secret into the appropriate field.

The secret file contains only the key itself, as opposed to the rbd backend, whose keyring also contains a [client.userid] section.
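For illustration, with a hypothetical placeholder key, a cephfs secret file consists of just a single line:

AQDExampleKeyForIllustrationOnly0000==

whereas an rbd keyring file wraps the same key in a section:

[client.admin]
        key = AQDExampleKeyForIllustrationOnly0000==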

The secret will be stored at:

# /etc/pve/priv/ceph/<STORAGE_ID>.secret

A secret can be received from the Ceph cluster (as Ceph admin) by issuing the command below, where userid is the client ID that has been configured to access the cluster. For further information on Ceph user management, see the Ceph docs (http://docs.ceph.com/docs/luminous/rados/operations/user-management/).

# ceph auth get-key client.userid > cephfs.secret

Storage Features

The cephfs backend is a POSIX-compliant filesystem on top of a Ceph cluster.

Table 1. Storage features for backend cephfs

Content types                Image formats   Shared   Snapshots   Clones
vztmpl iso backup snippets   none            yes      yes[1]      no

[1] While no known bugs exist, snapshots are not yet guaranteed to be stable, as they lack sufficient testing.
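Once configured, the storage can be used like any other backup target. For example, assuming the cephfs-external storage defined above and a guest with ID 100:

# vzdump 100 --storage cephfs-external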

See Also

Storage