Deploy Hyper-Converged Ceph Cluster
screenshot/gui-ceph-status-dashboard.png

Proxmox VE unifies your compute and storage systems, that is, you can use the same physical nodes within a cluster for both computing (processing VMs and containers) and replicated storage. The traditional silos of compute and storage resources can be wrapped up into a single hyper-converged appliance. Separate storage networks (SANs) and connections via network attached storage (NAS) disappear. With the integration of Ceph, an open source software-defined storage platform, Proxmox VE has the ability to run and manage Ceph storage directly on the hypervisor nodes.

Ceph is a distributed object store and file system designed to provide excellent performance, reliability and scalability.

Some advantages of Ceph on Proxmox VE are:
  • Easy setup and management via CLI and GUI
  • Thin provisioning
  • Snapshot support
  • Self healing
  • Scalable to the exabyte level
  • Setup pools with different performance and redundancy characteristics
  • Data is replicated, making it fault tolerant
  • Runs on commodity hardware
  • No need for hardware RAID controllers
  • Open source

For small to medium-sized deployments, it is possible to install a Ceph server for RADOS Block Devices (RBD) directly on your Proxmox VE cluster nodes (see Ceph RADOS Block Devices (RBD)). Recent hardware has a lot of CPU power and RAM, so running storage services and VMs on the same node is possible.

To simplify management, we provide pveceph - a tool for installing and managing Ceph services on Proxmox VE nodes.

Ceph consists of multiple Daemons, for use as an RBD storage:
  • Ceph Monitor (ceph-mon)
  • Ceph Manager (ceph-mgr)
  • Ceph OSD (ceph-osd; Object Storage Daemon)

Tip: We highly recommend that you get familiar with Ceph [Ceph intro https://docs.ceph.com/en/octopus/start/intro/], its architecture [Ceph architecture https://docs.ceph.com/en/octopus/architecture/] and vocabulary [Ceph glossary https://docs.ceph.com/en/octopus/glossary].
 
Precondition

To build a hyper-converged Proxmox + Ceph Cluster, you must use at least three (preferably identical) servers for the setup.

Check also the recommendations from Ceph's website.
CPU

A high CPU core frequency reduces latency and should be preferred. As a simple rule of thumb, you should assign a CPU core (or thread) to each Ceph service, to provide enough resources for stable and durable Ceph performance.

Memory

Especially in a hyper-converged setup, the memory consumption needs to be carefully monitored. In addition to the predicted memory usage of virtual machines and containers, you must also account for having enough memory available for Ceph to provide excellent and stable performance.

As a rule of thumb, for roughly 1 TiB of data, 1 GiB of memory will be used by an OSD. This is especially relevant during recovery, re-balancing or backfilling.

The daemon itself will use additional memory. The BlueStore backend of the daemon requires 3-5 GiB of memory by default (adjustable). In contrast, the legacy Filestore backend uses the OS page cache, and its memory consumption is generally related to the number of PGs of an OSD daemon.
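The OSD memory target can be lowered or raised through Ceph's configuration database. A minimal sketch, assuming the osd_memory_target option of recent BlueStore releases (the 6 GiB value is only an example):

# cap each OSD's memory target at 6 GiB (value given in bytes)
ceph config set osd osd_memory_target 6442450944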
Network

We recommend a network bandwidth of at least 10 Gbps, used exclusively for Ceph. A meshed network setup [Full Mesh Network for Ceph https://pve.proxmox.com/wiki/Full_Mesh_Network_for_Ceph_Server] is also an option if there are no 10 Gbps switches available.

The volume of traffic, especially during recovery, will interfere with other services on the same network and may even break the Proxmox VE cluster stack.

Furthermore, you should estimate your bandwidth needs. While one HDD might not saturate a 1 Gbps link, multiple HDD OSDs per node can, and modern NVMe SSDs will even saturate 10 Gbps of bandwidth quickly. Deploying a network capable of even more bandwidth will ensure that this isn't your bottleneck and won't be anytime soon. 25, 40 or even 100 Gbps are possible.
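To check that the Ceph network actually delivers the expected throughput, a quick test with a tool like iperf3 can help (a sketch; the address is a placeholder for another cluster node):

# on the first node, start a listener
iperf3 -s
# on a second node, measure the throughput towards it
iperf3 -c 10.10.10.1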
Disks

When planning the size of your Ceph cluster, it is important to take the recovery time into consideration. Especially with small clusters, recovery can take a long time. It is recommended that you use SSDs instead of HDDs in small setups, to reduce recovery time and minimize the likelihood of a subsequent failure event during recovery.

In general, SSDs will provide more IOPS than spinning disks. With this in mind, in addition to the higher cost, it may make sense to implement a class-based separation of pools. Another way to speed up OSDs is to use a faster disk as a journal or DB/Write-Ahead-Log device, see creating Ceph OSDs. If a faster disk is used for multiple OSDs, a proper balance between OSD and WAL/DB (or journal) disk must be selected, otherwise the faster disk becomes the bottleneck for all linked OSDs.

Aside from the disk type, Ceph performs best with an evenly sized and evenly distributed number of disks per node. For example, 4 x 500 GB disks within each node are better than a mixed setup with a single 1 TB and three 250 GB disks.

You also need to balance OSD count and single OSD capacity. More capacity allows you to increase storage density, but it also means that a single OSD failure forces Ceph to recover more data at once.

Avoid RAID

As Ceph handles data object redundancy and multiple parallel writes to disks (OSDs) on its own, using a RAID controller normally doesn't improve performance or availability. On the contrary, Ceph is designed to handle whole disks on its own, without any abstraction in between. RAID controllers are not designed for the Ceph workload and may complicate things and sometimes even reduce performance, as their write and caching algorithms may interfere with the ones from Ceph.

Warning: Avoid RAID controllers. Use a host bus adapter (HBA) instead.

Note: The above recommendations should be seen as rough guidance for choosing hardware. It is still essential to adapt them to your specific needs. You should test your setup and monitor health and performance continuously.
Initial Ceph Installation & Configuration

Using the Web-based Wizard

screenshot/gui-node-ceph-install.png

With Proxmox VE you have the benefit of an easy-to-use installation wizard for Ceph. Click on one of your cluster nodes and navigate to the Ceph section in the menu tree. If Ceph is not already installed, you will see a prompt offering to do so.

The wizard is divided into multiple sections, each of which needs to finish successfully, in order to use Ceph.

First you need to choose which Ceph version you want to install. Prefer the one from your other nodes, or the newest if this is the first node on which you are installing Ceph.

After starting the installation, the wizard will download and install all the required packages from Proxmox VE's Ceph repository.

screenshot/gui-node-ceph-install-wizard-step0.png

After finishing the installation step, you will need to create a configuration. This step is only needed once per cluster, as this configuration is distributed automatically to all remaining cluster members through Proxmox VE's clustered configuration file system (pmxcfs).

The configuration step includes the following settings:
  • Public Network: You can set up a dedicated network for Ceph. This setting is required. Separating your Ceph traffic is highly recommended. Otherwise, other latency-dependent services, for example cluster communication, may decrease Ceph's performance.
  • Cluster Network: As an optional step, you can go even further and separate the OSD replication & heartbeat traffic as well. This will relieve the public network and could lead to significant performance improvements, especially in large clusters.

screenshot/gui-node-ceph-install-wizard-step2.png

You have two more options which are considered advanced and should therefore only be changed if you know what you are doing.
  • Number of replicas: Defines how often an object is replicated.
  • Minimum replicas: Defines the minimum number of required replicas for I/O to be marked as complete.

Additionally, you need to choose your first monitor node. This step is required.

That's it. You should now see a success page as the last step, with further instructions on how to proceed. Your system is now ready to start using Ceph. To get started, you will need to create some additional monitors, OSDs and at least one pool.

The rest of this chapter will guide you through getting the most out of your Proxmox VE based Ceph setup. This includes the aforementioned tips and more, such as CephFS, which is a helpful addition to your new Ceph cluster.

CLI Installation of Ceph Packages

As an alternative to the recommended Proxmox VE Ceph installation wizard available in the web-interface, you can use the following CLI command on each node:
 
pveceph install

This sets up an apt package repository in /etc/apt/sources.list.d/ceph.list and installs the required software.
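The wizard lets you pick the Ceph release to install; on the CLI the release can be passed explicitly as well (a sketch, assuming your pveceph build supports the version parameter and still offers Octopus):

# install a specific Ceph release instead of the default one
pveceph install --version octopus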
Initial Ceph configuration via CLI

Use the Proxmox VE Ceph installation wizard (recommended) or run the following command on one node:

pveceph init --network 10.10.10.0/24

This creates an initial configuration at /etc/pve/ceph.conf with a dedicated network for Ceph. This file is automatically distributed to all Proxmox VE nodes, using pmxcfs. The command also creates a symbolic link at /etc/ceph/ceph.conf, which points to that file. Thus, you can simply run Ceph commands without the need to specify a configuration file.
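If you already know that you want a separate network for OSD replication, it can be set at this point too (a sketch; the second subnet is a placeholder and assumes your pveceph init accepts the cluster-network parameter):

# public network for clients and monitors, dedicated network for OSD replication
pveceph init --network 10.10.10.0/24 --cluster-network 10.10.20.0/24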
Ceph Monitor

screenshot/gui-ceph-monitor.png

The Ceph Monitor (MON) [Ceph Monitor https://docs.ceph.com/en/octopus/start/intro/] maintains a master copy of the cluster map. For high availability, you need at least 3 monitors. One monitor will already be installed if you used the installation wizard. You won't need more than 3 monitors, as long as your cluster is small to medium-sized. Only really large clusters will require more than this.

Create Monitors

On each node where you want to place a monitor (three monitors are recommended), create one by using the Ceph → Monitor tab in the GUI or run:

pveceph mon create
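Afterwards, you can verify with standard Ceph commands that the new monitor has joined the quorum:

# short summary of monitors and the current quorum
ceph mon stat
# detailed quorum information
ceph quorum_status --format json-pretty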
Destroy Monitors

To remove a Ceph Monitor via the GUI, first select a node in the tree view and go to the Ceph → Monitor panel. Select the MON and click the Destroy button.

To remove a Ceph Monitor via the CLI, first connect to the node on which the MON is running. Then execute the following command:

pveceph mon destroy

Note: At least three Monitors are needed for quorum.
Ceph Manager

The Manager daemon runs alongside the monitors. It provides an interface to monitor the cluster. Since the release of Ceph Luminous, at least one ceph-mgr [Ceph Manager https://docs.ceph.com/en/octopus/mgr/] daemon is required.

Create Manager

Multiple Managers can be installed, but only one Manager is active at any given time.

pveceph mgr create

Note: It is recommended to install the Ceph Manager on the monitor nodes. For high availability, install more than one manager.
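Which manager is currently active, and which ones are on standby, can be checked with a standard Ceph command:

# show the currently active manager
ceph mgr stat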
Destroy Manager

To remove a Ceph Manager via the GUI, first select a node in the tree view and go to the Ceph → Monitor panel. Select the Manager and click the Destroy button.

To remove a Ceph Manager via the CLI, first connect to the node on which the Manager is running. Then execute the following command:

pveceph mgr destroy

Note: While a manager is not a hard dependency, it is crucial for a Ceph cluster, as it handles important features like PG-autoscaling, device health monitoring, telemetry and more.

Ceph OSDs

screenshot/gui-ceph-osd-status.png

Ceph Object Storage Daemons store objects for Ceph over the network. It is recommended to use one OSD per physical disk.

Create OSDs

You can create an OSD either via the Proxmox VE web-interface or via the CLI using pveceph. For example:

pveceph osd create /dev/sd[X]

Tip: We recommend a Ceph cluster with at least three nodes and at least 12 OSDs, evenly distributed among the nodes.

If the disk was in use before (for example, for ZFS or as an OSD), you first need to zap all traces of that usage. To remove the partition table, boot sector and any other OSD leftover, you can use the following command:

ceph-volume lvm zap /dev/sd[X] --destroy

Warning: The above command will destroy all data on the disk!
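Once the OSDs are created on all nodes, it is worth verifying their distribution and utilization with a standard Ceph command:

# list OSDs grouped by host, with weight, usage and status
ceph osd df tree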
 
Ceph Bluestore

Starting with the Ceph Kraken release, a new Ceph OSD storage type was introduced called Bluestore [Ceph Bluestore https://ceph.com/community/new-luminous-bluestore/]. This is the default when creating OSDs since Ceph Luminous.

pveceph osd create /dev/sd[X]

Block.db and block.wal

If you want to use a separate DB/WAL device for your OSDs, you can specify it through the -db_dev and -wal_dev options. The WAL is placed with the DB, if not specified separately.

pveceph osd create /dev/sd[X] -db_dev /dev/sd[Y] -wal_dev /dev/sd[Z]

You can directly choose their sizes with the -db_size and -wal_size parameters respectively. If they are not given, the following values (in order) will be used:
  • bluestore_block_{db,wal}_size from Ceph configuration…
    • … database, section osd
    • … database, section global
    • … file, section osd
    • … file, section global
  • 10% (DB)/1% (WAL) of OSD size

Note: The DB stores BlueStore's internal metadata, and the WAL is BlueStore's internal journal or write-ahead log. It is recommended to use a fast SSD or NVRAM for better performance.
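The size parameters could be combined with the device options like this (a sketch; whether the value is interpreted as GiB depends on your pveceph version, so treat the unit as an assumption):

# put block.db on a faster device and choose its size explicitly
pveceph osd create /dev/sd[X] -db_dev /dev/sd[Y] -db_size 64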
 
Ceph Filestore

Before Ceph Luminous, Filestore was used as the default storage type for Ceph OSDs. Starting with Ceph Nautilus, Proxmox VE does not support creating such OSDs with pveceph anymore. If you still want to create filestore OSDs, use ceph-volume directly.

ceph-volume lvm create --filestore --data /dev/sd[X] --journal /dev/sd[Y]

Destroy OSDs

To remove an OSD via the GUI, first select a Proxmox VE node in the tree view and go to the Ceph → OSD panel. Then select the OSD to destroy and click the OUT button. Once the OSD status has changed from in to out, click the STOP button. Finally, after the status has changed from up to down, select Destroy from the More drop-down menu.

To remove an OSD via the CLI, run the following commands.

ceph osd out <ID>
systemctl stop ceph-osd@<ID>.service

Note: The first command instructs Ceph not to include the OSD in the data distribution. The second command stops the OSD service. Until this time, no data is lost.

The following command destroys the OSD. Specify the -cleanup option to additionally destroy the partition table.

pveceph osd destroy <ID>

Warning: The above command will destroy all data on the disk!
Ceph Pools

screenshot/gui-ceph-pools.png

A pool is a logical group for storing objects. It holds a collection of objects, known as Placement Groups (PG, pg_num).

Create and Edit Pools

You can create and edit pools from the command line or the web-interface of any Proxmox VE host under Ceph → Pools.

When no options are given, we set a default of 128 PGs, a size of 3 replicas and a min_size of 2 replicas, to ensure no data loss occurs if any OSD fails.

Warning: Do not set a min_size of 1. A replicated pool with min_size of 1 allows I/O on an object when it has only 1 replica, which could lead to data loss, incomplete PGs or unfound objects.

It is advised that you either enable the PG-Autoscaler or calculate the PG number based on your setup. You can find the formula and the PG calculator [PG calculator https://web.archive.org/web/20210301111112/http://ceph.com/pgcalc/] online. From Ceph Nautilus onward, you can change the number of PGs [Placement Groups https://docs.ceph.com/en/octopus/rados/operations/placement-groups/] after the setup.

The PG autoscaler [Automated Scaling https://docs.ceph.com/en/octopus/rados/operations/placement-groups/#automated-scaling] can automatically scale the PG count for a pool in the background. Setting the Target Size or Target Ratio advanced parameters helps the PG-Autoscaler to make better decisions.

Example for creating a pool over the CLI:

pveceph pool create <pool-name> --add_storages

Tip: If you would also like to automatically define a storage for your pool, keep the 'Add as Storage' checkbox checked in the web-interface, or use the command line option --add_storages at pool creation.
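More of the pool options described below can be set directly at creation time (a sketch, assuming your pveceph version exposes these parameters):

# create a pool with explicit redundancy and autoscaler settings
pveceph pool create <pool-name> --size 3 --min_size 2 --pg_autoscale_mode on --add_storages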
Pool Options

The following options are available on pool creation, and partially also when editing a pool.

Name: The name of the pool. This must be unique and can't be changed afterwards.
Size: The number of replicas per object. Ceph always tries to have this many copies of an object. Default: 3.
PG Autoscale Mode: The automatic PG scaling mode [autoscaler] of the pool. If set to warn, it produces a warning message when a pool has a non-optimal PG count. Default: warn.
Add as Storage: Configure a VM or container storage using the new pool. Default: true (only visible on creation).

Advanced Options

Min. Size: The minimum number of replicas per object. Ceph will reject I/O on the pool if a PG has less than this many replicas. Default: 2.
Crush Rule: The rule to use for mapping object placement in the cluster. These rules define how data is placed within the cluster. See Ceph CRUSH & device classes for information on device-based rules.
# of PGs: The number of placement groups [placement_groups] that the pool should have at the beginning. Default: 128.
Target Ratio: The ratio of data that is expected in the pool. The PG autoscaler uses the ratio relative to other ratio sets. It takes precedence over the target size if both are set.
Target Size: The estimated amount of data expected in the pool. The PG autoscaler uses this size to estimate the optimal PG count.
Min. # of PGs: The minimum number of placement groups. This setting is used to fine-tune the lower bound of the PG count for that pool. The PG autoscaler will not merge PGs below this threshold.

Further information on Ceph pool handling can be found in the Ceph pool operation [Ceph pool operation https://docs.ceph.com/en/octopus/rados/operations/pools/] manual.
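Target Size and Target Ratio can also be set on an existing pool with Ceph's own tooling (a standard Ceph command; pool name and ratio are placeholders):

# tell the autoscaler that this pool is expected to hold about half of the data
ceph osd pool set <pool-name> target_size_ratio 0.5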
Erasure Coded Pools

Erasure coding (EC) is a form of 'forward error correction' code that allows recovery from a certain amount of data loss. Erasure coded pools can offer more usable space compared to replicated pools, but they do that at the price of performance.

For comparison: in classic, replicated pools, multiple replicas of the data are stored (size), while in an erasure coded pool, data is split into k data chunks with additional m coding (checking) chunks. Those coding chunks can be used to recreate data, should data chunks be missing.

The number of coding chunks, m, defines how many OSDs can be lost without losing any data. The total amount of objects stored is k + m.

Creating EC Pools

You can create erasure coded (EC) pools using the pveceph CLI tooling. Since EC pools work differently than replicated pools, the planning of the setup and the pool parameters used need to be adapted.

The default min_size of an EC pool depends on the m parameter. If m = 1, the min_size of the EC pool will be k. The min_size will be k + 1 if m > 1. The Ceph documentation recommends a conservative min_size of k + 2 [Ceph Erasure Coded Pool Recovery https://docs.ceph.com/en/octopus/rados/operations/erasure-code/#erasure-coded-pool-recovery].

If there are fewer than min_size OSDs available, any IO to the pool will be blocked until there are enough OSDs available again.

Note: When planning an erasure coded pool, keep an eye on the min_size, as it defines how many OSDs need to be available. Otherwise, IO will be blocked.

For example, an EC pool with k = 2 and m = 1 will have size = 3, min_size = 2 and will stay operational if one OSD fails. If the pool is configured with k = 2, m = 2, it will have a size = 4 and min_size = 3 and stay operational if one OSD is lost.

To create a new EC pool, run the following command:

pveceph pool create <pool-name> --erasure-coding k=2,m=1

Optional parameters are failure-domain and device-class. If you need to change any EC profile settings used by the pool, you will have to create a new pool with a new profile.

This will create a new EC pool plus the needed replicated pool to store the RBD omap and other metadata. In the end, there will be a <pool name>-data and <pool name>-metadata pool. The default behavior is to create a matching storage configuration as well. If that behavior is not wanted, you can disable it by providing the --add_storages 0 parameter. When configuring the storage manually, keep in mind that the data-pool parameter needs to be set; only then will the EC pool be used to store the data objects.

Note: The optional parameters --size, --min_size and --crush_rule will be used for the replicated metadata pool, but not for the erasure coded data pool. If you need to change the min_size on the data pool, you can do it later. The size and crush_rule parameters cannot be changed on erasure coded pools.

If there is a need to further customize the EC profile, you can do so by creating it with the Ceph tools directly [Ceph Erasure Code Profile https://docs.ceph.com/en/octopus/rados/operations/erasure-code/#erasure-code-profiles], and specifying the profile to use with the profile parameter. For example:

pveceph pool create <pool-name> --erasure-coding profile=<profile-name>
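Such a profile could be created beforehand with Ceph's own tooling (a standard Ceph command; the profile name and the k/m values are examples):

# define an EC profile with 4 data and 2 coding chunks and host failure domain
ceph osd erasure-code-profile set my-ec-profile k=4 m=2 crush-failure-domain=host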
Adding EC Pools as Storage

You can also add an already existing EC pool as storage to Proxmox VE. It works in the same way as adding any RBD pool, but requires passing the extra data-pool option.

pvesm add rbd <storage-name> --pool <replicated-pool> --data-pool <ec-pool>

Tip: Do not forget to add the keyring and monhost options for any external Ceph clusters, not managed by the local Proxmox VE cluster.

Destroy Pools

To destroy a pool via the GUI, select a node in the tree view and go to the Ceph → Pools panel. Select the pool to destroy and click the Destroy button. To confirm the destruction of the pool, you need to enter the pool name.

Run the following command to destroy a pool. Specify the -remove_storages option to also remove the associated storage.

pveceph pool destroy <name>

Note: Pool deletion runs in the background and can take some time. You will notice the data usage in the cluster decreasing throughout this process.

PG Autoscaler

The PG autoscaler allows the cluster to consider the amount of (expected) data stored in each pool and to choose the appropriate pg_num values automatically. It has been available since Ceph Nautilus.

You may need to activate the PG autoscaler module before adjustments can take effect.

ceph mgr module enable pg_autoscaler

The autoscaler is configured on a per pool basis and has the following modes:

warn: A health warning is issued if the suggested pg_num value differs too much from the current value.
on: The pg_num is adjusted automatically, with no need for any manual interaction.
off: No automatic pg_num adjustments are made, and no warning will be issued if the PG count is not optimal.

The scaling factor can be adjusted to facilitate future data storage with the target_size, target_size_ratio and pg_num_min options.

Note: By default, the autoscaler considers tuning the PG count of a pool if it is off by a factor of 3. This will lead to a considerable shift in data placement and might introduce a high load on the cluster.

You can find a more in-depth introduction to the PG autoscaler on Ceph's Blog - New in Nautilus: PG merging and autotuning.
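The autoscaler's view of each pool, and the per-pool mode, can be inspected and changed with standard Ceph commands:

# show the autoscaler status for all pools, including target size/ratio
ceph osd pool autoscale-status
# switch a single pool to fully automatic scaling
ceph osd pool set <pool-name> pg_autoscale_mode on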
 
Ceph CRUSH & device classes

The [CRUSH https://ceph.com/wp-content/uploads/2016/08/weil-crush-sc06.pdf] (Controlled Replication Under Scalable Hashing) algorithm is at the foundation of Ceph.

CRUSH calculates where to store and retrieve data from. This has the advantage that no central indexing service is needed. CRUSH works using a map of OSDs, buckets (device locations) and rulesets (data replication) for pools.

Note: Further information can be found in the Ceph documentation, under the section CRUSH map [CRUSH map https://docs.ceph.com/en/octopus/rados/operations/crush-map/].

This map can be altered to reflect different replication hierarchies. The object replicas can be separated (e.g., failure domains), while maintaining the desired distribution.

A common configuration is to use different classes of disks for different Ceph pools. For this reason, Ceph introduced device classes with Luminous, to accommodate the need for easy ruleset generation.

The device classes can be seen in the ceph osd tree output. For example (excerpt):

 -7      2.56848    host sumi3
 14  nvme 0.72769        osd.14

To instruct a pool to only distribute objects on a specific device class, you first need to create a ruleset for the device class:

ceph osd crush rule create-replicated <rule-name> <root> <failure-domain> <class>

<rule-name>: name of the rule, to connect with a pool (seen in GUI & CLI)
<root>: which crush root it should belong to (default: default)
<failure-domain>: at which failure-domain the objects should be distributed (usually host)
<class>: what type of OSD backing store to use (e.g., nvme, ssd, hdd)

Once the rule is in the CRUSH map, you can tell a pool to use the ruleset.

ceph osd pool set <pool-name> crush_rule <rule-name>

Tip: If the pool already contains objects, these must be moved accordingly. Depending on your setup, this may introduce a big performance impact on your cluster. As an alternative, you can create a new pool and move disks separately.
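As a usage sketch with hypothetical names, an SSD-only rule and its assignment to a pool could look like this:

# create a replicated rule that is restricted to OSDs of class ssd
ceph osd crush rule create-replicated ssd-only default host ssd
# let an existing pool place its data according to that rule
ceph osd pool set fast-pool crush_rule ssd-only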
 
 
Ceph Client

Following the setup from the previous sections, you can configure Proxmox VE to use such pools to store VM and Container images. Simply use the GUI to add a new RBD storage (see section Ceph RADOS Block Devices (RBD)).

You also need to copy the keyring to a predefined location for an external Ceph cluster. If Ceph is installed on the Proxmox nodes itself, then this will be done automatically.

Note: The filename needs to be <storage_id> + .keyring, where <storage_id> is the expression after rbd: in /etc/pve/storage.cfg. In the following example, my-ceph-storage is the <storage_id>:

mkdir /etc/pve/priv/ceph
cp /etc/ceph/ceph.client.admin.keyring /etc/pve/priv/ceph/my-ceph-storage.keyring
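For reference, the matching entry in /etc/pve/storage.cfg for such an external cluster might look like the following (a sketch; pool name and monitor addresses are placeholders):

rbd: my-ceph-storage
        pool rbd
        content images
        monhost 10.10.10.1 10.10.10.2 10.10.10.3
        username admin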
CephFS

Ceph also provides a filesystem, which runs on top of the same object storage as RADOS block devices do. A Metadata Server (MDS) is used to map the RADOS backed objects to files and directories, allowing Ceph to provide a POSIX-compliant, replicated filesystem. This allows you to easily configure a clustered, highly available, shared filesystem. Ceph's Metadata Servers guarantee that files are evenly distributed over the entire Ceph cluster. As a result, even cases of high load will not overwhelm a single host, which can be an issue with traditional shared filesystem approaches, for example NFS.

Proxmox VE supports both creating a hyper-converged CephFS and using an existing CephFS as storage to save backups, ISO files, and container templates.

Metadata Server (MDS)

CephFS needs at least one Metadata Server to be configured and running, in order to function. You can create an MDS through the Proxmox VE web GUI's Node -> CephFS panel or from the command line with:

pveceph mds create

Multiple metadata servers can be created in a cluster, but with the default settings, only one can be active at a time. If an MDS or its node becomes unresponsive (or crashes), another standby MDS will get promoted to active. You can speed up the handover between the active and standby MDS by using the hotstandby parameter option on creation, or if you have already created it, you may set/add:

mds standby replay = true

in the respective MDS section of /etc/pve/ceph.conf. With this enabled, the specified MDS will remain in a warm state, polling the active one, so that it can take over faster in case of any issues.

Note: This active polling will have an additional performance impact on your system and the active MDS.
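The hot-standby setting sits in the section of the respective MDS in /etc/pve/ceph.conf; a minimal sketch, assuming an MDS named mds.pve1:

[mds.pve1]
        mds standby replay = true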
Multiple Active MDS

Since Luminous (12.2.x) you can have multiple active metadata servers running at once, but this is normally only useful if you have a high amount of clients running in parallel. Otherwise the MDS is rarely the bottleneck in a system. If you want to set this up, please refer to the Ceph documentation. [Configuring multiple active MDS daemons https://docs.ceph.com/en/octopus/cephfs/multimds/]

Create CephFS

With Proxmox VE's integration of CephFS, you can easily create a CephFS using the web interface, CLI or an external API interface. Some prerequisites are needed for a successful CephFS setup:
  • Install Ceph packages - if this was already done some time ago, you may want to rerun it on an up-to-date system to ensure that all CephFS related packages get installed.
  • Setup Monitors
  • Setup your OSDs
  • Setup at least one MDS

After this is complete, you can simply create a CephFS through either the Web GUI's Node -> CephFS panel or the command line tool pveceph, for example:

pveceph fs create --pg_num 128 --add-storage

This creates a CephFS named cephfs, using a pool for its data named cephfs_data with 128 placement groups and a pool for its metadata named cephfs_metadata with one quarter of the data pool's placement groups (32). Check the Proxmox VE managed Ceph pool chapter or visit the Ceph documentation for more information regarding an appropriate placement group number (pg_num) for your setup [placement_groups]. Additionally, the --add-storage parameter will add the CephFS to the Proxmox VE storage configuration after it has been created successfully.

Destroy CephFS

Warning: Destroying a CephFS will render all of its data unusable. This cannot be undone!

To completely and gracefully remove a CephFS, the following steps are necessary:
  • Disconnect every non-Proxmox VE client (e.g. unmount the CephFS in guests).
  • Disable all related CephFS Proxmox VE storage entries (to prevent it from being automatically mounted).
  • Remove all used resources from guests (e.g. ISOs) that are on the CephFS you want to destroy.
  • Unmount the CephFS storages on all cluster nodes manually with

    umount /mnt/pve/<STORAGE-NAME>

    Where <STORAGE-NAME> is the name of the CephFS storage in your Proxmox VE.

  • Now make sure that no metadata server (MDS) is running for that CephFS, either by stopping or destroying them. This can be done through the web interface or via the command line interface; for the latter you would issue the following command:

    pveceph stop --service mds.NAME

    to stop them, or

    pveceph mds destroy NAME

    to destroy them.

    Note that standby servers will automatically be promoted to active when an active MDS is stopped or removed, so it is best to first stop all standby servers.

  • Now you can destroy the CephFS with

    pveceph fs destroy NAME --remove-storages --remove-pools

    This will automatically destroy the underlying Ceph pools, as well as remove the storages from the Proxmox VE configuration.

After these steps, the CephFS should be completely removed and, if you have other CephFS instances, the stopped metadata servers can be started again to act as standbys.
Ceph maintenance

Replace OSDs

One of the most common maintenance tasks in Ceph is to replace the disk of an OSD. If a disk is already in a failed state, then you can go ahead and run through the steps in Destroy OSDs. Ceph will recreate those copies on the remaining OSDs if possible. This rebalancing will start as soon as an OSD failure is detected or an OSD was actively stopped.

Note: With the default size/min_size (3/2) of a pool, recovery only starts when 'size + 1' nodes are available. The reason for this is that the Ceph object balancer CRUSH defaults to a full node as 'failure domain'.

To replace a functioning disk from the GUI, go through the steps in Destroy OSDs. The only addition is to wait until the cluster shows HEALTH_OK before stopping the OSD to destroy it.

On the command line, use the following commands:

ceph osd out osd.<id>

You can check with the command below if the OSD can be safely removed.

ceph osd safe-to-destroy osd.<id>

Once the above check tells you that it is safe to remove the OSD, you can continue with the following commands:

systemctl stop ceph-osd@<id>.service
pveceph osd destroy <id>

Replace the old disk with the new one and use the same procedure as described in Create OSDs.

Trim/Discard

It is good practice to run fstrim (discard) regularly on VMs and containers. This releases data blocks that the filesystem isn't using anymore. It reduces data usage and resource load. Most modern operating systems issue such discard commands to their disks regularly. You only need to ensure that the virtual machines enable the disk discard option.

Scrub & Deep Scrub

Ceph ensures data integrity by scrubbing placement groups. Ceph checks every object in a PG for its health. There are two forms of scrubbing: daily cheap metadata checks and weekly deep data checks. The weekly deep scrub reads the objects and uses checksums to ensure data integrity. If a running scrub interferes with business (performance) needs, you can adjust the time when scrubs [Ceph scrubbing https://docs.ceph.com/en/octopus/rados/configuration/osd-config-ref/#scrubbing] are executed.

Ceph Monitoring and Troubleshooting

It is important to continuously monitor the health of a Ceph deployment from the beginning, either by using the Ceph tools or by accessing the status through the Proxmox VE API.

The following Ceph commands can be used to see if the cluster is healthy (HEALTH_OK), if there are warnings (HEALTH_WARN), or even errors (HEALTH_ERR). If the cluster is in an unhealthy state, the status commands below will also give you an overview of the current events and actions to take.

# single time output
pve# ceph -s
# continuously output status changes (press CTRL+C to stop)
pve# ceph -w
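When a warning or error is reported, the following standard Ceph command lists the individual issues, usually naming the affected OSDs or PGs:

pve# ceph health detail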
To get a more detailed view, every Ceph service has a log file under /var/log/ceph/. If more detail is required, the log level can be adjusted [Ceph log and debugging https://docs.ceph.com/en/octopus/rados/troubleshooting/log-and-debug/].

You can find more information about troubleshooting [Ceph troubleshooting https://docs.ceph.com/en/octopus/rados/troubleshooting/] a Ceph cluster on the official website.
 
</pvehide>
 
</pvehide>
 
<!--PVE_IMPORT_END_MARKER-->
 
<!--PVE_IMPORT_END_MARKER-->

Revision as of 10:20, 4 May 2022

screenshot/gui-ceph-status-dashboard.png

Proxmox VE unifies your compute and storage systems, that is, you can use the same physical nodes within a cluster for both computing (processing VMs and containers) and replicated storage. The traditional silos of compute and storage resources can be wrapped up into a single hyper-converged appliance. Separate storage networks (SANs) and connections via network attached storage (NAS) disappear. With the integration of Ceph, an open source software-defined storage platform, Proxmox VE has the ability to run and manage Ceph storage directly on the hypervisor nodes.

Ceph is a distributed object store and file system designed to provide excellent performance, reliability and scalability.

Some advantages of Ceph on Proxmox VE are:
  • Easy setup and management via CLI and GUI

  • Thin provisioning

  • Snapshot support

  • Self healing

  • Scalable to the exabyte level

  • Setup pools with different performance and redundancy characteristics

  • Data is replicated, making it fault tolerant

  • Runs on commodity hardware

  • No need for hardware RAID controllers

  • Open source

For small to medium-sized deployments, it is possible to install a Ceph server for RADOS Block Devices (RBD) directly on your Proxmox VE cluster nodes (see Ceph RADOS Block Devices (RBD)). Recent hardware has a lot of CPU power and RAM, so running storage services and VMs on the same node is possible.

To simplify management, we provide pveceph - a tool for installing and managing Ceph services on Proxmox VE nodes.

Ceph consists of multiple Daemons, for use as an RBD storage:
  • Ceph Monitor (ceph-mon)

  • Ceph Manager (ceph-mgr)

  • Ceph OSD (ceph-osd; Object Storage Daemon)

Tip We highly recommend to get familiar with Ceph
[Ceph intro https://docs.ceph.com/en/octopus/start/intro/]
, its architecture
[Ceph architecture https://docs.ceph.com/en/octopus/architecture/]
and vocabulary
[Ceph glossary https://docs.ceph.com/en/octopus/glossary]
.

Precondition

To build a hyper-converged Proxmox + Ceph Cluster, you must use at least three (preferably) identical servers for the setup.

Check also the recommendations from Ceph’s website.

CPU

A high CPU core frequency reduces latency and should be preferred. As a simple rule of thumb, you should assign a CPU core (or thread) to each Ceph service to provide enough resources for stable and durable Ceph performance.

Memory

Especially in a hyper-converged setup, the memory consumption needs to be carefully monitored. In addition to the predicted memory usage of virtual machines and containers, you must also account for having enough memory available for Ceph to provide excellent and stable performance.

As a rule of thumb, for roughly 1 TiB of data, 1 GiB of memory will be used by an OSD. Especially during recovery, re-balancing or backfilling.

The daemon itself will use additional memory. The Bluestore backend of the daemon requires by default 3-5 GiB of memory (adjustable). In contrast, the legacy Filestore backend uses the OS page cache and the memory consumption is generally related to PGs of an OSD daemon.

Network

We recommend a network bandwidth of at least 10 GbE or more, which is used exclusively for Ceph. A meshed network setup
[Full Mesh Network for Ceph https://pve.proxmox.com/wiki/Full_Mesh_Network_for_Ceph_Server]
is also an option if there are no 10 GbE switches available.

The volume of traffic, especially during recovery, will interfere with other services on the same network and may even break the Proxmox VE cluster stack.

Furthermore, you should estimate your bandwidth needs. While one HDD might not saturate a 1 Gb link, multiple HDD OSDs per node can, and modern NVMe SSDs will even saturate 10 Gbps of bandwidth quickly. Deploying a network capable of even more bandwidth will ensure that this isn’t your bottleneck and won’t be anytime soon. 25, 40 or even 100 Gbps are possible.

Disks

When planning the size of your Ceph cluster, it is important to take the recovery time into consideration. Especially with small clusters, recovery might take long. It is recommended that you use SSDs instead of HDDs in small setups to reduce recovery time, minimizing the likelihood of a subsequent failure event during recovery.

In general, SSDs will provide more IOPS than spinning disks. With this in mind, in addition to the higher cost, it may make sense to implement a class based separation of pools. Another way to speed up OSDs is to use a faster disk as a journal or DB/Write-Ahead-Log device, see creating Ceph OSDs. If a faster disk is used for multiple OSDs, a proper balance between OSD and WAL / DB (or journal) disk must be selected, otherwise the faster disk becomes the bottleneck for all linked OSDs.

Aside from the disk type, Ceph performs best with an even sized and distributed amount of disks per node. For example, 4 x 500 GB disks within each node is better than a mixed setup with a single 1 TB and three 250 GB disk.

You also need to balance OSD count and single OSD capacity. More capacity allows you to increase storage density, but it also means that a single OSD failure forces Ceph to recover more data at once.

Avoid RAID

As Ceph handles data object redundancy and multiple parallel writes to disks (OSDs) on its own, using a RAID controller normally doesn’t improve performance or availability. On the contrary, Ceph is designed to handle whole disks on it’s own, without any abstraction in between. RAID controllers are not designed for the Ceph workload and may complicate things and sometimes even reduce performance, as their write and caching algorithms may interfere with the ones from Ceph.

Warning Avoid RAID controllers. Use host bus adapter (HBA) instead.
Note The above recommendations should be seen as a rough guidance for choosing hardware. Therefore, it is still essential to adapt it to your specific needs. You should test your setup and monitor health and performance continuously.

Initial Ceph Installation & Configuration

Using the Web-based Wizard

screenshot/gui-node-ceph-install.png

With Proxmox VE you have the benefit of an easy to use installation wizard for Ceph. Click on one of your cluster nodes and navigate to the Ceph section in the menu tree. If Ceph is not already installed, you will see a prompt offering to do so.

The wizard is divided into multiple sections, where each needs to finish successfully, in order to use Ceph.

First you need to chose which Ceph version you want to install. Prefer the one from your other nodes, or the newest if this is the first node you install Ceph.

After starting the installation, the wizard will download and install all the required packages from Proxmox VE’s Ceph repository.

screenshot/gui-node-ceph-install-wizard-step0.png

After finishing the installation step, you will need to create a configuration. This step is only needed once per cluster, as this configuration is distributed automatically to all remaining cluster members through Proxmox VE’s clustered configuration file system (pmxcfs).

The configuration step includes the following settings:

  • Public Network: You can set up a dedicated network for Ceph. This setting is required. Separating your Ceph traffic is highly recommended. Otherwise, it could cause trouble with other latency dependent services, for example, cluster communication may decrease Ceph’s performance.

  • Cluster Network: As an optional step, you can go even further and separate the OSD replication & heartbeat traffic as well. This will relieve the public network and could lead to significant performance improvements, especially in large clusters.

screenshot/gui-node-ceph-install-wizard-step2.png

You have two more options which are considered advanced and therefore should only changed if you know what you are doing.

  • Number of replicas: Defines how often an object is replicated

  • Minimum replicas: Defines the minimum number of required replicas for I/O to be marked as complete.

Additionally, you need to choose your first monitor node. This step is required.

That’s it. You should now see a success page as the last step, with further instructions on how to proceed. Your system is now ready to start using Ceph. To get started, you will need to create some additional monitors, OSDs and at least one pool.

The rest of this chapter will guide you through getting the most out of your Proxmox VE based Ceph setup. This includes the aforementioned tips and more, such as CephFS, which is a helpful addition to your new Ceph cluster.

CLI Installation of Ceph Packages

Alternatively to the the recommended Proxmox VE Ceph installation wizard available in the web-interface, you can use the following CLI command on each node:

pveceph install

This sets up an apt package repository in /etc/apt/sources.list.d/ceph.list and installs the required software.

Initial Ceph configuration via CLI

Use the Proxmox VE Ceph installation wizard (recommended) or run the following command on one node:

pveceph init --network 10.10.10.0/24

This creates an initial configuration at /etc/pve/ceph.conf with a dedicated network for Ceph. This file is automatically distributed to all Proxmox VE nodes, using pmxcfs. The command also creates a symbolic link at /etc/ceph/ceph.conf, which points to that file. Thus, you can simply run Ceph commands without the need to specify a configuration file.

Ceph Monitor

screenshot/gui-ceph-monitor.png

The Ceph Monitor (MON)
[Ceph Monitor https://docs.ceph.com/en/octopus/start/intro/]
maintains a master copy of the cluster map. For high availability, you need at least 3 monitors. One monitor will already be installed if you used the installation wizard. You won’t need more than 3 monitors, as long as your cluster is small to medium-sized. Only really large clusters will require more than this.

Create Monitors

On each node where you want to place a monitor (three monitors are recommended), create one by using the Ceph → Monitor tab in the GUI or run:

pveceph mon create

Destroy Monitors

To remove a Ceph Monitor via the GUI, first select a node in the tree view and go to the Ceph → Monitor panel. Select the MON and click the Destroy button.

To remove a Ceph Monitor via the CLI, first connect to the node on which the MON is running. Then execute the following command:

pveceph mon destroy
Note At least three Monitors are needed for quorum.

Ceph Manager

The Manager daemon runs alongside the monitors. It provides an interface to monitor the cluster. Since the release of Ceph luminous, at least one ceph-mgr
[Ceph Manager https://docs.ceph.com/en/octopus/mgr/]
daemon is required.

Create Manager

Multiple Managers can be installed, but only one Manager is active at any given time.

pveceph mgr create
Note It is recommended to install the Ceph Manager on the monitor nodes. For high availability install more then one manager.

Destroy Manager

To remove a Ceph Manager via the GUI, first select a node in the tree view and go to the Ceph → Monitor panel. Select the Manager and click the Destroy button.

To remove a Ceph Monitor via the CLI, first connect to the node on which the Manager is running. Then execute the following command:

pveceph mgr destroy
Note While a manager is not a hard-dependency, it is crucial for a Ceph cluster, as it handles important features like PG-autoscaling, device health monitoring, telemetry and more.

Ceph OSDs

screenshot/gui-ceph-osd-status.png

Ceph Object Storage Daemons store objects for Ceph over the network. It is recommended to use one OSD per physical disk.

Create OSDs

You can create an OSD either via the Proxmox VE web-interface or via the CLI using pveceph. For example:

pveceph osd create /dev/sd[X]
Tip We recommend a Ceph cluster with at least three nodes and at least 12 OSDs, evenly distributed among the nodes.

If the disk was in use before (for example, for ZFS or as an OSD) you first need to zap all traces of that usage. To remove the partition table, boot sector and any other OSD leftover, you can use the following command:

ceph-volume lvm zap /dev/sd[X] --destroy
Warning The above command will destroy all data on the disk!
Ceph Bluestore

Starting with the Ceph Kraken release, a new Ceph OSD storage type called BlueStore was introduced
[Ceph Bluestore https://ceph.com/community/new-luminous-bluestore/]
. This is the default when creating OSDs since Ceph Luminous.

pveceph osd create /dev/sd[X]
Block.db and block.wal

If you want to use a separate DB/WAL device for your OSDs, you can specify it through the -db_dev and -wal_dev options. The WAL is placed with the DB, if not specified separately.

pveceph osd create /dev/sd[X] -db_dev /dev/sd[Y] -wal_dev /dev/sd[Z]

You can directly choose the size of those with the -db_size and -wal_size parameters respectively. If they are not given, the following values (in order) will be used:

  • bluestore_block_{db,wal}_size from Ceph configuration…

    • … database, section osd

    • … database, section global

    • … file, section osd

    • … file, section global

  • 10% (DB)/1% (WAL) of OSD size

Note The DB stores BlueStore’s internal metadata, and the WAL is BlueStore’s internal journal or write-ahead log. It is recommended to use a fast SSD or NVRAM for better performance.
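
As a concrete sketch (the device names are placeholders, and -db_size is given in GiB), creating an OSD with a 60 GiB DB on a separate, faster device might look like this:

pveceph osd create /dev/sdf -db_dev /dev/nvme0n1 -db_size 60
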
Ceph Filestore

Before Ceph Luminous, Filestore was used as the default storage type for Ceph OSDs. Starting with Ceph Nautilus, Proxmox VE does not support creating such OSDs with pveceph anymore. If you still want to create filestore OSDs, use ceph-volume directly.

ceph-volume lvm create --filestore --data /dev/sd[X] --journal /dev/sd[Y]

Destroy OSDs

To remove an OSD via the GUI, first select a Proxmox VE node in the tree view and go to the Ceph → OSD panel. Then select the OSD to destroy and click the OUT button. Once the OSD status has changed from in to out, click the STOP button. Finally, after the status has changed from up to down, select Destroy from the More drop-down menu.

To remove an OSD via the CLI run the following commands.

ceph osd out <ID>
systemctl stop ceph-osd@<ID>.service
Note The first command instructs Ceph not to include the OSD in the data distribution. The second command stops the OSD service. Up to this point, no data is lost.

The following command destroys the OSD. Specify the -cleanup option to additionally destroy the partition table.

pveceph osd destroy <ID>
Warning The above command will destroy all data on the disk!
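
For example, assuming the OSD to be removed has ID 4, destroying it together with its partition table could look like this:

pveceph osd destroy 4 -cleanup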

Ceph Pools

screenshot/gui-ceph-pools.png

A pool is a logical group for storing objects. It holds a collection of objects, which are organized in Placement Groups (PG, pg_num).

Create and Edit Pools

You can create and edit pools from the command line or the web-interface of any Proxmox VE host under Ceph → Pools.

When no options are given, we set a default of 128 PGs, a size of 3 replicas and a min_size of 2 replicas, to ensure no data loss occurs if any OSD fails.

Warning Do not set a min_size of 1. A replicated pool with min_size of 1 allows I/O on an object when it has only 1 replica, which could lead to data loss, incomplete PGs or unfound objects.

It is advised that you either enable the PG-Autoscaler or calculate the PG number based on your setup. You can find the formula and the PG calculator
[PG calculator https://web.archive.org/web/20210301111112/http://ceph.com/pgcalc/]
online. From Ceph Nautilus onward, you can change the number of PGs
[Placement Groups https://docs.ceph.com/en/octopus/rados/operations/placement-groups/]
after the setup.

The PG autoscaler
[Automated Scaling https://docs.ceph.com/en/octopus/rados/operations/placement-groups/#automated-scaling]
can automatically scale the PG count for a pool in the background. Setting the Target Size or Target Ratio advanced parameters helps the PG-Autoscaler to make better decisions.
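
For example, to give the autoscaler such a hint for a pool that is expected to hold roughly half of the cluster's data, you could set a target ratio on it (a sketch; the pool name is a placeholder, and this assumes your pveceph version exposes the target_size_ratio option):

pveceph pool set <pool-name> --target_size_ratio 0.5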

Example for creating a pool over the CLI
pveceph pool create <pool-name> --add_storages
Tip If you would also like to automatically define a storage for your pool, keep the ‘Add as Storage’ checkbox checked in the web-interface, or use the command line option --add_storages at pool creation.

Pool Options

screenshot/gui-ceph-pool-create.png

The following options are available on pool creation; some of them are also available when editing a pool.

Name

The name of the pool. This must be unique and can’t be changed afterwards.

Size

The number of replicas per object. Ceph always tries to have this many copies of an object. Default: 3.

PG Autoscale Mode

The automatic PG scaling mode
[autoscaler]
of the pool. If set to warn, it produces a warning message when a pool has a non-optimal PG count. Default: warn.

Add as Storage

Configure a VM or container storage using the new pool. Default: true (only visible on creation).

Advanced Options
Min. Size

The minimum number of replicas per object. Ceph will reject I/O on the pool if a PG has fewer than this many replicas. Default: 2.

Crush Rule

The rule to use for mapping object placement in the cluster. These rules define how data is placed within the cluster. See Ceph CRUSH & device classes for information on device-based rules.

# of PGs

The number of placement groups
[placement_groups]
that the pool should have at the beginning. Default: 128.

Target Ratio

The ratio of data that is expected in the pool. The PG autoscaler uses the ratio relative to the ratios set on other pools. It takes precedence over the target size if both are set.

Target Size

The estimated amount of data expected in the pool. The PG autoscaler uses this size to estimate the optimal PG count.

Min. # of PGs

The minimum number of placement groups. This setting is used to fine-tune the lower bound of the PG count for that pool. The PG autoscaler will not merge PGs below this threshold.

Further information on Ceph pool handling can be found in the Ceph pool operation
[Ceph pool operation https://docs.ceph.com/en/octopus/rados/operations/pools/]
manual.

Erasure Coded Pools

Erasure coding (EC) is a form of ‘forward error correction’ code that allows recovery from a certain amount of data loss. Erasure coded pools can offer more usable space compared to replicated pools, but at the price of performance.

For comparison: in classic, replicated pools, multiple replicas of the data are stored (size), while in an erasure coded pool, data is split into k data chunks with additional m coding (checking) chunks. Those coding chunks can be used to recreate data, should data chunks be missing.

The number of coding chunks, m, defines how many OSDs can be lost without losing any data. The total number of chunks stored is k + m.

Creating EC Pools

Erasure coded (EC) pools can be created with the pveceph CLI tooling. Planning an EC pool needs to account for the fact that they work differently than replicated pools.

The default min_size of an EC pool depends on the m parameter. If m = 1, the min_size of the EC pool will be k. The min_size will be k + 1 if m > 1. The Ceph documentation recommends a conservative min_size of k + 2
[Ceph Erasure Coded Pool Recovery https://docs.ceph.com/en/octopus/rados/operations/erasure-code/#erasure-coded-pool-recovery]
.

If there are fewer than min_size OSDs available, any IO to the pool will be blocked until there are enough OSDs available again.

Note When planning an erasure coded pool, keep an eye on the min_size as it defines how many OSDs need to be available. Otherwise, IO will be blocked.

For example, an EC pool with k = 2 and m = 1 will have size = 3, min_size = 2 and will stay operational if one OSD fails. If the pool is configured with k = 2, m = 2, it will have a size = 4 and min_size = 3 and stay operational if one OSD is lost.

To create a new EC pool, run the following command:

pveceph pool create <pool-name> --erasure-coding k=2,m=1

Optional parameters are failure-domain and device-class. If you need to change any EC profile settings used by the pool, you will have to create a new pool with a new profile.

This will create a new EC pool plus the needed replicated pool to store the RBD omap and other metadata. In the end, there will be a <pool name>-data and <pool name>-metadata pool. The default behavior is to create a matching storage configuration as well. If that behavior is not wanted, you can disable it by providing the --add_storages 0 parameter. When configuring the storage configuration manually, keep in mind that the data-pool parameter needs to be set. Only then will the EC pool be used to store the data objects. For example (the storage name and pool name are placeholders):
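
pvesm add rbd <storage-name> --pool <pool-name>-metadata --data-pool <pool-name>-data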

Note The optional parameters --size, --min_size and --crush_rule will be used for the replicated metadata pool, but not for the erasure coded data pool. If you need to change the min_size on the data pool, you can do it later. The size and crush_rule parameters cannot be changed on erasure coded pools.

If there is a need to further customize the EC profile, you can do so by creating it with the Ceph tools directly
[Ceph Erasure Code Profile https://docs.ceph.com/en/octopus/rados/operations/erasure-code/#erasure-code-profiles]
, and specify the profile to use with the profile parameter.

For example:

pveceph pool create <pool-name> --erasure-coding profile=<profile-name>

Adding EC Pools as Storage

You can add an already existing EC pool as storage to Proxmox VE. It works the same way as adding an RBD pool but requires the extra data-pool option.

pvesm add rbd <storage-name> --pool <replicated-pool> --data-pool <ec-pool>
Tip Do not forget to add the keyring and monhost option for any external Ceph clusters, not managed by the local Proxmox VE cluster.
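
A hypothetical example for such an external cluster (the monitor addresses, pool names and keyring path are placeholders):

pvesm add rbd <storage-name> --pool <replicated-pool> --data-pool <ec-pool> --monhost "10.1.1.20 10.1.1.21 10.1.1.22" --keyring /root/rbd-ec.keyring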

Destroy Pools

To destroy a pool via the GUI, select a node in the tree view and go to the Ceph → Pools panel. Select the pool to destroy and click the Destroy button. To confirm the destruction of the pool, you need to enter the pool name.

Run the following command to destroy a pool. Specify the -remove_storages option to also remove the associated storage.

pveceph pool destroy <name>
Note Pool deletion runs in the background and can take some time. You will notice the data usage in the cluster decreasing throughout this process.

PG Autoscaler

The PG autoscaler allows the cluster to consider the amount of (expected) data stored in each pool and to choose the appropriate pg_num values automatically. It is available since Ceph Nautilus.

You may need to activate the PG autoscaler module before adjustments can take effect.

ceph mgr module enable pg_autoscaler

The autoscaler is configured on a per pool basis and has the following modes:

warn

A health warning is issued if the suggested pg_num value differs too much from the current value.

on

The pg_num is adjusted automatically with no need for any manual interaction.

off

No automatic pg_num adjustments are made, and no warning will be issued if the PG count is not optimal.

The scaling factor can be adjusted to facilitate future data storage with the target_size, target_size_ratio and the pg_num_min options.

Warning By default, the autoscaler considers tuning the PG count of a pool if it is off by a factor of 3. This will lead to a considerable shift in data placement and might introduce a high load on the cluster.

You can find a more in-depth introduction to the PG autoscaler on Ceph’s Blog - New in Nautilus: PG merging and autotuning.

Ceph CRUSH & device classes

screenshot/gui-ceph-config.png

The
[CRUSH https://ceph.com/wp-content/uploads/2016/08/weil-crush-sc06.pdf]
(Controlled Replication Under Scalable Hashing) algorithm is at the foundation of Ceph.

CRUSH calculates where to store and retrieve data from. This has the advantage that no central indexing service is needed. CRUSH works using a map of OSDs, buckets (device locations) and rulesets (data replication) for pools.

Note Further information can be found in the Ceph documentation, under the section CRUSH map
[CRUSH map https://docs.ceph.com/en/octopus/rados/operations/crush-map/]
.

This map can be altered to reflect different replication hierarchies. The object replicas can be separated (e.g., failure domains), while maintaining the desired distribution.

A common configuration is to use different classes of disks for different Ceph pools. For this reason, Ceph introduced device classes with Luminous, to accommodate the need for easy ruleset generation.

The device classes can be seen in the ceph osd tree output. These classes are represented as their own root buckets, which can be seen with the command below.

ceph osd crush tree --show-shadow

Example output from the above command:

ID  CLASS WEIGHT  TYPE NAME
-16  nvme 2.18307 root default~nvme
-13  nvme 0.72769     host sumi1~nvme
 12  nvme 0.72769         osd.12
-14  nvme 0.72769     host sumi2~nvme
 13  nvme 0.72769         osd.13
-15  nvme 0.72769     host sumi3~nvme
 14  nvme 0.72769         osd.14
 -1       7.70544 root default
 -3       2.56848     host sumi1
 12  nvme 0.72769         osd.12
 -5       2.56848     host sumi2
 13  nvme 0.72769         osd.13
 -7       2.56848     host sumi3
 14  nvme 0.72769         osd.14

To instruct a pool to only distribute objects on a specific device class, you first need to create a ruleset for the device class:

ceph osd crush rule create-replicated <rule-name> <root> <failure-domain> <class>

<rule-name>

name of the rule, to connect with a pool (seen in GUI & CLI)

<root>

which crush root it should belong to (default Ceph root "default")

<failure-domain>

at which failure-domain the objects should be distributed (usually host)

<class>

what type of OSD backing store to use (e.g., nvme, ssd, hdd)
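
For example, a rule (the name ssd-pool-rule is hypothetical) that places replicas on SSD-class OSDs under the default root, with host as the failure domain, could be created like this:

ceph osd crush rule create-replicated ssd-pool-rule default host ssd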

Once the rule is in the CRUSH map, you can tell a pool to use the ruleset.

ceph osd pool set <pool-name> crush_rule <rule-name>
Tip If the pool already contains objects, these must be moved accordingly. Depending on your setup, this may introduce a significant performance impact on your cluster. As an alternative, you can create a new pool and move disks separately.

Ceph Client

screenshot/gui-ceph-log.png

Following the setup from the previous sections, you can configure Proxmox VE to use such pools to store VM and Container images. Simply use the GUI to add a new RBD storage (see section Ceph RADOS Block Devices (RBD)).

You also need to copy the keyring to a predefined location for an external Ceph cluster. If Ceph is installed on the Proxmox VE nodes themselves, this is done automatically.

Note The filename needs to be <storage_id> + .keyring, where <storage_id> is the expression after rbd: in /etc/pve/storage.cfg. In the following example, my-ceph-storage is the <storage_id>:
mkdir /etc/pve/priv/ceph
cp /etc/ceph/ceph.client.admin.keyring /etc/pve/priv/ceph/my-ceph-storage.keyring

CephFS

Ceph also provides a filesystem, which runs on top of the same object storage as RADOS block devices do. A Metadata Server (MDS) is used to map the RADOS backed objects to files and directories, allowing Ceph to provide a POSIX-compliant, replicated filesystem. This allows you to easily configure a clustered, highly available, shared filesystem. Ceph’s Metadata Servers guarantee that files are evenly distributed over the entire Ceph cluster. As a result, even cases of high load will not overwhelm a single host, which can be an issue with traditional shared filesystem approaches, for example NFS.

screenshot/gui-node-ceph-cephfs-panel.png

Proxmox VE supports both creating a hyper-converged CephFS and using an existing CephFS as storage to save backups, ISO files, and container templates.

Metadata Server (MDS)

CephFS needs at least one Metadata Server to be configured and running in order to function. You can create an MDS through the Proxmox VE web GUI’s Node → CephFS panel or from the command line with:

pveceph mds create

Multiple metadata servers can be created in a cluster, but with the default settings, only one can be active at a time. If an MDS or its node becomes unresponsive (or crashes), another standby MDS will get promoted to active. You can speed up the handover between the active and standby MDS by using the hotstandby parameter on creation, or, if you have already created it, by setting/adding:

mds standby replay = true

in the respective MDS section of /etc/pve/ceph.conf. With this enabled, the specified MDS will remain in a warm state, polling the active one, so that it can take over faster in case of any issues.

Note This active polling will have an additional performance impact on your system and the active MDS.
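
If you prefer to enable this behavior directly at creation time, the pveceph CLI exposes it as a flag, for example:

pveceph mds create --hotstandby
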
Multiple Active MDS

Since Luminous (12.2.x) you can have multiple active metadata servers running at once, but this is normally only useful if you have a large number of clients running in parallel, as otherwise the MDS is rarely the bottleneck in a system. If you want to set this up, please refer to the Ceph documentation.
[Configuring multiple active MDS daemons https://docs.ceph.com/en/octopus/cephfs/multimds/]

Create CephFS

With Proxmox VE’s integration of CephFS, you can easily create a CephFS using the web interface, CLI or an external API interface. Some prerequisites are required for this to work:

Prerequisites for a successful CephFS setup:

  • Install the Ceph packages. If this was already done some time ago, you might want to rerun it on an up-to-date system to ensure that all CephFS related packages get installed.

  • Set up your Monitors and OSDs.

  • Set up at least one Metadata Server (MDS).

After this is complete, you can simply create a CephFS through either the Web GUI’s Node → CephFS panel or the command line tool pveceph, for example:

pveceph fs create --pg_num 128 --add-storage

This creates a CephFS named cephfs, using a pool for its data named cephfs_data with 128 placement groups and a pool for its metadata named cephfs_metadata with one quarter of the data pool’s placement groups (32). Check the Proxmox VE managed Ceph pool chapter or visit the Ceph documentation for more information regarding an appropriate placement group number (pg_num) for your setup
[placement_groups]
. Additionally, the --add-storage parameter will add the CephFS to the Proxmox VE storage configuration after it has been created successfully.

Destroy CephFS

Warning Destroying a CephFS will render all of its data unusable. This cannot be undone!

To completely and gracefully remove a CephFS, the following steps are necessary:

  • Disconnect every non-Proxmox VE client (e.g. unmount the CephFS in guests).

  • Disable all related CephFS Proxmox VE storage entries (to prevent it from being automatically mounted).

  • Remove all used resources from guests (e.g. ISOs) that are on the CephFS you want to destroy.

  • Unmount the CephFS storages on all cluster nodes manually with

    umount /mnt/pve/<STORAGE-NAME>

    Where <STORAGE-NAME> is the name of the CephFS storage in your Proxmox VE.

  • Now make sure that no metadata server (MDS) is running for that CephFS, either by stopping or destroying them. This can be done through the web interface or via the command line interface. For the latter, you can issue the following commands:

    pveceph stop --service mds.NAME

    to stop them, or

    pveceph mds destroy NAME

    to destroy them.

    Note that standby servers will automatically be promoted to active when an active MDS is stopped or removed, so it is best to first stop all standby servers.

  • Now you can destroy the CephFS with

    pveceph fs destroy NAME --remove-storages --remove-pools

    This will automatically destroy the underlying Ceph pools as well as remove the storages from the Proxmox VE configuration.

After these steps, the CephFS should be completely removed and if you have other CephFS instances, the stopped metadata servers can be started again to act as standbys.

Ceph maintenance

Replace OSDs

One of the most common maintenance tasks in Ceph is to replace the disk of an OSD. If a disk is already in a failed state, then you can go ahead and run through the steps in Destroy OSDs. Ceph will recreate those copies on the remaining OSDs if possible. This rebalancing will start as soon as an OSD failure is detected or an OSD was actively stopped.

Note With the default size/min_size (3/2) of a pool, recovery only starts when size + 1 nodes are available. The reason for this is that the Ceph object balancer CRUSH defaults to a full node as failure domain.

To replace a functioning disk from the GUI, go through the steps in Destroy OSDs. The only addition is to wait until the cluster shows HEALTH_OK before stopping the OSD to destroy it.

On the command line, use the following commands:

ceph osd out osd.<id>

You can check with the command below if the OSD can be safely removed.

ceph osd safe-to-destroy osd.<id>

Once the above check tells you that it is safe to remove the OSD, you can continue with the following commands:

systemctl stop ceph-osd@<id>.service
pveceph osd destroy <id>

Replace the old disk with the new one and use the same procedure as described in Create OSDs.

Trim/Discard

It is good practice to run fstrim (discard) regularly on VMs and containers. This releases data blocks that the filesystem isn’t using anymore. It reduces data usage and resource load. Most modern operating systems issue such discard commands to their disks regularly. You only need to ensure that the Virtual Machines enable the disk discard option.
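
For example, inside a Linux guest you can trim all mounted filesystems manually, and on the Proxmox VE side you can enable discard on a VM disk when (re)configuring it (the VM ID, bus and volume below are placeholders, and the full volume specification must be repeated):

# inside the guest
fstrim -av

# on the Proxmox VE host
qm set <vmid> --scsi0 <storage>:<volume>,discard=on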

Scrub & Deep Scrub

Ceph ensures data integrity by scrubbing placement groups. Ceph checks every object in a PG for its health. There are two forms of scrubbing: daily cheap metadata checks and weekly deep data checks. The weekly deep scrub reads the objects and uses checksums to ensure data integrity. If a running scrub interferes with business (performance) needs, you can adjust the time when scrubs
[Ceph scrubbing https://docs.ceph.com/en/octopus/rados/configuration/osd-config-ref/#scrubbing]
are executed.
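
If scrubs need to be confined to off-peak hours, one option (a sketch, assuming the cluster uses Ceph's central configuration store; the hours are examples) is to set the standard scrub window options, here a window from 22:00 to 06:00:

ceph config set osd osd_scrub_begin_hour 22
ceph config set osd osd_scrub_end_hour 6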

Ceph Monitoring and Troubleshooting

It is important to continuously monitor the health of a Ceph deployment from the beginning, either by using the Ceph tools or by accessing the status through the Proxmox VE API.

The following Ceph commands can be used to see if the cluster is healthy (HEALTH_OK), if there are warnings (HEALTH_WARN), or even errors (HEALTH_ERR). If the cluster is in an unhealthy state, the status commands below will also give you an overview of the current events and actions to take.

# single time output
pve# ceph -s
# continuously output status changes (press CTRL+C to stop)
pve# ceph -w

To get a more detailed view, every Ceph service has a log file under /var/log/ceph/. If more detail is required, the log level can be adjusted
[Ceph log and debugging https://docs.ceph.com/en/octopus/rados/troubleshooting/log-and-debug/]
.
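
For example, on recent Ceph releases you can temporarily raise the log verbosity of a single daemon at runtime (the OSD ID and level here are examples; the override lasts until the daemon restarts):

ceph tell osd.0 config set debug_osd 10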

You can find more information about troubleshooting
[Ceph troubleshooting https://docs.ceph.com/en/octopus/rados/troubleshooting/]
a Ceph cluster on the official website.