Deploy Hyper-Converged Ceph Cluster

[[Category:Reference Documentation]]
<pvehide>
Introduction
Proxmox VE unifies your compute and storage systems, that is, you can use the same
physical nodes within a cluster for both computing (processing VMs and
containers) and replicated storage. The traditional silos of compute and
storage resources can be wrapped up into a single hyper-converged appliance.
Separate storage networks (SANs) and connections via network attached storage
(NAS) disappear. With the integration of Ceph, an open source software-defined
storage platform, Proxmox VE has the ability to run and manage Ceph storage directly
on the hypervisor nodes.
Ceph is a distributed object store and file system designed to provide
excellent performance, reliability and scalability.
Some advantages of Ceph on Proxmox VE are:
Easy setup and management via CLI and GUI
Thin provisioning
Snapshot support
Self healing
Scalable to the exabyte level
Provides block, file system, and object storage
Setup pools with different performance and redundancy characteristics
Data is replicated, making it fault tolerant
Runs on commodity hardware
No need for hardware RAID controllers
Open source
For small to medium-sized deployments, it is possible to install a Ceph server
for using RADOS Block Devices (RBD) or CephFS directly on your Proxmox VE cluster
nodes (see Ceph RADOS Block Devices (RBD)).
Recent hardware has a lot of CPU power and RAM, so running storage services and
virtual guests on the same node is possible.
To simplify management, Proxmox VE provides you with native integration to install and
manage Ceph services on Proxmox VE nodes, either via the built-in web interface or
using the pveceph command-line tool.
Terminology
Ceph consists of multiple Daemons, for use as an RBD storage:
Ceph Monitor (ceph-mon, or MON)
Ceph Manager (ceph-mgr, or MGR)
Ceph Metadata Service (ceph-mds, or MDS)
Ceph Object Storage Daemon (ceph-osd, or OSD)
We highly recommend that you get familiar with Ceph
[Ceph intro https://docs.ceph.com/en/quincy/start/],
its architecture
[Ceph architecture https://docs.ceph.com/en/quincy/architecture/]
and vocabulary
[Ceph glossary https://docs.ceph.com/en/quincy/glossary].
Recommendations for a Healthy Ceph Cluster
To build a hyper-converged Proxmox + Ceph Cluster, you must use at least three
(preferably) identical servers for the setup.
Check also the recommendations from
Ceph's website.
The recommendations below should be seen as rough guidance for choosing
hardware. Therefore, it is still essential to adapt them to your specific needs.
You should test your setup and monitor health and performance continuously.
CPU
Ceph services can be classified into two categories:
Intensive CPU usage, benefiting from high CPU base frequencies and multiple
  cores. Members of that category are:
Object Storage Daemon (OSD) services
Meta Data Service (MDS) used for CephFS
Moderate CPU usage, not needing multiple CPU cores. These are:
Monitor (MON) services
Manager (MGR) services
As a simple rule of thumb, you should assign at least one CPU core (or thread)
to each Ceph service to provide the minimum resources required for stable and
durable Ceph performance.
For example, if you plan to run a Ceph monitor, a Ceph manager and 6 Ceph OSD
services on a node, you should reserve 8 CPU cores purely for Ceph when targeting
basic and stable performance.
Note that the CPU usage of OSDs depends mostly on the performance of their disks.
The higher the possible IOPS (IO Operations per Second) of a disk, the more CPU
can be utilized by an OSD service.
For modern enterprise SSDs, like NVMe drives that can permanently sustain a high
IOPS load of over 100,000 with sub-millisecond latency, each OSD can use multiple
CPU threads; for example, four to six CPU threads utilized per NVMe-backed OSD is
likely for very high performance disks.
Memory
Especially in a hyper-converged setup, the memory consumption needs to be
carefully planned out and monitored. In addition to the predicted memory usage
of virtual machines and containers, you must also account for having enough
memory available for Ceph to provide excellent and stable performance.
As a rule of thumb, for roughly 1 TiB of data, 1 GiB of memory will be used
by an OSD. While the usage might be less under normal conditions, it will use
the most during critical operations like recovery, re-balancing or backfilling.
That means that you should avoid maxing out your available memory already during
normal operation, but rather leave some headroom to cope with outages.
The OSD service itself will use additional memory. The Ceph BlueStore backend of
the daemon requires by default 3-5 GiB of memory (adjustable).
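If the default does not fit your setup, the BlueStore memory target can be
adjusted through the osd_memory_target option. As an illustrative sketch (the
6 GiB value is only an example, adapt it to your hardware):
# set the BlueStore memory target for all OSDs to 6 GiB (value in bytes)
ceph config set osd osd_memory_target 6442450944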
Network
We recommend a network bandwidth of at least 10 Gbps, or more, to be used
exclusively for Ceph traffic. A meshed network setup
[Full Mesh Network for Ceph https://pve.proxmox.com/wiki/Full_Mesh_Network_for_Ceph_Server]
is also an option for three to five node clusters, if there are no 10+ Gbps
switches available.
The volume of traffic, especially during recovery, will interfere
with other services on the same network. In particular, the latency-sensitive Proxmox VE
corosync cluster stack can be affected, resulting in possible loss of cluster
quorum. Moving the Ceph traffic to dedicated and physically separated networks
will avoid such interference, not only for corosync, but also for the networking
services provided by any virtual guests.
For estimating your bandwidth needs, you need to take the performance of your
disks into account. While a single HDD might not saturate a 1 Gbps link, multiple
HDD OSDs per node can already saturate 10 Gbps too.
If modern NVMe-attached SSDs are used, a single one can already saturate 10 Gbps
of bandwidth, or more. For such high-performance setups we recommend at least
25 Gbps, while even 40 Gbps or 100+ Gbps might be required to utilize the full
performance potential of the underlying disks.
If unsure, we recommend using three (physically) separate networks for
high-performance setups:
one very high bandwidth (25+ Gbps) network for Ceph (internal) cluster
  traffic.
one high bandwidth (10+ Gbps) network for Ceph (public) traffic between the
  Ceph server and Ceph client storage traffic. Depending on your needs, this can
  also be used to host the virtual guest traffic and the VM live-migration
  traffic.
one medium bandwidth (1 Gbps) network exclusively for the latency-sensitive
  corosync cluster communication.
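As an illustrative sketch of how such a network split can look in the Ceph
configuration (the subnets are placeholders, pick ones matching your
environment):
[global]
     # Ceph public network: client <-> Ceph service traffic
     public_network = 10.10.20.0/24
     # Ceph cluster network: OSD replication and heartbeat traffic
     cluster_network = 10.10.10.0/24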
Disks
When planning the size of your Ceph cluster, it is important to take the
recovery time into consideration. Especially with small clusters, recovery
might take long. It is recommended that you use SSDs instead of HDDs in small
setups to reduce recovery time, minimizing the likelihood of a subsequent
failure event during recovery.
In general, SSDs will provide more IOPS than spinning disks. With this in mind,
in addition to the higher cost, it may make sense to implement a
class based separation of pools. Another way to
speed up OSDs is to use a faster disk as a journal or
DB/Write-Ahead-Log device, see
creating Ceph OSDs.
If a faster disk is used for multiple OSDs, a proper balance between OSD
and WAL / DB (or journal) disk must be selected, otherwise the faster disk
becomes the bottleneck for all linked OSDs.
Aside from the disk type, Ceph performs best with an evenly sized, and an evenly
distributed amount of disks per node. For example, 4 x 500 GB disks within each
node is better than a mixed setup with a single 1 TB and three 250 GB disks.
You also need to balance OSD count and single OSD capacity. More capacity
allows you to increase storage density, but it also means that a single OSD
failure forces Ceph to recover more data at once.
Avoid RAID
As Ceph handles data object redundancy and multiple parallel writes to disks
(OSDs) on its own, using a RAID controller normally doesn't improve
performance or availability. On the contrary, Ceph is designed to handle whole
disks on its own, without any abstraction in between. RAID controllers are not
designed for the Ceph workload and may complicate things and sometimes even
reduce performance, as their write and caching algorithms may interfere with
the ones from Ceph.
Avoid RAID controllers. Use a host bus adapter (HBA) instead.
Initial Ceph Installation & Configuration
Using the Web-based Wizard
With Proxmox VE you have the benefit of an easy to use installation wizard
for Ceph. Click on one of your cluster nodes and navigate to the Ceph
section in the menu tree. If Ceph is not already installed, you will see a
prompt offering to do so.
The wizard is divided into multiple sections, where each needs to
finish successfully, in order to use Ceph.
First you need to choose which Ceph version you want to install. Prefer the one
from your other nodes, or the newest if this is the first node on which you
install Ceph.
After starting the installation, the wizard will download and install all the
required packages from Proxmox VE's Ceph repository.
After finishing the installation step, you will need to create a configuration.
This step is only needed once per cluster, as this configuration is distributed
automatically to all remaining cluster members through Proxmox VE's clustered
configuration file system (pmxcfs).
The configuration step includes the following settings:
Public Network: This network will be used for public storage communication
  (e.g., for virtual machines using a Ceph RBD backed disk, or a CephFS mount),
  and communication between the different Ceph services. This setting is
  required.
  Separating your Ceph traffic from the Proxmox VE cluster communication (corosync),
  and possibly the front-facing (public) networks of your virtual guests, is
  highly recommended. Otherwise, Ceph's high-bandwidth IO-traffic could cause
  interference with other low-latency dependent services.
Cluster Network: Specify this to separate the OSD replication
  and heartbeat traffic as well. This setting is optional.
  Using a physically separated network is recommended, as it will relieve the
  Ceph public and the virtual guests network, while also providing a significant
  Ceph performance improvement.
  The Ceph cluster network can be configured and moved to another physically
  separated network at a later time.
You have two more options which are considered advanced and therefore should
only be changed if you know what you are doing.
Number of replicas: Defines how often an object is replicated.
Minimum replicas: Defines the minimum number of required replicas for I/O to
  be marked as complete.
Additionally, you need to choose your first monitor node. This step is required.
That's it. You should now see a success page as the last step, with further
instructions on how to proceed. Your system is now ready to start using Ceph.
To get started, you will need to create some additional monitors,
OSDs and at least one pool.
The rest of this chapter will guide you through getting the most out of
your Proxmox VE based Ceph setup. This includes the aforementioned tips and
more, such as CephFS, which is a helpful addition to your
new Ceph cluster.
CLI Installation of Ceph Packages
As an alternative to the recommended Proxmox VE Ceph installation wizard available
in the web interface, you can use the following CLI command on each node:
pveceph install
This sets up an apt package repository in
/etc/apt/sources.list.d/ceph.list and installs the required software.
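Depending on your subscription, you may also want to select the Ceph package
repository explicitly. A sketch, assuming the no-subscription repository (check
pveceph help install for the options available on your version):
pveceph install --repository no-subscription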
Initial Ceph Configuration via CLI
Use the Proxmox VE Ceph installation wizard (recommended) or run the
following command on one node:
pveceph init --network 10.10.10.0/24
This creates an initial configuration at /etc/pve/ceph.conf with a
dedicated network for Ceph. This file is automatically distributed to
all Proxmox VE nodes, using pmxcfs. The command also
creates a symbolic link at /etc/ceph/ceph.conf, which points to that file.
Thus, you can simply run Ceph commands without the need to specify a
configuration file.
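As a quick sanity check, you can verify that the symbolic link is in place, and
thus that the Ceph tools will pick up the cluster-wide configuration:
ls -l /etc/ceph/ceph.conf    # should point to /etc/pve/ceph.conf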
Ceph Monitor
The Ceph Monitor (MON)
[Ceph Monitor https://docs.ceph.com/en/quincy/rados/configuration/mon-config-ref/]
maintains a master copy of the cluster map. For high availability, you need at
least 3 monitors. One monitor will already be installed if you
used the installation wizard. You won't need more than 3 monitors, as long
as your cluster is small to medium-sized. Only really large clusters will
require more than this.
Create Monitors
On each node where you want to place a monitor (three monitors are recommended),
create one by using the Ceph → Monitor tab in the GUI or run:
pveceph mon create
Destroy Monitors
To remove a Ceph Monitor via the GUI, first select a node in the tree view and
go to the Ceph → Monitor panel. Select the MON and click the Destroy
button.
To remove a Ceph Monitor via the CLI, first connect to the node on which the MON
is running. Then execute the following command:
pveceph mon destroy
At least three Monitors are needed for quorum.
Ceph Manager
The Manager daemon runs alongside the monitors. It provides an interface to
monitor the cluster. Since the release of Ceph luminous, at least one ceph-mgr
[Ceph Manager https://docs.ceph.com/en/quincy/mgr/] daemon is
required.
Create Manager
Multiple Managers can be installed, but only one Manager is active at any given
time.
pveceph mgr create
It is recommended to install the Ceph Manager on the monitor nodes. For
high availability, install more than one manager.
Destroy Manager
To remove a Ceph Manager via the GUI, first select a node in the tree view and
go to the Ceph → Monitor panel. Select the Manager and click the
Destroy button.
To remove a Ceph Manager via the CLI, first connect to the node on which the
Manager is running. Then execute the following command:
pveceph mgr destroy
While a manager is not a hard-dependency, it is crucial for a Ceph cluster,
as it handles important features like PG-autoscaling, device health monitoring,
telemetry and more.
Ceph OSDs
Ceph Object Storage Daemons store objects for Ceph over the
network. It is recommended to use one OSD per physical disk.
Create OSDs
You can create an OSD either via the Proxmox VE web interface or via the CLI using
pveceph. For example:
pveceph osd create /dev/sd[X]
We recommend a Ceph cluster with at least three nodes and at least 12
OSDs, evenly distributed among the nodes.
If the disk was in use before (for example, for ZFS or as an OSD) you first need
to zap all traces of that usage. To remove the partition table, boot sector and
any other OSD leftover, you can use the following command:
ceph-volume lvm zap /dev/sd[X] --destroy
The above command will destroy all data on the disk!
Ceph Bluestore
Starting with the Ceph Kraken release, a new Ceph OSD storage type was
introduced called Bluestore
[Ceph Bluestore https://ceph.com/community/new-luminous-bluestore/].
This is the default when creating OSDs since Ceph Luminous.
pveceph osd create /dev/sd[X]
Block.db and block.wal
If you want to use a separate DB/WAL device for your OSDs, you can specify it
through the -db_dev and -wal_dev options. The WAL is placed with the DB, if
not specified separately.
pveceph osd create /dev/sd[X] -db_dev /dev/sd[Y] -wal_dev /dev/sd[Z]
You can directly choose the size of those with the -db_size and -wal_size
parameters respectively. If they are not given, the following values (in order)
will be used:
bluestore_block_{db,wal}_size from Ceph configuration…
… database, section osd
… database, section global
… file, section osd
… file, section global
10% (DB)/1% (WAL) of OSD size
The DB stores BlueStore's internal metadata, and the WAL is BlueStore's
internal journal or write-ahead log. It is recommended to use a fast SSD or
NVRAM for better performance.
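For example, to place the DB of a new OSD on a faster device with an explicit
size (a sketch; the 60 GiB value is only an example, balance it against how
many OSDs share the device):
pveceph osd create /dev/sd[X] -db_dev /dev/sd[Y] -db_size 60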
Ceph Filestore
Before Ceph Luminous, Filestore was used as the default storage type for Ceph OSDs.
Starting with Ceph Nautilus, Proxmox VE does not support creating such OSDs with
pveceph anymore. If you still want to create filestore OSDs, use
ceph-volume directly.
ceph-volume lvm create --filestore --data /dev/sd[X] --journal /dev/sd[Y]
Destroy OSDs
To remove an OSD via the GUI, first select a Proxmox VE node in the tree view and go
to the Ceph → OSD panel. Then select the OSD to destroy and click the OUT
button. Once the OSD status has changed from in to out, click the STOP
button. Finally, after the status has changed from up to down, select
Destroy from the More drop-down menu.
To remove an OSD via the CLI, run the following commands:
ceph osd out <ID>
systemctl stop ceph-osd@<ID>.service
The first command instructs Ceph not to include the OSD in the data
distribution. The second command stops the OSD service. Until this time, no
data is lost.
The following command destroys the OSD. Specify the -cleanup option to
additionally destroy the partition table.
pveceph osd destroy <ID>
The above command will destroy all data on the disk!
Ceph Pools
A pool is a logical group for storing objects. It holds a collection of objects,
known as Placement Groups (PG, pg_num).
Create and Edit Pools
You can create and edit pools from the command line or the web interface of any
Proxmox VE host under Ceph → Pools.
When no options are given, we set a default of 128 PGs, a size of 3
replicas and a min_size of 2 replicas, to ensure no data loss occurs if
any OSD fails.
Do not set a min_size of 1. A replicated pool with min_size of 1
allows I/O on an object when it has only 1 replica, which could lead to data
loss, incomplete PGs or unfound objects.
It is advised that you either enable the PG-Autoscaler or calculate the PG
number based on your setup. You can find the formula and the PG calculator
[PG calculator https://web.archive.org/web/20210301111112/http://ceph.com/pgcalc/] online. From Ceph Nautilus
onward, you can change the number of PGs
[Placement Groups https://docs.ceph.com/en/quincy/rados/operations/placement-groups/] after the setup.
The PG autoscaler [Automated Scaling https://docs.ceph.com/en/quincy/rados/operations/placement-groups/#automated-scaling] can
automatically scale the PG count for a pool in the background. Setting the
Target Size or Target Ratio advanced parameters helps the PG-Autoscaler to
make better decisions.
Example for creating a pool over the CLI:
pveceph pool create <pool-name> --add_storages
If you would also like to automatically define a storage for your
pool, keep the 'Add as Storage' checkbox checked in the web interface, or use the
command-line option --add_storages at pool creation.
Pool Options
The following options are available on pool creation, and partially also when
editing a pool.
Name
The name of the pool. This must be unique and can't be changed afterwards.
Size
The number of replicas per object. Ceph always tries to have this many
copies of an object. Default: 3.
PG Autoscale Mode
The automatic PG scaling mode [autoscaler] of
the pool. If set to warn, it produces a warning message when a pool
has a non-optimal PG count. Default: warn.
Add as Storage
Configure a VM or container storage using the new pool.
Default: true (only visible on creation).
Advanced Options
Min. Size
The minimum number of replicas per object. Ceph will reject I/O on
the pool if a PG has less than this many replicas. Default: 2.
Crush Rule
The rule to use for mapping object placement in the cluster. These
rules define how data is placed within the cluster. See
Ceph CRUSH & device classes for information on
device-based rules.
# of PGs
The number of placement groups [placement_groups] that
the pool should have at the beginning. Default: 128.
Target Ratio
The ratio of data that is expected in the pool. The PG
autoscaler uses the ratio relative to other ratio sets. It takes precedence
over the target size if both are set.
Target Size
The estimated amount of data expected in the pool. The PG
autoscaler uses this size to estimate the optimal PG count.
Min. # of PGs
The minimum number of placement groups. This setting is used to
fine-tune the lower bound of the PG count for that pool. The PG autoscaler
will not merge PGs below this threshold.
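Bringing several of these options together, a pool creation via the CLI could
look like the following sketch (the pool name is a placeholder and the values
mirror the defaults described above):
pveceph pool create vm-pool --size 3 --min_size 2 --pg_autoscale_mode on --add_storages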
Further information on Ceph pool handling can be found in the Ceph pool
operation [Ceph pool operation
https://docs.ceph.com/en/quincy/rados/operations/pools/]
manual.
Erasure Coded Pools
Erasure coding (EC) is a form of 'forward error correction' code that allows
recovery from a certain amount of data loss. Erasure coded pools can offer
more usable space compared to replicated pools, but they do that at the price
of performance.
For comparison: in classic, replicated pools, multiple replicas of the data
are stored (size), while in an erasure coded pool, data is split into k data
chunks with additional m coding (checking) chunks. Those coding chunks can be
used to recreate data should data chunks be missing.
The number of coding chunks, m, defines how many OSDs can be lost without
losing any data. The total amount of objects stored is k + m.
Creating EC Pools
Erasure coded (EC) pools can be created with the pveceph CLI tooling.
Planning an EC pool needs to account for the fact that they work differently
than replicated pools.
The default min_size of an EC pool depends on the m parameter. If m = 1,
the min_size of the EC pool will be k. The min_size will be k + 1 if
m > 1. The Ceph documentation recommends a conservative min_size of k + 2
[Ceph Erasure Coded Pool Recovery
https://docs.ceph.com/en/quincy/rados/operations/erasure-code/#erasure-coded-pool-recovery].
If there are fewer than min_size OSDs available, any IO to the pool will be
blocked until there are enough OSDs available again.
When planning an erasure coded pool, keep an eye on the min_size, as it
defines how many OSDs need to be available. Otherwise, IO will be blocked.
For example, an EC pool with k = 2 and m = 1 will have size = 3,
min_size = 2 and will stay operational if one OSD fails. If the pool is
configured with k = 2, m = 2, it will have a size = 4 and min_size = 3
and stay operational if one OSD is lost.
To create a new EC pool, run the following command:
pveceph pool create <pool-name> --erasure-coding k=2,m=1
Optional parameters are failure-domain and device-class. If you
need to change any EC profile settings used by the pool, you will have to
create a new pool with a new profile.
This will create a new EC pool plus the needed replicated pool to store the RBD
omap and other metadata. In the end, there will be a <pool name>-data and
<pool name>-metadata pool. The default behavior is to create a matching storage
configuration as well. If that behavior is not wanted, you can disable it by
providing the --add_storages 0 parameter. When configuring the storage
configuration manually, keep in mind that the data-pool parameter needs to be
set. Only then will the EC pool be used to store the data objects. For example:
pvesm add rbd <storage-name> --pool <replicated-pool> --data-pool <ec-pool>
The optional parameters --size, --min_size and --crush_rule will be
used for the replicated metadata pool, but not for the erasure coded data pool.
If you need to change the min_size on the data pool, you can do it later.
The size and crush_rule parameters cannot be changed on erasure coded
pools.
If there is a need to further customize the EC profile, you can do so by
creating it with the Ceph tools directly [Ceph Erasure Code Profile
https://docs.ceph.com/en/quincy/rados/operations/erasure-code/#erasure-code-profiles], and
specify the profile to use with the profile parameter.
For example:
pveceph pool create <pool-name> --erasure-coding profile=<profile-name>
Adding EC Pools as Storage
You can add an already existing EC pool as storage to Proxmox VE. It works the same
way as adding an RBD pool but requires the extra data-pool option.
pvesm add rbd <storage-name> --pool <replicated-pool> --data-pool <ec-pool>
Do not forget to add the keyring and monhost option for any external
Ceph clusters, not managed by the local Proxmox VE cluster.
Destroy Pools
To destroy a pool via the GUI, select a node in the tree view and go to the
Ceph → Pools panel. Select the pool to destroy and click the Destroy
button. To confirm the destruction of the pool, you need to enter the pool name.
Run the following command to destroy a pool. Specify the -remove_storages option
to also remove the associated storage.
pveceph pool destroy <name>
Pool deletion runs in the background and can take some time.
You will notice the data usage in the cluster decreasing throughout this
process.
PG Autoscaler
The PG autoscaler allows the cluster to consider the amount of (expected) data
stored in each pool and to choose the appropriate pg_num values automatically.
It is available since Ceph Nautilus.
You may need to activate the PG autoscaler module before adjustments can take
effect.
ceph mgr module enable pg_autoscaler
The autoscaler is configured on a per pool basis and has the following modes:
warn
A health warning is issued if the suggested pg_num value differs too
much from the current value.
on
The pg_num is adjusted automatically with no need for any manual
interaction.
off
No automatic pg_num adjustments are made, and no warning will be issued
if the PG count is not optimal.
The scaling factor can be adjusted to facilitate future data storage with the
target_size, target_size_ratio and the pg_num_min options.
By default, the autoscaler considers tuning the PG count of a pool if
it is off by a factor of 3. This will lead to a considerable shift in data
placement and might introduce a high load on the cluster.
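As an illustrative example (the pool name and ratio are placeholders), the
autoscale mode and a size hint can be set per pool via the Ceph CLI:
ceph osd pool set <pool-name> pg_autoscale_mode on
ceph osd pool set <pool-name> target_size_ratio 0.5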
You can find a more in-depth introduction to the PG autoscaler on Ceph's Blog -
New in Nautilus: PG merging and autotuning.
Ceph CRUSH & device classes
The CRUSH [CRUSH https://ceph.com/assets/pdfs/weil-crush-sc06.pdf] (Controlled
Replication Under Scalable Hashing) algorithm is at the
foundation of Ceph.
CRUSH calculates where to store and retrieve data from. This has the
advantage that no central indexing service is needed. CRUSH works using a map of
OSDs, buckets (device locations) and rulesets (data replication) for pools.
Further information can be found in the Ceph documentation, under the
section CRUSH map [CRUSH map https://docs.ceph.com/en/quincy/rados/operations/crush-map/].
This map can be altered to reflect different replication hierarchies. The object
replicas can be separated (e.g., failure domains), while maintaining the desired
distribution.
A common configuration is to use different classes of disks for different Ceph
pools. For this reason, Ceph introduced device classes with luminous, to
accommodate the need for easy ruleset generation.
The device classes can be seen in the ceph osd tree output. These classes
represent their own root bucket, which can be seen with the below command:
ceph osd crush tree --show-shadow
Example output (shortened):
  -7      2.56848    host sumi3
  14  nvme 0.72769        osd.14
To instruct a pool to only distribute objects on a specific device class, you
first need to create a ruleset for the device class:
ceph osd crush rule create-replicated <rule-name> <root> <failure-domain> <class>
<rule-name>
name of the rule, to connect with a pool (seen in GUI & CLI)
<root>
which crush root it should belong to (default Ceph root "default")
<failure-domain>
at which failure-domain the objects should be distributed (usually host)
<class>
what type of OSD backing store to use (e.g., nvme, ssd, hdd)
Once the rule is in the CRUSH map, you can tell a pool to use the ruleset:
ceph osd pool set <pool-name> crush_rule <rule-name>
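A concrete run-through with example values (the rule and pool names ssd-only
and vm-pool are placeholders):
# rule placing replicas on distinct hosts, restricted to SSD-class OSDs
ceph osd crush rule create-replicated ssd-only default host ssd
# make the pool place its data via that rule
ceph osd pool set vm-pool crush_rule ssd-only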
If the pool already contains objects, these must be moved accordingly.
Depending on your setup, this may introduce a big performance impact on your
cluster. As an alternative, you can create a new pool and move disks separately.
Ceph Client
Following the setup from the previous sections, you can configure Proxmox VE to use
such pools to store VM and Container images. Simply use the GUI to add a new
RBD storage (see section
Ceph RADOS Block Devices (RBD)).
You also need to copy the keyring to a predefined location for an external Ceph
cluster. If Ceph is installed on the Proxmox nodes itself, then this will be
done automatically.
The filename needs to be <storage_id> + .keyring, where <storage_id> is
the expression after rbd: in /etc/pve/storage.cfg. In the following example,
my-ceph-storage is the <storage_id>:
mkdir /etc/pve/priv/ceph
cp /etc/ceph/ceph.client.admin.keyring /etc/pve/priv/ceph/my-ceph-storage.keyring
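For an external cluster, the RBD storage definition itself can be added with
pvesm, for example (a sketch; the pool name and monitor addresses are
placeholders):
pvesm add rbd my-ceph-storage --pool <pool-name> --monhost "10.1.1.20 10.1.1.21 10.1.1.22" --content images,rootdir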
CephFS
Ceph also provides a filesystem, which runs on top of the same object storage as
RADOS block devices do. A Metadata Server (MDS) is used to map the
RADOS backed objects to files and directories, allowing Ceph to provide a
POSIX-compliant, replicated filesystem. This allows you to easily configure a
clustered, highly available, shared filesystem. Ceph's Metadata Servers
guarantee that files are evenly distributed over the entire Ceph cluster. As a
result, even cases of high load will not overwhelm a single host, which can be
an issue with traditional shared filesystem approaches, for example NFS.
Proxmox VE supports both creating a hyper-converged CephFS and using an existing
CephFS as storage to save backups, ISO files, and container
templates.
Metadata Server (MDS)
CephFS needs at least one Metadata Server to be configured and running, in order
to function. You can create an MDS through the Proxmox VE web GUI's Node
-> CephFS panel or from the command line with:
pveceph mds create
Multiple metadata servers can be created in a cluster, but with the default
settings, only one can be active at a time. If an MDS or its node becomes
unresponsive (or crashes), another standby MDS will get promoted to active.
You can speed up the handover between the active and standby MDS by using
the hotstandby parameter option on creation, or if you have already created it,
you may set/add:
mds standby replay = true
in the respective MDS section of /etc/pve/ceph.conf. With this enabled, the
specified MDS will remain in a warm state, polling the active one, so that it
can take over faster in case of any issues.
This active polling will have an additional performance impact on your
system and the active MDS.
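For instance, the hot-standby behavior can be requested directly at creation
time (run this on the node that should host the standby MDS):
pveceph mds create --hotstandby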
Multiple Active MDS
Since Luminous (12.2.x) you can have multiple active metadata servers
running at once, but this is normally only useful if you have a high amount of
clients running in parallel. Otherwise the MDS is rarely the bottleneck in a
system. If you want to set this up, please refer to the Ceph documentation.
[Configuring multiple active MDS daemons
https://docs.ceph.com/en/quincy/cephfs/multimds/]
With Proxmox VE&#8217;s CephFS integration into you can create a CephFS easily over the
Create CephFS
Web GUI, the CLI or an external API interface. Some prerequisites are required
With Proxmox VE&#8217;s integration of CephFS, you can easily create a CephFS using the
web interface, CLI or an external API interface. Some prerequisites are required
for this to work:
for this to work:
Prerequisites for a successful CephFS setup:
Prerequisites for a successful CephFS setup:
Install Ceph packages, if this was already done some
Install Ceph packages - if this was already done some
  time ago you might want to rerun it on an up to date system to ensure that
time ago, you may want to rerun it on an up-to-date system to
  also all CephFS related packages get installed.
ensure that all CephFS related packages get installed.
Setup Monitors
Setup Monitors
Setup your OSDs
Setup your OSDs
Setup at least one MDS
Setup at least one MDS
After this got all checked and done you can simply create a CephFS through
After this is complete, you can simply create a CephFS through
either the Web GUI&#8217;s Node -&gt; CephFS panel or the command line tool pveceph,
either the Web GUI&#8217;s Node -&gt; CephFS panel or the command-line tool pveceph,
for example with:
for example:
pveceph fs create --pg_num 128 --add-storage
pveceph fs create --pg_num 128 --add-storage
This creates a CephFS named &#8216;'cephfs'' using a pool for its data named
This creates a CephFS named cephfs, using a pool for its data named
`'cephfs_data'' with `128` placement groups and a pool for its metadata named
cephfs_data with 128 placement groups and a pool for its metadata named
'cephfs_metadata'' with one quarter of the data pools placement groups (`32).
cephfs_metadata with one quarter of the data pool&#8217;s placement groups (32).
Check the Proxmox VE managed Ceph pool chapter or visit the
Check the Proxmox VE managed Ceph pool chapter or visit the
Ceph documentation for more information regarding a fitting placement group
Ceph documentation for more information regarding an appropriate placement group
number (pg_num) for your setup [Ceph Placement Groups
number (pg_num) for your setup [placement_groups].
http://docs.ceph.com/docs/luminous/rados/operations/placement-groups/].
Additionally, the --add-storage parameter will add the CephFS to the Proxmox VE
Additionally, the `--add-storage&#8217; parameter will add the CephFS to the Proxmox VE
storage configuration after it has been created successfully.
storage configuration after it was created successfully.
Destroy CephFS
Destroying a CephFS will render all of its data unusable. This cannot be
undone!
To completely and gracefully remove a CephFS, the following steps are
necessary:
Disconnect every non-Proxmox VE client (e.g. unmount the CephFS in guests).
Disable all related CephFS Proxmox VE storage entries (to prevent it from being
  automatically mounted).
Remove all used resources from guests (e.g. ISOs) that are on the CephFS you
  want to destroy.
Unmount the CephFS storages on all cluster nodes manually with
umount /mnt/pve/<STORAGE-NAME>
Where <STORAGE-NAME> is the name of the CephFS storage in your Proxmox VE.
Now make sure that no metadata server (MDS) is running for that CephFS,
  either by stopping or destroying them. This can be done through the web
  interface or via the command-line interface, for the latter you would issue
  the following command:
pveceph stop --service mds.NAME
to stop them, or
pveceph mds destroy NAME
to destroy them.
Note that standby servers will automatically be promoted to active when an
active MDS is stopped or removed, so it is best to first stop all standby
servers.
Now you can destroy the CephFS with
pveceph fs destroy NAME --remove-storages --remove-pools
This will automatically destroy the underlying Ceph pools as well as remove
the storages from pve config.
After these steps, the CephFS should be completely removed and if you have
other CephFS instances, the stopped metadata servers can be started again
to act as standbys.
Ceph maintenance
Replace OSDs
One of the most common maintenance tasks in Ceph is to replace the disk of an
OSD. If a disk is already in a failed state, then you can go ahead and run
through the steps in Destroy OSDs. Ceph will recreate
those copies on the remaining OSDs if possible. This rebalancing will start as
soon as an OSD failure is detected or an OSD was actively stopped.
With the default size/min_size (3/2) of a pool, recovery only starts when
size + 1 nodes are available. The reason for this is that the Ceph object
balancer CRUSH defaults to a full node as
failure domain.
To replace a functioning disk from the GUI, go through the steps in
Destroy OSDs. The only addition is to wait until
the cluster shows HEALTH_OK before stopping the OSD to destroy it.
On the command line, use the following commands:
ceph osd out osd.<id>
You can check with the command below if the OSD can be safely removed.
ceph osd safe-to-destroy osd.<id>
Once the above check tells you that it is safe to remove the OSD, you can
continue with the following commands:
systemctl stop ceph-osd@<id>.service
pveceph osd destroy <id>
Replace the old disk with the new one and use the same procedure as described
in Create OSDs.
Trim/Discard
It is good practice to run fstrim (discard) regularly on VMs and containers.
This releases data blocks that the filesystem isn’t using anymore. It reduces
data usage and resource load. Most modern operating systems issue such discard
commands to their disks regularly. You only need to ensure that the Virtual
Machines enable the disk discard option.
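For example, for a VM with a SCSI disk, the option can be enabled via the web
interface or with qm (a sketch; the VMID, bus and disk volume are
placeholders):
qm set <vmid> --scsi0 <storage>:vm-<vmid>-disk-0,discard=on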
Scrub &amp; Deep Scrub
Ceph ensures data integrity by scrubbing placement groups. Ceph checks every
object in a PG for its health. There are two forms of Scrubbing, daily
cheap metadata checks and weekly deep data checks. The weekly deep scrub reads
the objects and uses checksums to ensure data integrity. If a running scrub
interferes with business (performance) needs, you can adjust the time when
scrubs [Ceph scrubbing https://docs.ceph.com/en/quincy/rados/configuration/osd-config-ref/#scrubbing]
are executed.
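For example, to confine regular scrubs to night hours, you could set the
corresponding OSD options (a sketch; the hours are examples, interpreted in
the local time of the OSD nodes):
ceph config set osd osd_scrub_begin_hour 22
ceph config set osd osd_scrub_end_hour 6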
Shutdown Proxmox VE + Ceph HCI cluster
To shut down the whole Proxmox VE + Ceph cluster, first stop all Ceph clients. These
will mainly be VMs and containers. If you have additional clients that might
access a CephFS or an installed RADOS GW, stop these as well.
Highly available guests will switch their state to stopped when powered down
via the Proxmox VE tooling.
Once all clients, VMs and containers are off or not accessing the Ceph cluster
anymore, verify that the Ceph cluster is in a healthy state. Either via the Web UI
or the CLI:
ceph -s
To disable all self-healing actions, and to pause any client IO in the Ceph
cluster, enable the following OSD flags in the Ceph → OSD panel or via the
CLI:
ceph osd set noout
ceph osd set norecover
ceph osd set norebalance
ceph osd set nobackfill
ceph osd set nodown
ceph osd set pause
Start powering down your nodes without a monitor (MON). After these nodes are
down, continue by shutting down nodes with monitors on them.
When powering on the cluster, start the nodes with monitors (MONs) first. Once
all nodes are up and running, confirm that all Ceph services are up and running
before you unset the OSD flags again:
ceph osd unset pause
ceph osd unset nodown
ceph osd unset nobackfill
ceph osd unset norebalance
ceph osd unset norecover
ceph osd unset noout
You can now start up the guests. Highly available guests will change their state
to started when they power on.
Ceph Monitoring and Troubleshooting
It is important to continuously monitor the health of a Ceph deployment from the
beginning, either by using the Ceph tools or by accessing
the status through the Proxmox VE API.
The following Ceph commands can be used to see if the cluster is healthy
(HEALTH_OK), if there are warnings (HEALTH_WARN), or even errors
(HEALTH_ERR). If the cluster is in an unhealthy state, the status commands
below will also give you an overview of the current events and actions to take.
# single time output
# single time output
pve# ceph -s
pve# ceph -s
# continuously output status changes (press CTRL+C to stop)
# continuously output status changes (press CTRL+C to stop)
pve# ceph -w
pve# ceph -w
To get a more detailed view, every ceph service has a log file under
To get a more detailed view, every Ceph service has a log file under
/var/log/ceph/ and if there is not enough detail, the log level can be
/var/log/ceph/. If more detail is required, the log level can be
adjusted [Ceph log and debugging http://docs.ceph.com/docs/luminous/rados/troubleshooting/log-and-debug/].
adjusted [Ceph log and debugging https://docs.ceph.com/en/quincy/rados/troubleshooting/log-and-debug/].
You can find more information about troubleshooting
You can find more information about troubleshooting
[Ceph troubleshooting http://docs.ceph.com/docs/luminous/rados/troubleshooting/]
[Ceph troubleshooting https://docs.ceph.com/en/quincy/rados/troubleshooting/]
a Ceph cluster on its website.
a Ceph cluster on the official website.
</pvehide>
</pvehide>
<!--PVE_IMPORT_END_MARKER-->
<!--PVE_IMPORT_END_MARKER-->

Latest revision as of 12:09, 28 November 2024

Introduction

screenshot/gui-ceph-status-dashboard.png

Proxmox VE unifies your compute and storage systems, that is, you can use the same physical nodes within a cluster for both computing (processing VMs and containers) and replicated storage. The traditional silos of compute and storage resources can be wrapped up into a single hyper-converged appliance. Separate storage networks (SANs) and connections via network attached storage (NAS) disappear. With the integration of Ceph, an open source software-defined storage platform, Proxmox VE has the ability to run and manage Ceph storage directly on the hypervisor nodes.

Ceph is a distributed object store and file system designed to provide excellent performance, reliability and scalability.

Some advantages of Ceph on Proxmox VE are:
  • Easy setup and management via CLI and GUI

  • Thin provisioning

  • Snapshot support

  • Self healing

  • Scalable to the exabyte level

  • Provides block, file system, and object storage

  • Setup pools with different performance and redundancy characteristics

  • Data is replicated, making it fault tolerant

  • Runs on commodity hardware

  • No need for hardware RAID controllers

  • Open source

For small to medium-sized deployments, it is possible to install a Ceph server for using RADOS Block Devices (RBD) or CephFS directly on your Proxmox VE cluster nodes (see Ceph RADOS Block Devices (RBD)). Recent hardware has a lot of CPU power and RAM, so running storage services and virtual guests on the same node is possible.

To simplify management, Proxmox VE provides native integration to install and manage Ceph services on Proxmox VE nodes, either via the built-in web interface or using the pveceph command line tool.

Terminology

Ceph consists of multiple Daemons, for use as an RBD storage:
  • Ceph Monitor (ceph-mon, or MON)

  • Ceph Manager (ceph-mgr, or MGR)

  • Ceph Metadata Service (ceph-mds, or MDS)

  • Ceph Object Storage Daemon (ceph-osd, or OSD)

Tip We highly recommend getting familiar with Ceph
[Ceph intro https://docs.ceph.com/en/quincy/start/]
, its architecture
[Ceph architecture https://docs.ceph.com/en/quincy/architecture/]
and vocabulary
[Ceph glossary https://docs.ceph.com/en/quincy/glossary]
.

Recommendations for a Healthy Ceph Cluster

To build a hyper-converged Proxmox + Ceph cluster, you must use at least three (preferably identical) servers for the setup.

Also check the recommendations on Ceph’s website.

Note The recommendations below should be seen as rough guidance for choosing hardware. It is still essential to adapt them to your specific needs. You should test your setup and monitor health and performance continuously.
CPU

Ceph services can be classified into two categories:

  • Intensive CPU usage, benefiting from high CPU base frequencies and multiple cores. Members of that category are:

    • Object Storage Daemon (OSD) services

    • Meta Data Service (MDS) used for CephFS

  • Moderate CPU usage, not needing multiple CPU cores. These are:

    • Monitor (MON) services

    • Manager (MGR) services

As a simple rule of thumb, you should assign at least one CPU core (or thread) to each Ceph service to provide the minimum resources required for stable and durable Ceph performance.

For example, if you plan to run a Ceph monitor, a Ceph manager and six Ceph OSD services on a node, you should reserve 8 CPU cores purely for Ceph when targeting basic and stable performance.

Note that the CPU usage of an OSD depends mostly on the performance of its disk. The higher the possible IOPS (IO Operations per Second) of a disk, the more CPU an OSD service can utilize. For modern enterprise SSDs, like NVMe drives that can permanently sustain a high IOPS load of over 100,000 with sub-millisecond latency, each OSD can use multiple CPU threads; for example, four to six utilized CPU threads per NVMe-backed OSD is likely for very high performance disks.

Memory

Especially in a hyper-converged setup, the memory consumption needs to be carefully planned out and monitored. In addition to the predicted memory usage of virtual machines and containers, you must also account for having enough memory available for Ceph to provide excellent and stable performance.

As a rule of thumb, for roughly 1 TiB of data, 1 GiB of memory will be used by an OSD. While the usage might be less under normal conditions, it will use the most during critical operations like recovery, re-balancing or backfilling. That means you should avoid maxing out your available memory during normal operation, and rather leave some headroom to cope with outages.

The OSD service itself will use additional memory. The Ceph BlueStore backend of the daemon requires by default 3-5 GiB of memory (adjustable).
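If you want to verify or tune this, the BlueStore memory target can be queried and adjusted through Ceph’s configuration database; a minimal sketch, assuming a hypothetical OSD ID of 0 and enough free RAM for a 6 GiB target:

# show the current BlueStore memory target of one OSD (value in bytes)
ceph config get osd.0 osd_memory_target
# raise the target for all OSDs to 6 GiB (6442450944 bytes)
ceph config set osd osd_memory_target 6442450944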

Network

We recommend a network bandwidth of at least 10 Gbps to be used exclusively for Ceph traffic. A meshed network setup
[Full Mesh Network for Ceph https://pve.proxmox.com/wiki/Full_Mesh_Network_for_Ceph_Server]
is also an option for three to five node clusters, if there are no 10+ Gbps switches available.

Important The volume of traffic, especially during recovery, will interfere with other services on the same network. In particular, the latency-sensitive Proxmox VE corosync cluster stack can be affected, resulting in possible loss of cluster quorum. Moving the Ceph traffic to dedicated and physically separated networks will avoid such interference, not only for corosync, but also for the networking services provided by any virtual guests.

For estimating your bandwidth needs, you need to take the performance of your disks into account. While a single HDD might not saturate a 1 Gbps link, multiple HDD OSDs per node can already saturate 10 Gbps. If modern NVMe-attached SSDs are used, a single one can already saturate 10 Gbps of bandwidth, or more. For such high-performance setups we recommend at least 25 Gbps, while even 40 Gbps or 100+ Gbps might be required to utilize the full performance potential of the underlying disks.

If unsure, we recommend using three physically separate networks for high-performance setups:

  • one very high bandwidth (25+ Gbps) network for Ceph (internal) cluster traffic.

  • one high bandwidth (10+ Gbps) network for the Ceph (public) traffic between Ceph servers and Ceph clients. Depending on your needs, this can also be used to host the virtual guest traffic and the VM live-migration traffic.

  • one medium bandwidth (1 Gbps) network, exclusively for the latency-sensitive corosync cluster communication.

Disks

When planning the size of your Ceph cluster, it is important to take the recovery time into consideration. Especially with small clusters, recovery might take a long time. It is recommended that you use SSDs instead of HDDs in small setups to reduce recovery time, minimizing the likelihood of a subsequent failure event during recovery.

In general, SSDs will provide more IOPS than spinning disks. With this in mind, in addition to the higher cost, it may make sense to implement a class based separation of pools. Another way to speed up OSDs is to use a faster disk as a journal or DB/Write-Ahead-Log device, see creating Ceph OSDs. If a faster disk is used for multiple OSDs, a proper balance between OSD and WAL / DB (or journal) disk must be selected, otherwise the faster disk becomes the bottleneck for all linked OSDs.

Aside from the disk type, Ceph performs best with evenly sized and evenly distributed disks per node. For example, 4 x 500 GB disks within each node is better than a mixed setup with a single 1 TB and three 250 GB disks.

You also need to balance OSD count and single OSD capacity. More capacity allows you to increase storage density, but it also means that a single OSD failure forces Ceph to recover more data at once.

Avoid RAID

As Ceph handles data object redundancy and multiple parallel writes to disks (OSDs) on its own, using a RAID controller normally doesn’t improve performance or availability. On the contrary, Ceph is designed to handle whole disks on its own, without any abstraction in between. RAID controllers are not designed for the Ceph workload and may complicate things or sometimes even reduce performance, as their write and caching algorithms may interfere with those of Ceph.

Warning Avoid RAID controllers. Use a host bus adapter (HBA) instead.

Initial Ceph Installation & Configuration

Using the Web-based Wizard

screenshot/gui-node-ceph-install.png

With Proxmox VE you have the benefit of an easy to use installation wizard for Ceph. Click on one of your cluster nodes and navigate to the Ceph section in the menu tree. If Ceph is not already installed, you will see a prompt offering to do so.

The wizard is divided into multiple sections, each of which needs to finish successfully in order to use Ceph.

First you need to choose which Ceph version you want to install. Prefer the one from your other nodes, or the newest one if this is the first node on which you are installing Ceph.

After starting the installation, the wizard will download and install all the required packages from Proxmox VE’s Ceph repository.

screenshot/gui-node-ceph-install-wizard-step0.png

After finishing the installation step, you will need to create a configuration. This step is only needed once per cluster, as this configuration is distributed automatically to all remaining cluster members through Proxmox VE’s clustered configuration file system (pmxcfs).

The configuration step includes the following settings:

  • Public Network: This network will be used for public storage communication (e.g., for virtual machines using a Ceph RBD backed disk, or a CephFS mount), and communication between the different Ceph services. This setting is required.
    Separating your Ceph traffic from the Proxmox VE cluster communication (corosync), and possibly the front-facing (public) networks of your virtual guests, is highly recommended. Otherwise, Ceph’s high-bandwidth IO traffic could cause interference with other low-latency services.

  • Cluster Network: Specify this to also separate the OSD replication and heartbeat traffic. This setting is optional.
    Using a physically separated network is recommended, as it will relieve the Ceph public network and the virtual guest networks, while also providing significant Ceph performance improvements.
    The Ceph cluster network can be configured and moved to another physically separated network at a later time.

screenshot/gui-node-ceph-install-wizard-step2.png

You have two more options which are considered advanced and therefore should only be changed if you know what you are doing.

  • Number of replicas: Defines how often an object is replicated.

  • Minimum replicas: Defines the minimum number of required replicas for I/O to be marked as complete.

Additionally, you need to choose your first monitor node. This step is required.

That’s it. You should now see a success page as the last step, with further instructions on how to proceed. Your system is now ready to start using Ceph. To get started, you will need to create some additional monitors, OSDs and at least one pool.

The rest of this chapter will guide you through getting the most out of your Proxmox VE based Ceph setup. This includes the aforementioned tips and more, such as CephFS, which is a helpful addition to your new Ceph cluster.

CLI Installation of Ceph Packages

As an alternative to the recommended Proxmox VE Ceph installation wizard available in the web interface, you can use the following CLI command on each node:

pveceph install

This sets up an apt package repository in /etc/apt/sources.list.d/ceph.list and installs the required software.

Initial Ceph configuration via CLI

Use the Proxmox VE Ceph installation wizard (recommended) or run the following command on one node:

pveceph init --network 10.10.10.0/24

This creates an initial configuration at /etc/pve/ceph.conf with a dedicated network for Ceph. This file is automatically distributed to all Proxmox VE nodes, using pmxcfs. The command also creates a symbolic link at /etc/ceph/ceph.conf, which points to that file. Thus, you can simply run Ceph commands without the need to specify a configuration file.
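If you already know that you will use a separate Ceph cluster network, you can pass it at initialization time as well; a sketch with assumed example subnets:

pveceph init --network 10.10.10.0/24 --cluster-network 10.10.20.0/24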

Ceph Monitor

screenshot/gui-ceph-monitor.png

The Ceph Monitor (MON)
[Ceph Monitor https://docs.ceph.com/en/quincy/rados/configuration/mon-config-ref/]
maintains a master copy of the cluster map. For high availability, you need at least 3 monitors. One monitor will already be installed if you used the installation wizard. You won’t need more than 3 monitors, as long as your cluster is small to medium-sized. Only really large clusters will require more than this.

Create Monitors

On each node where you want to place a monitor (three monitors are recommended), create one by using the Ceph → Monitor tab in the GUI or run:

pveceph mon create

Destroy Monitors

To remove a Ceph Monitor via the GUI, first select a node in the tree view and go to the Ceph → Monitor panel. Select the MON and click the Destroy button.

To remove a Ceph Monitor via the CLI, first connect to the node on which the MON is running. Then execute the following command:

pveceph mon destroy
Note At least three Monitors are needed for quorum.

Ceph Manager

The Manager daemon runs alongside the monitors. It provides an interface to monitor the cluster. Since the release of Ceph Luminous, at least one ceph-mgr
[Ceph Manager https://docs.ceph.com/en/quincy/mgr/]
daemon is required.

Create Manager

Multiple Managers can be installed, but only one Manager is active at any given time.

pveceph mgr create
Note It is recommended to install the Ceph Manager on the monitor nodes. For high availability, install more than one manager.

Destroy Manager

To remove a Ceph Manager via the GUI, first select a node in the tree view and go to the Ceph → Monitor panel. Select the Manager and click the Destroy button.

To remove a Ceph Manager via the CLI, first connect to the node on which the Manager is running. Then execute the following command:

pveceph mgr destroy
Note While a manager is not a hard dependency, it is crucial for a Ceph cluster, as it handles important features like PG-autoscaling, device health monitoring, telemetry and more.

Ceph OSDs

screenshot/gui-ceph-osd-status.png

Ceph Object Storage Daemons store objects for Ceph over the network. It is recommended to use one OSD per physical disk.

Create OSDs

You can create an OSD either via the Proxmox VE web interface or via the CLI using pveceph. For example:

pveceph osd create /dev/sd[X]
Tip We recommend a Ceph cluster with at least three nodes and at least 12 OSDs, evenly distributed among the nodes.

If the disk was in use before (for example, for ZFS or as an OSD), you first need to zap all traces of that usage. To remove the partition table, boot sector and any other OSD leftovers, you can use the following command:

ceph-volume lvm zap /dev/sd[X] --destroy
Warning The above command will destroy all data on the disk!
Ceph Bluestore

Starting with the Ceph Kraken release, a new Ceph OSD storage type called BlueStore was introduced
[Ceph Bluestore https://ceph.com/community/new-luminous-bluestore/]
. This is the default when creating OSDs since Ceph Luminous.

pveceph osd create /dev/sd[X]
Block.db and block.wal

If you want to use a separate DB/WAL device for your OSDs, you can specify it through the -db_dev and -wal_dev options. The WAL is placed with the DB, if not specified separately.

pveceph osd create /dev/sd[X] -db_dev /dev/sd[Y] -wal_dev /dev/sd[Z]

You can directly choose the size of those with the -db_size and -wal_size parameters respectively. If they are not given, the following values (in order) will be used:

  • bluestore_block_{db,wal}_size from Ceph configuration…

    • … database, section osd

    • … database, section global

    • … file, section osd

    • … file, section global

  • 10% (DB)/1% (WAL) of OSD size

Note The DB stores BlueStore’s internal metadata, and the WAL is BlueStore’s internal journal or write-ahead log. It is recommended to use a fast SSD or NVRAM for better performance.
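For instance, a minimal sketch of placing the DB (and with it the WAL) of an OSD on a faster NVMe device with an explicit size; the device names and the 128 GiB DB size are assumptions to adapt to your hardware:

# create an OSD on /dev/sdX with a 128 GiB block.db on the NVMe device
pveceph osd create /dev/sdX -db_dev /dev/nvme0n1 -db_size 128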
Ceph Filestore

Before Ceph Luminous, Filestore was used as the default storage type for Ceph OSDs. Starting with Ceph Nautilus, Proxmox VE does not support creating such OSDs with pveceph anymore. If you still want to create filestore OSDs, use ceph-volume directly.

ceph-volume lvm create --filestore --data /dev/sd[X] --journal /dev/sd[Y]

Destroy OSDs

To remove an OSD via the GUI, first select a Proxmox VE node in the tree view and go to the Ceph → OSD panel. Then select the OSD to destroy and click the OUT button. Once the OSD status has changed from in to out, click the STOP button. Finally, after the status has changed from up to down, select Destroy from the More drop-down menu.

To remove an OSD via the CLI run the following commands.

ceph osd out <ID>
systemctl stop ceph-osd@<ID>.service
Note The first command instructs Ceph not to include the OSD in the data distribution. The second command stops the OSD service. Until this time, no data is lost.

The following command destroys the OSD. Specify the -cleanup option to additionally destroy the partition table.

pveceph osd destroy <ID>
Warning The above command will destroy all data on the disk!

Ceph Pools

screenshot/gui-ceph-pools.png

A pool is a logical group for storing objects. It holds a collection of objects, which are organized into Placement Groups (PG, pg_num).

Create and Edit Pools

You can create and edit pools from the command line or the web interface of any Proxmox VE host under Ceph → Pools.

When no options are given, we set a default of 128 PGs, a size of 3 replicas and a min_size of 2 replicas, to ensure no data loss occurs if any OSD fails.

Warning Do not set a min_size of 1. A replicated pool with min_size of 1 allows I/O on an object when it has only 1 replica, which could lead to data loss, incomplete PGs or unfound objects.

It is advised that you either enable the PG-Autoscaler or calculate the PG number based on your setup. You can find the formula and the PG calculator
[PG calculator https://web.archive.org/web/20210301111112/http://ceph.com/pgcalc/]
online. From Ceph Nautilus onward, you can change the number of PGs
[Placement Groups https://docs.ceph.com/en/quincy/rados/operations/placement-groups/]
after the setup.

The PG autoscaler
[Automated Scaling https://docs.ceph.com/en/quincy/rados/operations/placement-groups/#automated-scaling]
can automatically scale the PG count for a pool in the background. Setting the Target Size or Target Ratio advanced parameters helps the PG-Autoscaler to make better decisions.

Example for creating a pool over the CLI
pveceph pool create <pool-name> --add_storages
Tip If you would also like to automatically define a storage for your pool, keep the ‘Add as Storage’ checkbox checked in the web interface, or use the command-line option --add_storages at pool creation.

Pool Options

screenshot/gui-ceph-pool-create.png

The following options are available on pool creation, and partially also when editing a pool.

Name

The name of the pool. This must be unique and can’t be changed afterwards.

Size

The number of replicas per object. Ceph always tries to have this many copies of an object. Default: 3.

PG Autoscale Mode

The automatic PG scaling mode
[autoscaler]
of the pool. If set to warn, it produces a warning message when a pool has a non-optimal PG count. Default: warn.

Add as Storage

Configure a VM or container storage using the new pool. Default: true (only visible on creation).

Advanced Options
Min. Size

The minimum number of replicas per object. Ceph will reject I/O on the pool if a PG has less than this many replicas. Default: 2.

Crush Rule

The rule to use for mapping object placement in the cluster. These rules define how data is placed within the cluster. See Ceph CRUSH & device classes for information on device-based rules.

# of PGs

The number of placement groups
[placement_groups]
that the pool should have at the beginning. Default: 128.

Target Ratio

The ratio of data that is expected in the pool. The PG autoscaler uses this ratio relative to the ratios set for other pools. It takes precedence over the target size if both are set.

Target Size

The estimated amount of data expected in the pool. The PG autoscaler uses this size to estimate the optimal PG count.

Min. # of PGs

The minimum number of placement groups. This setting is used to fine-tune the lower bound of the PG count for that pool. The PG autoscaler will not merge PGs below this threshold.

Further information on Ceph pool handling can be found in the Ceph pool operation
[Ceph pool operation https://docs.ceph.com/en/quincy/rados/operations/pools/]
manual.

Erasure Coded Pools

Erasure coding (EC) is a form of ‘forward error correction’ code that allows recovery from a certain amount of data loss. Erasure coded pools can offer more usable space compared to replicated pools, but at the price of performance.

For comparison: in classic, replicated pools, multiple replicas of the data are stored (size), while in erasure coded pools, data is split into k data chunks with an additional m coding (checking) chunks. Those coding chunks can be used to recreate data should data chunks be missing.

The number of coding chunks, m, defines how many OSDs can be lost without losing any data. The total number of chunks stored per object is k + m.
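For example, a profile with k = 4 and m = 2 stores six chunks per object and can tolerate the loss of any two of them, while writing only 1.5 times the raw data; a replicated pool with size = 3 offers the same fault tolerance but writes the data three times.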

Creating EC Pools

Erasure coded (EC) pools can be created with the pveceph CLI tooling. Planning an EC pool needs to account for the fact that they work differently than replicated pools.

The default min_size of an EC pool depends on the m parameter. If m = 1, the min_size of the EC pool will be k. The min_size will be k + 1 if m > 1. The Ceph documentation recommends a conservative min_size of k + 2
[Ceph Erasure Coded Pool Recovery https://docs.ceph.com/en/quincy/rados/operations/erasure-code/#erasure-coded-pool-recovery]
.

If there are fewer than min_size OSDs available, any IO to the pool will be blocked until enough OSDs are available again.

Note When planning an erasure coded pool, keep an eye on the min_size as it defines how many OSDs need to be available. Otherwise, IO will be blocked.

For example, an EC pool with k = 2 and m = 1 will have size = 3, min_size = 2 and will stay operational if one OSD fails. If the pool is configured with k = 2, m = 2, it will have a size = 4 and min_size = 3 and stay operational if one OSD is lost.

To create a new EC pool, run the following command:

pveceph pool create <pool-name> --erasure-coding k=2,m=1

Optional parameters are failure-domain and device-class. If you need to change any EC profile settings used by the pool, you will have to create a new pool with a new profile.

This will create a new EC pool plus the needed replicated pool to store the RBD omap and other metadata. In the end, there will be a <pool name>-data and <pool name>-metadata pool. The default behavior is to create a matching storage configuration as well. If that behavior is not wanted, you can disable it by providing the --add_storages 0 parameter. When configuring the storage configuration manually, keep in mind that the data-pool parameter needs to be set. Only then will the EC pool be used to store the data objects. For example:
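pvesm add rbd <storage-name> --pool <pool-name>-metadata --data-pool <pool-name>-data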

Note The optional parameters --size, --min_size and --crush_rule will be used for the replicated metadata pool, but not for the erasure coded data pool. If you need to change the min_size on the data pool, you can do it later. The size and crush_rule parameters cannot be changed on erasure coded pools.

If there is a need to further customize the EC profile, you can do so by creating it with the Ceph tools directly
[Ceph Erasure Code Profile https://docs.ceph.com/en/quincy/rados/operations/erasure-code/#erasure-code-profiles]
, and specify the profile to use with the profile parameter.

For example:

pveceph pool create <pool-name> --erasure-coding profile=<profile-name>

Adding EC Pools as Storage

You can add an already existing EC pool as storage to Proxmox VE. It works the same way as adding an RBD pool but requires the extra data-pool option.

pvesm add rbd <storage-name> --pool <replicated-pool> --data-pool <ec-pool>
Tip Do not forget to add the keyring and monhost option for any external Ceph clusters, not managed by the local Proxmox VE cluster.

Destroy Pools

To destroy a pool via the GUI, select a node in the tree view and go to the Ceph → Pools panel. Select the pool to destroy and click the Destroy button. To confirm the destruction of the pool, you need to enter the pool name.

Run the following command to destroy a pool. Specify the -remove_storages option to also remove the associated storage.

pveceph pool destroy <name>
Note Pool deletion runs in the background and can take some time. You will notice the data usage in the cluster decreasing throughout this process.

PG Autoscaler

The PG autoscaler allows the cluster to consider the amount of (expected) data stored in each pool and to choose the appropriate pg_num values automatically. It is available since Ceph Nautilus.

You may need to activate the PG autoscaler module before adjustments can take effect.

ceph mgr module enable pg_autoscaler

The autoscaler is configured on a per pool basis and has the following modes:

warn

A health warning is issued if the suggested pg_num value differs too much from the current value.

on

The pg_num is adjusted automatically with no need for any manual interaction.

off

No automatic pg_num adjustments are made, and no warning will be issued if the PG count is not optimal.

The scaling factor can be adjusted to facilitate future data storage with the target_size, target_size_ratio and the pg_num_min options.
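As a sketch, assuming a hypothetical pool named vm-pool that is expected to hold about half of the cluster’s data:

# let the autoscaler manage the PG count of the pool
ceph osd pool set vm-pool pg_autoscale_mode on
# hint that this pool is expected to use about 50% of the cluster capacity
ceph osd pool set vm-pool target_size_ratio 0.5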

Warning By default, the autoscaler considers tuning the PG count of a pool if it is off by a factor of 3. This will lead to a considerable shift in data placement and might introduce a high load on the cluster.

You can find a more in-depth introduction to the PG autoscaler on Ceph’s Blog - New in Nautilus: PG merging and autotuning.

Ceph CRUSH & device classes

screenshot/gui-ceph-config.png

The CRUSH
[CRUSH https://ceph.com/assets/pdfs/weil-crush-sc06.pdf]
(Controlled Replication Under Scalable Hashing) algorithm is at the foundation of Ceph.

CRUSH calculates where to store and retrieve data from. This has the advantage that no central indexing service is needed. CRUSH works using a map of OSDs, buckets (device locations) and rulesets (data replication) for pools.

Note Further information can be found in the Ceph documentation, under the section CRUSH map
[CRUSH map https://docs.ceph.com/en/quincy/rados/operations/crush-map/]
.

This map can be altered to reflect different replication hierarchies. The object replicas can be separated (e.g., failure domains), while maintaining the desired distribution.

A common configuration is to use different classes of disks for different Ceph pools. For this reason, Ceph introduced device classes with Luminous, to accommodate the need for easy ruleset generation.

The device classes can be seen in the ceph osd tree output. These classes represent their own root bucket, which can be seen with the below command.

ceph osd crush tree --show-shadow

Example output from the above command:

ID  CLASS WEIGHT  TYPE NAME
-16  nvme 2.18307 root default~nvme
-13  nvme 0.72769     host sumi1~nvme
 12  nvme 0.72769         osd.12
-14  nvme 0.72769     host sumi2~nvme
 13  nvme 0.72769         osd.13
-15  nvme 0.72769     host sumi3~nvme
 14  nvme 0.72769         osd.14
 -1       7.70544 root default
 -3       2.56848     host sumi1
 12  nvme 0.72769         osd.12
 -5       2.56848     host sumi2
 13  nvme 0.72769         osd.13
 -7       2.56848     host sumi3
 14  nvme 0.72769         osd.14

To instruct a pool to only distribute objects on a specific device class, you first need to create a ruleset for the device class:

ceph osd crush rule create-replicated <rule-name> <root> <failure-domain> <class>

<rule-name>

name of the rule, to connect with a pool (seen in GUI & CLI)

<root>

which crush root it should belong to (default Ceph root "default")

<failure-domain>

at which failure-domain the objects should be distributed (usually host)

<class>

what type of OSD backing store to use (e.g., nvme, ssd, hdd)

Once the rule is in the CRUSH map, you can tell a pool to use the ruleset.

ceph osd pool set <pool-name> crush_rule <rule-name>
Tip If the pool already contains objects, these must be moved accordingly. Depending on your setup, this may introduce a significant performance impact on your cluster. As an alternative, you can create a new pool and move disks separately.
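Putting the pieces together, a sketch with assumed names that creates a rule for NVMe-backed OSDs with host as failure domain and assigns it to a hypothetical pool vm-pool:

# rule that keeps replicas on NVMe OSDs, distributed across hosts
ceph osd crush rule create-replicated nvme-only default host nvme
# let the pool use the new rule
ceph osd pool set vm-pool crush_rule nvme-only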

Ceph Client

screenshot/gui-ceph-log.png

Following the setup from the previous sections, you can configure Proxmox VE to use such pools to store VM and Container images. Simply use the GUI to add a new RBD storage (see section Ceph RADOS Block Devices (RBD)).

You also need to copy the keyring to a predefined location for an external Ceph cluster. If Ceph is installed on the Proxmox VE nodes themselves, then this will be done automatically.

Note The filename needs to be <storage_id>.keyring, where <storage_id> is the expression after rbd: in /etc/pve/storage.cfg. In the following example, my-ceph-storage is the <storage_id>:
mkdir /etc/pve/priv/ceph
cp /etc/ceph/ceph.client.admin.keyring /etc/pve/priv/ceph/my-ceph-storage.keyring
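For reference, a minimal sketch of the matching entry in /etc/pve/storage.cfg for such an external cluster; the monitor addresses and pool name are assumptions:

rbd: my-ceph-storage
        content images
        monhost 10.10.10.11 10.10.10.12 10.10.10.13
        pool rbd
        username admin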

CephFS

Ceph also provides a filesystem, which runs on top of the same object storage as RADOS block devices do. A Metadata Server (MDS) is used to map the RADOS backed objects to files and directories, allowing Ceph to provide a POSIX-compliant, replicated filesystem. This allows you to easily configure a clustered, highly available, shared filesystem. Ceph’s Metadata Servers guarantee that files are evenly distributed over the entire Ceph cluster. As a result, even cases of high load will not overwhelm a single host, which can be an issue with traditional shared filesystem approaches, for example NFS.

screenshot/gui-node-ceph-cephfs-panel.png

Proxmox VE supports both creating a hyper-converged CephFS and using an existing CephFS as storage to save backups, ISO files, and container templates.

Metadata Server (MDS)

CephFS needs at least one Metadata Server to be configured and running in order to function. You can create an MDS through the Proxmox VE web GUI’s Node → CephFS panel or from the command line with:

pveceph mds create

Multiple metadata servers can be created in a cluster, but with the default settings, only one can be active at a time. If an MDS or its node becomes unresponsive (or crashes), another standby MDS will be promoted to active. You can speed up the handover between the active and standby MDS by using the hotstandby parameter option on creation, or, if you have already created it, you may set/add:

mds standby replay = true

in the respective MDS section of /etc/pve/ceph.conf. With this enabled, the specified MDS will remain in a warm state, polling the active one, so that it can take over faster in case of any issues.

Note This active polling will have an additional performance impact on your system and the active MDS.
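For illustration, you can also request hot-standby mode directly at creation time; this corresponds to the hotstandby parameter mentioned above:

# create the MDS in hot-standby mode right away
pveceph mds create --hotstandby 1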
Multiple Active MDS

Since Luminous (12.2.x), you can have multiple active metadata servers running at once, but this is normally only useful if you have a large number of clients running in parallel. Otherwise, the MDS is rarely the bottleneck in a system. If you want to set this up, please refer to the Ceph documentation.
[Configuring multiple active MDS daemons https://docs.ceph.com/en/quincy/cephfs/multimds/]

Create CephFS

With Proxmox VE’s integration of CephFS, you can easily create a CephFS using the web interface, CLI or an external API interface. Some prerequisites are required for this to work:

Prerequisites for a successful CephFS setup:

  • Install the Ceph packages; if this was already done some time ago, you may want to rerun it on an up-to-date system to ensure that all CephFS related packages get installed.

  • Set up your Monitors

  • Set up your OSDs

  • Set up at least one MDS

After this is complete, you can simply create a CephFS through either the Web GUI’s Node → CephFS panel or the command-line tool pveceph, for example:

pveceph fs create --pg_num 128 --add-storage

This creates a CephFS named cephfs, using a pool for its data named cephfs_data with 128 placement groups and a pool for its metadata named cephfs_metadata with one quarter of the data pool’s placement groups (32). Check the Proxmox VE managed Ceph pool chapter or visit the Ceph documentation for more information regarding an appropriate placement group number (pg_num) for your setup
[placement_groups]
. Additionally, the --add-storage parameter will add the CephFS to the Proxmox VE storage configuration after it has been created successfully.

Destroy CephFS

Warning Destroying a CephFS will render all of its data unusable. This cannot be undone!

To completely and gracefully remove a CephFS, the following steps are necessary:

  • Disconnect every non-Proxmox VE client (e.g. unmount the CephFS in guests).

  • Disable all related CephFS Proxmox VE storage entries (to prevent it from being automatically mounted).

  • Remove all used resources from guests (e.g. ISOs) that are on the CephFS you want to destroy.

  • Unmount the CephFS storages on all cluster nodes manually with

    umount /mnt/pve/<STORAGE-NAME>

    Where <STORAGE-NAME> is the name of the CephFS storage in your Proxmox VE.

  • Now make sure that no metadata server (MDS) is running for that CephFS, either by stopping or destroying them. This can be done through the web interface or via the command-line interface; for the latter, you would issue the following command:

    pveceph stop --service mds.NAME

    to stop them, or

    pveceph mds destroy NAME

    to destroy them.

    Note that standby servers will automatically be promoted to active when an active MDS is stopped or removed, so it is best to first stop all standby servers.

  • Now you can destroy the CephFS with

    pveceph fs destroy NAME --remove-storages --remove-pools

    This will automatically destroy the underlying Ceph pools as well as remove the storages from the Proxmox VE configuration.

After these steps, the CephFS should be completely removed and if you have other CephFS instances, the stopped metadata servers can be started again to act as standbys.

Ceph maintenance

Replace OSDs

One of the most common maintenance tasks in Ceph is to replace the disk of an OSD. If a disk is already in a failed state, then you can go ahead and run through the steps in Destroy OSDs. Ceph will recreate those copies on the remaining OSDs if possible. This rebalancing will start as soon as an OSD failure is detected or an OSD was actively stopped.

Note With the default size/min_size (3/2) of a pool, recovery only starts when size + 1 nodes are available. The reason for this is that the Ceph object balancer CRUSH defaults to a full node as failure domain.

To replace a functioning disk from the GUI, go through the steps in Destroy OSDs. The only addition is to wait until the cluster shows HEALTH_OK before stopping the OSD to destroy it.

On the command line, use the following commands:

ceph osd out osd.<id>

You can check with the command below if the OSD can be safely removed.

ceph osd safe-to-destroy osd.<id>

Once the above check tells you that it is safe to remove the OSD, you can continue with the following commands:

systemctl stop ceph-osd@<id>.service
pveceph osd destroy <id>

Replace the old disk with the new one and use the same procedure as described in Create OSDs.

Trim/Discard

It is good practice to run fstrim (discard) regularly on VMs and containers. This releases data blocks that the filesystem isn’t using anymore. It reduces data usage and resource load. Most modern operating systems issue such discard commands to their disks regularly. You only need to ensure that the virtual machines have the disk discard option enabled.
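For example, a sketch of enabling the discard option on an existing VM disk; the VM ID, storage and volume names are assumptions:

# enable discard for disk scsi0 of VM 100
qm set 100 --scsi0 ceph-pool:vm-100-disk-0,discard=on
# inside the guest, trim all mounted filesystems (or enable a periodic fstrim timer)
fstrim -av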

Scrub & Deep Scrub

Ceph ensures data integrity by scrubbing placement groups. Ceph checks every object in a PG for its health. There are two forms of scrubbing: cheap, daily metadata checks and weekly deep data checks. The weekly deep scrub reads the objects and uses checksums to ensure data integrity. If a running scrub interferes with business (performance) needs, you can adjust the times when scrubs
[Ceph scrubbing https://docs.ceph.com/en/quincy/rados/configuration/osd-config-ref/#scrubbing]
are executed.
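For example, a sketch that restricts scrubbing to night hours via the Ceph configuration database; the hours are assumptions to adapt to your workload:

# only allow scrubs to start between 23:00 and 06:00
ceph config set osd osd_scrub_begin_hour 23
ceph config set osd osd_scrub_end_hour 6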

Shutdown Proxmox VE + Ceph HCI cluster

To shut down the whole Proxmox VE + Ceph cluster, first stop all Ceph clients. These will mainly be VMs and containers. If you have additional clients that might access a CephFS or an installed RADOS GW, stop these as well. Highly available guests will switch their state to stopped when powered down via the Proxmox VE tooling.

Once all clients, VMs and containers are off or no longer accessing the Ceph cluster, verify that the Ceph cluster is in a healthy state, either via the web UI or the CLI:

ceph -s

To disable all self-healing actions, and to pause any client IO in the Ceph cluster, enable the following OSD flags in the Ceph → OSD panel or via the CLI:

ceph osd set noout
ceph osd set norecover
ceph osd set norebalance
ceph osd set nobackfill
ceph osd set nodown
ceph osd set pause

Start by powering down the nodes without a monitor (MON). After these nodes are down, continue by shutting down the nodes with monitors on them.

When powering on the cluster, start the nodes with monitors (MONs) first. Once all nodes are up and running, confirm that all Ceph services are up and running before you unset the OSD flags again:

ceph osd unset pause
ceph osd unset nodown
ceph osd unset nobackfill
ceph osd unset norebalance
ceph osd unset norecover
ceph osd unset noout

You can now start up the guests. Highly available guests will change their state to started when they power on.

Ceph Monitoring and Troubleshooting

It is important to continuously monitor the health of a Ceph deployment from the beginning, either by using the Ceph tools or by accessing the status through the Proxmox VE API.

The following Ceph commands can be used to see if the cluster is healthy (HEALTH_OK), if there are warnings (HEALTH_WARN), or even errors (HEALTH_ERR). If the cluster is in an unhealthy state, the status commands below will also give you an overview of the current events and actions to take.

# single time output
pve# ceph -s
# continuously output status changes (press CTRL+C to stop)
pve# ceph -w

To get a more detailed view, every Ceph service has a log file under /var/log/ceph/. If more detail is required, the log level can be adjusted
[Ceph log and debugging https://docs.ceph.com/en/quincy/rados/troubleshooting/log-and-debug/]
.
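For example, a sketch of temporarily raising the log level of a single OSD at runtime; the OSD ID and levels are assumptions:

# increase the OSD debug level
ceph tell osd.0 config set debug_osd 10
# revert to the default level afterwards
ceph tell osd.0 config set debug_osd 1/5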

You can find more information about troubleshooting
[Ceph troubleshooting https://docs.ceph.com/en/quincy/rados/troubleshooting/]
a Ceph cluster on the official website.