NAME

pveceph - Manage Ceph Services on Proxmox VE Nodes

SYNOPSIS

pveceph <COMMAND> [ARGS] [OPTIONS]

pveceph createmgr [OPTIONS]

Create Ceph Manager

--id [a-zA-Z0-9]([a-zA-Z0-9\-]*[a-zA-Z0-9])?

The ID for the manager; when omitted, the nodename is used

pveceph createmon [OPTIONS]

Create Ceph Monitor and Manager

--exclude-manager <boolean> (default = 0)

When set, only a monitor will be created.

--id [a-zA-Z0-9]([a-zA-Z0-9\-]*[a-zA-Z0-9])?

The ID for the monitor; when omitted, the nodename is used

--mon-address <string>

Overwrites autodetected monitor IP address. Must be in the public network of ceph.

pveceph createosd <dev> [OPTIONS]

Create OSD

<dev>: <string>

Block device name.

--bluestore <boolean> (default = 1)

Use bluestore instead of filestore. This is the default.

--fstype <btrfs | ext4 | xfs> (default = xfs)

File system type (filestore only).

--journal_dev <string>

Block device name for journal (filestore) or block.db (bluestore).

--wal_dev <string>

Block device name for block.wal (bluestore only).

pveceph createpool <name> [OPTIONS]

Create POOL

<name>: <string>

The name of the pool. It must be unique.

--add_storages <boolean>

Configure VM and CT storages using the new pool.

--application <cephfs | rbd | rgw>

The application of the pool, rbd by default.

--crush_rule <string>

The rule to use for mapping object placement in the cluster.

--min_size <integer> (1 - 7) (default = 2)

Minimum number of replicas per object

--pg_num <integer> (8 - 32768) (default = 64)

Number of placement groups.

--size <integer> (1 - 7) (default = 3)

Number of replicas per object

pveceph destroymgr <id>

Destroy Ceph Manager.

<id>: [a-zA-Z0-9]([a-zA-Z0-9\-]*[a-zA-Z0-9])?

The ID of the manager

pveceph destroymon <monid> [OPTIONS]

Destroy Ceph Monitor and Manager.

<monid>: [a-zA-Z0-9]([a-zA-Z0-9\-]*[a-zA-Z0-9])?

Monitor ID

--exclude-manager <boolean> (default = 0)

When set, removes only the monitor, not the manager

pveceph destroyosd <osdid> [OPTIONS]

Destroy OSD

<osdid>: <integer>

OSD ID

--cleanup <boolean> (default = 0)

If set, we remove partition table entries.

pveceph destroypool <name> [OPTIONS]

Destroy pool

<name>: <string>

The name of the pool. It must be unique.

--force <boolean> (default = 0)

If true, destroys pool even if in use

--remove_storages <boolean> (default = 0)

Remove all pveceph-managed storages configured for this pool

pveceph help [OPTIONS]

Get help about specified command.

--extra-args <array>

Shows help for a specific command

--verbose <boolean>

Verbose output format.

pveceph init [OPTIONS]

Create initial ceph default configuration and set up symlinks.

--disable_cephx <boolean> (default = 0)

Disable cephx authentication.

Warning cephx is a security feature protecting against man-in-the-middle attacks. Only consider disabling cephx if your network is private!
--min_size <integer> (1 - 7) (default = 2)

Minimum number of available replicas per object to allow I/O

--network <string>

Use specific network for all ceph related traffic

--pg_bits <integer> (6 - 14) (default = 6)

Placement group bits, used to specify the default number of placement groups.

Note osd pool default pg num does not work for default pools.
--size <integer> (1 - 7) (default = 3)

Targeted number of replicas per object

pveceph install [OPTIONS]

Install ceph related packages.

--version <luminous>

no description available

pveceph lspools

List all pools.

pveceph purge

Destroy ceph related data and configuration files.

pveceph start [<service>]

Start ceph services.

<service>: (mon|mds|osd|mgr)\.[A-Za-z0-9\-]{1,32}

Ceph service name.

pveceph status

Get ceph status.

pveceph stop [<service>]

Stop ceph services.

<service>: (mon|mds|osd|mgr)\.[A-Za-z0-9\-]{1,32}

Ceph service name.

DESCRIPTION

gui-ceph-status.png

Proxmox VE unifies your compute and storage systems, i.e. you can use the same physical nodes within a cluster for both computing (processing VMs and containers) and replicated storage. The traditional silos of compute and storage resources can be wrapped up into a single hyper-converged appliance. Separate storage networks (SANs) and connections via network (NAS) disappear. With the integration of Ceph, an open source software-defined storage platform, Proxmox VE has the ability to run and manage Ceph storage directly on the hypervisor nodes.

Ceph is a distributed object store and file system designed to provide excellent performance, reliability and scalability.

For small to mid sized deployments, it is possible to install a Ceph server for RADOS Block Devices (RBD) directly on your Proxmox VE cluster nodes, see Ceph RADOS Block Devices (RBD). Recent hardware has plenty of CPU power and RAM, so running storage services and VMs on the same node is possible.

To simplify management, we provide pveceph - a tool to install and manage Ceph services on Proxmox VE nodes.

Ceph consists of a couple of daemons
[Ceph intro http://docs.ceph.com/docs/master/start/intro/]
, for use as RBD storage:

  • Ceph Monitor (ceph-mon)

  • Ceph Manager (ceph-mgr)

  • Ceph OSD (ceph-osd; Object Storage Daemon)

Tip We recommend getting familiar with the Ceph vocabulary.
[Ceph glossary http://docs.ceph.com/docs/luminous/glossary]

Precondition

To build a Proxmox Ceph Cluster, there should be at least three (preferably identical) servers for the setup.

A 10Gb network, exclusively used for Ceph, is recommended. A meshed network setup is also an option if no 10Gb switches are available; see the wiki.

Check also the recommendations from Ceph’s website.

Installation of Ceph Packages

On each node run the installation script as follows:

pveceph install

This sets up an apt package repository in /etc/apt/sources.list.d/ceph.list and installs the required software.
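
For reference, the resulting file typically contains a single repository line similar to the one below; the exact URL and Debian suite depend on your Proxmox VE release and the selected Ceph version, so treat it only as an illustration:

deb http://download.proxmox.com/debian/ceph-luminous stretch main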

Creating initial Ceph configuration

gui-ceph-config.png

After installation of the packages, you need to create an initial Ceph configuration on just one node, based on the network (10.10.10.0/24 in the following example) dedicated to Ceph:

pveceph init --network 10.10.10.0/24

This creates an initial config at /etc/pve/ceph.conf. That file is automatically distributed to all Proxmox VE nodes by using pmxcfs. The command also creates a symbolic link from /etc/ceph/ceph.conf pointing to that file. So you can simply run Ceph commands without the need to specify a configuration file.
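
As a rough sketch (the exact keys and values depend on your Ceph version and the options you passed), the generated file contains a [global] section similar to the following; the fsid is a generated placeholder here:

[global]
     auth client required = cephx
     auth cluster required = cephx
     auth service required = cephx
     cluster network = 10.10.10.0/24
     public network = 10.10.10.0/24
     fsid = <generated UUID>
     osd pool default min size = 2
     osd pool default size = 3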

Creating Ceph Monitors

gui-ceph-monitor.png

The Ceph Monitor (MON)
[Ceph Monitor http://docs.ceph.com/docs/luminous/start/intro/]
maintains a master copy of the cluster map. For HA you need to have at least 3 monitors.

On each node where you want to place a monitor (three monitors are recommended), create it by using the Ceph → Monitor tab in the GUI or run:

pveceph createmon

This will also install the needed Ceph Manager (ceph-mgr) by default. If you do not want to install a manager, specify the -exclude-manager option.
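
The monitor IP address is autodetected. If the detected address is not the one in the Ceph public network, you can override it with the --mon-address option described in the synopsis; for example, assuming 10.10.10.11 is the node's address in that network:

pveceph createmon --mon-address 10.10.10.11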

Creating Ceph Manager

The Manager daemon runs alongside the monitors. It provides interfaces for monitoring the cluster. Since the Ceph luminous release the ceph-mgr
[Ceph Manager http://docs.ceph.com/docs/luminous/mgr/]
daemon is required. During monitor installation the ceph manager will be installed as well.

Note It is recommended to install the Ceph Manager on the monitor nodes. For high availability install more than one manager.
pveceph createmgr

Creating Ceph OSDs

gui-ceph-osd-status.png

You can create OSDs via the GUI or via the CLI as follows:

pveceph createosd /dev/sd[X]
Tip We recommend a Ceph cluster with at least 12 OSDs, distributed evenly among your (at least three) nodes, i.e. 4 OSDs on each node.

Ceph Bluestore

Starting with the Ceph Kraken release, a new Ceph OSD storage type was introduced, the so-called Bluestore
[Ceph Bluestore http://ceph.com/community/new-luminous-bluestore/]
. In Ceph luminous this store is the default when creating OSDs.

pveceph createosd /dev/sd[X]
Note In order to select a disk in the GUI, the disk needs to have a GPT
[GPT partition table https://en.wikipedia.org/wiki/GUID_Partition_Table]
partition table; this makes the selection more failsafe. You can create one with gdisk /dev/sd(x). If there is no GPT, you cannot select the disk as DB/WAL.

If you want to use a separate DB/WAL device for your OSDs, you can specify it through the -wal_dev option.

pveceph createosd /dev/sd[X] -wal_dev /dev/sd[Y]
Note The DB stores BlueStore’s internal metadata and the WAL is BlueStore’s internal journal or write-ahead log. It is recommended to use fast SSDs or NVRAM for better performance.
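
According to the synopsis above, -journal_dev sets the block.db device and -wal_dev the block.wal device for BlueStore. A hypothetical invocation placing DB and WAL on separate, faster devices could look like this (the device names are placeholders):

pveceph createosd /dev/sd[X] -journal_dev /dev/sd[Y] -wal_dev /dev/sd[Z]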

Ceph Filestore

Until Ceph luminous, Filestore was used as the storage type for Ceph OSDs. It can still be used and might give better performance in small setups, when backed by an NVMe SSD or similar.

pveceph createosd /dev/sd[X] -bluestore 0
Note In order to select a disk in the GUI, the disk needs to have a GPT
[GPT]
partition table. You can create this with gdisk /dev/sd(x). If there is no GPT, you cannot select the disk as journal. Currently the journal size is fixed to 5 GB.

If you want to use a dedicated SSD journal disk:

pveceph createosd /dev/sd[X] -journal_dev /dev/sd[Y] -bluestore 0

Example: Use /dev/sdf as the data disk (4TB) and /dev/sdb as the dedicated SSD journal disk.

pveceph createosd /dev/sdf -journal_dev /dev/sdb -bluestore 0

This partitions the disk (data and journal partition), creates filesystems and starts the OSD; afterwards it is running and fully functional.

Note This command refuses to initialize a disk when it detects existing data. So if you want to overwrite a disk, you should remove the existing data first. You can do that using: ceph-disk zap /dev/sd[X]

You can create OSDs containing both journal and data partitions or you can place the journal on a dedicated SSD. Using an SSD journal disk is highly recommended to achieve good performance.

Creating Ceph Pools

gui-ceph-pools.png

A pool is a logical group for storing objects. It holds Placement Groups (PG), a collection of objects.

When no options are given, we set a default of 64 PGs, a size of 3 replicas and a min_size of 2 replicas for serving objects in a degraded state.

Note The default number of PGs works for 2-6 disks. Ceph throws a "HEALTH_WARNING" if you have too few or too many PGs in your cluster.

It is advised to calculate the PG number depending on your setup; you can find the formula and the PG calculator
[PG calculator http://ceph.com/pgcalc/]
online. While PGs can be increased later on, they can never be decreased.
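
As a rough illustration of the commonly cited rule of thumb for a single pool (total PGs ≈ number of OSDs × 100 / pool size, rounded up to the next power of two): a cluster with 12 OSDs and a replica size of 3 gives 12 × 100 / 3 = 400, which would be rounded up to 512 PGs. Verify such estimates with the PG calculator for your own setup and number of pools.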

You can create pools through the command line or in the GUI on each PVE host under Ceph → Pools.

pveceph createpool <name>

If you would also like to automatically get a storage definition for your pool, activate the checkbox "Add storages" in the GUI or use the command line option --add_storages at pool creation.
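
For example, a hypothetical pool for VM images with a larger PG count and an automatically added storage definition could be created as follows (the pool name and PG count are only illustrative):

pveceph createpool vm-pool --pg_num 128 --add_storages 1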

Further information on Ceph pool handling can be found in the Ceph pool operation
[Ceph pool operation http://docs.ceph.com/docs/luminous/rados/operations/pools/]
manual.

Ceph CRUSH & device classes

The foundation of Ceph is its algorithm, Controlled Replication Under Scalable Hashing (CRUSH
[CRUSH https://ceph.com/wp-content/uploads/2016/08/weil-crush-sc06.pdf]
).

CRUSH calculates where to store data and where to retrieve it from; this has the advantage that no central index service is needed. CRUSH works with a map of OSDs, buckets (device locations) and rulesets (data replication) for pools.

Note Further information can be found in the Ceph documentation, under the section CRUSH map
[CRUSH map http://docs.ceph.com/docs/luminous/rados/operations/crush-map/]
.

This map can be altered to reflect different replication hierarchies. The object replicas can be separated (e.g. across failure domains), while maintaining the desired distribution.

A common use case is to use different classes of disks for different Ceph pools. For this reason, Ceph introduced the device classes with luminous, to accommodate the need for easy ruleset generation.

The device classes can be seen in the ceph osd tree output. These classes represent their own root bucket, which can be seen with the below command.

ceph osd crush tree --show-shadow

Example output from the above command:

ID  CLASS WEIGHT  TYPE NAME
-16  nvme 2.18307 root default~nvme
-13  nvme 0.72769     host sumi1~nvme
 12  nvme 0.72769         osd.12
-14  nvme 0.72769     host sumi2~nvme
 13  nvme 0.72769         osd.13
-15  nvme 0.72769     host sumi3~nvme
 14  nvme 0.72769         osd.14
 -1       7.70544 root default
 -3       2.56848     host sumi1
 12  nvme 0.72769         osd.12
 -5       2.56848     host sumi2
 13  nvme 0.72769         osd.13
 -7       2.56848     host sumi3
 14  nvme 0.72769         osd.14

To let a pool distribute its objects only on a specific device class, you need to create a ruleset with the specific class first.

ceph osd crush rule create-replicated <rule-name> <root> <failure-domain> <class>

<rule-name>

name of the rule, to connect with a pool (seen in GUI & CLI)

<root>

which crush root it should belong to (default ceph root "default")

<failure-domain>

at which failure-domain the objects should be distributed (usually host)

<class>

what type of OSD backing store to use (e.g. nvme, ssd, hdd)
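
For instance, based on the nvme class from the example output above, a rule that keeps all replicas on NVMe-backed OSDs and separates them across hosts could be created like this (the rule name nvme-only is only an example):

ceph osd crush rule create-replicated nvme-only default host nvme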

Once the rule is in the CRUSH map, you can tell a pool to use the ruleset.

ceph osd pool set <pool-name> crush_rule <rule-name>
Tip If the pool already contains objects, all of these have to be moved accordingly. Depending on your setup this may introduce a big performance hit on your cluster. As an alternative, you can create a new pool and move disks separately.
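
Continuing the example, assuming the hypothetical rule nvme-only from above and an existing pool named vm-pool, the assignment would be:

ceph osd pool set vm-pool crush_rule nvme-only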

Ceph Client

gui-ceph-log.png

You can then configure Proxmox VE to use such pools to store VM or Container images. Simply use the GUI to add a new RBD storage (see section Ceph RADOS Block Devices (RBD)).

You also need to copy the keyring to a predefined location for an external Ceph cluster. If Ceph is installed on the Proxmox nodes itself, this will be done automatically.
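
A matching RBD entry in /etc/pve/storage.cfg might look roughly like the following sketch; the storage ID my-ceph-storage matches the keyring example below, while the monitor addresses and pool name are assumptions:

rbd: my-ceph-storage
        monhost 10.10.10.11 10.10.10.12 10.10.10.13
        pool rbd
        content images,rootdir
        username admin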

Note The file name needs to be <storage_id> + .keyring - <storage_id> is the expression after rbd: in /etc/pve/storage.cfg, which is my-ceph-storage in the following example:
mkdir /etc/pve/priv/ceph
cp /etc/ceph/ceph.client.admin.keyring /etc/pve/priv/ceph/my-ceph-storage.keyring

Copyright © 2007-2017 Proxmox Server Solutions GmbH

This program is free software: you can redistribute it and/or modify it under the terms of the GNU Affero General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.

This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Affero General Public License for more details.

You should have received a copy of the GNU Affero General Public License along with this program. If not, see http://www.gnu.org/licenses/