- 1 Introduction
- 2 Advantages
- 3 Recommended hardware
- 4 Installation of Proxmox VE
- 5 Network for Ceph
- 6 Installation of Ceph packages
- 7 Create initial Ceph configuration
- 8 Creating Ceph Monitors
- 9 Continue with CLI or GUI
- 10 Creating more Ceph Monitors
- 11 Creating Ceph OSDs
- 12 Ceph Pools
- 13 Ceph Client
- 14 Why do we need a new command line tool (pveceph)?
- 15 Note for users of HP SmartArray controllers
- 16 Video Tutorials
- 17 Further readings about Ceph
Introduction
Proxmox VE supports Ceph’s RADOS Block Device (RBD) for VM disks. Ceph storage services are usually hosted on external, dedicated storage nodes; such storage clusters can scale to several hundred nodes, providing petabytes of storage capacity.
For smaller deployments, it is also possible to run Ceph services directly on your Proxmox VE nodes. Recent hardware has plenty of CPU power and RAM, so running storage services and VMs on the same node is possible.
This article describes how to set up and run Ceph storage services directly on Proxmox VE nodes. If you want to install and configure an external Ceph cluster, read the Ceph documentation; connecting Proxmox VE to an external Ceph cluster works as described in the section Ceph Client.
Advantages
- Easy setup and management with CLI and GUI support on Proxmox VE
- Thin provisioning
- Snapshots support
- Self healing
- No single point of failure
- Scalable to the exabyte level
- Setup pools with different performance and redundancy characteristics
- Data is replicated, making it fault tolerant
- Runs on economical commodity hardware
- No need for hardware RAID controllers
- Easy management
- Open source
Recommended hardware
You need at least three identical servers for a redundant setup. Here are the specifications of one of our test lab clusters with Proxmox VE and Ceph (three nodes):
- Dual Xeon E5-2620v2, 32 GB RAM, Intel S2600CP mainboard, Intel RMM, Chenbro 2U chassis with eight 3.5” hot-swap drive bays, 2 fixed 2.5" SSD bays
- 10 GBit network for Ceph traffic (one Intel X540-T2 in each server, one 10Gb switch - Netgear XS712T)
- Single enterprise class SSD for the Proxmox VE installation (because we run Ceph monitors there, which write quite a lot of logs); we use one Intel DC S3500 80 GB per host.
- Single, fast and reliable enterprise class SSD for the Ceph journal. Just for this test lab cluster, we used some Samsung SSD 840 PRO with 240 GB instead.
- SATA disks for storing the data (OSDs); use at least 4 disks/OSDs per server, more OSD disks are faster. We use four Seagate Constellation ES.3 SATA 6Gb (4TB model) per server.
N.B.: "enterprise class SSD" also means the drives have power loss data protection, as the Intel DC S3500 and DC S3700 have.
This setup delivers 48 TB of raw storage. With a replication factor of 3, you can store up to 16 TB (100%). But to be prepared for failed disks and hosts, you should never fill your storage to 100%.
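The usable capacity above follows directly from the raw numbers; a quick check, assuming the 3 servers, 4 OSD disks of 4 TB each, and replication size 3 from this setup:

```shell
# 3 servers x 4 OSD disks x 4 TB each = raw capacity
echo $(( 3 * 4 * 4 ))        # → 48 (TB raw)
# divide by the replication factor (size=3) for usable capacity
echo $(( 3 * 4 * 4 / 3 ))    # → 16 (TB usable)
```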
By adding more disks, the storage can be expanded up to 96 TB just by plugging in additional disks into the free drive bays. Of course, you can add more servers too as soon as your business is growing.
If you do not want to run virtual machines and Ceph on the same hosts, you can add more Proxmox VE nodes and use some of them for running the guests and the others just for storage.
Installation of Proxmox VE
Before you start with Ceph, you need a working Proxmox VE cluster with 3 nodes (or more). We install Proxmox VE on a fast and reliable enterprise class SSD, so we can use all drive bays for OSD (Object Storage Device) data. Just follow the well-known instructions on Installation and Proxmox VE 2.0 Cluster.
Use ext4 if you install on SSD (at the boot prompt of the installation ISO you can specify parameters, e.g. "linux ext4 swapsize=4").
Network for Ceph
All nodes need access to a separate 10Gb network interface, exclusively used for Ceph. We use network 10.10.10.0/24 for this tutorial.
It is highly recommended to use 10Gb for that network to avoid performance problems. Bonding can be used to increase availability.
If you do not have fast network switches, you can also setup a Full Mesh Network for Ceph Server.
The network setup (ceph private network) from our first node contains:
# from /etc/network/interfaces
auto eth2
iface eth2 inet static
        address 10.10.10.1
        netmask 255.255.255.0
The network setup (ceph private network) from our second node contains:
# from /etc/network/interfaces
auto eth2
iface eth2 inet static
        address 10.10.10.2
        netmask 255.255.255.0
The network setup (ceph private network) from our third node contains:
# from /etc/network/interfaces
auto eth2
iface eth2 inet static
        address 10.10.10.3
        netmask 255.255.255.0
Installation of Ceph packages
You now need to select 3 nodes and install the Ceph software packages there. We wrote a small command line utility called 'pveceph' which helps you perform these tasks; it also lets you choose the version of Ceph. Log in to all your nodes and execute the following on each:
node1# pveceph install -version hammer
node2# pveceph install -version hammer
node3# pveceph install -version hammer
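If you manage your nodes over SSH, the three invocations above can be scripted from one machine. A dry-run sketch that only prints the commands (root SSH access and the node names node1-node3 are assumptions; drop the echo to actually run them):

```shell
# print the install command for each node; remove 'echo' to execute over ssh
for node in node1 node2 node3; do
  echo "ssh root@$node pveceph install -version hammer"
done
```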
This sets up an 'apt' package repository in /etc/apt/sources.list.d/ceph.list and installs the required software. If you want to install a different release, just use its name (e.g. firefly or giant) instead of hammer.
Create initial Ceph configuration
node1# pveceph init --network 10.10.10.0/24
This creates an initial config at /etc/pve/ceph.conf. That file is automatically distributed to all Proxmox VE nodes by using pmxcfs. The command also creates a symbolic link from /etc/ceph/ceph.conf pointing to that file. So you can simply run Ceph commands without the need to specify a configuration file.
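For reference, the generated /etc/pve/ceph.conf starts out roughly like this (an abbreviated sketch, not verbatim output: the fsid is generated per cluster and the exact set of options depends on the Ceph version):

```ini
[global]
        auth client required = cephx
        auth cluster required = cephx
        auth service required = cephx
        cluster network = 10.10.10.0/24
        public network = 10.10.10.0/24
        fsid = <generated UUID>
```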
Creating Ceph Monitors
node1# pveceph createmon
Continue with CLI or GUI
As soon as you have created the first monitor, you can start using the Proxmox GUI (see the video tutorial on Managing Ceph Server) to manage and view your Ceph configuration.
Of course, you can continue to use the command line tools (CLI). We continue with the CLI in this wiki article, but you should achieve the same results no matter which way you finish the remaining steps.
Creating more Ceph Monitors
You should run 3 monitors, one on each node. Create them via GUI or via CLI. So please login to the next node and run:
node2# pveceph createmon
And execute the same steps on the third node:
node3# pveceph createmon
If you add a node where you do not want to run a Ceph monitor, e.g. another node just for OSDs, you still need to install the Ceph packages with 'pveceph install', and initialize Ceph there with 'pveceph init'.
Creating Ceph OSDs
Creating OSDs can be done via the GUI (self-explanatory) or via the CLI, as explained here. Initializing an OSD can be done with:
# pveceph createosd /dev/sd[X]
If you want to use a dedicated SSD journal disk:
# pveceph createosd /dev/sd[X] -journal_dev /dev/sd[X]
Example: /dev/sdf as data disk (4TB) and /dev/sdb is the dedicated SSD journal disk
# pveceph createosd /dev/sdf -journal_dev /dev/sdb
This partitions the disk (data and journal partitions), creates the filesystems, starts the OSD, and adds it to the existing CRUSH map, so afterward the OSD is running and fully functional. Please create at least 12 OSDs, distributed among your nodes (4 on each node).
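Creating the four OSDs on one node can be scripted; a dry-run sketch that only prints the commands (the data disk names sdc-sdf and the journal SSD sdb are assumptions — adjust them to your hardware, and drop the echo to actually run it):

```shell
# print one createosd command per data disk, all sharing the SSD journal on /dev/sdb
for disk in sdc sdd sde sdf; do
  echo "pveceph createosd /dev/$disk -journal_dev /dev/sdb"
done
```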
Note that this command refuses to initialize a disk when it detects existing data. So if you want to overwrite a disk, you should remove the existing data first. You can do that using:
# ceph-disk zap /dev/sd[X]
- In some cases, disks that used to be part of a 3ware RAID need the following in addition to zap. To remove the partition table and boot sector, this should be sufficient:
# dd if=/dev/zero of=/dev/$DISK bs=1024 count=1
A small helper script with a safety prompt:
DISK=$1
if [ "$1" = "" ]; then
    echo "Need to supply a dev name like sdg, exiting"
    exit 1
fi
echo "Make sure this is the correct disk:"
echo $DISK
echo "You will end up with NO partition table when this proceeds, e.g.:"
echo "Disk /dev/$DISK doesn't contain a valid partition table"
echo "Press enter to proceed, or Ctrl-C to exit"
read x
dd if=/dev/zero of=/dev/$DISK bs=512 count=50000
You can create OSDs containing both journal and data partitions or you can place the journal on a dedicated SSD. Using a SSD journal disk is highly recommended if you expect good performance.
In order to use a dedicated journal disk (SSD), the disk needs to have a GPT partition table. You can create this with 'gdisk /dev/sd(x)'. If there is no GPT, you cannot select the disk as journal. Currently the journal size is fixed to 5 GB.
Ceph Pools
To calculate the number of placement groups for your pools, you can use the:
Ceph PGs per Pool Calculator
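The calculator implements the usual rule of thumb: about 100 PGs per OSD, divided by the pool's replication size and rounded up to a power of two. A quick shell approximation, assuming the 12 OSDs and replication size 3 from this article:

```shell
OSDS=12
SIZE=3
RAW=$(( OSDS * 100 / SIZE ))                      # target before rounding → 400
PG=1
while [ $PG -lt $RAW ]; do PG=$(( PG * 2 )); done # round up to the next power of two
echo $PG                                          # → 512
```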
Ceph Client
You can then configure Proxmox VE to use such pools to store VM images; just use the GUI ("Add Storage": RBD). A typical entry in the Proxmox VE storage configuration looks like:
# from /etc/pve/storage.cfg
rbd: my-ceph-storage
        monhost 10.10.10.1;10.10.10.2;10.10.10.3
        pool rbd
        content images
        username admin
You also need to copy the keyring to a predefined location.
- Note that the file name needs to be <storage id> + .keyring. The storage id is the expression after 'rbd:' in /etc/pve/storage.cfg, which is my-ceph-storage in this example.
# cd /etc/pve/priv/
# mkdir ceph
# cp /etc/ceph/ceph.client.admin.keyring ceph/my-ceph-storage.keyring
Why do we need a new command line tool (pveceph)?
pveceph is tailored to the specific Proxmox VE architecture. Proxmox VE provides a distributed file system (pmxcfs) to store configuration files.
We use this to store the Ceph configuration. The advantage is that all nodes see the same file, and there is no need to copy configuration data around using ssh/scp. The tool can also use additional information from your Proxmox VE setup.
Tools like ceph-deploy cannot take advantage of that architecture.
Note for users of HP SmartArray controllers
Proxmox will struggle to add an OSD on a SmartArray controller via the GUI. This is because an '!' is inserted into the device path where a '/' should be: for example, "/dev/cciss!c0d0" is shown instead of "/dev/cciss/c0d0".
This is, however, very easy to fix with a small file edit.
Connect to the terminal via SSH.
Open the following file in your editor of choice (Vi, Emacs, Nano, etc):
Find the line that says:
$devname =~ s|/dev/||;
Add the following string on a new line, immediately after the line mentioned above:
$devname =~ s|cciss/|cciss!|;
Finally, save the file.
Unfortunately, this will still not allow the disk to be added via the GUI, but you can now use the instructions above to add the OSD via the CLI, and then carry on as you otherwise would with creation of pools, etc.
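To illustrate what the added substitution does, here is the same mapping expressed with sed (purely for demonstration; the actual fix is the Perl line above):

```shell
# the kernel's sysfs names the device cciss!c0d0, while the device node is /dev/cciss/c0d0;
# the added line maps the '/' form to the '!' form
echo "cciss/c0d0" | sed 's|cciss/|cciss!|'   # → cciss!c0d0
```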
Further reading about Ceph
- Journal SSD Recommendations: https://www.sebastien-han.fr/blog/2014/10/10/ceph-how-to-test-if-your-ssd-is-suitable-as-a-journal-device/