Ceph Jewel to Luminous

Note: The current Ceph Luminous 12.1.x is a release candidate; for production-ready Ceph cluster packages, please wait for version 12.2.x.
Note: The current Ceph Luminous 12.1.x does not work with Opteron CPUs and (very) old Intel Xeons.

Introduction

This HOWTO explains the upgrade from Ceph Jewel to Luminous (12.1.0 or higher) on Proxmox VE 4.x.

For more information, see the Release Notes: http://docs.ceph.com/docs/master/release-notes/#v12-1-2-luminous-rc

Assumption

In this HOWTO we assume that all nodes are on the very latest Proxmox VE 4.4 version and that Ceph is on version Jewel (10.2.9).

The Cluster must be healthy and working.
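
You can verify these preconditions from the command line before you start; for example (the exact version strings will differ on your nodes):

pveversion                # should report a Proxmox VE 4.4 based release
ceph --version            # should report 10.2.9 (jewel)
ceph health               # should report HEALTH_OK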

Cluster Preparation

On a cluster member you have to set the sortbitwise flag. This is very important; if this flag is not set, you can lose all your data.

ceph osd set sortbitwise

To avoid rebalancing during the upgrade process, set the noout flag.

ceph osd set noout
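
Before proceeding, you can confirm that both flags are set by looking at the flags line of the OSD map; the output will also list any other flags of your cluster:

ceph osd dump | grep flags     # should list sortbitwise and noout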

Since Luminous you have to explicitly allow deleting a pool. Edit /etc/pve/ceph.conf with your preferred editor and add the following line to the [global] section:

mon allow pool delete = true
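
The relevant part of /etc/pve/ceph.conf could then look like the sketch below; everything except the added line is a placeholder for whatever your [global] section already contains:

[global]
     # ... existing settings (fsid, auth, networks, ...) stay untouched ...
     mon allow pool delete = true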

Preparation on each Ceph cluster node

Change the current Ceph repositories from Jewel to Luminous.

sed -i 's/jewel/luminous/' /etc/apt/sources.list.d/ceph.list
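
Afterwards, /etc/apt/sources.list.d/ceph.list should reference luminous instead of jewel. As an example, assuming the node was set up with the upstream Ceph repository for Debian Jessie (your actual repository line may differ):

# before the change
deb http://download.ceph.com/debian-jewel jessie main
# after the change
deb http://download.ceph.com/debian-luminous jessie main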

For more information, see Ceph Packages.

Upgrade on each Ceph cluster node

Upgrade all your nodes with the following commands.

apt-get update && apt-get dist-upgrade

This upgrades Ceph on your node to Luminous. After the update, the old Jewel binaries are still running.
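
You can see this by comparing the installed binaries with a running daemon, for example on a node that hosts the (hypothetical) OSD with id 0:

ceph --version               # installed binaries, now report 12.1.x (luminous)
ceph daemon osd.0 version    # running daemon, still reports 10.2.9 until restarted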

Restart the Monitor daemon

After all cluster nodes are upgraded, you have to restart the monitor daemon on each node where a monitor runs.

systemctl restart ceph-mon@<MON-ID>.service
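
For example, with a monitor ID of 0 (monitor IDs differ per setup; on Proxmox VE they are usually numeric), followed by a quorum check:

systemctl restart ceph-mon@0.service
ceph mon stat                # all monitors should be back in quorum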

Restart the OSD daemon on all nodes

Then restart all OSDs on all nodes.

systemctl restart ceph-osd.target
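
You can watch the OSDs come back before continuing, for example with:

ceph osd tree                # all OSDs should be up and in again
ceph -s                      # wait until peering and recovery have settled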

Check cluster status and adjust settings

After your nodes have successfully restarted the daemons, unset the 'noout' flag, either on the GUI or with this command:

ceph osd unset noout

Now check if your Ceph cluster is healthy.

ceph -s

You will get a warning like "require_osd_release < luminous"; you can fix it with the following command.

ceph osd require-osd-release luminous
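
You can verify the new setting afterwards, for example with:

ceph osd dump | grep require_osd_release     # should now show: require_osd_release luminous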

It is also recommended to set the CRUSH tunables to optimal, but this will produce a massive rebalance.

ceph osd set-require-min-compat-client jewel 
ceph osd crush tunables optimal
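
The rebalance triggered by switching to optimal tunables can be followed while it runs, for example with:

ceph -w     # live cluster log, shows the ongoing data movement
ceph -s     # summary including the remaining misplaced objects

It may be advisable to do this step outside peak hours, as the data movement puts additional load on the OSDs and the network.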