Ceph Jewel to Luminous


Introduction

This HOWTO explains the upgrade from Ceph Jewel to Luminous (12.2.0 or higher) on Proxmox VE 4.x in preparation for upgrading to PVE 5.x.

The latest Ceph version supported by pveceph in PVE 4.x is Ceph Jewel (10.2.x). An upgrade to Ceph Luminous (12.2.x) is only intended temporarily, as the first step of upgrading to PVE 5.x.

For more information, see the Release Notes.

Assumption

In this HOWTO we assume that all nodes run the very latest Proxmox VE 4.4 version and that Ceph is on version Jewel (10.2.9 or higher).

The cluster must be healthy and working.

Cluster Preparation

On a cluster member you have to set the sortbitwise flag. This is very important: if this flag is not set, you can lose all your data.

ceph osd set sortbitwise

To avoid rebalancing during the upgrade process, set noout.

ceph osd set noout
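To verify that both flags are now set, you can check the flags line of the OSD map; the output below is only illustrative and may contain additional flags on your cluster:

ceph osd dump | grep flags
flags sortbitwise,noout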

Since Luminous, deleting a pool must be explicitly allowed. Edit /etc/pve/ceph.conf with your preferred editor and add the following line to the [global] section:

mon allow pool delete = true
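For illustration, the [global] section of /etc/pve/ceph.conf would then contain, among whatever entries your configuration already has (indicated by the ellipsis), something like:

[global]
    ...
    mon allow pool delete = true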

Preparation on each ceph cluster node

Change the current Ceph repositories from Jewel to Luminous.

sed -i 's/jewel/luminous/' /etc/apt/sources.list.d/ceph.list
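Afterwards /etc/apt/sources.list.d/ceph.list should point to the Luminous repository. Assuming the default entry set up by pveceph (the exact URL and Debian suite on your system may differ), it would look similar to:

deb http://download.ceph.com/debian-luminous jessie main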

For more information, see Ceph Packages (http://docs.ceph.com/docs/master/install/get-packages/).


Note: The Ceph Luminous version to install must not be newer than the version released in the Ceph repository from Proxmox.

In some cases the packages released in the Ceph upstream repository are newer than those in the Ceph repository from Proxmox. Installing the newer upstream packages will prevent the use of the Ceph repository from Proxmox when upgrading to PVE 5.x.

You can find the Ceph Luminous packages from Proxmox here: http://download.proxmox.com/debian/ceph-luminous/dists/stretch/main/binary-amd64/

 (upstream) download.ceph.com    -> 12.2.7
 (Proxmox)  download.proxmox.com -> 12.2.5

In this example the upstream version is newer than the version from Proxmox, so the lower version (e.g. 12.2.5) needs to be installed explicitly.

 apt install ceph=12.2.5

To install a specific version explicitly with apt, append the version directly to the package name, as shown above.
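Before pinning a version, you can list the versions apt knows about with the standard apt-cache tool; the candidate version should match the one from the Proxmox repository:

apt-cache policy ceph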

Upgrade on each ceph cluster node

Upgrade all your nodes with the following command.

apt-get update && apt-get dist-upgrade

This will upgrade Ceph on the node to Luminous. After the update, all services are still running with the old Jewel binaries.
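You can confirm this mismatch between installed and running binaries: ceph --version prints the version of the newly installed binaries, while querying a daemon over its local admin socket shows the version it is actually running (replace <MON-ID> with the ID of a local monitor):

ceph --version
ceph daemon mon.<MON-ID> version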

Restart the Monitor daemons

After all cluster nodes are upgraded, you have to restart the monitor on each node where a monitor is configured.

systemctl restart ceph-mon@<MON-ID>.service
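If you restart the monitors one node at a time, you can verify after each restart that the monitor has rejoined the quorum before moving on, for example with:

ceph quorum_status --format json-pretty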

Verify Monitor instance versions

Print the binary versions of all currently running Monitor instances in your cluster. Verify that all monitors are running the same Ceph version, and that the version number starts with 12 (X, Y, and AAA are placeholders):

ceph mon versions
{
    "ceph version 12.X.Y (AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA) luminous (rc)": 3
}

The trailing number shows how many monitor instances are running that version.

Create Manager instances

Create a manager instance on each node where a monitor is configured:

pveceph createmgr
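To check that the manager daemons came up, look at the mgr line in the cluster status; it should list one active manager and the remaining ones as standbys. The node names below are placeholders:

ceph -s | grep mgr
    mgr: node1(active), standbys: node2, node3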

Restart the OSD daemon on all nodes

Then restart all OSD instances on all nodes:

systemctl restart ceph-osd.target

Check the currently running binary version of all running OSD instances in your cluster:

ceph osd versions

After restarting all OSD instances on all nodes, this should output one line with a Ceph Luminous version string followed by the total number of OSDs in your cluster.
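For illustration, on a cluster with, say, 12 OSDs the output would then look similar to the monitor example above; X, Y, and AAA are again placeholders, and the release tag may read (rc) or (stable) depending on the exact version:

ceph osd versions
{
    "ceph version 12.X.Y (AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA) luminous (stable)": 12
}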

Check cluster status and adjust settings

After the daemons have been restarted successfully on all nodes, unset the 'noout' flag, either on the GUI or with this command.

ceph osd unset noout

Now check if your Ceph cluster is healthy.

ceph -s

If you get a warning like "require_osd_release < luminous", you can fix it with the following command.

ceph osd require-osd-release luminous

It is also recommended to set the CRUSH tunables to optimal, but note that this will produce a massive rebalance.

ceph osd set-require-min-compat-client jewel 
ceph osd crush tunables optimal 
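You can follow the progress of the rebalance with the usual status commands; ceph -w streams cluster status changes until interrupted with CTRL+C:

ceph -s
ceph -w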

After you set all tunables, you might see the following message: "application not enabled on 2 pool(s)". Starting with Ceph Luminous, a pool needs to be associated with an application.

ceph osd pool application enable rbd rbd
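Here the first argument is the pool name and the second the application tag; 'rbd' is the tag for pools holding RBD images. If the warning mentions further pools, repeat the command for each of them with the respective pool name (<pool-name> is a placeholder):

ceph osd pool application enable <pool-name> rbd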