Ceph Jewel to Luminous
Introduction
This HOWTO explains the upgrade from Ceph Jewel to Luminous (12.2.0 or higher) on Proxmox VE 4.x in preparation for upgrading to PVE 5.x.
The latest Ceph version supported by pveceph on PVE 4.x is Ceph Jewel (10.2.x). An upgrade to Ceph Luminous (12.2.x) is only supported temporarily, as the first step of upgrading to PVE 5.x.
For more information, see the Release Notes.
Assumption
This HOWTO assumes that all nodes are on the very latest Proxmox VE 4.4 version and that Ceph is on version Jewel (10.2.9 or higher).
The cluster must be healthy and working.
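You can quickly verify these assumptions before starting. This is only a sketch; the exact version strings will differ on your cluster:
pveversion   # should report a pve-manager 4.4 version
ceph -v      # should report ceph version 10.2.9 or higher
ceph health  # should report HEALTH_OK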
Cluster Preparation
On a cluster member you have to set the sortbitwise flag. This is very important: if this flag is not set, you can lose all your data.
ceph osd set sortbitwise
To avoid rebalancing during the upgrade process, set the noout flag.
ceph osd set noout
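You can verify that both flags are set by inspecting the OSD map; the flags line should now contain sortbitwise and noout (example output, your map may list additional flags):
ceph osd dump | grep flags
# flags sortbitwise,noout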
Since Luminous, deleting a pool has to be explicitly allowed. Edit /etc/pve/ceph.conf with your preferred editor and add this line to the [global] section:
mon allow pool delete = true
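The [global] section should then look similar to this minimal sketch; the comment stands in for the entries already present in your configuration:
[global]
     # ... your existing entries (fsid, auth, networks, ...) remain unchanged ...
     mon allow pool delete = true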
Preparation on each ceph cluster node
Change the current Ceph repositories from the upstream Jewel entry to the Luminous repository provided by Proxmox:
echo "deb http://download.proxmox.com/debian/ceph-luminous jessie main" > /etc/apt/sources.list.d/ceph.list
Note: this repository is only provided for upgrading PVE 4.x with Ceph Jewel to PVE 5.x with Ceph Luminous - it will not receive any further updates, and PVE 4.x does not (fully) support Ceph Luminous in the web interface.
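After switching the repository, you can check that APT now offers Luminous packages. A hedged example; the exact candidate version will vary:
apt-get update
apt-cache policy ceph
# the candidate version should start with 12.2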
Upgrade on each ceph cluster node
Upgrade all your nodes with the following commands.
apt-get update && apt-get dist-upgrade
This will upgrade Ceph on your node to Luminous. After the update, all services are still running with the old Jewel binaries.
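You can observe this by comparing the installed binary with a running daemon queried over its admin socket (run on a node hosting a monitor; <MON-ID> is a placeholder):
ceph -v                           # installed binary: 12.2.x luminous
ceph daemon mon.<MON-ID> version  # running monitor: still 10.2.x jewel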
Restart the Monitor daemons
After all cluster nodes are upgraded, you have to restart the monitor on each node where a monitor is configured.
systemctl restart ceph-mon@<MON-ID>.service
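For example, if the monitor ID matches the node name (node1 is a hypothetical name):
systemctl restart ceph-mon@node1.service
systemctl status ceph-mon@node1.service   # confirm the monitor came back up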
Verify Monitor instance versions
Print the binary versions of all currently running Monitor instances in your cluster. Verify that all monitors are running the same Ceph version, and that the version number starts with 12 (X, Y, and AAA are placeholders):
ceph mon versions
{ "ceph version 12.X.Y (AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA) luminous (rc)": 3 }
The last number shows the number of monitor instances.
Create Manager instances
Create a manager instance on each node where a monitor is configured:
pveceph createmgr
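Afterwards, the cluster status should show an active manager. A hedged example; the node names are placeholders:
ceph -s | grep mgr
# mgr: node1(active), standbys: node2, node3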
Restart the OSD daemon on all nodes
Then restart all OSD instances on all nodes:
systemctl restart ceph-osd.target
Check the currently running binary version of all running OSD instances in your cluster:
ceph osd versions
After restarting all OSD instances on all nodes, this should output one line with a Ceph Luminous version string followed by the total number of OSDs in your cluster.
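A hedged example for a cluster with 12 OSDs (X, Y, and the hash are placeholders, as above):
{
    "ceph version 12.X.Y (AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA) luminous (rc)": 12
}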
Check cluster status and adjust settings
After all nodes have successfully restarted their daemons, unset the 'noout' flag, either in the GUI or with this command:
ceph osd unset noout
Now check if your Ceph cluster is healthy.
ceph -s
You will get a warning like "require_osd_release < luminous"; you can fix it with the following command.
ceph osd require-osd-release luminous
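You can verify the new setting in the OSD map (a sketch):
ceph osd dump | grep require_osd_release
# require_osd_release luminous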
It is also recommended to set the CRUSH tunables to optimal, but this will produce a massive rebalance.
ceph osd set-require-min-compat-client jewel
ceph osd crush tunables optimal
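You can watch the rebalance until all placement groups are active+clean again, for example with:
ceph -w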
After you set all tunables, you might see the following message: "application not enabled on 2 pool(s)". Starting with Ceph Luminous, a pool needs to be associated with an application. For the default rbd pool:
ceph osd pool application enable rbd rbd
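For any additional RBD pools, substitute the pool name (mypool is a hypothetical example); the association can be verified afterwards:
ceph osd pool application enable mypool rbd
ceph osd pool application get mypool
# { "rbd": {} }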
It is recommended to wait until a rebalance is finished before proceeding with the upgrade from PVE 4.x to PVE 5.x.
IMPORTANT: Once you have upgraded Ceph to Luminous, do not make any changes to any Ceph-backed VMs or CTs until you have finished the upgrade to PVE 5.0! PVE 4.4 is not able to handle all aspects of the newer Ceph version (e.g., default units changed between Jewel and Luminous).