Ceph Quincy to Reef

Introduction

This article explains how to upgrade Ceph from Quincy (17.2+) to Reef (18.2+) on Proxmox VE 8.

Important Release Notes

Note: Filestore OSDs are deprecated. Before you proceed, destroy your Filestore OSDs and recreate them as Bluestore OSDs, one by one.
  • A health warning is now reported if the require-osd-release flag is not set to the appropriate release after a cluster upgrade.

For more information, see the Ceph Reef Release Notes.

Assumption

We assume that all nodes are on the latest Proxmox VE 8.0 (or higher) version and Ceph is on version Quincy (17.2.6-pve1+3 or higher). If not, see the Ceph Pacific to Quincy upgrade guide.
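
You can check the currently installed versions on a node, for example with:

pveversion
ceph -v

The first command should report a pve-manager version of 8.0 or higher, the second a Ceph version of 17.2 (quincy) or higher.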

Note: While it is in theory possible to upgrade directly from the older Ceph Pacific (16.2+) release to Reef (18.2+), we do not provide builds of Ceph Pacific for Proxmox VE 8, making this impossible.

The cluster must be healthy and working!

Note: All commands starting with ceph need to be run only once; it does not matter on which node of the Ceph cluster they are executed.

Enable msgrv2 protocol and update Ceph configuration

If you did not already do so when you upgraded to Nautilus, Octopus or Pacific, you must enable the new v2 network protocol. Issue the following command:

ceph mon enable-msgr2

This instructs all monitors that bind to the old default port 6789 for the legacy v1 protocol to also bind to the new port 3300 for the v2 protocol. To see if all monitors have been updated, run

ceph mon dump

and verify that each monitor has both a v2: and v1: address listed.
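
Each monitor entry in the output should then look similar to the following (IP address and monitor name are just placeholders):

0: [v2:10.10.10.1:3300/0,v1:10.10.10.1:6789/0] mon.node1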

Preparation on each Ceph cluster node

Please also note that currently (2023-09-15) Ceph Reef is only available through the test and no-subscription repositories. While we do not know of any issues with the Ceph Reef release, we still recommend that production systems hold off until we make the release available on our enterprise repository.

Change the current Ceph repositories from Quincy to Reef.

sed -i 's/quincy/reef/' /etc/apt/sources.list.d/ceph.list

Your /etc/apt/sources.list.d/ceph.list should now look like this:

deb https://enterprise.proxmox.com/debian/ceph-reef bookworm enterprise

Note, with Proxmox VE 8 we introduced an enterprise repository for Ceph, which is accessible with a valid Proxmox VE subscription. If you do not have a valid subscription you can use the publicly available no-subscription or test repositories, for example:

deb http://download.proxmox.com/debian/ceph-reef bookworm no-subscription
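
To make sure that no other apt source still points to the old release, you can additionally search all repository definitions, for example with:

grep -r quincy /etc/apt/sources.list /etc/apt/sources.list.d/

This should not return any matches.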

Set the 'noout' flag

Set the noout flag for the duration of the upgrade (optional, but recommended):

ceph osd set noout

Or via the GUI in the OSD tab (Manage Global Flags).
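
To verify that the flag is indeed set, you can check the OSD flags, for example with:

ceph osd dump | grep flags

The reported flags should include noout. Alternatively, ceph -s will now show a HEALTH_WARN with noout flag(s) set.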

Upgrade on each Ceph cluster node

Upgrade all your nodes with the following commands, or install the latest updates via the GUI. This will upgrade Ceph on your nodes to Reef.

apt update
apt full-upgrade

After the update, your setup will still be running the old Quincy binaries.
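
You can confirm this by listing the versions of all running daemons:

ceph versions

As long as the daemons have not been restarted, they will still report 17.2.x (quincy), even though the installed packages are already on Reef.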

Restart the monitor daemon

Note: You can use the web interface or the command line to restart Ceph services.

After upgrading all cluster nodes, you have to restart the monitor on each node where a monitor runs.

systemctl restart ceph-mon.target

Do so one node at a time. Wait after each restart and periodically check the status of the cluster:

ceph -s

It should be in HEALTH_OK or

HEALTH_WARN
noout flag(s) set

Once all monitors are up, verify that the monitor upgrade is complete. Look for the Reef string in the mon map. The command

ceph mon dump | grep min_mon_release

should report

min_mon_release 18 (reef)

If it does not, this implies that one or more monitors haven’t been upgraded and restarted, and/or that the quorum doesn't include all monitors.
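
In that case, restart the monitor daemon on the affected node again. On Proxmox VE the monitor ID usually matches the node name, so you can also address the single unit directly, for example (with <nodename> being a placeholder for your node's name):

systemctl restart ceph-mon@<nodename>.service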

Restart the manager daemons on all nodes

If the managers did not automatically restart with the monitors, restart them now on all nodes:

systemctl restart ceph-mgr.target

Verify that the ceph-mgr daemons are running by checking ceph -s:

ceph -s
...
 services:
  mon: 3 daemons, quorum foo,bar,baz
  mgr: foo(active), standbys: bar, baz
...

Restart the OSD daemon on all nodes

Restart all OSDs. Only restart OSDs on one node at a time to avoid loss of data redundancy. To restart all OSDs on a node, run the following command:

systemctl restart ceph-osd.target

Wait after each restart and periodically check the status of the cluster:

ceph status

It should be in HEALTH_OK or

HEALTH_WARN
noout flag(s) set

Once all OSDs are running with the latest versions, the following warning can appear:

all OSDs are running reef or later but require_osd_release < reef
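
To check which version each OSD is currently running, you can, for example, use:

ceph osd versions

Once all OSDs report 18.2.x (reef), continue with the next step.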

Disallow pre-Reef OSDs and enable all new Reef-only functionality:

ceph osd require-osd-release reef
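
To verify that the flag was applied, check the OSD map, for example with:

ceph osd dump | grep require_osd_release

It should now report require_osd_release reef.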

Upgrade all CephFS MDS daemons

For each CephFS file system (you can list the file systems with ceph fs ls):

  1. Disable standby_replay
    ceph fs set <fs_name> allow_standby_replay false
  2. Reduce the number of ranks to 1 (if you plan to restore it later, first take note of the original number of MDS daemons):
    ceph status
    ceph fs get <fs_name> | grep max_mds
    ceph fs set <fs_name> max_mds 1
    With a rank higher than 1 you will see more than one MDS active for that Ceph FS.
  3. Wait for the cluster to deactivate any non-zero ranks by periodically checking the status of Ceph (see also the example after this list):
    ceph status
    The number of active MDS should go down to the number of file systems you have.
    Alternatively, check in the CephFS panel of the GUI that each Ceph file system has only one active MDS.
  4. Take all standby MDS daemons offline on the appropriate hosts with:
    systemctl stop ceph-mds.target
  5. Confirm that only one MDS is online and is on rank 0 for your FS:
    ceph status
  6. Upgrade the last remaining MDS daemon by restarting the daemon:
    systemctl restart ceph-mds.target
  7. Restart all standby MDS daemons that were taken offline:
    systemctl start ceph-mds.target
  8. Restore the original value of max_mds for the volume:
    ceph fs set <fs_name> max_mds <original_max_mds>
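
Throughout these steps, ceph fs status provides a compact overview of the active ranks and standby daemons of each file system. For example, for a hypothetical file system named cephfs (the name is just a placeholder):

ceph fs status cephfs

The output lists each active rank with its state and the serving MDS, followed by the remaining standby daemons.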

Unset the 'noout' flag

Once the upgrade process is finished, don't forget to unset the noout flag.

ceph osd unset noout

Or via the GUI in the OSD tab (Manage Global Flags).
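
To confirm that the whole cluster is now running Reef, you can list the daemon versions once more:

ceph versions

All daemons (mon, mgr, osd and, if used, mds) should now report 18.2.x (reef).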

Notes

  • When restarting an MGR, log lines containing "has missing NOTIFY_TYPES member" can be ignored.

See Also