Ceph Octopus to Pacific

Introduction

This article explains how to upgrade Ceph from Octopus to Pacific (16.2.4 or higher) on Proxmox VE 7.x.

For more information, see the Release Notes.

Assumption

We assume that all nodes are on the latest Proxmox VE 7.0 (or higher) version and Ceph is on version Octopus (15.2.13-pve1 or higher).

  • Ceph version is 15.2.x Octopus
    If not, please see the Ceph Nautilus to Octopus upgrade guide.
    Note: While in theory one could upgrade from Ceph Nautilus to Pacific directly, Proxmox VE only supports the upgrade from Octopus to Pacific.
  • Already upgraded to Proxmox VE 7.x
    If not, please see the Upgrade from 6.x to 7.0 guide.
  • The cluster must be healthy and working!

Enable msgrv2 protocol and update Ceph configuration

If you did not already do so when you upgraded to Nautilus or Octopus, you must enable the new v2 network protocol. Issue the following command:

ceph mon enable-msgr2

This will instruct all monitors that bind to the old default port 6789 for the legacy v1 protocol to also bind to the new 3300 v2 protocol port. To see if all monitors have been updated, run

ceph mon dump

and verify that each monitor has both a v2: and v1: address listed.
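
For example, on a three-node cluster the relevant part of the output could look roughly like this (monitor names and addresses are placeholders):

0: [v2:192.168.1.1:3300/0,v1:192.168.1.1:6789/0] mon.foo
1: [v2:192.168.1.2:3300/0,v1:192.168.1.2:6789/0] mon.bar
2: [v2:192.168.1.3:3300/0,v1:192.168.1.3:6789/0] mon.baz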

Preparation on each Ceph cluster node

Change the current Ceph repositories from Octopus to Pacific.

sed -i 's/octopus/pacific/' /etc/apt/sources.list.d/ceph.list

Your /etc/apt/sources.list.d/ceph.list should now look like this

deb http://download.proxmox.com/debian/ceph-pacific bullseye main
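
If you want to double-check that no other APT source file still references Octopus, you can optionally run:

grep -ri octopus /etc/apt/sources.list /etc/apt/sources.list.d/

It should not return any active (uncommented) lines.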

Set the 'noout' flag

Set the noout flag for the duration of the upgrade (optional, but recommended):

ceph osd set noout

Or via the GUI in the OSD tab (Manage Global Flags).
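
To verify that the flag is set, you can for example look at the cluster flags in the OSD map:

ceph osd dump | grep flags

The reported flags should include noout.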

Upgrade on each Ceph cluster node

Upgrade all your nodes with the following commands. This will upgrade Ceph on the node to Pacific.

apt update
apt full-upgrade

After the update, your setup will still be running the old Octopus binaries.
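
You can verify this by listing the versions that the running daemons report; at this point all of them should still show 15.2.x:

ceph versions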

Restart the monitor daemon

Note: You can use the web-interface or the command-line to restart ceph services.

After upgrading all cluster nodes, you have to restart the monitor on each node where a monitor runs.

systemctl restart ceph-mon.target

Once all monitors are up, verify that the monitor upgrade is complete. Look for the Pacific string in the mon map. The command

ceph mon dump | grep min_mon_release

should report

min_mon_release 16 (pacific)

If it does not, this implies that one or more monitors haven’t been upgraded and restarted, and/or that the quorum doesn't include all monitors.
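
To see which monitors are currently in quorum, you can check:

ceph mon stat

and then restart (or investigate) any monitor that is missing.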

Restart the manager daemons on all nodes

Then restart all managers on all nodes

systemctl restart ceph-mgr.target

Verify that the ceph-mgr daemons are running by checking ceph -s

ceph -s
...
 services:
  mon: 3 daemons, quorum foo,bar,baz
  mgr: foo(active), standbys: bar, baz
...

Restart the OSD daemon on all nodes

Important: After the upgrade, the first time each OSD starts, it will do a format conversion to improve the accounting for “omap” data. It may take a few minutes or up to a few hours (e.g. on an HDD with lots of omap data).

It is best to restart the OSDs on one node at a time:

systemctl restart ceph-osd.target

Wait after each restart and periodically check the status of the cluster:

ceph status

It should be in HEALTH_OK or

HEALTH_WARN
noout flag(s) set

You can disable this automatic conversion with:

 ceph config set osd bluestore_fsck_quick_fix_on_mount false

But the conversion should be done as soon as possible.
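
If you disabled it, you can trigger the conversion at a later, quieter time by setting the option back to true and restarting the OSDs again (one node at a time):

ceph config set osd bluestore_fsck_quick_fix_on_mount true
systemctl restart ceph-osd.target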

Disallow pre-Pacific OSDs and enable all new Pacific-only functionality

ceph osd require-osd-release pacific

NOTE: Missing this step breaks starting OSDs which still have their required release set to Ceph Luminous or older (for example, if you upgraded from Luminous -> Nautilus -> Octopus).
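
To verify that the flag is set, you can check the OSD map:

ceph osd dump | grep require_osd_release

It should report require_osd_release pacific.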

Upgrade all CephFS MDS daemons

For each CephFS file system,

  1. Make sure only one MDS is running
    The default installation uses one active MDS. To check if this is the case on your cluster, check the output of ceph status and verify that there is only one active MDS.
    Reduce the number of ranks to 1 (if you plan to restore it later, first take note of the original number of MDS daemons):
    ceph status
    ceph fs get <fs_name> | grep max_mds
    ceph fs set <fs_name> max_mds 1
  2. Wait for the cluster to deactivate any non-zero ranks by periodically checking the status:
    ceph status
  3. Take all standby MDS daemons offline on the appropriate hosts with:
    systemctl stop ceph-mds.target
  4. Confirm that only one MDS is online and is on rank 0 for your FS:
    ceph status
  5. Upgrade the last remaining MDS daemon by restarting the daemon:
    systemctl restart ceph-mds.target
  6. Restart all standby MDS daemons that were taken offline:
    systemctl start ceph-mds.target
  7. Restore the original value of max_mds for the volume:
    ceph fs set <fs_name> max_mds <original_max_mds>
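
While working through the steps above, ceph status shows the active and standby MDS daemons. For a per-file-system view of ranks and standbys, you can also use (replace <fs_name> with your file system name, e.g. cephfs if you kept the Proxmox VE default):

ceph fs status <fs_name>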

Unset the 'noout' flag

Once the upgrade process is finished, don't forget to unset the noout flag.

ceph osd unset noout

Or via the GUI in the OSD tab (Manage Global Flags).

Upgrade Tunables

Note: These are not needed if already done with the Nautilus to Octopus upgrade.

If your CRUSH tunables are older than Hammer, Ceph will now issue a health warning. If you see a health alert to that effect, you can revert this change with:

ceph config set mon mon_crush_min_required_version firefly

If Ceph does not complain, however, then we recommend you also switch any existing CRUSH buckets to straw2, which was added back in the Hammer release. If you have any ‘straw’ buckets, this will result in a modest amount of data movement, but generally nothing too severe:

# create a backup first
ceph osd getcrushmap -o backup-crushmap
ceph osd crush set-all-straw-buckets-to-straw2

If there are problems, you can easily revert with:

ceph osd setcrushmap -i backup-crushmap

Moving to ‘straw2’ buckets will unlock a few recent features, like the crush-compat balancer mode added back in Nautilus.
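
If you are unsure which tunables and bucket algorithms your cluster currently uses, you can inspect them before changing anything:

ceph osd crush show-tunables
ceph osd crush dump | grep '"alg"'

The first command prints the current tunables, the second lists the bucket algorithm ("straw" or "straw2") of each CRUSH bucket.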

Known Issues

RocksDB resharding broken

Setting up RocksDB resharding after upgrading is currently (16.2.4) broken and will require removing and re-adding the OSD. Please avoid triggering any reshard until further notice.

Placement Group (PG) count warning for pools

See the Ceph Nautilus to Octopus upgrade guide, which already covers this warning in more detail.

See Also