Upgrade from 4.x to 5.0

Note: Proxmox VE 5.0 is currently in beta status

Introduction

Proxmox VE 5.x introduces major new features, therefore the upgrade must be carefully planned and tested. Depending on your existing configuration, several manual steps are required, including some downtime. NEVER start the upgrade process without a valid backup and without first testing the upgrade in a test lab setup.

If you run a customized installation and/or you installed additional packages, for example for distributed storage like Ceph or sheepdog, or any other third-party packages, you need to make sure that you also upgrade these packages to versions built for Debian Stretch.

Generally speaking, there are two ways to move from 4.x to 5.x:

  • New installation on new hardware (and restore VMs from backup) - safest way, recommended!
  • In-place upgrade via apt, step by step

In both cases, clear the browser's cache after the upgrade and reload the GUI page, otherwise you may see a lot of display glitches.

Known Issues

  • Secure live migration of virtual machines is not working due to a bug in the Debian ssh package. To migrate virtual machines between PVE 5.0 nodes, you need to set the following parameter in /etc/pve/datacenter.cfg:

migration: type=insecure
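
As a minimal sketch (assuming /etc/pve/datacenter.cfg does not already contain a migration line), the setting can be appended on one of the cluster nodes:

echo "migration: type=insecure" >> /etc/pve/datacenter.cfg

Keep in mind that this disables the encrypted migration tunnel, so only use it on a trusted migration network.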

New installation

  • Backup all VMs and containers to external media (see Backup and Restore)
  • Backup all files in /etc. You will need various files in /etc/pve, as well as /etc/passwd, /etc/network/interfaces, /etc/resolv.conf and others, depending on what has been changed from the defaults (see the example after this list)
  • Install Proxmox VE from ISO (this will wipe all data on the existing host)
  • Rebuild the cluster if you had any
  • Restore the file /etc/pve/storage.cfg (this will re-map and make available any external media you used for backup)
  • Restore firewall configs /etc/pve/firewall/ and /etc/pve/nodes/<node>/host.fw (if relevant)
  • Restore full VMs from Backups (see Backup and Restore)
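
As an example sketch for the /etc backup step above, assuming your external backup media is mounted at the hypothetical path /mnt/backup:

tar czf /mnt/backup/etc-$(hostname).tar.gz /etc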

Bypassing Backup and Restore

The following is only for advanced users who have knowledge of the Proxmox configuration files!

Since Backup and Restore can be a time-consuming process, a faster method is described in the following. It is possible only

  • for KVM (i.e. not for containers)
  • if the (virtual) disk(s) for the VM(s) is (are) located on a storage which is not touched by the installation process (e.g. NFS on an external server)

The steps

  • Backup all VMs
  • Restore full VMs from Backups

will be replaced by

  • Backup the <vmid>.conf file(s) for the respective machine(s); they are located under /etc/pve/nodes/<nodename>/lxc/ and /etc/pve/nodes/<nodename>/qemu-server/ respectively
  • Restore the <vmid>.conf file(s) for the respective machine(s) (an example is sketched below)

Note: /etc/pve/lxc/ and /etc/pve/qemu-server/ are virtual symlinks to the current node's lxc and qemu-server directories.
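
As an illustration only (the VM ID 100, the placeholder <nodename> and the mount point /mnt/backup are assumptions, adjust them to your setup), saving and restoring a single VM config could look like this:

cp /etc/pve/nodes/<nodename>/qemu-server/100.conf /mnt/backup/
# after the new installation (and cluster rebuild, if any):
cp /mnt/backup/100.conf /etc/pve/nodes/<nodename>/qemu-server/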

In-place upgrade

In-place upgrades are done with apt-get, so make sure that you are familiar with apt before you start here.

Preconditions

  • upgraded to the latest 4.4 version
  • reliable access to all configured storages
  • healthy cluster
  • no VM or CT running
  • valid backup of all VMs and CTs (needed if something goes wrong)
  • correct repository configuration
  • at least 1GB free disk space at the root mount point
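
Some of these preconditions can be checked on the command line; for example (read-only checks, interpret the output yourself):

pveversion -v
pvecm status
df -h /

pveversion -v should show a 4.4 pve-manager version, pvecm status is only relevant on clustered setups, and df -h / shows the free space on the root mount point.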

Actions Step by Step

Everything has to be done on each Proxmox node's command line (via console or ssh; preferably via console in order to avoid interrupted ssh connections). Again, make sure that you have a valid backup of all CT and VM before you start.

Tip: It is advisable to perform a dry-run of the upgrade first. Install the PVE 4.4 ISO on testing hardware, then upgrade this installation to the latest minor version of PVE 4.4 using the test repo (see Package repositories), then copy/create relevant configurations on the test machine to replicate your production setup as closely as possible.

Add the PVE repositories to your installation

First make sure that your current installation has the latest packages of the PVE 4.4 release:

apt-get update && apt-get dist-upgrade

Update the Debian repository entry to stretch.

sed -i 's/jessie/stretch/g' /etc/apt/sources.list

Disable the PVE Enterprise repository, the no-subscription repository and the pvetest repository for PVE 4.x. You need to add a # symbol to comment out these repositories in /etc/apt/sources.list.d/pve-enterprise.list and /etc/apt/sources.list. See Package_Repositories
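
For example, the enterprise repository entry could be commented out like this (this assumes the default single-entry pve-enterprise.list shipped with PVE 4.x; verify the file afterwards):

sed -i 's/^deb/#deb/' /etc/apt/sources.list.d/pve-enterprise.list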

Add the PVE 5 beta package repository:

echo "deb http://download.proxmox.com/debian/pve stretch pvetest" > /etc/apt/sources.list.d/pve-install-repo.list

If there is a backports line then remove it. Currently the upgrade has not been tested when packages from the backports repository are installed.
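
To check whether such a line is present, a quick (read-only) search could look like this:

grep -rn backports /etc/apt/sources.list /etc/apt/sources.list.d/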

Update the repositories data:

apt-get update

Upgrade the basic system to Debian Stretch and PVE 5.0

This action will take some time - depending on the system's performance, it can take up to 60 minutes or even more. If you run on SSD, the dist-upgrade can be finished in 5 minutes.

Start with this step to get the initial set of upgraded packages.

 apt-get dist-upgrade

During either of the above, you may be asked whether to keep your old configuration files or install the package maintainer's new versions. Do with them as you see fit, but these prompts are not relevant to the Proxmox upgrade.

Reboot the system in order to use the new PVE kernel
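
After the reboot you can verify, for example, that the new kernel and PVE version are active (the exact version strings will differ on your system):

uname -r
pveversion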

Ceph Upgrade

The upgrade from PVE 4.x to 5.0 includes an upgrade to the Ceph Luminous release. When upgrading from the supported PVE 4.x release (Hammer), make sure you read the release notes of the Kraken and Luminous releases.

If the previous system upgrade ended successfully, upgrade to Ceph Luminous with

pveceph install

This command will set the proper Ceph package repository and upgrade the Ceph packages.
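
Assuming Ceph is configured on the node, you can afterwards inspect the overall cluster state with a plain status query, for example:

ceph -s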

Troubleshooting

  • Failing upgrade to "stretch"

Make sure that the repository configuration for stretch is correct.
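
For example, a quick (read-only) check of which Debian suites your repository files still reference:

grep -rn 'jessie\|stretch' /etc/apt/sources.list /etc/apt/sources.list.d/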

If there was a network failure and the upgrade was only partially completed, try to repair the situation with

apt-get -fy install

  • Unable to boot due to grub failure

See Recover_From_Grub_Failure
