Upgrade from 4.x to 5.0

Introduction

Proxmox VE 5.x introduces major new features, so the upgrade must be carefully planned and tested. Depending on your existing configuration, several manual steps are required, including some downtime. NEVER start the upgrade process without a valid backup and without first testing the procedure in a test lab setup.

If you run a customized installation and/or you installed additional packages, for example for sheepdog, or any other third-party packages, you need to make sure that you also upgrade these packages to Debian Stretch.

Generally speaking, there are two ways to move from 4.x to 5.x:

  • New installation on new hardware (and restore VMs from backup)
  • In-place upgrade via apt, step by step

In both cases, empty the browser's cache after the upgrade and reload the GUI page; otherwise you may see a lot of rendering glitches.

If you run a PVE 4 cluster, it's tested and supported to add a PVE 5 node and migrate your guests to the new host.

Caveats to know before you start

  • If using Ceph, upgrade your Ceph cluster to the Luminous release before you upgrade Proxmox VE, following the article Ceph Jewel to Luminous.

New installation

  • Back up all VMs and containers to external media (see Backup and Restore)
  • Back up all files in /etc (a sketch follows this list). You will need various files in /etc/pve, as well as /etc/passwd, /etc/network/interfaces, /etc/resolv.conf and others, depending on what has been configured beyond the defaults.
  • Install Proxmox VE from ISO (this will wipe all data on the existing host)
  • Rebuild the cluster, if you had one
  • Restore the file /etc/pve/storage.cfg (this will re-map and make available any external media you used for backup)
  • Restore firewall configs /etc/pve/firewall/ and /etc/pve/nodes/<node>/host.fw (if relevant)
  • Restore full VMs from Backups (see Backup and Restore)
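
A minimal sketch of the /etc backup step, assuming the external backup media is mounted at /mnt/backup (a hypothetical path, adjust to your setup):

 # archive /etc (includes the mounted /etc/pve while pve-cluster is running)
 tar czf /mnt/backup/etc-backup-$(hostname)-$(date +%F).tar.gz /etc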

If you feel comfortable with the command line, and all your VMs/CTs are on shared storage, you can also follow the procedure Bypassing backup and restore when upgrading.


In-place upgrade

In-place upgrades are done with apt-get, so make sure that you are familiar with apt before you start.

Tip: You can perform a test upgrade on a standalone server first. Install the Proxmox VE 4.4 ISO on testing hardware, then upgrade this installation to the latest minor version of Proxmox VE 4.4 (see Package repositories), and copy/create relevant configurations on the test machine to replicate your production setup as closely as possible. Then you can start the upgrade. You can even install Proxmox VE 4.4 in a VM and test the upgrade in this environment.

Preconditions

  • upgraded to the latest Proxmox VE 4.4 version
  • reliable access to all configured storages
  • healthy cluster
  • no VM or CT running
  • valid backup of all VMs (needed if something goes wrong)
  • correct repository configuration
  • at least 1 GB of free disk space at the root mount point (see the check after this list)
  • ensure your /boot partition, if any, has enough space for a new kernel (min 60 MB), e.g. by removing old unused kernels (see pveversion -v)
  • if using Ceph, you should already be running the Ceph Luminous version, but see the caveat above
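
A quick way to check the free-space preconditions and to spot old kernels, using standard tooling:

 df -h / /boot              # free space on the root file system and /boot (if separate)
 dpkg -l 'pve-kernel-*'     # installed kernel packages; old unused ones can be removed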

Move important Virtual Machines and Containers

If any VMs and CTs need to keep running for the duration of the upgrade, migrate them away from the node that is currently being upgraded. A migration of a VM or CT from an older version of Proxmox VE to a newer version will always work. A migration from a newer Proxmox VE version to an older version may work, but is in general not supported. Keep this in mind when planning your cluster upgrade.
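
For example, to live-migrate a running VM to another cluster node (the VMID 100 and the node name node2 are hypothetical):

 qm migrate 100 node2 --online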

Actions Step by Step

Everything has to be done on each Proxmox VE node's command line (via console or ssh; preferably via console, to avoid interrupted ssh connections). Again, make sure that you have a valid backup of all CTs and VMs before you start. Further, no changes to any guest, VM or CT, should be made during the upgrade process.

Add the PVE repositories to your installation

First make sure that your current installation has the latest packages of the Proxmox VE 4.4 release:

apt-get update && apt-get dist-upgrade

Update the Debian repository entry to stretch:

sed -i 's/jessie/stretch/g' /etc/apt/sources.list

Update the Proxmox VE repository entry to stretch:

sed -i 's/jessie/stretch/g' /etc/apt/sources.list.d/pve-enterprise.list
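
To verify that no jessie entry survived the two sed commands, a simple check such as:

 grep -rn jessie /etc/apt/sources.list /etc/apt/sources.list.d/

should return no output.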

Add the Proxmox VE repository key:

wget http://download.proxmox.com/debian/proxmox-ve-release-5.x.gpg -O /etc/apt/trusted.gpg.d/proxmox-ve-release-5.x.gpg

More information can be found in Package_Repositories.

Replace ceph.com repositories with proxmox.com Ceph repositories. This step is only necessary if you have a Ceph cluster on your PVE installation.

echo "deb http://download.proxmox.com/debian/ceph-luminous stretch main" > /etc/apt/sources.list.d/ceph.list

If there is a backports line, remove it: the upgrade has not yet been tested with packages from the backports repository installed.

Update the repository data:

apt-get update

Upgrade the basic system to Debian Stretch and PVE 5.0

Before you can upgrade the system, you have to remove SysVinit.
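
Before purging, you can confirm that systemd, not SysVinit, is the running init; this standard check should print "systemd":

 ps -p 1 -o comm=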

apt purge insserv sysv-rc initscripts openrc

Start with this step to get the initial set of upgraded packages:

 apt-get dist-upgrade

This action will take some time; depending on the system's performance, it can take up to 60 minutes or even more. If you run on SSDs, the dist-upgrade can finish within about 5 minutes.

During either of the above, you may be asked whether to let new package versions replace existing configuration files. Do with them as you see fit; they are not relevant to the Proxmox VE upgrade.

Reboot the system in order to use the new PVE kernel.
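
After the reboot, you can verify that the node is running the new kernel and the Proxmox VE 5.0 packages:

 uname -r        # should show the new PVE 5 kernel version
 pveversion -v   # lists pve-manager and related package versions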

Troubleshooting

  • Failing upgrade to "stretch"

Make sure that the repository configuration for stretch is correct.

If there was a network failure and the upgrade was only partially completed, try to repair the situation with:

apt-get -fy install
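
If dpkg itself was interrupted mid-upgrade, completing the pending package configuration first is standard Debian practice (not an original step of this guide):

 dpkg --configure -a
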
  • Unable to boot due to grub failure

See Recover_From_Grub_Failure


Breaking Changes in 5.0

Configuration defaults

Default display switched from 'cirrus' to 'std'

The default display is now 'std' (Standard VGA card with Bochs VBE extensions), changed from the 'cirrus' type. Cirrus has security bugs, and 'std' has been the default in QEMU since version 2.2.

To still be able to live-migrate VMs to another PVE 4 node, or to an already upgraded PVE 5 host, without downtime, ensure that your Proxmox VE 4 node is up to date, i.e. that you ran an:

 apt update
 apt full-upgrade

cycle, with valid Debian Jessie and Proxmox VE 4 repositories configured!

If you are using older package versions, for example qemu-server older than version 4.0-111, you will run into problems!
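
If a guest OS still depends on the cirrus display, it can be set explicitly per VM; a sketch with a hypothetical VMID 100:

 qm set 100 -vga cirrus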

External links

  • Release Notes for Debian 9.0 (stretch), 64-bit PC: https://www.debian.org/releases/stretch/amd64/release-notes/