= Introduction =

Proxmox VE 6.x introduces several new major features. Carefully plan the upgrade, make and verify backups before beginning, and test extensively. Depending on the existing configuration, several manual steps—including some downtime—may be required.

'''Note:''' A valid and tested backup is ''always'' needed before starting the upgrade process. Test the backup beforehand in a test lab setup.

If the system is customized and/or uses additional packages (for example GlusterFS) or any other third-party repositories/packages, ensure that those packages are also upgraded to versions compatible with Debian Buster.

In general, there are two ways to upgrade a Proxmox VE 5.x system to Proxmox VE 6.x:

*A new installation on new hardware (restoring VMs from backup)
*An in-place upgrade via apt (step-by-step)

In both cases, emptying the browser cache and reloading the GUI page is required after the upgrade.

= New installation =

* Backup all VMs and containers to external storage (see [[Backup and Restore]] and the sketch below).
* Backup all files in /etc ('''required:''' everything in /etc/pve, plus /etc/passwd, /etc/network/interfaces, /etc/resolv.conf, and anything else deviating from a default installation).
* Install Proxmox VE from the ISO (this will delete all data on the existing host).
* Rebuild your cluster, if applicable.
* Restore the file /etc/pve/storage.cfg (this will make the external storage used for backup available).
* Restore firewall configs /etc/pve/firewall/ and /etc/pve/nodes/<node>/host.fw (if applicable).
* Restore full VMs from backups (see [[Backup and Restore]]).

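A minimal sketch of the first two steps, assuming a configured backup storage named <tt>backup-nfs</tt> (the storage name is a placeholder, adapt it to your setup):

 # Back up all guests to a configured backup storage (name is an example)
 vzdump --all --storage backup-nfs --mode snapshot
 # Archive the relevant configuration files from /etc
 tar czf /root/etc-backup.tar.gz /etc/pve /etc/passwd /etc/network/interfaces /etc/resolv.conf
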
Administrators comfortable with the command line can follow the procedure [[Bypassing backup and restore when upgrading]] if all VMs/CTs are on one shared storage.

= In-place upgrade =

In-place upgrades are done with apt. Familiarity with apt is required to proceed with this upgrade mechanism.

== Preconditions ==

* Upgrade to the latest version of Proxmox VE 5.4.
* Reliable access to all configured storage.
* The root account must have a password set (that you remember): the sudo package is uninstalled as part of the upgrade, and without a root password you will not be able to log in as root afterwards.
* A healthy cluster.
* Valid and tested backup of all VMs and CTs (in case something goes wrong).
* Correct configuration of the repository.
* At least 1 GB of free disk space on the root mount point (see the quick checks below).
* Ceph: upgrade the Ceph cluster to Nautilus only '''after''' you have upgraded Proxmox VE, following the guide [[Ceph Luminous to Nautilus]].

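Two of these preconditions can be verified up front from the shell:

 # At least 1 GB must be free on the root mount point
 df -h /
 # Set a root password now if logins so far relied on sudo only
 passwd root
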
== Testing the Upgrade ==

An upgrade test can easily be performed using a standalone server first. Install the Proxmox VE 5.4 ISO on some test hardware, then upgrade this installation to the latest minor version of Proxmox VE 5.4 (see [[Package repositories]]). To replicate the production setup as closely as possible, copy or create all relevant configurations on the test machine. Then start the upgrade. It is also possible to install Proxmox VE 5.4 in a VM and test the upgrade in this environment.

== Actions step-by-step ==

The following actions need to be done on the command line of each Proxmox VE node in your cluster (via console or ssh; preferably via console, to avoid interrupted ssh connections). Remember: make sure that a valid backup of all VMs and CTs has been created before proceeding.

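If console access is not available, running the upgrade inside a terminal multiplexer at least protects against dropped ssh connections; a small sketch (install tmux first, or use screen if you prefer):

 # Start a persistent session; after a disconnect, re-attach with: tmux attach -t upgrade
 tmux new -s upgrade
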
=== Continuously use the '''pve5to6''' checklist script ===

A small checklist program named '''pve5to6''' is included in the latest Proxmox VE 5.4 packages. The program provides hints and warnings about potential issues before, during and after the upgrade process. Call it by executing:

 pve5to6

This script only '''checks''' and reports things. By default, no changes to the system are made, so none of the reported issues will be fixed automatically.
Keep in mind that Proxmox VE can be heavily customized, so the script may not recognize all possible problems of a particular setup!

It is recommended to re-run the script after each attempt to fix an issue. This ensures that the actions taken actually fixed the respective warning.

=== Cluster: always upgrade to Corosync 3 first ===

Corosync 3 changes the on-the-wire format, making it incompatible with Corosync 2.x: the underlying multicast UDP stack was replaced with kronosnet.
Configuration files generated by Proxmox VE 5.2 or newer are already compatible with the new Corosync 3.x (at least enough to process the upgrade without any issues).

'''Important Note:''' before the upgrade, stop all HA management services first—no matter which way you choose for upgrading to Corosync 3. Stopping all HA services ensures that no cluster nodes get fenced during the upgrade. This also means that there will not be any HA functionality available for the short duration of the Corosync upgrade.

First, make sure that all warnings reported by the checklist script that are not related to Corosync are fixed or determined to be benign/false negatives. Next, stop the local resource manager "pve-ha-lrm" on each node. Only after it has been stopped on all nodes, also stop the cluster resource manager "pve-ha-crm" on each node. Use the GUI (Node -> Services) or the CLI by running the following command on each node:

 systemctl stop pve-ha-lrm

Only '''after''' the above was done for '''all''' nodes, run the following on each node:

 systemctl stop pve-ha-crm

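On larger clusters this ordering is easier to enforce from a single shell; a sketch with placeholder node names:

 # Placeholder node names - replace with the hosts of your cluster
 for node in pve1 pve2 pve3; do ssh root@$node systemctl stop pve-ha-lrm; done
 # Only once the LRM is stopped everywhere, stop the CRM on all nodes
 for node in pve1 pve2 pve3; do ssh root@$node systemctl stop pve-ha-crm; done
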
Then add the Proxmox Corosync 3 Stretch repository:
 echo "deb http://download.proxmox.com/debian/corosync-3/ stretch main" > /etc/apt/sources.list.d/corosync3.list

and run
 apt update

Then make sure again that only corosync, kronosnet and their libraries will be updated or newly installed:

 apt list --upgradeable
 Listing... Done
 corosync/stable 3.0.2-pve2~bpo9 amd64 [upgradable from: 2.4.4-pve1]
 libcmap4/stable 3.0.2-pve2~bpo9 amd64 [upgradable from: 2.4.4-pve1]
 libcorosync-common4/stable 3.0.2-pve2~bpo9 amd64 [upgradable from: 2.4.4-pve1]
 libcpg4/stable 3.0.2-pve2~bpo9 amd64 [upgradable from: 2.4.4-pve1]
 libqb0/stable 1.0.5-1~bpo9+2 amd64 [upgradable from: 1.0.3-1~bpo9]
 libquorum5/stable 3.0.2-pve2~bpo9 amd64 [upgradable from: 2.4.4-pve1]
 libvotequorum8/stable 3.0.2-pve2~bpo9 amd64 [upgradable from: 2.4.4-pve1]

There are two ways to proceed with the Corosync upgrade:

* Upgrade nodes one by one. Initially, the newly upgraded node(s) will not be quorate on their own. Once at least half of the nodes plus one have been upgraded, the upgraded partition will become quorate and the not-yet-upgraded partition will lose quorum. Once all nodes have been upgraded, they should form a healthy, quorate cluster again.
* Upgrade all nodes simultaneously, e.g. using parallel ssh/screen/tmux.

'''Note:''' changes to any VM/CT or the cluster in general are not allowed for the duration of the upgrade!

Pre-download the upgrade to corosync-3 on all nodes, e.g., with:
 apt dist-upgrade --download-only

Then run the actual upgrade on all nodes:
 apt dist-upgrade

At any point in this procedure, the local view of the cluster quorum on a node can be verified with:
 pvecm status

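For example, to watch each node's quorum view from one place while upgrading node by node (node names are again placeholders):

 # "Quorate: Yes/No" shows each node's local view of the cluster
 for node in pve1 pve2 pve3; do ssh root@$node pvecm status | grep -i quorate; done
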
Once the update to Corosync 3.x is done '''on all nodes''', restart the local resource manager and cluster resource manager on all nodes:
 systemctl start pve-ha-lrm
 systemctl start pve-ha-crm

=== Move important Virtual Machines and Containers ===

If any VMs and CTs need to keep running for the duration of the upgrade, migrate them away from the node that is currently being upgraded. A migration of a VM or CT from an older version of Proxmox VE to a newer version will always work. A migration from a newer Proxmox VE version to an older version may work, but is in general not supported. Keep this in mind when planning your cluster upgrade.

=== Update the configured APT repositories ===

First, make sure that the system is running the latest Proxmox VE 5.4 packages:

 apt update
 apt dist-upgrade

Update all Debian repository entries to Buster:

 sed -i 's/stretch/buster/g' /etc/apt/sources.list

Disable all Proxmox VE 5.x repositories. This includes the pve-enterprise repository, the pve-no-subscription repository and the pvetest repository.

To do so, comment out these repositories by prefixing the respective lines with a # in the /etc/apt/sources.list.d/pve-enterprise.list and /etc/apt/sources.list files.
See [[Package_Repositories]].
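
To double-check that no Stretch entry was missed, the APT sources can be grepped, for instance:

 # Any remaining 'stretch' line here still needs to be updated or disabled;
 # the Corosync 3 repository added earlier is expected here and is removed after the upgrade
 grep -rn 'stretch' /etc/apt/sources.list /etc/apt/sources.list.d/
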
<!-- '''Note:''' After an update from Corosync 2 to Corosync 3, also remove the Stretch Corosync 3.x repository from /etc/apt/sources.list.d/ before proceeding

 rm -f /etc/apt/sources.list.d/corosync3.list
-->

=== Add the Proxmox VE 6 Package Repository ===
 echo "deb https://enterprise.proxmox.com/debian/pve buster pve-enterprise" > /etc/apt/sources.list.d/pve-enterprise.list

For the no-subscription repository, see [[Package Repositories]]. The update can look something like:
 sed -i -e 's/stretch/buster/g' /etc/apt/sources.list.d/pve-install-repo.list

'''(Ceph only)''' Replace the ceph.com repositories with the proxmox.com Ceph repositories:

 echo "deb http://download.proxmox.com/debian/ceph-luminous buster main" > /etc/apt/sources.list.d/ceph.list

If there is a backports line, remove it; currently, the upgrade has not been tested with packages from the backports repository installed.

Update the repository data:

 apt update

=== Upgrade the system to Debian Buster and Proxmox VE 6.0 ===

This action will take some time, depending on the system's performance: up to 60 minutes or more. On high-performance servers with SSD storage, the dist-upgrade can be finished within 5 minutes.

Start with this step to get the initial set of upgraded packages:

 apt dist-upgrade

During the steps above, you may be asked to approve replacing some packages' configuration files. These prompts are not relevant to the Proxmox VE upgrade itself, so you can choose whichever option suits your setup.

Reboot the system in order to use the new PVE kernel.

== After the Proxmox VE upgrade ==

=== For Clusters ===

* Remove the extra Corosync 3 repository used to upgrade Corosync on PVE 5 / Stretch, if not already done. If you followed the steps here, you can simply execute the following command to do so:
 rm /etc/apt/sources.list.d/corosync3.list

=== For Hyperconverged Ceph ===

Now you should upgrade the Ceph cluster to the Nautilus release, following the article [[Ceph Luminous to Nautilus]].

== Checklist issues ==

=== proxmox-ve package is too old ===
Check the configured package repository entries (see [[Package_Repositories]]) and run

 apt update

followed by

 apt dist-upgrade

to get the latest PVE 5.x packages '''before''' upgrading to PVE 6.x.
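
Whether a node is actually on the latest 5.x packages can be checked with:

 # Shows the proxmox-ve package version and the running kernel
 pveversion -v | head -n 3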

=== corosync 2.x installed, cluster-wide upgrade to 3.x needed! ===

See the section [[#Cluster:_always_upgrade_to_Corosync_3_first|Upgrade to corosync 3 '''first''']].

== Troubleshooting ==

* Failing upgrade to "buster"

Make sure that the repository configuration for Buster is correct.

If there was a network failure and the upgrade was only applied partially, try to repair the situation with

 apt -f install

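If apt itself was interrupted, it can also help to let dpkg finish configuring the already-unpacked packages first:

 # Finish configuring any half-installed packages, then retry
 dpkg --configure -a
 apt -f install
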
* Unable to boot due to grub failure

See [[Recover From Grub Failure]]

== External links ==

*[https://www.debian.org/releases/buster/amd64/release-notes/ Release Notes for Debian 10.0 (buster), 64-bit PC]

[[Category: HOWTO]][[Category: Installation]]

= Install Proxmox VE on Debian Stretch =

== Introduction ==

The installation of a supported Proxmox VE server should be done via the [[Bare-metal_ISO_Installer]]. In some cases it makes sense to install Proxmox VE on top of a running Debian Stretch 64-bit, especially if you want a custom partition layout. For this HowTo, the following Debian Stretch ISO was used: [https://cdimage.debian.org/mirror/cdimage/release/9.0.0/amd64/iso-cd/debian-9.0.0-amd64-netinst.iso].

== Install a standard Debian Stretch (amd64) ==
Install a standard Debian Stretch, for details see [http://www.debian.org Debian], and select a fixed IP address.
It is recommended to only install the "standard" package selection and nothing else, as Proxmox VE brings its own packages for QEMU and LXC.

=== Add an /etc/hosts entry for your IP address ===

Please make sure that your hostname is resolvable via <tt>/etc/hosts</tt>, <i>i.e.</i> you need an entry in <tt>/etc/hosts</tt> which assigns an IPv4 address to that hostname.

'''Note''': Make sure that no IPv6 address for your hostname is specified in <tt>/etc/hosts</tt>.

For instance, if your IP address is <tt>192.168.15.77</tt> and your hostname is <tt>prox4m1</tt>, then your <tt>/etc/hosts</tt> file should look like:

<pre>
127.0.0.1 localhost.localdomain localhost
192.168.15.77 prox4m1.proxmox.com prox4m1 pvelocalhost

# The following lines are desirable for IPv6 capable hosts
::1 localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
</pre>

You can test if your setup is OK using the <tt>hostname</tt> command:
<pre>
hostname --ip-address
192.168.15.77 # should return your IP address here
</pre>
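
The entry can also be appended from the shell; the IP address and hostname below are the example values from above and must be replaced with your own:

 # Example values - substitute your real IP address and hostname
 echo "192.168.15.77 prox4m1.proxmox.com prox4m1 pvelocalhost" >> /etc/hosts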

== Install Proxmox VE ==
=== Adapt your sources.list ===

Add the Proxmox VE repository:
 echo "deb http://download.proxmox.com/debian/pve stretch pve-no-subscription" > /etc/apt/sources.list.d/pve-install-repo.list

Add the Proxmox VE repository key:
 wget http://download.proxmox.com/debian/proxmox-ve-release-5.x.gpg -O /etc/apt/trusted.gpg.d/proxmox-ve-release-5.x.gpg

Update your repository and system by running:
 apt update && apt dist-upgrade

<b>NOTE:</b> If you have enabled another architecture (typically i386, to run some older software) and apt complains about not being able to find /binary-i386 ("Unable to find expected entry 'pve/binary-i386/Packages'"), you need to remove the other architecture or use the following line instead:

 deb [arch=amd64] http://download.proxmox.com/debian/pve stretch pve-no-subscription

This is because the Proxmox repository does not provide any architecture besides amd64; more information on multiarch in Debian: https://wiki.debian.org/Multiarch/HOWTO.

=== Install Proxmox VE packages ===
Install the Proxmox VE packages:

 apt install proxmox-ve postfix open-iscsi

Accept the suggestion to remove Exim and configure postfix according to your network.
If you have a mail server in your network, you should configure postfix as a '''satellite system''';
your existing mail server will then be the 'relay host' which routes the emails sent by the
Proxmox server to the end recipient.
If you don't know what to enter here, choose '''local only'''.

Finally, reboot your system; the new Proxmox VE kernel should be automatically selected in the GRUB menu.

==== Recommended: remove the os-prober package ====

The os-prober package scans all the partitions of your host,
including those of your guest VMs, to create dual-boot GRUB entries.
If you didn't install Proxmox VE as a dual boot alongside another
operating system, you can safely remove the os-prober package:

 apt remove os-prober

== Connect to the Proxmox VE web interface ==
Connect to the admin web interface (<nowiki>https://youripaddress:8006</nowiki>). If you have a fresh install and didn't add any users yet, use the root account with your Linux root password and select "PAM Authentication" to log in.

Once logged in, create a '''Linux Bridge''' called '''vmbr0''' and add your first network interface to it.

[[Image:Screen-vmbr0-setup-for-ext6.png|Adapt vmbr0 settings]]

== Configure apt to use the new packages repositories ==
In order to get the latest updates, you need to add one of the new package repositories, see [[Package repositories]].

== Troubleshooting ==
=== resolv.conf gets overwritten ===
The PVE GUI expects to control DNS management and will no longer take its DNS settings from /etc/network/interfaces.
Any package that auto-generates (overwrites) /etc/resolv.conf will cause DNS to fail,
e.g. the packages 'resolvconf' for IPv4 and 'rdnssd' for IPv6.
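
If one of these packages is installed, removing it usually restores manual control over <tt>/etc/resolv.conf</tt>, for example:

 # Remove the packages that keep regenerating /etc/resolv.conf (if present)
 apt purge resolvconf rdnssd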

== Optional Steps ==
=== Optional: Remove the Debian kernel ===
 apt remove linux-image-amd64 linux-image-4.9.0-3-amd64

Check the grub2 config by running:
 update-grub

[[Category: HOWTO]][[Category: Installation]]