https://pve.proxmox.com/mediawiki/api.php?action=feedcontributions&user=Kadir&feedformat=atomProxmox VE - User contributions [en]2024-03-29T05:06:48ZUser contributionsMediaWiki 1.35.11https://pve.proxmox.com/mediawiki/index.php?title=Proxmox_SSL_Error_Fixing&diff=10229Proxmox SSL Error Fixing2018-12-03T07:31:18Z<p>Kadir: </p>
<hr />
<div>== Introduction ==<br />
Some users see errors like the following when performing Proxmox VE upgrades:<br />
kvm: -vnc unix:/var/run/qemu-server/NODENAME.vnc,x509,password: Failed to start VNC server: The server certificate /etc/pve/local/pve-ssl.pem has expired<br />
<br />
== Solution ==<br />
<br />
Remove old certificates:<br />
<br />
rm /etc/pve/pve-root-ca.pem<br />
rm /etc/pve/local/pve-ssl.key<br />
rm /etc/pve/local/pve-ssl.pem <br />
<br />
Generate new CA and Proxmox certificates:<br />
<br />
For CA:<br />
openssl genrsa -out ca.key 2048<br />
openssl req -x509 -new -nodes -key ca.key -sha256 -days 365 -out ca.pem<br />
<br />
For Proxmox:<br />
openssl genrsa -out server.key 2048<br />
openssl req -new -key server.key -out server.csr <br />
openssl x509 -req -in server.csr -CA ca.pem -CAkey ca.key -CAcreateserial -out server.pem -days 365 -sha256<br />
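The openssl req steps above prompt interactively for the certificate subject. As a sketch, the same sequence can be run non-interactively by passing the subject with -subj; the CN values below are placeholders, so substitute your node's FQDN:<br />

```shell
# Same key/CA/certificate sequence as above, but non-interactive.
# "/CN=pve.example.com" is a placeholder -- use your node's FQDN.
openssl genrsa -out ca.key 2048
openssl req -x509 -new -nodes -key ca.key -sha256 -days 365 \
    -subj "/CN=Example PVE CA" -out ca.pem
openssl genrsa -out server.key 2048
openssl req -new -key server.key -subj "/CN=pve.example.com" -out server.csr
openssl x509 -req -in server.csr -CA ca.pem -CAkey ca.key \
    -CAcreateserial -out server.pem -days 365 -sha256
# Sanity check: the new server certificate must verify against the new CA.
openssl verify -CAfile ca.pem server.pem
```

The final verify command should print "server.pem: OK" before you copy anything into /etc/pve.<br />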
<br />
Rename the certificates to the names Proxmox expects:<br />
mv ca.pem pve-root-ca.pem<br />
mv server.key pve-ssl.key<br />
mv server.pem pve-ssl.pem<br />
<br />
<br />
Copy them into the Proxmox cluster file system:<br />
cp pve-root-ca.pem /etc/pve/pve-root-ca.pem<br />
cp pve-ssl.key /etc/pve/local/pve-ssl.key<br />
cp pve-ssl.pem /etc/pve/local/pve-ssl.pem<br />
<br />
Restart daemon and proxy:<br />
service pveproxy restart<br />
service pvedaemon restart<br />
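To confirm the fix, check the new certificate's expiry date. This sketch generates a throwaway self-signed certificate for demonstration; on a real node, point CERT at /etc/pve/local/pve-ssl.pem instead:<br />

```shell
# Generate a throwaway certificate just for the demonstration.
openssl req -x509 -newkey rsa:2048 -nodes -keyout demo.key -out demo.pem \
    -days 365 -subj "/CN=demo"
CERT=demo.pem   # on a real node: CERT=/etc/pve/local/pve-ssl.pem
# Print the expiry date...
openssl x509 -noout -enddate -in "$CERT"
# ...and exit non-zero if the certificate is already expired.
openssl x509 -noout -checkend 0 -in "$CERT" && echo "certificate still valid"
```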
<br />
== Info ==<br />
<br />
'''You can use these certificates for all nodes.'''</div>Kadirhttps://pve.proxmox.com/mediawiki/index.php?title=Upgrade_from_3.x_to_4.0&diff=10014Upgrade from 3.x to 4.02017-11-28T14:10:00Z<p>Kadir: </p>
<hr />
<div>== Introduction ==<br />
<br />
Proxmox VE 4.x introduces major new features, so the upgrade must be carefully planned and tested. Depending on your existing configuration, several manual steps are required, including some downtime. NEVER start the upgrade without a valid backup and without first testing the procedure in a test lab setup.<br />
<br />
Major changes in V4.x:<br />
*OpenVZ is removed; a conversion to LXC via backup/restore is needed<br />
*New corosync version, therefore clusters have to be re-established<br />
*New HA manager (replacing RGmanager, involving a complete HA re-configuration)<br />
<br />
If you run a customized installation and/or you installed additional packages, for example for distributed storage like Ceph or Sheepdog, DRBD, or any other third-party packages, make sure that you also upgrade these packages to Debian Jessie.<br />
<br />
V4.x supports only the new '''DRBD9, which is not backwards compatible with the 8.x version''' and is considered a technology preview only.<br />
<br />
Generally speaking, there are two ways to move from 3.x to 4.x:<br />
<br />
*New installation on new hardware (and restore VMs from backup) - safest way, recommended!<br />
*In-place upgrade via apt, step by step <br />
<br />
In both cases, empty the browser cache after the upgrade and reload the GUI page; otherwise you may see a lot of rendering glitches.<br />
<br />
== New installation ==<br />
<br />
* Backup all VMs and containers to external media (see [[Backup and Restore]])<br />
* Back up all files in /etc. You will need various files in /etc/pve, as well as /etc/passwd, /etc/network/interfaces, /etc/resolv.conf, and others, depending on what has been changed from the defaults.<br />
* Install Proxmox VE from ISO (this will wipe all data on the existing host)<br />
* Rebuild the cluster if you had any<br />
* Restore the file /etc/pve/storage.cfg (this will re-map and make available any external media you used for backup) <br />
* Restore firewall configs /etc/pve/firewall/ and /etc/pve/nodes/<node>/host.fw (if relevant)<br />
* Restore full VMs from Backups (see [[Backup and Restore]])<br />
* Restore/Convert containers (see [[Convert OpenVZ to LXC]])<br />
<br />
=== Bypassing Backup and Restore ===<br />
<br />
The following is only for advanced users who are familiar with the Proxmox configuration files!<br />
<br />
Since backup and restore can be a time-consuming process, a faster method is described below. It is possible only:<br />
<br />
* for KVM (i.e. not for containers)<br />
<br />
* if the (virtual) disk(s) for the VM(s) is (are) located at a storage which is not touched by the installation process (e.g. NFS at an external server)<br />
<br />
The steps <br />
<br />
* Backup all VMs<br />
* Restore full VMs from Backups<br />
<br />
will be replaced by<br />
<br />
* Backup the <vmid>.conf file(s) for the respective machine(s); they are located under /etc/pve/nodes/<nodename>/lxc/ and /etc/pve/nodes/<nodename>/qemu-server/ respectively<br />
* Backup those storages from the storage configuration ( /etc/pve/storage.cfg ) which are shared and untouched: simply copy the respective lines and append them once to the newly built cluster's /etc/pve/storage.cfg<br />
* Restore <vmid>.conf file(s) for the respective machine(s)<br />
<br />
'''Note:''' /etc/pve/lxc/ and /etc/pve/qemu-server/ are virtual symlinks to the current node's lxc and qemu-server directories.<br />
<br />
After you have restored the VM configs and the external shared storage configuration - so that the storage is accessible under the same name in the new cluster again - you should be able to start the VMs again.<br />
No additional reboot is required.<br />
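The config-only route above can be sketched with plain shell commands. Everything here (node name node1, VMID 100, the nfs storage entry) is a placeholder mock-up for illustration; on a real node the source tree is /etc/pve:<br />

```shell
# Mock up the relevant part of the /etc/pve layout for illustration only.
mkdir -p pve/nodes/node1/qemu-server backup
echo "name: testvm" > pve/nodes/node1/qemu-server/100.conf
printf 'nfs: extnfs\n\tserver 192.0.2.10\n\texport /srv/nfs\n' > pve/storage.cfg

# Step 1: back up the per-VM config file(s).
cp pve/nodes/node1/qemu-server/*.conf backup/

# Step 2: copy the shared, untouched storage entries, to be appended
# to the new cluster's /etc/pve/storage.cfg after reinstalling.
grep -A2 '^nfs:' pve/storage.cfg >> backup/storage.cfg
```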
<br />
== In-place upgrade ==<br />
<br />
In-place upgrades are done with apt, so make sure that you are familiar with apt before you start here.<br />
<br />
=== Preconditions ===<br />
<br />
* upgraded to latest V3.4 version<br />
* reliable access to all configured storages<br />
* healthy cluster<br />
* no VM or CT running (note: VM live migration from 3.4 to 4.x node or vice versa NOT possible)<br />
* valid backup of all OpenVZ containers (needed for the conversion to LXC)<br />
* valid backup of all VM (only needed if something goes wrong)<br />
* Correct repository configuration (both wheezy and jessie repositories accessible)<br />
* at least 1GB free disk space at root mount point<br />
<br />
=== Actions Step by Step ===<br />
<br />
Everything has to be done on each Proxmox node's command line (via console or SSH; preferably via console, to avoid problems from interrupted SSH connections); some of the steps are optional. If a whole cluster is to be upgraded, note down the cluster name and the HA configuration (failover domains, fencing, etc.), since these have to be restored after the upgrade via the new web GUI. Again, make sure that you have a valid backup of all CTs and VMs before you start.<br />
<br />
'''Tip''': ''It is advisable to perform a dry-run of the upgrade first. Install the PVE 3.4 ISO on testing hardware, then upgrade this installation to the latest minor version of PVE 3.4 using the test repo (see [[Package repositories]]) then copy/create relevant configurations to the test machine to replicate your production setup as closely as possible.''<br />
<br />
==== Remove Proxmox VE 3.x packages in order to avoid dependency errors ====<br />
<br />
First make sure that your current installation is "clean"; run<br />
<br />
apt-get update && apt-get dist-upgrade<br />
<br />
Then start the removal:<br />
<br />
apt-get remove proxmox-ve-2.6.32 pve-manager corosync-pve openais-pve redhat-cluster-pve pve-cluster pve-firmware <br />
<br />
Adapt the repository locations to point at jessie, then update the apt database, e.g.:<br />
<br />
sed -i 's/wheezy/jessie/g' /etc/apt/sources.list<br />
sed -i 's/wheezy/jessie/g' /etc/apt/sources.list.d/pve-enterprise.list<br />
apt-get update<br />
'''If there is a backports line then remove it.'''<br />
Currently, ''pve-manager'' and ''ceph-common'' have unmet dependencies with regards to package versions in the jessie ''backports'' repo.<br />
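The sed commands above rewrite the files in place; they can be sanity-checked on a scratch copy first (the file name sources.test and the repository line are examples):<br />

```shell
# Try the wheezy->jessie substitution on a scratch file before touching
# the real sources.list.
printf 'deb http://ftp.debian.org/debian wheezy main contrib\n' > sources.test
sed -i 's/wheezy/jessie/g' sources.test
cat sources.test   # every "wheezy" is now "jessie"
```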
<br />
In case a Ceph server is used: Ceph repositories for jessie can be found at http://download.ceph.com, so /etc/apt/sources.list.d/ceph.list will contain e.g.:<br />
<br />
deb http://download.ceph.com/debian-hammer jessie main<br />
<br />
<br />
You also need to add the Ceph repository key to apt; for details, check the wiki on ceph.com.<br />
<br />
==== Install the new kernel ====<br />
<br />
Add the Proxmox jessie repository locations:<br />
<br />
deb https://enterprise.proxmox.com/debian jessie pve-enterprise<br />
deb http://download.proxmox.com/debian jessie pve-no-subscription<br />
<br />
apt-get update<br />
<br />
First check which pve-kernel versions are currently available<br />
<br />
apt-cache search pve-kernel | sort -V<br />
<br />
- at the moment (September 2016) it is 4.4.19-1 - and install it:<br />
<br />
apt-get install pve-kernel-4.4.19-1-pve pve-firmware<br />
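The sort -V in the search command matters: it compares version fields numerically, so the newest kernel package is listed last. A quick illustration:<br />

```shell
# Plain lexical sort would put 4.4.19-1 before 4.4.8-1 (since "1" < "8");
# sort -V compares the version fields numerically and gets it right.
printf 'pve-kernel-4.4.8-1-pve\npve-kernel-4.4.19-1-pve\n' | sort -V
```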
<br />
==== Upgrade the basic system to Debian Jessie ====<br />
<br />
This action will take some time - depending on the system's performance, it can take up to 60 minutes or even more. On an SSD, the dist-upgrade can finish in 5 minutes.<br />
<br />
Start with this step to get the initial set of upgraded packages.<br />
<br />
apt-get upgrade<br />
<br />
Once that's done, move on to the remaining packages to upgrade, with:<br />
<br />
apt-get dist-upgrade<br />
<br />
During either of the above, you may be asked to approve some new packages replacing configuration files. Do with them as you see fit; they are not relevant to the Proxmox upgrade.<br />
<br />
Reboot the system in order to activate the new kernel.<br />
<br />
==== Install Proxmox VE 4.x ====<br />
Finally, install the new Proxmox VE 4.x packages with one single command:<br />
apt-get install proxmox-ve<br />
<br />
Then you should purge configuration files from packages which are no longer needed ('''''Note:''' purging vzctl will delete all files in /var/lib/vz/private and /var/lib/vz/root , so only run this if you have backed up your OpenVZ containers.''):<br />
<br />
dpkg --purge vzctl<br />
dpkg --purge redhat-cluster-pve<br />
<br />
'''Remove the old kernel''' (not a must, but recommended). The kernel version has to be adapted to the currently installed one, and there can be several old kernels. Use dpkg --list | grep pve-kernel to find any 2.6.* kernels to remove:<br />
<br />
apt-get remove pve-kernel-2.6.*<br />
<br />
After these commands, Proxmox may print some error messages; they can be ignored.<br />
<br />
Reboot the system, then run the install command again:<br />
apt-get install proxmox-ve<br />
<br />
Finally, reboot and test if all is working as expected.<br />
<br />
==== Optional: OpenVZ conversion ====<br />
<br />
Convert the previously backed-up containers to LXC, following the HowTo in [[Convert OpenVZ to LXC]].<br />
<br />
You can also remove the obsolete OpenVZ container data from your local storage.<br />
<br />
rm -f /etc/pve/openvz/<ct-id>.conf<br />
rm -R <storage-path>/private/*<br />
<br />
==== Cluster upgrade ====<br />
'''It is not possible to mix Proxmox VE 3.x (or earlier) nodes with Proxmox VE 4.x nodes in one cluster.'''<br />
<br />
Due to the new corosync 2.x, the cluster has to be re-established. Please use the same cluster name.<br />
<br />
* at the first node<br />
<br />
pvecm create <clustername><br />
<br />
* at all other nodes:<br />
<br />
pvecm add <first-node-IP> -force<br />
<br />
The HA configuration (fail-over, fencing, etc.) has to be re-configured manually; this is now supported from the web GUI, see [[High Availability Cluster 4.x]].<br />
<br />
After upgrading the last node remove the V3.x cluster data:<br />
<br />
rm /etc/pve/cluster.conf<br />
<br />
=== Troubleshooting ===<br />
<br />
* Failing upgrade to latest Proxmox VE 3.x or removal of old packages:<br />
<br />
Make sure that the original repository configuration (for wheezy) is correct. The change to "jessie" repositories has to be done '''after''' the removal of old Proxmox VE.<br />
<br />
In case Ceph is used: note that the repository URL has recently changed to http://download.ceph.com/<br />
<br />
* Failing upgrade to "jessie"<br />
<br />
Make sure that the repository configuration for jessie is correct.<br />
<br />
If there was a network failure and the upgrade was only partially completed, try to repair the situation with<br />
<br />
apt-get -fy install<br />
<br />
* Unable to boot due to grub failure<br />
<br />
See [[Recover_From_Grub_Failure]]<br />
<br />
'''Info: As long as you do not delete the files at the paths below, Proxmox starts up with the same virtual machines.'''<br />
<br />
'''/var/lib/vz/images/*''' -- the VMs' disk image files are here.<br />
<br />
'''/etc/pve/qemu-server/*.conf''' -- the VMs' config files are here.<br />
=== External links ===<br />
<br />
*[https://www.debian.org/releases/jessie/amd64/release-notes/ Release Notes for Debian 8.0 (jessie), 64-bit PC]<br />
<br />
<br />
[[Category:HOWTO]] [[Category:Installation]]</div>Kadirhttps://pve.proxmox.com/mediawiki/index.php?title=Upgrade_from_3.x_to_4.0&diff=10013Upgrade from 3.x to 4.02017-11-28T13:00:34Z<p>Kadir: </p>
<hr />
<div>== Introduction ==<br />
<br />
Proxmox VE 4.x introduces major new features, therefore the upgrade must be carefully planned and tested. Depending on your existing configuration, several manual steps are required, including some downtime. NEVER start the upgrade process without a valid backup and without testing the same in a test lab setup.<br />
<br />
Major upgrades for V4.x:<br />
*OpenVZ is removed, a conversion via backup/restore to LXC is needed <br />
*New corosync version, therefore clusters has to be re-established<br />
*New HA manager (replacing RGmanager, involving a complete HA re-configuration)<br />
<br />
If you run a customized installation and/or you installed additional packages, for example for distributed storage like Ceph or sheepdog, DRBD or any other third party packages, you need to make sure that you also upgrade these package to Debian Jessie. <br />
<br />
V4.x supports only the new '''DRBD9 which is not backwards compatible with the 8.x version''' and is considered only a technology preview.<br />
<br />
Generally speaking there are two possibilities to move from 3.x to 4.x<br />
<br />
*New installation on new hardware (and restore VM´s from backup) - safest way, recommended!<br />
*In-place upgrade via apt, step by step <br />
<br />
In both cases you'd better empty the browser's cache after upgrade and reload the GUI page or there is the possibility that you see a lot of glitches.<br />
<br />
== New installation ==<br />
<br />
* Backup all VMs and containers to external media (see [[Backup and Restore]])<br />
* Backup all files in /etc You will need various files in /etc/pve, as well as /etc/passwd, /etc/network/interfaces, /etc/resolv.conf and others depending on what has been configured from the defaults.<br />
* Install Proxmox VE from ISO (this will wipe all data on the existing host)<br />
* Rebuild the cluster if you had any<br />
* Restore the file /etc/pve/storage.cfg (this will re-map and make available any external media you used for backup) <br />
* Restore firewall configs /etc/pve/firewall/ and /etc/pve/nodes/<node>/host.fw (if relevant)<br />
* Restore full VMs from Backups (see [[Backup and Restore]])<br />
* Restore/Convert containers (see [[Convert OpenVZ to LXC]])<br />
<br />
=== Bypassing Backup and Restore ===<br />
<br />
The following is only for advanced users which have knowledge about Proxmox configuration files!<br />
<br />
Since Backup and Restore can be a time-consuming process in the following a more rapid method is described - possible only<br />
<br />
* for KVM (i.e. not for containers)<br />
<br />
* if the (virtual) disk(s) for the VM(s) is (are) located at a storage which is not touched by the installation process (e.g. NFS at an external server)<br />
<br />
The steps <br />
<br />
* Backup all VMs<br />
* Restore full VMs from Backups<br />
<br />
will be replaced by<br />
<br />
* Backup <vmid>.conf file(s) for the respective machine(s), they are located under /etc/pve/nodes/<nodename>/lxc/ and /etc/pve/nodes/<nodename>/qemu-server/ respectively<br />
* Backup those storages from the storage configuration ( /etc/pve/storage.cfg ) which are shared and untouched, simply copy the respective lines and append them once to the newly build clusters /etc/pve/storage.cfg<br />
* Restore <vmid>.conf file(s) for the respective machine(s)<br />
<br />
'''Note:''' /etc/pve/lxc/ and /etc/pve/qemu-server/ are virtual symlinks for the current nodes lxc and qemu directory.<br />
<br />
After you restored the VM configs and restored the external shared Storage configuration - so that it is accessible under the same name in the new cluster again - you should be able to start the VMs again.<br />
No additional reboot is required.<br />
<br />
== In-place upgrade ==<br />
<br />
In-place upgrades are done with apt, so make sure that you are familiar with apt before you start here.<br />
<br />
=== Preconditions ===<br />
<br />
* upgraded to latest V3.4 version<br />
* reliable access to all configured storages<br />
* healthy cluster<br />
* no VM or CT running (note: VM live migration from 3.4 to 4.x node or vice versa NOT possible)<br />
* valid backup of all OpenVZ containers (needed for the conversion to LXC)<br />
* valid backup of all VM (only needed if something goes wrong)<br />
* Correct repository configuration (accessible both wheezy and jessie)<br />
* at least 1GB free disk space at root mount point<br />
<br />
=== Actions Step by Step ===<br />
<br />
All has to be done on each Proxmox node's command line (via console or ssh; preferably via console in order to exclude interrupted ssh connections) , some of the steps are optional. If a whole cluster should be upgraded, keep in mind the cluster name and HA configuration like failoverdomains, fencing etc since these have to be restored after upgrade by the new WEB GUI. Again, make sure that you have a valid backup of all CT and VM before you start.<br />
<br />
'''Tip''': ''It is advisable to perform a dry-run of the upgrade first. Install the PVE 3.4 ISO on testing hardware, then upgrade this installation to the latest minor version of PVE 3.4 using the test repo (see [[Package repositories]]) then copy/create relevant configurations to the test machine to replicate your production setup as closely as possible.''<br />
<br />
==== Remove Proxmox VE 3.x packages in order to avoid dependency errors ====<br />
<br />
First make sure that your actual installation is "clean", tentatively run<br />
<br />
apt-get update && apt-get dist-upgrade<br />
<br />
Then start the removal:<br />
<br />
apt-get remove proxmox-ve-2.6.32 pve-manager corosync-pve openais-pve redhat-cluster-pve pve-cluster pve-firmware <br />
<br />
Adapt repository locations and update the apt database, point all to jessie, e.g.:<br />
<br />
sed -i 's/wheezy/jessie/g' /etc/apt/sources.list<br />
sed -i 's/wheezy/jessie/g' /etc/apt/sources.list.d/pve-enterprise.list<br />
apt-get update<br />
'''If there is a backports line then remove it.'''<br />
Currently, ''pve-manager'' and ''ceph-common'' have unmet dependencies with regards to package versions in the jessie ''backports'' repo.<br />
<br />
In case Ceph server is used: Ceph repositories for jessie can be found at http://download.ceph.com, therefore etc/apt/sources.list.d/ceph.list will contain e.g.:<br />
<br />
deb http://download.ceph.com/debian-hammer jessie main<br />
<br />
<br />
You also need to install the Ceph repository key to apt, for details, check the wiki on ceph.com.<br />
<br />
==== Install the new kernel ====<br />
<br />
Add repository proxmox jessie location<br />
<br />
deb https://enterprise.proxmox.com/debian jessie pve-enterprise<br />
deb http://download.proxmox.com/debian jessie pve-no-subscription<br />
<br />
Check first what the current new kernel's version is <br />
<br />
apt-cache search pve-kernel | sort -V<br />
<br />
- at the moment (September 2016) it is 4.4.19-1 - and install it:<br />
<br />
apt-get install pve-kernel-4.4.19-1-pve pve-firmware<br />
<br />
==== Upgrade the basic system to Debian Jessie ====<br />
<br />
This action will consume some time - depending on the systems performance, this can take up to 60 min or even more. If you run on SSD, the dist-upgrade can be finished in 5 minutes.<br />
<br />
Start with this step to get the initial set of upgraded packages.<br />
<br />
apt-get upgrade<br />
<br />
Once that's done, move on to the remaining packages to upgrade, with:<br />
<br />
apt-get dist-upgrade<br />
<br />
During either of the above, you may be asked to approve of some new packages replacing configuration files. Do with them as you see fit, but they are not relevant to the Proxmox upgrade.<br />
<br />
Reboot the system in order to activate the new kernel.<br />
<br />
==== Install Proxmox VE 4.x ====<br />
Finally, install the new Proxmox VE 4.x packages with one single command:<br />
apt-get install proxmox-ve<br />
<br />
Then you should purge configuration files from packages which are no longer needed ('''''Note:''' purging vzctl will delete all files in /var/lib/vz/private and /var/lib/vz/root , only run this if you have backed up your openvz containers.''):<br />
<br />
dpkg --purge vzctl<br />
dpkg --purge redhat-cluster-pve<br />
<br />
'''Remove the old kernel''' (not a must, but recommended), e.g. (the kernel version has to be adapted to the currently installed one - there can be more old kernels too. Use dpkg --list | grep pve-kernel to find any 2.6.* kernels to remove):<br />
<br />
apt-get remove pve-kernel-2.6.*<br />
<br />
After these commands, proxmox gives some error message and ignore them.<br />
<br />
Reboot system,<br />
<br />
Again,<br />
apt-get install proxmox-ve<br />
<br />
Finally, reboot and test if all is working as expected.<br />
<br />
==== Optional: OpenVZ conversion ====<br />
<br />
Convert the previously backed up containers to LXC, following the HowTo on [[Convert OpenVZ to LXC]]<br />
<br />
You can also remove the obsolete OpenVZ container data from your local storage.<br />
<br />
rm -f /etc/pve/openvz/<ct-id>.conf<br />
rm -R <storage-path>/private/*<br />
<br />
==== Cluster upgrade ====<br />
'''It is not possible to mix Proxmox VE 3.x and earlier with Proxmox VE 4.x cluster '''<br />
<br />
Due to the new corosync 2.x, the cluster has to be re-established again. Please use the same clustername.<br />
<br />
* at the first node<br />
<br />
pvecm create <clustername><br />
<br />
* at all other nodes:<br />
<br />
pvecm add <first-node´s-IP> -force<br />
<br />
The HA configuration (fail-over, fencing etc.) has to be re-configured manually, now supported from WEB GUI, see [[High Availability Cluster 4.x]]<br />
<br />
After upgrading the last node remove the V3.x cluster data:<br />
<br />
rm /etc/pve/cluster.conf<br />
<br />
=== Troubleshooting ===<br />
<br />
* Failing upgrade to latest Proxmox VE 3.x or removal of old packages:<br />
<br />
Make sure that the original repository configuration (for wheezy) is correct. The change to "jessie" repositories has to be done '''after''' the removal of old Proxmox VE.<br />
<br />
In case of Ceph is used: note that recently the repository url has changed to http://download.ceph.com/<br />
<br />
* Failing upgrade to "jessie"<br />
<br />
Make the sure that the repository configuration for jessie is correct.<br />
<br />
If there was a network failure and the upgrade has been made partially try to repair the situation with <br />
<br />
apt-get -fy install<br />
<br />
* Unable to boot due to grub failure<br />
<br />
See [[Recover_From_Grub_Failure]]<br />
<br />
'''Info: As long as you do not delete the files on the link below, Proxmox starts up with the same virtual machines.'''<br />
<br />
'''/var/lib/vz/images/*''' --VM's data files are here.<br />
<br />
'''/etc/pve/qemu-server/*.conf''' --VM's conf files are here.<br />
=== External links ===<br />
<br />
*[https://www.debian.org/releases/jessie/amd64/release-notes/ Release Notes for Debian 8.0 (jessie), 64-bit PC]<br />
<br />
<br />
[[Category:HOWTO]] [[Category:Installation]]</div>Kadirhttps://pve.proxmox.com/mediawiki/index.php?title=Upgrade_from_3.x_to_4.0&diff=10012Upgrade from 3.x to 4.02017-11-28T12:58:43Z<p>Kadir: </p>
<hr />
<div>== Introduction ==<br />
<br />
Proxmox VE 4.x introduces major new features, therefore the upgrade must be carefully planned and tested. Depending on your existing configuration, several manual steps are required, including some downtime. NEVER start the upgrade process without a valid backup and without testing the same in a test lab setup.<br />
<br />
Major upgrades for V4.x:<br />
*OpenVZ is removed, a conversion via backup/restore to LXC is needed <br />
*New corosync version, therefore clusters has to be re-established<br />
*New HA manager (replacing RGmanager, involving a complete HA re-configuration)<br />
<br />
If you run a customized installation and/or you installed additional packages, for example for distributed storage like Ceph or sheepdog, DRBD or any other third party packages, you need to make sure that you also upgrade these package to Debian Jessie. <br />
<br />
V4.x supports only the new '''DRBD9 which is not backwards compatible with the 8.x version''' and is considered only a technology preview.<br />
<br />
Generally speaking there are two possibilities to move from 3.x to 4.x<br />
<br />
*New installation on new hardware (and restore VM´s from backup) - safest way, recommended!<br />
*In-place upgrade via apt, step by step <br />
<br />
In both cases you'd better empty the browser's cache after upgrade and reload the GUI page or there is the possibility that you see a lot of glitches.<br />
<br />
== New installation ==<br />
<br />
* Backup all VMs and containers to external media (see [[Backup and Restore]])<br />
* Backup all files in /etc You will need various files in /etc/pve, as well as /etc/passwd, /etc/network/interfaces, /etc/resolv.conf and others depending on what has been configured from the defaults.<br />
* Install Proxmox VE from ISO (this will wipe all data on the existing host)<br />
* Rebuild the cluster if you had any<br />
* Restore the file /etc/pve/storage.cfg (this will re-map and make available any external media you used for backup) <br />
* Restore firewall configs /etc/pve/firewall/ and /etc/pve/nodes/<node>/host.fw (if relevant)<br />
* Restore full VMs from Backups (see [[Backup and Restore]])<br />
* Restore/Convert containers (see [[Convert OpenVZ to LXC]])<br />
<br />
=== Bypassing Backup and Restore ===<br />
<br />
The following is only for advanced users which have knowledge about Proxmox configuration files!<br />
<br />
Since Backup and Restore can be a time-consuming process in the following a more rapid method is described - possible only<br />
<br />
* for KVM (i.e. not for containers)<br />
<br />
* if the (virtual) disk(s) for the VM(s) is (are) located at a storage which is not touched by the installation process (e.g. NFS at an external server)<br />
<br />
The steps <br />
<br />
* Backup all VMs<br />
* Restore full VMs from Backups<br />
<br />
will be replaced by<br />
<br />
* Backup <vmid>.conf file(s) for the respective machine(s), they are located under /etc/pve/nodes/<nodename>/lxc/ and /etc/pve/nodes/<nodename>/qemu-server/ respectively<br />
* Backup those storages from the storage configuration ( /etc/pve/storage.cfg ) which are shared and untouched, simply copy the respective lines and append them once to the newly build clusters /etc/pve/storage.cfg<br />
* Restore <vmid>.conf file(s) for the respective machine(s)<br />
<br />
'''Note:''' /etc/pve/lxc/ and /etc/pve/qemu-server/ are virtual symlinks for the current nodes lxc and qemu directory.<br />
<br />
After you restored the VM configs and restored the external shared Storage configuration - so that it is accessible under the same name in the new cluster again - you should be able to start the VMs again.<br />
No additional reboot is required.<br />
<br />
== In-place upgrade ==<br />
<br />
In-place upgrades are done with apt, so make sure that you are familiar with apt before you start here.<br />
<br />
=== Preconditions ===<br />
<br />
* upgraded to latest V3.4 version<br />
* reliable access to all configured storages<br />
* healthy cluster<br />
* no VM or CT running (note: VM live migration from 3.4 to 4.x node or vice versa NOT possible)<br />
* valid backup of all OpenVZ containers (needed for the conversion to LXC)<br />
* valid backup of all VM (only needed if something goes wrong)<br />
* Correct repository configuration (accessible both wheezy and jessie)<br />
* at least 1GB free disk space at root mount point<br />
<br />
=== Actions Step by Step ===<br />
<br />
All has to be done on each Proxmox node's command line (via console or ssh; preferably via console in order to exclude interrupted ssh connections) , some of the steps are optional. If a whole cluster should be upgraded, keep in mind the cluster name and HA configuration like failoverdomains, fencing etc since these have to be restored after upgrade by the new WEB GUI. Again, make sure that you have a valid backup of all CT and VM before you start.<br />
<br />
'''Tip''': ''It is advisable to perform a dry-run of the upgrade first. Install the PVE 3.4 ISO on testing hardware, then upgrade this installation to the latest minor version of PVE 3.4 using the test repo (see [[Package repositories]]) then copy/create relevant configurations to the test machine to replicate your production setup as closely as possible.''<br />
<br />
==== Remove Proxmox VE 3.x packages in order to avoid dependency errors ====<br />
<br />
First make sure that your actual installation is "clean", tentatively run<br />
<br />
apt-get update && apt-get dist-upgrade<br />
<br />
Then start the removal:<br />
<br />
apt-get remove proxmox-ve-2.6.32 pve-manager corosync-pve openais-pve redhat-cluster-pve pve-cluster pve-firmware <br />
<br />
Adapt repository locations and update the apt database, point all to jessie, e.g.:<br />
<br />
sed -i 's/wheezy/jessie/g' /etc/apt/sources.list<br />
sed -i 's/wheezy/jessie/g' /etc/apt/sources.list.d/pve-enterprise.list<br />
apt-get update<br />
'''If there is a backports line then remove it.'''<br />
Currently, ''pve-manager'' and ''ceph-common'' have unmet dependencies with regard to the package versions in the jessie ''backports'' repo.<br />
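To find any backports entries that need removing, you can grep the apt sources (the paths below assume the standard Debian layout):<br />

```shell
# List any lines mentioning backports in the apt sources;
# remove or comment out the matches before continuing
grep -n backports /etc/apt/sources.list /etc/apt/sources.list.d/*.list 2>/dev/null \
    || echo "no backports entries found"
```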
<br />
If a Ceph server is used: the Ceph repositories for jessie can be found at http://download.ceph.com, so /etc/apt/sources.list.d/ceph.list will contain e.g.:<br />
<br />
deb http://download.ceph.com/debian-hammer jessie main<br />
<br />
<br />
You also need to add the Ceph repository signing key to apt; for details, check the documentation on ceph.com.<br />
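A commonly used way to add the key is sketched below; the key URL is an assumption based on the usual download.ceph.com layout, so verify it against the instructions on ceph.com before running it:<br />

```shell
# Fetch and register the Ceph release signing key
# (key URL assumed; double-check it on ceph.com first)
wget -qO - https://download.ceph.com/keys/release.asc | apt-key add -
apt-get update
```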
<br />
==== Install the new kernel ====<br />
<br />
Add the Proxmox repository location for jessie:<br />
<br />
deb http://download.proxmox.com/debian jessie pve-no-subscription<br />
<br />
First check what the newest available pve-kernel version is<br />
<br />
apt-cache search pve-kernel | sort -V<br />
<br />
- at the moment (September 2016) it is 4.4.19-1 - and install it:<br />
<br />
apt-get install pve-kernel-4.4.19-1-pve pve-firmware<br />
<br />
==== Upgrade the basic system to Debian Jessie ====<br />
<br />
This action will take some time; depending on the system's performance, it can take up to 60 minutes or even more. On an SSD, the dist-upgrade can finish in 5 minutes.<br />
<br />
Start with this step to get the initial set of upgraded packages.<br />
<br />
apt-get upgrade<br />
<br />
Once that's done, move on to the remaining packages to upgrade, with:<br />
<br />
apt-get dist-upgrade<br />
<br />
During either of the above, you may be asked whether new package versions may replace existing configuration files. Handle these as you see fit; they are not relevant to the Proxmox upgrade.<br />
<br />
Reboot the system in order to activate the new kernel.<br />
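After the reboot you can confirm that the new pve-kernel is actually running, for example:<br />

```shell
# The running kernel should now report the newly installed 4.x pve version
uname -r
```

If this still shows a 2.6.32 kernel, check the GRUB default entry before continuing.<br />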
<br />
==== Install Proxmox VE 4.x ====<br />
Finally, install the new Proxmox VE 4.x packages with one single command:<br />
apt-get install proxmox-ve<br />
<br />
Then you should purge the configuration files of packages which are no longer needed ('''''Note:''' purging vzctl will delete all files in /var/lib/vz/private and /var/lib/vz/root; only run this if you have backed up your OpenVZ containers.''):<br />
<br />
dpkg --purge vzctl<br />
dpkg --purge redhat-cluster-pve<br />
<br />
'''Remove the old kernel''' (not a must, but recommended). The kernel version has to be adapted to the currently installed one, and there can be several old kernels; use dpkg --list | grep pve-kernel to find any 2.6.* kernels to remove, e.g.:<br />
<br />
apt-get remove pve-kernel-2.6.*<br />
<br />
These commands may produce some error messages; ignore them.<br />
<br />
Reboot the system, then run again:<br />
<br />
apt-get install proxmox-ve<br />
<br />
Finally, reboot and test if all is working as expected.<br />
<br />
==== Optional: OpenVZ conversion ====<br />
<br />
Convert the previously backed up containers to LXC, following the HowTo on [[Convert OpenVZ to LXC]]<br />
<br />
You can also remove the obsolete OpenVZ container data from your local storage.<br />
<br />
rm -f /etc/pve/openvz/<ct-id>.conf<br />
rm -R <storage-path>/private/*<br />
<br />
==== Cluster upgrade ====<br />
'''It is not possible to mix Proxmox VE 3.x (or earlier) nodes with Proxmox VE 4.x nodes in one cluster.'''<br />
<br />
Due to the new corosync 2.x, the cluster has to be re-established. Use the same cluster name.<br />
<br />
* at the first node<br />
<br />
pvecm create <clustername><br />
<br />
* at all other nodes:<br />
<br />
pvecm add <first-node's-IP> -force<br />
<br />
The HA configuration (fail-over, fencing, etc.) has to be re-configured manually; this is now supported via the web GUI, see [[High Availability Cluster 4.x]]<br />
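After re-creating the cluster, you can verify on any node that all members have joined and the cluster is quorate, e.g.:<br />

```shell
# Show corosync membership and quorum state of the new cluster
pvecm status

# List all cluster nodes as seen by this node
pvecm nodes
```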
<br />
After upgrading the last node remove the V3.x cluster data:<br />
<br />
rm /etc/pve/cluster.conf<br />
<br />
=== Troubleshooting ===<br />
<br />
* Failing upgrade to latest Proxmox VE 3.x or removal of old packages:<br />
<br />
Make sure that the original repository configuration (for wheezy) is correct. The change to "jessie" repositories has to be done '''after''' the removal of old Proxmox VE.<br />
<br />
If Ceph is used: note that the repository URL recently changed to http://download.ceph.com/<br />
<br />
* Failing upgrade to "jessie"<br />
<br />
Make sure that the repository configuration for jessie is correct.<br />
<br />
If there was a network failure and the upgrade was only partially completed, try to repair the situation with<br />
<br />
apt-get -fy install<br />
<br />
* Unable to boot due to grub failure<br />
<br />
See [[Recover_From_Grub_Failure]]<br />
<br />
'''Info: As long as you do not delete the files in the locations below, Proxmox starts up with the same virtual machines.'''<br />
<br />
'''/var/lib/vz/images/*''' -- the VMs' disk image files are here.<br />
<br />
'''/etc/pve/qemu-server/*.conf''' -- the VMs' config files are here.<br />
=== External links ===<br />
<br />
*[https://www.debian.org/releases/jessie/amd64/release-notes/ Release Notes for Debian 8.0 (jessie), 64-bit PC]<br />
<br />
<br />
[[Category:HOWTO]] [[Category:Installation]]</div>
<hr />
<div>== Introduction ==<br />
<br />
Proxmox VE 4.x introduces major new features, therefore the upgrade must be carefully planned and tested. Depending on your existing configuration, several manual steps are required, including some downtime. NEVER start the upgrade process without a valid backup and without testing the same in a test lab setup.<br />
<br />
Major upgrades for V4.x:<br />
*OpenVZ is removed, a conversion via backup/restore to LXC is needed <br />
*New corosync version, therefore clusters has to be re-established<br />
*New HA manager (replacing RGmanager, involving a complete HA re-configuration)<br />
<br />
If you run a customized installation and/or you installed additional packages, for example for distributed storage like Ceph or sheepdog, DRBD or any other third party packages, you need to make sure that you also upgrade these package to Debian Jessie. <br />
<br />
V4.x supports only the new '''DRBD9 which is not backwards compatible with the 8.x version''' and is considered only a technology preview.<br />
<br />
Generally speaking there are two possibilities to move from 3.x to 4.x<br />
<br />
*New installation on new hardware (and restore VM´s from backup) - safest way, recommended!<br />
*In-place upgrade via apt, step by step <br />
<br />
In both cases you'd better empty the browser's cache after upgrade and reload the GUI page or there is the possibility that you see a lot of glitches.<br />
<br />
== New installation ==<br />
<br />
* Backup all VMs and containers to external media (see [[Backup and Restore]])<br />
* Backup all files in /etc You will need various files in /etc/pve, as well as /etc/passwd, /etc/network/interfaces, /etc/resolv.conf and others depending on what has been configured from the defaults.<br />
* Install Proxmox VE from ISO (this will wipe all data on the existing host)<br />
* Rebuild the cluster if you had any<br />
* Restore the file /etc/pve/storage.cfg (this will re-map and make available any external media you used for backup) <br />
* Restore firewall configs /etc/pve/firewall/ and /etc/pve/nodes/<node>/host.fw (if relevant)<br />
* Restore full VMs from Backups (see [[Backup and Restore]])<br />
* Restore/Convert containers (see [[Convert OpenVZ to LXC]])<br />
<br />
=== Bypassing Backup and Restore ===<br />
<br />
The following is only for advanced users which have knowledge about Proxmox configuration files!<br />
<br />
Since Backup and Restore can be a time-consuming process in the following a more rapid method is described - possible only<br />
<br />
* for KVM (i.e. not for containers)<br />
<br />
* if the (virtual) disk(s) for the VM(s) is (are) located at a storage which is not touched by the installation process (e.g. NFS at an external server)<br />
<br />
The steps <br />
<br />
* Backup all VMs<br />
* Restore full VMs from Backups<br />
<br />
will be replaced by<br />
<br />
* Backup <vmid>.conf file(s) for the respective machine(s), they are located under /etc/pve/nodes/<nodename>/lxc/ and /etc/pve/nodes/<nodename>/qemu-server/ respectively<br />
* Backup those storages from the storage configuration ( /etc/pve/storage.cfg ) which are shared and untouched, simply copy the respective lines and append them once to the newly build clusters /etc/pve/storage.cfg<br />
* Restore <vmid>.conf file(s) for the respective machine(s)<br />
<br />
'''Note:''' /etc/pve/lxc/ and /etc/pve/qemu-server/ are virtual symlinks for the current nodes lxc and qemu directory.<br />
<br />
After you restored the VM configs and restored the external shared Storage configuration - so that it is accessible under the same name in the new cluster again - you should be able to start the VMs again.<br />
No additional reboot is required.<br />
<br />
== In-place upgrade ==<br />
<br />
In-place upgrades are done with apt, so make sure that you are familiar with apt before you start here.<br />
<br />
=== Preconditions ===<br />
<br />
* upgraded to latest V3.4 version<br />
* reliable access to all configured storages<br />
* healthy cluster<br />
* no VM or CT running (note: VM live migration from 3.4 to 4.x node or vice versa NOT possible)<br />
* valid backup of all OpenVZ containers (needed for the conversion to LXC)<br />
* valid backup of all VM (only needed if something goes wrong)<br />
* correct repository configuration (both wheezy and jessie repositories accessible)<br />
* at least 1 GB of free disk space on the root mount point<br />
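The free-space precondition can be checked directly on the node; a minimal check (GNU df assumed):<br />

```shell
# Print the available space (in MB) on the root mount point;
# the upgrade needs at least 1 GB (~1024 MB) free there.
avail_mb=$(df -BM --output=avail / | tail -n1 | tr -dc '0-9')
echo "available on /: ${avail_mb} MB"
```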
<br />
=== Actions Step by Step ===<br />
<br />
Everything has to be done on each Proxmox node's command line (via console or SSH; preferably via console in order to exclude interrupted SSH connections). Some of the steps are optional. If a whole cluster is to be upgraded, note down the cluster name and the HA configuration (failover domains, fencing, etc.), since these have to be restored after the upgrade via the new web GUI. Again, make sure that you have a valid backup of all CTs and VMs before you start.<br />
<br />
'''Tip''': ''It is advisable to perform a dry-run of the upgrade first. Install the PVE 3.4 ISO on testing hardware, then upgrade this installation to the latest minor version of PVE 3.4 using the test repo (see [[Package repositories]]) then copy/create relevant configurations to the test machine to replicate your production setup as closely as possible.''<br />
<br />
==== Remove Proxmox VE 3.x packages in order to avoid dependency errors ====<br />
<br />
First make sure that your current installation is up to date and "clean" by running<br />
<br />
apt-get update && apt-get dist-upgrade<br />
<br />
Then start the removal:<br />
<br />
apt-get remove proxmox-ve-2.6.32 pve-manager corosync-pve openais-pve redhat-cluster-pve pve-cluster pve-firmware <br />
<br />
Adapt the repository locations to point to jessie and update the apt database, e.g.:<br />
<br />
sed -i 's/wheezy/jessie/g' /etc/apt/sources.list<br />
sed -i 's/wheezy/jessie/g' /etc/apt/sources.list.d/pve-enterprise.list<br />
apt-get update<br />
'''If there is a backports line, remove it.'''<br />
Currently, ''pve-manager'' and ''ceph-common'' have unmet dependencies with regard to the package versions in the jessie ''backports'' repo.<br />
<br />
If a Ceph server is used: Ceph repositories for jessie can be found at http://download.ceph.com, so /etc/apt/sources.list.d/ceph.list will contain e.g.:<br />
<br />
deb http://download.ceph.com/debian-hammer jessie main<br />
<br />
<br />
You also need to add the Ceph repository key to apt; for details, check the documentation on ceph.com.<br />
<br />
==== Install the new kernel ====<br />
<br />
Add the Proxmox VE jessie repository:<br />
<br />
deb http://download.proxmox.com/debian jessie pve-no-subscription<br />
<br />
First check which new kernel version is currently available<br />
<br />
apt-cache search pve-kernel | sort -V<br />
<br />
- at the moment (September 2016) it is 4.4.19-1 - and install it:<br />
<br />
apt-get install pve-kernel-4.4.19-1-pve pve-firmware<br />
<br />
==== Upgrade the basic system to Debian Jessie ====<br />
<br />
This action will consume some time - depending on the system's performance, it can take up to 60 minutes or even more. On an SSD, the dist-upgrade can be finished in 5 minutes.<br />
<br />
Start with this step to get the initial set of upgraded packages.<br />
<br />
apt-get upgrade<br />
<br />
Once that's done, move on to the remaining packages to upgrade, with:<br />
<br />
apt-get dist-upgrade<br />
<br />
During either of the above, you may be asked whether new package versions should replace modified configuration files. Decide as you see fit; these prompts are not specific to the Proxmox upgrade.<br />
<br />
Reboot the system in order to activate the new kernel.<br />
<br />
==== Install Proxmox VE 4.x ====<br />
Finally, install the new Proxmox VE 4.x packages with one single command:<br />
apt-get install proxmox-ve<br />
<br />
Then you should purge configuration files from packages which are no longer needed ('''''Note:''' purging vzctl will delete all files in /var/lib/vz/private and /var/lib/vz/root , only run this if you have backed up your openvz containers.''):<br />
<br />
dpkg --purge vzctl<br />
dpkg --purge redhat-cluster-pve<br />
<br />
'''Remove the old kernel''' (not a must, but recommended). The kernel version has to be adapted to the one(s) currently installed - there can be several old kernels. Use dpkg --list | grep pve-kernel to find any 2.6.* kernels to remove:<br />
<br />
apt-get remove pve-kernel-2.6.*<br />
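To double-check which kernel packages are actually installed, the filtering mentioned above can be sketched as follows. The dpkg output is simulated here so the pipeline is self-contained; on a real node, pipe dpkg --list itself through the same awk/sort:<br />

```shell
# Simulated `dpkg --list` excerpt with two installed pve-kernel packages;
# on a real node run:  dpkg --list | awk '/^ii.*pve-kernel/ {print $2}' | sort -V
dpkg_output='ii  pve-kernel-2.6.32-39-pve  2.6.32-156  amd64  The Proxmox PVE Kernel Image
ii  pve-kernel-4.4.19-1-pve   4.4.19-66   amd64  The Proxmox PVE Kernel Image'
printf '%s\n' "$dpkg_output" | awk '/^ii.*pve-kernel/ {print $2}' | sort -V
# -> pve-kernel-2.6.32-39-pve
#    pve-kernel-4.4.19-1-pve
```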
<br />
These commands may produce some error messages from Proxmox; ignore them.<br />
<br />
Reboot the system.<br />
<br />
Then run again:<br />
apt-get install proxmox-ve<br />
<br />
Finally, reboot and test if all is working as expected.<br />
<br />
==== Optional: OpenVZ conversion ====<br />
<br />
Convert the previously backed up containers to LXC, following the HowTo on [[Convert OpenVZ to LXC]]<br />
<br />
You can also remove the obsolete OpenVZ container data from your local storage.<br />
<br />
rm -f /etc/pve/openvz/<ct-id>.conf<br />
rm -R <storage-path>/private/*<br />
<br />
==== Cluster upgrade ====<br />
'''It is not possible to mix Proxmox VE 3.x (or earlier) and Proxmox VE 4.x nodes in one cluster.'''<br />
<br />
Due to the new corosync 2.x, the cluster has to be re-established. Please use the same cluster name.<br />
<br />
* on the first node:<br />
<br />
pvecm create <clustername><br />
<br />
* on all other nodes:<br />
<br />
pvecm add <first-node's-IP> -force<br />
<br />
The HA configuration (failover, fencing, etc.) has to be re-configured manually; this is now supported from the web GUI, see [[High Availability Cluster 4.x]]<br />
<br />
After upgrading the last node, remove the v3.x cluster data:<br />
<br />
rm /etc/pve/cluster.conf<br />
<br />
=== Troubleshooting ===<br />
<br />
* Failing upgrade to latest Proxmox VE 3.x or removal of old packages:<br />
<br />
Make sure that the original repository configuration (for wheezy) is correct. The change to "jessie" repositories has to be done '''after''' the removal of old Proxmox VE.<br />
<br />
If Ceph is used: note that the repository URL has recently changed to http://download.ceph.com/<br />
<br />
* Failing upgrade to "jessie"<br />
<br />
Make sure that the repository configuration for jessie is correct.<br />
<br />
If there was a network failure and the upgrade was only partially completed, try to repair the situation with<br />
<br />
apt-get -fy install<br />
<br />
* Unable to boot due to grub failure<br />
<br />
See [[Recover_From_Grub_Failure]]<br />
<br />
'''Info: As long as you do not delete the files mentioned above, Proxmox VE starts up with the same virtual machines.'''<br />
<br />
=== External links ===<br />
<br />
*[https://www.debian.org/releases/jessie/amd64/release-notes/ Release Notes for Debian 8.0 (jessie), 64-bit PC]<br />
<br />
<br />
[[Category:HOWTO]] [[Category:Installation]]</div>Kadir