Upgrade from 3.x to 4.0
- 1 Introduction
- 2 In-place upgrade
- 2.1 Preconditions
- 2.2 Actions Step by Step
- 2.3 Troubleshooting
- 2.4 External links
- 3 New installation
Introduction
Proxmox VE 4.0 introduces major new features, therefore the upgrade must be carefully planned and tested. Depending on your existing configuration, several manual steps are required, including some downtime. NEVER start the upgrade process without a valid backup and without having tested the procedure in a test lab setup first.
Major changes in V4.0:
- OpenVZ is removed, a conversion via backup/restore to LXC is needed
- New corosync version, therefore clusters have to be re-established
- New HA manager (replacing RGmanager, involving a complete HA re-configuration)
If you run a customized installation and/or you installed additional packages, for example for distributed storage like Ceph or Sheepdog, for DRBD, or any other third-party packages, you need to make sure that you also upgrade these packages to Debian Jessie.
V4.0 supports only the new DRBD9, which is not backwards compatible with the 8.x version and is considered a technology preview only.
Generally speaking, there are two ways to move from 3.x to 4.0:
- In-place upgrade via apt, step by step
- New installation on new hardware (and restore VMs from backup) - the safest way.
In both cases, empty the browser cache after the upgrade and reload the GUI page, otherwise you may see a lot of display glitches.
In-place upgrade
In-place upgrades are done with apt, so make sure that you are familiar with apt before you start.
Preconditions
- upgraded to latest V3.4 version
- reliable access to all configured storages
- healthy cluster
- no VM or CT running (note: VM live migration from 3.4 to 4.0 node or vice versa NOT possible)
- valid backup of all OpenVZ containers (needed for the conversion to LXC)
- valid backup of all VM (only needed if something goes wrong)
- correct repository configuration (both wheezy and jessie repositories accessible)
- at least 1GB free disk space at root mount point
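Some of these preconditions can be checked quickly on each node before you start, for example:
pveversion -v
pvecm status
df -h /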
Actions Step by Step
Everything has to be done on each Proxmox VE node's command line (via console or SSH; preferably via console in order to exclude interrupted SSH connections); some of the steps are optional. If a whole cluster is to be upgraded, note down the cluster name and the HA configuration (failover domains, fencing, etc.), since these have to be restored after the upgrade via the new web GUI. Again, make sure that you have a valid backup of all CTs and VMs before you start.
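For example, to record the cluster name and keep a copy of the current configuration before starting (the backup path is only an example):
pvecm status | grep -i name
cp -a /etc/pve /root/pve-config-backup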
Remove Proxmox VE 3.x packages in order to avoid dependency errors
First make sure that your current installation is "clean"; if in doubt, run
apt-get update && apt-get dist-upgrade
Then start the removal:
apt-get remove proxmox-ve-2.6.32 pve-manager corosync-pve openais-pve redhat-cluster-pve pve-cluster pve-firmware
Adapt the repository locations to point to jessie and update the apt database, e.g.:
sed -i 's/wheezy/jessie/g' /etc/apt/sources.list
sed -i 's/wheezy/jessie/g' /etc/apt/sources.list.d/pve-enterprise.list
apt-get update
If there is a backports line, remove it. Currently, pve-manager and ceph-common have unmet dependencies with regard to package versions in the jessie backports repository.
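To check whether any backports entries are configured, run e.g.:
grep -ri backports /etc/apt/sources.list /etc/apt/sources.list.d/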
In case the Ceph server is used: Ceph repositories for jessie can be found at http://download.ceph.com, therefore /etc/apt/sources.list.d/ceph.list will contain e.g.:
deb http://download.ceph.com/debian-hammer jessie main
You also need to install the Ceph repository key to apt, for details, check the wiki on ceph.com.
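At the time of writing, the key can be added like this (verify against the current instructions on ceph.com first):
wget -q -O- 'https://download.ceph.com/keys/release.asc' | apt-key add -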
Install the new kernel
First check what the current version of the new kernel is:
apt-cache search pve-kernel | sort
- at the moment (October 6th, 2015) it is 4.2.6-1 - and install it:
apt-get install pve-kernel-4.2.6-1-pve pve-firmware
Upgrade the basic system to Debian Jessie
This action will take some time - depending on the system's performance, it can take up to 60 minutes or even more. On an SSD, the dist-upgrade can be finished in 5 minutes.
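The upgrade itself is done with the standard Debian commands:
apt-get update
apt-get dist-upgrade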
Reboot the system in order to activate the new kernel.
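After the reboot you can verify that the new kernel is actually running, e.g.:
uname -r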
Install Proxmox VE 4.0
Finally, install the new Proxmox VE 4.0 packages with one single command:
apt-get install proxmox-ve
Then you should purge the configuration files of packages which are no longer needed (assuming you already backed up your OpenVZ containers):
dpkg --purge vzctl
dpkg --purge redhat-cluster-pve
Remove the old kernel (not a must, but recommended), e.g. (the kernel version has to be adapted to the currently installed one - there can be more than one old kernel, too):
apt-get remove pve-kernel-2.6.32-41-pve
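To find the exact names of old kernel packages still installed, list them first, e.g.:
dpkg -l | grep pve-kernel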
Finally, reboot and test if all is working as expected.
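For example, check the installed versions and the status of the main services:
pveversion -v
systemctl status pve-cluster pvedaemon pveproxy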
Optional: OpenVZ conversion
Convert the previously backed up containers to LXC, following the HowTo on Convert OpenVZ to LXC
You can also remove the obsolete OpenVZ container data from your local storage.
rm -f /etc/pve/openvz/<ct-id>.conf
rm -R <storage-path>/private/*
It is not possible to mix Proxmox VE 3.x (or earlier) nodes and Proxmox VE 4.0 nodes in one cluster.
Due to the new corosync 2.x, the cluster has to be re-established. Please use the same cluster name.
- on the first node:
pvecm create <clustername>
- on all other nodes:
pvecm add <first-node's-IP> -force
The HA configuration (fail-over, fencing, etc.) has to be re-configured manually; this is now supported via the web GUI, see High Availability Cluster 4.x.
After upgrading the last node, remove the V3.x cluster data:
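A minimal sketch, assuming the old 3.x cluster stack stored its configuration in the default locations - verify the paths before deleting anything:
# defaults used by the old cman/redhat-cluster stack (assumption - adapt to your setup)
rm -f /etc/cluster/cluster.conf
rm -f /etc/pve/cluster.conf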
Troubleshooting
- Failing upgrade to latest Proxmox VE 3.x or removal of old packages:
Make sure that the original repository configuration (for wheezy) is correct. The change to the "jessie" repositories has to be done after the removal of the old Proxmox VE packages.
In case Ceph is used: note that the repository URL has recently changed to http://download.ceph.com/
- Failing upgrade to "jessie"
Make sure that the repository configuration for jessie is correct.
If there was a network failure and the upgrade was only applied partially, try to repair the situation with:
apt-get -fy install
New installation
- Backup all VMs and containers
- Save all files from /etc/pve/... in a safe place
- Install Proxmox VE from ISO
- Rebuild the cluster if you had one
- Restore /etc/pve/storage.cfg
- Restore firewall configs /etc/pve/firewall/ and /etc/pve/nodes/<node>/host.fw
- Create empty VMs for the following restore action
- Restore VMs from backups (see the restore example after this list)
- For experts only: alternatively, if the virtual HD(s) are on an external storage, instead of the two steps mentioned above you can do the following (be careful what you do in this case, wrong handling can lead to inconsistent data and cause severe problems in the system):
- copy the previously saved .conf file(s) to the proper place on the node where the VM should run, i.e. /etc/pve/qemu-server/<vm-id>.conf
- Restore/Convert containers, see Convert OpenVZ to LXC
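For example, a VM backup can be restored from the command line with qmrestore; the archive name, VM ID and target storage below are placeholders to adapt to your setup:
qmrestore /mnt/backup/vzdump-qemu-101-2015_10_06-12_00_00.vma.lzo 101 --storage local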