= Roadmap =
* Cross cluster authentication mechanism
* More flexible scheduling for backup jobs
* Make TFA protection more powerful, allow multiple factors (like Proxmox Backup Server)
* Software TPM support for virtual machines

= Release History =
See also the [https://forum.proxmox.com/forums/announcements.7/ Announcement forum]

== Proxmox VE 7.0 ==
Released 6. July 2021: See [[Downloads]]. An updated ISO release 2, with the base system updated to the final Debian 11.0 Bullseye plus bug fixes, followed on 31. August 2021.

* Based on Debian Bullseye (11)
* Ceph Pacific 16.2 as new default
* Ceph Octopus 15.2 continued support
* Kernel 5.11 default
* LXC 4.0
* QEMU 6.0
* ZFS 2.0.4
 
'''Changelog Overview'''

* Installer:
** Rework the installer environment to use <code>switch_root</code> instead of <code>chroot</code> when transitioning from the initrd to the actual installer.
**: This improves module and firmware loading, and slightly reduces memory usage during installation.
** Automatically detect HiDPI screens and increase the console font size and GUI scaling accordingly. This improves the UX for workstations with Proxmox VE (for example, for passthrough).
** Improve ISO detection:
*** Support ISOs backed by devices using USB Attached SCSI (UAS), which modern USB3 flash drives often do.
*** Linearly increase the delay between subsequent scans for a device with an ISO image, raising the total check time from 20s to 45s. This allows very slow devices to be detected, while remaining fast in the common case.
** Use <code>zstd</code> compression for the initrd image and the squashfs images.
** Set up Btrfs as the root file system through the Proxmox VE installer (technology preview).
** Update to busybox 1.33.1 as the core-utils provider.
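The linear scan back-off described above can be sketched as follows. This is an illustrative model only; the helper name and the 1 s base/step values are assumptions, not the installer's actual code:

```python
# Illustrative sketch of the ISO scan back-off (assumed helper, not the
# installer's actual code): the delay before each scan attempt grows linearly,
# so early attempts finish quickly while very slow devices still get detected.

def scan_schedule(attempts: int, base_delay: float, step: float) -> list[float]:
    """Return the delay (in seconds) to wait before each scan attempt."""
    return [base_delay + i * step for i in range(attempts)]

# With 9 attempts, a 1 s base delay and a 1 s increment, the total wait adds
# up to 45 s, matching the new upper bound mentioned above, while the first
# few attempts still happen within seconds.
delays = scan_schedule(attempts=9, base_delay=1.0, step=1.0)
assert delays[0] == 1.0 and delays[-1] == 9.0
assert sum(delays) == 45.0
```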
 
* Enhancements in the web interface (GUI):
** The node summary panel shows a high-level status overview, while the separate Repository panel shows the in-depth status and a list of all configured repositories. Basic repository management, for example, activating or deactivating a repository, is also supported.
** <code>Notes</code> panels for guests and nodes can now interpret Markdown and render it as HTML.
** On manually triggered backups, you can now enable pruning with the backup-retention parameters of the target storage, if configured.
** The storage overview now uses SI units (base 10) to be consistent with the units used in the graphs.
** Support for security keys (like YubiKey) as SSH keys, when creating containers or preparing cloud-init images.
** Improved rendering of IOMMU groups when adding passthrough PCI devices to QEMU guests.
** Improved translations, among others:
*** Arabic
*** French
*** German
*** Japanese
*** Polish
*** Turkish
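The difference between the SI units now used in the storage overview and the binary units used elsewhere can be illustrated with a small formatter (a hypothetical helper, not the GUI's code):

```python
# The same byte count rendered in SI units (base 10, as the storage overview
# now uses) versus binary units (base 1024).

def format_bytes(n: int, base: int, units: list[str]) -> str:
    value = float(n)
    for unit in units:
        if value < base or unit == units[-1]:
            return f"{value:.2f} {unit}"
        value /= base
    raise AssertionError("unreachable: loop returns on the last unit")

SI = ["B", "kB", "MB", "GB", "TB"]
IEC = ["B", "KiB", "MiB", "GiB", "TiB"]

# A nominally "256 GB" disk differs by about 7% between the two notations:
print(format_bytes(256_000_000_000, 1000, SI))   # 256.00 GB
print(format_bytes(256_000_000_000, 1024, IEC))  # 238.42 GiB
```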
 
* Access Control:
** Single sign-on (SSO) with the new OpenID Connect access realm type.
**: You can integrate external authorization servers, either using existing public services or your own identity and access management solution, for example, Keycloak or LemonLDAP::NG.
** Added the new permission <code>Pool.Audit</code>, which allows users to see pools without permitting them to change the pool.
**: See the breaking changes below for the possible impact on custom-created roles.
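The resulting split between seeing and editing pools can be modeled as follows. This is a toy illustration, not Proxmox VE's actual ACL code, and the role names are examples; only built-in roles gain <code>Pool.Audit</code> automatically:

```python
# Toy model (not Proxmox VE's implementation) of the new privilege split:
# Pool.Audit grants visibility, Pool.Allocate grants modification.

ROLE_PRIVS = {
    "BuiltinPoolAdmin": {"Pool.Allocate", "Pool.Audit"},  # built-ins updated automatically
    "OldCustomRole": {"Pool.Allocate"},                   # custom roles need Pool.Audit added
}

def can_see_pool(role: str) -> bool:
    return "Pool.Audit" in ROLE_PRIVS[role]

def can_edit_pool(role: str) -> bool:
    return "Pool.Allocate" in ROLE_PRIVS[role]

# A custom role that was not updated can still edit pools, but no longer sees them:
assert can_see_pool("BuiltinPoolAdmin") and can_edit_pool("BuiltinPoolAdmin")
assert can_edit_pool("OldCustomRole") and not can_see_pool("OldCustomRole")
```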
 
* Virtual Machines (KVM/QEMU):
** QEMU 6.0 supports <code>io_uring</code> as an asynchronous I/O engine for virtual drives - this is now the default for newly started or migrated guests.
**: The new default can be overridden per drive in the guest config via <code>qm set VMID --DRIVE EXISTING-DRIVE-OPTS,aio=native</code> (where DRIVE would be, for example, <code>scsi0</code>, and the existing options can be taken from the <code>qm config VMID</code> output).
** EFI disks stored on Ceph now use the <code>writeback</code> caching mode, improving boot times on slower or highly loaded Ceph storages.
** Unreferenced VM disks (not present in the configuration) are no longer destroyed automatically:
*** This was made opt-in in the GUI in Proxmox VE 6.4 and is now also opt-in in the API and with CLI tools.
*** Furthermore, if this clean-up option is enabled, only storages with the content-types VM or CT disk images, or rootdir, will be scanned for unused disk volumes.
**: This new default also prevents data loss by default. It is especially beneficial in cases of dangerous and unsupported configurations, for example, where one backing storage is added twice to a Proxmox VE cluster with an overlapping set of content-types.
** VM snapshot states are now always removed when a VM gets destroyed.
** Improved logging during live restore.
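The drive-options value used by the <code>qm set</code> command above is the volume followed by comma-separated <code>key=value</code> options. A small helper (hypothetical, for illustration only) shows how an <code>aio</code> override fits into such a string:

```python
# A drive config value looks like "local-lvm:vm-100-disk-0,size=32G". This
# hypothetical helper appends or replaces one key=value option, mirroring what
# `qm set VMID --scsi0 EXISTING-OPTS,aio=native` expects as its argument.

def set_drive_option(drive_opts: str, key: str, value: str) -> str:
    # drop any existing setting for the key, then append the new one
    parts = [p for p in drive_opts.split(",") if not p.startswith(f"{key}=")]
    return ",".join(parts + [f"{key}={value}"])

opts = "local-lvm:vm-100-disk-0,size=32G"
print(set_drive_option(opts, "aio", "native"))
# local-lvm:vm-100-disk-0,size=32G,aio=native
```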
 
* Container:
** Support for containers on custom storages.
** Clone: Clear the cloned container's <code>/etc/machine-id</code> when systemd is in use or that file exists. This ID must be unique, in order to prevent issues such as MAC address duplication on Linux bridges.
 
* Migration:
** QEMU guests: The migration protocol for sending the Spice ticket changed in Proxmox VE 6.1. The code for backwards compatibility has now been dropped, prohibiting the migration of VMs from Proxmox VE 7.0 to hypervisors running Proxmox VE 6.1 and older.
**: Always upgrade to the latest Proxmox VE 6.4 before starting the upgrade to Proxmox VE 7.
** Containers: The <code>force</code> parameter to <code>pct migrate</code>, which enabled the migration of containers with bind mounts and device mounts, has been removed. Its functionality has been replaced by marking the respective mount-points as <code>shared</code>.
 
* High Availability (HA):
** Release LRM locks and disable watchdog protection if all HA services of the node the LRM is running on have been removed, and no new ones were added for over 10 minutes.
**: This reduces the possible subtle impact of an active watchdog after a node was cleared of HA services, for example, when HA services were only configured temporarily for evaluation.
** Add a new HA service state <code>recovery</code>, and turn the <code>fence</code> state into a transition to that new state.
**: This provides a clear distinction between services that still need to be fenced and services whose node has already been fenced and which are now awaiting recovery.
** Continuously retry recovery, even if no suitable node was found.
**: This improves recovery for services in restricted HA groups, where a quorate and working partition can exist without any node currently being available for a specific service.
**: For example, HA may be used to ensure that a service using local resources, like a VM on local storage, is restarted and kept up as long as its node is running.
** Allow manually disabling HA services that are currently in the <code>recovery</code> state, for more admin control in those situations.
 
* Backup and Restore:
** Backups of QEMU guests now support encryption using a master key.
** It is now possible to back up VM templates with SATA and IDE disks.
** The <code>maxfiles</code> parameter has been deprecated in favor of the more flexible <code>prune-options</code>.
** <code>vzdump</code> now defaults to keeping all backups, instead of keeping only the latest one.
** Caching during live restore has been reworked, significantly reducing both the total restore time and the time until the guest is fully booted.
** Support file-restore for VMs using ZFS or LVM for one or more storages in the guest OS.
 
* Network:
** Default to the modern <code>ifupdown2</code> for new installations using the official Proxmox VE ISO. The legacy <code>ifupdown</code> is still supported in Proxmox VE 7, but may be deprecated in a future major release.
 
* Time Synchronization:
** Due to the design limitations of <code>systemd-timesyncd</code>, which make it problematic for server use, new installations will install <code>chrony</code> as the default NTP daemon.
**: If you upgrade from a system using <code>systemd-timesyncd</code>, it is recommended that you manually install either <code>chrony</code>, <code>ntp</code> or <code>openntpd</code>.
 
* Ceph Server:
** Support for Ceph 16.2 Pacific.
** Ceph monitors with multiple networks can now be created using the CLI, provided you have multiple <code>public_networks</code> defined.
**: Note that multiple <code>public_networks</code> are usually not needed, but in certain deployments, you might need to have monitors in different network segments.
** Improved support for IPv6 and mixed setups when creating a Ceph monitor.
** Beginning with Ceph 16.2 Pacific, the [https://docs.ceph.com/en/pacific/rados/operations/balancer/ balancer module] is enabled by default for new clusters, leading to a better distribution of placement groups among the OSDs.
** Newly created BlueStore OSDs benefit from the newly enabled [https://docs.ceph.com/en/latest/rados/configuration/bluestore-config-ref/#bluestore-rocksdb-sharding sharding configuration for RocksDB], which should lead to better caching of frequently read metadata and less space needed during compaction.
 
* Storage:
** Support for Btrfs as a technology preview.
*** Add an existing Btrfs file system as storage to Proxmox VE, and use it for virtual machine and container disks, as a backup target, or to store and serve ISO and container appliance images.
** The outdated, deprecated, internal DRBD storage plugin has been removed. A derived version targeting newer DRBD is maintained by Linbit<ref name="linbit-drbd-gh" group="footnotes 7.0">https://github.com/LINBIT/linstor-proxmox</ref>.
** More use of content-type checks instead of checking a hard-coded storage-type list in various places.
** Support downloading ISO and container appliance images directly from a URL to a storage, including optional checksum verification.
 
* Disk Management:
** Wiping disks is now possible from the GUI, enabling you to clear disks which were previously in use and create new storages on them.
**: Note that wiping a disk is a destructive operation: any data on the disk will be destroyed permanently.
 
* pve-zsync:
** Separately configurable number of snapshots on source and destination, allowing you to keep a longer history on the destination without requiring the corresponding storage space on the source.
 
* Firewall:
** The sysctl settings needed by pve-firewall are now set on every update, to prevent disadvantageous interactions during other operations (for example, package installations).
 
* Certificate management:
** The ACME standalone plugin has improved support for dual-stacked (IPv4 and IPv6) environments and no longer relies on the configured addresses to determine its listening interface.
 
<div id="7.0-breaking-changes"></div>
'''Breaking Changes'''

* Pool permissions
: The old permission <code>Pool.Allocate</code> now only allows users to edit pools, '''not''' to see them. Therefore, <code>Pool.Audit</code> must be added to existing custom roles that use the old <code>Pool.Allocate</code>, to preserve the same behavior. All built-in roles are updated automatically.

* VZDump
** Hookscript: The <code>TARFILE</code> environment variable was deprecated in Proxmox VE 6 in favor of <code>TARGET</code>. In Proxmox VE 7, it has been removed entirely and is thus no longer exported to the hookscript.
** The <code>size</code> parameter of <code>vzdump</code> has been deprecated, and setting it is now an error.

* API deprecations, moves and removals
** The <code>upgrade</code> parameter of the <code>/nodes/{node}/(spiceshell|vncshell|termproxy)</code> API methods has been replaced by passing <code>upgrade</code> as the <code>cmd</code> parameter.
** The <code>/nodes/{node}/cpu</code> API method has been moved to <code>/nodes/{node}/capabilities/qemu/cpu</code>.
** The <code>/nodes/{node}/ceph/disks</code> API method has been replaced by <code>/nodes/{node}/disks/list</code>.
** The <code>/nodes/{node}/ceph/flags</code> API method has been moved to <code>/cluster/ceph/flags</code>.
** The <code>db_size</code> and <code>wal_size</code> parameters of the <code>/nodes/{node}/ceph/osd</code> API method have been renamed to <code>db_dev_size</code> and <code>wal_dev_size</code> respectively.
** The <code>/nodes/<node>/scan/usb</code> API method has been moved to <code>/nodes/<node>/hardware/usb</code>.

* CIFS credentials have been stored in the namespaced <code>/etc/pve/priv/storage/<storage>.pw</code> instead of <code>/etc/pve/<storage>.cred</code> since Proxmox VE 6.2. Existing credentials are moved to the new location during the upgrade, and the fallback code for the old location has been dropped.

* The external storage plugin mechanism received an ABI-version bump that reset the ABI age, marking an incompatible breaking change; external plugins must adapt to it before they can be loaded again.

* <code>qm|pct status <VMID> --verbose</code>, and the respective status API calls, only include the <code>template</code> line if the guest is a template, instead of outputting <code>template:</code> for guests which are not templates.
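The CIFS credential relocation amounts to a simple path change, shown here with a hypothetical helper (the paths are those given above; the function is for illustration only):

```python
# Illustration of the storage-credential path change; the helper is
# hypothetical, the paths are the ones named in the breaking-change entry.

def cifs_credential_path(storage: str, namespaced: bool = True) -> str:
    if namespaced:
        return f"/etc/pve/priv/storage/{storage}.pw"  # location since Proxmox VE 6.2
    return f"/etc/pve/{storage}.cred"                 # old location, fallback removed in 7.0

print(cifs_credential_path("mycifs"))
# /etc/pve/priv/storage/mycifs.pw
```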
 
<div id="7.0-known-issues"></div>
'''Known Issues'''

* '''Network''': Due to the updated systemd version and, for most upgrades, the newer kernel version (5.4 to 5.11), some network interfaces might change upon reboot:
** Some may change their name. For example, due to newly supported functions, a change from <code>enp33s0f0</code> to <code>enp33s0f0np0</code> could occur.
**: We observed such changes with high-speed Mellanox models.
** [https://sources.debian.org/src/bridge-utils/1.7-1/debian/NEWS/#L3-L23 Bridge MAC address selection has changed in Debian Bullseye] - it is now generated based on the interface name and the <code>machine-id (5)</code> of the system.
**: Systems installed using the Proxmox VE 4.0 to 5.4 ISO may have a non-unique machine-id. These systems will have their machine-id re-generated automatically on upgrade, to avoid a potentially duplicated bridge MAC.
: If you do the upgrade remotely, make sure you have a backup method of connecting to the host (for example, IPMI/iKVM, tiny-pilot, another network accessible by a cluster node, or physical access), in case the network used for SSH access becomes unreachable due to the network failing to come up after a reboot.

* '''Container''':
** cgroupv2 support by the container's OS is needed to run in a pure cgroupv2 environment. Containers running systemd version 231 or newer support cgroupv2, as do containers that do not use systemd as their init system in the first place (e.g., Alpine Linux or Devuan).
**: CentOS 7 and Ubuntu 16.10 are two prominent examples of Linux distribution releases whose systemd version is too old to run in a cgroupv2 environment. For details and possible fixes, see https://pve.proxmox.com/pve-docs/chapter-pct.html#pct_cgroup_compat
 
== Proxmox VE 7.0 beta1 ==
Released 24. June 2021: See [[Downloads]]
* Based on Debian Bullseye (11)
* Ceph Pacific 16.2 as new default
* Ceph Octopus 15.2 continued support
* Kernel 5.11 default
* LXC 4.0
* QEMU 6.0
* ZFS 2.0.4

* Installer:
** Rework the installer environment to use <code>switch_root</code> instead of <code>chroot</code> when transitioning from the initrd to the actual installer.
**: This improves module and firmware loading, and slightly reduces memory usage during installation.
** Automatically detect HiDPI screens and increase the console font size and GUI scaling accordingly. This improves the UX for workstations with Proxmox VE (for example, for passthrough).
** Improve ISO detection:
*** Support ISOs backed by devices using USB Attached SCSI (UAS), which modern USB3 flash drives often do.
*** Linearly increase the delay between subsequent scans for a device with an ISO image, raising the total check time from 20s to 45s. This allows very slow devices to be detected, while remaining fast in the common case.
** Use <code>zstd</code> compression for the initrd image and the squashfs images.
** Update to busybox 1.33.1 as the core-utils provider.
 
* Enhancements in the web interface (GUI):
** <code>Notes</code> panels for guests and nodes can now interpret Markdown and render it as HTML.
** On manually triggered backups, you can now enable pruning with the backup-retention parameters of the target storage, if configured.
** The storage overview now uses SI units (base 10) to be consistent with the units used in the graphs.
** Support for security keys (like YubiKey) as SSH keys, when creating containers or preparing cloud-init images.
** Improved rendering of IOMMU groups when adding passthrough PCI devices to QEMU guests.
** Improved translations, among others:
*** Arabic
*** French
*** German
*** Polish

* Virtual Machines (KVM/QEMU):
** QEMU 6.0 supports <code>io_uring</code> as an asynchronous I/O engine for virtual drives - this is now the default for newly started or migrated guests.
**: The new default can be overridden per drive in the guest config via <code>qm set VMID --DRIVE EXISTING-DRIVE-OPTS,aio=native</code>.
** EFI disks stored on Ceph now use the <code>writeback</code> caching mode, improving boot times on slower or highly loaded Ceph storages.
** Unreferenced VM disks (not present in the configuration) are no longer destroyed automatically:
*** This was made opt-in in the GUI in Proxmox VE 6.4 and is now also opt-in in the API and with CLI tools.
*** Furthermore, if this clean-up option is enabled, only storages with the content-types VM or CT disk images, or rootdir, will be scanned for unused disk volumes.
**: This new default also prevents data loss by default. It is especially beneficial in cases of dangerous and unsupported configurations, for example, where one backing storage is added twice to a Proxmox VE cluster with an overlapping set of content-types.
** VM snapshot states are now always removed when a VM gets destroyed.
** Improved logging during live restore.

* Container:
** Support for containers on custom storages.
** Clone: Clear the cloned container's <code>/etc/machine-id</code> when systemd is in use or that file exists. This ID must be unique, in order to prevent issues such as MAC address duplication on Linux bridges.

* Migration:
** QEMU guests: The migration protocol for sending the Spice ticket changed in Proxmox VE 6.1. The code for backwards compatibility has now been dropped, prohibiting the migration of VMs from Proxmox VE 7.0 to hypervisors running Proxmox VE 6.1 and older.
**: Always upgrade to the latest Proxmox VE 6.4 before starting the upgrade to Proxmox VE 7.
** Containers: The <code>force</code> parameter to <code>pct migrate</code>, which enabled the migration of containers with bind mounts and device mounts, has been removed. Its functionality has been replaced by marking the respective mount-points as <code>shared</code>.
 
* Backup and Restore:
** Backups of QEMU guests now support encryption using a master key.
** It is now possible to back up VM templates with SATA and IDE disks.
** The <code>maxfiles</code> parameter has been deprecated in favor of the more flexible <code>prune-options</code>.
** <code>vzdump</code> now defaults to keeping all backups, instead of keeping only the latest one.

* Network:
** Default to the modern <code>ifupdown2</code> for new installations using the official Proxmox VE ISO. The legacy <code>ifupdown</code> is still supported in Proxmox VE 7, but may be dropped in a future major release.

* Time Synchronization:
** Due to the design limitations of <code>systemd-timesyncd</code>, which make it problematic for server use, new installations will install <code>chrony</code> as the default NTP daemon.
**: If you upgrade from a system using <code>systemd-timesyncd</code>, it is recommended that you manually install either <code>chrony</code>, <code>ntp</code> or <code>openntpd</code>.

* Ceph Server:
** Support for Ceph 16.2 Pacific.
** Ceph monitors with multiple networks can now be created using the CLI, provided you have multiple <code>public_networks</code> defined.
**: Note that multiple <code>public_networks</code> are usually not needed, but in certain deployments, you might need to have monitors in different network segments.
** Improved support for IPv6 and mixed setups when creating a Ceph monitor.
** Beginning with Ceph 16.2 Pacific, the [https://docs.ceph.com/en/pacific/rados/operations/balancer/ balancer module] is enabled by default for new clusters, leading to a better distribution of placement groups among the OSDs.
** Newly created BlueStore OSDs benefit from the newly enabled [https://docs.ceph.com/en/latest/rados/configuration/bluestore-config-ref/#bluestore-rocksdb-sharding sharding configuration for RocksDB], which should lead to better caching of frequently read metadata and less space needed during compaction.

* Storage:
** The outdated, deprecated, internal DRBD storage plugin has been removed. A derived version targeting newer DRBD is maintained by Linbit<ref name="linbit-drbd-gh" group="footnotes 7.0"/>.

* Disk Management:
** Wiping disks is now possible from the GUI, enabling you to clear disks which were previously in use and create new storages on them. Note that wiping a disk is a destructive operation: any data on the disk will be destroyed permanently.

* pve-zsync:
** Separately configurable number of snapshots on source and destination, allowing you to keep a longer history on the destination without requiring the corresponding storage space on the source.

* Firewall:
** The sysctl settings needed by pve-firewall are now set on every update, to prevent disadvantageous interactions during other operations (for example, package installations).

* Certificate management:
** The ACME standalone plugin has improved support for dual-stacked (IPv4 and IPv6) environments and no longer relies on the configured addresses to determine its listening interface.
 
<div id="7.0-beta-breaking-changes"></div>
'''Breaking Changes'''

* Pool permissions
: Added the new permission <code>Pool.Audit</code>, which allows users to see pools without permitting them to change the pool. The old permission <code>Pool.Allocate</code> now only allows users to edit pools, '''not''' to see them. Therefore, <code>Pool.Audit</code> must be added to custom roles that use the old <code>Pool.Allocate</code>, to preserve the same behavior. All built-in roles are updated automatically.

* VZDump
** Hookscript: The <code>TARFILE</code> environment variable was deprecated in Proxmox VE 6 in favor of <code>TARGET</code>. In Proxmox VE 7, it has been removed entirely and is thus no longer exported to the hookscript.
** The <code>size</code> parameter of <code>vzdump</code> has been deprecated, and setting it is now an error.

* API deprecations, moves and removals
** The <code>upgrade</code> parameter of the <code>/nodes/{node}/(spiceshell|vncshell|termproxy)</code> API methods has been replaced by passing <code>upgrade</code> as the <code>cmd</code> parameter.
** The <code>/nodes/{node}/cpu</code> API method has been moved to <code>/nodes/{node}/capabilities/qemu/cpu</code>.
** The <code>/nodes/{node}/ceph/disks</code> API method has been replaced by <code>/nodes/{node}/disks/list</code>.
** The <code>/nodes/{node}/ceph/flags</code> API method has been moved to <code>/cluster/ceph/flags</code>.
** The <code>db_size</code> and <code>wal_size</code> parameters of the <code>/nodes/{node}/ceph/osd</code> API method have been renamed to <code>db_dev_size</code> and <code>wal_dev_size</code> respectively.
** The <code>/nodes/<node>/scan/usb</code> API method has been moved to <code>/nodes/<node>/hardware/usb</code>.

* CIFS credentials have been stored in the namespaced <code>/etc/pve/priv/storage/<storage>.pw</code> instead of <code>/etc/pve/<storage>.cred</code> since Proxmox VE 6.2. Existing credentials are moved to the new location during the upgrade, and the fallback code for the old location has been dropped.

* <code>qm|pct status <VMID> --verbose</code>, and the respective status API calls, only include the <code>template</code> line if the guest is a template, instead of outputting <code>template:</code> for guests which are not templates.

Note that there is no API-stability guarantee during the beta.
 
<div id="7.0-beta-known-issues"></div>
'''Known Issues'''

* '''Network''': Due to the updated systemd version and, for most upgrades, the newer kernel version (5.4 to 5.11), some network interfaces might change upon reboot:
** Some may change their name. For example, due to newly supported functions, a change from <code>enp33s0f0</code> to <code>enp33s0f0np0</code> could occur.
**: We observed such changes with high-speed Mellanox models.
** [https://sources.debian.org/src/bridge-utils/1.7-1/debian/NEWS/#L3-L23 Bridge MAC address selection has changed in Debian Bullseye] - it is now generated based on the interface name and the <code>machine-id (5)</code> of the system.
**: Systems installed using the Proxmox VE 4.0 to 5.4 ISO may have a non-unique machine-id. These systems will have their machine-id re-generated automatically on upgrade, to avoid a potentially duplicated bridge MAC.
: If you do the upgrade remotely, make sure you have a backup method of connecting to the host (for example, IPMI/iKVM, tiny-pilot, another network accessible by a cluster node, or physical access), in case the network used for SSH access becomes unreachable due to the network failing to come up after a reboot.
* Misc. bug fixes and improvements

'''Notes'''
<references group="footnotes 7.0" />
 
== Proxmox VE 6.4 ==
Released 28. April 2021
* Based on Debian Buster (10.9)
* Ceph Octopus 15.2.11 and Ceph Nautilus 14.2.20
* Kernel 5.4 default
* Kernel 5.11 opt-in
* LXC 4.0
* QEMU 5.2
* ZFS 2.0.4 - new [https://github.com/openzfs/zfs/releases/tag/zfs-2.0.0 major version]

* Virtual Machines (KVM/QEMU):
** Support pinning a VM to a specific QEMU machine version.
** Automatically pin VMs with Windows as the OS type to the current QEMU machine version on VM creation.
**: This improves stability and guarantees that the hardware layout stays the same, even with newer QEMU versions.
** Address issues with hanging QMP commands, which caused VMs to freeze on disk resize and in indeterministic edge cases.
**: Note that some QMP timeout log messages are still being investigated; they are harmless and only of an informative nature.
** cloud-init: re-add the ''Stateless Address Autoconfiguration'' (SLAAC) option to the IPv6 configuration.
** Improve the task-log output for mirroring drives and VM live-migration.
 
* Container:
** Improved cgroup v2 (control group) handling.
** Support and provide appliance templates for Alpine Linux 3.13, Devuan 3, Fedora 34 and Ubuntu 21.04.

* Backup and Restore:
** Implement unified single-file restore for virtual machine (VM) and container (CT) backup archives located on a Proxmox Backup Server.
**: The file-restore is available in the GUI and in a new command-line tool, <code>proxmox-file-restore</code>.
** Live-restore of VM backup archives located on a Proxmox Backup Server.
**: No more watching the task log and waiting for a restore to finish; VMs can now be brought up while the restore runs in the background.
** Consistent handling of excludes for container backups across the different backup modes and storage types.
** Container restores now default to the privilege setting from the backup archive.
 
* Ceph Server:
** Improve integration of the placement group (PG) auto-scaler status and configuration.
**: Allow configuring the CRUSH rule, <code>Target Size</code> and <code>Target Ratio</code> settings, and automatically calculate the optimal number of PGs based on them.

* Storage:
** Support editing of backup notes on any CephFS, CIFS or NFS storage.
** Support configuring a namespace for accessing a Ceph pool.
** Improve ZFS pool handling by checking separately whether a pool is imported and whether it is mounted.
**: This separation helps in situations where a pool was imported but not mounted, and executing another import command failed.

* Disk Management:
** Return partitions and display them in a tree format.
** Improve detection of disk and partition usage.

* Enhancements in the web interface (GUI):
** Show the current usage of host memory and CPU resources by each guest in a node's search view.
** Use binary units (1 KiB equals 1024 B, instead of 1 kB equals 1000 B) as the base in the node and guest memory-usage graphs, providing consistency with the units used in the current-usage gauge.
** Make the columns in the firewall rule view more responsive and flexible by default.
** Improve the Ceph pool view and show auto-scaler related columns.
** Support editing existing Ceph pools, adapting the CRUSH rule, <code>Target Size</code> and <code>Target Ratio</code>, among other things.
 
* External metric servers:
** Support the InfluxDB 1.8 and 2.0 HTTP(S) API.
** Allow the use of InfluxDB instances placed behind a reverse proxy.

* Proxmox VE API Proxy Daemon (<code>pveproxy</code>):
** Make the listening IP configurable (in <code>/etc/default/pveproxy</code>). This can help to limit exposure to the outside (e.g., by only binding to an internal IP).
** <code>pveproxy</code> now listens on both IPv4 and IPv6, by default.

* Installation ISO:
** Installation on ZFS:
*** If booted with legacy BIOS (non-UEFI), the kernel images are now also copied to the second VFAT partition (ESP), allowing the system to boot from there with GRUB and making it possible to enable all ZFS features on such systems.
*** Set up the boot partition and boot loader on all selected disks, instead of only on the first mirror vdev, improving the experience with hardware where the boot device is not easily selectable.
** The installer environment attempts an NTP time synchronization before actually starting the installation, avoiding telemetry and cluster issues if the RTC had a large time drift.

* pve-zsync:
** Improved snapshot handling, allowing for multiple sync intervals for a source and destination pair.
** Better detection of aborted syncs, which previously caused pve-zsync to stop the replication.

* Firewall:
** Fixes in the API schema to prevent storing rules with a big IP-address list, which would get rejected by <code>iptables-restore</code> due to its size limitations. We recommend creating and using IPSets for that use case.
** Improvements to the command-line parameter handling.
  
<div id="6.4-known-issues"></div>
'''Known Issues'''
 
 +
* Please '''avoid''' using <code>zpool upgrade</code> on the "rpool" (root pool) itself, when upgrading to ZFS 2.0 on a system booted by GRUB in legacy mode, as that will break pool import by GRUB.
 +
*: See [https://pve.proxmox.com/pve-docs/chapter-sysadmin.html#sysboot_determine_bootloader_used the documentation for determining the bootloader used], if you're unsure.
 +
*: Setups installed with the Proxmox VE 6.4 ISO are '''not''' affected, as there the installer always sets up an easier to handle, vfat formatted, ESP to boot.
 +
*: See the [[ZFS: Switch Legacy-Boot to Proxmox Boot Tool]] article about how to switch over to a safer boot variant for legacy GRUB-booted setups with ZFS as root filesystem.
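If unsure which bootloader is in use, the following commands can help (a sketch; the output is host-specific):

```shell
# Prints EFI variables if the system booted via UEFI;
# errors out if it booted in legacy BIOS mode (GRUB).
efibootmgr -v

# On setups managed by proxmox-boot-tool (e.g. installed with the
# 6.4 ISO), shows the configured ESPs and the bootloader in use.
proxmox-boot-tool status
```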

* New default bind address for pveproxy and spiceproxy, unifying the default behavior with Proxmox Backup Server
** With the LISTEN_IP now being configurable, the daemons bind to both wildcard addresses (IPv4 <code>0.0.0.0:8006</code> and IPv6 <code>[::]:8006</code>) by default.
*: Should you wish to prevent them from listening on IPv6, simply configure the IPv4 wildcard as LISTEN_IP in <code>/etc/default/pveproxy</code>:
*: <code>LISTEN_IP="0.0.0.0"</code>
** Additionally, the logged address format for IPv4 changed in pveproxy's access log (<code>/var/log/pveproxy/access.log</code>). Addresses are now logged as IPv4-mapped IPv6 addresses, so instead of:
*: <code>192.168.16.68 - root@pam [10/04/2021:12:35:11 +0200] "GET /api2/json/cluster/tasks HTTP/1.1" 200 854</code>
*: the line now looks like:
*: <code>::ffff:192.168.16.68 - root@pam [10/04/2021:12:35:11 +0200] "GET /api2/json/cluster/tasks HTTP/1.1" 200 854</code>
*: If you want to restore the old logging format, also set <code>LISTEN_IP="0.0.0.0"</code>.
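Log-processing scripts that expect plain IPv4 addresses can normalize the new format by stripping the <code>::ffff:</code> prefix, for example (a minimal sketch, not part of Proxmox VE itself):

```shell
# Strip the IPv4-mapped IPv6 prefix from a pveproxy access-log line
line='::ffff:192.168.16.68 - root@pam [10/04/2021:12:35:11 +0200] "GET /api2/json/cluster/tasks HTTP/1.1" 200 854'
echo "$line" | sed 's/^::ffff://'
```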

* Resolving the Ceph `insecure global_id reclaim` health warning
*: With Ceph Octopus 15.2.11 and Ceph Nautilus 14.2.20, we released an update to fix a security issue (CVE-2021-20288) where Ceph was not ensuring that reconnecting/renewing clients were presenting an existing ticket when reclaiming their global_id value.
*: Updating from an earlier version will result in the above health warning.
*: See the [https://forum.proxmox.com/threads/ceph-nautilus-and-octopus-security-update-for-insecure-global_id-reclaim-cve-2021-20288.88038/#post-385756 forum post here] for more details and instructions on how to address this warning.

==Proxmox VE 6.3==
Released 26. November 2020
* Based on Debian Buster (10.6)
* Ceph Octopus 15.2.6 (first stable release) and Ceph Nautilus 14.2.15
* Kernel 5.4 LTS
* LXC 4.0
* QEMU 5.1
* ZFS 0.8.5

* Proxmox Backup Server Integration
** Stable Proxmox Backup Server integration: the stable version 1.0 of Proxmox Backup Server is now integrated, and enterprise support is available from the Proxmox support team.
** Data is encrypted on the client side before backing up to Proxmox Backup Server.

* Ceph
** Stable integration of Ceph Octopus.
** Add a selector to the GUI configuration wizard to choose which supported Ceph version to install.
** Recovery progress is displayed in the Ceph status panel.
** Show, and allow setting, the Placement Group (PG) auto-scaling mode of Ceph pools.
** Set the device class when creating OSDs, especially useful if the auto-detection yields the wrong class.

* Enhancements in the GUI
** Improved VM boot device order editor:
*** It is now possible to select multiple devices per type (disk, network) for booting.
*** Booting from passed-through PCI devices (e.g., NVMe drives) is supported.
*** Improved user experience with a drag-and-drop UI.
** GUI for editing external metric servers: you can now connect your Proxmox VE nodes to InfluxDB or Graphite using the GUI, instead of having to manually edit <code>/etc/pve/status.cfg</code>.
** Optional TLS certificate verification for LDAP and AD authentication realms.
** Improved high-DPI display and browser zoom compatibility.
** Split up the storage content view by type.
** Backup/Restore:
*** Overview of all guests that aren't included in any backup at all.
*** Detailed view per backup job, showing all covered guests and which of their disks are backed up.
** Display optional comments for all storage types.
*** Proxmox Backup Server additionally displays the verification state of all backup snapshots.
** Better usability for preventing accidental snapshot rollback:
*** The GUI now makes it difficult to accidentally confuse snapshot removal with snapshot rollback.

* Storage
** Add highly flexible backup retention with "keep" settings: the new backup retention settings, which augment and replace the "Max Backups" setting, let you decide how many backups to keep per timeframe and implement enhanced retention policies per storage or backup job.
** Better handling of container volume activation on ZFS.
** Increased timeout for connecting to CIFS and NFS storage over slow links.
** Improved querying of SSD wear leveling.
** Small improvements to the configuration handling of the LIO iSCSI provider for ZFS over iSCSI storage.
** ZFS disk management: create a systemd service to unconditionally import a pool created using the GUI.

* Container
** Add support for current Devuan containers.
** Add support for Kali Linux containers.
** Update the list of supported releases for Ubuntu, Fedora, and CentOS.
** Support setting a custom timezone per container.
** Improve startup monitoring.
** Add a <code>debug</code> parameter to <code>pct start</code> to output the debug log of the container start.
** Support systems with up to 8192 cores.
** Optionally ignore mount points while running <code>pct fstrim</code>.
** Fix aborting of backups of Ceph-backed containers under large IO load, by issuing fsfreeze before snapshotting.
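The new container options can be exercised on the command line, for example (a sketch; VMID <code>101</code> is a placeholder, and the option names follow this release's <code>pct</code> manpage):

```shell
# Start a container and print the debug log of the start
pct start 101 --debug 1

# Trim container filesystems, skipping all mount points
pct fstrim 101 --ignore-mountpoints 1
```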

* QEMU
** Fast, incremental backups to Proxmox Backup Server using dirty bitmaps.
** Handle guest shutdowns (power down from within a VM) during backups.
** Improved boot order selection, allowing booting from multiple virtual disks and from passed-through PCI devices (e.g., NVMe drives).
** Allow passthrough of certain older Intel iGPU models with QEMU's 'legacy-igd' mode.
** Support more CPU options, for example SSE4.2.
** Better support for hugepages across multiple NUMA nodes.

* General improvements for virtual guests
** Improved handling of replicated guests when migrating.

* Clustering
** Harden locking in the clustered configuration filesystem (pmxcfs), avoiding a possible freeze when joining, messaging, or leaving a closed process group.

* User and permission management
** Improved support for using client certificates/keys when connecting to AD/LDAP realms.
** Optional support for case-insensitive logins with AD/LDAP realms.
** Fine-grained permissions for SDN and CloudInit.
** Better handling of clock jumps for rotating keys.

* Firewall
** Improved API for matching ICMP types.

* Documentation
** Clarify qdevice requirements.
** Add a section about ZFS pool design choices.
** Add documentation on the requirements for using encrypted ZFS datasets as storage.
** Add a manpage for cpu-models.conf(5).

* Installer
** Reboot automatically upon successful installation.
** Drop ext3 as a supported file system.
** Start a shell on vt3 for debugging during installation.

* Experimental: support for Software-Defined Networking (SDN)
** Support for IPAM with a plugin framework.
** Add support for internal IPAM management and PowerDNS.

* Countless bug fixes and smaller improvements

<div id="_6.3_known_issues"></div>
'''Known Issues'''
* VM guests with multiple boot disk setups (e.g., mdadm, LVM, etc.) need to be configured with the new boot order config so that all required disks are marked "bootable", with the one containing the boot loader placed first - otherwise, the guest may fail to boot after being restarted with the new QEMU 5.1 version.
* The "exclude-path" option for vzdump now supports non-anchored paths for all backup modes. Non-anchored paths do not start with a '/' and will match in any subdirectory. Previously, such paths only had an effect for "suspend" mode backups, so please ensure that you don't have any unwanted non-anchored paths configured.
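To illustrate the difference (a sketch of an <code>/etc/vzdump.conf</code> exclude list; the paths shown are examples only):

```
# /etc/vzdump.conf (sketch)
# "/tmp/?*" is anchored: it only matches /tmp at the guest's filesystem root.
# "cache/?*" is non-anchored: it now matches a "cache" directory in any
# subdirectory, in every backup mode.
exclude-path: /tmp/?* cache/?*
```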

==Proxmox VE 6.2==
Released 12. May 2020
*Based on Debian Buster (10.4)
*Ceph Nautilus (14.2.9)
*Kernel 5.4 LTS
*LXC 4.0
*Qemu 5.0
*ZFS 0.8.3

*Web interface
**Allow deploying DNS-based Let's Encrypt (ACME) certificates
**Allow admins to see the full granted permission/privilege tree for a user
**Add experimental Software-Defined Networking (SDN) GUI and documentation, based on modern mainline Linux kernel network technology
**Allow collapsing the Notes sub-panel in the guest summary panels, either permanently or automatically if empty, through the user settings
**Add a 'Creation Date' column for storage content, allowing one, for example, to comfortably find backups from a certain date
**Add device node creation to the Container feature selection
**Full support for up to 8 corosync links
**Automatic guest console reload on VM/CT startup
**Add Arabic translation
**Allow seamlessly changing the language, without logging out and back in again

* Container
**LXC 4.0.2 and lxcfs 4.0.3 with initial full cgroupv2 support by Proxmox VE
**Improved support for modern systemd-based containers
**Improved default settings to support hundreds to thousands* of containers running in parallel per node (* thousands only with simple distributions like Alpine Linux)
**Allow creating templates on directory-based storage

* Backup/Restore
**Support for the highly efficient and fast Zstandard (zstd) compression

* Improvements to the HA stack
** Allow destroying virtual guests under HA control when purge is set

* QEMU
** Fixed EFI disk behavior on block-based storage (see [[#_6.2_known_issues_ovmf|known issues]])
** VirtIO Random Number Generator (RNG) support for VMs
** Custom CPU types with user-defined flags and QEMU/KVM settings
** Improved machine type versioning and compatibility checks
** Various stability fixes, especially for backups and IO-Threads
** Migration:
*** Enable support for live migration with replicated disks
*** Allow specifying the target storage for offline mode
*** Allow specifying multiple source-target storage pairs (for now, CLI only)
*** Improved behavior with unused disks
*** Secure live migration with local disks

* General improvements for virtual guests
** Handle ZFS volumes with a non-standard mount point correctly

* Cluster
** Improved lock contention during high-frequency config modifications
** Add versioning for cluster join
** Enable full support for up to 8 corosync links

* Ceph
** Easier uninstall process, with better information about which actions need to be taken.

* Storage
** Storage migration: introduce an allow-rename option and return the new volume ID if a volume with the same ID is already allocated on the target storage
** Support the 'snippet' content type for GlusterFS storage
** Bandwidth limitations are now also available for SAMBA/CIFS based storage
** Handle ZFS volumes with a non-standard mount point correctly
** Improved metadata calculation when creating a new LVM-Thin pool
** Improved parsing of NVMe wearout metrics

* User and permission management
** Automatically sync LDAP users and groups into Proxmox VE
** New LDAP connection mode: LDAP+STARTTLS
** Allow adding and managing authentication realms through the 'pveum' CLI tool
** Full support and integration for API tokens
*** Shared or separated privileges
*** Configurable token lifetime
*** Revoke tokens at any time without impacting your user login
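A privilege-separated token can, for example, be created on the CLI and then used for API requests (a sketch; the user, token name, secret and host are placeholders):

```shell
# Create an API token with separated privileges for root@pam
pveum user token add root@pam automation --privsep 1

# Use the returned secret in the Authorization header of API calls
curl -H 'Authorization: PVEAPIToken=root@pam!automation=<secret>' \
    https://pve.example.com:8006/api2/json/version
```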

* Documentation
** Updated hypervisor migration guide in the wiki
** Document that reducing the number of Ceph placement groups is possible
** Improved layout, using a table of contents on the left side

* Various improvements
** Firewall: make config parsing more robust and improve ICMP-type filtering

<div id="_6.2_known_issues_ovmf"></div>
'''Known Issues with OVMF/UEFI disks of VMs'''

A VM's EFI disk on a storage which doesn't allow for small (128 KB) images (for example: Ceph, ZFS, LVM(thin)) was not correctly mapped to the VM. While this is fixed now, such existing setups may need manual intervention:
* You do not have to do anything if your EFI disk uses qcow2 or "raw" format on a file-based storage.
* '''Before the upgrade''', make sure that on your ESP, the EFI boot binary exists at <code>\EFI\BOOT\BOOTX64.EFI</code> (the default EFI boot fallback).
** Windows and some Linux VMs using systemd-boot should do that automatically
* '''If you already upgraded''' and it does not boot, see [[OVMF/UEFI Boot Entries]] for how to recreate the boot entry via the OVMF boot menu

==Proxmox VE 6.1==
Released 4. December 2019: See [[Downloads]], updated ISO release 2 with installer bug fixes and updated packages on 6. March 2020

* Based on Debian Buster (10.2)
* Ceph Nautilus (14.2.4.1)
* Corosync 3.0
* Kernel 5.3
* LXC 3.2
* Qemu 4.1.1
* ZFS 0.8.2

* Web interface
** More datacenter configuration options are now editable via the GUI:
*** U2F two-factor settings.
*** Cluster-wide bandwidth limits for the following traffic types: migration, backup-restore, clone, disk-move.
** Improvements to two-factor authentication, allowing the use of a hardware-based TOTP key.
** Mobile GUI: implemented login for user accounts with TOTP-based two-factor authentication enabled.
** More icons have been converted from bitmaps to vectorized icons from Font Awesome.
** The scale mode of noVNC can be changed in "My Settings".
** Addition of Hebrew language translations.
** New "Run now" button for cluster-wide backup jobs.
** With <nowiki>ifupdown2</nowiki> installed, it is now possible to change the network configuration and reload it from the GUI without a reboot.

* Container
** Pending changes for containers: it is possible to make changes to a running container; they will be applied after the next reboot of the container.
** Reboot of a running container via GUI, API, and command-line interface.
** Hot-plugging of mount points by using the new mount API available with Linux Kernel 5.3.
** Support for recent releases of GNU/Linux distributions like Fedora 31, CentOS 8, and Ubuntu 19.10.

* SPICE
** Audio devices can now be added via the GUI (no need to edit the configuration file).
** Folders can now be shared between the SPICE client and the VM (still experimental).
** Video streaming can be enabled. This can help to improve the performance of fast-changing display areas like videos.
** The SPICE USB device now supports USB3 (Qemu >= 4.1).

* Backup/Restore
** VMs with IOThreads enabled can now be backed up.
** Run scheduled backup jobs manually from the Datacenter in the GUI.

* Improvements to the HA stack
** New 'migrate' shutdown policy. If enabled, running services will be migrated to another node on shutdown. Once the node comes back, and if the services have not been manually moved to another node in the meantime, those services will move back.
** New 'crm-command stop'. Shuts down a VM/container with the specified timeout and performs a hard stop if the timeout is 0. A VM or container stop command will now be redirected to this new crm-command.

* QEMU
** Allow domains other than '0000' for PCI(e) passthrough.
** New 'reboot' API call. This allows applying pending changes without having to wait for the guest to shut down before starting it again.
** The QEMU monitor timeout issue, which prevented successful backups in certain configurations, has been fixed.
** PCI(e) passthrough supports up to 16 PCI(e) devices.
** Support for QEMU Guest Agents using an ISA serial port (not VirtIO) for communication; this allows using the QGA with FreeBSD, among others.

* General improvements for virtual guests
** Allow 'tags' in the guest configuration. This meta-information can be useful for things like configuration management (no GUI support yet).
** VM/CT: 'purge' allows removing the corresponding virtual machine or container from replication and backup jobs on destruction.
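Tags can, for instance, be set via the CLI and end up as a semicolon-separated list in the guest configuration (a sketch; VMID <code>100</code> and the tag values are examples):

```shell
# Set tags on a VM; stored as "tags: production;webserver"
# in /etc/pve/qemu-server/100.conf
qm set 100 --tags 'production;webserver'
```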

* Cluster stability
** A number of bugs have been identified and fixed upstream in cooperation with corosync and kronosnet.
** The initial issues some users have experienced with MTU changes are resolved.
** pmxcfs has undergone a stability audit using ASAN (AddressSanitizer) and UBSAN (Undefined Behavior Sanitizer), fixing various possible issues for certain edge cases.

* Storage
** Allow configuration of non-default 'mountpoint' properties for ZFS.
** Allow .img files as an alternative to .iso images.
** Various iSCSI improvements.
** Overhaul of ZFS over iSCSI with the LIO target provider.
** Support all features offered by newer kernels with Ceph and KRBD.

* Various improvements
** Firewall: added support for 'raw' tables, used to let users enable and configure SYN flood attack protection.
** Automatic renewal of the self-signed certificate 2 weeks before expiration.
** Reduced certificate lifetime of newly generated certificates (2 years instead of 10). This avoids modern browsers complaining about very long certificate lifetimes.

* Documentation
** Proofreading of parts of the documentation (style and grammar).
** Enhanced documentation for Ceph administration.

*Countless bug fixes and package updates (for all details see [https://bugzilla.proxmox.com/ bugtracker] and [https://git.proxmox.com/ GIT])

==Proxmox VE 6.0==
Released 16. July 2019: See [[Downloads]]

*Based on Debian Buster 10.0
*Pre-upgrade checklist tool `pve5to6` - available on Proxmox VE 5.4 and 6.0
**Running `pve5to6` checks for common pitfalls known to interfere with a clean upgrade process.
*Corosync 3.0.2 using Kronosnet as transport
**The default transport method now uses unicast; this can simplify setups where the network had issues with multicast.
**A new web GUI network selection widget avoids typos when choosing the correct link address.
**Currently, there is no multicast support available (it's on the kronosnet roadmap).
*LXC 3.1
*Ceph Nautilus 14.2.x
**Better performance monitoring for rbd images through `rbd perf image iotop` and `rbd perf image iostat`.
**OSD creation based on ceph-volume: integrated support for full disk encryption of OSDs.
**More robust handling of OSDs (no more mounting and unmounting to identify the OSD).
**'''ceph-disk has been removed''': after upgrading, it is not possible to create new OSDs without upgrading to Ceph Nautilus.
**Support for PG split and join: the number of placement groups per pool can now be increased and decreased. There is even an optional plugin in ceph-manager to automatically scale the number of PGs.
**The new messenger v2 protocol brings support for encryption on the wire (currently still experimental).
**See https://docs.ceph.com/en/nautilus/releases/nautilus/ for the complete release notes.
*Improved Ceph administration via GUI
**A cluster-wide overview for Ceph is now displayed in the 'Datacenter View' too.
**The activity and state of the placement groups (PGs) are visualized.
**The version of all Ceph services is now displayed, making detection of outdated services easier.
**Configuration settings from the config file and database are displayed.
**You can now select the public and cluster networks in the GUI with a new network selector.
**Easy encryption for OSDs with a checkbox.
*ZFS 0.8.1
**Native encryption for datasets with comfortable key handling, by integrating the encryption directly into the `zfs` utilities. Encryption is as flexible as volume creation and adding redundancy - the gained comfort compared to dm-crypt is comparable to the difference between mdadm+lvm and ZFS.
**Allocation classes for vdevs: you can add a dedicated fast device to a pool, which is used for storing often-accessed data (metadata, small files).
**TRIM support - use `zpool trim` to notify devices about unused sectors.
**Checkpoints on pool level.
**See https://github.com/zfsonlinux/zfs/releases/tag/zfs-0.8.0 and https://github.com/zfsonlinux/zfs/releases/tag/zfs-0.8.1 for the complete release notes.
*Support for ZFS on UEFI and on NVMe devices in the installer
**You can now install Proxmox VE with its root on ZFS on UEFI-booted systems.
**You can also install ZFS on NVMe devices directly from the installer.
**By using `systemd-boot` as the bootloader, all pool-level features can be enabled on the root pool.
*Qemu 4.0.0
**Live migration of guests with disks backed by local storage via the GUI.
**Added support for more Hyper-V enlightenments, improving Windows performance in a virtual machine under Qemu/KVM.
**Mitigations for the performance impact of recent Intel CPU vulnerabilities.
**More VM CPU flags can be set in the web interface.
**Newer virtual PCIe port hardware for machine type q35 in version >= 4.0. This fixes some passthrough issues.
**Support for custom Cloudinit configurations:
***You can create a custom Cloudinit configuration and store it as a snippet on a storage.
***The `qm cloudinit dump` command can be used to get the current Cloudinit configuration as a starting point for extensions.
*Firewall improvements
**Improved detection of the local network so that all used corosync cluster networks get automatically whitelisted.
**Improved firewall behavior during cluster filesystem restart, e.g. on package upgrade.
*Mount options for container images
**You can now set certain performance- and security-related mount options for each container mountpoint.
*Linux Kernel
**Updated 5.0 kernel based off the Ubuntu 19.04 "Disco" kernel with ZFS.
**Intel in-tree NIC drivers are used:
***Many recent improvements to the kernel networking subsystem introduced incompatibilities with the out-of-tree drivers provided by Intel, which sometimes lag behind on support for new kernel versions. This can lead to a change of the predictable network interface names for Intel NICs.
*Automatic cleanup of old kernel images
**Old kernel images are no longer marked as NeverAutoRemove - preventing problems when /boot is mounted on a small partition.
**By default the following images are kept installed (all others can be automatically removed with `apt autoremove`):
***the currently running kernel
***the version being newly installed on package updates
***the two latest kernels
***the latest version of each kernel series (e.g. 4.15, 5.0)
*Guest status display in the tree view: additional states for guests (migration, backup, snapshot, locked) are shown directly in the tree overview.
*Improved ISO detection in the installer: the way the installer detects the ISO was reworked to include more devices, alleviating detection problems on certain hardware.
*Pool-level backup: it is now possible to create a backup task for backing up a whole pool. By selecting a pool as backup target instead of an explicit list of guests, new members of the pool are automatically included, and removed guests are automatically excluded from the backup task.
*New User Settings and Logout menu.
*Automatic rotation of the authentication key every 24h: by limiting the key lifetime to 24h, the impact of key leakage or a malicious administrator is reduced.
*The node's Syslog view in the GUI was overhauled and is now faster.
*Sheepdog is no longer maintained, and thus no longer supported as a storage plugin.
*`ceph-disk` has been removed in Ceph Nautilus - use `ceph-volume` instead.
*Improved reference documentation
*Upgrade documentation: https://pve.proxmox.com/wiki/Upgrade_from_5.x_to_6.0
*Countless bug fixes and package updates (for all details see [https://bugzilla.proxmox.com/ bugtracker] and [https://git.proxmox.com/ GIT])

==Proxmox VE 6.0 beta1==
Released 04.07.2019: See [[Downloads]]
*Based on Debian Buster 10.0
*Kernel 5.0.15
*ZFS on EFI and/or NVMe installation
*Ceph v14.2.1 Nautilus
*ZFS 0.8.1
*QEMU 4.0.0
*LXC 3.1.0
*Corosync 3
*Misc. bug fixes and improvements

==Proxmox VE 5.4==
Released 11.04.2019: See [[Downloads]]

*Based on Debian Stretch 9.8
*Kernel 4.15.18
*QEMU 2.12.1 (QEMU 3.0.1 since May 2019 via apt)
*LXC 3.1.0
*ZFS 0.7.13
*Ceph 12.2.11 (Luminous LTS, stable), packaged by Proxmox
**Ceph can now be installed via the GUI, thus closing the gap of needing the CLI for this initial step before configuring a complete (hyperconverged) Ceph setup via PVE's GUI.
*QDevice support via `pvecm`
**Primarily for small 2-node clusters: adding a qdevice can help mitigate the downside of not being able to reboot one node without losing quorum (and thus the ability to make any changes in the cluster).
**Can also help in clusters with a larger even number of nodes by providing a tie-breaking vote.
**Integration into pvecm and the PVE stack vastly simplifies adding a qdevice (it was possible manually before as well)
*Wake-on-LAN support via `pvenode`
**It is now possible to associate a MAC address with a PVE node.
**This can be used to send a WOL (Wake-on-LAN) packet to that node from any other node in the PVE cluster, and thus power it on on demand.
**Provides a comfortable way of saving power by keeping only part of the nodes permanently online, and starting others on demand (e.g. on increased load, or for running guests during upgrades of the PVE nodes).
*GUI Container wizard creates unprivileged containers by default
*HA improvements and added flexibility
**It is now possible to set a datacenter-wide HA policy which can change the way guests are treated upon a node shutdown or reboot. The choices are:
***freeze: always freeze services, independent of the shutdown type (reboot, poweroff)
***failover: never freeze services; this means that a service will get recovered to another node if possible and if the current node does not come back up within the grace period of 1 minute.
***default: this is the current behavior; freeze on reboot but do not freeze on poweroff
* Suspend to disk/hibernate support for Qemu/KVM guests
**Qemu guests can be 'hibernated' (have their RAM contents and internal state saved to permanent storage) and resumed on the next start.
**This enables users to preserve the running state of qemu guests across most upgrades to and reboots of the PVE node.
**Additionally, it can speed up the startup of guests running complex workloads, or workloads which take lots of resources to set up initially but need not run permanently.
*Support for guest (both Qemu and LXC) hook scripts:
**Hook scripts are small executables which can be configured for each guest, and are called at certain steps of the guest's lifetime ('pre-start', 'post-start', 'pre-stop', 'post-stop').
**This gives administrators great flexibility in the way they can prepare the environment for the guest (e.g. adding necessary network resources (routes, VLANs), firewall rules, unlocking encrypted files/devices, ...) and clean up when the guest is shut down or stopped.
*Improved Qemu guest creation wizard:
**Certain often-requested options (e.g. machine type (q35, pc-i440fx), firmware (SeaBIOS, UEFI), SCSI controller) can now be selected directly in the VM creation wizard, and dependent options get set to sensible values directly.
*Cluster-wide subscription check and display
**The Datacenter Summary page now gives administrators a global overview of the subscription status of all nodes in a PVE cluster, thus helping to verify the support status of the complete cluster at one quick glance.
* Improved UI in the installer
**It is now possible to go back to a previous screen in the installer and adapt choices made there, without having to restart the complete installation process.
**Before the actual installation is performed, an information page is displayed containing all relevant information.
*U2F authentication for the web UI
**PVE's web UI can now optionally use two-factor authentication for users.
**This is generally considered good security practice, and in certain domains and environments it is required.
**The new U2F authentication and the TOTP second-factor authentication can be configured by each user for themselves (previously, all 2FA solutions needed to be configured by a user with User.Modify permission).
*Improved reference documentation
*Countless bug fixes and package updates (for all details see [https://bugzilla.proxmox.com/ bugtracker] and [https://git.proxmox.com/ GIT])

==Proxmox VE 5.3==
Released 4.12.2018: See [[Downloads]], updated ISO image with installer and apt bug fixes and updated packages on 23.1.2019

*Based on Debian Stretch 9.6
*Kernel 4.15.18
*QEMU 2.12.1
*LXC 3.0.2
*ZFS 0.7.12
*Ceph 12.2.8 (Luminous LTS, stable), packaged by Proxmox
*Installer with ZFS: no swap space is created by default; instead, an optional limit on the used space can be defined in the advanced options, leaving unpartitioned space at the end for a swap partition.
*Disk management in the GUI (ZFS, LVM, LVM-thin, xfs, ext4)
*Create CephFS via GUI (MDS)
*CephFS storage plugin (supports backups, templates and ISO images)
*LIO support for the ZFS over iSCSI storage plugin
*ifupdown2 package and partial GUI support
*Delete unused disk images in the storage view
*Enable/disable the local cursor in noVNC
*Enable/disable autoresize in noVNC
*Edit /etc/hosts via GUI, which is necessary whenever the IP or hostname is changed
*Editable VGA memory for VMs, enabling higher resolutions (UHD) for e.g. SPICE
*VM generation ID: users can set a 'vmgenid' device, which enables safe snapshots, backups and clones for various software inside the VM, like Microsoft Active Directory. The vmgenid is added automatically to all newly created VMs; users have to add it manually to older VMs.
*qemu-server: add the ability to fstrim a guest filesystem after moving a disk (during move_disk or migrate with local disks)
*Emulating ARM virtual machines (experimental, mostly useful for development purposes)
*vGPU/MDev and PCI passthrough: GUI for configuring PCI passthrough, which also enables the use of vGPUs (aka mediated devices) like Intel KVMGT (aka GVT-g) or Nvidia vGPUs.
*pvesh rewritten to use the new CLI formatter framework - providing a configurable output format for all API calls (formats include json, yaml and a pleasant table output with borders). Provides unified support for json output for the PVE CLI tools.
*Nesting for containers (privileged & unprivileged): allows running lxc, lxd or docker inside containers; also supports using AppArmor inside containers
*Mounting CIFS/NFS inside containers (privileged): allows using samba or nfs shares directly from within containers
*Improved reference documentation
*Countless bug fixes and package updates (for all details see [https://bugzilla.proxmox.com/ bugtracker] and [https://git.proxmox.com/ GIT])
  
 
==Proxmox VE 5.2==
 
*LXC: templates and clones, move volume/disk
 
*Create and edit new roles via GUI
*I/O bandwidth limits for restore operations (globally, per storage or per restore job)
 
*new and improved xterm.js integration including reconnect logic (on container reboots or restart migrations)
*Basic/Advanced GUI
 
*ebtables support
 
*ebtables support
 
*Improved reference documentation
 
*Improved reference documentation
*installer: LVM thin or ZFS plugin pre-configured
*LVM thin support
*[https://pve.proxmox.com/wiki/HTTPSCertificateConfiguration#Let.27s_Encrypt_using_acme.sh Let's Encrypt] support
*DRBD9: drbd 9.0.2 kernel module and drbdmanage 0.95
*LXC: improved container setup
*full IPv6 support
*include LXC (and removed OpenVZ) - see [[Convert OpenVZ to LXC]]
*added [https://pve.proxmox.com/wiki/DRBD9 DRBD9] with drbdmanage (technology preview)
*countless bug fixes and package updates (for all details see [https://bugzilla.proxmox.com/ bugtracker] and [https://git.proxmox.com/ GIT])
*based on Debian Jessie 8.1
*use kernel 3.19.8
*new HA manager, see [https://pve.proxmox.com/wiki/High_Availability_Cluster_4.x High_Availability_Cluster_4.x]
*QEMU 2.3
*include LXC (and removed OpenVZ), see [https://pve.proxmox.com/wiki/Linux_Container Linux Container]
*[https://pve.proxmox.com/wiki/DRBD9 DRBD9]
*countless bug fixes and package updates (for all details see [https://bugzilla.proxmox.com/ bugtracker] and [https://git.proxmox.com/ GIT])
** add multi-monitor support
** GUI: use split-button to easily select SPICE or VNC
** more details on https://pve.proxmox.com/wiki/SPICE
* update qemu to 1.7.0
** add 'pvscsi' to the list of scsi controllers (emulates the VMware PVSCSI device)
* Ceph Server (Technology Preview)
** new GUI to manage Ceph server running on PVE nodes
** more details on https://pve.proxmox.com/wiki/Ceph_Server
* added Open vSwitch support (Technology Preview)
* Optional 3.10 Kernel (based on RHEL7 beta, currently without OpenVZ support, for testing only)
* storage: new ZFS plugin (Technology Preview), see https://pve.proxmox.com/wiki/Storage:_ZFS
* storage: remove nexenta plugin (ZFS plugin is faster)
* updated GlusterFS to 3.4.2
** New upstream release (fix for udev rules)
* glusterfs
** support for glusterfs (technology preview, https://pve.proxmox.com/wiki/Storage:_GlusterFS)
* SPICE
** support for the SPICE protocol (technology preview, https://pve.proxmox.com/wiki/SPICE)
* based on Debian 7.1 (Wheezy)
* Countless bug fixes (for all details see [https://bugzilla.proxmox.com bugtracker] and [https://git.proxmox.com GIT])
(First public beta released on 30.09.2011, rc1 on 16.02.2012, final release on 30.03.2012)

[https://pve.proxmox.com/wiki/Category:Proxmox_VE_2.0 Proxmox VE 2.0 Documentation]

*Complete new GUI
**All jobs can be monitored as “Recent tasks”
*OpenVZ
**Appliance Downloader, including [https://www.turnkeylinux.org/partners TurnKey Linux] channel
**Multiple storages for OpenVZ containers - no longer limited to /var/lib/vz!
**vswap support
**Improved init.log (shows start-up logs of an OpenVZ container)
*Commitment to Free Software (FOSS): public code repository and bug tracker for the 2.x code base.
*Licensed under the GNU Affero General Public License, version 3: https://www.gnu.org/licenses/agpl-3.0.html
==Proxmox VE 1.9==

*Support new storage model
*Allow to specify MAC address (for KVM and OpenVZ in veth mode)
*Added iSCSI Tape support, see https://scst.sourceforge.net/
*Added SCST modules to the kernel
*Provide SCST userspace package

Revision as of 12:32, 23 September 2021

Roadmap

  • Cross cluster authentication mechanism
  • More flexible scheduling for backup jobs
  • Make TFA protection more powerful, allow multiple factors (like Proxmox Backup Server)
  • Software TPM support for virtual machines

Release History

See also Announcement forum

Proxmox VE 7.0

Released 6. July 2021: See Downloads, updated ISO release 2 with updated base system to final Debian 11.0 Bullseye and bug fixes 31. August 2021

  • Based on Debian Bullseye (11)
  • Ceph Pacific 16.2 as new default
  • Ceph Octopus 15.2 continued support
  • Kernel 5.11 default
  • LXC 4.0
  • QEMU 6.0
  • ZFS 2.0.4

Changelog Overview

  • Installer:
    • Rework the installer environment to use switch_root instead of chroot, when transitioning from initrd to the actual installer.
      This improves module and firmware loading, and slightly reduces memory usage during installation.
    • Automatically detect HiDPI screens, and increase console font and GUI scaling accordingly. This improves UX for workstations with Proxmox VE (for example, for passthrough).
    • Improve ISO detection:
      • Support ISOs backed by devices using USB Attached SCSI (UAS), which modern USB3 flash drives often do.
      • Linearly increase the delay of subsequent scans for a device with an ISO image, bringing the total check time from 20s to 45s. This allows for the detection of very slow devices, while continuing faster in general.
    • Use zstd compression for the initrd image and the squashfs images.
    • Set up Btrfs as the root file system through the Proxmox VE Installer (Technology preview)
    • Update to busybox 1.33.1 as the core-utils provider.
  • Enhancements in the web interface (GUI):
    • The node summary panel shows a high level status overview, while the separate Repository panel shows in-depth status and list of all configured repositories. Basic repository management, for example, activating or deactivating a repository, is also supported.
    • Notes panels for Guests and Nodes can now interpret Markdown and render it as HTML.
    • On manually triggered backups, you can now enable pruning with the backup-retention parameters of the target storage, if configured.
    • The storage overview now uses SI units (base 10) to be consistent with the units used in the graphs.
    • Support for security keys (like YubiKey) as SSH keys, when creating containers or preparing cloud-init images.
    • Improved rendering for IOMMU-groups when adding passthrough PCI devices to QEMU guests.
    • Improved translations, among others:
      • Arabic
      • French
      • German
      • Japanese
      • Polish
      • Turkish
  • Access Control:
    • Single-Sign-On (SSO) with the new OpenID Connect access realm type.
    You can integrate external authorization servers, either using existing public services or your own identity and access management solution, for example, Keycloak or LemonLDAP::NG.
    • Added new permission Pool.Audit to allow users to see pools, without permitting them to change the pool.
    See breaking changes below for some possible impact in custom created roles.
  • Virtual Machines (KVM/QEMU):
    • QEMU 6.0 has support for io_uring as an asynchronous I/O engine for virtual drives - this is now the default for newly started or migrated guests.
    The new default can be overridden in the guest config via qm set VMID --DRIVE EXISTING-DRIVE-OPTS,aio=native (where, for example, DRIVE would be scsi0 and the OPTS can be taken from the qm config VMID output).
    • EFI disks stored on Ceph now use the writeback caching-mode, improving boot times in case of slower or highly-loaded Ceph storages.
    • Unreferenced VM disks (not present in the configuration) are not destroyed automatically any more:
      • This was made opt-in in the GUI in Proxmox VE 6.4 and is now also opt-in in the API and with CLI tools.
      • Furthermore, if this clean-up option is enabled, only storages with content-types of VM or CT disk images, or rootdir will be scanned for unused disk-volumes.
    With this new default value, data loss is also prevented by default. This is especially beneficial in cases of dangerous and unsupported configurations, for example, where one backing storage is added twice to a Proxmox VE cluster with an overlapping set of content-types.
    • VM snapshot states are now always removed when a VM gets destroyed.
    • Improved logging during live restore.
  • Container
    • Support for containers on custom storages.
    • Clone: Clear the cloned container's `/etc/machine-id` when systemd is in use or that file exists. This ID must be unique, in order to prevent issues such as MAC address duplication on Linux bridges.
  • Migration
    • QEMU guests: The migration protocol for sending the Spice ticket changed in Proxmox VE 6.1. The code for backwards compatibility has now been dropped, prohibiting the migration of VMs from Proxmox VE 7.0 to hypervisors running Proxmox VE 6.1 and older.
    Always upgrade to the latest Proxmox VE 6.4, before starting the upgrade to Proxmox VE 7.
    • Containers: The force parameter to pct migrate, which enabled the migration of containers with bind mounts and device mounts, has been removed. Its functionality has been replaced by marking the respective mount-points as shared.
  • High Availability (HA):
    • Release LRM locks and disable watchdog protection if all services of the node the LRM is running on have been removed and no new ones were added for over 10 minutes.
    This reduces the possible subtle impact of an active watchdog after a node was cleared of HA services, for example, when HA services were previously only configured for evaluation.
    • Add a new HA service state recovery and turn the fence state into a transition to that new state.
    This gives a clear distinction between services that are yet to be fenced and services whose node has already been fenced and which are now awaiting recovery.
    • Continuously retry recovery, even if no suitable node was found.
    This improves recovery for services in restricted HA groups, since a quorate and working partition can exist without any suitable new node for a specific service.
    For example, if HA is used to ensure that a service using a local resource, like a VM with local storage, is restarted and brought up as long as its node is running.
    • Allow manually disabling HA services that are currently in the recovery state, giving admins more control in those situations.
  • Backup and Restore
    • Backups of QEMU guests now support encryption using a master key.
    • It is now possible to back up VM templates with SATA and IDE disks.
    • The maxfiles parameter has been deprecated in favor of the more flexible prune-options.
    • vzdump now defaults to keeping all backups, instead of keeping only the latest one.
    • Caching during live restore was reworked, significantly reducing both the total restore time and the time until the guest is fully booted.
    • Support file-restore for VMs using ZFS or LVM for one, or more, storages in the guest OS.
  • Network:
    • Default to the modern ifupdown2 for new installations using the Proxmox VE official ISO. The legacy ifupdown is still supported in Proxmox VE 7, but may be deprecated in a future major release.
  • Time Synchronization:
    • Due to the design limitations of systemd-timesync, which make it problematic for server use, new installations will install chrony as the default NTP daemon.
    If you upgrade from a system using systemd-timesyncd, it's recommended that you manually install either chrony, ntp or openntpd.
  • Ceph Server
    • Support for Ceph 16.2 Pacific
    • Ceph monitors with multiple networks can now be created using the CLI, provided you have multiple public_networks defined.
    Note that multiple public_networks are usually not needed, but in certain deployments, you might need to have monitors in different network segments.
    • Improved support for IPv6 and mixed setups, when creating a Ceph monitor.
    • Beginning with Ceph 16.2 Pacific, the balancer module is enabled by default for new clusters, leading to better distribution of placement groups among the OSDs.
    • Newly created Bluestore OSDs will benefit from the newly enabled sharding configuration for rocksdb, which should lead to better caching of frequently read metadata and less space needed during compaction.
  • Storage
    • Support for Btrfs as technology preview
      • Add an existing Btrfs file system as storage to Proxmox VE, using it for virtual machines and containers, as a backup target, or to store and serve ISO and container appliance images.
    • The outdated, deprecated, internal DRBD Storage plugin has been removed. A derived version targeting newer DRBD is maintained by Linbit[footnotes 7.0 1].
    • More use of content-type checks instead of checking a hard-coded storage-type list in various places.
    • Support downloading ISO and container appliance images directly from a URL to a storage, including optional checksum verification.
  • Disk Management
    • Wiping disks is now possible from the GUI, enabling you to clear disks which were previously in use and create new storages on them. Note that wiping a disk is a destructive operation: any data on the disk will be destroyed permanently.
  • pve-zsync
    • Separately configurable number of snapshots on source and destination, allowing you to keep a longer history on the destination, without the requirement to have the storage space available on the source.
  • Firewall
    • The sysctl settings needed by pve-firewall are now set on every update to prevent disadvantageous interactions during other operations (for example package installations).
  • Certificate management
    • The ACME standalone plugin has improved support for dual-stacked (IPv4 and IPv6) environments and no longer relies on the configured addresses to determine its listening interface.
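As a concrete illustration of the io_uring override described above (a sketch only: VM ID 100, drive scsi0 and the volume/size values are placeholders, and the full existing option string must be copied from the qm config output):

```shell
# Inspect the drive's current option string
qm config 100 | grep '^scsi0'

# Re-set the drive with its existing options plus the previous aio engine
# (volume name and size below are example values taken from the output above)
qm set 100 --scsi0 local-lvm:vm-100-disk-0,size=32G,aio=native
```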

Breaking Changes

  • Pool permissions
The old permission Pool.Allocate now only allows users to edit pools, not to see them. Therefore, Pool.Audit must be added to existing custom roles with the old Pool.Allocate to preserve the same behavior. All built-in roles are updated automatically.
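For example, a custom role that previously carried only Pool.Allocate could be extended with pveum; a hedged sketch (the role name PoolAdmin is hypothetical, and the exact flags, e.g. --append, should be checked with pveum help role modify on your version):

```shell
# Show existing roles and their privileges
pveum role list

# Append Pool.Audit to a custom role so its users can still see pools
pveum role modify PoolAdmin --privs Pool.Audit --append 1
```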
  • VZDump
    • Hookscript: The TARFILE environment variable was deprecated in Proxmox VE 6, in favor of TARGET. In Proxmox VE 7, it has been removed entirely and thus, it is not exported to the hookscript anymore.
    • The size parameter of vzdump has been deprecated, and setting it is now an error.
  • API deprecations, moves and removals
    • The upgrade parameter of the /nodes/{node}/(spiceshell|vncshell|termproxy) API method has been replaced by providing upgrade as cmd parameter.
    • The /nodes/{node}/cpu API method has been moved to /nodes/{node}/capabilities/qemu/cpu
    • The /nodes/{node}/ceph/disks API method has been replaced by /nodes/{node}/disks/list
    • The /nodes/{node}/ceph/flags API method has been moved to /cluster/ceph/flags
    • The db_size and wal_size parameters of the /nodes/{node}/ceph/osd API method have been renamed to db_dev_size and wal_dev_size respectively.
    • The /nodes/<node>/scan/usb API method has been moved to /nodes/<node>/hardware/usb
  • CIFS credentials have been stored in the namespaced /etc/pve/priv/storage/<storage>.pw instead of /etc/pve/<storage>.cred since Proxmox VE 6.2 - existing credentials will get moved during the upgrade allowing you to drop fallback code.
  • The external storage plugin mechanism had an ABI-version bump that reset the ABI-age, marking an incompatible breaking change; external plugins must adapt before they can be loaded again.
  • qm|pct status <VMID> --verbose, and the respective status API call, only include the template line if the guest is a template, instead of outputting template: for guests which are not templates.
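Scripts calling the moved API endpoints need their paths updated; a sketch using pvesh (the node name pve1 is a placeholder, and these calls require a PVE 7 node):

```shell
# CPU capabilities: /nodes/{node}/cpu moved to /nodes/{node}/capabilities/qemu/cpu
pvesh get /nodes/pve1/capabilities/qemu/cpu

# USB scan: /nodes/{node}/scan/usb moved to /nodes/{node}/hardware/usb
pvesh get /nodes/pve1/hardware/usb

# Disk list: /nodes/{node}/ceph/disks replaced by /nodes/{node}/disks/list
pvesh get /nodes/pve1/disks/list
```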

Known Issues

  • Network: Due to the updated systemd version, and for most upgrades, the newer kernel version (5.4 to 5.11), some network interfaces might change upon reboot:
    • Some may change their name. For example, due to newly supported functions, a change from enp33s0f0 to enp33s0f0np0 could occur.
      We observed such changes with high-speed Mellanox models.
    • Bridge MAC address selection has changed in Debian Bullseye - it is now generated based on the interface name and the machine-id (5) of the system.
    Systems installed using the Proxmox VE 4.0 to 5.4 ISO may have a non-unique machine-id. These systems will have their machine-id re-generated automatically on upgrade, to avoid a potentially duplicated bridge MAC.
If you do the upgrade remotely, make sure you have a backup method of connecting to the host (for example, IPMI/iKVM, tiny-pilot, another network accessible by a cluster node, or physical access), in case the network used for SSH access becomes unreachable, due to the network failing to come up after a reboot.
  • Container:
    • cgroupv2 support by the container’s OS is needed to run in a pure cgroupv2 environment. Containers running systemd version 231 or newer support cgroupv2 [1], as do containers that do not use systemd as init system in the first place (e.g., Alpine Linux or Devuan).
    CentOS 7 and Ubuntu 16.10 are two prominent examples of Linux distribution releases whose systemd version is too old to run in a cgroupv2 environment; for details and possible fixes, see:

https://pve.proxmox.com/pve-docs/chapter-pct.html#pct_cgroup_compat
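To check whether a given container is affected, the systemd version inside the container and the host's cgroup hierarchy can be inspected; a minimal sketch:

```shell
# Inside the container: systemd 231 or newer is needed for cgroupv2
systemctl --version | head -n1

# On the Proxmox VE 7 host: 'cgroup2fs' indicates a pure (unified) cgroupv2 hierarchy
stat -fc %T /sys/fs/cgroup/
```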

Proxmox VE 7.0 beta1

Released 24. June 2021: See Downloads

  • Based on Debian Bullseye (11)
  • Ceph Pacific 16.2 as new default
  • Ceph Octopus 15.2 continued support
  • Kernel 5.11 default
  • LXC 4.0
  • QEMU 6.0
  • ZFS 2.0.4
  • Installer:
    • Rework the installer environment to use switch_root instead of chroot, when transitioning from initrd to the actual installer.
      This improves module and firmware loading, and slightly reduces memory usage during installation.
    • Automatically detect HiDPI screens, and increase console font and GUI scaling accordingly. This improves UX for workstations with Proxmox VE (for example, for passthrough).
    • Improve ISO detection:
      • Support ISOs backed by devices using USB Attached SCSI (UAS), which modern USB3 flash drives often do.
      • Linearly increase the delay of subsequent scans for a device with an ISO image, bringing the total check time from 20s to 45s. This allows for the detection of very slow devices, while continuing faster in general.
    • Use zstd compression for the initrd image and the squashfs images.
    • Update to busybox 1.33.1 as the core-utils provider.
  • Enhancements in the web interface (GUI)
    • Notes panels for Guests and Nodes can now interpret Markdown and render it as HTML.
    • On manually triggered backups, you can now enable pruning with the backup-retention parameters of the target storage, if configured.
    • The storage overview now uses SI units (base 10) to be consistent with the units used in the graphs.
    • Support for security keys (like YubiKey) as SSH keys, when creating containers or preparing cloud-init images.
    • Improved rendering for IOMMU-groups when adding passthrough PCI devices to QEMU guests.
    • Improved translations, among others:
      • Arabic
      • French
      • German
      • Polish
  • Virtual Machines (KVM/QEMU):
    • QEMU 6.0 has support for io_uring as an asynchronous I/O engine for virtual drives - this is now the default for newly started or migrated guests.
    The new default can be overridden in the guest config per drive via qm set VMID --DRIVE EXISTING-DRIVE-OPTS,aio=native.
    • EFI disks stored on Ceph now use the writeback caching-mode, improving boot times in case of slower or highly-loaded Ceph storages.
    • Unreferenced VM disks (not present in the configuration) are not destroyed automatically any more:
      • This was made opt-in in the GUI in Proxmox VE 6.4 and is now also opt-in in the API and with CLI tools.
      • Furthermore, if this clean-up option is enabled, only storages with content-types of VM or CT disk images, or rootdir will be scanned for unused disk-volumes.
    With this new default value, data loss is also prevented by default. This is especially beneficial in cases of dangerous and unsupported configurations, for example, where one backing storage is added twice to a Proxmox VE cluster with an overlapping set of content-types.
    • VM snapshot states are now always removed when a VM gets destroyed.
    • Improved logging during live restore.
  • Container
    • Support for containers on custom storages.
    • Clone: Clear the cloned container's `/etc/machine-id` when systemd is in use or that file exists. This ID must be unique, in order to prevent issues such as MAC address duplication on Linux bridges.
  • Migration
    • QEMU guests: The migration protocol for sending the Spice ticket changed in Proxmox VE 6.1. The code for backwards compatibility has now been dropped, prohibiting the migration of VMs from Proxmox VE 7.0 to hypervisors running Proxmox VE 6.1 and older.
    Always upgrade to the latest Proxmox VE 6.4, before starting the upgrade to Proxmox VE 7.
    • Containers: The force parameter to pct migrate, which enabled the migration of containers with bind mounts and device mounts, has been removed. Its functionality has been replaced by marking the respective mount-points as shared.
  • Backup and Restore
    • Backups of QEMU guests now support encryption using a master key.
    • It is now possible to back up VM templates with SATA and IDE disks.
    • The maxfiles parameter has been deprecated in favor of the more flexible prune-options.
    • vzdump now defaults to keeping all backups, instead of keeping only the latest one.
  • Network:
    • Default to the modern ifupdown2 for new installations using the Proxmox VE official ISO. The legacy ifupdown is still supported in Proxmox VE 7, but may be dropped in a future major release.
  • Time Synchronization:
    • Due to the design limitations of systemd-timesync, which make it problematic for server use, new installations will install chrony as the default NTP daemon.
    If you upgrade from a system using systemd-timesyncd, it's recommended that you manually install either chrony, ntp or openntpd.
  • Ceph Server
    • Support for Ceph 16.2 Pacific
    • Ceph monitors with multiple networks can now be created using the CLI, provided you have multiple public_networks defined.
    Note that multiple public_networks are usually not needed, but in certain deployments, you might need to have monitors in different network segments.
    • Improved support for IPv6 and mixed setups, when creating a Ceph monitor
    • Beginning with Ceph 16.2 Pacific, the balancer module is enabled by default for new clusters, leading to better distribution of placement groups among the OSDs
    • Newly created Bluestore OSDs will benefit from the newly enabled sharding configuration for rocksdb, which should lead to better caching of frequently read metadata and less space needed during compaction
  • Storage
    • The outdated, deprecated, internal DRBD Storage plugin has been removed. A derived version targeting newer DRBD is maintained by Linbit[footnotes 7.0 1].
  • Disk Management
    • Wiping disks is now possible from the GUI, enabling you to clear disks which were previously in use and create new storages on them. Note that wiping a disk is a destructive operation: any data on the disk will be destroyed.
  • pve-zsync
    • Separately configurable number of snapshots on source and destination, allowing you to keep a longer history on the destination, without the requirement to have the storage space available on the source.
  • Firewall
    • The sysctl settings needed by pve-firewall are now set on every update to prevent disadvantageous interactions during other operations (for example package installations)
  • Certificate management
    • The ACME Standalone plugin has improved support for dual-stacked (IPv4 and IPv6) environments and no longer relies on the configured addresses to determine its listening interface

Breaking Changes

  • Pool permissions
Added new permission Pool.Audit to allow users to see pools, without permitting them to change the pool. The old permission Pool.Allocate now only allows users to edit pools, not to see them. Therefore, Pool.Audit must be added to custom roles with the old Pool.Allocate to preserve the same behavior. All built-in roles are updated automatically.
  • VZDump
    • Hookscript: The TARFILE environment variable was deprecated in Proxmox VE 6, in favor of TARGET. In Proxmox VE 7, it has been removed entirely and thus, it is not exported to the hookscript anymore.
    • The size parameter of vzdump has been deprecated, and setting it is now an error.
  • API deprecations, moves and removals
    • The upgrade parameter of the /nodes/{node}/(spiceshell|vncshell|termproxy) API method has been replaced by providing upgrade as cmd parameter.
    • The /nodes/{node}/cpu API method has been moved to /nodes/{node}/capabilities/qemu/cpu
    • The /nodes/{node}/ceph/disks API method has been replaced by /nodes/{node}/disks/list
    • The /nodes/{node}/ceph/flags API method has been moved to /cluster/ceph/flags
    • The db_size and wal_size parameters of the /nodes/{node}/ceph/osd API method have been renamed to db_dev_size and wal_dev_size respectively.
    • The /nodes/<node>/scan/usb API method has been moved to /nodes/<node>/hardware/usb
  • CIFS credentials have been stored in the namespaced /etc/pve/priv/storage/<storage>.pw instead of /etc/pve/<storage>.cred since Proxmox VE 6.2 - existing credentials will get moved during the upgrade allowing you to drop fallback code.
  • qm|pct status <VMID> --verbose, and the respective status API call, only include the template line if the guest is a template, instead of outputting template: for guests which are not templates.

Note that there is no API-stability guarantee during the beta.

Known Issues

  • Network: Due to the updated systemd version, and for most upgrades, the newer kernel version (5.4 to 5.11), some network interfaces might change upon reboot:
    • Some may change their name. For example, due to newly supported functions, a change from enp33s0f0 to enp33s0f0np0 could occur.
      We observed such changes with high-speed Mellanox models.
    • Bridge MAC address selection has changed in Debian Bullseye - it is now generated based on the interface name and the machine-id (5) of the system.
    Systems installed using the Proxmox VE 4.0 to 5.4 ISO may have a non-unique machine-id. These systems will have their machine-id re-generated automatically on upgrade, to avoid a potentially duplicated bridge MAC.
If you do the upgrade remotely, make sure you have a backup method of connecting to the host (for example, IPMI/iKVM, tiny-pilot, another network accessible by a cluster node, or physical access), in case the network used for SSH access becomes unreachable, due to the network failing to come up after a reboot.
  • Misc. bug fixes and improvements

Notes

Proxmox VE 6.4

Released 28. April 2021

  • Based on Debian Buster (10.9)
  • Ceph Octopus 15.2.11 and Ceph Nautilus 14.2.20
  • Kernel 5.4 default
  • Kernel 5.11 opt-in
  • LXC 4.0
  • QEMU 5.2
  • ZFS 2.0.4 - new major version
  • Virtual Machines (KVM/QEMU):
    • Support pinning a VM to a specific QEMU machine version.
    • Automatically pin VMs with Windows as OS type to the current QEMU machine on VM creation.
    This improves stability and guarantees that the hardware layout can stay the same even with newer QEMU versions.
    • Address issues with hanging QMP commands, which caused VMs to freeze on disk resize and in other nondeterministic edge cases.
    Note that some QMP timeout log messages are still being investigated; they are harmless and only of an informative nature.
    • cloud-init: re-add Stateless Address Autoconfiguration (SLAAC) option to IPv6 configuration.
    • Improve output in task log for mirroring drives and VM live-migration.
  • Container
    • Improved cgroup v2 (control group) handling.
    • Support and provide appliance templates for Alpine Linux 3.13, Devuan 3, Fedora 34, Ubuntu 21.04.
  • Backup and Restore
    • Implement unified single-file restore for virtual machine (VM) and container (CT) backup archives located on a Proxmox Backup Server.
    The file-restore is available in the GUI and in a new command line tool proxmox-file-restore.
    • Live-Restore of VM backup archives located on a Proxmox Backup Server.
    No more watching the task log, waiting for a restore to finish; VMs can now be brought up while the restore runs in the background.
    • Consistent handling of excludes for container backups across the different backup modes and storage types.
    • Container restores now default to the privilege setting from the backup archive.
  • Ceph Server
    • Improve integration for placement group (PG) auto-scaler status and configuration.
      Allow configuration of the CRUSH-rule, Target Size and Target Ratio settings, and automatically calculating the optimal numbers of PGs based on this.
  • Storage
    • Support editing of backup notes on any CephFS, CIFS or NFS storage.
    • Support configuring a namespace for accessing a Ceph pool.
    • Improve ZFS pool handling by checking separately whether a pool is imported and whether it is mounted.
    This separation helps in situations where a pool was imported but not mounted, and executing another import command failed.
  • Disk Management
    • Return partitions and display them in tree format.
    • Improve detection of disk and partition usage.
  • Enhancements in the web interface (GUI)
    • Show current usage of host memory and CPU resources by each guest in a node's search-view.
    • Use binary (1 KiB equals 1024 B instead of 1 KB equals 1000 B) as base in the node and guest memory usage graphs, providing consistency with the units used in the current usage gauge.
    • Make columns in the firewall rule view more responsive and flexible by default.
    • Improve Ceph pool view, show auto-scaler related columns.
    • Support editing existing Ceph pools, adapting the CRUSH-rule, Target Size and Target Ratio, among other things.
  • External metric servers:
    • Support the InfluxDB 1.8 and 2.0 HTTP(S) API.
    • Allow use of InfluxDB instances placed behind a reverse-proxy.
  • Proxmox VE API Proxy Daemon (pveproxy)
    • Make listening IP configurable (in /etc/default/pveproxy). This can help to limit exposure to the outside (e.g. by only binding to an internal IP).
    • pveproxy now listens on both IPv4 and IPv6 by default
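As a sketch of the new option (the address below is a placeholder for your own internal management IP):

```shell
# /etc/default/pveproxy
# Bind the API/GUI proxy to a single internal address instead of
# the wildcard (10.0.0.5 is a placeholder):
LISTEN_IP="10.0.0.5"
```

The change takes effect after restarting the daemon with `systemctl restart pveproxy`.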
  • Installation ISO:
    • Installation on ZFS:
      • If booted with legacy BIOS (non-UEFI), the kernel images are now also copied to the second VFAT partition (ESP), allowing the system to boot from there with GRUB and making it possible to enable all ZFS features on such systems.
      • Set up the boot partition and bootloader on all selected disks, instead of only on the first mirror vdev, improving the experience with hardware where the boot device is not easily selectable.
    • The installer environment attempts an NTP time synchronization before actually starting the installation, avoiding telemetry and cluster issues if the RTC had a large time drift.
  • pve-zsync
    • Improved snapshot handling allowing for multiple sync intervals for a source and destination pair.
    • Better detection of aborted syncs, which previously caused pve-zsync to stop the replication.
  • Firewall
    • Fixes in the API schema to prevent storing rules with a big IP-address list, which would get rejected by iptables-restore due to its size limitations. We recommend creating and using IPSets for that use case.
    • Improvements to the command-line parameter handling.

Known Issues

  • Please avoid using zpool upgrade on the "rpool" (root pool) itself, when upgrading to ZFS 2.0 on a system booted by GRUB in legacy mode, as that will break pool import by GRUB.
    See the documentation for determining the bootloader used, if you're unsure.
    Setups installed with the Proxmox VE 6.4 ISO are not affected, as the installer always sets up an easier-to-handle, VFAT-formatted ESP to boot from.
    See the ZFS: Switch Legacy-Boot to Proxmox Boot Tool article about how to switch over to a safer boot variant for legacy GRUB-booted setups with ZFS as root filesystem.
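One way to check which boot setup is in use on a PVE 6.4 system (a sketch; on legacy BIOS systems the EFI variables directory is absent):

```shell
# Shows the ESPs managed by proxmox-boot-tool and whether they are
# set up for GRUB or systemd-boot:
proxmox-boot-tool status

# Absence of EFI variables indicates a legacy BIOS (GRUB) boot:
ls /sys/firmware/efi 2>/dev/null || echo "booted in legacy BIOS mode"
```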
  • New default bind address for pveproxy and spiceproxy, unifying the default behavior with Proxmox Backup Server
    • With making the LISTEN_IP configurable, the daemon now binds to both wildcard addresses (IPv4 0.0.0.0:8006 and IPv6 [::]:8006) by default.
    Should you wish to prevent it from listening on IPv6, simply configure the IPv4 wildcard as LISTEN_IP in /etc/default/pveproxy:
    LISTEN_IP="0.0.0.0"
    • Additionally, the logged IP address format changed for IPv4 in pveproxy's access log (/var/log/pveproxy/access.log). They are now logged as IPv4-mapped IPv6 addresses instead of:
    192.168.16.68 - root@pam [10/04/2021:12:35:11 +0200] "GET /api2/json/cluster/tasks HTTP/1.1" 200 854
    the line now looks like:
    ::ffff:192.168.16.68 - root@pam [10/04/2021:12:35:11 +0200] "GET /api2/json/cluster/tasks HTTP/1.1" 200 854
    If you want to restore the old logging format, also set LISTEN_IP="0.0.0.0"
  • Resolving the Ceph `insecure global_id reclaim` Health Warning
    With Ceph Octopus 15.2.11 and Ceph Nautilus 14.2.20 we released an update to fix a security issue (CVE-2021-20288) where Ceph was not ensuring that reconnecting/renewing clients were presenting an existing ticket when reclaiming their global_id value.
    Updating from an earlier version will result in the above health warning.
    See the forum post here for more details and instructions to address this warning.

Proxmox VE 6.3

Released 26. November 2020

  • Based on Debian Buster (10.6)
  • Ceph Octopus 15.2.6 (first stable release) and Ceph Nautilus 14.2.15
  • Kernel 5.4 LTS
  • LXC 4.0
  • QEMU 5.1
  • ZFS 0.8.5
  • Proxmox Backup Server Integration
    • Stable Proxmox Backup Server integration: The stable version 1.0 of Proxmox Backup Server is now integrated and enterprise support is available from the Proxmox support team.
    • Data encrypted on client-side before backing up to Proxmox Backup Server.
  • Ceph
    • Stable integration of Ceph Octopus.
    • Add selector to choose which supported Ceph version to install in the GUI configuration wizard.
    • Recovery progress is displayed in the Ceph status panel.
    • Show and allow setting of Placement Group (PG) auto-scaling mode of Ceph pools.
    • Set device class when creating OSDs, especially if the auto-detection yields the wrong class.
  • Enhancements in the GUI
    • Improved VM boot device order editor:
      • It is now possible to select multiple devices per type (disk, network) for booting.
      • Booting from passed through PCI devices (e.g., NVMe drives) is supported.
      • Improved user experience with a drag-and-drop UI.
    • GUI for editing external metric servers: You can now connect your Proxmox VE nodes to InfluxDB or Graphite using the GUI, instead of having to manually edit /etc/pve/status.cfg
    • Optional TLS certificate verification for LDAP and AD authentication realms.
    • Improve high-DPI display and browser zoom compatibility.
    • Split up storage content view by type.
    • Backup/Restore:
      • Overview of all guests that are not included in any backup at all.
      • Detailed view per backup job, showing all covered guests and which of their disks are backed up.
    • Display optional comments for all storage types.
      • Proxmox Backup Server additionally displays the verification state of all backup snapshots.
    • Better usability for preventing accidental snapshot rollback
      • The GUI now makes it difficult to accidentally confuse snapshot removal with snapshot rollback.
  • Storage
    • Add highly flexible backup retention with "keep" settings: The new backup retention settings, which augment and replace the "Max Backups" setting, enable you to decide how many backups to keep per timeframe and implement enhanced retention policies per storage or backup job.
    • Better handling of container volume activation on ZFS.
    • Increased timeout for connecting to CIFS and NFS storage over slow links.
    • Improve querying SSD wear leveling.
    • Small improvements to the configuration handling of the LIO iSCSI provider for ZFS over iSCSI storage.
    • ZFS disk management: create a systemd service to unconditionally import a pool created using the GUI.
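The new retention settings can be passed directly to a backup run or stored per storage; a minimal sketch (VM ID 100 and the 'local' storage are placeholders):

```shell
# Keep the 3 most recent, 7 daily and 4 weekly backups of VM 100:
vzdump 100 --storage local --prune-backups keep-last=3,keep-daily=7,keep-weekly=4
```

The same keep-* options can also be set in the storage definition in /etc/pve/storage.cfg, where they apply to all backup jobs targeting that storage.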
  • Container
    • Add support for current Devuan containers.
    • Add support for Kali Linux containers.
    • Update list of supported releases for Ubuntu, Fedora and CentOS.
    • Support setting a custom timezone per container.
    • Improve startup monitoring.
    • Add a debug parameter to pct start, to output the debug log of the container start.
    • Support systems with up to 8192 cores.
    • Optionally ignore mount points while running pct fstrim.
    • Fix aborting of backups on Ceph backed containers with a large IO load, by issuing fsfreeze before snapshotting.
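The two new pct options mentioned above, as a sketch (container ID 100 is a placeholder):

```shell
# Start the container and print the debug log of the LXC start:
pct start 100 --debug

# Trim the container's filesystems to reclaim unused space:
pct fstrim 100
```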
  • QEMU
    • Fast, incremental backups to Proxmox Backup Server using dirty-bitmaps.
    • Handle guest shutdowns (power down from within a VM) during backups.
    • Improved boot order selection allowing booting from multiple virtual disks and from passed through PCI devices (e.g., NVMe drives).
    • Allow pass through of certain older Intel iGPU models with QEMU's 'legacy-igd' mode.
    • Support more CPU options, for example SSE4.2
    • Better support for hugepages across multiple NUMA nodes.
  • General improvements for virtual guests
    • Improved handling of replicated guests when migrating.
  • Clustering
    • Harden locking in the clustered configuration filesystem (pmxcfs), avoiding a possible freeze when joining, messaging, or leaving a closed process group.
  • User and permission management
    • Improved support for using client certificates/keys when connecting to AD/LDAP realms.
    • Optional support for case-insensitive logins with AD/LDAP realms.
    • Fine-grained permissions for SDN and CloudInit.
    • Better handling of clock jumps for rotating keys.
  • Firewall
    • Improved API for matching ICMP-types.
  • Documentation
    • Clarify qdevice requirements.
    • Add section about ZFS pool design choices.
    • Add documentation on requirement for encrypted ZFS datasets as storage.
    • Add manpage for cpu-models.conf(5).
  • Installer
    • Reboot automatically upon successful installation.
    • Drop ext3 as supported file system.
    • Start a shell on vt3 for debugging during installation.
  • Experimental: support for Software-Defined Networking (SDN)
    • Support for IPAM with a plugin framework.
    • Add support for internal IPAM management and PowerDNS.
  • Countless bug fixes and smaller improvements

Known Issues

  • VM guests with multiple boot disk setups (e.g., mdadm, LVM, etc...) need to be configured with the new boot order config so that all required disks are marked "bootable", with the one containing the boot loader placed first - otherwise, the guest may fail to boot after being restarted with the new QEMU 5.1 version.
  • The "exclude-path" option for vzdump now supports non-anchored paths for all backup modes. Non-anchored paths do not start with a '/' and will match in any subdirectory. Previously, such paths only had an effect for "suspend" mode backups, so please ensure that you don't have any unwanted non-anchored paths configured.
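A sketch of the difference (paths are placeholders): an anchored pattern starts with '/' and matches from the guest's root, while a non-anchored pattern matches in any subdirectory.

```shell
# Anchored: excludes the contents of /tmp at the root only.
# Non-anchored: excludes any directory named 'cache', at any depth.
vzdump 100 --exclude-path '/tmp/?*' --exclude-path 'cache/?*'
```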

Proxmox VE 6.2

Released 12. May 2020

  • Based on Debian Buster (10.4)
  • Ceph Nautilus (14.2.9)
  • Kernel 5.4 LTS
  • LXC 4.0
  • Qemu 5.0
  • ZFS 0.8.3
  • Web interface
    • Allow deploying DNS-based Let's Encrypt (ACME) certificates
    • Allow admins to see the full granted permission/privilege tree for a user
    • Add experimental Software Defined Network (SDN) GUI and documentation, based on modern mainline Linux Kernel network technology
    • Allow collapsing the Notes sub-panel in the guest summary panels, either permanently or automatically when empty, through the user settings.
    • Add 'Creation Date' column for storage content, allowing one, for example, to comfortably find backups from a certain date
    • Add device node creation to Container feature selection
    • Full support for up to 8 corosync links
    • Automatic guest console reload on VM/CT startup
    • Add Arabic translation
    • Allow seamlessly changing the language, without logging out and logging in again
  • Container
    • LXC 4.0.2 and lxcfs 4.0.3 with initial full cgroupv2 support by Proxmox VE
    • Improve support for modern systemd based Containers
    • Improve default settings to support hundreds to thousands* of parallel-running Containers per node (* thousands only with simple distributions like Alpine Linux)
    • Allow creating templates on directory-based storage
  • Backup/Restore
    • Support for the highly efficient and fast Zstandard (zstd) compression
  • Improvements to the HA stack
    • Allow destroying virtual guests under HA control when purge is set
  • QEMU
    • Fixed EFI disk behavior on block-based storage (see known issues)
    • VirtIO Random Number Generator (RNG) support for VMs
    • Custom CPU types with user-defined flags and QEMU/KVM settings
    • Improved machine type versioning and compatibility checks
    • Various stability fixes, especially for backups and IO-Threads
    • Migration:
      • Enable support for Live-Migration with replicated disks
      • Allow specifying target storage for offline mode
      • Allow specifying multiple source-target storage pairs (for now, CLI only)
      • Improve behavior with unused disks
      • Secure live-migration with local disks
  • General improvements for virtual guests
    • Handle ZFS volumes with non-standard mount point correctly
  • Cluster
    • Improve lock contention during high frequency config modifications
    • Add versioning for cluster join
    • Enable full support for up to 8 corosync links
  • Ceph
    • Easier uninstall process, with better information about which actions need to be taken.
  • Storage
    • Storage migration: introduce allow-rename option and return new volume ID if a volume with the same ID is already allocated on the target storage
    • Support the 'snippet' content type for GlusterFS storage
    • Bandwidth limitations are now also available for SAMBA/CIFS based storage
    • Handle ZFS volumes with non-standard mount point correctly
    • Improve metadata calculation when creating a new LVM-Thin pool
    • Improve parsing of NVMe wearout metrics
  • User and permission management
    • LDAP Sync users and groups automatically into Proxmox VE
    • LDAP mode LDAP+STARTTLS
    • Allow adding and managing authentication realms through the 'pveum' CLI tool
    • Full support and integration for API Tokens
      • Shared or separated privileges
      • Token lifetime
      • Revoke it anytime without impacting your user login
  • Documentation
    • Update hypervisor migration guide in the wiki
    • Document that reducing the number of Ceph placement groups is possible
    • Improve layout, use a left-side based table-of-content
  • Various improvements
    • Firewall: make config parsing more robust and improve ICMP-type filtering

Known Issues with OVMF/UEFI disks of VMs

A VM's EFI disk on a storage which doesn't allow for small (128 KB) images (for example: Ceph, ZFS, LVM(thin)) was not correctly mapped to the VM. While this is fixed now, such existing setups may need manual intervention:

  • You do not have to do anything if your EFI disk uses qcow2 or "raw" format on a file-based storage.
  • Before the upgrade, make sure that on your ESP, the EFI boot binary exists at \EFI\BOOT\BOOTX64.EFI (the default EFI Boot fallback).
    • Windows and some Linux VMs using systemd-boot should do that automatically
  • If you already upgraded and it does not boot, see OVMF/UEFI Boot Entries on how to recreate the boot entry via the OVMF Boot Menu

Proxmox VE 6.1

Released 4. December 2019: See Downloads, updated ISO release 2 with installer bug fixes and updated packages on 6. March 2020

  • Based on Debian Buster (10.2)
  • Ceph Nautilus (14.2.4.1)
  • Corosync 3.0
  • Kernel 5.3
  • LXC 3.2
  • Qemu 4.1.1
  • ZFS 0.8.2
  • Web interface
    • More datacenter configuration options are now editable via GUI:
      • U2F two-factor settings.
      • Cluster-wide bandwidth limit for the following traffic types: migration, backup-restore, clone, disk-move.
    • Improvements to two-factor authentication allowing to use a hardware-based TOTP key.
    • Mobile GUI: implemented login for TOTP-based two-factor authentication enabled user accounts.
    • More icons have been converted from bitmap to vectorized icons from Font Awesome.
    • The scale mode of noVNC can be changed in "My Settings".
    • Addition of Hebrew language translations.
    • New "Run now" button for cluster-wide backup jobs.
    • With ifupdown2 installed, it is now possible to change the network configuration and reload it from the GUI without a reboot.
  • Container
    • Pending changes for containers. It is possible to make changes to a running container. They will be applied after the next reboot of the container.
    • Reboot of a running container via GUI, API, and command-line interface.
    • Hot plugging of mount points by using the new mount API available with Linux Kernel 5.3.
    • Support of recent releases of GNU/Linux distributions like Fedora 31, CentOS 8, and Ubuntu 19.10.
  • SPICE
    • Audio devices can now be added via GUI (no need to edit the configuration file).
    • Folders can now be shared between SPICE client and VM (still experimental).
    • Video streaming can be enabled. This can help to improve the performance of fast-changing display areas like videos.
    • The SPICE USB device now supports USB3 (Qemu >= 4.1).
  • Backup/Restore
    • VMs with IOThreads enabled can now be backed up.
    • Run scheduled backup jobs manually from the Datacenter in the GUI.
  • Improvements to the HA stack
    • New 'migrate' shutdown policy. If enabled, running services will be migrated to another node on shutdown. Once the node comes back, and if the services have not been manually moved to another node in the meantime, those services will move back.
    • New 'crm-command stop'. Shuts down a VM/container with the specified timeout and performs a hard stop if the timeout is 0. A VM or container stop command will now be redirected to this new crm-command.
  • QEMU
    • Allow domains other than '0000' for PCI(e) passthrough.
    • New 'reboot' API call. This allows applying pending changes without having to wait for the guest to shut down before starting it again.
    • QEMU monitor timeout issue, which prevented successful backups in certain configurations, has been fixed.
    • PCI(e) passthrough supports up to 16 PCI(e) devices
    • Support for QEMU Guest Agents using an ISA serial port (not VirtIO) for communication, allowing use of the QGA with FreeBSD, among others.
  • General improvements for virtual guests
    • Allow 'tags' in the guest configuration. This meta-information can be useful for things like configuration management (no GUI support yet).
    • VM/CT: Purge allows removing the corresponding virtual machine or container from replication or backup jobs on destruction.
  • Cluster stability
    • A number of bugs have been identified and fixed upstream in cooperation with corosync and kronosnet.
    • The initial issues some users have experienced with MTU changes are resolved.
    • pmxcfs has undergone a stability audit using ASAN (AddressSanitizer) and UBSAN (Undefined Behavior Sanitizer), fixing various possible issues for certain edge-cases.
  • Storage
    • Allow configuration of non-default 'mountpoint' properties for ZFS.
    • Allow .img files as alternative to .iso images.
    • Various iSCSI improvements.
    • Overhaul of ZFS on iSCSI with the LIO target provider.
    • Supports all features offered by newer kernels with Ceph and KRBD.
  • Various improvements
    • Firewall: Added support for 'raw' tables, using them to allow users to enable and configure SYN-flood attack protection.
    • Automatic certificate renewal for the self-signed certificate 2 weeks before expiration.
    • Reduced certificate lifetime of newly generated certificates (2 years instead of 10 years). This avoids complaints from some modern browsers about very long certificate lifetimes.
  • Documentation
    • Proofreading of parts of the documentation (style and grammar).
    • Enhanced documentation for the Ceph administration.
  • Countless bug fixes and package updates (for all details see bugtracker and GIT)

Proxmox VE 6.0

Released 16. July 2019: See Downloads

  • Based on Debian Buster 10.0
  • Pre-upgrade checklist tool `pve5to6` - available on Proxmox VE 5.4 and 6.0
    • Running `pve5to6` checks for common pitfalls known to interfere with a clean upgrade process.
  • Corosync 3.0.2 using Kronosnet as transport
    • The default transport method now uses unicast; this can simplify setups where the network had issues with multicast.
    • New Web GUI Network selection widget avoids making typos when choosing the correct link address.
    • Currently, there is no multicast support available (it's on the kronosnet roadmap).
  • LXC 3.1
  • Ceph Nautilus 14.2.x
    • Better performance monitoring for rbd images through `rbd perf image iotop` and `rbd perf image iostat`.
    • OSD creation, based on ceph-volume: integrated support for full disk encryption of OSDs.
    • More robust handling of OSDs (no more mounting and unmounting to identify the OSD).
    • ceph-disk has been removed: After upgrading it is not possible to create new OSDs without upgrading to Ceph Nautilus.
    • Support for PG split and join: The number of placement groups per pool can now be increased and decreased. There is even an optional plugin in ceph-manager to automatically scale the number of PGs.
    • New messenger v2 protocol brings support for encryption on the wire (currently this is still experimental).
    • See https://docs.ceph.com/en/nautilus/releases/nautilus/ for the complete release notes.
  • Improved Ceph administration via GUI
    • A cluster-wide overview for Ceph is now displayed in the 'Datacenter View' too.
    • The activity and state of the placement groups (PGs) is visualized.
    • The version of all Ceph services is now displayed, making detection of outdated services easier.
    • Configuration settings from the config file and database are displayed.
    • You can now select the public and cluster networks in the GUI with a new network selector.
    • Easy encryption for OSDs with a checkbox.
  • ZFS 0.8.1
    • Native encryption for datasets with comfortable key-handling by integrating the encryption directly into the `zfs` utilities. Encryption is as flexible as volume creation and adding redundancy - the convenience gained over dm-crypt is comparable to the difference between mdadm+LVM and ZFS.
    • Allocation-classes for vdevs: you can add a dedicated fast device to a pool which is used for storing often accessed data (metadata, small files).
    • TRIM-support - use `zpool trim` to notify devices about unused sectors.
    • Checkpoints on pool level.
    • See https://github.com/zfsonlinux/zfs/releases/tag/zfs-0.8.0 and https://github.com/zfsonlinux/zfs/releases/tag/zfs-0.8.1 for the complete release notes.
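Two of the headline features above, sketched with placeholder pool and dataset names:

```shell
# Native encryption: create a passphrase-encrypted dataset
# ('rpool/secure' is a placeholder):
zfs create -o encryption=on -o keyformat=passphrase rpool/secure

# TRIM support: notify the devices about unused sectors and
# watch the progress:
zpool trim rpool
zpool status -t rpool
```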
  • Support for ZFS on UEFI and on NVMe devices in the installer
    • You can now install Proxmox VE with its root on ZFS on UEFI booted systems.
    • You can also install ZFS on NVMe devices directly from the installer.
    • By using `systemd-boot` as bootloader all pool-level features can be enabled on the root pool.
  • Qemu 4.0.0
    • Live migration of guests with disks backed by local storage via GUI.
    • Added support for more Hyper-V enlightenments improving Windows performance in a virtual machine under Qemu/KVM.
    • Mitigations for the performance impact of recent Intel CPU vulnerabilities.
    • More VM CPU-flags can be set in the web interface.
    • Newer virtual PCIe port hardware for machine type q35 in version >= 4.0. This fixes some passthrough issues.
    • Support for custom Cloudinit configurations:
      • You can create a custom Cloudinit configuration and store it as snippet on a storage.
      • The `qm cloudinit dump` command can be used to get the current Cloudinit configuration as a starting point for extensions.
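A sketch of the workflow (VM ID and snippet file name are placeholders):

```shell
# Dump the currently generated user-data as a starting point:
qm cloudinit dump 100 user > /var/lib/vz/snippets/user-100.yaml

# After editing, attach the snippet as a custom Cloudinit config:
qm set 100 --cicustom "user=local:snippets/user-100.yaml"
```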
  • Firewall improvements
    • Improved detection of the local network so that all used corosync cluster networks get automatically whitelisted.
    • Improved firewall behavior during cluster filesystem restart, e.g. on package upgrade.
  • Mount options for container images
    • You can now set certain performance and security related mount options for each container mountpoint.
  • Linux Kernel
    • Updated 5.0 Kernel based off the Ubuntu 19.04 "Disco" kernel with ZFS.
    • Intel in-tree NIC drivers are used:
      • Many recent improvements to the kernel networking subsystem introduced incompatibilities with the out of tree drivers provided by Intel, which sometimes lag behind on support for new kernel versions. This can lead to a change of the predictable network interface names for Intel NICs.
  • Automatic cleanup of old kernel images
    • Old kernel images are no longer marked as NeverAutoRemove - preventing problems when /boot is mounted on a small partition.
    • By default the following images are kept installed (all others can be automatically removed with `apt autoremove`):
      • the currently running kernel
      • the version being newly installed on package updates
      • the two latest kernels
      • the latest version of each kernel series (e.g. 4.15, 5.0)
  • Guest status display in the tree view: Additional states for guests (migration, backup, snapshot, locked) are shown directly in the tree overview.
  • Improved ISO detection in the installer: The way how the installer detects the ISO was reworked to include more devices, alleviating problems of detection on certain hardware.
  • Pool level backup: It is now possible to create a backup task for backing up a whole pool. By selecting a pool as backup target instead of an explicit list of guests, new members of the pool are automatically included, and removed guests are automatically excluded from the backup task.
  • New User Settings and Logout menu.
  • Automatic rotation of authentication key every 24h: by limiting the key lifetime to 24h the impact of key leakage or a malicious administrator are reduced.
  • The nodes Syslog view in the GUI was overhauled and is now faster.
  • Sheepdog is no longer maintained, and thus not supported anymore as Storage plugin.
  • `ceph-disk` has been removed in Ceph Nautilus - use `ceph-volume` instead.
  • Improved reference documentation
  • Upgrade documentation: https://pve.proxmox.com/wiki/Upgrade_from_5.x_to_6.0
  • Countless bug fixes and package updates (for all details see bugtracker and GIT)

Proxmox VE 6.0 beta1

Released 04.07.2019: See Downloads

  • Based on Debian Buster 10.0
  • Kernel 5.0.15
  • ZFS on EFI and/or NVMe installation
  • Ceph v14.2.1 Nautilus
  • ZFS 0.8.1
  • QEMU 4.0.0
  • LXC 3.1.0
  • Corosync 3
  • Misc. bug fixes and improvements

Proxmox VE 5.4

Released 11.04.2019: See Downloads

  • Based on Debian Stretch 9.8
  • Kernel 4.15.18
  • QEMU 2.12.1 (QEMU 3.0.1 since May 2019 via apt)
  • LXC 3.1.0
  • ZFS 0.7.13
  • Ceph 12.2.11 (Luminous LTS, stable), packaged by Proxmox
    • Ceph can now be installed via the GUI, thus closing the gap of needing the CLI for this initial step, before configuring a complete (hyperconverged) Ceph setup via PVE's GUI.
  • QDevice support via `pvecm`
    • Primarily for small 2-node clusters: adding a QDevice can help mitigate the downside of not being able to reboot one node without losing quorum (and thus the ability to make any changes in the cluster)
    • Can also help in clusters with a larger even number of nodes by providing a tie-break vote.
    • Integration into pvecm and the PVE stack vastly simplifies adding a QDevice (it was possible manually before as well)
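Setting up a QDevice with the new integration, as a sketch (192.0.2.10 is a placeholder for the external host):

```shell
# On the external qdevice host:
apt install corosync-qnetd

# On every cluster node:
apt install corosync-qdevice

# From one cluster node, register the external vote:
pvecm qdevice setup 192.0.2.10
```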
  • Wake On Lan support via `pvenode`
    • It is now possible to associate a MAC-Address with a PVE-node.
    • This can be used to send a WOL (wake-on-LAN) packet to that node from any other node in the PVE cluster, and thus power it on on demand
    • Provides a comfortable way of saving power by keeping only part of the nodes permanently online, and starting others on demand (e.g. under increased load, or for running guests during upgrades to the PVE nodes).
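A sketch of the new pvenode commands (MAC address and node name are placeholders):

```shell
# On the node that should be woken later, store its NIC's MAC address:
pvenode config set --wakeonlan aa:bb:cc:dd:ee:ff

# From any other node in the cluster, send the magic packet:
pvenode wakeonlan targetnode
```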
  • GUI Container wizard creates unprivileged containers by default
  • HA improvements and added flexibility
    • It is now possible to set a datacenter wide HA policy which can change the way guests are treated upon a Node shutdown or reboot. The choices are:
      • freeze: always freeze services, independent of the shutdown type (reboot, poweroff)
      • failover: never freeze services; this means that a service will get recovered to another node if possible and if the current node does not come back up within the grace period of 1 minute.
      • default: this is the current behavior; freeze on reboot but do not freeze on poweroff
  • Suspend to disk/hibernate support for Qemu/KVM guests
    • QEMU guests can be 'hibernated' (have their RAM contents and internal state saved to permanent storage) and resumed on the next start.
    • This enables users to preserve the running state of QEMU guests across most upgrades to and reboots of the PVE node.
    • Additionally, it can speed up the startup of guests running complex workloads, or workloads which take lots of resources to set up initially but need not run permanently.
  • Support for guest (both Qemu and LXC) hookscripts:
    • Hook-scripts are small executables which can be configured for each guest, and are called at certain steps of the guest's lifetime ('pre-start', 'post-start', 'pre-stop', 'post-stop').
    • This gives Administrators great flexibility in the way they can prepare the environment for the guest (e.g. adding necessary network resources (routes, vlans), firewall-rules, unlocking encrypted files/devices,...) and cleaning them up when the guest is shutdown or stopped.
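A minimal hookscript sketch (file name and messages are illustrative); it receives the VMID and the lifecycle phase as arguments:

```shell
#!/bin/sh
# Store as an executable snippet, e.g. /var/lib/vz/snippets/guest-hook.sh
vmid="$1"
phase="$2"
case "$phase" in
    pre-start) echo "preparing environment for guest $vmid" ;;
    post-stop) echo "cleaning up after guest $vmid" ;;
esac
```

It can then be attached with `qm set 100 --hookscript local:snippets/guest-hook.sh` (or `pct set` for containers).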
  • Improved Qemu Guest creation wizard:
    • Certain often requested options (e.g. Machine-type (q35, pc-i440fx), Firmware (Seabios, UEFI), SCSI controller) can now be selected directly in the VM Creation Wizard, and dependent options get set to sensible values directly.
  • Clusterwide Subscription Check and Display
    • The Datacenter Summary page now gives Administrators a global overview of the subscription status of all nodes in a PVE cluster, helping to verify the support status of the complete cluster at a glance
  • Improved UI in the installer
    • It is now possible to go back to a previous screen in the pve-installer, and adapt choices made there, without the need to restart the complete installation process
    • Before the actual installation is performed an information-page is displayed containing all relevant information
  • U2F Authentication for the WebUI
    • PVE's WebUI can now optionally use two-factor authentication for users.
    • This is generally considered good security practice, and in certain domains and environments it is required.
    • The new U2F authentication and the TOTP second factor authentication can be configured by each user for themselves (before all 2FA solutions needed to be configured by a user with User.Modify permission).
  • Improved reference documentation
  • Countless bug fixes and package updates (for all details see bugtracker and GIT)

Proxmox VE 5.3

Released 4.12.2018: See Downloads, updated ISO image with installer and apt bug fixes and updated packages on 23.1.2019

  • Based on Debian Stretch 9.6
  • Kernel 4.15.18
  • QEMU 2.12.1
  • LXC 3.0.2
  • ZFS 0.7.12
  • Ceph 12.2.8 (Luminous LTS, stable), packaged by Proxmox
  • Installer with ZFS: no swap space is created by default, instead an optional limit of the used space in the advanced options can be defined, thus leaving unpartitioned space at the end for a swap partition.
  • Disk Management on GUI (ZFS, LVM, LVMthin, xfs, ext4)
  • Create CephFS via GUI (MDS)
  • CephFS Storage plugin (supports backups, templates and ISO images)
  • LIO support for ZFS over iSCSI storage plugin
  • ifupdown2 package and partial GUI support
  • Delete unused disk images on the storage view
  • Enable/disable the local cursor in noVNC
  • Enable/disable autoresize in noVNC
  • Edit /etc/hosts via GUI, which is necessary whenever the IP or hostname is changed
  • Editable VGA memory for VMs, enabling higher resolutions (UHD), e.g. for SPICE
  • VM Generation ID: Users can set a 'vmgenid' device, which enables safe snapshots, backup and clone for various software inside the VM, like Microsoft Active Directory. This vmgenid will be added to all newly created VMs automatically. Users have to manually add it to older VMs.
  • qemu-server: add ability to fstrim a guest filesystem after moving a disk (during move_disk or migrate with local disks)
  • Emulating ARM virtual machines (experimental, mostly useful for development purposes)
  • vGPU/MDev and PCI passthrough: GUI for configuring PCI passthrough, which also enables the use of vGPUs (aka mediated devices) like Intel KVMGT (aka GVT-g) or Nvidia's vGPUs.
  • pvesh rewrite to use the new CLI Formatter framework - providing a configurable output format for all API calls (formats include: json, yaml and a pleasant table output with borders). Provides unified support for json output for the PVE CLI Tools.
  • Nesting for Containers (privileged & unprivileged containers): Allows running lxc, lxd or docker inside containers, also supports using AppArmor inside containers
  • Mounting CIFS/NFS inside containers (privileged): Allows using samba or nfs shares directly from within containers
  • Improved reference documentation
  • Countless bug fixes and package updates (for all details see bugtracker and GIT)
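The vmgenid device mentioned above ends up as a single line in the VM's configuration file. A minimal sketch of what a manually added entry for an older VM might look like (the VMID in the path and the UUID are illustrative assumptions, not taken from this page):

```
# Hypothetical excerpt of /etc/pve/qemu-server/100.conf
# (UUID is illustrative; PVE generates one automatically for newly created VMs)
vmgenid: c6f25ec3-1f2a-4f0e-a7d1-3b9f0c5a8d21
```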

Proxmox VE 5.2

Released 16.05.2018: See Downloads

  • Based on Debian Stretch 9.4
  • Kernel 4.15.17
  • QEMU 2.11.1
  • LXC 3.0.0
  • Ceph 12.2.5 (Luminous LTS, stable), packaged by Proxmox
  • ZFS 0.7.8
  • Cloudinit GUI support
  • Cluster create/join nodes via GUI
  • Certificate management including Let's Encrypt GUI
  • SMB/CIFS storage plugin (supports backups, images, templates, ISO images and containers)
  • Display IP for VM (using qemu-guest-agent)
  • LXC: templates and clones, move volume/disk
  • Create and edit new roles via GUI
  • I/O bandwidth limits for restore operations (globally, per storage or per restore job)
  • new and improved xterm.js integration, including reconnect logic (on container reboots or restart migrations)
  • Basic/Advanced GUI
  • ebtables support
  • Improved reference documentation
  • Countless bug fixes and package updates (for all details see bugtracker and GIT)
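As a rough illustration of the new Cloudinit support, a VM configured through the GUI might end up with configuration entries along these lines (a sketch; the storage name, VMID and values are assumptions, not taken from this page):

```
# Hypothetical excerpt of /etc/pve/qemu-server/100.conf with cloud-init enabled
ide2: local-lvm:vm-100-cloudinit,media=cdrom   # generated cloud-init drive
ciuser: admin                                  # default user injected at first boot
ipconfig0: ip=dhcp                             # network configuration for net0
```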

Proxmox VE 5.1

Released 24.10.2017: See Downloads, updated ISO image with installer bug fixes and updated packages on 18.12.2017

  • Based on Debian Stretch 9.2
  • Kernel 4.13.3
  • QEMU 2.9.1
  • LXC: update to 2.1
  • Ceph 12.2.1 (Luminous LTS, stable), packaged by Proxmox
  • ZFS 0.7.2
  • Improved reference documentation with screenshots
  • Countless bug fixes and package updates (for all details see bugtracker and GIT)

Proxmox VE 5.0

Released 04.07.2017: See Downloads, updated ISO image with installer bug fixes and updated packages on 09.08.2017

  • Based on Debian Stretch 9.0
  • Kernel 4.10.15
  • QEMU 2.9
  • LXC: update to 2.0.8
  • New asynchronous Storage Replication feature (needs ZFS, technology preview)
  • New/updated LXC templates (Debian, Ubuntu, CentOS, Fedora, OpenSUSE, Arch Linux, Gentoo and Alpine)
  • Updated/improved noVNC console
  • Ceph v12.1.0 Luminous (technology preview), packaged by Proxmox
  • live migration with local storage
  • GUI improvements
    • USB and Host PCI address visibility
    • improved bulk and filtering options
  • Improved installation ISO
  • Importing virtual machines from foreign hypervisors (see the Qemu/KVM Virtual Machines wiki page)
  • improved reference documentation with screenshots
  • countless bug fixes and package updates (for all details see bugtracker and GIT)

Proxmox VE 5.0 beta2

Released 23.05.2017: See Downloads

  • Based on Debian Stretch
  • Kernel 4.10
  • Update to QEMU 2.9
  • Improved handling of re-used disks in installer
  • Improved handling of network devices in installer
  • Many small improvements and bug fixes

Proxmox VE 5.0 beta1

Released 22.03.2017: See Downloads

  • Based on Debian Stretch
  • Kernel 4.10
  • Many small improvements and bug fixes

Proxmox VE 4.4

Released 13.12.2016: See Downloads

  • update kernel to 4.4.35
  • KVM: update to qemu 2.7.0
  • LXC improvements
    • update to LXC 2.0.6
    • implement new restart migration
    • unprivileged containers (now in the GUI)
    • updated template repository (Debian, Ubuntu, CentOS, Fedora, Arch and Alpine)
  • GUI improvements
    • new ceph dashboard
    • new cluster dashboard
    • improved disk management, SMART status supporting more SSD types
    • improved HA GUI
  • ISO installer supports advanced ZFS settings
  • dedicated network for VM/CT migrations (CLI only)
  • improved reference documentation, e.g. contains now also screenshots
  • DRBD9 (removed from core, now maintained directly by Linbit, due to license change)
  • countless bug fixes and package updates (for all details see bugtracker and GIT)

Proxmox VE 4.3

Released 27.09.2016: See Downloads

  • GUI improvements
    • new search (“ctrl-shift-f”)
    • vertical menus with groups and icons
    • double click to open VM/CT console
    • new status overview for host, VM and containers, signal colors for overloaded CPU, etc.
    • Task log window remembers its size
  • VM wizard now proposes optimal settings depending on the selected operating system (for Linux the default is “virtio scsi disk” and "virtio net")
  • comprehensive reference documentation in HTML, PDF and EPUB format, including a contextual help function in several places - based on AsciiDoc
  • new disk management, including SMART monitoring and SSD wearout level display (for Intel, Sandisk and Samsung SSD).
  • based on Debian Jessie 8.6.0
  • Linux kernel 4.4.19
  • KVM/qemu 2.6.1
  • LXC: update to 2.0.4
  • many new and improved manual/help pages
  • countless bug fixes and package updates (for all details see bugtracker and GIT)

Proxmox VE 4.2

Released 27.04.2016: See Downloads

  • GUI update to Sencha Ext JS 6, including new icon set and dynamic performance charts
  • based on Debian Jessie 8.4.0
  • Linux kernel 4.4.6
  • KVM/qemu 2.5.1
  • installer: LVM thin or ZFS plugin pre-configured
  • LVM thin support
  • Let's Encrypt support
  • DRBD9: drbd 9.0.2 kernel module and drbdmanage 0.95
  • LXC: improved container setup
  • LXC: network rate limits
  • LXC: adding mount-points via GUI
  • LXC: pct can mount rootfs for read/write access of stopped or running containers
  • LXC: backup/snapshot improvements, including proper backup and restore of ACL
  • LXC: add support for Alpine Linux, Ubuntu 16.04
  • HA manager improvements
  • new confirmation dialog to prevent accidental removal of a virtual machine or container
  • updated GUI translation (especially French, Italian and German)
  • many new and improved manual pages
  • countless bug fixes and package updates (for all details see bugtracker and GIT)
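A container mount point added via the new GUI option ends up as a config line roughly like the following (a sketch; the storage name, CTID, path and size are illustrative assumptions):

```
# Hypothetical excerpt of /etc/pve/lxc/101.conf: extra mount point added via GUI
mp0: local-lvm:vm-101-disk-1,mp=/srv/data,size=8G
```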

Proxmox VE 4.1

Released 11.12.2015: See Downloads

  • based on Debian Jessie 8.2.0
  • Linux kernel 4.2.6
  • improved startup/shutdown behavior (systemd)
  • enable NTP by default
  • installer: allow up to 8 disks for ZFS
  • KVM: add qemu agent GUI option
  • KVM: update network boot ROMs
  • Improve HA GUI for users with restricted permissions
  • add Galician language to GUI
  • LXC: add rootfs resize to GUI
  • LXC: add support for Fedora 22 and Debian stretch/sid, Ubuntu 15.10
  • LXC: support unprivileged containers (technology preview)
  • storage: added LVM thin support (technology preview)
  • Support for Turnkey Linux LXC appliances
  • added new pvereport command
  • countless bug fixes and package updates (for all details see bugtracker and GIT)

Proxmox VE 4.0

Released 05.10.2015: See Downloads

  • based on Debian Jessie 8.2.0
  • Linux kernel 4.2
  • QEMU 2.4
  • new high availability manager
  • bash completion support
  • full IPv6 support
  • include LXC (and removed OpenVZ) - see Convert OpenVZ to LXC
  • added DRBD9 with drbdmanage (technology preview)
  • countless bug fixes and package updates (for all details see bugtracker and GIT)

Proxmox VE 4.0 beta2

Released 10.09.2015: See Downloads

  • based on Debian Jessie 8.2.0
  • Linux kernel 4.2
  • QEMU 2.4
  • LXC improvements (see Linux Container)
    • Containers can be created on local storage, NFS, Ceph RBD, DRBD9, ...
    • Migration (offline)
  • IPv6 support
  • DRBD9
  • Ceph Server packages
  • countless bug fixes and package updates (for all details see bugtracker and GIT)

Proxmox VE 4.0 beta1

Released 22.06.2015: See Downloads

Proxmox VE 3.4

Released 19.02.2015: See Downloads - updated ISO image with ZFS 0.6.4.1 on 27.05.2015

  • add ZFS support
    • installer supports all ZFS RAID levels.
  • pve-manager (GUI) updates
    • openvz: implement suspend/resume
    • qemu GUI: support new pending changes API
    • new 'disconnect' button on the network edit panel
    • vzdump: new option --stop to abort running backup job
    • add NUMA option to processor form
    • add GUI for local ZFS storage
    • add startall/stopall/migrateall menu buttons
    • add multiqueues field to NIC form
  • update to qemu 2.1.3
  • qemu-server updates
    • shutdown by Qemu Guest Agent if the agent flag is set
    • snapshot_create: call fsfreeze if agent flag is set
    • vzdump: use QGA freeze in snapshot mode
    • add custom NUMA topology support
    • serial: allow to pass arbitrary device names
    • add link_down flag to network config
    • support additional e1000 variants for VM machines
    • enable hotplug by default
    • hotplug config: allow to enable specific features
    • include memory hotplug patch (experimental)
  • improved hotplug support
  • based on Debian Wheezy 7.8
  • kernel updates
  • countless bug fixes and package updates, for all details see bugtracker and GIT

Proxmox VE 3.3

Released 15.09.2014: See Downloads

  • pve-manager (GUI) updates
    • new Proxmox VE Firewall
    • noVNC console
    • openvz: add bridge vlan && firewall options to gui
    • new Proxmox VE Mobile, GUI for mobile devices
    • add new 'Pool View'
    • ZFS storage can now be configured on GUI
    • glusterfs: new option to specify backup volfile server
    • add new email_from option to datacenter.cfg
    • add Persian (Farsi) translation.
    • improved Spanish translation
    • update Chinese translation
    • Countless updates and fixes
  • update to qemu 2.1.0
    • pci passthrough improvements
    • hotplug improvements
    • migration: enable auto-converge capability
    • add cpu_hotplug (and maxcpus config)
    • add virtio-net multiqueue support
    • new option smbios1 to specify SMBIOS type 1 fields
    • set uuid for newly created machines
    • support new q35 machine type
    • add Broadwell cpu model
    • compile with new libiscsi (1.12.0)
    • use glusterfs 3.5.2 libraries
    • support drive option 'discard'
    • add support for new qemu throttling burst max parameters
    • add 'vmxnet3', 'lsi53c810', and 'pvscsi' to the list of available network card models
  • improved Console support
    • HTML5 Console for shell, VM and container console (noVNC)
    • noVNC console is now the default
    • vncterm: new option -notls (for noVNC, which uses 'wss')
    • vncterm: updated signature for java applet to avoid warnings
  • pve-kernel-2.6.32-32-pve: 2.6.32-136
    • update aacraid, arcmsr, netxtreme2, ixgbe, igb, megaraid_sas and e1000e drivers
    • update to vzkernel-2.6.32-042stab093.4.src.rpm
    • allow to use grub-efi-ia32 boot loader
  • pve-kernel-3.10.0-4-pve: 3.10.0-17
    • enable vfio xfga
    • update arcmsr, netxtreme2, ixgbe, igb, e1000e drivers
    • update to kernel-3.10.0-123.6.3.el7.src.rpm
    • allow to use grub-efi-ia32 boot loader
    • Note: there is still no OpenVZ support with this kernel
  • update ceph packages to 'firefly' (0.80.5)
    • Note: Please upgrade ceph packages first if you run ceph server on proxmox nodes (see ceph upgrade instructions).
  • update gluster packages to 3.5.2
  • fence-agents-pve: 4.0.10
    • update to 4.0.10
    • add fence_ovh and fence_amt
    • remove baytech, bullpap, cpint, egenera, mcdata, nss_wrapper, rackswitch, vixel, xcat. Those agents are no longer included in the upstream package.
    • removed fence_scsi
    • Note: This includes updates for fence_ipmilan (fence_ilo3, fence_ilo4, fence_imm, and fence_idrac), and some parameter names changed (see 'man fence_ipmilan'). Please verify that your fence device still works if you use HA.
  • based on Debian Wheezy 7.6
  • countless bug fixes and package updates, for all details see bugtracker and GIT

Proxmox VE 3.2

Released 10.03.2014: See Downloads

  • improved SPICE support
    • spiceterm: console for OpenVZ and host
    • add new console option to datacenter.cfg (java applet vs. spice)
    • add multi-monitor support
    • GUI: use split-button to easily select SPICE or VNC
    • more details on https://pve.proxmox.com/wiki/SPICE
  • update qemu to 1.7.0
    • add 'pvscsi' to the list of scsi controllers (emulate the VMware PVSCSI device)
    • add 'lsi53c810' to the list of scsi controllers (supported on some very old Windows NT versions)
    • add 'vmxnet3' to the list of available network card models (emulate VMware paravirtualized network card)
    • add drive option 'discard'
    • add support for new qemu throttling burst max parameters
    • improved live backup
  • pve-kernel-2.6.32-27-pve: 2.6.32-121
    • update to vzkernel-2.6.32-042stab084.20.src.rpm
    • update e1000, igb, ixgbe, netxtreme2, megaraid_sas
    • include latest ARECA RAID drivers
    • update Broadcom bnx2/bnx2x drivers to 7.6.62
    • update aacraid to aacraid-1.2.1-30300.src.rpm
  • Ceph Server (Technology Preview)
  • added Open vSwitch support (Technology Preview)
  • Optional 3.10 Kernel (based on RHEL7 beta, currently without OpenVZ support, for testing only)
  • storage: new ZFS plugin (Technology Preview), see https://pve.proxmox.com/wiki/Storage:_ZFS
  • storage: remove nexenta plugin (ZFS plugin is faster)
  • updated GlusterFS to 3.4.2
  • ISO installer now always uses a GPT partition table
    • added 'gdisk' to manage and view partitions via CLI
  • based on Debian Wheezy 7.4
  • countless bug fixes and package updates (for all details see bugtracker and GIT)

Proxmox VE 3.1

Released 21.08.2013: See Downloads

  • pve-kernel-2.6.32-23-pve (2.6.32-109)
    • update to vzkernel-2.6.32-042stab079.5.src.rpm
    • remove fix-ipoib-add-missing-lock.patch (now upstream)
    • include Highpoint 2710 RAID driver
    • update megaraid_sas to 6.600.18.00
  • pve-cluster (3.0-7)
    • suppress another warning about uninitialized value at boot time
    • avoid warning when adding a new node (can't create shared ssh key database...)
    • avoid warning if /etc/pve/priv/ already exists
    • add migration_unsecure param to datacenter.cfg
  • pve-firmware (1.0-23)
    • update for RHEL6.4 based kernels
  • pve-manager (3.1-3)
    • add new favicon.ico
    • fix perl dependencies
    • display subscription status after login
    • add GUI for GlusterFS storage
    • vzdump: wait one second before unmounting the snapshot (avoid device busy error)
    • apt: try to detect new packages
    • tell apt to use http_proxy settings
    • support to view changelogs for available pve packages
    • allow to configure cipher list for pveproxy
    • automatically add pve repository to apt sources list
    • allow to use serial terminal as display
    • avoid apt autoremoval for pve-kernel packages
    • pveproxy: use same cipher list as debian apache ssl.conf
    • avoid apt autoremoval for pve-kernel packages
    • avoid and fix several proxy loops
    • correctly initialize $rpcenv in spiceproxy
    • fix bug #437: include pve-libspice-server1 in package list
    • improved spice proxy code
    • add button to start SPICE console
    • add spiceproxy server
    • fix upgrade shell (quoting bug)
    • new pveupgrade script, used to display additional information to the user (reboot required? database up to date? ...)
    • add button to display package versions, to provide same info as 'pveversion -v' on GUI.
    • fix bug #408: reset ENV before starting shell
    • fix bug #379: allow to overwrite existing CTs if user has VM.Backup permissions
    • Allow to send notification mail about new packages.
    • package manager: run system upgrade inside VNC shell, group packages by 'Origin', remove apt/upgrade API completely.
    • package manager: improve changelog viewer, bug fixes
    • add package update manager to GUI
    • update Norwegian translations
    • RRD display: avoid double refresh
    • avoid gzip http response for png & gif & jar
  • pve-qemu-kvm (1.4-17)
    • enable glusterfs backend
    • add patch to use PVE certs for spice by default
    • disable spice-socket.patch
    • Allow to query spice ticket
    • add usbredir support to spice
    • add experimental code to support spice
  • qemu-server (3.1-1)
    • allow pass through of usb parallel devices (--parallel0 /dev/usb/lp0)
    • fix bugs in migration code (wrong qw() usage)
    • vncproxy: load config from correct node
    • allow to use a socket for serial devices
    • implement 'qm terminal' to open terminal via serial device
    • add ability to run without graphic card ('vga: serial[n]')
    • add support for unsecure/fast migration (setting in datacenter.cfg)
    • remove spice cert paths (depend on pve-qemu-kvm >= 1.4-16)
    • implement spice seamless migration
    • support usb redirection devices for spice (usb[n]: spice)
    • disable tablet device by default for spice
    • spiceproxy API: allow client to choose proxy address
    • spiceproxy API: read cert subject name directly using Net::SSLeay
    • spice: use TLS (encrypt whole traffic)
    • allow to pass SCSI generic devices to guests, for example "scsi0: /dev/sg5"
    • cpu flags optimization
    • add support for SPICE
  • vncterm (1.1-4)
    • Allow to add intermediate certificates to /etc/pve/local/pve-ssl.pem (users previously used apache option SSLCertificateChainFile for that).
  • pve-libspice-server1 (0.12.4-1)
    • initial release and update to 0.12.4
  • ceph (0.67.1-1~bpo70+1)
    • New upstream release
  • libpve-access-control (3.0-6)
    • use shorter spiceproxy tickets
    • add code to generate tickets for SPICE
  • libpve-common-perl (3.0-6)
    • auth.conf: avoid warning if file does not exist, fix file permissions
    • add parser for "/etc/apt/auth.conf"
  • libpve-storage-perl (3.0-10)
    • add Glusterfs Plugin
    • API: add scan method for glusterfs
    • sheepdog: fixes for storeid != 'sheepdog'
  • lvm2 (2.02.98-pve4)
    • New upstream release (fix for udev rules)
  • glusterfs
  • SPICE
  • based on Debian 7.1 (Wheezy)
  • Countless bug fixes (for all details see bugtracker and GIT)
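The serial-terminal features listed under qemu-server (3.1-1) combine into a headless VM setup roughly like this (a sketch; the SCSI generic device path comes from the example above, the rest is assumed):

```
# Hypothetical VM config excerpt: run without a graphic card, attach a serial
# socket reachable via 'qm terminal', and pass through a SCSI generic device
vga: serial0
serial0: socket
scsi0: /dev/sg5
```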

Proxmox VE 3.0

Released 24.05.2013: See Downloads

  • pve-kernel-2.6.32 2.6.32-100
    • fix CVE-2013-2094
  • update ceph packages to 0.61.2
  • libpve-common-perl 3.0-4:
    • fix bug #381: use persistent reservation file for ports
    • new function PVE::Tools::next_migrate_port()
  • libpve-storage-perl 3.0-6
    • rbd : clone volume to same pool that base volume
    • extend storage list API with useful parameters for clone
    • fix iscsi session scan bug
  • pve-cluster 3.0-4
    • depend on fuse (seems most utilities moved from fuse-utils to fuse)
    • remove dependency on fuse-utils (this package is no longer required)
    • fix warning about uninitialized value
    • fix bug 383: restart pveproxy instead of apache2
  • pve-manager 3.0-20
    • fix login problem when http_proxy is set
    • updated Chinese, German and French translation
    • allow to upload files with spaces in filename (replace with '_')
    • allow to delete pools with non-existent VMs/Storage
    • use correct icons for templates
    • sort templates after regular VMs
    • StorageSelector: allow to use new target option, used by Clone
    • fix bug #385: correctly handle Accept-Encoding
    • fix bug #374: correctly remove destroyed container from pool
  • pve-qemu-kvm 1.4-12
    • vma create: only store basename of config file.
  • qemu-server 3.0-15
    • clone disk : keep source volume params
    • clone: check if we can clone to target storage
    • fix bug #381: use PVE::Tools::next_migrate_port()
    • restore: do not restore template flag
  • vncterm 1.1-3
    • re-enable javascript-events.patch (needed by migrate)
  • based on Debian 7.0 (Wheezy)
  • new VM clone feature
  • new event driven API server (pveproxy)
    • completely replace apache2
    • efficient support for HTTP keep-alive
  • support bootlogd (boot log can be viewed on the GUI)
  • update pve-qemu-kvm to 1.4.1
  • update kernel to vzkernel-2.6.32-042stab076.7.src.rpm
    • changed default IO Scheduler to 'deadline'
    • updated Intel network drivers for e1000e, igb and ixgbe
  • Countless bug fixes (for all details see bugtracker and GIT)

Proxmox VE 2.3

Released 04.03.2013: See Downloads

  • update qemu-kvm to 1.4.0
  • new kvm backup implementation, see Backup and Restore
  • added RBD (ceph) support on GUI
  • update kernel to vzkernel-2.6.32-042stab072.10.src.rpm
  • include latest Broadcom bnx2/bnx2x drivers
  • include latest Adaptec aacraid driver 1.2-1[29900]
  • update e1000e to 2.2.14
  • update igb to 4.1.2
  • update ixgbe to 3.12.6
  • enable CONFIG_RT_GROUP_SCHED (also update corosync if you install this kernel)
  • extend memory GUI to support ballooning
  • implement auto-ballooning
  • add HD resize feature to expand disks
  • updated network drivers (bnx2/bnx2x/e1000e/igb/ixgbe)
  • added omping binaries (for testing multicast between nodes)
  • update to latest Debian version 6.0.7
  • qcow2 as default storage format, cache=none (previously raw)
  • KVM64 as default CPU type (previously qemu64)
  • e1000 as default NIC (previously rtl8139)
  • task history per VM
  • Node Summary: added "KSM sharing" and "CPU Socket count"
  • enable/disable tablet for VM on GUI without stop/start of VM (you can use vmmouse instead, for lower CPU usage, works on modern Linux and on all Windows VMs as long as you install the vmmouse drivers)
  • bug fixes (for all details see bugtracker and GIT)
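The ballooning support above is driven by two values in the VM configuration; a minimal sketch (the values are illustrative assumptions):

```
# Hypothetical VM config excerpt: auto-ballooning between 512 MiB and 2048 MiB
memory: 2048   # maximum memory
balloon: 512   # minimum memory target for the balloon driver
```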

Proxmox VE 2.2

Released 24.10.2012: See Downloads

  • update kernel to vzkernel-2.6.32-042stab062.2.src.rpm
  • update Intel NIC drivers (e1000e to 2.1.4, ixgbe to 3.11.33, igb to 4.0.17)
  • update qemu-kvm to 1.2.0
  • openvz: update vzctl to 4.0
  • openvz: use real console instead of 'vzctl enter'
  • add Live Snapshots support (qcow2)
  • added Slovenian translation
  • kvm: new option to select SCSI controller hardware
  • kvm: support up to 32 network devices
  • kvm: support up to 16 virtio devices
  • kvm: add SATA to GUI
  • updated cluster packages
  • update to latest Debian version 6.0.6
  • bug fixes (for all details see bugtracker and GIT)

Proxmox VE 2.1

Released 27.04.2012: See Downloads

  • simplify GUI for users without permissions
  • implemented startup/shutdown ordering
  • improved task logs on node startup/shutdown
  • added Russian, Italian and Turkish translation
  • updated corosync cluster stack to 1.4.3
  • update LVM to 2.02.95 (fixes snapshot problems)
  • bug fixes

Proxmox VE 2.0

Updated ISO 5.4.2012: See Downloads

  • pve-kernel-2.6.32-11-pve (2.6.32-65)
    • rebase on vzkernel-2.6.32-042stab053.5.src.rpm
  • fix for windows timing issues ("args: -no-hpet -no-kvm-pit-reinjection" are now defaults for newer Windows guests)
  • subscription key management (commercial subscribers)
  • vzdump fixes (qmrestore)

Released 30.03.2012: See Downloads

(First public beta released on 30.09.2011, rc1 on 16.02.2012, final release on 30.03.2012)

Proxmox VE 2.0 Documentation

  • Complete new GUI
    • based on Ext JS 4 JavaScript framework
    • fast search-driven interface, capable of handling hundreds and probably thousands of VMs
    • secure VNC console, supporting external VNC viewer with SSL support
    • role based user and permission management for all objects (VMs, storages, nodes, etc.)
    • Support for multiple authentication sources (MS ADS, LDAP, Linux PAM, Proxmox VE authentication)
  • High Availability Cluster support for KVM guests and OpenVZ containers
    • resource agents for KVM and OpenVZ
    • GUI for managing KVM and OpenVZ HA settings
  • Based on Debian 6.0 Squeeze
    • long-term 2.6.32 Kernel (based on RHEL6.x) with KVM and OpenVZ as default
  • New cluster communication based on corosync, including:
    • Proxmox Cluster file system (pmxcfs): Database-driven file system for storing configuration files, replicated in realtime on all nodes using corosync
    • creates multi-master clusters (no single master anymore!)
    • cluster-wide logging
    • basis for HA setups with KVM guests and OpenVZ container config files
  • RESTful web API Proxmox_VE_API
    • Resource Oriented Architecture (ROA)
    • declarative API definition using JSON Schema
    • enable easy integration for third party management tools
  • Backup/Restore via GUI
    • GUI and CLI, works for OpenVZ containers and KVM VMs
    • "Backup Now" via GUI
    • Restore via GUI
    • All jobs can be monitored as “Recent tasks”
  • OpenVZ
    • Appliance Downloader, including TurnKey Linux channel
    • Multiple storages for OpenVZ containers, no limit to /var/lib/vz anymore!
    • vswap support
    • Improved init.log (shows start-up logs of an OpenVZ container)
  • Commitment to Free Software (FOSS): public code repository and bug tracker for the 2.x code base.
  • Licensed under GNU Affero General Public License, version 3 https://www.gnu.org/licenses/agpl-3.0.html.

Proxmox VE 1.9

Updated ISO 27.9.2011, including the following, see Downloads

Release Notes:

  • pve-kernel-2.6.32-6-pve (2.6.32-47)
    • update to vzkernel-2.6.32-042stab037.1.src.rpm
    • use bnx from upstream
    • update igb driver to 3.2.9
    • update e1000e driver to 1.6.2
    • enable GDTH driver
    • update ata_generic to support intel IDE-R
    • use BIOS default settings for ASPM (fix-aspm-policy.patch)
  • pve-firmware (1.0-14)
    • include firmware for new bnx2 drivers
  • vzctl (3.0.29-2pve1)
    • update to 3.0.29.2
  • vzdump (1.2-16)
    • use rsync option '--sparse' on first pass (takes less disk space, as suggested by John Stevens)

Original 1.9 ISO, released 13.09.2011: See Downloads

Release Notes:

  • pve-kernel-2.6.32 (2.6.32-43)
    • rebase on vzkernel-2.6.32-042stab036.1.src.rpm (new stable Openvz kernel branch), updates for drivers including e1000e to 1.5.1, ARECA RAID driver, megaraid_sas, bnx2, igb to 3.1.16, ixgbe to 3.5.14-NAPI, drbd 8.3.10, ...
  • vzctl (3.0.28-1pve5)
    • update to latest upstream
    • set default template to debian-6.0-standard
    • set CONFIGFILE="pve.auto" in /etc/vz/vz.conf
    • merge some fixes from upstream
  • pve-manager (1.9-24)
    • fix uptime display for 2.6.32 kernel with 1000HZ
    • support newer vzctl versions.
    • support 'maxfiles' backup option.
  • pve-qemu-kvm (0.15.0-1)
    • update to upstream 0.15.0
    • use pxe roms from upstream qemu-kvm
  • qemu-server (1.1-32)
    • small fixes for new qemu-kvm 0.15.0
  • libpve-storage-perl (1.0-19)
    • set LC_ALL instead of LANG (avoid bug when user sets LC_ environment variables)
    • iscsi: tolerate errors when not all portals are online.
  • vzdump (1.2-15)
    • run pre-restart hook after snapshot

Proxmox VE 1.8

Updated ISO 16.6.2011, includes current stable packages Downloads

  • pve-kernel-2.6.32 (2.6.32-33)
  • aacraid with support for Adaptec 6 series (e.g. 6805)
  • KVM 0.14.1
  • vzctl 3.0.27
  • vzdump 1.2-13
  • bug fixes

Released 30.03.2011: See Downloads

Release Notes:

  • update to Debian Lenny 5.0.8
  • update to kvm 0.14.0
  • updated kernels with many fixes
  • protect against Cross Site Request Forgery (added anti-CSRF tokens)
  • bug fixes

Detailed change log:

  • qemu-server (1.1-30)
    • use 'cache=none' for raw images and devices to improve performance
  • pve-qemu-kvm (0.14.0-2)
    • also install vgabios-qxl.bin vgabios-stdvga.bin vgabios-vmware.bin
    • update to 0.14.0
    • add fix for eventfd support (msix-eventfd-fix.patch)
    • removed kvmtrace (removed from upstream?)
    • add vnc keyboard fixes for fr-ca (reported by Pierre-Yves)
  • pve-kernel-2.6.32 (2.6.32-32)
    • update to Debian Squeeze kernel 2.6.32-32
    • remove patch use-init-task-credential.patch (fixed in upstream)
    • enable CONFIG_SENSORS_K10TEMP=m
  • pve-kernel-2.6.18 (2.6.18-14)
    • update to ovzkernel-2.6.18-238.5.1.el5.028stab085.3.src.rpm
    • increased kernel API version to '5-pve'
    • update to SCST 2.0.0.1
    • update to OCFS2 1.4.7
  • pve-qemu-kvm-2.6.18 (0.9.1-12)
    • update to kvm-83-224.el5.src.rpm
  • pve-kernel-2.6.35 (2.6.35-10)
    • update to Ubuntu-2.6.35-27.48
  • pve-manager (1.8-15)
    • protect against Cross Site Request Forgery (added anti-CSRF tokens)
    • correctly encode HTML entities inside notes (avoid persistent XSS vulnerabilities)
    • use http_proxy in cron APL download if configured

Proxmox VE 1.7

Released 30.11.2010: See Downloads

Change log:

  • libpve-storage-perl (1.0-16)
    • several fixes for multipath
  • pve-kernel-2.6.18 (2.6.18-10)
    • update to ovzkernel-2.6.18-194.26.1.el5.028stab070.14.src.rpm
  • pve-kernel-2.6.32 (2.6.32-28)
    • update to Debian Squeeze kernel 2.6.32-28 (openvz 2.6.32-dzhanibekov)
    • update igb driver to version 2.4.8
    • update ixgbe driver to version 3.0.14
  • pve-manager (1.7-10)
    • we only display the warning about missing openvz if we have no kvm.
    • fix config change detection in pvectl
    • on the create VM page, query capabilities of the remote host (KVM/OpenVZ enabled?) and select reasonable defaults
  • pve-qemu-kvm (0.13.0-2)
    • do not install unnecessary blobs (manually install blobs, use --disable-blobs)
    • update migration and vnc keymap patches for 0.13.0
  • vzdump (1.2-9)
    • fix veth numbers when restoring openvz containers (vzrestore)
  • qemu-server (1.1-25)
    • new --repeat option for qmrestore
    • use new -netdev syntax for network devices. This enables tcp offload settings, and we can use 'vhost=on' for virtio-net
  • Small bug fixes

Proxmox VE 1.6 (updated) - ISO Installer with 2.6.32 Kernel with OpenVZ including KVM 0.12.5

Released 18.10.2010: See Downloads

Change log:

  • pve-kernel-2.6.32 (2.6.32-24)
    • include fix for OpenVZ bug #1603
    • ixgbe driver update (3.0.12)
    • upgrade to Debian Squeeze kernel 2.6.32-24 (OpenVZ 2.6.32-dyomin)
    • strip debug info
    • remove default initrd image (no longer needed)
    • do not set CONFIG_DMAR_DEFAULT_ON
    • enable CAPIDRV module
  • pve-qemu-kvm (0.12.5-2)
    • enable up to 32 NICs (up to 29 are usable)
  • pve-manager (1.6-5)
    • updated Japanese translations
    • minor fixes in bin/pveversion (adjust for proxmox-ve-2.6.35)
    • update Czech translations.
  • iscsi-scst (1.0.1.1-4)
    • create /var/lib/scst/pr (required by the kernel module)
  • qemu-server (1.1-22)
    • use larger block size for dd (bs=256k) to speed up restore.
    • new backup=yes|no option for drives. VZDump ignores drives with option backup=no (many thanks to draguz)
    • new qmrestore option '--unique': create unique MAC address on restore (many thanks to Seth)
  • vzdump (1.2-8)
    • correctly escape output from 'find' (escape backslash using sed). It is now possible to backup files which contain a backslash character.
    • added new hook 'log-end' (see hook-script.pl example)
  • dab (1.1-9)
    • correct update-rc.d invocation (add '.' after runlevel)
    • fix postgres startup script for squeeze
    • support dependency based startup (squeeze)
    • use postgresql 8.4 for squeeze
  • pve-kernel-2.6.35 (2.6.35-6)
    • first public release (KVM only, KSM enabled)
  • pve-kernel-2.6.18 (2.6.18-8)
    • update to ovzkernel-2.6.18-194.17.1.el5.028stab070.7.src.rpm
    • remove initrd code (no longer needed)
    • strip debug symbols (reduce kernel package size)
  • pve-qemu-kvm-2.6.18 (0.9.1-8)
    • update to kvm-83-164.el5_5.23.src.rpm
    • enable all (previously disabled) kvm features (scsi, usb, ...)
  • pve-firmware (1.0-9)
    • include firmware for new 2.6.35 kernels
  • Small bug fixes
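
The qemu-server 1.1-22 entry above switches restore to a larger dd block size (bs=256k). As a minimal sketch of why this helps: a bigger block size means fewer read/write syscalls for the same amount of image data. The file path below is hypothetical, purely for demonstration.

```shell
# Copy 4 blocks of 256 KiB each, as a restore would when streaming image data;
# fewer, larger I/O operations reduce per-call overhead.
dd if=/dev/zero of=/tmp/restore-demo.img bs=256k count=4 2>/dev/null
wc -c < /tmp/restore-demo.img    # 4 x 262144 bytes = 1048576
rm -f /tmp/restore-demo.img
```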

Proxmox VE 1.6 - ISO Installer with 2.6.32 Kernel with OpenVZ support, including KVM 0.12.5

Released 30.08.2010: See Downloads

Change log:

  • New 2.6.32 kernel including OpenVZ support, based on latest Debian Squeeze Kernel
  • Removed KSM (OpenVZ does not support KSM)
  • DRBD tools: update to drbd 8.3.7
  • New KVM user-space tools (0.12.5)
  • New OpenVZ user-space tools (vzctl 3.0.24)
  • vzdump: use ionice to set io priority. By default, we now set "ionice -c2 -n7" to limit io (instead of using a default bwlimit)
  • vzrestore: new --force option to overwrite existing data
  • dab bug fix: only print a warning for packages installed directly with 'dpkg' (when there is no information in 'available' list)
  • dab: only mount /proc/ if not already mounted (avoid problems with vzctl 3.0.24)
  • Bug fixes
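
The vzdump change above ("ionice -c2 -n7") can be reproduced directly on the shell: class 2 is the best-effort I/O scheduling class, and level 7 is its lowest priority, so backup I/O yields to other disk activity. The echo below is just a placeholder for the real backup command.

```shell
# Run a command in the best-effort I/O class (-c2) at the lowest priority (-n7),
# as vzdump now does by default instead of applying a bandwidth limit.
ionice -c2 -n7 echo "backup running at low io priority"
```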

Update 20.9.2010:

2.6.32 Kernel:

  • pve-kernel-2.6.32 (2.6.32-19)
  • include latest Broadcom bnx2, bnx2x drivers
  • update to Debian kernel 2.6.32-22
  • fix CVE-2010-3081 kernel
  • include latest ixgbe driver

2.6.24 Kernel:

  • pve-kernel-2.6.24 (2.6.24-25)
  • update to Ubuntu-2.6.24-28.7 (fix CVE-2010-3081)

2.6.18 Kernel:

  • pve-kernel-2.6.18 (2.6.18-6)
  • fix CVE-2010-3081

Proxmox VE 1.5 - New Kernel 2.6.24 and 2.6.32, including KVM 0.12.4 and gPXE

Released 17.05.2010: See Downloads

Change log 2.6.24 (KVM and OpenVZ):

  • update to qemu-kvm-0.12.4
  • Seabios
  • include gPXE 1.0 network boot (WDS is working now)
  • update e1000e to e1000e-1.1.2.tar.gz
  • update igb to igb-2.1.9.tar.gz
  • update e1000 to e1000-8.0.19.tar.gz
  • update bnx2 to netxtreme2-5.2.50.tar.gz
  • update tg3 to tg3-3.105h.tar.gz
  • include atl1e (Atheros NIC)
  • kernel update (latest 2.6.24)
  • Fixed drive order bug (KVM)
  • Allow up to 4094 bridges
  • update to the latest vzctl
  • Use option --sparse for qmigrate

Change log 2.6.32 (KVM only):

  • update to qemu-kvm-0.12.4
  • Seabios
  • include gPXE 1.0 network boot (WDS is working now)
  • kernel update (latest 2.6.32)
  • Fixed drive order bug (KVM)
  • Allow up to 4094 bridges
  • Use option --sparse for qmigrate

Proxmox VE 1.5

Released 20.01.2010: See Downloads

Including:

  • many Linux kernel improvements
  • support different kernel versions
  • new 2.6.18 Linux kernel - default (OpenVZ, KVM)
  • new 2.6.24 Linux kernel (OpenVZ, KVM)
  • new 2.6.32 Linux kernel (KVM only, KSM support)
  • use update-initramfs
  • DRBD: update to drbd 8.3.4
  • AOE: update to AOE6-73
  • SCST: update to latest SCST modules
  • new KVM user-space tools
  • we now use the qemu-kvm stable branch (0.11.1)
  • new virtual packages for better kernel dependency handling - proxmox-ve-2.6.18, proxmox-ve-2.6.24 and proxmox-ve-2.6.32
  • updated most appliances
  • installation CD now uses/installs proxmox-ve-2.6.18
  • installer uses UUID for /boot partition
  • updated vzctl, with improved init-logger patch
  • updated Russian and Japanese translation
  • many other small improvements and bug fixes

Proxmox VE 1.4

Released 19.10.2009: See Downloads

Including:

  • First release with new storage model
  • iSCSI support
  • NFS support
  • LVM (managed storage, shared LVM devices like FC)
  • DRBD storage replication with active/active mode
  • Multiple ISO storages (NFS)
  • Multiple backup locations (including NFS)
  • Zero downtime live migration (KVM) over ssh channel - all traffic is encrypted
  • Updated web interface (pve-manager)
  • Support new storage model
  • Allow to specify MAC address (for KVM and OpenVZ in veth mode)
  • Added iSCSI Tape support, see https://scst.sourceforge.net/
  • Added SCST modules to the kernel
  • Provide SCST userspace package
  • Updated kernel; update broadcom bnx2 and tg3 drivers; updated Areca RAID (arcmsr) drivers; updated kvm modules (2.6.30.1); update realtek network drivers; update intel igb driver (igb-1.3.28.4.tar.gz); update intel e1000e driver (e1000e-1.0.2.5.tar.gz); cciss driver updates; ich10 updates for Nehalem (Core i7) based systems
  • Updated kvm modules (2.6.30.1)
  • Update DRBD to 8.3.2
  • New KVM userspace tools
  • New feature: assign Sockets/Cores to KVM guests
  • We now use the qemu-kvm stable branch (0.11.0)
  • Rename package pve-kvm to pve-qemu-kvm
  • Qemu-server updates
  • Support up to 1000 vlans
  • New migrate_speed and migrate_downtime settings
  • New VZDump - complete rewrite to support the new storage architecture; do not waste space in 'stop' mode
  • Slightly changed naming of backup files; added date and time to file name; added VM type (OpenVZ/QEMU) to file name; new plugin architecture; support hook scripts


Proxmox VE 1.4 beta2

Released 29.09.2009: See Downloads

Including:

  • Updated vzdump web interface with support for the new storage model
  • Updated kernel: update broadcom bnx2 and tg3 drivers, updated kvm modules (2.6.30.1), updated realtek network drivers, more ich10 updates for Nehalem (corei7) based systems
  • New kvm userspace tools
  • New feature: assign Sockets/Cores to KVM guests
  • We now use the qemu-kvm stable branch (0.11.0)
  • Rename package pve-kvm to pve-qemu-kvm
  • Qemu-server updates: support up to 1000 vlans
  • New migrate_speed and migrate_downtime settings

Note: the sources.list does not point to the pvetest repo, so please adapt this accordingly if you want to update/install packages from pvetest

Proxmox VE 1.4 beta1

Released 11.09.2009: See Downloads

Including:

  • First release with new storage model
  • iSCSI support
  • NFS support
  • LVM (managed storage, shared LVM devices)
  • DRBD storage replication with active/active mode (DRBD 8.3.2)
  • Multiple ISO storages (NFS)
  • Zero downtime live migration (KVM) over ssh channel - all traffic is encrypted
  • Updated kernel: update Intel igb driver (igb-1.3.28.4.tar.gz), Intel e1000e driver (e1000e-1.0.2.5.tar.gz), cciss and ich10 updates, update DRBD to 8.3.2
  • new VZDump: complete rewrite to support the new storage architecture; do not waste space in 'stop' mode; new plugin architecture

Note: the sources.list does not point to the pvetest repo, so please adapt this accordingly if you want to update/install packages from pvetest

Proxmox VE 1.3

Released 4.06.2009: See Downloads

Including:

  • Updated Kernel
  • Support for HighPoint RR3120
  • Removed OpenVZ limit of 50 machines
  • Update to kvm-86
  • Vzdump: minor bug fixes
  • Qemu-server: added new 'tablet' option

Proxmox VE 1.2

Released 12.05.2009: See Downloads

Including:

  • Based on new Debian Lenny (Debian 5.0)
  • New KVM with many improvements (kvm-85)
  • New Kernel (still 2.6.24 based but with a lot of backported drivers)
  • Update to aoe6-71
  • Update to drbd-8.3.1
  • Include HighPoint RocketRAID 3xxx/4xxx Controller Driver
  • Update to busybox 1.14.0
  • Use busybox mdev as firmware loader
  • Compiled with gcc 4.3.2 (default on Debian Lenny)
  • Load virtio_blk (if you run Proxmox VE inside Proxmox VE as a KVM guest)
  • New OpenVZ vzctl (mainly bug fixes)
  • Vncterm: better terminal emulation
  • Everything updated to Debian Lenny
  • Many bug fixes

Proxmox VE 1.1

Released 15.01.2009: See Downloads

Including:

  • Updated Kernel - new e1000/e1000e drivers
  • New KVM with many improvements (KVM-83)
  • New Chinese translations
  • Minor pve-manager updates and bug fixes

Proxmox VE 1.0 - First stable release

Released 29.10.2008: See Downloads

Including:

  • KVM and OpenVZ live migration
  • Web interface for vzdump backups
  • Improved vzdump, now supports KVM guests
  • New templates available
  • Support multiple languages (see Translations for details)
  • Improved installer (use grub as boot loader, automatic country/keyboard detection)
  • Support up to 16 CPUs
  • Based on Debian 4.0r5
  • KVM updates (kvm-75)
  • OpenVZ updates, Fedora support added
  • Improved Kernel (includes drivers for iSCSI, AoE 6-64, DRBD 8.2.6, qlogic firmware for iSCSI/FC HBAs)
  • Many bugfixes

Proxmox VE 0.9beta2

Released 22.7.2008, see Proxmox VE 0.9beta2 released!

Including:

  • Extended pre-built Virtual Appliance downloads
  • Multi-bridge support
  • Bonding
  • VLAN support
  • Extended web interface for KVM guests (add hard drives, network cards, etc.)
  • Many bug fixes

Proxmox VE 0.9

First public release: 15.4.2008