<div class="toclimit-3 toclimit-4 toclimit-5">__TOC__</div>

== Roadmap ==
* <s>Offline updates</s> [https://forum.proxmox.com/threads/proxmox-offline-mirror-released.115218/ done]
* <s>Cross-cluster migration mechanism</s> - foundation and CLI integration released with Proxmox VE 7.3
* Enhancement and stabilization of the integrated Software-Defined Network solution
** <s>Integrate local network devices into the access control framework.</s> released with Proxmox VE 8.0
** Stabilize VLAN and source NAT management as first parts of bringing Proxmox VE SDN out of tech preview.
* Project "Cattle and Pets"
** Improve the user and virtual machine management experience on big setups
** Add a cluster-wide update and status control center
** Assist with Ceph upgrades via semi-automated restarts of services and OSDs
** <s>Improve and extend notifications by allowing one to add more endpoints besides email, each with separate filters</s> released with Proxmox VE 8.1
** <s>Automated/unattended installation</s> done with Proxmox VE 8.2
* Text-based installer UI
** <s>Provide a simple text-based installer as an alternative to the GTK one</s> released with Proxmox VE 8.0
** <s>Make the text-based installer fully functional even if only a serial console is present.</s> released with Proxmox VE 8.1
* Cluster Resource Scheduling improvements
*: Short/Mid-Term:
** <s>Re-balance services on fresh start-up (request-stop to request-start configuration change)</s> released with Proxmox VE 7.4
** Account for non-HA virtual guests
*: Mid/Long-Term:
** Add a dynamic-load scheduling mode
** Add an option to also schedule non-HA virtual guests
* Stabilizing the Software-Defined Network stack
** <s>Fully support the core SDN stack</s> - done with Proxmox VE 8.1
** Implement DHCP-based IP address management - tech preview with Proxmox VE 8.1
** Improve and polish the user interface experience


=Release History=
See also [https://forum.proxmox.com/forums/announcements.7/ Announcement forum]
== Proxmox VE 8.2 ==
Released 24 April 2024: See [[Downloads]] (ISO release 2, updated on 22 August 2024 to ship a newer EFI Secure Boot shim)
* Based on Debian Bookworm (12.5)
* Latest 6.8 Kernel as new stable default
* QEMU 8.1.5
* LXC 6.0.0
* ZFS 2.2.3
* Ceph Reef 18.2.2
* Ceph Quincy 17.2.7
=== Highlights ===
* New import wizard to migrate guests directly from other hypervisors.
*: Connect to other hypervisors using their public APIs and directly migrate guests to Proxmox VE.
*: First implementation is for VMware ESXi.
*: Guests can be started on Proxmox VE while their data is still being imported to the target storage in the background.
* Support for automated and unattended installation of Proxmox VE.
*: Proxmox VE now ships a tool that prepares a Proxmox VE ISO for automated installation.
*: The prepared ISO retrieves all required settings for automated installation from an answer file.
*: The answer file can be provided directly in the ISO, on an additional disk such as a USB flash drive, or over the network.
* Backup fleecing (advanced feature).
*: When creating a backup of a running VM, a slow backup target can negatively impact guest IO during the backup process.
*: Fleecing can reduce this impact by using fast local storage as a buffer for data blocks.
*: However, the fleecing approach requires the use of temporary local storage space.
*: Backup fleecing can make sense when backing up IO-heavy guests to a remote Proxmox Backup Server with a slow network connection.
* Modernized Proxmox VE firewall implementation based on <code>nftables</code> (opt-in technology preview).
*: The new implementation is written in Rust and intended to replace the current firewall based on <code>iptables</code> in the future.
*: Moving to <code>nftables</code> will improve robustness and make it possible to fix long-standing issues with the old implementation.
*: The <code>nftables</code> firewall is provided as a technology preview to gather feedback from the community and needs to be enabled manually.
*: With a few exceptions, its feature set is nearly on par with the old firewall implementation.
* Seamless upgrade from Proxmox VE 7.4, see [[Upgrade from 7 to 8]]
=== Changelog Overview ===
==== Enhancements in the web interface (GUI) ====
* Fix an issue where the VM creation wizard would not allow creating a VM without an ISO.
* Fix an issue that prevented users from seeing accessible replication jobs.
* Fix an issue where the node summary page would not display the version of a running foreign kernel ([https://bugzilla.proxmox.com/show_bug.cgi?id=5121 issue 5121]).
* When editing LVM or directory storages, add an explanatory tooltip to the "Shared" checkbox.
* Expose the <code>saferemove</code> setting for LVM storages in the GUI.
* Add some missing packages to the "Package versions" window.
* When editing ZFS storages, display <code>16k</code> as the blocksize placeholder to reflect the current ZFS defaults.
* Change "Syslog" button to "System Log" button to better reflect that the log is gathered from the systemd journal.
* Fix an issue where the GUI used an incorrect language code for Korean, and provide a clean transition for users who still have a cookie with the incorrect language code set.
* Improve documentation of the <code>/nodes/{node}/status</code> API call.
* Fix an issue where the date picker would choose the wrong date after changing to a different month.
* When granting permissions to a group, make the group selector searchable, similar to the user and token selectors.
* Clarify the confirmation prompt for removing a certificate without a name.
* Fix an issue where edit windows would not be correctly masked while loading.
* Fix an issue where adding a storage to a pool would break edit windows of the same browser session, and guard against similar issues in the future.
* Display the end-of-life message as a notice up until three weeks before the end-of-life date, and display it as a warning from that point on.
* Move the "Reset" button for edit windows to an icon-only button in the title bar ([https://bugzilla.proxmox.com/show_bug.cgi?id=5277 issue 5277]).
*: This reduces the risk of misclicking and accidentally resetting form data.
* Fix xterm.js not loading in certain OS+Browser constellations, for example iOS ([https://bugzilla.proxmox.com/show_bug.cgi?id=5063 issue 5063]).
* Change the icon for editing guest notes from a gear to a more fitting pencil.
* Allow selecting note contents by double-clicking.
*: Previously, this was not possible, because double-clicking a note would always open the editor window.
*: Opening the editor on double-click can now be disabled by a browser-local GUI setting.
* The TFA input field now sets an autocompletion hint for improved compatibility with password managers ([https://bugzilla.proxmox.com/show_bug.cgi?id=5251 issue 5251]).
* Fix an issue where the mobile UI did not allow logging in with TOTP.
* Display the subscription reminder in the mobile UI as well, for improved consistency with the GUI.
* The user edit window now disables the "Key IDs" field if the user has TFA configured.
* The HTTP server now supports compressing responses using <code>deflate</code> in addition to <code>gzip</code>.
* Improved translations, among others:
** French
** German
** Italian
** Japanese
** Korean
** Simplified Chinese
** Spanish
** Traditional Chinese
** Ukrainian
==== Virtual machines (KVM/QEMU) ====
* The new ESXi importer is available as a storage plugin.
*: This allows importing guests from an ESXi node (preferred for performance reasons) or a vCenter instance.
*: Guests need to be stopped on the VMware side before the import.
*: Live import allows starting the guests on Proxmox VE immediately and migrating their disks in the background.
*: Most settings are read from the source and translated to equivalent Proxmox VE configuration options.
*: Overrides for configuration options can be defined manually.
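As a rough sketch of the CLI side (storage ID, hostname and credentials below are placeholder values, not from the release notes), an ESXi host can be attached as an import source via <code>pvesm</code>, after which its guests become visible for import:

```shell
# Add an ESXi host as an import-source storage (ID and credentials are examples).
pvesm add esxi esxi-source \
    --server esxi.example.com \
    --username root \
    --password 'secret' \
    --skip-cert-verification 1

# List the guests available for import from that source.
pvesm list esxi-source
```

The actual import of a listed guest can then be started from the GUI import wizard.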
* New QEMU version 8.1.5
*: See the [https://wiki.qemu.org/ChangeLog/8.1 upstream changelog] for details.
* Apply some upstream QEMU patches:
** Fix a potential deadlock resizing disks with IO thread enabled.
** Fix a potential deadlock when taking a snapshot or a suspend-mode backup with VirtIO block disks and IO thread.
** Fix an issue where guest IO on a VirtIO block or VirtIO SCSI disk with IO thread could get stuck on backup, storage move or hotplug.
* Increase the limit of open files for QEMU processes ([https://bugzilla.proxmox.com/show_bug.cgi?id=4507 issue 4507]).
*: This fixes issues where a VM with many network devices with multi-queue or many virtual RBD disks would exceed the limit, resulting in VM crashes or stuck guest IO.
* Fix an issue where live-importing certain VMDK images would trigger an assertion failure.
* If a backup fails, increase the chance of showing the relevant error message by ignoring irrelevant error messages.
* Add CPU models introduced in QEMU 8.1.
* Mark Windows Server 2025 as supported by the <code>win11</code> OS type.
* Add support for emulated IOMMU (vIOMMU) which enables hypervisor VMs to pass through PCI devices to nested VMs ([https://bugzilla.proxmox.com/show_bug.cgi?id=3784 issue 3784]).
* Allow up to 256 cores per socket.
* Add Intel E1000E as a supported network device model in the GUI, as it is a popular option in VMware guests.
* Allow specifying vendor and product information for virtual SCSI disks ([https://bugzilla.proxmox.com/show_bug.cgi?id=4957 issue 4957]). Fix a regression that temporarily prevented creation of SCSI Cloud-init disks ([https://bugzilla.proxmox.com/show_bug.cgi?id=5363 issue 5363]).
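For illustration (VM ID, storage volume and strings are placeholders), the vendor and product information can be set as sub-options of a SCSI disk:

```shell
# Set the vendor/product strings reported by the emulated SCSI disk (example values).
qm set 100 --scsi0 local-lvm:vm-100-disk-0,vendor=ExampleVnd,product=ExampleProd
```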
* Make IP addresses selectable for copy & paste in the guest agent network information.
* Fail live migration early if the VNC clipboard is enabled, as this is not yet supported by QEMU.
* Prevent starting a VM with a 32-bit CPU type and a 64-bit OVMF BIOS as it is unsupported.
* Prevent CPU hotplug for non-<code>x86_64</code> VMs as it is unsupported.
* Fix an issue where rebooting a VM with a passed-through mediated device would fail due to a race condition.
* Fix an issue where insecure live migration of multiple VMs in parallel could cause some migrations to fail due to a port conflict ([https://bugzilla.proxmox.com/show_bug.cgi?id=4501 issue 4501]).
* Fix an issue where removing a snapshot of a running VM would fail if a disk was moved since VM start ([https://bugzilla.proxmox.com/show_bug.cgi?id=2258 issue 2258]).
* Fix an issue where migration would fail to deactivate volumes on the source node in case the volume has to be renamed on the target node.
* Fix an issue where the storage for Cloud-init <code>cicustom</code> snippets would not be activated when starting a VM, causing the VM startup to fail ([https://bugzilla.proxmox.com/show_bug.cgi?id=4085 issue 4085]).
* Cloning a VM now succeeds even if deactivating volumes on the source node fails ([https://bugzilla.proxmox.com/show_bug.cgi?id=1734 issue 1734]).
*: Deactivating a volume can fail if a template is cloned multiple times in parallel, and previously caused the clone task to fail.
* Increase the default timeout for <code>drive-mirror</code> operations to 10 minutes, to avoid timeouts when moving storage of running VMs.
* When importing a disk, emit a warning when a fallback format is used instead of the requested format.
* Allow configuring the VNC clipboard in the GUI.
* Fix an issue where VMs that were suspended in-guest would be wrongly terminated after backup.
==== Containers (LXC) ====
* New LXC version 6.0.0
*: See the [https://discuss.linuxcontainers.org/t/lxc-6-0-lts-has-been-released/19567 upstream changelog] for details.
* Allow filtering firewall logs of containers, similarly to node and VM firewall logs.
* Add a <code>keep-env</code> option to the <code>pct enter</code> and <code>pct exec</code> commands that controls whether the current environment is passed to the container.
*: For backwards compatibility, <code>keep-env</code> currently defaults to enabled, and will default to disabled in the next major release.
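A minimal example of the new option (the container ID is a placeholder):

```shell
# Current default: pass the host environment into the container.
pct exec 101 --keep-env 1 -- env

# Future default: start from a clean environment inside the container.
pct exec 101 --keep-env 0 -- env
```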
* Make host device passthrough for containers available in the GUI ([https://bugzilla.proxmox.com/show_bug.cgi?id=754 issue 754]).
*: API and CLI already supported device passthrough since Proxmox VE 8.1.
* Avoid passing environment variables for the TLS certificate store location from the host to the container ([https://bugzilla.proxmox.com/show_bug.cgi?id=5194 issue 5194]).
*: These variables did not leak any sensitive information, but could cause errors in the container if the TLS certificate store was at a different location.
* When changing the power status of a container, lock its config asynchronously in the task worker to avoid blocking the API.
* Fix an issue that prevented hotplug of mount points when running kernel 6.5 or later ([https://bugzilla.proxmox.com/show_bug.cgi?id=5160 issue 5160]).
* Change the <code>vmid</code> field type in the <code>/nodes/{node}/lxc</code> response from string to integer to conform with the schema.
* Fix an issue where volumes could not be hotplugged due to missing volume activation.
* Fix an issue where a device passthrough entry would be added to the container config even though the entry was invalid.
* Fix an issue where executing <code>pct fsck</code> would fail due to missing volume activation, and deactivate volumes afterwards even if the command failed.
* Avoid passing the deprecated <code>noacl</code> mount option for <code>ext4</code> formatted volumes ([https://bugzilla.proxmox.com/show_bug.cgi?id=4846 issue 4846]).
==== General improvements for virtual guests ====
* Stopping a VM or container can now overrule active shutdown tasks ([https://bugzilla.proxmox.com/show_bug.cgi?id=4474 issue 4474]).
*: Previously, stopping a guest could fail if there was a running shutdown task for that guest.
*: When stopping a guest, the GUI now offers to abort running shutdown tasks.
*: Overruling shutdown tasks is currently not implemented for HA guests.
* Allow moving unused disks to another storage.
* Fractional rate limits for virtual network devices can now be entered without a leading zero.
==== Improved management for Proxmox VE clusters ====
* Modernize handling of host keys for SSH connections between cluster nodes ([https://bugzilla.proxmox.com/show_bug.cgi?id=4886 issue 4886]).
*: Previously, <code>/etc/ssh/ssh_known_hosts</code> was a symlink to a shared file containing all node hostkeys.
*: This could cause problems if conflicting hostkeys appeared in <code>/root/.ssh/known_hosts</code>, for example after re-joining a node to the cluster under its old name.
*: Now, each node advertises its own host key over the cluster filesystem. When Proxmox VE initiates an SSH connection from one node to another, it pins the advertised host key.
*: For existing clusters, <code>pvecm updatecerts</code> can optionally unmerge the existing <code>/etc/ssh/ssh_known_hosts</code>.
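On existing clusters, the optional unmerge can be triggered per node; the flag name below reflects the released CLI, but treat the exact spelling as an assumption:

```shell
# Re-distribute certificates/host keys and drop the old merged known-hosts file.
pvecm updatecerts --unmerge-known-hosts
```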
* Fix an issue where UTF-8-encoded files under <code>/etc/pve</code> would be incorrectly re-encoded as UTF-8.
* Increase key size for new SSH keys from 2048 to 4096 bit.
* Improvements and fixes to the notification system introduced in Proxmox VE 8.1:
** Fix an issue where notification recipients could not be removed from a <code>sendmail</code> or SMTP target ([https://bugzilla.proxmox.com/show_bug.cgi?id=5074 issue 5074]).
** Backup job notifications include the total backup size, which was previously missing ([https://bugzilla.proxmox.com/show_bug.cgi?id=5067 issue 5067]).
** Clarify notification fields in the backup job window.
** Deprecate notification options that were only relevant in a short transition period before the final notification system was introduced.
** As built-in targets and matchers cannot be removed, change the corresponding button labels from "Remove" to "Reset".
** Fix an issue where notification emails would contain a localized <code>Date</code> header that caused issues with some mail clients.
** A mail forwarded to an SMTP target by <code>proxmox-mail-forward</code> now contains the original message as the message body instead of a message part.
** Notification mails now contain the <code>Auto-Submitted</code> header.
** Support lists of values for the <code>exact</code> match-field mode.
** Notification events for forwarded mails now include the <code>hostname</code> and <code>type</code>.
** The Gotify notification target now also sets the <code>X-Gotify-Key</code> header for authentication, to improve compatibility with older versions of Gotify ([https://bugzilla.proxmox.com/show_bug.cgi?id=5274 issue 5274]).
==== Backup/Restore ====
* Support backup fleecing as an advanced feature.
*: When creating a backup of a running VM, the backup process starts sending data blocks to the backup target in the background.
*: When the guest attempts to write new data to a block that is not yet backed up, the old data needs to be backed up before the guest write can proceed. This delays the guest write.
*: If the backup target storage is slow, for example because it is a Proxmox Backup Server with a slow network connection, this delay can negatively impact VM performance.
*: With fleecing enabled, the old data can be buffered on a fast local "fleecing storage" instead, allowing the guest write operations to proceed with a reduced delay.
*: Backups with fleecing can have lower impact on VM performance, but temporarily take up additional space on the fleecing storage.
*: Fleecing can be configured for a datacenter-wide backup job in the GUI, and be used through the CLI and API.
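As a hedged example for a manual run (VM ID and storage names are placeholders), fleecing can be enabled through the <code>fleecing</code> property of <code>vzdump</code>:

```shell
# Back up VM 100 to a remote PBS storage, buffering overwritten
# blocks on fast local storage while the backup runs.
vzdump 100 --storage pbs-remote --fleecing enabled=1,storage=local-zfs
```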
* Replace confusing output of <code>null</code> value when successfully removing a backup snapshot on a Proxmox Backup Server storage with a line that indicates success ([https://bugzilla.proxmox.com/show_bug.cgi?id=4971 issue 4971]).
* Show the full error context output in operations on a Proxmox Backup Server storage.
* Fix an issue where the connection to a Proxmox Backup Server presenting a certificate signed by a CA not trusted by the Proxmox VE node would fail, even if a fingerprint is provided ([https://bugzilla.proxmox.com/show_bug.cgi?id=5248 issue 5248]).
* Advanced options for backup jobs, such as performance settings or bandwidth limits, can now be specified in the GUI ([https://bugzilla.proxmox.com/show_bug.cgi?id=4513 issue 4513]). Previously, these settings had to be set via API or CLI.
* Fallbacks for performance-related backup settings are now computed per setting. Previously, a fallback was computed for the <code>performance</code> setting as a whole.
* Improve logging of failed hook scripts during backup.
* Switch to modern <code>ntfs3g</code> driver for the single file-restore image, since it supports more features found in the filesystems of current Windows guests ([https://bugzilla.proxmox.com/show_bug.cgi?id=5259 issue 5259]).
==== Storage ====
* Replication jobs now fail early with an informative error in case source and target volumes both exist, but at least one of them has no snapshots.
* Fix an issue where a snapshot created by vzdump could not be deleted because it was wrongly assumed to be a replication snapshot.
* Clarify the error message if a <code>qemu-img</code> invocation fails.
* Fix an issue where insecure migration or replication would fail if it had to wait for a lock.
* Support migration of VM templates with disks on LVM-thin storage ([https://bugzilla.proxmox.com/show_bug.cgi?id=1611 issue 1611]).
==== Ceph ====
* Fix an issue where the <code>ceph-crash</code> daemon could not post crash reports, and logged recurring warnings to the journal ([https://bugzilla.proxmox.com/show_bug.cgi?id=4759 issue 4759]).
*: Since <code>ceph-crash</code> does not run as root, it is not allowed to read the Ceph admin keyring.
*: The issue is resolved by creating a <code>client.crash</code> keyring with appropriate rights and permissions both for new and existing Ceph clusters.
*: The new keyring is created automatically on upgrade, if necessary.
* Integrate patches that make it possible to manually install and use the upstream Ceph dashboard.
*: Due to upstream changes, Proxmox VE 8.0 and 8.1 were incompatible with the Ceph dashboard.
* Fix an issue where open TCP connections to VMs could freeze or timeout while upgrading Ceph if the firewall is active ([https://bugzilla.proxmox.com/show_bug.cgi?id=5213 issue 5213]).
* Integrate upstream patch that ensures RocksDB is built with compile-time optimizations for increased performance.
* Fix permissions on subdirectories of <code>/var/lib/ceph</code> to avoid errors when posting crash reports.
* Fix an issue where usage was reported as "NaN %" if no OSDs were created yet ([https://bugzilla.proxmox.com/show_bug.cgi?id=5293 issue 5293]).
* When expanding the Ceph status in the GUI, wrap the title to make sure it is completely visible ([https://bugzilla.proxmox.com/show_bug.cgi?id=5106 issue 5106]).
* Avoid edge case in which running <code>pveceph purge</code> would remove too many files.
* Fix an issue where a monitor could not be deleted due to an incorrect existence check ([https://bugzilla.proxmox.com/show_bug.cgi?id=5198 issue 5198]).
==== Access control ====
* Keep ACL entries in the <code>user.cfg</code> file sorted, to ease tracking changes with tools like <code>ansible</code> or <code>etckeeper</code> ([https://bugzilla.proxmox.com/show_bug.cgi?id=5335 issue 5335]).
* Require non-root users to enter their current password on password change.
*: This is to hedge against a scenario where an attacker has local or even physical access to a computer where a user is logged in.
* Add dedicated <code>Sys.AccessNetwork</code> privilege to guard API calls that could be abused to gather information about the internal network, for example downloading ISOs ([https://bugzilla.proxmox.com/show_bug.cgi?id=5254 issue 5254]).
*: This new privilege can be granted to allow users to download ISOs without requiring the more powerful <code>Sys.Modify</code> privilege.
* Restrict the values of legacy user-keys to the ones that were allowed in the deprecated initial TFA system.
* Align the validation regular expression for the ACR value in OpenID Connect realms with the settings in Proxmox Backup Server (see related [https://bugzilla.proxmox.com/show_bug.cgi?id=5190 issue 5190]).
* Restrict the attributes and their values collected during LDAP sync to avoid issues with unexpected LDAP entries.
* Add liberal length restrictions on user attributes (names, email addresses), to avoid bloating the <code>user.cfg</code>, which would needlessly restrict the number of ACL entries that can be stored.
==== Firewall & Software-Defined Networking ====
* Firewall based on <code>nftables</code> as an optional technology preview.
*: Introduce a new modern Proxmox VE firewall implementation written in Rust using <code>nftables</code>.
*: After installing and enabling the new implementation, firewall rules are enforced using <code>nftables</code> instead of <code>iptables</code>.
*: Moving to <code>nftables</code> will make firewall bridges obsolete, which reduces complexity and allows fixing long-standing issues with the old implementation.
*: The new implementation is very close to feature parity with the old implementation. Some minor differences, such as the handling of <code>REJECT</code> rules, still need to be addressed.
*: The <code>nftables</code> firewall is provided as a technology preview to gather feedback from the community.
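Assuming the package name from the release, trying out the preview looks roughly like this (the package must be installed manually, and the implementation is then enabled via the nftables flag in the host firewall options):

```shell
# Install the opt-in nftables-based firewall implementation (Rust).
apt install proxmox-firewall

# After enabling the nftables flag in the host firewall options,
# the generated ruleset can be inspected with:
nft list ruleset
```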
* IP allocation for VNets that have the DHCP plugin active and use both IPv4 and IPv6 is now atomic: if allocating an address for one family fails, a successful allocation for the other family is rolled back.
* Allow arbitrary names for bridges. Previously, bridges had to be named <code>vmbrN</code> to be recognized ([https://bugzilla.proxmox.com/show_bug.cgi?id=545 issue 545]).
* Fix guest startup when a zone has DHCP enabled, but no DHCP ranges configured for any of the subnets.
* Ensure that addresses for both IPv4 and IPv6 are requested in case a DHCP range was added for one family after a guest already received an allocation for the other.
* Add support for <code>ipv6 prefix-list</code> filtering in <code>frr</code>.
* Adapt the generated FRR config to work around an [https://github.com/FRRouting/frr/issues/14419 upstream issue] and update to FRR version 8.5.2.
* Use the correct ordering and add support for IPv6 routes in EVPN controller.
* Use the proxy configured for the datacenter in SDN for external API calls.
* Improve interaction with the external IPAM plugins phpIPAM and NetBox.
* Remove deprecated and now forbidden <code>priority</code> field from change request to PowerDNS ([https://bugzilla.proxmox.com/show_bug.cgi?id=5275 issue 5275]).
* Correctly detect array values, such as DHCP ranges, as changed for validation.
* Handle VNets that are pending, but not yet committed, when creating the list of all VNets.
* Fix an issue that triggered Perl's Taint Mode when an API daemon attempted to delete DHCP config files.
* Improve the testing framework for the SDN stack, by refactoring the code to enable mocking interactions with the system.
* Fix an issue where firewall rules using IPs/CIDRs could not be edited ([https://bugzilla.proxmox.com/show_bug.cgi?id=4963 issue 4963]) and changes would not be detected properly.
* Fix an issue where editing an EVPN zone would not accept an empty primary exit node.
* Provide explicit type information for the <code>pending</code> field in the API returns.
* Derive the BGP router-id from the interface MAC-Address in IPv6 underlay networks ([https://bugzilla.proxmox.com/show_bug.cgi?id=5364 issue 5364]).
* Fix the syntax for IPv6 route map entries in the generated FRR config ([https://bugzilla.proxmox.com/show_bug.cgi?id=5361 issue 5361]).
* Verify ISIS Network ID for syntactical correctness ([https://bugzilla.proxmox.com/show_bug.cgi?id=5344 issue 5344]).
* Add support for parsing and preserving BGP <code>community-list</code> entries in the FRR config ([https://bugzilla.proxmox.com/show_bug.cgi?id=5319 issue 5319]).
* Set the configured MTU of a network as DHCP option.
* Keep aliases in <code>cluster.fw</code> file sorted, to ease tracking changes with tools like <code>ansible</code> or <code>etckeeper</code> ([https://bugzilla.proxmox.com/show_bug.cgi?id=5335 issue 5335]).
* Fix the interaction between <code>ifupdown2</code> and packages which install hooks for network configuration ([https://bugzilla.proxmox.com/show_bug.cgi?id=5009 issue 5009]).
*: This fixes an issue where hosts could hang during network setup after a reboot if both <code>ntpsec</code> and <code>ntpsec-ntpdate</code> are installed.
==== Improved management of Proxmox VE nodes ====
* A new meta package <code>proxmox-secure-boot-support</code> pulls in all the needed packages to enable secure-booting Proxmox systems with the default keys. This makes it easier to switch an existing system to Secure Boot.
* The UEFI firmware update executable <code>fwupd-amd64-signed</code> is now signed with the Proxmox secure-boot keys, making it possible to use <code>fwupd</code> in Proxmox systems with secure-boot enabled.
* Repackage the <code>r8125-dkms</code> driver needed for 2.5G Realtek NICs, which is not merged into the upstream kernel, to add compatibility with the new 6.8 kernel series.
* Wake on LAN now allows configuring the bind interface and broadcast address used for sending the magic packet ([https://bugzilla.proxmox.com/show_bug.cgi?id=5255 issue 5255]).
* Automatically rewrite the LVM config to filter out RBD disks, if possible. Previously, LVM on the host would recognize LVM Physical Volumes located on VM disks backed by KRBD.
* Order the ZFS pool specific import units created by Proxmox VE for new pools before the general <code>zfs-import-cache</code> and <code>zfs-import-scan</code> units, preventing harmless but misleading errors in the journal while booting ([https://bugzilla.proxmox.com/show_bug.cgi?id=4835 issue 4835]).
* Fix a network configuration issue where creating a new VLAN on a bridge would overwrite the <code>vlan-id</code> of an existing VLAN ([https://bugzilla.proxmox.com/show_bug.cgi?id=5141 issue 5141]).
* Clarify the description of the internal <code>pvesubscription set-offline-key</code> command.
* Avoid misleading error message in <code>pvesubscription</code> in case the offline key activation was successful.
* Avoid logging warnings on EFI systems without Secure Boot support.
* Increase the maximum size before rotating the <code>pveam</code> logs from 50 KiB to 256 KiB, and warn if rotation fails.
* Support for adding custom ACME-enabled CAs with optional authentication through External Account Binding (EAB) is now also present in the GUI ([https://bugzilla.proxmox.com/show_bug.cgi?id=4497 issue 4497], [https://bugzilla.proxmox.com/show_bug.cgi?id=5093 issue 5093]).
* Avoid spurious warnings about uninitialized values in various situations.
* When a service restart fails while upgrading <code>pve-manager</code> or <code>pve-ha-manager</code>, avoid failing the upgrade.
* Improved <code>pvereport</code> to provide a better status overview:
** Add packet counter to <code>iptables-save</code> output and format it as a table.
** Add the current kernel command line.
** List currently defined jobs.
** Include more details about recognized APT sources and packages that are held back.
** Extend the list of reported packages with recently added library packages.
* Improvements to Proxmox Offline Mirror:
** Consider the subscription status when configuring mirrors for the <code>ceph</code> repositories provided by Proxmox.
** Improve the UX of <code>proxmox-offline-mirror-helper</code> when multiple subscription keys are available at the chosen mountpoint.
** Add dark mode to the documentation.
** Fix a wrong configuration setting for allowing weak RSA cryptographic parameters.
** Improve path handling with command line arguments.
** Support repositories that do not provide a <code>Priority</code> field ([https://bugzilla.proxmox.com/show_bug.cgi?id=5249 issue 5249]).
==== Installation ISO ====
* Support for automated and unattended installation of Proxmox VE.
*: Introduce the <code>proxmox-auto-install-assistant</code> tool that prepares an ISO for automated installation.
*: The automated installation ISO reads all required settings from an answer file in TOML format.
*: One option to provide the answer file is to directly add it to the ISO. Alternatively, the installer can retrieve it from a specifically-labeled partition or via HTTPS from a specific URL.
*: If the answer file is retrieved via HTTPS, URL and fingerprint can be directly added to the ISO, or obtained via DHCP or DNS.
*: See the [https://pve.proxmox.com/wiki/Automated_Installation wiki page on Automated Installation] for more details.
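A sketch of the workflow, with placeholder values throughout (the answer-file key names follow the released TOML schema, but verify them against the documentation linked above):

```shell
# Write a minimal answer file (all values below are examples).
cat > answer.toml <<'EOF'
[global]
keyboard = "en-us"
country = "us"
fqdn = "pve-auto.example.com"
mailto = "admin@example.com"
timezone = "Europe/Vienna"
root_password = "change-me"

[network]
source = "from-dhcp"

[disk-setup]
filesystem = "zfs"
zfs.raid = "raid1"
disk_list = ["sda", "sdb"]
EOF

# Embed the answer file directly into a prepared installation ISO.
proxmox-auto-install-assistant prepare-iso proxmox-ve_8.2-1.iso \
    --fetch-from iso --answer-file answer.toml
```

Booting the prepared ISO then installs Proxmox VE without further interaction.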
* memtest86+ got updated to version 7.0.
* Fix indefinite hangs during country detection ([https://bugzilla.proxmox.com/show_bug.cgi?id=4872 issue 4872]).
* The parsing of hostname options was fixed to catch invalid entries ([https://bugzilla.proxmox.com/show_bug.cgi?id=5230 issue 5230]).
* Fix an issue where setting ZFS compression to <code>off</code> did not have any effect, due to a change in upstream defaults.
* Deprecated checksum options were removed from the advanced options for ZFS pools.
* Improve the layout of widgets in the GTK-based installer for consistent margins and alignment.
==== Notable changes ====
* Kernel 6.8 is reported to fix a long-standing issue where VMs running on hosts with multiple NUMA nodes could temporarily freeze for short durations. The issue was more likely to occur if KSM and/or NUMA balancing was active. See the corresponding [https://forum.proxmox.com/threads/130727/page-10#post-651280 forum thread] for more information.
* Previously, the LXC power management endpoints under <code>/api2/json/nodes/{node}/lxc/{vmid}/status/</code> acquired the config lock synchronously in the API handler, meaning that a successful HTTP response implied that the task worker was able to acquire the lock. Now, they acquire the config lock asynchronously in the task worker, meaning that the task worker may still fail to acquire the lock despite the successful HTTP response. This change may require adjustment of tools that use the API directly and relied on the previous behavior.
* When creating a backup and specifying custom <code>performance</code> settings, fallbacks for unspecified settings are now computed per performance setting. Previously, a fallback was computed for the <code>performance</code> setting as a whole. In a few edge cases, this may change the effective performance settings for backups. See the [https://git.proxmox.com/?p=pve-manager.git;a=commit;h=097fe04599d69c1dd8a5b48ea4e1e055479e05b2 commit for details].
<div id="8.2-known-issues"></div>
=== Known Issues & Breaking Changes ===
==== Kernel 6.8 ====
The Proxmox VE 8.2 releases install and use the 6.8 Linux kernel by default. A major kernel change can have a few hardware-specific side effects.
You can avoid installing the 6.8 kernel by pinning the <code>proxmox-default-kernel</code> package version before the upgrade. The last version to depend on kernel 6.5 is <code>1.0.1</code>.
To pin the package to that version, create the file <code>/etc/apt/preferences.d/proxmox-default-kernel</code> with the following content. This will keep <code>proxmox-default-kernel</code> at the old version until the file is deleted and a new upgrade is initiated:
Package: proxmox-default-kernel
Pin: version 1.0.1
Pin-Priority: 1000
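Whether the pin is active can be verified with <code>apt-cache policy proxmox-default-kernel</code>, which should then report version <code>1.0.1</code> as the install candidate.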
==== Kernel: Change in Network Interface Names ====
Upgrading kernels always carries the risk of network interface names changing, which can lead to invalid network configurations after a reboot.
In this case, you must either update the network configuration to reflect the name changes, or pin the network interface to its name beforehand.
See [https://pve.proxmox.com/pve-docs/pve-admin-guide.html#network_override_device_names the reference documentation] on how to pin the interface names based on MAC Addresses.
Currently, the following models are known to be affected at higher rates:
* Models using <code>i40e</code>. Their names can get an additional port suffix like <code>p0</code> added.
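As a sketch, an interface name can be pinned with a <code>systemd.link(5)</code> file like the following (the file name, MAC address, and chosen interface name are placeholders; see the reference documentation linked above for details):

 # /etc/systemd/network/10-enwan0.link
 [Match]
 MACAddress=aa:bb:cc:dd:ee:ff
 
 [Link]
 Name=enwan0

Afterwards, it is advisable to rebuild the initramfs with <code>update-initramfs -u -k all</code> so that the naming is also applied during early boot.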
==== Kernel: DKMS ====
Since the default kernel changed to 6.8, it may happen that installed DKMS modules no longer build. For instance, as of April 2024 this is the case for the latest NVIDIA GRID driver 17.1.
Aside from simply removing and not using those modules, one possible workaround is to pin the <code>proxmox-default-kernel</code> package to an older version before the upgrade. The last version to depend on kernel 6.5 is <code>1.0.1</code>.
==== Kernel: Split Lock Detection Slowing Down VMs ====
If the host CPU supports split lock detection and a VM is performing misaligned memory accesses, the host kernel might artificially slow down the VM as of kernel version 5.19 or later.
There's now a [https://pve.proxmox.com/wiki/Split_lock_detection wiki article on split lock detection] with more information.
==== Old Ceph Crash Reports ====
The Ceph crash service monitors other Ceph daemons for crashes and reports them as a health warning. This service did not work properly with Proxmox VE's Ceph server integration due to some internal Ceph restructuring.
This has been fixed with the Ceph packages uploaded for the Proxmox VE 8.2 release, but this also means that the service will now catch up on reporting older crashes, even if they happened a few weeks or months ago.
If you see a Ceph health warning due to services being reported as crashed, please also check the date of the crash to avoid mistaking it for a recent crash caused by the upgrade.
==== Kernel: <code>intel_iommu</code> now defaults to on ====
The <code>intel_iommu</code> parameter defaults to <code>on</code> in the kernel 6.8 series. Enabling the IOMMU can cause problems on older hardware, or on systems with an outdated BIOS, due to bugs in the BIOS.
The issue can be fixed by explicitly disabling <code>intel_iommu</code> on the kernel command line (<code>intel_iommu=off</code>) [https://pve.proxmox.com/pve-docs/chapter-sysadmin.html#sysboot_edit_kernel_cmdline following the reference documentation].
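As a sketch: on systems booting via GRUB, this means editing <code>/etc/default/grub</code> and running <code>update-grub</code> afterwards; systems using <code>systemd-boot</code> edit <code>/etc/kernel/cmdline</code> and run <code>proxmox-boot-tool refresh</code> instead. For the GRUB case (the other options shown are only examples):

 GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=off"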
==== Kernel: Broadcom Infiniband driver issue ====
The <code>bnxt_re</code> module causes issues with some Broadcom NICs that have their InfiniBand functionality enabled.
As InfiniBand is not used in most deployments, simply preventing the module from loading mitigates the issue. Create a file <code>/etc/modprobe.d/bnxt-re-blacklist.conf</code> containing:
blacklist bnxt_re
Afterwards, make sure to update the initramfs with <code>update-initramfs -k all -u</code>.
Alternatively, you can install Broadcom's <code>niccli</code> utility and the corresponding DKMS module to disable the InfiniBand functionality of the NIC permanently. See the [https://forum.proxmox.com/threads/opt-in-linux-6-8-kernel-for-proxmox-ve-8-available-on-test-no-subscription.144557/post-652507 relevant post in our community forum].
== Proxmox VE 8.1 ==
Released 23 November 2023: See [[Downloads]] (updated ISO release 2 with current package set, including updated kernel and ZFS 2.2.2, on 7 February 2024.)
* Based on Debian Bookworm (12.2)
* Latest 6.5 Kernel as new stable default
* QEMU 8.1.2 (8.1.2 with ISO refresh)
* LXC 5.0.2
* ZFS 2.2.0 with stable fixes backported (2.2.2 with ISO refresh)
* Ceph Reef 18.2.0
* Ceph Quincy 17.2.7
=== Highlights ===
* Secure Boot support.
*: Proxmox VE now includes a signed shim bootloader trusted by most hardware's UEFI implementations. All necessary components of the boot chain are available in variants signed by Proxmox.
*: The Proxmox VE installer can now be run in environments where Secure Boot is required and enabled, and the resulting installation can boot in such environments.
*: Existing Proxmox VE installations can be switched over to Secure Boot without reinstallation by executing some manual steps, see the [https://pve.proxmox.com/pve-docs/chapter-sysadmin.html#sysboot_secure_boot documentation] for details.
*: How to use custom secure boot keys has been documented in the [[Secure Boot Setup]] wiki. For using DKMS modules with secure boot see the [https://pve.proxmox.com/pve-docs/chapter-sysadmin.html#sysboot_secure_boot reference documentation].
* The core of Proxmox VE's Software-Defined Network stack moved from experimental to supported and includes new features.
*: Proxmox VE SDN allows fine-grained control of virtual guest networks at the datacenter level.
*: The new automatic DHCP IP address management (IPAM) plugin can be used to transparently assign IPs to virtual guests in Simple zones. (tech-preview)
*: The web UI now allows inspecting and editing DHCP leases managed by the built-in IPAM plugin.
* New flexible notification system.
*:Send notifications not only via the local Postfix MTA, but also via authenticated SMTP or to [https://gotify.net/ Gotify] instances.
*:Flexible notification routing with matcher-based rules to decide which targets receive notifications about which events.
* Proxmox Server Solution GmbH, the company behind Proxmox VE development and infrastructure, was assigned an official Organizationally Unique Identifier (OUI) <code>BC:24:11</code> from the IEEE to use as default MAC prefix for virtual guests.
*: This OUI can be used for virtual guests inside private networks by all users and is set as new default MAC-Address prefix in the datacenter options.
* Ceph Reef is now supported and the default for new installations.
*: Reworked defaults bring improved performance and increased reading speed out of the box, with less tuning required.
* Seamless upgrade from Proxmox VE 7.4, see [[Upgrade from 7 to 8]].
=== Changelog Overview ===
==== Enhancements in the web interface (GUI) ====
* Improvements to bulk actions:
** Add a new "Bulk Suspend" action to suspend a selection of guests in one action.
** Add a new section above the guest list for configuring guest filters and add a button for clearing filters.
** Allow to filter guests by their tags.
** Reorder fields and drop the obvious warning about local storage to improve screen-space utilization.
** Reword the message displayed for bulk actions in the task log to "Bulk Start/Stop/Migrate". The message shown previously was "Start/Stop/Migrate all" and could be misleading in case not all guests were affected ([https://bugzilla.proxmox.com/show_bug.cgi?id=2336 issue 2336]).
** The "Bulk Migrate" action is now hidden on standalone nodes, as there is no valid migration target in that case.
* Improvements to the node summary panel:
** The summary now indicates whether the node was booted in legacy (BIOS) mode, EFI mode, or EFI mode with Secure Boot enabled.
** The currently running kernel is now reported more compactly by indicating only the version and the build date.
* Allow to automatically decompress an ISO file when downloading from a URL to a storage. The decompression algorithm can be set in the GUI ([https://bugzilla.proxmox.com/show_bug.cgi?id=4849 issue 4849]).
* Allow moving VMs and containers from one pool to a different pool in one operation.
* Avoid needlessly reloading the GUI after ordering a certificate via ACME for a different cluster node.
* The permission editor now also shows the ACL paths for notifications and PCI/USB mappings.
* The resource tree now displays the usage in percent when hovering over a storage.
* If the configured tree shape for tags is not "Full", the resource tree now displays a tooltip with the tag name when hovering over the configured shape.
* Ensure the SPICE config is downloaded with the correct file extension on Safari to avoid problems with macOS application association ([https://bugzilla.proxmox.com/show_bug.cgi?id=4947 issue 4947])
* Fix an issue where the "Migrate" button stayed disabled even if selecting a valid target node.
* Fix a bug where the backup job editor window would add an invalid entry for VMID 0 when using selection mode "all".
* Improve error message when creating a VM with an invalid hostname: Clarify that a valid hostname, not a valid DNS name, is expected ([https://bugzilla.proxmox.com/show_bug.cgi?id=4874 issue 4874]).
* When uploading a subscription, ignore surrounding whitespace in the subscription key to avoid confusing errors.
* Improve the focus handling when editing tags to allow tabbing through editable tag fields.
* Allow adding tags directly when creating VMs and containers.
* Increase height of the VM and container creation wizard to obtain a 4:3 ratio.
* When creating an IP/CIDR inside an IPSet, the GUI now requires that an IP/CIDR is provided. Previously, the GUI accepted an empty field, but the API threw an error.
* Update external links to proxmox.com that changed during the website redesign.
* Fix an issue where the OK button would stay disabled when editing an ACME DNS challenge plugin ([https://bugzilla.proxmox.com/show_bug.cgi?id=4531 issue 4531]).
* Fix an issue where clicking "Reset" in the zpool creation window could cause an error when re-opening the window ([https://bugzilla.proxmox.com/show_bug.cgi?id=4951 issue 4951]).
* Fix an issue where users could write notes with links containing JavaScript code. This JavaScript code would be executed when a (different) user clicked on such a link.
* HTML-encode API results before rendering as additional hardening against XSS.
* Improved translations, among others:
** Croatian (NEW!)
** Georgian (NEW!)
** Arabic
** Catalan
** German
** Italian
** Polish
** Simplified Chinese
** Traditional Chinese
** Ukrainian
** Several remaining occurrences of the <code>GiB</code> unit in the GUI can now be translated ([https://bugzilla.proxmox.com/show_bug.cgi?id=4551 issue 4551]).
<div id="8.1-virtual-machines"></div>
==== Virtual machines (KVM/QEMU) ====
* New QEMU version 8.1.
*: See the [https://wiki.qemu.org/ChangeLog/8.1 upstream changelog] for details.
* Add clipboard support to the VNC console in the GUI. For now the feature cannot be enabled in the GUI and has to be manually enabled via API or CLI. After installing the [https://www.spice-space.org/ SPICE guest tools], text can be copied from/to the guest clipboard using the noVNC clipboard button.
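*: For example, assuming the VM's current display type should stay <code>qxl</code> (the VMID is a placeholder): <code>qm set 100 --vga qxl,clipboard=vnc</code>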
* When creating a Windows VM, allow adding a second CD/DVD drive directly in the creation wizard.
*: This makes it easier to add the Windows VirtIO drivers ISO before starting the VM.
* Remove the 10-minute timeout for allocating VM disks when restoring from backup, as this timeout may be exceeded if disks are large or network storage is involved ([https://bugzilla.proxmox.com/show_bug.cgi?id=2817 issue 2817]).
* Log a warning when starting a VM with a deprecated machine version.
* Fix issues where shutdown and reboot commands would time out (instead of failing immediately) on ACPI-suspended VMs.
* Enabling or disabling CPU hotplug for a running VM did not work reliably and is not allowed anymore.
* Avoid leaving potentially large amounts of memory assigned to the QEMU process after backup.
* Fix an issue where heavy network traffic or connection issues during a backup to Proxmox Backup Server could cause an unsolicited write to the first sector of a backed-up SATA disk, which usually contains the boot-sector ([https://bugzilla.proxmox.com/show_bug.cgi?id=2874 issue 2874]).
* Fix an issue where a race condition could cause a VM crash during backup if <code>iothread</code> is enabled.
* Fix an issue where each pause and resume operation (for example when taking a snapshot) would increase the number of open file descriptors of the QEMU process, which could eventually lead to crashes.
* Fix an issue where starting a VM with machine type <code>q35</code> and multiple IDE drives would fail.
* cloud-init: Fix issues where non-root users could not regenerate the cloud-init drive or set the <code>ciupgrade</code> option.
* Start VMs using PCI passthrough with a higher timeout that is calculated from the configured memory. Previously, the timeout was reported to be too short when using PCI passthrough.
* Fix an issue where qmeventd failed to retrieve VMIDs from processes on hybrid cgroup systems and logged errors to the journal.
* Fix an issue where remote migration would fail for certain combinations of source/target storage, for example from qcow2 on directory to LVM-thin.
* Fix an issue where backup of a VM template with a TPM would fail ([https://bugzilla.proxmox.com/show_bug.cgi?id=3963 issue 3963]).
* Fix an issue where the VNC proxy would fail if the <code>LC_PVE_TICKET</code> was not set ([https://bugzilla.proxmox.com/show_bug.cgi?id=4522 issue 4522]).
* Backports of several upstream kernel patches:
** Fix an issue where VMs with a restricted CPU type could get stuck after live-migration from a host with kernel 5.15 to a host with kernel 6.2.
** Fix an issue where VMs could get stuck after several days of uptime if KSM, ballooning, or both, were enabled.
** The <code>FLUSHBYASID</code> flag is now exposed to nested VMs when running on an AMD CPU. This fixes an issue where some hypervisors running in a VM would fail to start nested VMs.
** Fix an issue with recovering potential NX huge pages that resulted in a warning logged to the journal ([https://bugzilla.proxmox.com/show_bug.cgi?id=4833 issue 4833]).
** Fix an issue where only one NVMe device would be recognized even though multiple are present ([https://bugzilla.proxmox.com/show_bug.cgi?id=4770 issue 4770]).
==== Containers (LXC) ====
* Support device passthrough for containers. The new <code>dev0</code>/<code>dev1</code>/... options take the path of a host device. Optionally, owner and permission settings for the device node inside the container can be given. For now, the options cannot be set in the GUI and have to be set manually via API or CLI.
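*: For example, passing a USB serial adapter into a container (the VMID, device path, and ownership values are placeholders): <code>pct set 101 --dev0 path=/dev/ttyUSB0,uid=100,gid=100,mode=0660</code>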
* Allow specifying multiple SSH keys in the container creation wizard ([https://bugzilla.proxmox.com/show_bug.cgi?id=4758 issue 4758]).
* Show privileged status as a separate row in the guest status view in the GUI.
* Show distribution logo and name in the guest status view in the GUI.
* Fix an issue where network would fail to come up for Fedora containers.
* Add an API endpoint <code>/nodes/{node}/lxc/{vmid}/interfaces</code> for querying network interfaces of a running container.
* Improve architecture detection for NixOS containers, which would previously produce a warning and default to <code>x86_64</code> in case <code>/bin/sh</code> did not (yet) exist in the container.
* The <code>pct status</code> command does not report guest CPU usage anymore, as there is currently no fast way to measure it ([https://bugzilla.proxmox.com/show_bug.cgi?id=4765 issue 4765]).
* Restoring a container from a PBS backup now honors the <code>ignore-unpack-errors</code> flag ([https://bugzilla.proxmox.com/show_bug.cgi?id=3460 issue 3460]).
* Fix an issue where Fedora containers would not have a container-getty on first boot.
==== General improvements for virtual guests ====
* Show progress of offline disk migration in the migration task log by use of <code>dd</code>'s <code>status=progress</code> argument ([https://bugzilla.proxmox.com/show_bug.cgi?id=3004 issue 3004]).
* Proxmox VE now has an officially assigned [https://standards.ieee.org/products-programs/regauth/ OUI from the IEEE] <code>BC:24:11</code> to be used as product-specific MAC prefix. This is now used by default instead of assigning purely random MACs. ([https://bugzilla.proxmox.com/show_bug.cgi?id=4764 issue 4764]).
==== HA Manager ====
* Notification for HA events, like fencing, are now configurable via the new modular notification system.
* An issue with the target selection during service recovery, where a fenced node was selected as target, was fixed ([https://bugzilla.proxmox.com/show_bug.cgi?id=4984 issue 4984]).
==== Improved management for Proxmox VE clusters ====
* New flexible notification system.
*:Allows sending notifications to different targets. The local Postfix MTA, previously the sole notification option, is now one of several target types available.
*:Two new target types include: <code>smtp</code> allowing direct notification emails via authenticated SMTP, and <code>gotify</code>, which sends notifications to a [https://gotify.net/ Gotify] instance.
*:Flexible notification routing is possible through matcher-based rules that determine which targets receive notifications for specific events.
*:Match rules can select events based on their severity, time of occurrence, or event-specific metadata fields (such as the event type). Multiple rules can be combined to implement more complex routing scenarios.
*:Email notifications now contain an <code>Auto-Submitted</code> header to avoid triggering automated replies ([https://bugzilla.proxmox.com/show_bug.cgi?id=4162 issue 4162])
* Name resolution to find an IP for a node's name now looks at all IPs associated with the name, only excluding loopback addresses. Additionally, a better warning is logged in case of a failed lookup.
* <code>pvecm updatecerts</code>, which is used to ensure a consistent state of the certificates in a cluster, was reworked for increased robustness:
*: Files not being stored in the cluster filesystem are now created earlier.                       
*: The call now waits for the node to join the quorate partition of the cluster instead of failing. This is especially helpful during node-bootup, when running before starting <code>pveproxy.service</code>.
*: The error message in case the call fails due to missing quorum was reworded for better clarity.
* The MAC addresses of the guests under SDN are now cached in the cluster filesystem for improved lookup speed in a cluster.
==== Backup/Restore ====
* Backups and backup jobs can now be configured with a notification mode for a smooth migration to the new notification system.
*: The <code>legacy-sendmail</code> mode replicates the previous behavior of sending an email via the local Postfix MTA if an email is configured.
*: The <code>notification-system</code> mode sends notifications exclusively using the new notification system.
*: The default mode <code>auto</code> behaves like <code>legacy-sendmail</code> if an email address is configured, and like <code>notification-system</code> if no email address is configured.
*: Thus, existing backup jobs without a configured email address will default to sending notification emails to the root user after the upgrade to Proxmox VE 8.1.
* Allow setting the <code>pbs-entries-max</code> parameter. Setting it to a value higher than the default can help to prevent container backups from failing when a directory contains a huge number of files ([https://bugzilla.proxmox.com/show_bug.cgi?id=3069 issue 3069]).
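*: A sketch of a node-wide override in <code>/etc/vzdump.conf</code> (the value is only an illustration): <code>performance: pbs-entries-max=2097152</code>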
* Improvements to the <code>vma</code> CLI tool that handles VMA backup files:
** The <code>vma extract</code> command now optionally takes a filter to only extract specific disks from the backup ([https://bugzilla.proxmox.com/show_bug.cgi?id=1534 issue 1534]).
** Fix an issue where the <code>vma create</code> command could not write to tmpfs ([https://bugzilla.proxmox.com/show_bug.cgi?id=4710 issue 4710]).
* Improvements to file restore:
** Fix an issue where the settings for ZFS ARC minimum and maximum were not properly set for the temporary file-restore VM.
** Fix an issue where debug log messages were not printed even though the <code>PBS_QEMU_DEBUG</code> environment variable was set.
* Fix an issue with backups of diskless VMs to Proxmox Backup Server: Even though encryption was enabled, such backups would not be encrypted. Since the backup contained no disks, this did not reveal any VM data, but the VM configuration was stored in plaintext ([https://bugzilla.proxmox.com/show_bug.cgi?id=4822 issue 4822]).
* File restore now allows downloading <code>.tar.zst</code> archives as an alternative to <code>.zip</code> archives.
* Improved handling of backups with master key:
** Abort the backup if the running QEMU binary does not support master keys, instead of just printing a warning. Master keys are supported in QEMU builds of Proxmox VE since version 6.4.
** If no encryption key is configured, the backup task will explicitly warn that the backup will be unencrypted.
** The backup log now prints only one message that encryption is enabled, instead of previously two messages.
* Allow to configure whether restore should overwrite existing symlinks or hard links, when directly invoking <code>proxmox-backup-client restore</code> ([https://bugzilla.proxmox.com/show_bug.cgi?id=4761 issue 4761])
==== Storage ====
* Improvements to the <code>iSCSI</code> storage backend:
** Try to log into all discovered portals for a target, instead of just the single portal initially configured for the storage. This way, the storage can now become available in a multipath setup if at least one portal is online, even if the single configured portal is offline.
** The backend is now usable immediately after installing Open-iSCSI. Previously, some services needed to be restarted first.
* Fix an issue where a replication job could not be run or deleted if it referred to a storage that does not exist anymore.
* SMB/CIFS: Fix connection check in case an empty domain is provided.
* The BTRFS plugin received a fix for creating base templates when falling back to the standard directory variant.
==== Ceph ====
* Support installing Ceph 18.2 Reef and make it the default release for new setups.
* Allow creating multiple OSDs per physical device via API and CLI, and display such setups properly in the GUI. Multiple OSDs on one device can be useful when using fast NVMe drives that would be bottle-necked by a single OSD service ([https://bugzilla.proxmox.com/show_bug.cgi?id=4631 issue 4631]).
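*: For example, creating two OSDs on a single fast NVMe drive (the device path is a placeholder): <code>pveceph osd create /dev/nvme0n1 --osds-per-device 2</code>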
* When creating a pool, read the default values for <code>size</code>/<code>min_size</code> from the Ceph configuration instead of using hard-coded default values <code>3/2</code> ([https://bugzilla.proxmox.com/show_bug.cgi?id=2515 issue 2515]).
*: There are use cases where different values for <code>size</code>/<code>min_size</code> make sense, for example <code>4/2</code> if a cluster spans two rooms.
* The <code>pveceph install</code> commands now asks the user to confirm the Ceph version to be installed ([https://bugzilla.proxmox.com/show_bug.cgi?id=4364 issue 4364]).
* Improve discoverability of Ceph warnings by providing a tabular view and a button to copy warning details.
* Report OSD memory usage more accurately by using the Proportional Set Size (PSS) of the OSD process. Previously, memory usage was read from the OSD service and thus included the page cache, leading to extremely high values shown in the GUI.
* Use <code>snake_case</code> when setting options in Ceph config files to ensure consistency within that file (issue [https://bugzilla.proxmox.com/show_bug.cgi?id=4808 4808]).
* Mark global <code>pg_bits</code> setting as deprecated and make it a no-op. The setting has been deprecated since Ceph 13.
* Improve reporting of cluster health:
** Replace "Error" category for PG states with "Warning" and "Critical" categories to allow more fine-grained assessment of the cluster state.
** Rename "Working" state to "Busy" state to better convey its meaning.
==== Access control ====
* Support nested pools up to a nesting depth of 3 levels for greater flexibility in structuring VMs and containers ([https://bugzilla.proxmox.com/show_bug.cgi?id=1148 issue 1148]).
*:Pool names can now contain at most two slashes (allowing to structure them as <code>parent/child/grandchild</code>).
*:Permissions are inherited along the path according to the usual inheritance rules.
* Improvements to LDAP/AD realms:
** When adding/updating an LDAP/AD realm, there is now the option to directly check if the bind works, instead of having to wait for the first sync. This check is enabled by default in the GUI and can be disabled in the advanced options if needed.
** Forbid specifying a Bind DN without a password in the GUI, which is already forbidden by the API.
** Expose the <code>mode</code> option in the GUI that allows switching between LDAP, LDAPS and LDAP via STARTTLS. This option was already supported by the backend and succeeds the <code>secure</code> option which allowed switching between LDAP and LDAPS only.
** Fix an issue with enforced TFA where certain <code>sync</code> settings would cause the TFA restriction to not be enforced.
** It is now possible to update only the password field for the bind-user of an LDAP realm; previously, this failed.
** Allow setting the <code>case-sensitive</code> option of AD realms, which was previously only editable via CLI, in the GUI.
* Existing-but-disabled TFA factors can no longer circumvent realm-mandated TFA.
* The list of SDN-related ACL paths now reflects all SDN objects, ensuring that there are no settings which remain <code>root@pam</code> only.
* A mistyped entry of <code>pools</code> instead of <code>pool</code> in the default set of ACL paths was fixed.
* Unlocking a user now also resets the TFA failure count.
==== Firewall & Software-Defined Networking ====
* The core of Proxmox VE's Software-Defined Network stack has been lifted from experimental to supported.
* New DHCP plugin for SDN (tech preview).
*: Enabling DHCP for a zone will start a DHCP server that can automatically assign IP addresses to associated virtual guests (VMs and containers).
*: Currently, only Simple zones are supported, and [https://thekelleys.org.uk/dnsmasq/doc.html dnsmasq] is the only supported DHCP server.
*: Each subnet of a Simple zone can now be configured with DHCP ranges.
*: When a virtual guest associated to the zone starts, the DHCP plugin queries the zone's IPAM for an IP address and offers it to the virtual guest.
*: If the built-in Proxmox VE IPAM is used, active DHCP leases can be viewed and edited conveniently on the web UI.
* [https://en.wikipedia.org/wiki/IS-IS IS-IS] was added as a further SDN controller, next to <code>EVPN</code> and <code>BGP</code>.
* The interfaces section of the <code>frr</code> configuration is now parsed in order to support multiple underlay networks (like <code>IS-IS</code>).
* MAC learning on SDN bridges can now be selectively disabled for individual plugins. This is implemented for the <code>EVPN</code> plugin.
* A warning is logged if the main network configuration (<code>/etc/network/interfaces</code>) does not source the SDN controlled configuration (<code>/etc/network/interfaces.d/sdn</code>), because the SDN configuration would be ignored in that case.
* The error reporting for problems with <code>vnet</code> generation was improved, by pointing to the relevant task log.
* The firewall log can now be also displayed for a specific timespan instead of showing the live-view ([https://bugzilla.proxmox.com/show_bug.cgi?id=4442 issue 4442]).
* Fix an issue where scoped alias resolution would fail with an error.
* Enabling VLAN-awareness for an EVPN zone is unsupported and now fails instead of just printing a warning ([https://bugzilla.proxmox.com/show_bug.cgi?id=4917 issue 4917]).
* Fix an issue where an empty subnet could not be deleted if it has a gateway defined.
* The IPAM selector, which is a required choice, is not hidden behind the Advanced checkbox in the UI anymore.
* The identifying <code>CIDR</code> for a <code>vnet</code> is now named <code>Subnet</code> to improve clarity.
* A <code>systemd.link(5)</code> configuration is now shipped to both keep bridges up even if no port is connected, and to prevent a random MAC address from being assigned to bridges or bond interfaces.
* <code>ethtool</code> is now a hard dependency of <code>ifupdown2</code>, matching the common need to disable offloading features of certain NICs.
* Prevent a crash in <code>ifupdown2</code> caused by an error in a third-party plugin in <code>/etc/network/ifup.d/</code>.
* The <code>accept_ra</code> and <code>autoconf</code> sysctl settings are now also applied for bridge interfaces.
* <code>ifupdown2</code> now correctly recognizes when remote IPs for <code>vxlan</code> are configured by external sources and does not remove them on reconfiguration.
==== Improved management of Proxmox VE nodes ====
* Secure Boot support.
*: Proxmox VE now ships a shim bootloader signed by a CA trusted by most hardware's UEFI implementation. In addition, it ships variants of the GRUB bootloader, MOK utilities and kernel images signed by Proxmox and trusted by the shim bootloader.
*: New installations support Secure Boot out of the box if it is enabled.
*: Existing installations can be adapted to Secure Boot by installing optional packages, and possibly reformatting and re-initializing the ESP(s), without the need for a complete reinstallation. See [https://pve.proxmox.com/wiki/Secure_Boot_Setup the wiki article for more details].
* The kernel shipped by Proxmox is shared for all products. This is now reflected in the renaming from <code>pve-kernel</code> and <code>pve-headers</code> to <code>proxmox-kernel</code> and <code>proxmox-headers</code> respectively in all relevant packages.
* The new <code>proxmox-default-kernel</code> and <code>proxmox-default-headers</code> meta-packages will depend on the currently recommended kernel-series.
* Avoid logging benign but confusing warnings about a segfault in <code>pverados</code>.
* Many edge-cases encountered during the upgrade from PVE 7.4 to 8 by our user-base are now detected and warned about in the improved <code>pve7to8</code> checks:
** Warn if [https://github.com/dell/dkms DKMS] modules are detected, as many of them do not upgrade smoothly to the newer kernel versions in PVE 8.
** Warn if the PVE 7 system does not have the correct <code>grub</code> meta-package installed, as having it ensures that the installed bootloader actually gets upgraded to the newest version.
** The check for old <code>cgroupv1</code> containers was adapted to not cause false positives on current containers (for example Fedora 38).
* Support for adding custom ACME-enabled CAs which require authentication through '''E'''xternal '''A'''ccount '''B'''inding (EAB) on the command line ([https://bugzilla.proxmox.com/show_bug.cgi?id=4497 issue 4497]).
* Using the Console/Shell on a PVE node is now possible for all users with the appropriate permission (<code>Sys.Console</code>). The restriction to the <code>pam</code> realm was removed. Users will still need to log in as a system user on the shell, though.
* Since the Proxmox repositories now support fetching changelogs directly, the changelogs for new package versions shown in the UI are all gathered with <code>apt changelog</code>.
* The <code>pvesh</code> debug tool now also supports yielding output for streaming API calls, like for example the syslog.
* The documentation on firmware updates provided by the operating system has been extended and revised, helping administrators to identify if their setup is optimal.
==== Installation ISO ====
* The ISO is able to run on Secure Boot enabled machines.
* The text-based UI received significant improvements based on the feedback from its first release in PVE 8.0.
* The current link-state of each network interface is now displayed in the network configuration view, helping in identifying the correct NIC for the management interface ([https://bugzilla.proxmox.com/show_bug.cgi?id=4869 issue 4869]).
* If provided by the DHCP server, the hostname field is already filled out with the information from the lease.
* The <code>arc_max</code> parameter for installations on ZFS can now be set in the Advanced Options. If not explicitly set by the user, it is set to a value targeting 10% of system memory instead of 50%, which is a better fit for a virtualization workload ([https://bugzilla.proxmox.com/show_bug.cgi?id=4829 issue 4829]).
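On an already-installed system, a comparable ARC limit can be set through a ZFS module parameter; a minimal sketch, where the 8 GiB value is only an example (aim for roughly 10% of system memory):

```
# /etc/modprobe.d/zfs.conf -- example: cap the ZFS ARC at 8 GiB (value in bytes)
options zfs zfs_arc_max=8589934592
```

If the root filesystem is on ZFS, refresh the initramfs afterwards with <code>update-initramfs -u</code> and reboot for the new limit to take effect.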
* The correct meta-package of <code>grub</code> is now installed based on the boot mode (<code>grub-pc</code> or <code>grub-efi-amd64</code>). This ensures that the bootloader on disk gets updated when there is an upgrade for the <code>grub</code> package.
* The text-based UI is now also available over a serial console, for headless systems with a serial port.
* <code>/var/lib/vz</code> backing the <code>local</code> storage is now created as a separate dataset for installations on ZFS ([https://bugzilla.proxmox.com/show_bug.cgi?id=1410 issue 1410]).
* The root dataset on ZFS installations now uses <code>acltype=posixacl</code> in line with [https://openzfs.github.io/openzfs-docs/Getting%20Started/Debian/Debian%20Bookworm%20Root%20on%20ZFS.html upstream's recommendation].
* Kernel parameters passed on the commandline during install are now also set in the target system ([https://bugzilla.proxmox.com/show_bug.cgi?id=4747 issue 4747]).
* Fix the warning that is shown in case the address families (IPv4, IPv6) of the host IP and DNS server do not match.
* The text-based UI now sets the correct disk-size for the selected disk, instead of limiting the installation to the size of the first disk in the list ([https://bugzilla.proxmox.com/show_bug.cgi?id=4856 issue 4856]).
* For better UX, the text-based UI now also displays a count-down before automatically rebooting.
* The screensaver in the graphical installer is now disabled.
* The graphical installer now displays the units used for disk-based options.
* The kernel command-line parameter <code>vga=788</code> is now set for both the graphical debug and all text-based UI installation options. This improves compatibility of the installer with certain hardware combinations.
==== Other Notable changes ====
* Existing backup jobs without a configured email address did not send email notifications before the upgrade, but will default to sending email notifications to the root user via the new notification system after the upgrade to Proxmox VE 8.1.
*: In order to disable notification emails, either change the job's notification mode to <code>legacy-sendmail</code> or configure the notification system to ignore backup job notifications.
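As a sketch, assuming the <code>notification-mode</code> job property, an entry in <code>/etc/pve/jobs.cfg</code> that keeps the pre-8.1 behavior could look like this (job ID and schedule are examples):

```
vzdump: backup-1a2b3c4d
	schedule daily
	storage local
	all 1
	notification-mode legacy-sendmail
```
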
<div id="8.1-known-issues"></div>
=== Known Issues & Breaking Changes ===
==== Kernel ====
* With kernel 6.5 and ZFS, the host can hit a kernel bug when starting a VM with hugepages, after which the host must be rebooted.
*: More information can be found in the [https://forum.proxmox.com/threads/136741/ forum] and in the bug reports for [https://github.com/openzfs/zfs/issues/15140 ZFS] and [https://bugzilla.kernel.org/show_bug.cgi?id=217747 Linux kernel].
* Some users with Intel Wi-Fi cards, like the AX201 model, reported that initialization of the card failed with Linux kernel 6.5.
*: This is still being investigated. You should avoid booting into the new kernel if you have no physical access to your server and an Intel Wi-Fi device is used as its only connection. See the [https://pve.proxmox.com/pve-docs/chapter-sysadmin.html#sysboot_kernel_pin documentation] for how to pin a kernel version.
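Pinning can be done with <code>proxmox-boot-tool</code>; a sketch, where the exact version string is an example (list the installed kernels first):

```
# list available kernels, then pin a known-good 6.2 version
proxmox-boot-tool kernel list
proxmox-boot-tool kernel pin 6.2.16-19-pve
# later, to return to the default kernel selection:
proxmox-boot-tool kernel unpin
```
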
* Some SAS2008 controllers need a workaround to get detected since kernel 6.2, see the [https://forum.proxmox.com/threads/no-sas2008-after-upgrade.129499/page-4#post-607858 forum thread] for details.
* For certain Linux VMs with OVMF and guest kernels >= 6.5, there might be issues with SCSI disk hot(un)plug. This is a more general issue that is currently being investigated and will be fixed upstream in Linux.
* The TPM (Trusted Platform Module) hardware random number generator (RNG) is now disabled on all AMD systems equipped with a firmware-based TPM (fTPM) device. This change was implemented due to such RNGs causing stutters in many systems. Affected systems should switch the RNG source from <code>/dev/hwrng</code> to an alternative, like <code>/dev/urandom</code>.
: Reference: [https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=554b841d470338a3b1d6335b14ee1cd0c8f5d754 kernel commit "tpm: Disable RNG for all AMD fTPMs"]
* Some Dell models, which appear to include all those using a BCM5720 network card, have a compatibility issue with the <code>tg3</code> driver in kernels based on version 6.5.11.
*: From our current understanding, 14th-generation Dell servers (T140, R240, R640,...) are affected, while others (e.g., R630, R620, R610,...) do not seem to be. We are currently investigating this issue. In the meantime, we recommend pinning the kernel to version 6.2 on affected hosts.
*: Some users report that disabling the <code>X2APIC</code> option in the BIOS resolved this issue as a workaround.
==== Network Configuration ====
* Systems installed on top of Debian or those installed before Proxmox VE 7.0 will be switched by default from the <code>ifupdown</code> network configuration implementation to the modern <code>ifupdown2</code>.
*: This switch occurs because the stabilized SDN package is now marked as a recommendation for various Proxmox VE packages. Consequently, it will be installed on all systems that have kept the <code>APT::Install::Recommends</code> config at its default true value, leading to the inclusion of the <code>ifupdown2</code> package.
*: While <code>ifupdown2</code> aims to be backward compatible with the legacy ifupdown, some details may still differ. Currently, we are aware of one such difference, particularly regarding the default value for accepting IPv6 router advertisement requests (<code>accept_ra</code>). In the legacy ifupdown, accept_ra is set to <code>2</code> ("Accept Router Advertisements even if forwarding is enabled") as long as no gateway is configured. However, in ifupdown2, it always defaults to <code>0</code> ("Do not accept Router Advertisements") as a security measure, requiring administrators to actively opt-in.
*: If you rely on router advertisements being accepted, you can simply add <code>accept_ra 2</code> to the respective interface section in <code>/etc/network/interfaces</code>.
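For example, for a bridge configured via SLAAC in <code>/etc/network/interfaces</code> (the interface name is an example):

```
auto vmbr0
iface vmbr0 inet6 auto
	accept_ra 2
```
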
==== Virtual Machines ====
* The <code>pve-edk2-firmware</code> package, which provides (U)EFI firmware for virtual machines, has been split up into multiple packages. Users of the fully supported amd64/x86_64 architectures do not need to change anything.
*: The OVMF variants, used for amd64/x86_64 based virtual machines, were moved into <code>pve-edk2-firmware-ovmf</code> and <code>pve-edk2-firmware-legacy</code>; these will always be installed automatically on upgrade.
*: The AAVMF variants, used for the experimental ARM64 VM integration, were moved into <code>pve-edk2-firmware-aarch64</code>. This package won't be installed automatically on upgrade; if you rely on the experimental ARM integration, you need to install it manually.
* With the 8.1 machine version, QEMU switched to using SMBIOS 3.0 by default, utilizing a 64-bit entry point. Since the SMBIOS 32-bit and 64-bit entry points can coexist, and most modern operating systems set up both, the general impact should be minimal.
*: However, certain operating systems or appliances, such as Juniper's vSRX, do not have a 64-bit entry point set up and might fail to boot with the new machine type.
*: For affected VMs, you can explicitly pin the machine version to 8.0 in the web interface. Note that the machine version of VMs with a Windows OS type is automatically pinned to the most recent version at the time of creation.
*: Upstream commit reference: [https://github.com/qemu/qemu/commit/bf376f3020dfd7bcb2c4158b4ffa85c04d44f56d QEMU commit]
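Pinning is also possible on the command line; a sketch, assuming VMID 100 and the default i440fx machine type:

```
qm set 100 --machine pc-i440fx-8.0
```
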
== Proxmox VE 8.0 ==
Released 22. June 2023: See [[Downloads]]
* Based on Debian Bookworm (12.0)
* Latest 6.2 Kernel as stable default
* QEMU 8.0.2
* LXC 5.0.2
* ZFS 2.1.12
* Ceph Quincy 17.2.6
=== Highlights ===
* New major release based on the great Debian Bookworm.
* Seamless upgrade from Proxmox VE 7.4, see [[Upgrade from 7 to 8]]
* Ceph Quincy enterprise repository.
*: Access the most stable Ceph repository through any Proxmox VE subscription.
* Add access realm sync jobs.
*: Synchronize users and groups from an LDAP/AD server automatically at regular intervals.
* Integrate access control for host network bridges and VNets, used when configuring virtual guests, into the ACL system of Proxmox VE.
*: With the new <code>SDN.Use</code> privilege and the new <code>/sdn/zones/<zone>/<bridge-or-vnet>/<vlan-tag></code> ACL object path, you can give out fine-grained usage permissions for specific networks to users.
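As a sketch, assuming the built-in <code>PVESDNUser</code> role (which bundles <code>SDN.Use</code>) and example zone, VNet and user names, usage of a specific network could be granted like this:

```
pveum acl modify /sdn/zones/zone0/vnet0 --users alice@pve --roles PVESDNUser
```
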
* Create, manage and assign resource mappings for PCI and USB devices for use in virtual machines (VMs) via API and web UI.
*: Mappings allow you to give out access to one or more specific device(s) to a user, without them requiring root access.
*: For setups that require VMs with hardware passthrough to be able to (offline-)migrate to other nodes, mappings can be used to ensure that the VM also has a valid device for passthrough on the target node.
* Add virtual machine CPU models based on the [https://gitlab.com/x86-psABIs/x86-64-ABI x86-64 psABI] Micro-Architecture Levels and use the widely supported <code>x86-64-v2-AES</code> as default for new VMs created via the web UI.
*: The <code>x86-64-v2-AES</code> model provides important extra features over qemu64/kvm64, like SSE3, SSE4 and SSE4.1, and was slightly adapted to also provide AES support, dramatically improving the performance of many computing operations.
*: See the [[#8.0-virtual-machines|Virtual Machines (KVM/QEMU)]] section for details.
* Add new text-based UI mode for the installation ISO, written in Rust using the [https://github.com/gyscos/cursive Cursive] TUI (Text User Interface) library.
*: You can use the new TUI mode to work around issues with launching the GTK-based graphical installer, sometimes observed on both very new and rather old hardware.
*: The new text mode shares the code executing the actual installation with the existing graphical mode.
=== Changelog Overview ===
==== Enhancements in the web interface (GUI) ====
* The Ceph repository selection now takes into account the subscription status of the complete cluster and recommends the optimal version for the cluster.
    <!-- https://git.proxmox.com/?p=pve-manager.git;a=commitdiff;h=93542d7748eb08764d765a3a5a75042a5a4df621 -->
* Improved Dark color theme:
: The Dark color theme, introduced in Proxmox VE 7.4, received a lot of feedback from our community, which resulted in further improvements.
* Set strict <code>SameSite</code> attribute on the Authorization cookie
    <!-- https://git.proxmox.com/?p=proxmox-widget-toolkit.git;a=commit;h=aec7e8d23c5456b97e15f61b969c7ca895174909 -->
* The Markdown parser, used in notes, has been improved:
** it allows setting the target for links, to make any link open in a new tab or window.
** it allows providing URLs with a scheme different from HTTP/HTTPS;
**: You can now directly link to resources like <code>rdp://<rest-of-url></code>, providing convenience links in the guest notes.
** tag names and protocols are matched case-insensitively.
    <!-- https://git.proxmox.com/?p=proxmox-widget-toolkit.git;a=commit;h=2d04f0165d2adfc26bf7f492a937fc72fcb784c2 -->
    <!-- https://git.proxmox.com/?p=proxmox-widget-toolkit.git;a=commit;h=5cbbb9c44a76cb0c1067df9692290e74c4ee921f -->
* The mobile UI code was refactored to not suffer from incompatible changes made for the web-based GUI.
    <!-- https://git.proxmox.com/?p=pve-manager.git;a=commit;h=0947b5c36f459f5060344f217412f5260fa25946 -->
* The generated CSR used by the built-in ACME client now sets the correct CSR version (<code>0</code> instead of <code>2</code>).
    <!-- https://git.proxmox.com/?p=pve-common.git;a=commit;h=8ebb0f82ac13e7332929ef0f9ef8a28b2dd9ba96 -->
* Uploading files now only computes the MD5 sum of the uploaded file if it can be used for comparison with a user-provided one.
    <!-- https://git.proxmox.com/?p=pve-http-server.git;a=commit;h=a2a3d17be8548e47d0bc3a09a325a89dbfad487f -->
* Firewall settings: Improve the alignment of permissions checked by the web UI with the permissions actually required by the API.
    <!-- https://git.proxmox.com/?p=pve-manager.git;a=commit;h=2e37e77902f2b3712eda5b4d68cc907a1fd901e9 -->
* Explicitly disallow internal-only <code>tmpfilename</code> parameter for file uploads.
    <!-- https://git.proxmox.com/?p=pve-http-server.git;a=commit;h=cb53bd68615f8119c8915608455df63951abc26f -->
* Fix multipart HTTP uploads without <code>Content-Type</code> header.
    <!-- https://git.proxmox.com/?p=pve-http-server.git;a=commit;h=602eb8aabdc04180962d5c3b6f1c66358a8d2690 -->
* Show Ceph pool number in the web UI, as it is often mentioned in Ceph warnings and errors.
    <!-- https://git.proxmox.com/?p=pve-manager.git;a=commit;h=79007cfc408e26c02857051cc23af08fb94b8daa -->
* You can now set the <code>subdir</code> option of the CIFS storage type in the web interface, not only via API/CLI.
* Improved translations, among others:
** Ukrainian (NEW)
    <!-- https://git.proxmox.com/?p=proxmox-i18n.git;a=commit;h=e0dd5e6989adcd4a5d796453707371ac82a1b4a3 -->
** Japanese
    <!-- https://git.proxmox.com/?p=proxmox-i18n.git;a=commit;h=326feddf629345018f165216decaff2b5fb3bad2 -->
** Simplified Chinese
    <!--  https://git.proxmox.com/?p=proxmox-i18n.git;a=commit;h=e5fb5ffc3fb71ea0e4aa9323b4dbfef3d2dbf998 -->
** Traditional Chinese
    <!--  https://git.proxmox.com/?p=proxmox-i18n.git;a=commit;h=f133ff0de42bf852f5cfc32b20458f37790b82ef -->
** The size units (Bytes, KB, MiB,...) are now passed through the translation framework as well, allowing localized variants (e.g., for French).
    <!-- https://git.proxmox.com/?p=proxmox-widget-toolkit.git;a=commit;h=01e64778ad112504d1de155c442571f54da46a45 -->
** The language selection is now localized and displayed in the currently selected language.
    <!-- https://git.proxmox.com/?p=proxmox-i18n.git;a=commit;h=adf33b65568046ba74869d6f980bc1fb00ebf845 -->
<div id="8.0-virtual-machines"></div>
==== Virtual machines (KVM/QEMU) ====
* New QEMU version 8.0:
** The <code>virtiofsd</code> codebase was replaced by a new and improved implementation based on Rust, which is packaged separately.
** QEMU Guest Agent now has initial support for NetBSD and OpenBSD.
** Many more changes, see the [https://wiki.qemu.org/ChangeLog/8.0 upstream changelog] for details.
* Add virtual machine CPU models based on the [https://gitlab.com/x86-psABIs/x86-64-ABI x86-64 psABI] Micro-Architecture Levels.
*: The x86-64 levels provide a vendor-agnostic set of supported features and reported CPU flags.
*: Models like <code>x86-64-v2-AES</code> provide important extra features over qemu64/kvm64, like SSE3, SSE4 and SSE4.1; the AES variant was slightly adapted to also provide AES support, dramatically improving the performance of many computing operations.
*: This model is well-supported by all x86-64 hardware released in the last decade, specifically since Intel Westmere (launched in 2010) and the AMD Opteron 6200-series "Interlagos" (launched in 2011), enabling Proxmox VE to use it as the default CPU model for creating new VMs via the web UI.
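In a VM configuration this maps to the single <code>cpu</code> option; a sketch using the CLI, assuming VMID 100 (equivalent to <code>cpu: x86-64-v2-AES</code> in <code>/etc/pve/qemu-server/100.conf</code>):

```
qm set 100 --cpu x86-64-v2-AES
```
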
* Create, manage and assign resource mappings for PCI and USB devices for use in VMs via API and web UI.
*: Mappings allow you to give out access to one or more specific device(s) to a user, without them requiring root access.
*: For setups that require VMs with hardware passthrough to be able to (offline-)migrate to other nodes, mappings can be used to ensure that the VM also has a valid device for passthrough on the target node.
*: New ACL object paths: <code>/mapping/pci/<id></code> and <code>/mapping/usb/<id></code> refer to the defined PCI and USB mappings.
*: New privileges: <code>Mapping.Audit</code> allows viewing resource mappings, <code>Mapping.Modify</code> allows creating or editing resource mappings, and <code>Mapping.Use</code> allows passing devices through to VMs using the mapping.
*: New roles: <code>PVEMappingUser</code>, with the privilege to view and use mappings, and <code>PVEMappingAdmin</code> with the additional privilege to edit mappings.
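As a sketch with example mapping and user names, use of a PCI mapping could be granted like this:

```
pveum acl modify /mapping/pci/gpu0 --users alice@pve --roles PVEMappingUser
```
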
* Avoid invalid <code>smm</code> machine flag for <code>aarch64</code> VM when using serial display and SeaBIOS.
    <!-- https://git.proxmox.com/?p=qemu-server.git;a=commit;h=e4263214b8b708b71be9d256e153531e51002bdb -->
* Warn if a network interface is not connected to a bridge on VM startup. This can happen if the user manually edited the VM config.
    <!-- https://git.proxmox.com/?p=qemu-server.git;a=commit;h=bb547dcbd625f64e7a7b69b59311ee653f05a657 -->
* Fix an issue with the improved code for volume size information gathering for passed through disks during backup preparation.
    <!-- https://git.proxmox.com/?p=qemu-server.git;a=commit;h=670f947ee86a9046fe2a5cdc74c7aa18e99a4cba -->
* Work around breaking driver changes in newer Nvidia GRID drivers, which prevented mediated devices (mdev) from being reclaimed upon guest exit.
    <!-- https://git.proxmox.com/?p=qemu-server.git;a=commit;h=49c51a60db7f12d7fe2073b755d18b4d9b628fbd -->
* Prefer an explicitly configured SMBIOS UUID for Nvidia vGPU passthrough.
*: If a <code>uuid</code> command line parameter is present, it is preferred over the default auto-generated UUID containing the VMID and mdev index.
: This fixes an issue with software inside the guest that relies on a specific and unique UUID setting.
    <!-- https://git.proxmox.com/?p=qemu-server.git;a=commit;h=c8bd54e9a8367c4926a7029b209fded16f46c040 -->
* Improved gathering of current settings for live memory unplugging.
    <!-- https://git.proxmox.com/?p=qemu-server.git;a=commit;h=d1b25a2267187c48339f598e81d5f99706f85513 -->
* Avoid sending a temporary size of zero to QEMU when resizing block devices. Previously, this was done when resizing RBD volumes, but it is not necessary anymore.
    <!-- https://git.proxmox.com/?p=qemu-server.git;a=commit;h=2e4357c537287edd47d6031fec8bffc7b0ce2425 -->
* When resizing a disk, spawn a worker task to avoid HTTP request timeout (issue 2315).
    <!-- https://git.proxmox.com/?p=qemu-server.git;a=commit;h=606d9d76b25c533f15d589d4927015c8e3d78a1b -->
* Allow resizing qcow2 disk images with snapshots (issue 517).
    <!-- https://git.proxmox.com/?p=qemu-server.git;a=commit;h=5c01f5917759248f6b458f1cd518ba16c67a2f9f -->
* cloud-init improvements:
** Introduce <code>ciupgrade</code> option that controls whether machines should upgrade packages on boot (issue 3428).
    <!-- https://git.proxmox.com/?p=pve-manager.git;a=commit;h=a3862f699f9aec88e9e6975063366a4a5f45f2b7 -->
** Better align privilege checks in the web UI with the actual privileges required in the backend.
    <!-- https://git.proxmox.com/?p=pve-manager.git;a=commit;h=d1c7fa02096ad025ea179ca7e749edb46e79bb36 -->
** Fix an issue where the hostname was not properly set on Fedora/CentOS machines, by passing the hostname via the <code>fqdn</code> option.
    <!-- https://git.proxmox.com/?p=qemu-server.git;a=commit;h=3e546c5ada47da8434bb58d27a3aa7d9823e7fa4 -->
** Fix an issue where displaying pending changes via <code>qm</code> and <code>pvesh</code> caused an error.
    <!-- https://git.proxmox.com/?p=qemu-server.git;a=commit;h=3a704639b7123d43ad9ef192eb9d842b7b17327e -->
** Allow setting network options with <code>VM.Config.Cloudinit</code> privileges, instead of requiring the more powerful <code>VM.Config.Network</code> privilege.
    <!-- https://git.proxmox.com/?p=qemu-server.git;a=commit;h=46f3fc25d8e990c339f08b00f986c4c2d66196e8 -->
* Drop unused QMP commands for getting the link and creating/deleting internal snapshots.
    <!--
https://git.proxmox.com/?p=pve-qemu.git;a=commit;h=a816d2969eb0fd9580bfdff44f16bde7ab258a78
https://git.proxmox.com/?p=pve-qemu.git;a=commit;h=99f9ce2cd2b60b560c2130680aca812376977b10
-->
* Replace usages of deprecated <code>-no-hpet</code> QEMU option with the <code>hpet=off</code> machine flag.
    <!-- https://git.proxmox.com/?p=qemu-server.git;a=commit;h=17bacc2182d139bce132c6844ee8b0be757512f7 -->
==== Containers (LXC) ====
* Improve handling of <code>/etc/machine-id</code> on clone operations - the file is now only truncated if the source did not explicitly set it to 'uninitialized' or remove it. Thus, the admin can decide whether they want first-boot semantics or not (see <code>machine-id(5)</code>).
    <!-- https://git.proxmox.com/?p=pve-container.git;a=commit;h=a72424ba7296d846b8893d3c070138eb280c51cd -->
* Set <code>memory.high</code> cgroup limit to 99.6% of configured memory. This setting gives the container a chance to free memory before directly running into an Out-of-Memory (OOM) condition. It is applied on <code>lxc.conf</code> generation and on hot-plugging memory to a running container.
    <!-- https://git.proxmox.com/?p=pve-container.git;a=commit;h=926b193e066e12e7d9ca24dade37635781af2368 -->
* Warn users about conflicting manual <code>lxc.idmap</code> entries.
*: Custom uid/gid map entries can quickly become complicated and cause overlaps.
*: With a warning issued upon container start, the user should find the incorrect entry directly.
    <!-- https://git.proxmox.com/?p=pve-container.git;a=commit;h=d17fe04161d252b85227ba2401e1e7ce03e842ff -->
* When resizing a disk, perform plausibility checks already before spawning the worker task. This allows invalid requests to fail earlier.
    <!-- https://git.proxmox.com/?p=pve-container.git;a=commit;h=f0595e61425a68d648d9a5fd28fcacbd144cc8fe -->
* General code improvements, adhering to best practices for Perl code.
==== General improvements for virtual guests ====
    <!-- libpve-guest-common-perl has just changes to packaging for the 8.0 release -->
* When cloning guests, the validation of the provided clone name now happens in the frontend, improving UX.
    <!-- https://git.proxmox.com/?p=pve-manager.git;a=commit;h=54fbdcd1c075dca5b831745e5da679b946efe7cb -->
==== HA Manager ====
* Stability improvements of manual maintenance mode:
** Fix an issue where a request for enabling maintenance mode on a node is lost, in case the rebooted node is the current active Cluster Resource Manager (CRM).
    <!-- https://git.proxmox.com/?p=pve-ha-manager.git;a=commit;h=26bbff0d552f17b8d28d5d452130c34d1632385a -->
** Fix an issue where a shutdown policy other than <code>migrate</code> could cause a node in maintenance mode to leave maintenance mode too early or fence itself.
    <!-- https://git.proxmox.com/?p=pve-ha-manager.git;a=commit;h=f129138cb046b5940f57f0734ab9eba22a05668d -->
** Fix an issue where <code>ha-rebalance-on-start</code> could cause a newly added and already-running service to be shut down and migrated to another node.
**: Now, <code>ha-rebalance-on-start</code> ignores services that are already running.
    <!-- https://git.proxmox.com/?p=pve-ha-manager.git;a=commit;h=5a9c3a28083820107f05bf45b111457725bcdab9 -->
* When enabling or disabling maintenance mode via the CLI, the <code>ha-manager</code> command now checks whether the provided node exists.
*: This avoids misconfigurations, e.g., due to a typo in the node name.
    <!-- https://git.proxmox.com/?p=pve-ha-manager.git;a=commit;h=f6c61fe8a3d27d3bcc3226a647733b2d0eb644d9 -->
==== Improved management for Proxmox VE clusters ====
* The <code>rsync</code> invocation used when joining nodes via SSH (a deprecated method) has been adapted to changes in <code>rsync</code> CLI argument parsing in Bookworm.
    <!-- https://git.proxmox.com/?p=pve-cluster.git;a=commit;h=9697997575e25e188a0993a0e4fc7f33f6602928 -->
==== Backup/Restore ====
* Improve performance of backups that use <code>zstd</code> on fast disks, by invoking <code>zstd</code> without the <code>--rsyncable</code> flag (issue 4605).
* Suppress harmless but confusing "storing login ticket failed" errors when backing up to Proxmox Backup Server.
* When restoring from backups via the web interface, the VM/CT name is now validated client-side before sending an API request. This helps catch invalid names early.
    <!-- https://git.proxmox.com/?p=pve-manager.git;a=commitdiff;h=a2b8ac2e2188fb2292f29b8261aec3c537a6d6b1;ds=sidebyside -->
* The web UI now sorts backups by date, whereas it previously sorted backups first by VMID and then by date. The VMID is added as an extra column for users who would like to restore the previous sorting order (issue 4678).
    <!-- https://git.proxmox.com/?p=pve-manager.git;a=commit;h=63d74bb632833f31208eb966c2701b6eec7ac319 -->
* Fix an issue where the backup job editor window occasionally did not show the selected guests (issue 4627).
    <!-- https://git.proxmox.com/?p=pve-manager.git;a=commit;h=25f7446c90fc69db0f8931dc46928faef519f799 -->
* The <code>fs-freeze-on-backup</code> option of the QEMU guest agent, which controls whether the filesystem should be frozen for backups, can now be set in the web UI.
    <!-- https://git.proxmox.com/?p=pve-manager.git;a=commit;h=8bb027f820bfdc9f4f8e4b75ac84095c473e3183 -->
* Improve permission model for backup jobs: Editing backup jobs now generally requires the <code>Datastore.Allocate</code> privilege on the target storage, and editing backup jobs with <code>dumpdir</code> requires root privileges.
    <!-- https://git.proxmox.com/?p=pve-manager.git;a=commit;h=b6e561304a6145c7637f354fb4355198bb11c5ba -->
* Clarify description of the <code>ionice</code> setting.
    <!-- https://git.proxmox.com/?p=pve-guest-common.git;a=commit;h=29b4ce9390d4f57b1b0890aa7615e922d2d2d262 -->
==== Storage ====
* The ''file''-based storage-types have two new config options <code>create-base-path</code> and <code>create-subdirs</code>. They replace the <code>mkdir</code> option and separate two different concepts:
: <code>create-base-path</code> decides if the path to the storage should be created if it does not exist,
: <code>create-subdirs</code> decides if the content-specific sub-directories (guest images, ISO, container template, backups) should be created.
: Conflating both settings in the single <code>mkdir</code> option caused a few unwanted effects in [https://bugzilla.proxmox.com/show_bug.cgi?id=3214 certain situations] (issue 3214).
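A sketch of the new options in <code>/etc/pve/storage.cfg</code> (storage name and path are examples):

```
dir: backup-dir
	path /mnt/backup
	content backup
	create-base-path 0
	create-subdirs 1
```

Here the base path must already exist (e.g., an externally mounted disk), while the content-specific sub-directories are still created automatically.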
    <!-- https://git.proxmox.com/?p=pve-storage.git;a=commit;h=7c242295c9d8ba24ddd3de8e98f453af04ee5da1 -->
* The CIFS storage type can now be configured with custom mount options, as it was already possible for the NFS storage type.
    <!-- https://git.proxmox.com/?p=pve-manager.git;a=commit;h=b48ca5a7c0235b42fe2fa18a7b912114443ac01f -->
* The <code>subdir</code> option of the CIFS storage type can now be configured in the web interface. The option can be used to mount a subdirectory of a SMB/CIFS share and was previously only accessible via the API/CLI.
    <!-- https://git.proxmox.com/?p=pve-manager.git;a=commit;h=711658f42c920bcc0131f3e36e6f6eafae5d07e7 -->
* Improve API documentation for the <code>upload</code> method.
    <!-- https://git.proxmox.com/?p=pve-storage.git;a=commit;h=4e55684cafde4086b24729365871ee4ee0d31d6b -->
* The API now also allows querying replication jobs that are disabled.
    <!-- https://git.proxmox.com/?p=pve-manager.git;a=commit;h=e81a10a4ab1af19ca35a992a357028f1d38552dc -->
* Allow <code>@</code> in directory storage path, as it is often used to signify Btrfs subvolumes.
    <!-- https://git.proxmox.com/?p=pve-storage.git;a=commit;h=765e0da23f4abfdc3ae43a6975545e7bb5426ff5 -->
* When resizing RBD volumes, always round up sizes to the nearest integer. This avoids errors due to passing a floating-point size to the RBD tooling.
    <!-- https://git.proxmox.com/?p=pve-storage.git;a=commit;h=93e34f7259787a8b5704a4cdbdd16e962af0573a -->
==== Ceph ====
* Add support for new Ceph enterprise repositories. When installing Ceph via <code>pveceph install</code> or the web UI, you can now choose between the <code>test</code>, <code>no-subscription</code> and <code>enterprise</code> (default) repositories. The <code>-test-repository</code> option of the <code>pveceph install</code> command was removed.
    <!-- https://git.proxmox.com/?p=pve-manager.git;a=commit;h=93542d7748eb08764d765a3a5a75042a5a4df621 -->
* Add <code>pveceph osddetails</code> command to show information about OSDs on the command line, with a level of detail that is comparable to the web UI/API.
    <!-- https://git.proxmox.com/?p=pve-manager.git;a=commit;h=b48ca5a7c0235b42fe2fa18a7b912114443ac01f -->
* Drop support for Ceph Octopus and Pacific, as they are not supported in Proxmox VE 8.
    <!-- https://git.proxmox.com/?p=pve-manager.git;a=commit;h=ddd8927990b2da79c52f03527fdbc29101a340fc -->
* Remove overly restrictive validation of <code>public_network</code> during monitor creation. Configuring a public network like <code>0::/0</code> or <code>0::/1</code> caused a superfluous "value does not look like a valid CIDR network" error.
    <!-- https://git.proxmox.com/?p=pve-manager.git;a=commit;h=16f3482b3422d808adbc932bb87a84ebba360d25 -->
* The Ceph installation wizard in the web UI does not create monitors and managers called <code>localhost</code> anymore and uses the actual node name instead.
    <!-- https://git.proxmox.com/?p=pve-manager.git;a=commit;h=c2175df21f92e6badd1d47b7d5a542522468c011 -->
==== Access control ====
* Add the possibility to define realm sync jobs in the web UI. These allow synchronizing users and groups from an LDAP server automatically at regular intervals.
    <!-- https://git.proxmox.com/?p=pve-access-control.git;a=commit;h=dc982c4b76772963ee8cfd45cceea70d3d283f9c -->
* Add TFA/TOTP lockout to protect against an attacker who has obtained the user password and attempts to guess the second factor:
** If TFA fails too many times in a row, the user account is locked out of TFA for an hour. If TOTP fails too many times in a row, TOTP is disabled for the user account. Logging in with a recovery key unlocks the account.
** Add <code>pveum tfa unlock</code> command and <code>/access/users/{userid}/unlock-tfa</code> API endpoint for manually unlocking users.
** Add TFA lockout status to responses of <code>/access/tfa</code> and <code>/access/users</code> endpoints.
    <!-- https://git.proxmox.com/?p=proxmox.git;a=commit;h=a3448feb1a8124ad85914ab743d8bc215d9883f7 -->
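The lockout behavior can be modeled roughly as follows (a simplified Python sketch; the real implementation lives in the Rust-based <code>proxmox</code> crate, and the failure thresholds shown here are illustrative, not the shipped values):

```python
from dataclasses import dataclass

TFA_LOCK_SECONDS = 3600   # locked out of TFA for an hour (per the release notes)
MAX_TFA_FAILURES = 8      # illustrative threshold, not the shipped value
MAX_TOTP_FAILURES = 8     # illustrative threshold, not the shipped value

@dataclass
class TfaLockState:
    tfa_failures: int = 0
    totp_failures: int = 0
    locked_until: float = 0.0
    totp_locked: bool = False

    def record_failure(self, was_totp: bool, now: float) -> None:
        self.tfa_failures += 1
        if was_totp:
            self.totp_failures += 1
            if self.totp_failures >= MAX_TOTP_FAILURES:
                self.totp_locked = True  # TOTP disabled until unlocked
        if self.tfa_failures >= MAX_TFA_FAILURES:
            self.locked_until = now + TFA_LOCK_SECONDS

    def unlock(self) -> None:
        # e.g. after a successful recovery-key login, or a manual
        # unlock via `pveum tfa unlock` / the unlock-tfa API endpoint
        self.tfa_failures = self.totp_failures = 0
        self.locked_until = 0.0
        self.totp_locked = False

    def is_locked(self, now: float) -> bool:
        return self.totp_locked or now < self.locked_until
```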
* Fix validity check for LDAP base DNs that was overly strict starting from Proxmox VE 7.4. For example, the check rejected base DNs containing both dashes and spaces (issue #4609).
    <!-- https://git.proxmox.com/?p=pve-access-control.git;a=commit;h=6b833faf931262fc7cd71c2048c6cc0e28811648 -->
* When authenticating via PAM, pass the <code>PAM_RHOST</code> item. With this, it is possible to manually configure PAM such that certain users (for example <code>root@pam</code>) can only log in from certain hosts.
    <!-- https://git.proxmox.com/?p=pve-access-control.git;a=commit;h=6c512352aec716bc2486459d31e651a5e6747368 -->
* Add <code>pveum tfa list</code> command for listing second factors on the command line.
    <!-- https://git.proxmox.com/?p=pve-access-control.git;a=commit;h=9d2996034def00a7e30d6f9b675f316c50ef2140 -->
* The <code>access/ticket</code> API endpoint does not support the deprecated login API (using <code>new-format=0</code>) anymore.
    <!-- https://git.proxmox.com/?p=pve-access-control.git;a=commit;h=cfd8636b5ecf0af8d15ff5f3d2ab0e77abb46b3f -->
* Remove the <code>Permissions.Modify</code> privilege from the <code>PVESysAdmin</code> and <code>PVEAdmin</code> roles and restrict it to the <code>Administrator</code> role. This reduces the chances of accidentally granting the privilege to modify privileges.
    <!-- https://git.proxmox.com/?p=pve-access-control.git;a=commit;h=df619a8dc2fe518b10f951c86e51e3d4c204936b -->
* Login with TFA: Improve the wording of messages related to recovery keys for a better UX.
    <!-- https://git.proxmox.com/?p=proxmox-widget-toolkit.git;a=commit;h=50af081a20e3ef4404df8602ccf0b39c8a7ecb17 -->
* Forbid creating roles with names starting with <code>PVE</code> to reserve these role names for use in future upgrades.
    <!-- https://git.proxmox.com/?p=pve-access-control.git;a=commit;h=00b90e7185efe489ddf7cb6f5b6ebdf023cc602c -->
* <code>SDN.Use</code> is required on a bridge/vnet (or its zone) in order to configure it in a guest vNIC.
** Use <code>/sdn/zones/localnetwork</code> or <code>/sdn/zones/localnetwork/<bridge></code> to allow usage of all or specific local bridges.
** Use <code>/sdn/zones/<zone></code> or <code>/sdn/zones/<zone>/<bridge></code> to allow usage of all or specific vnets in a given SDN zone.
    <!-- https://git.proxmox.com/?p=pve-guest-common.git;a=commitdiff;h=aabf879e0c1e5769608e6a028e0b3022f8f6b4e7 -->
    <!-- https://git.proxmox.com/?p=qemu-server.git;a=commitdiff;h=9cfd06d944000f978ab52133d783d952587a54fa -->
    <!-- https://git.proxmox.com/?p=pve-container.git;a=commitdiff;h=ee81952f4fc8faf01ed4eda5b8962d1a82d5425d -->
* Users with <code>VM.Allocate</code>/<code>Datastore.Allocate</code>/<code>Pool.Allocate</code> privileges, but without the <code>Permissions.Modify</code> privilege, can now only assign a subset of their own privileges to specific VM/storage/pool paths, instead of arbitrary roles.
    <!-- https://git.proxmox.com/?p=pve-access-control.git;a=commitdiff;h=46bfd59dfca655b263d1f905be37d985416717ac -->
==== Firewall & Software Defined Networking ====
* Allow distinguishing IP sets and aliases with the same name defined on the datacenter level and on the guest level by providing an explicit prefix ([https://bugzilla.proxmox.com/show_bug.cgi?id=4556 issue 4556]). Previously, the innermost IP set/alias took precedence; this is still the default behavior if no prefix is provided.
    <!-- https://git.proxmox.com/?p=pve-firewall.git;a=commit;h=856de23adb4129351026bbeefeff0534edddfe6f -->
* Fix an issue where an allowed special ICMP-type could accidentally be added as destination port for a layer 4 protocol, breaking firewall rule loading.
    <!-- https://git.proxmox.com/?p=pve-firewall.git;a=commit;h=4d1ca18ee6799b42aefdf3cfb016cd8bba848174 -->
* Fix setting the correct vlan-protocol for QinQ zones if the bridge is vlan-aware ([https://bugzilla.proxmox.com/show_bug.cgi?id=4683 issue 4683]).
    <!-- https://git.proxmox.com/?p=pve-network.git;a=commit;h=de4e2b9cf404af8423e08bc026e76d92e308109c -->
* Fix an issue where routing between zones was enabled by default in exit nodes. This has been fixed by adding null-routes for each other zone prefix to each zone ([https://bugzilla.proxmox.com/show_bug.cgi?id=4389 issue 4389]).
    <!--https://git.proxmox.com/?p=pve-network.git;a=commit;h=91b2f632c035ea4296717d9aa7ce1417d5fee8de -->
* Correctly order <code>vrf</code> and <code>router bgp vrf</code> entries by vrf name in the <code>frr</code> configuration. ([https://bugzilla.proxmox.com/show_bug.cgi?id=4662 issue 4662])
    <!--https://git.proxmox.com/?p=pve-network.git;a=commit;h=f56311de970d2217d9c4f520f103f81e0cff28b2 -->
* For setups where a node is primary exit node for one vrf and secondary exit for a different vrf, the configuration now also adds the second vrf's default route. ([https://bugzilla.proxmox.com/show_bug.cgi?id=4657 issue 4657])
    <!-- https://git.proxmox.com/?p=pve-network.git;a=commitdiff;h=b84eacc86a0f6f231946ae16743bf29113ca284e -->
* Allow specifying a custom vxlan-tunnel port per interface.
    <!-- https://git.proxmox.com/?p=pve-network.git;a=commit;h=b84eacc86a0f6f231946ae16743bf29113ca284e -->
* Update the <code>frr</code> configuration generation to the version of <code>frr</code> shipped in Debian Bookworm.
    <!-- https://git.proxmox.com/?p=pve-network.git;a=commit;h=243fa522b686103b2a8477c5b462fc7b27d284b8 -->
* Fix an issue where reloading the network configuration on a remote node created an error, which hid the actual issue with the network configuration.
    <!-- https://git.proxmox.com/?p=pve-network.git;a=commit;h=0626bbf7b3c4d3c92f4af868f2924a46354effb7 -->
* Add support for IPv6 SLAAC and router advertisement configuration in <code>/etc/network/interfaces</code> to <code>ifupdown2</code>.
    <!-- https://git.proxmox.com/?p=ifupdown2.git;a=commitdiff;h=9e5b857e452135e5ac382af16b8c79c1bd6889d6 -->
* Fix live reloading when changing VLAN and VXLAN specific attributes.
    <!-- https://git.proxmox.com/?p=ifupdown2.git;a=commitdiff;h=be93f4754cadc046c98e5df3f24150e18c358447 -->
* Add support for creating an <code>OVS</code> bridge which tags traffic with a specific VLAN tag to <code>ifupdown2</code>.
: This matches what was already possible with <code>ifupdown</code>.
    <!-- https://git.proxmox.com/?p=ifupdown2.git;a=commitdiff;h=7aa3a5e6b614d943c76ece6cabc18971ae28339d -->
==== Improved management of Proxmox VE nodes ====
* <code>pve7to8</code> compatibility check script added.
: As with previous major upgrades, Proxmox VE 7 ships a script that checks for issues with the current node/cluster. It should point out any issues that might prevent a successful major upgrade.
* Outdated <code>pve6to7</code> compatibility check script was removed.
<!-- * The memory info now also explicitly displays the ZFS ARC size (issue 1454). (TODO: Remove this, as the patch series is only partially applied and the ARC size is not shown in the GUI?) -->
    <!-- https://git.proxmox.com/?p=pve-common.git;a=commit;h=b21af8f5015b068a383bec99898e185e92d9c2f9 -->
<!-- * Improve handling of APT repositories without an <code>InRelease</code> files, or with a suite/distribution <code>/</code> or <code>./</code> ('''TODO: Not yet bumped?''') -->
    <!--
https://git.proxmox.com/?p=proxmox.git;a=commit;h=c8433e32190ce8ce80f0bd39b481236c969e3312
https://git.proxmox.com/?p=proxmox.git;a=commit;h=27cd0250536f571dcd54a265534eeab1dc8c6179
-->
* Fix an issue where the web UI would display no APT repositories during a major upgrade.
    <!-- https://git.proxmox.com/?p=proxmox-widget-toolkit.git;a=commit;h=e6ed4498cdb0fa6362fe71556107129cdfcd9e30 -->
* The new version of <code>grub2</code> provided by Debian Bookworm (<code>2.06-13</code>) [https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=987008 fixes an issue] where a host using LVM would fail to boot with a message <code>disk `lvmid/...` not found</code>, even though the LVM setup is healthy.
==== Installation ISO ====
* Add new text-based UI mode for the installation ISO, written in Rust using the [https://github.com/gyscos/cursive Cursive] TUI (Text User Interface) library.
*: You can use the new TUI mode to work around issues with launching the GTK-based graphical installer, which are sometimes observed on both very new and rather old hardware.
*: The new text mode shares the code executing the actual installation with the existing graphical mode.
* The version of BusyBox shipped with the ISO was updated to version 1.36.1.
* The Ceph Quincy repository provided by Proxmox is configured by default to deliver updates for the Ceph client, even if no Proxmox Ceph hyper-converged server is set up.
* Detection of unreasonable system time.
: If the system time is older than the time the installer was created, the installer notifies the user with a warning.
    <!-- https://git.proxmox.com/?p=pve-installer.git;a=commit;h=d0817324af4f25119ae5284720088198dd7985e3 -->
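The check boils down to a simple comparison (a minimal sketch; the real installer uses the build timestamp of the ISO, represented here by a hypothetical constant):

```python
import time

# Hypothetical build timestamp baked into the installer image
# (epoch seconds; illustrative value, not the real one).
INSTALLER_BUILD_TIME = 1_686_268_800

def system_time_plausible(now=None):
    """The current system time cannot plausibly be older than the
    moment the installer itself was created."""
    if now is None:
        now = time.time()
    return now >= INSTALLER_BUILD_TIME

if not system_time_plausible():
    print("WARNING: system time appears to predate the installer")
```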
* <code>ethtool</code> is now shipped with the ISO and installed on all systems.
* <code>systemd-boot</code> is provided by its own package instead of <code>systemd</code> in Debian Bookworm and is installed with the new ISO.
==== Notable bugfixes and general improvements ====
* Most git repositories now have a <code>dsc</code> Makefile target to create a Debian Source Package and additionally a <code>sbuild</code> target to create the source package and build it using [https://wiki.debian.org/sbuild sbuild].
<div id="8.0-known-issues"></div>
=== Known Issues & Breaking Changes ===
==== Browser Support ====
This Proxmox VE release uses some newer JavaScript, CSS and/or HTML features and requires the following browser version baseline:
* Firefox 79 (released on 2020-07-28)
* Chrome/Chromium 85 (released 2020-08-25)
* Safari 14 (released 2020-09-16)
This is in line with our [https://pve.proxmox.com/pve-docs/chapter-pve-installation.html#_supported_web_browsers_for_accessing_the_web_interface documented minimal required browser release].
If you are using older browser versions, we strongly recommend that you upgrade to the latest available version for best compatibility and to avoid many (security) issues.
==== Storage ====
* Storage activation now checks that every content type uses a different directory, in order to prevent unexpected interactions between different content types.
*: This breaks setups in which the <code>content-dirs</code> option was set up to map different content types to the same directory, and setups in which some content directories were manually set up as symlinks to a common directory.
* The <code>mkdir</code> option is considered deprecated. It has been split into <code>create-base-path</code> and <code>create-subdirs</code> as a fine-grained replacement.
*: While Proxmox VE will continue to support this option during the 8.x release series, we recommend switching over to the new options now.
==== Kernel ====
* Previous 6.2 kernels had problems with incoming live migrations when <u>all</u> of the following were true:
** VM has a restricted CPU type (e.g., <code>qemu64</code>) – using CPU type <code>host</code> or <code>Skylake-Server</code> is ok!
** the source host uses an Intel CPU from Skylake Server, Tiger Lake Desktop, or equivalent newer generation.
** the source host is booted with kernel version 5.15 or older (e.g., when upgrading from Proxmox VE 7.4)
:In this case, the VM could hang and use 100% of the CPU of one or more vCPUs.
:This was fixed with <code>pve-kernel-6.2.16-4-pve</code> in version <code>6.2.16-5</code>. So make sure your target host is booted with this (or a newer) kernel version if the above points apply to your setup.
* Kernels based on 6.2 have a degraded Kernel Samepage Merging (KSM) performance on multi-socket NUMA systems.
** Depending on the workload this can result in a significant amount of memory that is not deduplicated anymore.
** This issue went unnoticed for a few kernel releases, making a clean backport of the fixes made for 6.5 hard to do without some general fall-out.
*: Until we either find a targeted fix for our kernel or change the default kernel to a 6.5-based one (planned for Q4 2023), the current recommendation is to keep multi-socket NUMA systems that rely on KSM on Proxmox VE 7 with its 5.15-based kernel.
==== QEMU ====
* QEMU 8.0 removed some previously deprecated features. Proxmox VE 8 no longer uses the <code>-chardev tty</code> and <code>-chardev parport</code> aliases, and no other removed features were used by the Proxmox VE stack. Thus, only installations using <code>args</code> inside their guest configs need to check compatibility. See the [https://wiki.qemu.org/ChangeLog/8.0#Removed_features_and_incompatible_changes QEMU changelog on the topic] for details.
** The removed features in QEMU 8.0 also include the Proxmox VE-specific, but unused/deprecated QMP commands <code>get_link_status</code>, <code>snapshot-drive</code> and <code>delete-drive-snapshot</code>.
* Only <code>root@pam</code> is now allowed to clone and restore guests with passed-through PCI/USB devices that do not use the new mapping feature. To allow regular users to clone and restore with PCI/USB devices, create a mapping and grant the user the <code>Mapping.Use</code> privilege on it.
* Trying to pass through the same PCI device multiple times in a single guest now fails earlier. For example, <code>qm showcmd</code> no longer generates output in that case.
* When a passed-through device is configured as <code>multifunction</code> (or 'All Functions' in the web UI) together with a mediated device (mdev), this now generates an error instead of a warning. Use the specific function instead.
* cloud-init: If the VM name is not a FQDN and no DNS search domain is configured, the automatically-generated cloud-init user data now contains an additional <code>fqdn</code> option. This fixes an issue where the hostname was not set properly for some in-guest distributions. However, the changed user data will change the instance ID, which may cause the in-guest cloud-init to re-run actions that trigger once-per-instance. For example, it may regenerate the in-guest SSH host keys.
* Migration doesn't scan all local storages for orphaned volumes anymore. Instead, only the volumes referenced in the configuration (including snapshots) are picked up. This avoids unexpected errors where an unavailable local storage could fail migration even if no volume referenced that storage.
==== Container ====
* The <code>lxc.id_map</code> configuration key has been deprecated for a long time by <code>lxc</code> and was replaced by <code>lxc.idmap</code>. With this release, its presence is considered an error. The key can only be present if it was manually added to a guest configuration.
* The <code>lxcfs</code> is now built with <code>fuse 3</code>. This upgrade is done on a major release, since all running containers need to be restarted afterwards.
* Migration doesn't scan all local storages for orphaned volumes anymore. Instead, only the volumes referenced in the configuration (including snapshots) are picked up. This avoids unexpected errors where an unavailable local storage could fail migration even if no volume referenced that storage.
==== Authentication & Permission System ====
* There is a new <code>SDN.Use</code> privilege (and corresponding <code>PVESDNUser</code> role) that is required to configure virtual NICs in guests. See "Access control" section above for details!
* The <code>Permissions.Modify</code> privilege has been removed from the <code>PVESysAdmin</code> and <code>PVEAdmin</code> roles, in order to reduce the chances of accidentally granting the privilege to modify privileges. If a particular setup requires a role with this privilege, it is necessary to define a new custom role and use that instead of <code>PVESysAdmin</code>/<code>PVEAdmin</code>.
* Users with <code>VM.Allocate</code>/<code>Datastore.Allocate</code>/<code>Pool.Allocate</code> privileges, but without the <code>Permissions.Modify</code> privilege, can now only assign a subset of their own privileges to specific VM/storage/pool paths. Previously they could assign any role to specific VM/storage/pool paths. As the privileges usable on specific VM/storage/pool paths were quite limited, this did not allow privilege escalation, but restricting the capabilities now allows adding more powerful privileges in future versions without breaking changes.
* Editing backup jobs now generally requires the <code>Datastore.Allocate</code> privilege on the target storage, and editing backup jobs with <code>dumpdir</code> requires root privileges.
* User accounts will now be locked after too many attempts to authenticate with a second factor. This is intended to protect against an attacker who has obtained the user password and attempts to guess the second factor. Unlocking requires either a successful login with a recovery key or a manual unlock by an administrator.
==== Node Management ====
* Systems booting via UEFI from a ZFS on root setup should install the <code>systemd-boot</code> package after the upgrade.
*: The <code>systemd-boot</code> package was split out from the <code>systemd</code> package for Debian Bookworm based releases. It won't get installed automatically upon upgrade from Proxmox VE 7.4, as it can cause trouble on systems that do not boot from UEFI with a ZFS on root setup created by the Proxmox VE installer.
*: Systems which have ZFS on root and boot in UEFI mode will need to manually install it if they need to initialize a new ESP (see the output of <code>proxmox-boot-tool status</code> and the  [https://pve.proxmox.com/pve-docs/chapter-sysadmin.html#sysboot_determine_bootloader_used relevant documentation]).
*: Note that the system remains bootable even without the package installed (the bootloader that was copied to the ESPs during initialization remains untouched), so you can also install it after the upgrade has finished.
*: Installing <code>systemd-boot</code> on systems which don't need it is not recommended, as it would replace <code>grub</code> as the bootloader in its <code>postinst</code> script.
==== API ====
* The API can handle array-type data differently, while staying backward compatible.
*: Instead of passing the individual elements as a single string separated by null bytes, you can now pass the data directly as an array.
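As an illustration (a hedged sketch; the parameter name is hypothetical and not tied to a specific Proxmox VE API field):

```python
import json

# Hypothetical array-type parameter "entries", used only to
# illustrate the two encodings.
elements = ["first", "second"]

# Legacy encoding: one string with the elements separated by null bytes.
legacy_payload = {"entries": "\0".join(elements)}

# New encoding: pass the array directly.
modern_payload = {"entries": elements}

# Both encodings carry the same elements.
assert legacy_payload["entries"].split("\0") == modern_payload["entries"]
print(json.dumps(modern_payload))
```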
* Before Proxmox VE 8, the API endpoint to trigger a QEMU guest agent (QGA) command execution (<code>/nodes/{node}/qemu/{vmid}/agent/exec</code>) allowed passing a command as a single string, which would then be automatically split at whitespace. This was deemed too brittle and is not supported anymore. You must now send the command and all its arguments as a proper array of strings.
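The change to the request body can be sketched as follows (an illustrative JSON payload; treat the exact shape as a sketch rather than the authoritative API schema):

```python
import json

# Old style (no longer supported): one string that the server split at
# whitespace -- ambiguous for arguments that themselves contain spaces.
old_payload = {"command": "touch /tmp/my file"}

# New style: the command and each argument as a proper array of
# strings, so whitespace inside an argument is preserved.
new_payload = {"command": ["touch", "/tmp/my file"]}

print(json.dumps(new_payload))
```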
==== OVS Network ====
* The Open vSwitch (OVS) version shipped with Proxmox VE 8 / Debian Bookworm contains a bug that causes it to fail to provide a working network when run on a system with 2 CPU cores or fewer. This is caused by the handler responsible for filling the forward database (FDB) not being started, so forwarding doesn't work. OVS doesn't log any errors or warnings in this case. Proxmox VE is primarily targeted at server hardware, most of which now has 3 cores or more. If you are still affected on a production system, please report it to our [https://bugzilla.proxmox.com/ Bugzilla instance].
== Proxmox VE 8.0 beta1 ==
Released 9 June 2023: See [[Downloads]]
'''Note''': this is a test version not meant for production use yet.
* Based on Debian 12 Bookworm (testing)
* Latest 6.2 Kernel as stable default
* QEMU 8.0.2
* LXC 5.0.2
* ZFS 2.1.11
* Ceph Quincy 17.2.6
=== Highlights ===
* New major release based on the great Debian Bookworm.
* Seamless upgrade from Proxmox VE 7.4, see [[Upgrade from 7 to 8]]
* Ceph Quincy enterprise repository.
*: Access the most stable Ceph repository through any Proxmox VE subscription.
* Add access realm sync jobs.
*: Synchronize users and groups from an LDAP/AD server automatically at regular intervals.
* Integrate access to host network bridges and VNets into Proxmox VE's ACL system when configuring virtual guests.
*: With the new <code>SDN.Use</code> privilege and the new <code>/sdn/zones/<zone>/<bridge-or-vnet>/<vlan-tag></code> ACL object path, one can give out fine-grained usage permissions for specific networks to users.
=== Changelog Overview ===
==== Enhancements in the web interface (GUI) ====
* The Ceph repository selection now takes into account the subscription status of the complete cluster and recommends the optimal version for the cluster.
    <!-- https://git.proxmox.com/?p=pve-manager.git;a=commitdiff;h=93542d7748eb08764d765a3a5a75042a5a4df621 -->
* Improved Dark color theme:
: The Dark color theme, introduced in Proxmox VE 7.4, received a lot of feedback from our community, which resulted in further improvements.
* Set strict <code>SameSite</code> attribute on the Authorization cookie
    <!-- https://git.proxmox.com/?p=proxmox-widget-toolkit.git;a=commit;h=aec7e8d23c5456b97e15f61b969c7ca895174909 -->
* The Markdown parser, used in notes, has been improved:
** it allows setting the target for links, to make any link open in a new tab or window.
** it allows providing URLs with a scheme different from HTTP/HTTPS;
**: You can now directly link to resources like <code>rdp://<rest-of-url></code>, providing convenience links in the guest notes.
** tag names and protocols are matched case-insensitively.
    <!-- https://git.proxmox.com/?p=proxmox-widget-toolkit.git;a=commit;h=2d04f0165d2adfc26bf7f492a937fc72fcb784c2 -->
    <!-- https://git.proxmox.com/?p=proxmox-widget-toolkit.git;a=commit;h=5cbbb9c44a76cb0c1067df9692290e74c4ee921f -->
* The mobile UI code was refactored to not suffer from incompatible changes made for the web-based GUI.
    <!-- https://git.proxmox.com/?p=pve-manager.git;a=commit;h=0947b5c36f459f5060344f217412f5260fa25946 -->
* The generated CSR used by the built-in ACME client now sets the correct CSR version (<code>0</code> instead of <code>2</code>).
    <!-- https://git.proxmox.com/?p=pve-common.git;a=commit;h=8ebb0f82ac13e7332929ef0f9ef8a28b2dd9ba96 -->
* Uploading files now only computes the MD5 sum of the uploaded file if it can be used for comparison with a user-provided one.
    <!-- https://git.proxmox.com/?p=pve-http-server.git;a=commit;h=a2a3d17be8548e47d0bc3a09a325a89dbfad487f -->
* Firewall settings: Improve the alignment of permissions checked by the web UI with the permissions actually required by the API.
    <!-- https://git.proxmox.com/?p=pve-manager.git;a=commit;h=2e37e77902f2b3712eda5b4d68cc907a1fd901e9 -->
* Explicitly disallow internal-only <code>tmpfilename</code> parameter for file uploads.
    <!-- https://git.proxmox.com/?p=pve-http-server.git;a=commit;h=cb53bd68615f8119c8915608455df63951abc26f -->
* Fix multipart HTTP uploads without <code>Content-Type</code> header.
    <!-- https://git.proxmox.com/?p=pve-http-server.git;a=commit;h=602eb8aabdc04180962d5c3b6f1c66358a8d2690 -->
* Show Ceph pool number in the web UI, as it is often mentioned in Ceph warnings and errors.
    <!-- https://git.proxmox.com/?p=pve-manager.git;a=commit;h=79007cfc408e26c02857051cc23af08fb94b8daa -->
* Improved translations, among others:
** Ukrainian (NEW)
    <!-- https://git.proxmox.com/?p=proxmox-i18n.git;a=commit;h=e0dd5e6989adcd4a5d796453707371ac82a1b4a3 -->
** Japanese
    <!-- https://git.proxmox.com/?p=proxmox-i18n.git;a=commit;h=326feddf629345018f165216decaff2b5fb3bad2 -->
** Simplified Chinese
    <!--  https://git.proxmox.com/?p=proxmox-i18n.git;a=commit;h=e5fb5ffc3fb71ea0e4aa9323b4dbfef3d2dbf998 -->
** Traditional Chinese
    <!--  https://git.proxmox.com/?p=proxmox-i18n.git;a=commit;h=f133ff0de42bf852f5cfc32b20458f37790b82ef -->
** The size units (Bytes, KB, MiB,...) are now passed through the translation framework as well, allowing localized variants (e.g., for French).
    <!-- https://git.proxmox.com/?p=proxmox-widget-toolkit.git;a=commit;h=01e64778ad112504d1de155c442571f54da46a45 -->
** The language selection is now localized and displayed in the currently selected language
    <!-- https://git.proxmox.com/?p=proxmox-i18n.git;a=commit;h=adf33b65568046ba74869d6f980bc1fb00ebf845 -->
==== Virtual Machines (KVM/QEMU) ====
* New QEMU Version 8.0:
** The <code>virtiofsd</code> codebase was replaced by a new and improved implementation based on Rust, which is packaged separately.
** QEMU Guest Agent now has initial support for NetBSD and OpenBSD.
** Many more changes, see the [https://wiki.qemu.org/ChangeLog/8.0 upstream changelog] for details.
* Avoid invalid <code>smm</code> machine flag for <code>aarch64</code> VM when using serial display and SeaBIOS.
    <!-- https://git.proxmox.com/?p=qemu-server.git;a=commit;h=e4263214b8b708b71be9d256e153531e51002bdb -->
* Warn if a network interface is not connected to a bridge on VM startup. This can happen if the user manually edited the VM config.
    <!-- https://git.proxmox.com/?p=qemu-server.git;a=commit;h=bb547dcbd625f64e7a7b69b59311ee653f05a657 -->
* Fix an issue in the improved code that gathers volume size information for passed-through disks during backup preparation.
    <!-- https://git.proxmox.com/?p=qemu-server.git;a=commit;h=670f947ee86a9046fe2a5cdc74c7aa18e99a4cba -->
* Work around breaking driver changes in newer Nvidia grid drivers, which prevented mediated devices (mdev) from being reclaimed upon guest exit.
    <!-- https://git.proxmox.com/?p=qemu-server.git;a=commit;h=49c51a60db7f12d7fe2073b755d18b4d9b628fbd -->
* Prefer an explicitly configured SMBIOS UUID for Nvidia vGPU passthrough.
*: If a <code>uuid</code> command line parameter is present, it is preferred over the default auto-generated UUID, which contains the VMID and mdev index.
: This fixes an issue with software inside the guest that relies on a specific and unique UUID.
    <!-- https://git.proxmox.com/?p=qemu-server.git;a=commit;h=c8bd54e9a8367c4926a7029b209fded16f46c040 -->
* Improve gathering of the current settings for live memory unplugging.
    <!-- https://git.proxmox.com/?p=qemu-server.git;a=commit;h=d1b25a2267187c48339f598e81d5f99706f85513 -->
* Avoid sending a temporary size of zero to QEMU when resizing block devices. Previously, this was done when resizing RBD volumes, but it is not necessary anymore.
    <!-- https://git.proxmox.com/?p=qemu-server.git;a=commit;h=2e4357c537287edd47d6031fec8bffc7b0ce2425 -->
* When resizing a disk, spawn a worker task to avoid HTTP request timeout ([https://bugzilla.proxmox.com/show_bug.cgi?id=2315 issue 2315]).
    <!-- https://git.proxmox.com/?p=qemu-server.git;a=commit;h=606d9d76b25c533f15d589d4927015c8e3d78a1b -->
* Allow resizing qcow2 disk images with snapshots ([https://bugzilla.proxmox.com/show_bug.cgi?id=517 issue 517]).
    <!-- https://git.proxmox.com/?p=qemu-server.git;a=commit;h=5c01f5917759248f6b458f1cd518ba16c67a2f9f -->
* cloud-init improvements:
** Introduce <code>ciupgrade</code> option that controls whether machines should upgrade packages on boot ([https://bugzilla.proxmox.com/show_bug.cgi?id=3428 issue 3428]).
    <!-- https://git.proxmox.com/?p=pve-manager.git;a=commit;h=a3862f699f9aec88e9e6975063366a4a5f45f2b7 -->
** Better align privilege checks in the web UI with the actual privileges required in the backend.
    <!-- https://git.proxmox.com/?p=pve-manager.git;a=commit;h=d1c7fa02096ad025ea179ca7e749edb46e79bb36 -->
** Fix an issue where the hostname was not properly set on Fedora/CentOS machines, by passing the hostname via the <code>fqdn</code> option.
    <!-- https://git.proxmox.com/?p=qemu-server.git;a=commit;h=3e546c5ada47da8434bb58d27a3aa7d9823e7fa4 -->
** Fix an issue where displaying pending changes via <code>qm</code> and <code>pvesh</code> caused an error.
    <!-- https://git.proxmox.com/?p=qemu-server.git;a=commit;h=3a704639b7123d43ad9ef192eb9d842b7b17327e -->
** Allow setting network options with <code>VM.Config.Cloudinit</code> privileges, instead of requiring the more powerful <code>VM.Config.Network</code> privilege.
    <!-- https://git.proxmox.com/?p=qemu-server.git;a=commit;h=46f3fc25d8e990c339f08b00f986c4c2d66196e8 -->
* Drop unused QMP commands for getting the link and creating/deleting internal snapshots.
    <!--
https://git.proxmox.com/?p=pve-qemu.git;a=commit;h=a816d2969eb0fd9580bfdff44f16bde7ab258a78
https://git.proxmox.com/?p=pve-qemu.git;a=commit;h=99f9ce2cd2b60b560c2130680aca812376977b10
-->
* Replace usages of deprecated <code>-no-hpet</code> QEMU option with the <code>hpet=off</code> machine flag.
    <!-- https://git.proxmox.com/?p=qemu-server.git;a=commit;h=17bacc2182d139bce132c6844ee8b0be757512f7 -->
==== Containers (LXC) ====
* Improve handling of <code>/etc/machine-id</code> on clone operations - the file is now only truncated if the source did not explicitly set it to 'uninitialized' or remove it. Thus, the admin can decide whether they want first-boot semantics (see <code>machine-id(5)</code>).
    <!-- https://git.proxmox.com/?p=pve-container.git;a=commit;h=a72424ba7296d846b8893d3c070138eb280c51cd -->
* Set <code>memory.high</code> cgroup limit to 99.6% of configured memory. This setting gives the container a chance to free memory before directly running into an Out-of-Memory (OOM) condition. It is applied on <code>lxc.conf</code> generation and on hot-plugging memory to a running container.
    <!-- https://git.proxmox.com/?p=pve-container.git;a=commit;h=926b193e066e12e7d9ca24dade37635781af2368 -->
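The applied limit can be computed as in this sketch (illustrative Python; the actual value is written in bytes to the container's <code>memory.high</code> cgroup file):

```python
# memory.high is set to 99.6% of the configured memory limit,
# per the release notes.
def memory_high_bytes(configured_mib):
    configured = configured_mib * 1024 * 1024
    # leave ~0.4% headroom so the kernel starts reclaiming memory
    # before the container hits memory.max and triggers the OOM killer
    return int(configured * 0.996)

print(memory_high_bytes(1024))  # limit for a 1 GiB container
```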
* Warn users about conflicting manual <code>lxc.idmap</code> entries.
*: Custom mappings can become quite complicated and quickly cause overlaps.
*: With a warning issued upon container start, the user can find the offending entry directly.
    <!-- https://git.proxmox.com/?p=pve-container.git;a=commit;h=d17fe04161d252b85227ba2401e1e7ce03e842ff -->
* When resizing a disk, perform plausibility checks already before spawning the worker task. This allows invalid requests to fail earlier.
    <!-- https://git.proxmox.com/?p=pve-container.git;a=commit;h=f0595e61425a68d648d9a5fd28fcacbd144cc8fe -->
* General code improvements, adhering to best practices for Perl code.
==== General improvements for virtual guests ====
    <!-- libpve-guest-common-perl has just changes to packaging for the 8.0 release -->
* When cloning guests, the provided name of the clone is now validated in the frontend, improving UX.
    <!-- https://git.proxmox.com/?p=pve-manager.git;a=commit;h=54fbdcd1c075dca5b831745e5da679b946efe7cb -->
* Add config files in <code>/etc/pve/mapping</code> and privileges <code>Mapping.*</code> in preparation for cluster-wide mapping of PCI/USB devices.
    <!-- https://git.proxmox.com/?p=pve-guest-common.git;a=commit;h=11fa95682eed43e7aa89c0e70769fe6e177327eb -->
==== HA Manager ====
* Stability improvements of manual maintenance mode:
** Fix an issue where a request for enabling maintenance mode on a node was lost if the rebooted node was the current active Cluster Resource Manager (CRM).
    <!-- https://git.proxmox.com/?p=pve-ha-manager.git;a=commit;h=26bbff0d552f17b8d28d5d452130c34d1632385a -->
** Fix an issue where a shutdown policy other than <code>migrate</code> could cause a node in maintenance mode to leave maintenance mode too early or fence itself.
    <!-- https://git.proxmox.com/?p=pve-ha-manager.git;a=commit;h=f129138cb046b5940f57f0734ab9eba22a05668d -->
** Fix an issue where <code>ha-rebalance-on-start</code> could cause a newly added and already-running service to be shut down and migrated to another node.
**: Now, <code>ha-rebalance-on-start</code> ignores services that are already running.
    <!-- https://git.proxmox.com/?p=pve-ha-manager.git;a=commit;h=5a9c3a28083820107f05bf45b111457725bcdab9 -->
* When enabling or disabling maintenance mode via the CLI, the <code>ha-manager</code> command now checks whether the provided node exists.
*: This avoids misconfigurations, e.g., due to a typo in the node name.
    <!-- https://git.proxmox.com/?p=pve-ha-manager.git;a=commit;h=f6c61fe8a3d27d3bcc3226a647733b2d0eb644d9 -->
==== Improved management for Proxmox VE clusters ====
* The <code>rsync</code> invocation used when joining nodes via SSH (a deprecated join method) has been adapted to changes in <code>rsync</code>'s CLI argument parsing in Debian Bookworm.
    <!-- https://git.proxmox.com/?p=pve-cluster.git;a=commit;h=9697997575e25e188a0993a0e4fc7f33f6602928 -->
==== Backup/Restore ====
* Improve performance of backups that use <code>zstd</code> on fast disks, by invoking <code>zstd</code> without the <code>--rsyncable</code> flag ([https://bugzilla.proxmox.com/show_bug.cgi?id=4605 issue 4605]).
* Suppress harmless but confusing "storing login ticket failed" errors when backing up to Proxmox Backup Server.
* When restoring from backups via the web UI, the VM/CT name is now validated client-side before sending an API request. This helps catch invalid names early.
    <!-- https://git.proxmox.com/?p=pve-manager.git;a=commitdiff;h=a2b8ac2e2188fb2292f29b8261aec3c537a6d6b1;ds=sidebyside -->
* The web UI now sorts backups by date, whereas it previously sorted backups first by VMID and then by date. The VMID is added as an extra column for users who would like to restore the previous sorting order ([https://bugzilla.proxmox.com/show_bug.cgi?id=4678 issue 4678]).
    <!-- https://git.proxmox.com/?p=pve-manager.git;a=commit;h=63d74bb632833f31208eb966c2701b6eec7ac319 -->
* Fix an issue where the backup job editor window occasionally did not show the selected guests ([https://bugzilla.proxmox.com/show_bug.cgi?id=4627 issue 4627]).
    <!-- https://git.proxmox.com/?p=pve-manager.git;a=commit;h=25f7446c90fc69db0f8931dc46928faef519f799 -->
* The <code>fs-freeze-on-backup</code> option of the QEMU guest agent, which controls whether the filesystem should be frozen for backups, can now be set in the web UI.
    <!-- https://git.proxmox.com/?p=pve-manager.git;a=commit;h=8bb027f820bfdc9f4f8e4b75ac84095c473e3183 -->
* Improve permission model for backup jobs: Editing backup jobs now generally requires the <code>Datastore.Allocate</code> privilege on the target storage, and editing backup jobs with <code>dumpdir</code> requires root privileges.
    <!-- https://git.proxmox.com/?p=pve-manager.git;a=commit;h=b6e561304a6145c7637f354fb4355198bb11c5ba -->
* Clarify description of the <code>ionice</code> setting.
    <!-- https://git.proxmox.com/?p=pve-guest-common.git;a=commit;h=29b4ce9390d4f57b1b0890aa7615e922d2d2d262 -->
==== Storage ====
* The ''file''-based storage types have two new config options, <code>create-base-path</code> and <code>create-subdirs</code>. They replace the <code>mkdir</code> option and separate two different concepts:
: <code>create-base-path</code> controls whether the path to the storage should be created if it does not exist,
: <code>create-subdirs</code> controls whether the content-specific subdirectories (e.g., guest images, ISO images, container templates, or backups) should be created.
: Conflating both settings in the single <code>mkdir</code> option caused unwanted effects in [https://bugzilla.proxmox.com/show_bug.cgi?id=3214 certain situations (issue 3214)].
    <!-- https://git.proxmox.com/?p=pve-storage.git;a=commit;h=7c242295c9d8ba24ddd3de8e98f453af04ee5da1 -->
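A sketch of how the new options could look in <code>storage.cfg</code> (storage name and path are placeholders): here the base path is expected to be an already-mounted filesystem and is never created automatically, while the content subdirectories still are:

```
dir: backup-nfs
	path /mnt/backup
	content backup
	create-base-path 0
	create-subdirs 1
```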
* The CIFS storage type can now be configured with custom mount options, as it was already possible for the NFS storage type.
    <!-- https://git.proxmox.com/?p=pve-manager.git;a=commit;h=b48ca5a7c0235b42fe2fa18a7b912114443ac01f -->
* The <code>subdir</code> option of the CIFS storage type can now be configured in the web interface. The option can be used to mount a subdirectory of an SMB/CIFS share and was previously only accessible via the API/CLI.
    <!-- https://git.proxmox.com/?p=pve-manager.git;a=commit;h=711658f42c920bcc0131f3e36e6f6eafae5d07e7 -->
* Improve API documentation for the <code>upload</code> method.
    <!-- https://git.proxmox.com/?p=pve-storage.git;a=commit;h=4e55684cafde4086b24729365871ee4ee0d31d6b -->
* The API now also allows querying replication jobs that are disabled.
    <!-- https://git.proxmox.com/?p=pve-manager.git;a=commit;h=e81a10a4ab1af19ca35a992a357028f1d38552dc -->
* Allow <code>@</code> in directory storage paths, as it is often used to signify Btrfs subvolumes.
    <!-- https://git.proxmox.com/?p=pve-storage.git;a=commit;h=765e0da23f4abfdc3ae43a6975545e7bb5426ff5 -->
* When resizing RBD volumes, always round up sizes to the nearest integer. This avoids errors due to passing a floating-point size to the RBD tooling.
    <!-- https://git.proxmox.com/?p=pve-storage.git;a=commit;h=93e34f7259787a8b5704a4cdbdd16e962af0573a -->
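The rounding boils down to a simple ceiling operation. Illustratively, in Python (a sketch, not the actual Perl code; the MiB unit here is an assumption for the example):

```python
import math


def rbd_resize_size_mib(size_bytes):
    """Round a requested size in bytes up to whole MiB, so the RBD
    tooling is never handed a fractional size value."""
    return math.ceil(size_bytes / (1024 * 1024))
```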
==== Ceph ====
* Add support for new Ceph enterprise repositories. When installing Ceph via <code>pveceph install</code> or the web UI, you can now choose between the <code>test</code>, <code>no-subscription</code> and <code>enterprise</code> (default) repositories. The <code>-test-repository</code> option of the <code>pveceph install</code> command was removed.
    <!-- https://git.proxmox.com/?p=pve-manager.git;a=commit;h=93542d7748eb08764d765a3a5a75042a5a4df621 -->
* Add <code>pveceph osddetails</code> command to show information about OSDs on the command line, with a level of detail that is comparable to the web UI/API.
    <!-- https://git.proxmox.com/?p=pve-manager.git;a=commit;h=b48ca5a7c0235b42fe2fa18a7b912114443ac01f -->
* Drop support for hyper-converged Ceph Octopus and Pacific, as they are not supported in Proxmox VE 8.
*: Proxmox VE 8 supports managing Quincy and newer Ceph server releases; setups still using Pacific can [[Ceph Pacific to Quincy|upgrade to Ceph Quincy]] before upgrading Proxmox VE from 7 to 8.
*: The Ceph 17.2 Quincy client can still access older Ceph server setups.
    <!-- https://git.proxmox.com/?p=pve-manager.git;a=commit;h=ddd8927990b2da79c52f03527fdbc29101a340fc -->
* Remove overly restrictive validation of <code>public_network</code> during monitor creation. Configuring a public network like <code>0::/0</code> or <code>0::/1</code> caused a superfluous "value does not look like a valid CIDR network" error.
    <!-- https://git.proxmox.com/?p=pve-manager.git;a=commit;h=16f3482b3422d808adbc932bb87a84ebba360d25 -->
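The removed check was stricter than plain CIDR parsing. For illustration, a validation along the lines of what now suffices, sketched in Python (the actual code is Perl; this is not the implementation):

```python
import ipaddress


def looks_like_cidr_network(value):
    """Accept anything that parses as a CIDR network, including
    edge cases like '0::/0' and '0::/1' that the old check rejected."""
    try:
        ipaddress.ip_network(value)
        return True
    except ValueError:
        return False
```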
* The Ceph installation wizard in the web UI does not create monitors and managers called <code>localhost</code> anymore and uses the actual node name instead.
    <!-- https://git.proxmox.com/?p=pve-manager.git;a=commit;h=c2175df21f92e6badd1d47b7d5a542522468c011 -->
==== Access Control ====
* Add the possibility to define realm sync jobs in the web UI. These allow synchronizing users and groups from an LDAP server automatically at regular intervals.
    <!-- https://git.proxmox.com/?p=pve-access-control.git;a=commit;h=dc982c4b76772963ee8cfd45cceea70d3d283f9c -->
* Add TFA/TOTP lockout to protect against an attacker who has obtained the user password and attempts to guess the second factor:
** If TFA fails too many times in a row, the user account is locked out of TFA for an hour. If TOTP fails too many times in a row, TOTP is disabled for the user account. Using a recovery key unlocks the account.
** Add <code>pveum tfa unlock</code> command and <code>/access/users/{userid}/unlock-tfa</code> API endpoint for manually unlocking users.
** Add TFA lockout status to responses of <code>/access/tfa</code> and <code>/access/users</code> endpoints.
    <!-- https://git.proxmox.com/?p=proxmox.git;a=commit;h=a3448feb1a8124ad85914ab743d8bc215d9883f7 -->
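The lockout logic can be pictured with the following Python sketch; the failure threshold and the exact bookkeeping are assumptions for illustration, not the real implementation:

```python
import time

# assumed threshold for this sketch, not the actual configured value
TFA_MAX_FAILURES = 8
TFA_LOCK_SECONDS = 3600  # locked out of TFA for an hour


class TfaLockout:
    def __init__(self):
        self.failures = 0
        self.locked_until = 0.0

    def record_failure(self, now=None):
        """Count a failed TFA attempt; lock the account once the
        threshold of consecutive failures is reached."""
        now = time.time() if now is None else now
        self.failures += 1
        if self.failures >= TFA_MAX_FAILURES:
            self.locked_until = now + TFA_LOCK_SECONDS

    def is_locked(self, now=None):
        now = time.time() if now is None else now
        return now < self.locked_until

    def unlock(self):
        """Reset state, e.g. after a successful recovery-key login
        or a manual `pveum tfa unlock`."""
        self.failures = 0
        self.locked_until = 0.0
```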
* Fix validity check for LDAP base DNs that was overly strict starting from Proxmox VE 7.4. For example, the check rejected base DNs containing both dashes and spaces ([https://bugzilla.proxmox.com/show_bug.cgi?id=4609 issue 4609]).
    <!-- https://git.proxmox.com/?p=pve-access-control.git;a=commit;h=6b833faf931262fc7cd71c2048c6cc0e28811648 -->
* When authenticating via PAM, pass the <code>PAM_RHOST</code> item. With this, it is possible to manually configure PAM such that certain users (for example <code>root@pam</code>) can only log in from certain hosts.
    <!-- https://git.proxmox.com/?p=pve-access-control.git;a=commit;h=6c512352aec716bc2486459d31e651a5e6747368 -->
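With <code>PAM_RHOST</code> available, standard modules such as <code>pam_access</code> can enforce per-host rules. A sketch of an <code>/etc/security/access.conf</code> fragment (the address is a placeholder, and this assumes <code>pam_access</code> is enabled for the PAM service handling Proxmox VE logins):

```
# allow root logins only from one trusted admin host
+ : root : 192.0.2.10
- : root : ALL
```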
* Add <code>pveum tfa list</code> command for listing second factors on the command line.
    <!-- https://git.proxmox.com/?p=pve-access-control.git;a=commit;h=9d2996034def00a7e30d6f9b675f316c50ef2140 -->
* The <code>access/ticket</code> API endpoint no longer supports the deprecated login API (using <code>new-format=0</code>).
    <!-- https://git.proxmox.com/?p=pve-access-control.git;a=commit;h=cfd8636b5ecf0af8d15ff5f3d2ab0e77abb46b3f -->
* Remove the <code>Permissions.Modify</code> privilege from the <code>PVESysAdmin</code> and <code>PVEAdmin</code> roles and restrict it to the <code>Administrator</code> role. This reduces the chances of accidentally granting the privilege to modify privileges.
    <!-- https://git.proxmox.com/?p=pve-access-control.git;a=commit;h=df619a8dc2fe518b10f951c86e51e3d4c204936b -->
* Login with TFA: In order to improve UX, fix wording of messages related to recovery keys.
    <!-- https://git.proxmox.com/?p=proxmox-widget-toolkit.git;a=commit;h=50af081a20e3ef4404df8602ccf0b39c8a7ecb17 -->
* Forbid creating roles with names starting with <code>PVE</code> to reserve these role names for use in future upgrades.
    <!-- https://git.proxmox.com/?p=pve-access-control.git;a=commit;h=00b90e7185efe489ddf7cb6f5b6ebdf023cc602c -->
* <code>SDN.Use</code> is required on a bridge/vnet (or its zone) in order to configure it in a guest vNIC.
** Use <code>/sdn/zones/localnetwork</code> or <code>/sdn/zones/localnetwork/<bridge></code> to allow usage of all or specific local bridges.
** Use <code>/sdn/zones/<zone></code> or <code>/sdn/zones/<zone>/<bridge></code> to allow usage of all or specific vnets in a given SDN zone.
    <!-- https://git.proxmox.com/?p=pve-guest-common.git;a=commitdiff;h=aabf879e0c1e5769608e6a028e0b3022f8f6b4e7 -->
    <!-- https://git.proxmox.com/?p=qemu-server.git;a=commitdiff;h=9cfd06d944000f978ab52133d783d952587a54fa -->
    <!-- https://git.proxmox.com/?p=pve-container.git;a=commitdiff;h=ee81952f4fc8faf01ed4eda5b8962d1a82d5425d -->
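For example, granting a user permission to use local bridges could look like the following <code>pveum</code> invocations (user and bridge names are placeholders):

```
# allow john@pve to use all local bridges
pveum acl modify /sdn/zones/localnetwork --users john@pve --roles PVESDNUser
# or restrict the grant to the single bridge vmbr0
pveum acl modify /sdn/zones/localnetwork/vmbr0 --users john@pve --roles PVESDNUser
```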
* Users with <code>VM.Allocate</code>/<code>Datastore.Allocate</code>/<code>Pool.Allocate</code> privileges, but without the <code>Permissions.Modify</code> privilege, can now only assign a subset of their own privileges to specific VM/storage/pool paths, instead of arbitrary roles.
    <!-- https://git.proxmox.com/?p=pve-access-control.git;a=commitdiff;h=46bfd59dfca655b263d1f905be37d985416717ac -->
==== Firewall & Software Defined Networking ====
* Allow to distinguish IP sets and aliases with the same name defined on the datacenter level and on the guest level by providing an explicit prefix ([https://bugzilla.proxmox.com/show_bug.cgi?id=4556 issue 4556]). Previously, the innermost IP set/alias took precedence, which is still the default behavior if no prefix is provided.
    <!-- https://git.proxmox.com/?p=pve-firewall.git;a=commit;h=856de23adb4129351026bbeefeff0534edddfe6f -->
* Fix an issue where an allowed special ICMP-type could accidentally be added as destination port for a layer 4 protocol, breaking firewall rule loading.
    <!-- https://git.proxmox.com/?p=pve-firewall.git;a=commit;h=4d1ca18ee6799b42aefdf3cfb016cd8bba848174 -->
* Fix setting the correct vlan-protocol for QinQ zones if the bridge is vlan-aware ([https://bugzilla.proxmox.com/show_bug.cgi?id=4683 issue 4683]).
    <!-- https://git.proxmox.com/?p=pve-network.git;a=commit;h=de4e2b9cf404af8423e08bc026e76d92e308109c -->
* Fix an issue where routing between zones was enabled by default in exit nodes. This has been fixed by adding null-routes for each other zone prefix to each zone ([https://bugzilla.proxmox.com/show_bug.cgi?id=4389 issue 4389]).
    <!--https://git.proxmox.com/?p=pve-network.git;a=commit;h=91b2f632c035ea4296717d9aa7ce1417d5fee8de -->
* Correctly order <code>vrf</code> and <code>router bgp vrf</code> entries by vrf name in the <code>frr</code> configuration. ([https://bugzilla.proxmox.com/show_bug.cgi?id=4662 issue 4662])
    <!--https://git.proxmox.com/?p=pve-network.git;a=commit;h=f56311de970d2217d9c4f520f103f81e0cff28b2 -->
* For setups where a node is the primary exit node for one vrf and a secondary exit node for a different vrf, the configuration now also adds the second vrf's default route ([https://bugzilla.proxmox.com/show_bug.cgi?id=4657 issue 4657]).
    <!-- https://git.proxmox.com/?p=pve-network.git;a=commitdiff;h=b84eacc86a0f6f231946ae16743bf29113ca284e -->
* Allow specifying a custom vxlan-tunnel port per interface.
    <!-- https://git.proxmox.com/?p=pve-network.git;a=commit;h=b84eacc86a0f6f231946ae16743bf29113ca284e -->
* Update the <code>frr</code> configuration generation to the version of <code>frr</code> shipped in Debian Bookworm.
    <!-- https://git.proxmox.com/?p=pve-network.git;a=commit;h=243fa522b686103b2a8477c5b462fc7b27d284b8 -->
* Fix an issue where reloading the network configuration on a remote node created an error, which hid the actual issue with the network configuration.
    <!-- https://git.proxmox.com/?p=pve-network.git;a=commit;h=0626bbf7b3c4d3c92f4af868f2924a46354effb7 -->
* Add support for IPv6 SLAAC and router advertisement configuration in <code>/etc/network/interfaces</code> to <code>ifupdown2</code>.
    <!-- https://git.proxmox.com/?p=ifupdown2.git;a=commitdiff;h=9e5b857e452135e5ac382af16b8c79c1bd6889d6 -->
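A minimal sketch of a SLAAC stanza in <code>/etc/network/interfaces</code> (the interface name is a placeholder; consult the <code>ifupdown2</code> documentation for the full set of supported attributes):

```
auto eno1
iface eno1 inet6 auto
	# accept router advertisements even if forwarding is enabled
	accept_ra 2
```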
* Fix live reloading when changing VLAN and VXLAN specific attributes.
    <!-- https://git.proxmox.com/?p=ifupdown2.git;a=commitdiff;h=be93f4754cadc046c98e5df3f24150e18c358447 -->
* Add support for creating an <code>OVS</code> bridge which tags traffic with a specific VLAN tag to <code>ifupdown2</code>.
: This matches the functionality already available in <code>ifupdown</code>.
    <!-- https://git.proxmox.com/?p=ifupdown2.git;a=commitdiff;h=7aa3a5e6b614d943c76ece6cabc18971ae28339d -->
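For illustration, a bridge stanza tagging all traffic with VLAN 10 might look as follows (names are placeholders, and the exact option placement is a sketch based on the usual <code>ovs_options</code> syntax):

```
auto vmbr1
iface vmbr1 inet manual
	ovs_type OVSBridge
	ovs_ports eno2
	ovs_options tag=10
```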
==== Improvements for the management of Proxmox VE Nodes ====
* <code>pve7to8</code> compatibility check script added.
: As with previous major upgrades, Proxmox VE 7 ships a script that checks for issues with the current node/cluster. It should point out any issues that might prevent a successful major upgrade.
* Outdated <code>pve6to7</code> compatibility check script was removed.
* Fix an issue where the web UI would display no APT repositories during a major upgrade.
    <!-- https://git.proxmox.com/?p=proxmox-widget-toolkit.git;a=commit;h=e6ed4498cdb0fa6362fe71556107129cdfcd9e30 -->
* The new version of <code>grub2</code> provided by Debian Bookworm (<code>2.06-13</code>) [https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=987008 fixes an issue] where a host using LVM would fail to boot with a message <code>disk `lvmid/...` not found</code>, even though the LVM setup is healthy.
==== Installation ISO ====
* The version of BusyBox shipped with the ISO was updated to version 1.36.1.
* The Proxmox-provided Ceph Quincy repository is set up by default, providing updates for a modern Ceph client even if a hyper-converged Proxmox Ceph setup is not in use.
* Detection of unreasonable system time.
: If the system time is older than the time the installer was created, the installer notifies the user with a warning.
    <!-- https://git.proxmox.com/?p=pve-installer.git;a=commit;h=d0817324af4f25119ae5284720088198dd7985e3 -->
* <code>ethtool</code> is now shipped with the ISO and installed on all systems.
* <code>systemd-boot</code> is provided by its own package instead of <code>systemd</code> in Debian Bookworm and is installed with the new ISO.
==== Notable bug fixes and general improvements ====
* Most git repositories now have a <code>dsc</code> Makefile target to create a Debian Source Package and additionally a <code>sbuild</code> target to create the source package and build it using [https://wiki.debian.org/sbuild sbuild].
<div id="8.0-beta-known-issues"></div>
=== Known Issues & Breaking Changes ===
==== Browser Support ====
This Proxmox VE release uses some newer JavaScript, CSS and/or HTML features and requires the following browser version baseline:
* Firefox 79 (released on 2020-07-28)
* Chrome/Chromium 85 (released 2020-08-25)
* Safari 14 (released 2020-09-16)
This baseline is well below our general recommendation to use a browser released within the last twelve months.
If you are using older browser versions, we strongly recommend that you upgrade to the latest available version for best compatibility and to avoid many (security) issues.
==== Storage ====
* Storage activation now checks that every content type uses a different directory, in order to prevent unexpected interactions between different content types.
*: This breaks setups in which the <code>content-dirs</code> option was set up to map different content types to the same directory, and setups in which some content directories were manually set up as symlinks to a common directory.
* The <code>mkdir</code> option is now deprecated; it was split into <code>create-base-path</code> and <code>create-subdirs</code> as a more fine-grained replacement.
*: While Proxmox VE will continue to support this option during the 8.x release series, we recommend switching to the new options now.
==== QEMU ====
* QEMU 8.0 removed some previously deprecated features. Proxmox VE 8 no longer uses the <code>-chardev tty</code> and <code>-chardev parport</code> aliases, and none of the other removed features were used by the Proxmox VE stack. Thus, only installations using <code>args</code> inside their guest configs need to check compatibility. See the [https://wiki.qemu.org/ChangeLog/8.0#Removed_features_and_incompatible_changes QEMU changelog on the topic] for details.
** The removed features in QEMU 8.0 also include the Proxmox VE-specific, but unused/deprecated QMP commands <code>get_link_status</code>, <code>snapshot-drive</code> and <code>delete-drive-snapshot</code>.
==== Container ====
* The <code>lxc.id_map</code> configuration key has been deprecated for a long time by <code>lxc</code> and was replaced by <code>lxc.idmap</code>. With this release, its presence is considered an error. The key can only be present if it was manually added to a guest configuration.
* <code>lxcfs</code> is now built with <code>fuse 3</code>. This upgrade is done on a major release, since all running containers need to be restarted afterwards.
==== Authentication & Permission System ====
* There is a new <code>SDN.Use</code> privilege (and corresponding <code>PVESDNUser</code> role) that is required to configure virtual NICs in guests. See SDN section above for details!
* The <code>Permissions.Modify</code> privilege has been removed from the <code>PVESysAdmin</code> and <code>PVEAdmin</code> roles, in order to reduce the chances of accidentally granting the privilege to modify privileges. If a particular setup requires a role with this privilege, it is necessary to define a new custom role and use that instead of <code>PVESysAdmin</code>/<code>PVEAdmin</code>.
* Users with <code>VM.Allocate</code>/<code>Datastore.Allocate</code>/<code>Pool.Allocate</code> privileges, but without the <code>Permissions.Modify</code> privilege, can now only assign a subset of their own privileges to specific VM/storage/pool paths. Previously they could assign any role to specific VM/storage/pool paths. As the privileges usable on specific VM/storage/pool paths were quite limited, this did not allow privilege escalation, but restricting the capabilities now allows adding more powerful privileges in future versions without breaking changes.
* Editing backup jobs now generally requires the <code>Datastore.Allocate</code> privilege on the target storage, and editing backup jobs with <code>dumpdir</code> requires root privileges.
* User accounts will now be locked after too many attempts to authenticate with a second factor. This is intended to protect against an attacker who has obtained the user password and attempts to guess the second factor. Unlocking requires either a successful login with a recovery key or a manual unlock by an administrator.
==== Others ====
* The API can now handle array-type data differently, while staying backward compatible.
*: Instead of passing the individual elements separated by null bytes, you can pass the data directly as an array.
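The two encodings can be contrasted with a small Python sketch (the helper names are hypothetical, shown only to illustrate the difference):

```python
def encode_array_legacy(values):
    """Pre-existing convention: one string, elements separated by NUL bytes."""
    return "\0".join(values)


def encode_array(values):
    """New convention: hand the list to the API parameter directly."""
    return list(values)
```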
== Proxmox VE 7.4 ==
Released 23 March 2023: See [[Downloads]]
* Based on Debian Bullseye (11.6)
* Latest 5.15 Kernel as stable default
* Newer 6.2 kernel as opt-in
* QEMU 7.2
* LXC 5.0.2
* ZFS 2.1.9
* Ceph Quincy 17.2.5
* Ceph Pacific 16.2.11
=== Highlights ===
* Proxmox VE now provides a dark theme for the web interface.
* Guests in resource tree can now be sorted by their name, not only VMID.
* The HA Cluster Resource Scheduler (CRS) stack was expanded to rebalance VMs & CTs automatically on start, not only recovery.
* Added CRM command to the HA manager to switch an online node manually into maintenance mode (without reboot).
=== Changelog Overview ===
==== Enhancements in the web interface (GUI) ====
* Add a fully-integrated "Proxmox Dark" color theme variant of the long-time Crisp light theme.
: By default, the <code>prefers-color-scheme</code> media query from the Browser/OS will be used to decide the default color scheme.
: Users can override the theme via a newly added <code>Color Theme</code> menu in the user menu.
* Add "Proxmox Dark" color theme to the Proxmox VE reference documentation.
: The <code>prefers-color-scheme</code> media query from the Browser/OS will be used to decide if the light or dark color scheme should be used.
: The new dark theme is also available in the [https://pve.proxmox.com/pve-docs/api-viewer/index.html#/nodes/{node}/version Proxmox VE API Viewer].
* Local storage types that are located on other cluster nodes can be added.
: A node selector was added to the <code>Add Storage</code> wizard for the ZFS, LVM, and LVM-Thin storage types.
* Automatically redirect HTTP requests to HTTPS for convenience.
: This avoids "Connection reset" browser errors that can be confusing, especially after setting up a Proxmox VE host the first time.
* Task logs can now be downloaded directly as text files for further inspection.
* It is now possible to choose the sort-order of the resource tree and to sort guests by name.
* Fix loading of changelogs in case additional package repositories are configured.
* Improve editing of backup jobs:
** Add a filter to the columns of the guest selector.
** Show selected, but non-existing, guests.
* Remove the "Storage View" mode from the resource tree panel.
: This mode only showed the storages of a cluster and did not provide additional information over the folder or server views.
* The Proxmox Backup Server specific columns for verification and encryption status can now be used for sorting in the backup content view of a storage.
* Polish the user experience of the backup schedule simulator by splitting the date and time into two columns and better check the validity of the input fields.
* Improve accessibility for screens at our minimum required display resolution of 720p:
** Add a scrolling overflow handler for the toolbar of the backup job view.
** Rework the layout of the backup job info window for better space usage and reduce its default size.
* Fix search in "Guests without backup" window.
* Node and Datacenter resource summary panels now show the guest tag column by default.
* Show role privileges when adding permissions. <!-- TODO: clarify, not so telling for users -->
* Allow the use of the <code>-</code> character in snapshot names, as the backend has supported this for some time.
* Update the noVNC guest viewer to upstream version 1.4.0.
* Fix overly-strict permission check that prevented users with only the <code>VM.Console</code> privilege from accessing the noVNC console.
* Align permission check for bulk actions with the ones enforced by the API.
: Switch the check from the <code>Sys.PowerMgmt</code> privilege to the correct <code>VM.PowerMgmt</code> one.
* Invalid entries in advanced fields now cause the advanced panel to unfold, providing direct feedback.
* HTML-encode API results before rendering as additional hardening against XSS.
* Fix preselection of tree elements based on the URL after login.
* Fix a race condition when switching between the content panels of two storages while one of them is still loading.
* Metric server: Expose the <code>verify-certificate</code> option for InfluxDB as an advanced setting.
* Replace non-clickable checkboxes with icons for backup jobs, APT repositories, and replication jobs.
* Fix an error when editing LDAP sync settings while only a single parameter is set to a non-default value.
* Add missing online-help references for various panels and edit windows.
* Improved translations, among others:
** Arabic
** French
** German
** Italian
** Japanese
** Russian
** Slovenian
** Simplified Chinese
==== Virtual Machines (KVM/QEMU) ====
* New QEMU Version 7.2:
** QEMU 7.2 fixes issues with Windows Guests, installed from a German ISO, during installation of the VirtIO drivers.
** Fix crash of VMs with iSCSI disks on a busy target.
** Fix rare hang of VMs with IDE/SATA during disk-related operations like backup and resize.
** Many more changes, see the [https://wiki.qemu.org/ChangeLog/7.2 upstream changelog] for details.
* Taking a snapshot of a VM with large disks following a PBS backup was occasionally very slow. This has been fixed ([https://bugzilla.proxmox.com/show_bug.cgi?id=4476 issue 4476]).
* Running <code>fsfreeze/fsthaw</code> before starting a backup can now optionally be disabled in the QEMU guest agent options.
: Note: Disabling this option can potentially lead to backups with inconsistent filesystems and should therefore only be disabled if you know what you are doing.
* Cloning or moving a disk of an offline VM now also takes the configured bandwidth limits into consideration ([https://bugzilla.proxmox.com/show_bug.cgi?id=4249 issue 4249]).
* Fix an issue with EFI disks on ARM 64 VMs.
* Add safeguards preventing the moving of disks of a VM using <code>io_uring</code> to storage types that have problems with <code>io_uring</code> in some kernel versions.
* General improvements to error reporting. For example, the error messages from <code>query-migrate</code> are added when a migration fails and a configured, but non-existing physical CD-ROM drive, results in a descriptive error message.
* Allow users to destroy a VM even if it's suspended.
* Fix a race condition when migrating VMs on highly loaded or slower clusters, where the move of the guest's config file to the target node's directory might not yet have been propagated to the target node.
* Rolling back a VM to a snapshot with state (memory) and still selecting to start the VM after the rollback does not cause an error anymore (rollbacks with state result in a running VM).
* Deleting snapshots of running VMs, with a configured TPM on Ceph storages with <code>krbd</code> enabled, is now possible.
* Fix command execution via <code>pvesh</code> and QEMU guest agent in VMs on other cluster nodes.
* Update Linux OS version description to include 6.x kernels.
==== Containers (LXC) ====
* Update to LXC 5.0.2 and lxcfs 5.0.3.
* Allow <code>riscv32</code> and <code>riscv64</code> container architectures through the <code>binfmt_misc</code> kernel capability.
: After installing the <code>qemu-user-static</code> and <code>binfmt-support</code> packages, one can run a RISC-V based rootfs image as a container directly on an x86_64/amd64 Proxmox VE host.
* Create <code>/etc/hostname</code> file on Alma Linux, CentOS, and Rocky Linux containers. With this, DHCP requests sent by the container now include its hostname.
* Add option to disconnect network interfaces of containers, similarly to network interfaces of VMs.
* Make container start more resilient after OOM or node crash (empty AppArmor profile files do not cause a crash).
* Improve cleanup upon failed restores (remove the container configuration if restore fails due to an invalid source archive, remove firewall configuration).
* Ignore bind or read-only mount points when running <code>pct fstrim</code>.
* During container shutdown, wait with a timeout in case <code>lxc-stop</code> fails. This prevents the shutdown task from running indefinitely and having to be aborted manually.
* Templates:
** Updated Debian Bullseye template from 11.3 to 11.6.
** Updated Proxmox Mail Gateway template from 7.0 to 7.2.
==== General improvements for virtual guests ====
* The "Bulk Stop" action was renamed to "Bulk Shutdown" to better describe its behavior.
* Allow overriding timeout and force-stop settings for bulk shutdowns.
* Allow bulk actions even if the user does not have the required privileges for all guests, as long as they have the privileges for each guest involved in the bulk action.
==== HA Manager ====
* Add CRM command to switch an online node manually into maintenance (without reboot).
: When a node goes into maintenance mode, all active HA services are moved to other nodes and automatically migrated back once maintenance mode is disabled again.
* The HA Cluster Resource Scheduler (CRS) stack was expanded to rebalance VMs & CTs automatically on start, not only recovery.
: One can now enable the <code>ha-rebalance-on-start</code> option in <code>datacenter.cfg</code> or via the web UI to let the CRS balance services on start-up.
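A sketch of the corresponding <code>datacenter.cfg</code> line (this example also assumes the static CRS scheduler is selected; adjust to your setup):

```
crs: ha=static,ha-rebalance-on-start=1
```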
* A new intermediate state <code>request_started</code> has been added for the stop -> start transitions of services.
* Improve the scheduling algorithm for some cases:
** Make CPU load matter more if there is no memory load at all.
*: This avoids boosting tiny relative differences over higher absolute loads.
** Use a non-linear averaging algorithm when comparing loads.
*: The previous algorithm was blind in cases where the static node stats are the same and there is (at least) one node that is over-committed compared to the others.
==== Improved management for Proxmox VE clusters ====
* Ensure that the current working directory is not in <code>/etc/pve</code> when setting up the cluster using the <code>pvecm</code> CLI tool.
: Since <code>pmxcfs</code>, which provides the mount point for <code>/etc/pve</code>, is restarted when you set up the cluster, a confusing "Transport endpoint is not connected" error message would be reported otherwise.
* The <code>proxmox-offline-mirror</code> tool now supports fetching data through an HTTP proxy.
* Fetching the changelog of package updates has been improved:
** The correct changelog will be downloaded if repositories from multiple Proxmox projects are configured, for example if one has Proxmox VE and Proxmox Backup Server installed on the same host.
** Support getting the changelog for packages coming from a Debian Backports repository.
* You can now configure whether you want to receive a notification mail for newly available package updates.
* The wrapper for <code>acme.sh</code> DNS-validation plugins received fixes for two small issues:
** A renaming of parameters for the <code>acmedns</code> plugin was pulled in from upstream.
** A missing method was added to fix an issue with the <code>dns_cf.sh</code> plugin.
* Improved <code>pvereport</code>: In order to provide a better status overview, add the following information:
** <code>/etc/pve/datacenter.cfg</code>
** <code>ceph health detail</code>
* OpenSSL errors are now reported in full to ease troubleshooting when managing a node's certificates.
* Add missing or newly added/split-out packages to the Proxmox VE apt version API, also used for the <code>pveversion -v</code> call:
** <code>proxmox-mail-forward</code>
** <code>proxmox-kernel-helper</code>
** <code>libpve-rs-perl</code>
==== Backup/Restore ====
* Suppress harmless but confusing "storing login ticket failed" errors when backing up to Proxmox Backup Server.
==== Storage ====
* It is now possible to override the specific subdirectories for content (ISOs, container templates, backups, guest disks) to custom values through the <code>content-dirs</code> option.
* The CIFS storage type can now also directly mount a specific subdirectory of a share, thus better integrating into already existing environments.
* The availability check for the NFSv4 storage type was reworked in order to work with setups running without <code>rpcbind</code>.
* Fix ISO upload via HTTP in a few edge cases (newlines in filenames, additional headers, not sent by common browsers).
* Fix caching volume information for systems which both have a local ZFS pool storage and a ZFS over iSCSI storage configured during guest disk rescan.
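As a sketch of the new option, a directory storage entry in <code>/etc/pve/storage.cfg</code> could override its content subdirectories roughly like this (storage name and paths are made up for illustration; see the storage documentation for the authoritative syntax):

```
dir: backup-store
        path /mnt/backup-disk
        content iso,backup
        content-dirs iso=custom/iso,backup=custom/dumps
```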
==== Storage Replication ====
* Extend support for online migration of replicated VM guests.
: One can now also migrate VMs if they include snapshots, as long as those are only on replicated volumes.
==== Disk Management ====
* Fix showing SMART values for the correct NVMe device.
==== Ceph ====
* Expose more detailed OSD information through the API and use that to add an OSD Detail window in the web interface.
: You can now check the backing device, logical volume info, front- and back- network addresses and more using the new OSD detail window.
* Show placement groups per OSD in the web interface.
* Improve schema description for various Ceph-related API endpoints.
: This also improves the output of the api-viewer and the <code>pvesh</code> tool for these endpoints.
* Fix broken <code>cmd-safety</code> endpoint that made it impossible for non-root users to stop/destroy OSDs and monitors.
* Allow admins to easily set up multiple MDS per node to increase redundancy if more than one CephFS is configured.
==== Access Control ====
* ACL computation was refactored causing a significant performance improvement (up to a factor of 450) on setups with thousands of entries.
* It is now possible to override the <code>remove-vanished</code> settings for a realm when actively syncing it in the GUI.
* Allow quoted values in LDAP DN attributes when setting up an LDAP realm.
==== Firewall & Software Defined Networking ====
* <code>ipsets</code> can be added even with set host-bits. For example, 192.0.2.5/24 is now a valid input. Host-bits get cleared upon parsing (resulting in 192.0.2.0/24 in the example).
* Firewall logs can be restricted to a timeframe with the <code>since</code> and <code>until</code> parameters of the API call.
* The conditional loading of <code>nf_conntrack_helpers</code> was dropped for compatibility with kernel 6.1.
* Fixed a corner case in which link-local IPv6 addresses were still added on the internal guest-communication devices.
* The MTU is now set to the value of the parent bridge on the automatically generated VLAN-bridge devices for non-VLAN-aware bridges.
* The EVPN plugin now also merges a defined <code>prefix-list</code> from <code>/etc/frr/frr.conf.local</code>.
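The host-bit clearing described above for ipsets corresponds to the standard network-address computation. It can be sketched with Python's <code>ipaddress</code> module (the actual pve-firewall code is written in Perl; this is only an illustration of the behavior):

```python
import ipaddress

def clear_host_bits(cidr: str) -> str:
    """Normalize a CIDR entry by clearing any set host bits.

    strict=False accepts inputs like 192.0.2.5/24; .network_address
    then yields the network with the host bits cleared.
    """
    net = ipaddress.ip_network(cidr, strict=False)
    return f"{net.network_address}/{net.prefixlen}"

print(clear_host_bits("192.0.2.5/24"))  # -> 192.0.2.0/24
```

The same logic applies to IPv6 entries, e.g. <code>2001:db8::1/64</code> becomes <code>2001:db8::/64</code>.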
==== Installation ISO ====
* The version of BusyBox shipped with the ISO was updated to version 1.36.0.
* The EFI System Partition (ESP) now defaults to 1 GiB in size if the root disk partition (<code>hdsize</code>) is bigger than 100 GB.
* UTC can now be selected as timezone during installation.
==== Notable bug fixes ====
* Fixed an issue with OVS network configuration where the node would lose connectivity when upgrading Open vSwitch (see https://bugs.debian.org/1008684).
* A race condition in the API servers causing failed tasks when running a lot of concurrent API requests was fixed.
<div id="7.4-known-issues"></div>
=== Known Issues & Breaking Changes ===
* In QEMU 7.2, it is a hard error if audio initialization fails rather than a warning.
: This can happen, for example, if you have an audio device with SPICE driver configured but are not using SPICE display. To avoid the issue, make sure the configuration is valid.
* With <code>pve-edk2-firmware >= 3.20221111-1</code> we know of two issues affecting specific set ups:
** virtual machines using OVMF/EFI with very little memory (< 1 GiB) and certain CPU types (e.g. host) might no longer boot.
*: Possible workarounds are to assign more memory or to use <code>kvm64</code> as the CPU type.
*: The background for this problem is that OVMF << 3.20221111-1 used to guess the address (bit) width only from the available memory, but now there is more accurate detection that better matches what the configured CPU type provides. The more accurate address-width can lead to a larger space requirement for page tables.
** The (non-default) <code>PVSCSI</code> disk controller might cause regressions where SCSI disks are not detected inside the guest.
*: We're still investigating this; until then, you can evaluate whether your VM really requires the non-standard PVSCSI controller, use the SATA bus instead, or keep using the older pve-edk2-firmware package.
* Because of hardware errata, the <code>XSAVES</code> feature had to be disabled by the kernel for certain AMD CPUs, resulting in start issues with VMs and certain VM CPU types like <code>EPYC-ROME</code>.
: For affected VMs you will find a message like <code>host doesn't support requested feature: CPUID.0DH:EAX.xsaves [bit 3]</code> logged in the kernel log.
: As a workaround, use a different CPU model for the VM, like <code>host</code>, <code>EPYC</code>, or a custom CPU model that has the <code>xsaves</code> flag disabled.
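A custom CPU model as mentioned in the workaround could look roughly like the following entry in <code>/etc/pve/virtual-guest/cpu-models.conf</code> (the model name is made up; check the custom-CPU-model documentation for the exact option set):

```
cpu-model: epyc-rome-no-xsaves
    reported-model EPYC-Rome
    flags -xsaves
```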
== Proxmox VE 7.3 ==
Released 22. November 2022: See [[Downloads]]
* Based on Debian Bullseye (11.5)
* Latest 5.15 Kernel as stable default (5.15.74)
* Newer 5.19 kernel as opt-in
* QEMU 7.1
* LXC 5.0.0
* ZFS 2.1.6
* Ceph Quincy 17.2.5
* Ceph Pacific 16.2.10
=== Highlights ===
* Ceph Quincy support. It is also the default for new installations
* Initial Cluster Resource Scheduling (CRS) support
* Tags for Virtual Guests in web-interface for better categorizing/searching/...
* Support for Proxmox Offline Mirror to update and manage subscriptions of air gapped systems
=== Changelog Overview ===
==== Enhancements in the web interface (GUI) ====
* Show tags for virtual guests in the resource tree and allow edits.
* Improved UX for mediated PCIe devices - they now also show the name.
* Improved Certificate View - for example for certificates with many SANs.
* Node disk UI: gracefully handle adding the same local storage (e.g. a zpool with the same name) to multiple nodes.
* Expose node configurations like <code>wakeonlan</code> and the delay for starting guests on boot for each node in the web interface.
* Improved translations, among others:
** Arabic
** Dutch
** German
** Italian
** Polish
** Traditional Chinese
** Turkish
* Improve rendering of complex formats in the api-viewer widget
==== Virtual Machines (KVM/QEMU) ====
* New major version of QEMU 7.1
* Support for pinning a VM to certain CPU cores via <code>taskset</code>
* In the web interface, new VMs default to <code>iothread</code> enabled and <code>VirtIO SCSI-Single</code> selected as SCSI controller (if supported by the guest OS)
* New VMs use <code>qemu-xhci</code> USB controller, if supported by the guest OS (Windows >= 8, Linux >= 2.6)
: USB devices can now be hot-plugged
: Pass through up to 14 USB devices (previously 5) to a virtual machine
* Align <code>virtio-net</code> parameters for the receive- (rx) and transmit- (tx) queue sizes with upstream best-practices
* Use the more efficient <code>packed</code> format for multi-queues
* Allow up to 64 rx and tx multi-queues (previously 16)
* Cloud-init improvements: changes to the cloud-init settings now can be made available in the config-drive ISO inside the guest directly
* Disable <code>io_uring</code> for CIFS backed VM disks by default - to mitigate an issue with CIFS and <code>io_uring</code> present since kernel 5.15
* Improved handling for VMs with passed through PCIe-devices:
** Cleanup of created mdev devices, even if the VM could not be started
** Longer timeouts between sending <code>SIGTERM</code> and <code>SIGKILL</code> to allow for a cleanup upon termination
** Prevent suspending a VM with passed through PCIe device, as the device's state cannot be saved
==== Containers (LXC) ====
* New major LXC version 5.0.0
* More robust cgroup mode detection, by explicitly checking the type of <code>/sys/fs/cgroup</code>
* Support for newer distribution versions:
** Fedora 37 and preparation for 38
** Devuan 12 Daedalus
** Preparation for Ubuntu 23.04
* Bind-mounts are now also directly applied to a running container
* Fix a bug when cloning a locked container: It does not create an empty config anymore, but fails correctly
* Improvements to the systemd version detection inside containers
* Volumes are now always deactivated upon successful <code>move_volume</code>, not only if the source volume is to be removed: preventing dangling krbd mappings
* New pre-made templates available for:
** AlmaLinux 9
** Alpine 3.16
** Centos 9 Stream
** Fedora 36
** Fedora 37
** OpenSUSE 15.4
** Rocky Linux 9
** Ubuntu 22.10
* Refreshed existing templates:
** Gentoo (2022-06-22-openrc)
** ArchLinux (2022-11-11)
==== General improvements for virtual guests ====
* Add option to disable MAC learning on bridges (the guest-mac addresses are added statically, no broadcast packets are flooded to those ports, thus no spurious answers are replied, which broke certain hoster network setups)
* Improve cleanup of backup-jobs upon purging the configuration for a removed VM
* Optionally restart a guest after rollback to snapshot
* Framework for remote migration to cluster-external Proxmox VE hosts
==== HA Manager ====
* Cluster Resource Scheduling (CRS) tech-preview: Improve the selection of a new host node when the HA Manager needs to relocate an HA service, in the following cases:
** recovering it after fencing its node
** on node shutdown, if the <code>migrate</code> shutdown-policy is enabled
** on HA group configuration changes, if the current node is not in the highest priority set anymore
: Use the TOPSIS multi-criteria decision analysis method for finding a better target
: Having established the CRS foundation, Proxmox developers plan to extend it with a dynamic load scheduler and live load balancing in future releases
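TOPSIS ranks alternatives by their distance to an ideal best and an ideal worst point. The minimal Python sketch below illustrates the general method only; it is not the HA Manager's actual implementation (which is written in Perl and uses its own criteria and weights):

```python
import math

def topsis(alternatives, weights, benefit):
    """Score alternatives (rows of criteria values) by TOPSIS closeness.

    benefit[j] is True if higher is better for criterion j
    (e.g. free memory), False if lower is better (e.g. CPU load).
    Higher scores are better.
    """
    n = len(alternatives[0])
    # Vector-normalize each criterion column, then apply the weights.
    norms = [math.sqrt(sum(row[j] ** 2 for row in alternatives)) or 1.0
             for j in range(n)]
    v = [[weights[j] * row[j] / norms[j] for j in range(n)]
         for row in alternatives]
    best = [max(col) if benefit[j] else min(col)
            for j, col in enumerate(zip(*v))]
    worst = [min(col) if benefit[j] else max(col)
             for j, col in enumerate(zip(*v))]
    scores = []
    for row in v:
        d_best = math.sqrt(sum((row[j] - best[j]) ** 2 for j in range(n)))
        d_worst = math.sqrt(sum((row[j] - worst[j]) ** 2 for j in range(n)))
        total = d_best + d_worst
        scores.append(d_worst / total if total else 0.0)
    return scores
```

For two hypothetical nodes with (CPU load, memory load) of (0.1, 0.2) and (0.8, 0.9), both criteria being costs, the less loaded node receives the higher closeness score and would be picked as migration target.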
==== Cluster ====
* Fix a permission issue in the <code>QDevice</code> status API
: The API call for obtaining the QDevice status needs privileged access, but was run directly in the unprivileged daemon, leading to spurious permission-denied errors
* Fix race-condition between writing <code>corosync.conf</code> and reloading <code>corosync</code> on update
==== Backup/Restore ====
* Improved namespace support for the Proxmox Backup Server storage type
* Improvements to the parsing of the template variables of the backup notes
: The notes template for backups, introduced in Proxmox VE 7.2, received a number of bug-fixes and improvements
* Added option <code>repeat-missed</code>, allowing one to opt-out from the default behavior of running missed jobs on a new boot
* The VM used for single-file-restore with QEMU guests now has support for increasing its memory (e.g. to handle many ZFS datasets inside the guest)
* Improved configuration validation with Proxmox Backup Server encryption (for example, do not fall back to plain-text if the encryption key is missing)
* When deleting <code>vzdump</code> backup files the corresponding notes and log are also removed.
==== Storage ====
* Support ZFS dRAID vdevs when creating a zpool via the API & GUI. dRAID improves recovery times when a disk failure occurs.
: A dRAID setup makes most sense for either a large (15+) amount of disks, or a medium+ amount of huge disks (15+ TB).
* Align SMART status API with Proxmox Backup Server fields
* Support Notes and the Protected setting for backups stored on BTRFS storage types.
==== Storage Replication ====
* Don't send mails on bogus errors: e.g. when a replication could not be started because the guest is currently being migrated.
* Upon replication failure, the first three retries are scheduled at shorter intervals before falling back to retrying every 30 minutes - improving consistency during short network hiccups.
* Clean up the replication state of guests running on another node, as can happen after an HA-fence.
* Make interaction of replication state and configuration changes more robust: e.g. in the case of first removing all volumes from one storage, and then removing the VM before the next replication was run.
==== pve-zsync ====
* Support the <code>--compressed</code> option, resulting in an already compressed dataset being sent as-is to the destination (removing the need to decompress and potentially re-compress on the target).
==== Ceph ====
* Improved UX when creating new clusters
: The network selection and duplicate IP checking was improved.
: It's no longer possible to run into an error by selecting a different node for the first monitor than the one you are connected to (this prevents trying to create a monitor on nodes without installed Ceph packages).
* Added heuristic checks if it is OK to stop or remove a ceph MON, MDS, or OSD service.
: The Web UI will now show a warning if the removal / stop of a service will affect the operation of the cluster.
* Support for installing Ceph Quincy via Proxmox VE CLI and GUI.
==== Access Control ====
* Improve naming of WebAuthn parameters in the GUI.
* Increase OpenID code size - compatibility with Azure AD as OpenID provider.
* Only require write-access (quorum) to TFA config for recovery keys.
: All other TFA methods only need read-access to the config. This makes it possible to login to a node, which is not in the quorate partition, even if your user has TFA configured.
* Fix a hard to trigger update issue with rotating the private key used for signing the access tickets, resulting in falsely rejected API calls.
* Fix creation of tokens for other users by anyone except <code>root@pam</code>.
: A bug prevented user A from creating a token for user B, despite user A having the relevant permissions.
* Better logging for expired tokens.
==== Firewall, Networking & Software Defined Networking (tech-preview) ====
* Fix setting MTU on setups using OVS.
* <code>ifupdown2</code> now handles <code>point-to-point</code> settings correctly
* <code>ifupdown2</code> can now add an OVSBridge with a VLAN tag as port to another OVSBridge (fakebridge)
* Fix updating MTU if a bridge-port is plugged into a different bridge.
* Firewall security groups can now be renamed with the changes directly being picked up from <code>pve-firewall</code>
* Stricter parsing of guest config files in <code>pve-firewall</code>, making it possible to actually disable the guest firewall while keeping the config file around.
* Improved handling of externally added ebtables rules: if a rule was added to a table other than <code>filter</code>, <code>pve-firewall</code> still tried to parse it and add it to the <code>filter</code> table upon rule compilation.
==== Improved management for Proxmox VE clusters ====
* Proxmox Offline Mirror: The tool supports subscriptions and repository mirrors for air-gapped clusters. The newly added [https://pom.proxmox.com proxmox-offline-mirror] utility can now be used to keep Proxmox VE nodes without access to the public internet up to date and running with a valid subscription.
* New mail-forwarding binary <code>proxmox-mail-forward</code>: no functional change, but unifies the configuration for sending the system-generated mails to the email address configured for <code>root@pam</code>
* Improvements to <code>pvereport</code> - providing a better overview of the status of a Proxmox VE node the following information was added/improved:
** ceph-device-list
** stable ordering of guests and network information
** proxmox-boot-tool output
** arcstat output
==== HTTP & REST-API Daemon ====
* File-uploads now support filenames with spaces.
* File-uploads now support files with a size smaller than 16 KB.
* Improved input sanitization of API URLs as additional security layer.
==== Installation ISO ====
* Fixed the permissions of /tmp inside the installation environment (e.g. for the edge-case of users installing software manually from a debug-shell)
* Make the size requirement of 8 GB a warning - most installations can run with less space, but might need adaptation after install (e.g. moving the log-destination to another device) - keep 2 GB as hard limit
* Rework auto-sizing of root, guest-data and SWAP partitions & avoid creating the LVM-Thin pool in favor of root partition space on small setups.
==== Mobile App ====
* update to flutter 3.0
* support and target Android 13
* fix buttons hidden behind Android's soft nav buttons
* provide feedback about running backup tasks: a bug prevented any visual feedback in the app, when starting a backup (although the backup was started).
<div id="7.3-known-issues"></div>
=== Known Issues & Breaking Changes ===
* Virtual Guest Tags:
*: Duplicate tags are now filtered out when updating the <code>tag</code> property of a virtual guest.
*: Duplication detection and sorting is handled case-insensitive by default, all tags are treated lower case. One can change that in the <code>datacenter.cfg</code> configuration using the <code>case-sensitive</code> boolean property of the <code>tag-style</code> option.
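Assuming the documented <code>tag-style</code> option, enabling case-sensitive tag handling in <code>/etc/pve/datacenter.cfg</code> would look roughly like this (other sub-options of <code>tag-style</code> omitted):

```
tag-style: case-sensitive=1
```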
==Proxmox VE 7.2==
Released 4. May 2022: See [[Downloads]]
* Based on Debian Bullseye (11.3)
* Ceph Pacific 16.2.7
* Ceph Octopus 15.2.16 (continued support until mid 2022)
* Kernel 5.15.30
* LXC 4.0
* QEMU 6.2
* ZFS 2.1.4
'''Changelog Overview'''
* Enhancements in the web interface (GUI):
** Enhance backup restore dialog by allowing administrators to override some basic settings for the virtual guest being restored.
** Allow for safe reassignment of a VM disk or CT volume to another guest on the same node.
** Support configuring the range that new VMIDs are selected from when creating a VM or CT.
*: You can set the upper and lower boundaries in the datacenter's options panel. Setting <code>lower</code> equal to <code>upper</code> disables auto-suggestion completely.
** Enable passing the keyring secret for external Ceph clusters in the web interface when adding an RBD or CephFS storage to Proxmox VE.
** Render "ghost" OSDs in the Ceph dashboard to make admins aware of such leftover entries.
** Provide the "next-run" time, i.e., the time a backup job will be executed next, in the datacenter's backup job overview.
** Display a prominent ''Start Guest'' button in the Console view of guests, if they are not running.
** Improve performance for the task log viewer in the GUI.
** Move the actions affecting a disk or volume of a virtual guest to its own submenu.
** Switch some bitmap-based VM & CT resource icons over to vector based SVG.
** Allow assigning up to 16 PCI(e) devices to a VM via the web interface.
*:The previous limit of 5 had already been expanded in the backend.
** Improved translations, among others:
*** Arabic
*** Basque
*** German
*** Japanese
*** Polish
*** Traditional Chinese
*** Turkish
* Virtual Machines (KVM/QEMU)
** Add support for the accelerated <code>virtio-gl</code> (VirGL) display driver.
** Enable SPICE support by default for the VirtIO and VirGL display types.
** If VMs were paused before migration starts, keep them paused after migration finishes.
** Support for importing disk-images via the API - currently not available in the GUI.
** Support for reading OVF files via the API - currently not available in the GUI.
** The PCI vendor and device IDs can be overridden for a configuration passthrough entry, and do not need to correspond to the actual IDs of the physical device.
** Increase the timeout for the <code>fsthaw</code> operation issued through the QEMU guest agent.
** Fixed issues that occur in certain storage layouts, when migrating local VM state volumes, such as the TPM state or cloud-init.
** Improved clean-up of unneeded resources upon restore, for example, if a restore fails, drop the left-over snapshots of replaced disks and cloud-init disks.
** Improved consistency when cloning from a snapshot (rather than from the current state) - the EFI disk's state is also taken from the snapshot.
* Containers (LXC)
** Support for NixOS container templates
** Support Ubuntu 22.04, Devuan 4.0 and the upcoming Fedora 36
** Provide new pre-made templates for:
*** Ubuntu 22.04
*** Devuan 4.0
*** Alpine 3.15
** Refreshed some existing templates:
*** CentOS 8 Stream
*** Arch Linux
*** Debian 11 (to the 11.3 point release)
** Containers with multiple mount points can now be successfully trimmed with <code>pct fstrim</code>, even if one of the mount points does not support trimming, for example, because it's on a ZFS subvolume.
** Add target-storage support to migration.
** Allow <code>lazytime</code> mount option for containers.
** Fix an issue with wrong ownership for files/directories created by the container stack on unprivileged containers.
*: Files on Red Hat and SUSE based templates were created as root on the host and showed up as owned by nobody inside the container, resulting in NetworkManager failing.
* General improvements for virtual guests
** Better handling of snapshot removal (for example, after finishing a backup) when storage replication is configured.
* HA Manager
** Improve handling of huge numbers of services per node, by improving the LRM scheduler that starts workers.
*: <code>ha-manager</code> uses a statically configurable number of workers to handle the services, by scheduling jobs in repeating scheduling rounds with a time-slice of 10s max.
*: Since the services that have already started must also be checked to ensure that they are still in the target state, it could happen during large deployments that the services starved at the end of the queue.
*: The issue was fixed by additionally sorting the services by the amount of time in which they hadn't been scheduled. This increased the number of services that can be configured well beyond the previous implementation.
** Handle an edge case where a node would get stuck in <code>fence</code> state, if all services were removed from it before the node actually fenced itself.
** The ha-simulator can now better help to test races in scheduling (on the different nodes) by introducing a <code>skip-round</code> button.
* Cluster
** A segfault triggered with certain usage patterns was fixed.
*: The issue was reported in the context of a cluster having many sessions opened with username and password (instead of API tokens) by a monitoring system.
** The lower and upper boundaries to be considered when selecting the next free VMID can now be set in the <code>datacenter.cfg</code>.
*: The settings are available over the web interface's <code>Datacenter</code> -> <code>Options</code> panel.
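A sketch of the corresponding <code>datacenter.cfg</code> entry, based on the <code>next-id</code> option described above (boundary values chosen purely for illustration):

```
next-id: lower=1000,upper=2000
```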
* Backup/Restore
** Allow for the configuration of a notes-template for backup jobs, which will be evaluated once the job runs and added to each resulting backup of said job.
*: With this, useful meta-information can be added, allowing backups to be easier to distinguish, so that a relevant one can be found quickly if required.
*: Note templates can contain template variables like <code><nowiki>{{guestname}}</nowiki></code> or <code><nowiki>{{cluster}}</nowiki></code>.
** The next-event scheduling code was updated and now uses the more powerful one from Proxmox Backup Server via a Perl to rust binding ([https://git.proxmox.com/?p=perlmod.git;a=blob;f=README.md;hb=HEAD perlmod]).
*: In addition to the existing weekday, time and time range support, you can now specify:
*** Dates: For example <code>2015-10-21</code> or <code>*-12-31 23:50</code> (New Year's Eve, 10 minutes before midnight every year)
*** Date ranges: <code>Sat *-1..7 15:00</code> (first Saturday every month at 15:00)
*** Repeating ranges <code>Sat *-1..7 */30</code>  (first Saturday every month, every half hour)
** Add a 'max-protected-backups' property to the storage definitions, to address issues with running into the configured general limit, due to having protected backups on the storage which should not count towards it.
** Add a <code>job-init</code> hook that is run once upfront, on each node that hosts guests which are included in the backup job.
* Storage
** Increase the timeout of ZFS operations for synchronous and worker calls to 10 seconds and a minimum of 5 minutes respectively, based on a report of the current settings being too low.
* pve-zsync
** Improved resiliency of the crontab parser to better cope with manual modifications.
** Support syncing TPM state.
* Ceph
** Support creating and destroying erasure coded pools, which can also be added as Proxmox VE Storage entries.
** Support for selecting existing custom device classes when adding OSDs through the web interface.
** Out/In commands for OSDs are now executed on the node the user is connected to, instead of the node where the OSDs are located.
*: This makes it possible to disable OSDs in the GUI, in case the target node is not available (for example in a disaster recovery situation).
* Access Control
** More fine-grained control over how users or groups that vanished while syncing realms should be handled.
*: Administrators can now choose if the user and all their configured values (for example TFA configurations, ACL entries) are set to the state in LDAP, or if only the users are added/removed, while their properties remain.
*: This makes it possible to handle removed users, without losing the TFA configuration for all users who did not change since the last sync.
** OpenID realms support configuring scopes, authentication context class reference (ACR) values, and whether the provider prompts the user for re-authentication and consent.
** Fixed an issue with using Azure as an OpenID provider (unexpected <code>Content-Type-Encoding</code> leading to rejected logins).
** WebAuthn TFA configurations now support only optionally providing the origin parameter for clustered systems.
** Improved robustness in case of misconfigured U2F and WebAuthn settings.
*: Logins with other TFA methods do not get rejected if the U2F/WebAuthn configuration is incorrect.
** TFA settings can now be viewed by users with <code>User.Modify</code> and <code>Sys.Audit</code> permissions in addition to <code>root@pam</code>
* Firewall & Software Defined Networking (tech preview)
** Set the ICMP type in the firewall rules GUI directly.
** Add support for configuring <code>anycast</code> addresses on guests, by advertising the whole subnet to the EVPN network.
** Add configuration for allowing exit-nodes to reach guests inside the EVPN network, for example if they need to talk to a DNS server running inside a virtual guest.
** Support for floating IPs on guests (by not suppressing ARP/ND packets).
** Hide bridges that a user doesn't have access to, if vNets are in use.
* Improvements for managing a Proxmox VE system
** It is now possible to select a particular kernel version to boot persistently from a running system through <code>proxmox-boot-tool kernel pin</code>
*: You can select a particular version to be used either indefinitely or just for the next boot. This removes the need to watch the boot process to select the desired version in the bootloader screen.
** Administrators can now disable MAC learning on a bridge in <code>/etc/network/interfaces</code> with the <code>bridge-disable-mac-learning</code> flag.
*: This reduces the number of packets flooded on all ports (for unknown MAC addresses), preventing issues with certain hosting providers (for example, Hetzner), which resulted in the Proxmox VE node getting disconnected.
** The location of the private key-file used for <code>pveproxy</code> can now be set in <code>/etc/default/pveproxy</code>, supporting certain setups which need sensitive data to be stored in a different location due to regulatory requirements.
** The TLS configuration options for <code>pveproxy</code> now support disabling TLS 1.2 (to run in a pure TLS 1.3 environment) and configuring the advertised TLS1.3 ciphers.
** Client-initiated TLS renegotiation was disabled, following best-practices.
** The <code>pveschedule</code> daemon was improved based on the feedback received from its initial release with Proxmox VE 7.1.
** Data sent to a configured metrics server is now formatted in a more strict fashion to prevent type mismatches, which were possible if a guest had a numeric-only name.
* Mobile App
** Provide a simple dark theme and enable it if the system settings are configured to use dark designs.
** Provide an inline console by relaying noVNC for VMs, and xterm.js for containers and the Proxmox VE node shell in the GUI.
** Improved support for notches and full-screen mode.
* Installation ISO
** Updated memtest86+ to the completely rewritten 6.0b version, adding support for UEFI and modern memory like DDR5.
** Support for restricting the partitioned space for BTRFS installs, in order to leave unpartitioned space, which can be used for swap, for example.
** Improved accuracy of displayed disk size by adding a decimal point. Otherwise, a 1.8 TB disk, for example, was shown as being just 1 TB.
** Support for creating RAID0 configurations with differently sized disks.
** Improved UX when setting the <code>hdsize</code> for RAID setups.
** ZFS installs can now be configured to use zstd or GZIP compression.
** Better support for installing ZFS on Systems with certain bugs in their UEFI implementation (HP Thin clients).
** Drop the kernel command line <code>quiet</code> flag when booting in the advanced mode to provide more verbosity.
* Notable bug fixes
** To prevent a network outage during the transition from <code>ifupdown</code> to <code>ifupdown2</code>, the <code>ifupdown</code> package was modified to not stop networking upon its removal.
** For backups of running VMs with multiple disks, it was possible for write operations to non-primary disks happening after the backup-start synchronization point to be included in the backup.
*: This has been fixed with QEMU 6.2.0 by initializing the relevant bitmap early enough.
<div id="7.2-known-issues"></div>
'''Known Issues'''
* QNAP NFS shares could not be mounted with NFS version 4.1 using kernel pve-kernel-5.15.30-2-pve - the issue has been mitigated in kernels pve-kernel-5.15.35-2 and above.
*: If your QNAP NFS share cannot be mounted, upgrade the kernel and reboot.
*: As an alternative mitigation you can explicitly set the NFS version to 4 (not 4.1 or auto).
* PCI(e) pass through related:
** Systems passing through a GPU may be affected by the switch to the <code>SYS_FB</code> (system frame buffer) KConfig build option, using the <code>simplefb</code> module as driver, in the new default 5.15 based kernel.
**: The sys-fb allows taking over the FB from the firmware/earlier boot stages. Note that Proxmox VE uses the legacy <code>simplefb</code> driver over the modern <code>simpledrm</code> one due to regressions and issues we encountered on testing with the latter.
**: Most of those issues are already fixed in newer kernels and Proxmox VE may try to switch to the modern, DRM based FB driver once it moves to 5.17, or newer, as its default kernel.
**: If your system is configured to pass through the (i)GPU, and you had to avoid the host kernel claiming the device, you may now also need to add <code>video=simplefb:off</code> to the [https://pve.proxmox.com/pve-docs/chapter-sysadmin.html#sysboot_edit_kernel_cmdline kernel boot command line].
** Setups using [https://github.com/gnif/vendor-reset vendor-reset] for PCIe pass through need to adapt to changes in the new default 5.15 based kernel; for details see [https://github.com/gnif/vendor-reset/issues/46#issuecomment-992282166 this issue].
**: They must run the command <code>echo 'device_specific' > /sys/bus/pci/devices/<PCI-ID>/reset_method</code> before the VM is started. This can be automated with a systemd service or an on-boot cron script.
**: Alternatively one can also use a VM hook script with the <code>pre-start</code> hook.
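One way to automate this is a oneshot systemd unit along the following lines - a minimal sketch, not an official recipe; the unit name and the PCI address <code>0000:0c:00.0</code> are illustrative placeholders and must be replaced with your device's actual address:

```ini
# /etc/systemd/system/vendor-reset-method.service (hypothetical unit name)
[Unit]
Description=Set device_specific reset method for passed-through GPU
After=multi-user.target

[Service]
Type=oneshot
# 0000:0c:00.0 is an example PCI address - replace with your device's address
ExecStart=/bin/bash -c 'echo device_specific > /sys/bus/pci/devices/0000:0c:00.0/reset_method'

[Install]
WantedBy=multi-user.target
```

Enable it once with <code>systemctl enable vendor-reset-method.service</code>, so the reset method is set on every boot before any VM starts.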
* <code>intel_iommu</code> defaulted to on in 5.15 based kernels up to <code>pve-kernel-5.15.35-3-pve</code> with version <code>5.15.35-6</code>. An enabled IOMMU can cause problems with older hardware - issues were reported with, for example, HP DL380 g8 and Dell R610 servers (both older than 10 years).
*: The issue can be fixed by explicitly disabling <code>intel_iommu</code> on the kernel command line (<code>intel_iommu=off</code>) following the reference documentation - https://pve.proxmox.com/pve-docs/chapter-sysadmin.html#sysboot_edit_kernel_cmdline
*: As the setting has been reverted in newer pve-kernel-5.15 packages, the issue is now mostly relevant during installation from the ISO.
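For reference, a sketch of how the kernel command line change can be applied, following the referenced documentation; the exact file depends on the bootloader in use:

```shell
# GRUB-booted systems: edit /etc/default/grub, e.g.
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=off"
# then regenerate the boot configuration:
update-grub

# Systems booted via systemd-boot (e.g., UEFI installs with ZFS root):
# append the option to the single line in /etc/kernel/cmdline, then run:
proxmox-boot-tool refresh
```

A reboot is required for the new command line to take effect.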
* Certain systems may need to explicitly enable <code>iommu=pt</code> (SR-IOV pass-through) on the [https://pve.proxmox.com/pve-docs/chapter-sysadmin.html#sysboot_edit_kernel_cmdline kernel command line].
*: There are some reports of this solving issues with Avago/LSI RAID controllers, for example in a Dell R340 server booted in legacy mode.
* <s>Certain systems may need to work around a HW issue with TDP (two-dimensional paging), a technique, now enabled by default, that speeds up switches between VM and host.</s>
*: <s>If you see a kernel log message containing <code>KVM: entry failed, hardware error 0x80000021</code> you are affected.</s>
*: This has been addressed with the kernel package <code>pve-kernel-5.15.39-3-pve</code> with version <code>5.15.39-3</code> and newer, which is available on all repositories.
==Proxmox VE 7.1==
Released 17. November 2021: See [[Downloads]] (updated ISO release 2 with updated kernel and bug fixes 7. December 2021)
* Based on Debian Bullseye (11.1)
* Ceph Pacific 16.2.6
* Ceph Octopus 15.2.15 (continued support until mid 2022)
* Kernel 5.13 default
* LXC 4.0
* QEMU 6.1
* ZFS 2.1
'''Changelog Overview'''
* Enhancements in the web interface (GUI):
** Improved, more flexible scheduling options for backup jobs.
*: Backup jobs are now run by a new scheduler daemon (<code>pvescheduler</code>) and support the same more flexible calendar-event schedules already known from the verify and garbage-collection jobs in Proxmox Backup Server.
** Support configuration of backup retention policy for each backup job.
*: Brings the powerful and flexible retention settings from the API and CLI to the GUI, making it easier to set up smart backup policies.
** Add <code>Schedule Simulator</code> tool to the backup job overview panel to help get the calendar event format right.
** Simplify the creation of virtual guests with multiple disks.
*: Additional disks can now be added from the creation wizard, eliminating the need to add them after creating the VM or Container.
** Added a combo box under advanced options for selecting the asynchronous IO mode for each disk of a virtual machine.
** Add a <code>Notes</code> panel for the datacenter. It can interpret Markdown and render it as HTML.
** Improvements to the UX for configuring metric servers.
** Two-factor authentication improvements:
*** Support for WebAuthn (superseding U2F) and one-time recovery keys.
*** Support for configuring multiple factors for a single account.
** Improved translations, among others:
*** Arabic
*** Basque
*** Brazilian Portuguese
*** French
*** German
*** Simplified Chinese
*** Traditional Chinese
*** Turkish
* Virtual Machines (KVM/QEMU):
** Support for Windows 11 OS type
** UEFI Secure Boot
*: Newly created VMs have support for secure boot verification, with an option to have Linux distribution and Microsoft keys pre-enrolled.
** TPM support
*: Support for adding a Trusted Platform Module to any VM, fulfilling the requirements of modern operating systems, for example Windows 11.
** Store the current time and the installed QEMU version, when creating a VM.
*: This information is also used to work around changes in the QEMU PCI/ACPI device hotplug layer, which might cause network interface name-changes otherwise.
** SCSI and VirtIO VM disks can be marked read-only.
** Allow re-assigning a disk to another VM.
** <code>qm move_disk</code> has been renamed to <code>qm move-disk</code>.
*: The old syntax is still supported for backwards compatibility.
** Changed the initial default resolution from <code>800x600</code> to <code>1024x768</code> when using OVMF (EFI).
*: Note that the resolution is saved in the EFIvars, so this mainly affects new VMs. You can override the preferred resolution in the OVMF settings menu (Device Manager -> Platform Config).
* Container
** Support Fedora 35 and Ubuntu 21.10.
** Support and provide templates for the AlmaLinux and Rocky Linux distributions, which claim to be 1:1 binary compatible with RHEL.
** Support zstd-compressed templates.
*: Leverage the highly efficient, high-ratio, state-of-the-art compression algorithm for CT templates.
** Enable the <code>nesting</code> feature for new unprivileged containers created via the web interface.
**: The <code>nesting</code> feature ensures better interoperability with modern systemd versions used in newer templates.
** Better handling of unmanaged containers.
** Allow re-assigning a mount point volume to another container.
** <code>pct move_volume</code> has been renamed to <code>pct move-volume</code>.
*: The old syntax is still supported for backwards compatibility.
** Improved support for restoring backups of privileged containers as unprivileged ones (use of unprivileged containers is strongly recommended wherever possible).
** Added support for container CPU-sets on hosts with <code>cgroupv2</code> layout.
* General improvements for virtual guests
** Better interoperability between storage replication and rollback to an independent snapshot.
* Cluster
** Updated corosync to include bug-fixes for issues occurring during network recovery.
*: This could have otherwise led to loss of quorum on all cluster nodes, which in turn would cause a cluster-wide fencing event in case HA was enabled.
* Disk Management
** Support wiping specific partitions.
*: Added more fine-grained control over what part of a disk you want to clean up for re-use.
** Support creating storages on a particular partition, in addition to using the whole disk.
** Support using partitions for specific use-cases, like Ceph Journal.
** Support for zstd compression when creating a new zpool via API/GUI.
* Storage
** Make pre-allocation policy configurable for file-based storage.
*: Fully pre-allocated images can provide better performance, but take a longer time when being created - see <code>qemu-img(1)</code> for details.
* Backup/Restore
** Augmented scheduling options for backups.
*: Backup jobs are now run by a new scheduler daemon (<code>pvescheduler</code>) and support the more flexible schedule-options known from Proxmox Backup Server verify and garbage collection jobs (which in turn are inspired by systemd-timers).
** Support for <code>protected</code> backups, which will not be pruned and cannot be removed manually without first removing the <code>protected</code> flag.
* Storage Replication
** Scheduling of replication jobs is now handled by the <code>pvescheduler</code> daemon instead of a systemd-timer running every minute, resulting in vastly reduced logging on the nodes.
* Ceph
** Full support for configuring external Ceph clusters via API.
*: Before, the keyring needed to be put in the correct place via shell.
** Multiple CephFS instances are supported.
* Access Control
** OpenID Authentication now honors the configured http-proxy in the datacenter settings.
** Two-Factor Authentication
*** Add support for multiple 2nd factors.
*** Add recovery keys as 2nd factor.
*** Add WebAuthn 2nd factor support.
*** Rework TFA and login UI accordingly.
* Improvements regarding the upgrade from version 6.4:
** A bug in <code>proxmox-boot-tool</code>, which could make a Proxmox VE system running the 5.11 kernel unbootable upon upgrade, was fixed.
** Privileged containers with old systemd versions can now run successfully - the Proxmox VE node still needs to be booted into the legacy cgroup layout.
<div id="7.1-breaking-changes"></div>
'''Notable or Breaking Changes'''
* API deprecations, moves and removals
** The <code>/version</code> API method now only returns the <code>console</code> setting of the datacenter config, instead of all settings (use the <code>/cluster/options</code> instead).
* CLI changes
** <code>qm</code>: rename <code>move_disk</code> command to <code>move-disk</code> with an alias for backward compatibility.
* Config file changes
** The <code>/etc/sysctl.d/pve.conf</code> snippet has been moved to <code>/usr/lib/sysctl.d/10-pve.conf</code>, in accordance with current recommendations (see <code>sysctl.d(5)</code>). Local modifications need to be re-enabled - they are preserved in <code>/etc/sysctl.d/pve.conf.dpkg-old</code>.
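A possible way to restore such preserved local modifications - the target file name <code>90-local.conf</code> is only an illustrative choice, any name matching the <code>sysctl.d(5)</code> conventions works:

```shell
# review the preserved local modifications
cat /etc/sysctl.d/pve.conf.dpkg-old
# re-enable them under a new file name, so future package updates leave them alone
mv /etc/sysctl.d/pve.conf.dpkg-old /etc/sysctl.d/90-local.conf
# re-apply all sysctl settings from the standard locations
sysctl --system
```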
* Two-Factor Authentication:
** Keys stored by admins in the user entry (this is how it worked back before the users could add TOTP and U2F keys themselves, and how it worked for Yubico OTP until now) are now ''deprecated''.
** The two-step login procedure in the API has been changed to work like in the Proxmox Backup Server. For now, the new procedure is opt-in and the old one still works, but is deprecated.
<div id="7.1-known-issues"></div>
'''Known Issues'''
* Web-interface glitches
** Please reload the web-interface after the upgrade to avoid various glitches or issues all around.
*: Ensure that the browser caches are circumvented on reload, for example by using <code>CTRL + SHIFT + R</code> (switch CTRL with CMD for MacOS).
* Two-Factor Authentication:
** Keys stored by admins in the user entry (see above) still work, but will ''not'' be visible in the TFA panel until that user’s TFA configuration has been updated once, by registering any new TFA entry, such as by adding recovery keys.
** When in a cluster, changes to TFA entries can only be done after ''all'' nodes have been upgraded, since the TFA configuration file format changed and will be automatically upgraded to the new format with the first change.
** Old API clients which support 2nd factors via two-step logins may be unable to log in to accounts which have multiple TFA entries, or ones which were not supported previously.
* With the addition of <code>pvescheduler</code>, backup schedules are no longer configured in the cluster-wide <code>/etc/pve/vzdump.cron</code> file. While this file is still read and will remain supported for the foreseeable future, changing an existing backup schedule will cause the schedule to be stored in the new config file <code>/etc/pve/jobs.cfg</code>.
*: This is problematic if not all nodes in a cluster have been upgraded to the new version, since schedules from the new config file will only run on nodes that read it.
*: To avoid any problems, simply upgrade all nodes in your cluster before making any changes to scheduled backups.
* Low potential for full-cluster fencing event when upgrading HA-enabled clusters using libknet1 version 1.21-pve1
** Under certain network load situations - that is, one set of nodes having very low left-over bandwidth while another set has a lot of available bandwidth - a cluster (partly) running libknet1 version 1.21-pve1 can trigger a bug in corosync/kronosnet. With HA enabled, this can then lead to a fencing event for the entire cluster.
*: If a separate network is used for corosync/kronosnet the chance that the bug can get triggered is highly reduced.
*: See [https://bugzilla.proxmox.com/show_bug.cgi?id=3672 bug #3672] for details.
** As a precaution, it is advised to disable HA in order to prevent a fencing event. This is done by stopping the <code>pve-ha-lrm</code> service on all nodes, then stopping the <code>pve-ha-crm</code> service on all nodes, prior to upgrading. After all nodes have been upgraded, HA should be enabled again before rebooting any nodes, by first restarting <code>pve-ha-lrm</code> on all nodes, then restarting <code>pve-ha-crm</code> on all nodes.
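The precaution above, sketched as shell commands; each step must be completed on every node before moving to the next one:

```shell
# before the upgrade - disable HA:
systemctl stop pve-ha-lrm    # first, on all nodes
systemctl stop pve-ha-crm    # then, on all nodes

# after all nodes have been upgraded, before rebooting any node - re-enable HA:
systemctl start pve-ha-lrm   # first, on all nodes
systemctl start pve-ha-crm   # then, on all nodes
```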
* Garbled output during installation of Debian in VMs when standard VGA display and OVMF/UEFI is configured
** Debian's kernel package currently does not enable the <code>simplefb</code> module and thus uses the legacy <code>efifb</code> frame buffer; this can lead to glitches with the OVMF version that Proxmox VE 7.1 ships.
*: As a workaround, you can either use QXL (SPICE) for the VM's display, use SeaBIOS, or disable Secure Boot in OVMF and use the non-graphical, text-based installer.
* VMs with SATA as disk bus may show issues when used with <code>io_uring</code>
** For now this can be worked around by switching the disk's <code>Async IO</code> mode to threads (for cache = write back/through) or native (for cache = off, none or direct sync), respectively, in the Advanced options of the disk edit dialogue.
** Note that SATA is generally not the best choice for a VM's disk bus; rather use (VirtIO-)SCSI for the best performance and feature set. For Windows, VirtIO support is available through the [[Windows VirtIO Drivers]].
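On the CLI, the same workaround can be applied with <code>qm set</code>. The VMID, drive slot <code>sata0</code> and volume specification below are illustrative placeholders - take the actual drive entry, including all its existing options, from the <code>qm config VMID</code> output, since re-setting a drive replaces its whole option string:

```shell
# cache set to write back/through: switch Async IO to threads
qm set 100 --sata0 local-lvm:vm-100-disk-0,cache=writeback,aio=threads
# cache off/none/direct sync: switch Async IO to native
qm set 100 --sata0 local-lvm:vm-100-disk-0,aio=native
```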
==Proxmox VE 7.0==
Released 6. July 2021: See [[Downloads]], updated ISO release 2 with updated base system to final Debian 11.0 Bullseye and bug fixes 31. August 2021
* Based on Debian Bullseye (11)
* Ceph Pacific 16.2 as new default
* Ceph Octopus 15.2 continued support
* Kernel 5.11 default
* LXC 4.0
* QEMU 6.0
* ZFS 2.0.4
'''Changelog Overview'''
* Installer:
** Rework the installer environment to use <code>switch_root</code> instead of <code>chroot</code>, when transitioning from initrd to the actual installer.
**: This improves module and firmware loading, and slightly reduces memory usage during installation.
** Automatically detect HiDPI screens, and increase console font and GUI scaling accordingly. This improves UX for workstations with Proxmox VE (for example, for passthrough).
** Improve ISO detection:
*** Support ISOs backed by devices using USB Attached SCSI (UAS), which modern USB3 flash drives often do.
*** Linearly increase the delay of subsequent scans for a device with an ISO image, bringing the total check time from 20s to 45s. This allows for the detection of very slow devices, while continuing faster in general.
** Use <code>zstd</code> compression for the initrd image and the squashfs images.
** Setup Btrfs as root file system through the Proxmox VE Installer (Technology preview)
** Update to busybox 1.33.1 as the core-utils provider.
* Enhancements in the web interface (GUI):
** The node summary panel shows a high level status overview, while the separate Repository panel shows in-depth status and list of all configured repositories. Basic repository management, for example, activating or deactivating a repository, is also supported.
** <code>Notes</code> panels for Guests and Nodes can now interpret Markdown and render it as HTML.
** On manually triggered backups, you can now enable pruning with the backup-retention parameters of the target storage, if configured.
** The storage overview now uses SI units (base 10) to be consistent with the units used in the graphs.
** Support for security keys (like YubiKey) as SSH keys, when creating containers or preparing cloud-init images.
** Improved rendering for IOMMU-groups when adding passthrough PCI devices to QEMU guests.
** Improved translations, among others:
*** Arabic
*** French
*** German
*** Japanese
*** Polish
*** Turkish
* Access Control:
** Single-Sign-On (SSO) with the new OpenID Connect access realm type.
*: You can integrate external authorization servers, either using existing public services or your own identity and access management solution, for example, Keycloak or LemonLDAP::NG.
** Added new permission <code>Pool.Audit</code> to allow users to see pools, without permitting them to change the pool.
*: See breaking changes below for some possible impact in custom created roles.
* Virtual Machines (KVM/QEMU):
** QEMU 6.0 has support for <code>io_uring</code> as an asynchronous I/O engine for virtual drives - this is now the default for newly started or migrated guests.
*: The new default can be overridden in the guest config via <code>qm set VMID --DRIVE EXISTING-DRIVE-OPTS,aio=native</code> (where, for example, DRIVE would be <code>scsi0</code>, and the existing options can be taken from the <code>qm config VMID</code> output).
** EFI disks stored on Ceph now use the <code>writeback</code> caching-mode, improving boot times in case of slower or highly-loaded Ceph storages.
** Unreferenced VM disks (not present in the configuration) are not destroyed automatically any more:
*** This was made opt-in in the GUI in Proxmox VE 6.4 and is now also opt-in in the API and with CLI tools.
*** Furthermore, if this clean-up option is enabled, only storages with content-types of VM or CT disk images, or rootdir will be scanned for unused disk-volumes.
*: With this new default value, data loss is also prevented by default. This is especially beneficial in cases of dangerous and unsupported configurations, for example, where one backing storage is added twice to a Proxmox VE cluster with an overlapping set of content-types.
** VM snapshot states are now always removed when a VM gets destroyed.
** Improved logging during live restore.
* Container
** Support for containers on custom storages.
** Clone: Clear the cloned container's <code>/etc/machine-id</code> when systemd is in use or that file exists. This ID must be unique, in order to prevent issues such as MAC address duplication on Linux bridges.
* Migration
** QEMU guests: The migration protocol for sending the Spice ticket changed in Proxmox VE 6.1. The code for backwards compatibility has now been dropped, prohibiting the migration of VMs from Proxmox VE 7.0 to hypervisors running Proxmox VE 6.1 and older.
*: Always upgrade to the latest Proxmox VE 6.4, before starting the upgrade to Proxmox VE 7.
** Containers: The <code>force</code> parameter to <code>pct migrate</code>, which enabled the migration of containers with bind mounts and device mounts, has been removed. Its functionality has been replaced by marking the respective mount-points as <code>shared</code>.
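For example, a bind mount that is available on all nodes can now be marked as shared to allow migration; the VMID and paths below are illustrative:

```shell
# mark the bind-mounted path as shared, i.e. available on all target nodes
pct set 100 -mp0 /mnt/bindmounts/shared,mp=/shared,shared=1
```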
* High Availability (HA):
** Release LRM locks and disable watchdog protection if all services were removed from the node the LRM is running on and no new ones were added for over 10 minutes.
*: This reduces the possible subtle impact of an active watchdog after a node was cleared of HA services, for example, when HA services were previously only configured for evaluation.
** Add a new HA service state <code>recovery</code> and turn the <code>fence</code> state into a transition to that new state.
*: This provides a clear distinction between services that still need to be fenced and services whose node has already been fenced and which are now awaiting recovery.
** Continuously retry recovery, even if no suitable node was found.
*: This improves recovery for services in restricted HA groups, where a quorate and working partition can exist without any suitable new node being available for a specific service.
*: For example, if HA is used to ensure that a HA service using a local resource, like a VM on local storage, is restarted and up as long as its node is running.
** Allow manually disabling HA services that are currently in the <code>recovery</code> state, giving admins more control in those situations.
* Backup and Restore
** Backups of QEMU guests now support encryption using a master key.
** It is now possible to back up VM templates with SATA and IDE disks.
** The <code>maxfiles</code> parameter has been deprecated in favor of the more flexible <code>prune-options</code>.
** <code>vzdump</code> now defaults to keeping all backups, instead of keeping only the latest one.
** Caching during live restore was reworked, significantly reducing the total restore time and the time until the guest is fully booted.
** Support file-restore for VMs using ZFS or LVM for one or more storages in the guest OS.
* Network:
** Default to the modern <code>ifupdown2</code> for new installations using the Proxmox VE official ISO. The legacy <code>ifupdown</code> is still supported in Proxmox VE 7, but may be deprecated in a future major release.
* Time Synchronization:
** Due to the design limitations of <code>systemd-timesync</code>, which make it problematic for server use, new installations will install <code>chrony</code> as the default NTP daemon.
*: If you upgrade from a system using <code>systemd-timesyncd</code>, it's recommended that you manually install either <code>chrony</code>, <code>ntp</code> or <code>openntpd</code>.
* Ceph Server
** Support for Ceph 16.2 Pacific
** Ceph monitors with multiple networks can now be created using the CLI, provided you have multiple <code>public_networks</code> defined.
*: Note that multiple <code>public_networks</code> are usually not needed, but in certain deployments, you might need to have monitors in different network segments.
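For instance, a monitor can be created with an explicit address in one of the configured public networks via the CLI; the address below is illustrative:

```shell
# create a monitor bound to a specific address/network segment
pveceph mon create --mon-address 192.168.20.11
```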
** Improved support for IPv6 and mixed setups, when creating a Ceph monitor.
** Beginning with Ceph 16.2 Pacific, the [https://docs.ceph.com/en/pacific/rados/operations/balancer/ balancer module] is enabled by default for new clusters, leading to better distribution of placement groups among the OSDs.
** Newly created Bluestore OSDs will benefit from the newly enabled [https://docs.ceph.com/en/latest/rados/configuration/bluestore-config-ref/#bluestore-rocksdb-sharding sharding configuration for rocksdb], which should lead to better caching of frequently read metadata and less space needed during compaction.
* Storage
** Support for Btrfs as technology preview
*** Add an existing Btrfs file system as storage to Proxmox VE, using it for virtual machines and containers, as a backup target, or to store and serve ISO and container appliance images.
** The outdated, deprecated, internal DRBD Storage plugin has been removed. A derived version targeting newer DRBD is maintained by Linbit<ref name="linbit-drbd-gh" group="footnotes 7.0">https://github.com/LINBIT/linstor-proxmox</ref>.
** More use of content-type checks instead of checking a hard-coded storage-type list in various places.
** Support downloading ISO and container appliance images directly from a URL to a storage, including optional checksum verification.
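Besides the GUI, this is exposed via the API, for example through <code>pvesh</code>; the node name, storage, URL and checksum below are illustrative placeholders:

```shell
# download an ISO image directly to the storage "local" on node "pve1",
# verifying a SHA-256 checksum on the way
pvesh create /nodes/pve1/storage/local/download-url \
  --content iso --filename debian-11.iso \
  --url https://example.com/debian-11.iso \
  --checksum-algorithm sha256 --checksum <expected-hash>
```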
* Disk Management
** Wiping disks is now possible from the GUI, enabling you to clear disks which were previously in use and create new storages on them.
*: Note that wiping a disk is a destructive operation - any data on the disk will be destroyed permanently.
* pve-zsync
** Separately configurable number of snapshots on source and destination, allowing you to keep a longer history on the destination, without the requirement to have the storage space available on the source.
* Firewall
** The sysctl settings needed by pve-firewall are now set on every update to prevent disadvantageous interactions during other operations (for example package installations).
* Certificate management
** The ACME standalone plugin has improved support for dual-stacked (IPv4 and IPv6) environments and no longer relies on the configured addresses to determine its listening interface.
<div id="7.0-breaking-changes"></div>
'''Breaking Changes'''
* Pool permissions
: The old permission <code>Pool.Allocate</code> now only allows users to edit pools, '''not''' to see them. Therefore, <code>Pool.Audit</code> must be added to existing custom roles with the old <code>Pool.Allocate</code> to preserve the same behavior. All built-in roles are updated automatically.
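A custom role can be updated via <code>pveum</code>, for example as follows; the role name and privilege list are illustrative, and <code>--privs</code> should be given the role's complete privilege set:

```shell
# inspect the current privileges of all roles, including custom ones
pveum role list
# re-specify the custom role's privileges, now including Pool.Audit
pveum role modify CustomPoolRole --privs "Pool.Allocate,Pool.Audit"
```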
* VZDump
** Hookscript: The <code>TARFILE</code> environment variable was deprecated in Proxmox VE 6, in favor of <code>TARGET</code>. In Proxmox VE 7, it has been removed entirely and thus, it is not exported to the hookscript anymore.
** The <code>size</code> parameter of <code>vzdump</code> has been deprecated, and setting it is now an error.
* API deprecations, moves and removals
** The <code>upgrade</code> parameter of the <code>/nodes/{node}/(spiceshell|vncshell|termproxy)</code> API method has been replaced by providing <code>upgrade</code> as <code>cmd</code> parameter.
** The <code>/nodes/{node}/cpu</code> API method has been moved to <code>/nodes/{node}/capabilities/qemu/cpu</code>
** The <code>/nodes/{node}/ceph/disks</code> API method has been replaced by <code>/nodes/{node}/disks/list</code>
** The <code>/nodes/{node}/ceph/flags</code> API method has been moved to <code>/cluster/ceph/flags</code>
** The <code>db_size</code> and <code>wal_size</code> parameters of the <code>/nodes/{node}/ceph/osd</code> API method have been renamed to <code>db_dev_size</code> and <code>wal_dev_size</code> respectively.
** The <code> /nodes/<node>/scan/usb</code> API method has been moved to <code>/nodes/<node>/hardware/usb</code>
* CIFS credentials have been stored in the namespaced <code>/etc/pve/priv/storage/<storage>.pw</code> instead of <code>/etc/pve/<storage>.cred</code> since Proxmox VE 6.2 - existing credentials will get moved during the upgrade, allowing the fallback code to be dropped.
* The external storage plugin mechanism had an ABI-version bump that reset the ABI-age, marking an incompatible breaking change; external plugins must adapt to it before they can be loaded again.
* <code>qm|pct status <VMID> --verbose</code>, and the respective status API call, only include the <code>template</code> line if the guest is a template, instead of outputting <code>template:</code> for guests which are not templates.
* <code>qm vncproxy</code> now always requires a password to be set via the <code>LC_PVE_TICKET</code> environment variable.
<div id="7.0-known-issues"></div>
'''Known Issues'''
* '''Network''': Due to the updated systemd version, and for most upgrades, the newer kernel version (5.4 to 5.11), some network interfaces might change upon reboot:
** Some may change their name. For example, due to newly supported functions, a change from <code>enp33s0f0</code> to <code>enp33s0f0np0</code> could occur.
**: We observed such changes with high-speed Mellanox models.
** [https://sources.debian.org/src/bridge-utils/1.7-1/debian/NEWS/#L3-L23 Bridge MAC address selection has changed in Debian Bullseye] - it is now generated based on the interface name and the <code>machine-id (5)</code> of the system.
*: Systems installed using the Proxmox VE 4.0 to 5.4 ISO may have a non-unique machine-id. These systems will have their machine-id re-generated automatically on upgrade, to avoid a potentially duplicated bridge MAC.
: If you do the upgrade remotely, make sure you have a backup method of connecting to the host (for example, IPMI/iKVM, tiny-pilot, another network accessible by a cluster node, or physical access), in case the network used for SSH access becomes unreachable, due to the network failing to come up after a reboot.
* '''Container''':
** cgroupv2 support by the container's OS is needed to run in a pure cgroupv2 environment. Containers running systemd version 231 or newer support cgroupv2, as do containers that do not use systemd as init system in the first place (e.g., Alpine Linux or Devuan).
*: CentOS 7 and Ubuntu 16.10 are two prominent examples of Linux distribution releases whose systemd version is too old to run in a cgroupv2 environment; for details and possible fixes, see https://pve.proxmox.com/pve-docs/chapter-pct.html#pct_cgroup_compat
==Proxmox VE 7.0 beta1==
Released 24. June 2021: See [[Downloads]]
* Based on Debian Bullseye (11)
* Ceph Pacific 16.2 as new default
* Ceph Octopus 15.2 continued support
* Kernel 5.11 default
* LXC 4.0
* QEMU 6.0
* ZFS 2.0.4
* Installer:
** Rework the installer environment to use <code>switch_root</code> instead of <code>chroot</code>, when transitioning from initrd to the actual installer.
**: This improves module and firmware loading, and slightly reduces memory usage during installation.
** Automatically detect HiDPI screens, and increase console font and GUI scaling accordingly. This improves UX for workstations with Proxmox VE (for example, for passthrough).
** Improve ISO detection:
*** Support ISOs backed by devices using USB Attached SCSI (UAS), which modern USB3 flash drives often do.
*** Linearly increase the delay of subsequent scans for a device with an ISO image, bringing the total check time from 20s to 45s. This allows for the detection of very slow devices, while continuing faster in general.
** Use <code>zstd</code> compression for the initrd image and the squashfs images.
** Update to busybox 1.33.1 as the core-utils provider.
* Enhancements in the web interface (GUI)
** <code>Notes</code> panels for Guests and Nodes can now interpret Markdown and render it as HTML.
** On manually triggered backups, you can now enable pruning with the backup-retention parameters of the target storage, if configured.
** The storage overview now uses SI units (base 10) to be consistent with the units used in the graphs.
** Support for security keys (like YubiKey) as SSH keys, when creating containers or preparing cloud-init images.
** Improved rendering for IOMMU-groups when adding passthrough PCI devices to QEMU guests.
** Improved translations, among others:
*** Arabic
*** French
*** German
*** Polish
* Virtual Machines (KVM/QEMU):
** QEMU 6.0 has support for <code>io_uring</code> as an asynchronous I/O engine for virtual drives - this is now the default for newly started or migrated guests.
*: The new default can be overridden in the guest config per drive via <code>qm set VMID --DRIVE EXISTING-DRIVE-OPTS,aio=native</code>.
** EFI disks stored on Ceph now use the <code>writeback</code> caching-mode, improving boot times in case of slower or highly-loaded Ceph storages.
** Unreferenced VM disks (not present in the configuration) are not destroyed automatically any more:
*** This was made opt-in in the GUI in Proxmox VE 6.4 and is now also opt-in in the API and with CLI tools.
*** Furthermore, if this clean-up option is enabled, only storages with content-types of VM or CT disk images, or rootdir will be scanned for unused disk-volumes.
*: With this new default value, data loss is also prevented by default. This is especially beneficial in cases of dangerous and unsupported configurations, for example, where one backing storage is added twice to a Proxmox VE cluster with an overlapping set of content-types.
** VM snapshot states are now always removed when a VM gets destroyed.
** Improved logging during live restore.
* Container
** Support for containers on custom storages.
** Clone: Clear the cloned container's <code>/etc/machine-id</code> when systemd is in use or that file exists. This ID must be unique, in order to prevent issues such as MAC address duplication on Linux bridges.
* Migration
** QEMU guests: The migration protocol for sending the Spice ticket changed in Proxmox VE 6.1. The code for backwards compatibility has now been dropped, prohibiting the migration of VMs from Proxmox VE 7.0 to hypervisors running Proxmox VE 6.1 and older.
*: Always upgrade to the latest Proxmox VE 6.4, before starting the upgrade to Proxmox VE 7.
** Containers: The <code>force</code> parameter to <code>pct migrate</code>, which enabled the migration of containers with bind mounts and device mounts, has been removed. Its functionality has been replaced by marking the respective mount-points as <code>shared</code>.
* Backup and Restore
** Backups of QEMU guests now support encryption using a master key.
** It is now possible to back up VM templates with SATA and IDE disks.
** The <code>maxfiles</code> parameter has been deprecated in favor of the more flexible <code>prune-options</code>.
** <code>vzdump</code> now defaults to keeping all backups, instead of keeping only the latest one.
* Network:
** Default to the modern <code>ifupdown2</code> for new installations using the Proxmox VE official ISO. The legacy <code>ifupdown</code> is still supported in Proxmox VE 7, but may be dropped in a future major release.
* Time Synchronization:
** Due to the design limitations of <code>systemd-timesyncd</code>, which make it problematic for server use, new installations will install <code>chrony</code> as the default NTP daemon.
*: If you upgrade from a system using <code>systemd-timesyncd</code>, it's recommended that you manually install either <code>chrony</code>, <code>ntp</code> or <code>openntpd</code>.
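For example, switching to chrony on an upgraded system is a single package installation; apt resolves the conflict with systemd-timesyncd:
<pre>
apt install chrony   # replaces systemd-timesyncd as the NTP client
</pre>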
* Ceph Server
** Support for Ceph 16.2 Pacific
** Ceph monitors with multiple networks can now be created using the CLI, provided you have multiple <code>public_networks</code> defined.
*: Note that multiple <code>public_networks</code> are usually not needed, but in certain deployments, you might need to have monitors in different network segments.
** Improved support for IPv6 and mixed setups, when creating a Ceph monitor
** Beginning with Ceph 16.2 Pacific, the [https://docs.ceph.com/en/pacific/rados/operations/balancer/ balancer module] is enabled by default for new clusters, leading to better distribution of placement groups among the OSDs
** Newly created Bluestore OSDs will benefit from the newly enabled [https://docs.ceph.com/en/latest/rados/configuration/bluestore-config-ref/#bluestore-rocksdb-sharding sharding configuration for rocksdb], which should lead to better caching of frequently read metadata and less space needed during compaction
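For the monitor setup with multiple <code>public_networks</code> mentioned above, a minimal sketch — subnets and the monitor address are placeholders:
<pre>
# /etc/pve/ceph.conf
[global]
    public_network = 192.0.2.0/24, 198.51.100.0/24
</pre>
<pre>
# create a monitor bound to an address in the second network segment
pveceph mon create --mon-address 198.51.100.10
</pre>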
* Storage
** The outdated, deprecated, internal DRBD Storage plugin has been removed. A derived version targeting newer DRBD is maintained by Linbit<ref name="linbit-drbd-gh" group="footnotes 7.0"/>.
* Disk Management
** Wiping disks is now possible from the GUI, enabling you to clear disks which were previously in use and create new storages on them. Note, wiping a disk is a destructive operation with data-loss potential. Any data on the disk will be destroyed.
* pve-zsync
** Separately configurable number of snapshots on source and destination, allowing you to keep a longer history on the destination, without the requirement to have the storage space available on the source.
* Firewall
** The sysctl settings needed by pve-firewall are now set on every update to prevent disadvantageous interactions during other operations (for example package installations)
* Certificate management
** The ACME Standalone plugin has improved support for dual-stacked (IPv4 and IPv6) environments and no longer relies on the configured addresses to determine its listening interface
<div id="7.0-beta-breaking-changes"></div>
'''Breaking Changes'''
* Pool permissions
: Added new permission <code>Pool.Audit</code> to allow users to see pools, without permitting them to change the pool. The old permission <code>Pool.Allocate</code> now only allows users to edit pools, '''not''' to see them. Therefore, <code>Pool.Audit</code> must be added to custom roles with the old <code>Pool.Allocate</code> to preserve the same behavior. All built-in roles are updated automatically.
* VZDump
** Hookscript: The <code>TARFILE</code> environment variable was deprecated in Proxmox VE 6, in favor of <code>TARGET</code>. In Proxmox VE 7, it has been removed entirely and thus, it is not exported to the hookscript anymore.
** The <code>size</code> parameter of <code>vzdump</code> has been deprecated, and setting it is now an error.
* API deprecations, moves and removals
** The <code>upgrade</code> parameter of the <code>/nodes/{node}/(spiceshell|vncshell|termproxy)</code> API method has been replaced by providing <code>upgrade</code> as <code>cmd</code> parameter.
** The <code>/nodes/{node}/cpu</code> API method has been moved to <code>/nodes/{node}/capabilities/qemu/cpu</code>
** The <code>/nodes/{node}/ceph/disks</code> API method has been replaced by <code>/nodes/{node}/disks/list</code>
** The <code>/nodes/{node}/ceph/flags</code> API method has been moved to <code>/cluster/ceph/flags</code>
** The <code>db_size</code> and <code>wal_size</code> parameters of the <code>/nodes/{node}/ceph/osd</code> API method have been renamed to <code>db_dev_size</code> and <code>wal_dev_size</code> respectively.
** The <code> /nodes/<node>/scan/usb</code> API method has been moved to <code>/nodes/<node>/hardware/usb</code>
* CIFS credentials have been stored in the namespaced <code>/etc/pve/priv/storage/<storage>.pw</code> instead of <code>/etc/pve/<storage>.cred</code> since Proxmox VE 6.2 - existing credentials will get moved to the new location during the upgrade, and the fallback code for the old location has been dropped.
* <code>qm|pct status <VMID> --verbose</code>, and the respective status API call, only include the <code>template</code> line if the guest is a template, instead of outputting <code>template:</code> for guests which are not templates.
Note that there is no API-stability guarantee during the beta.
<div id="7.0-beta-known-issues"></div>
'''Known Issues'''
* '''Network''': Due to the updated systemd version and, for most upgrades, the newer kernel version (5.4 to 5.11), some network interfaces might change upon reboot:
** Some may change their name. For example, due to newly supported functions, a change from <code>enp33s0f0</code> to <code>enp33s0f0np0</code> could occur.
**: We observed such changes with high-speed Mellanox models.
** [https://sources.debian.org/src/bridge-utils/1.7-1/debian/NEWS/#L3-L23 Bridge MAC address selection has changed in Debian Bullseye] - it is now generated based on the interface name and the <code>machine-id(5)</code> of the system.
*: Systems installed using the Proxmox VE 4.0 to 5.4 ISO may have a non-unique machine-id. These systems will have their machine-id re-generated automatically on upgrade, to avoid a potentially duplicated bridge MAC.
: If you do the upgrade remotely, make sure you have a backup method of connecting to the host (for example, IPMI/iKVM, tiny-pilot, another network accessible by a cluster node, or physical access), in case the network used for SSH access becomes unreachable, due to the network failing to come up after a reboot.
* Misc. bug fixes and improvements
'''Notes'''
<references group="footnotes 7.0" />
== Proxmox VE 6.4 ==
Released 28. April 2021
* Based on Debian Buster (10.9)
* Ceph Octopus 15.2.11 and Ceph Nautilus 14.2.20
* Kernel 5.4 default
* Kernel 5.11 opt-in
* LXC 4.0
* QEMU 5.2
* ZFS 2.0.4 - new [https://github.com/openzfs/zfs/releases/tag/zfs-2.0.0 major version]
* Virtual Machines (KVM/QEMU):
** Support pinning a VM to a specific QEMU machine version.
** Automatically pin VMs with Windows as OS type to the current QEMU machine on VM creation.
*: This improves stability and guarantees that the hardware layout can stay the same even with newer QEMU versions.
** Address issues with hanging QMP commands, which caused VMs to freeze during disk resize and in other hard-to-reproduce edge cases.
*: Note that some QMP timeout log messages are still being investigated; they are harmless and purely informative.
** cloud-init: re-add ''Stateless Address Autoconfiguration'' (SLAAC) option to IPv6 configuration.
** Improve output in task log for mirroring drives and VM live-migration.
* Container
** Improved cgroup v2 (control group) handling.
** Support and provide appliance templates for Alpine Linux 3.13, Devuan 3, Fedora 34, Ubuntu 21.04.
* Backup and Restore
** Implement unified single-file restore for virtual machine (VM) and container (CT) backup archives located on a Proxmox Backup Server.
*: The file-restore is available in the GUI and in a new command line tool <code>proxmox-file-restore</code>.
** Live-Restore of VM backup archives located on a Proxmox Backup Server.
*: No more watching the task log, waiting for a restore to finish; VMs can now be brought up while the restore runs in the background.
** Consistent handling of excludes for container backups across the different backup modes and storage types.
** Container restores now default to the privilege setting from the backup archive.
* Ceph Server
** Improve integration for placement group (PG) auto-scaler status and configuration.
**: Allow configuration of the CRUSH rule, <code>Target Size</code> and <code>Target Ratio</code> settings, and automatically calculate the optimal number of PGs based on these.
* Storage
** Support editing of backup notes on any CephFS, CIFS or NFS storage.
** Support configuring a namespace for accessing a Ceph pool.
** Improve ZFS pool handling by checking separately whether a pool is imported and whether it is mounted.
*: This separation helps in situations where a pool was imported but not mounted, and executing another import command failed.
* Disk Management
** Return partitions and display them in tree format.
** Improve detection of disk and partition usage.
* Enhancements in the web interface (GUI)
** Show current usage of host memory and CPU resources by each guest in a node's search-view.
** Use binary (1 KiB equals 1024 B instead of 1 KB equals 1000 B) as base in the node and guest memory usage graphs, providing consistency with the units used in the current usage gauge.
** Make columns in the firewall rule view more responsive and flexible by default.
** Improve Ceph pool view, show auto-scaler related columns.
** Support editing existing Ceph pools, adapting the CRUSH-rule, <code>Target Size</code> and <code>Target Ratio</code>, among other things.
* External metric servers:
** Support the InfluxDB 1.8 and 2.0 HTTP(s) API.
** Allow use of InfluxDB instances placed behind a reverse-proxy.
* Proxmox VE API Proxy Daemon (<code>pveproxy</code>)
** Make listening IP configurable (in <code>/etc/default/pveproxy</code>). This can help to limit exposure to the outside (e.g. by only binding to an internal IP).
** <code>pveproxy</code> now listens for both IPv4 and IPv6, by default
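A minimal sketch of restricting the listening address (the IP is a placeholder for an internal address):
<pre>
# /etc/default/pveproxy
LISTEN_IP="192.0.2.10"
</pre>
<pre>
systemctl restart pveproxy
</pre>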
* Installation ISO:
** Installation on ZFS:
*** if booted with legacy BIOS (non-UEFI), the kernel images are now also copied to the second VFAT partition (ESP), allowing the system to boot from there with GRUB and making it possible to enable all ZFS features on such systems.
*** the boot partition and boot loader are set up on all selected disks, instead of only on the first mirror vdev, improving the experience with hardware where the boot device is not easily selectable.
** The installer environment attempts to do an NTP time synchronization before actually starting the installation, avoiding telemetry and cluster issues, if the RTC had a huge time-drift.
* pve-zsync
** Improved snapshot handling allowing for multiple sync intervals for a source and destination pair.
** Better detection of aborted syncs, which previously caused pve-zsync to stop the replication.
* Firewall
** Fixes in the API schema to prevent storing rules with a large IP address list, which would get rejected by <code>iptables-restore</code> due to its size limitations. We recommend creating and using IP sets for that use case.
** Improvements to the command-line parameter handling.
<div id="6.4-known-issues"></div>
'''Known Issues'''
* Please '''avoid''' using <code>zpool upgrade</code> on the "rpool" (root pool) itself, when upgrading to ZFS 2.0 on a system booted by GRUB in legacy mode, as that will break pool import by GRUB.
*: See [https://pve.proxmox.com/pve-docs/chapter-sysadmin.html#sysboot_determine_bootloader_used the documentation for determining the bootloader used], if you're unsure.
*: Setups installed with the Proxmox VE 6.4 ISO are '''not''' affected, as the installer always sets up an easier-to-handle, VFAT-formatted ESP for booting.
*: See the [[ZFS: Switch Legacy-Boot to Proxmox Boot Tool]] article about how to switch over to a safer boot variant for legacy GRUB-booted setups with ZFS as root filesystem.
* New default bind address for pveproxy and spiceproxy, unifying the default behavior with Proxmox Backup Server
** With making the LISTEN_IP configurable, the daemon now binds to both wildcard addresses (IPv4 <code>0.0.0.0:8006</code> and IPv6 <code>[::]:8006</code>) by default.
*: Should you wish to prevent it from listening on IPv6, simply configure the IPv4 wildcard as LISTEN_IP in <code>/etc/default/pveproxy</code>:
*: <code>LISTEN_IP="0.0.0.0"</code>
** Additionally, the format of IPv4 addresses in pveproxy's access log (<code>/var/log/pveproxy/access.log</code>) changed: they are now logged as IPv4-mapped IPv6 addresses. Instead of:
*: <code>192.168.16.68 - root@pam [10/04/2021:12:35:11 +0200] "GET /api2/json/cluster/tasks HTTP/1.1" 200 854</code>
*: the line now looks like:
*: <code>::ffff:192.168.16.68 - root@pam [10/04/2021:12:35:11 +0200] "GET /api2/json/cluster/tasks HTTP/1.1" 200 854</code>
*: If you want to restore the old logging format, also set <code>LISTEN_IP="0.0.0.0"</code>
* Resolving the Ceph `insecure global_id reclaim` Health Warning
*: With Ceph Octopus 15.2.11 and Ceph Nautilus 14.2.20 we released an update to fix a security issue (CVE-2021-20288) where Ceph was not ensuring that reconnecting/renewing clients were presenting an existing ticket when reclaiming their global_id value.
*: Updating from an earlier version will result in the above health warning.
*: See the [https://forum.proxmox.com/threads/ceph-nautilus-and-octopus-security-update-for-insecure-global_id-reclaim-cve-2021-20288.88038/#post-385756 forum post here] for more details and instructions to address this warning.
==Proxmox VE 6.3==
Released 26. November 2020
* Based on Debian Buster (10.6)
* Ceph Octopus 15.2.6 (first stable release) and Ceph Nautilus 14.2.15
* Kernel 5.4 LTS
* LXC 4.0
* QEMU 5.1
* ZFS 0.8.5
* Proxmox Backup Server Integration
** Stable Proxmox Backup Server integration: The stable version 1.0 of Proxmox Backup Server is now integrated and enterprise support is available from the Proxmox support team.
** Data encrypted on client-side before backing up to Proxmox Backup Server.
* Ceph
** Stable integration of Ceph Octopus.
** Add selector to choose which supported Ceph version to install in the GUI configuration wizard.
** Recovery progress is displayed in the Ceph status panel.
** Show and allow setting of Placement Group (PG) auto-scaling mode of Ceph pools.
** Set device class when creating OSDs, especially if the auto-detection yields the wrong class.
* Enhancements in the GUI
** Improved VM boot device order editor:
*** It is now possible to select multiple devices per type (disk, network) for booting.
*** Booting from passed through PCI devices (e.g., NVMe drives) is supported.
*** Improved user experience with a drag-and-drop UI.
** GUI for editing external metric servers: You can now connect your Proxmox VE nodes to InfluxDB or Graphite using the GUI, instead of having to manually edit <code>/etc/pve/status.cfg</code>
** Optional TLS certificate verification for LDAP and AD authentication realms.
** Improve high-DPI display and browser zoom compatibility.
** Split up storage content view by type.
** Backup/Restore:
*** Overview of all guests, which aren't included in any backup at all.
*** Detailed view per backup job, showing all covered guests and which of their disks are backed up.
** Display optional comments for all storage types.
*** Proxmox Backup Server additionally displays the verification state of all backup snapshots.
** Better usability for preventing accidental snapshot rollback
*** The GUI now makes it difficult to accidentally confuse snapshot removal with snapshot rollback.
* Storage
** Add highly flexible backup retention with "keep" settings: The new backup retention settings, which augment and replace the "Max Backups" setting, enable you to decide how many backups to keep per timeframe and implement enhanced retention policies per storage or backup job.
** Better handling of container volume activation on ZFS.
** Increased timeout for connecting to CIFS and NFS storage over slow links.
** Improve querying SSD wear leveling.
** Small improvements to the configuration handling of the LIO iSCSI provider for ZFS over iSCSI storage.
** ZFS disk management: create a systemd service to unconditionally import a pool created using the GUI.
* Container
** Add support for current Devuan containers.
** Add support for Kali Linux containers.
** Update list of supported releases for Ubuntu, Fedora and CentOS.
** Support setting a custom timezone per container.
** Improve startup monitoring.
** Add a <code>debug</code> parameter to <code>pct start</code>, to output the debug log of the container start.
** Support systems with up to 8192 cores.
** Optionally ignore mount points while running <code>pct fstrim</code>.
** Fix aborting of backups on Ceph backed containers with a large IO load, by issuing fsfreeze before snapshotting.
* QEMU
** Fast, incremental backups to Proxmox Backup Server using dirty-bitmaps.
** Handle guest shutdowns (power down from within a VM) during backups.
** Improved boot order selection allowing booting from multiple virtual disks and from passed through PCI devices (e.g., NVMe drives).
** Allow pass through of certain older Intel iGPU models with QEMU's 'legacy-igd' mode.
** Support more CPU options, for example SSE4.2
** Better support for hugepages across multiple NUMA nodes.
* General improvements for virtual guests
** Improved handling of replicated guests when migrating.
* Clustering
** Harden locking in the clustered configuration filesystem (pmxcfs), avoids a possible freeze when joining, messaging or leaving a closed-process-group.
* User and permission management
** Improved support for using client certificates/keys when connecting to AD/LDAP realms.
** Optional support for case-insensitive logins with AD/LDAP realms.
** Fine-grained permissions for SDN and CloudInit.
** Better handling of clock jumps for rotating keys.
* Firewall
** Improved API for matching ICMP-types.
* Documentation
** Clarify qdevice requirements.
** Add section about ZFS pool design choices.
** Add documentation on requirement for encrypted ZFS datasets as storage.
** Add manpage for cpu-models.conf(5).
* Installer
** Reboot automatically upon successful installation.
** Drop ext3 as supported file system.
** Start a shell on vt3 for debugging during installation.
* Experimental: support for Software-Defined Networking (SDN)
** Support for IPAM with a plugin framework.
** Add support for internal IPAM management and PowerDNS.
* Countless bug fixes and smaller improvements
<div id="_6.3_known_issues"></div>
'''Known Issues'''
* VM guests with multiple boot disk setups (e.g., mdadm, LVM, etc...) need to be configured with the new boot order config so that all required disks are marked "bootable", with the one containing the boot loader placed first - otherwise, the guest may fail to boot after being restarted with the new QEMU 5.1 version.
* The "exclude-path" option for vzdump now supports non-anchored paths for all backup modes. Non-anchored paths do not start with a '/' and will match in any subdirectory. Previously, such paths only had an effect for "suspend" mode backups, so please ensure that you don't have any unwanted non-anchored paths configured.
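A sketch of the distinction between anchored and non-anchored exclude paths (VMID is a placeholder):
<pre>
# anchored: matches only /tmp at the root of the guest filesystem
vzdump 100 --exclude-path /tmp/
# non-anchored: now matches a "cache/" directory at any depth, in all backup modes
vzdump 100 --exclude-path cache/
</pre>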
==Proxmox VE 6.2==
Released 12. May 2020
*Based on Debian Buster (10.4)
*Ceph Nautilus (14.2.9)
*Kernel 5.4 LTS
*LXC 4.0
*Qemu 5.0
*ZFS 0.8.3
*Web interface
**Allow deploying DNS-based Let's Encrypt (ACME) certificates
**Allow admins to see the full granted permission/privilege tree for a user
**Add experimental Software Defined Network (SDN) GUI and documentation, based on modern mainline Linux Kernel network technology
**Allow collapsing the Notes sub-panel in the guest summary panels, either permanently or automatically when empty, through the user settings.
**Add 'Creation Date' column for storage content, allowing one, for example, to comfortably find backups from a certain date
**Add device node creation to Container feature selection
**Full support for up to 8 corosync links
**Automatic guest console reload on VM/CT startup
**Add Arabic translation
**Allow seamlessly changing the language, without logging out and logging in again
* Container
**LXC 4.0.2 and lxcfs 4.0.3 with initial full cgroupv2 support by Proxmox VE
**Improve support for modern systemd based Containers
**Improve default settings to support hundreds to thousands* of concurrently running containers per node (* thousands only with simple distributions like Alpine Linux)
**Allow creating templates on directory-based storage
* Backup/Restore
**Support for the highly efficient and fast Zstandard (zstd) compression
* Improvements to the HA stack
** Allow destroying virtual guests under HA control when purge is set
* QEMU
** Fixed EFI disk behavior on block-based storage (see [[#_6.2_known_issues_ovmf|known issues]])
** VirtIO Random Number Generator (RNG) support for VMs
** Custom CPU types with user-defined flags and QEMU/KVM settings
** Improved machine type versioning and compatibility checks
** Various stability fixes, especially for backups and IO-Threads
** Migration:
*** Enable support for Live-Migration with replicated disks
*** Allow specifying target storage for offline mode
*** Allow specifying multiple source-target storage pairs (for now, CLI only)
*** Improve behavior with unused disks
*** Secure live-migration with local disks
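The storage-mapping options above can be sketched on the CLI as follows — node, VMID and storage names are placeholders:
<pre>
# offline migration to a single target storage
qm migrate 100 node2 --targetstorage local-zfs
# live migration with local disks, mapping multiple source storages to targets (CLI only)
qm migrate 100 node2 --online --with-local-disks --targetstorage srcA:dstA,srcB:dstB
</pre>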
* General improvements for virtual guests
** Handle ZFS volumes with non-standard mount point correctly
* Cluster
** Improve lock contention during high frequency config modifications
** Add versioning for cluster join
** Enable full support for up to 8 corosync links
* Ceph
** Easier uninstall process, better informing which actions need to be taken.
* Storage
** Storage migration: introduce allow-rename option and return new volume ID if a volume with the same ID is already allocated on the target storage
** Support the 'snippet' content type for GlusterFS storage
** Bandwidth limitations are now also available for SAMBA/CIFS based storage
** Handle ZFS volumes with non-standard mount point correctly
** Improve metadata calculation when creating a new LVM-Thin pool
** Improve parsing of NVMe wearout metrics
* User and permission management
** LDAP Sync users and groups automatically into Proxmox VE
** LDAP mode LDAP+STARTTLS
** Allow adding and managing authentication realms through the 'pveum' CLI tool
** Full support and integration for API Tokens
*** Shared or separated privileges
*** Token lifetime
*** Revoke tokens at any time without impacting your user login
* Documentation
** Update hypervisor migration guide in the wiki
** Document that reducing the number of Ceph placement groups is possible
** Improve layout, use a left-side based table-of-content
* Various improvements
** Firewall: make config parsing more robust and improve ICMP-type filtering
<div id="_6.2_known_issues_ovmf"></div>
'''Known Issues with OVMF/UEFI disks of VMs'''
A VM's EFI disk on a storage which doesn't allow for small (128 KB) images (for example: Ceph, ZFS, LVM(thin)) was not correctly mapped to the VM. While this is fixed now, such existing setups may need manual intervention:
* You do not have to do anything if your EFI disk uses qcow2 or "raw" format on a file-based storage.
* '''Before the upgrade''', make sure that on your ESP, the EFI boot binary exists at <code>\EFI\BOOT\BOOTX64.EFI</code> (the default EFI Boot fallback).
** Windows and some Linux VMs using systemd-boot should do that automatically
* '''If you already upgraded''' and it does not boot, see [[OVMF/UEFI Boot Entries]] on how to recreate the boot entry via the OVMF Boot Menu
==Proxmox VE 6.1==
Released 4. December 2019: See [[Downloads]], updated ISO release 2 with installer bug fixes and updated packages on 6. March 2020
* Based on Debian Buster (10.2)
* Ceph Nautilus (14.2.4.1)
* Corosync 3.0
* Kernel 5.3
* LXC 3.2
* Qemu 4.1.1
* ZFS 0.8.2
* Web interface
** More datacenter configuration options are now editable via GUI:
*** U2F two-factor settings.
*** Cluster-wide bandwidth limit for the following traffic types: migration, backup-restore, clone, disk-move.
** Improvements to two-factor authentication allowing to use a hardware-based TOTP key.
** Mobile GUI: implemented login for TOTP-based two-factor authentication enabled user accounts.
** More icons have been converted from bitmap to vectorized icons from Font Awesome.
** The scale mode of noVNC can be changed in "My Settings".
** Addition of Hebrew language translations.
** New "Run now" button for cluster-wide backup jobs.
** With <nowiki>ifupdown2</nowiki> installed, it is now possible to change the network configuration and reload it from the GUI without a reboot.
* Container
** Pending changes for containers. It is possible to make changes to a running container. They will be applied after the next reboot of the container.
** Reboot of a running container via GUI, API, and command-line interface.
** Hot plugging of mount points by using the new mount API available with Linux Kernel 5.3.
** Support of recent releases of GNU/Linux distributions like Fedora 31, CentOS 8, and Ubuntu 19.10.
* SPICE
** Audio devices can now be added via GUI (no need to edit the configuration file).
** Folders can now be shared between SPICE client and VM (still experimental).
** Video streaming can be enabled. This can help to improve the performance of fast-changing display areas like videos.
** The SPICE USB device now supports USB3 (Qemu >= 4.1).
* Backup/Restore
** VMs with IOThreads enabled can now be backed up.
** Run scheduled backup jobs manually from the Datacenter in the GUI.
* Improvements to the HA stack
** New 'migrate' shutdown policy. If enabled, running services will be migrated to another node on shutdown. Once the node comes back, and if the services have not been manually moved to another node in the meantime, those services will move back.
** New 'crm-command stop'. Shuts down a VM/container with the specified timeout and performs a hard stop if the timeout is 0. A VM or container stop command will now be redirected to this new crm-command.
* QEMU
** Allow domains other than '0000' for PCI(e) passthrough.
** New 'reboot' API call. This allows applying pending changes without having to wait for the guest to shut down before starting it again.
** QEMU monitor timeout issue, which prevented successful backups in certain configurations, has been fixed.
** PCI(e) passthrough supports up to 16 PCI(e) devices
** Support for QEMU Guest Agents using an ISA serial port (not VirtIO) for communication, allowing use of the QGA with FreeBSD, among others.
* General improvements for virtual guests
** Allow 'tags' in the guest configuration. This meta-information can be useful for things like configuration management (no GUI support yet).
** VM/CT: Purge allows removing the corresponding virtual machine or container from replication or backup jobs on destruction.
* Cluster stability
** A number of bugs have been identified and fixed upstream in cooperation with corosync and kronosnet.
** The initial issues some users have experienced with MTU changes are resolved.
** pmxcfs has undergone a stability audit using ASAN (AddressSanitizer) and UBSAN (Undefined Behavior Sanitizer), fixing various possible issues for certain edge-cases.
* Storage
** Allow configuration of non-default 'mountpoint' properties for ZFS.
** Allow .img files as alternative to .iso images.
** Various iSCSI improvements.
** Overhaul of ZFS on iSCSI with the LIO target provider.
** Supports all features offered by newer kernels with Ceph and KRBD.
* Various improvements
** Firewall: Added support for 'raw' tables and use them to allow users to enable and configure Synflood-Attack protections.
** Automatic certificate renewal for the self-signed certificate 2 weeks before expiration.
** Reduced certificate lifetime of newly generated certificates (2 years instead of 10 years). This avoids warnings from some modern browsers about very long certificate lifetimes.
* Documentation
** Proofreading of parts of the documentation (style and grammar).
** Enhanced documentation for the Ceph administration.
*Countless bug fixes and package updates (for all details see [https://bugzilla.proxmox.com/ bugtracker] and [https://git.proxmox.com/ GIT])
==Proxmox VE 6.0==
Released 16. July 2019: See [[Downloads]]
*Based on Debian Buster 10.0
*Pre-upgrade checklist tool `pve5to6` - available on Proxmox VE 5.4 and 6.0
**Running `pve5to6` checks for common pitfalls known to interfere with a clean upgrade process.
*Corosync 3.0.2 using Kronosnet as transport
**Default transport method now uses unicast; this can simplify setups where the network had issues with multicast.
**New Web GUI Network selection widget avoids making typos when choosing the correct link address.
**Currently, there is no multicast support available (it's on the kronosnet roadmap).
*LXC 3.1
*Ceph Nautilus 14.2.x
**Better performance monitoring for rbd images through `rbd perf image iotop` and `rbd perf image iostat`.
**OSD creation, based on ceph-volume: integrated support for full disk encryption of OSDs.
**More robust handling of OSDs (no more mounting and unmounting to identify the OSD).
**'''ceph-disk has been removed''': After upgrading to Proxmox VE 6.0, it is not possible to create new OSDs until Ceph has also been upgraded to Nautilus.
**Support for PG split and join: The number of placement groups per pool can now be increased and decreased. There is even an optional plugin in ceph-manager to automatically scale the number of PGs.
**New messenger v2 protocol brings support for encryption on the wire (currently this is still experimental).
**See https://docs.ceph.com/en/nautilus/releases/nautilus/ for the complete release notes.
*Improved Ceph administration via GUI
**A cluster-wide overview for Ceph is now displayed in the 'Datacenter View' too.
**The activity and state of the placement groups (PGs) is visualized.
**The version of all Ceph services is now displayed, making detection of outdated services easier.
**Configuration settings from the config file and database are displayed.
**You can now select the public and cluster networks in the GUI with a new network selector.
**Easy encryption for OSDs with a checkbox.
*ZFS 0.8.1
**Native encryption for datasets with comfortable key-handling by integrating the encryption directly into the `zfs` utilities. Encryption is as flexible as volume creation and adding redundancy - the gained comfort compared to dm-crypt is comparable to the difference between mdadm+lvm and ZFS.
**Allocation classes for vdevs: you can add a dedicated fast device to a pool, which is used for storing frequently accessed data (metadata, small files).
**TRIM-support - use `zpool trim` to notify devices about unused sectors.
**Checkpoints on pool level.
**See https://github.com/zfsonlinux/zfs/releases/tag/zfs-0.8.0 and https://github.com/zfsonlinux/zfs/releases/tag/zfs-0.8.1 for the complete release notes.
*Support for ZFS on UEFI and on NVMe devices in the installer
**You can now install Proxmox VE with its root on ZFS on UEFI booted systems.
**You can also install ZFS on NVMe devices directly from the installer.
**By using `systemd-boot` as bootloader all pool-level features can be enabled on the root pool.
*Qemu 4.0.0
**Live migration of guests with disks backed by local storage via GUI.
**Added support for more Hyper-V enlightenments improving Windows performance in a virtual machine under Qemu/KVM.
**Mitigations for the performance impact of recent Intel CPU vulnerabilities.
**More VM CPU-flags can be set in the web interface.
**Newer virtual PCIe port hardware for machine type q35 in version >= 4.0. This fixes some passthrough issues.
**Support for custom Cloudinit configurations:
***You can create a custom Cloudinit configuration and store it as snippet on a storage.
***The `qm cloudinit dump` command can be used to get the current Cloudinit configuration as a starting point for extensions.
*Firewall improvements
**Improved detection of the local network so that all used corosync cluster networks get automatically whitelisted.
**Improved firewall behavior during cluster filesystem restart, e.g. on package upgrade.
*Mount options for container images
**You can now set certain performance and security related mount options for each container mountpoint.
*Linux Kernel
**Updated 5.0 Kernel based off the Ubuntu 19.04 "Disco" kernel with ZFS.
**Intel in-tree NIC drivers are used:
***Many recent improvements to the kernel networking subsystem introduced incompatibilities with the out of tree drivers provided by Intel, which sometimes lag behind on support for new kernel versions. This can lead to a change of the predictable network interface names for Intel NICs.
*Automatic cleanup of old kernel images
**Old kernel images are no longer marked as NeverAutoRemove - preventing problems when /boot is mounted on a small partition.
**By default the following images are kept installed (all others can be automatically removed with `apt autoremove`):
***the currently running kernel
***the version being newly installed on package updates
***the two latest kernels
***the latest version of each kernel series (e.g. 4.15, 5.0)
*Guest status display in the tree view: Additional states for guests (migration, backup, snapshot, locked) are shown directly in the tree overview.
*Improved ISO detection in the installer: The way the installer detects the ISO was reworked to include more devices, alleviating detection problems on certain hardware.
*Pool level backup: It is now possible to create a backup task for backing up a whole pool. By selecting a pool as backup target instead of an explicit list of guests, new members of the pool are automatically included, and removed guests are automatically excluded from the backup task.
*New User Settings and Logout menu.
*Automatic rotation of the authentication key every 24h: by limiting the key lifetime to 24h, the impact of key leakage or a malicious administrator is reduced.
*The node's Syslog view in the GUI was overhauled and is now faster.
*Sheepdog is no longer maintained, and thus no longer supported as a storage plugin.
*`ceph-disk` has been removed in Ceph Nautilus - use `ceph-volume` instead.
*Improved reference documentation
*Upgrade documentation: https://pve.proxmox.com/wiki/Upgrade_from_5.x_to_6.0
*Countless bug fixes and package updates (for all details see [https://bugzilla.proxmox.com/ bugtracker] and [https://git.proxmox.com/ GIT])
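The ZFS 0.8 capabilities listed above (native encryption, TRIM, checkpoints, allocation classes) are all driven through the standard upstream `zfs`/`zpool` tools rather than Proxmox-specific commands. A brief sketch - pool, dataset, and device names are placeholders, and these commands of course need a live ZFS pool:

```shell
# Create an encrypted dataset; ZFS prompts for the passphrase on creation
zfs create -o encryption=on -o keyformat=passphrase rpool/secure

# After a reboot, load the key and mount the dataset again
zfs load-key rpool/secure
zfs mount rpool/secure

# Notify the underlying SSDs about unused sectors
zpool trim rpool

# Take a pool-level checkpoint before a risky operation; discard it with -d
zpool checkpoint rpool
zpool checkpoint -d rpool

# Allocation classes: add a dedicated fast 'special' vdev for metadata/small blocks
zpool add rpool special mirror /dev/nvme0n1 /dev/nvme1n1
```

Note that a `special` vdev cannot be removed from a raidz pool again, so it is worth planning its redundancy to match the rest of the pool.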
==Proxmox VE 6.0 beta1==
Released 04.07.2019: See [[Downloads]]
*Based on Debian Buster 10.0
*Kernel 5.0.15
*ZFS on EFI and/or NVMe installation
*Ceph v14.2.1 Nautilus
*ZFS 0.8.1
*QEMU 4.0.0
*LXC 3.1.0
*Corosync 3
*Misc. bug fixes and improvements
==Proxmox VE 5.4==
Released 11.04.2019: See [[Downloads]]
*Based on Debian Stretch 9.8
*Kernel 4.15.18
*QEMU 2.12.1 (QEMU 3.0.1 since May 2019 via apt)
*LXC 3.1.0
*ZFS 0.7.13
*Ceph 12.2.11 (Luminous LTS, stable), packaged by Proxmox
**Ceph can now be installed via the GUI, removing the need to use the CLI for this initial step before configuring a complete (hyperconverged) Ceph setup via PVE's GUI.
*QDevice support via `pvecm`
**Primarily for small 2-node clusters: adding a QDevice helps mitigate the downside of not being able to reboot one node without losing quorum (and thus the ability to make any changes in the cluster).
**It can also help in clusters with a larger even number of nodes by providing a tie-break vote.
**Integration into `pvecm` and the PVE stack vastly simplifies adding a QDevice (manual setup was possible before as well).
*Wake-on-LAN support via `pvenode`
**It is now possible to associate a MAC address with a PVE node.
**This can be used to send a WOL (Wake-on-LAN) packet to that node from any other node in the PVE cluster, and thus power it on on demand.
**Provides a comfortable way of saving power by keeping only part of the nodes permanently online and starting others on demand (e.g. under increased load, or for running guests during upgrades of the PVE nodes).
*GUI Container wizard creates unprivileged containers by default
*HA improvements and added flexibility
**It is now possible to set a datacenter wide HA policy which can change the way guests are treated upon a Node shutdown or reboot. The choices are:
***freeze: always freeze services, independent of the shutdown type (reboot, poweroff)
***failover: never freeze services; this means that a service will get recovered to another node if possible, and if the current node does not come back up within the grace period of 1 minute.
***default: this is the current behavior, freeze on reboot but do not freeze on poweroff
* Suspend to disk/hibernate support for Qemu/KVM guests
**Qemu guests can be 'hibernated' (have their RAM contents and internal state saved to permanent storage) and resumed on the next start.
**This enables users to preserve the running state of the Qemu guests across most upgrades to, and reboots of, the PVE node.
**Additionally, it can speed up the startup of guests running complex workloads, or workloads which take lots of resources to set up initially but need not run permanently.
*Support for guest (both Qemu and LXC) hookscripts:
**Hook-scripts are small executables which can be configured for each guest, and are called at certain steps of the guest's lifetime ('pre-start', 'post-start', 'pre-stop', 'post-stop').
**This gives administrators great flexibility in the way they can prepare the environment for the guest (e.g. adding necessary network resources (routes, vlans), firewall rules, unlocking encrypted files/devices, ...) and cleaning them up when the guest is shut down or stopped.
*Improved Qemu Guest creation wizard:
**Certain often requested options (e.g. Machine-type (q35, pc-i440fx), Firmware (Seabios, UEFI), SCSI controller) can now be selected directly in the VM Creation Wizard, and dependent options get set to sensible values directly.
*Clusterwide Subscription Check and Display
**The Datacenter Summary page now gives administrators a global overview of the subscription status of all nodes in a PVE cluster, helping to verify the support status of the complete cluster at a glance.
* Improved UI in the installer
**It is now possible to go back to a previous screen in the pve-installer, and adapt choices made there, without the need to restart the complete installation process
**Before the actual installation is performed an information-page is displayed containing all relevant information
*U2F Authentication for the WebUI
**PVE's WebUI now can optionally use two-factor authentication for users.
**This is generally considered good security practice, and in certain domains and environments it is required.
**The new U2F authentication and the TOTP second-factor authentication can be configured by each user for themselves (previously, all 2FA solutions had to be configured by a user with User.Modify permission).
*Improved reference documentation
*Countless bug fixes and package updates (for all details see [https://bugzilla.proxmox.com/ bugtracker] and [https://git.proxmox.com/ GIT])
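The guest hookscript feature described above is simply an executable stored as a snippet, which Proxmox VE invokes with two arguments: the guest ID and the current phase. A minimal sketch of the dispatch logic - the messages are made up for illustration, and the logic is wrapped in a function here so it can be shown standalone (in practice the script body would act directly on `"$1"` and `"$2"`, and be attached with e.g. `qm set 100 --hookscript local:snippets/guest-hook.sh`):

```shell
#!/bin/sh
# A PVE hookscript receives: $1 = guest ID, $2 = lifecycle phase.
handle_phase() {
    vmid="$1"
    phase="$2"
    case "$phase" in
        pre-start)  echo "setting up network/firewall for guest $vmid" ;;
        post-start) echo "guest $vmid started" ;;
        pre-stop)   echo "guest $vmid about to stop" ;;
        post-stop)  echo "cleaning up after guest $vmid" ;;
        *)          echo "got unknown phase $phase" >&2; exit 1 ;;
    esac
}

# PVE effectively calls the script like this at each lifecycle step:
handle_phase 100 pre-start
handle_phase 100 post-stop
```

A non-zero exit code in the 'pre-start' phase aborts the guest start, which makes hookscripts usable as a guard as well as for setup and cleanup.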
==Proxmox VE 5.3==
Released 4.12.2018: See [[Downloads]], updated ISO image with installer and apt bug fixes and updated packages on 23.1.2019
*Based on Debian Stretch 9.6
*Kernel 4.15.18
*QEMU 2.12.1
*LXC 3.0.2
*ZFS 0.7.12
*Ceph 12.2.8 (Luminous LTS, stable), packaged by Proxmox
*Installer with ZFS: no swap space is created by default; instead, an optional limit on the used space can be defined in the advanced options, leaving unpartitioned space at the end for a swap partition.
*Disk Management on GUI (ZFS, LVM, LVMthin, XFS, ext4)
*Create CephFS via GUI (MDS)
*CephFS Storage plugin (supports backups, templates and ISO images)
*LIO support for ZFS over iSCSI storage plugin
*ifupdown2 package and partial GUI support
*Delete unused disk images on the storage view
*Enable/disable the local cursor in noVNC
*Enable/disable autoresize in noVNC
*Edit /etc/hosts via GUI, which is necessary whenever the IP or hostname is changed
*Editable VGA Memory for VM, enables higher resolutions (UHD) for e.g. SPICE
*VM Generation ID: Users can set a 'vmgenid' device, which enables safe snapshots, backup and clone for various software inside the VM, like Microsoft Active Directory. This vmgenid will be added to all newly created VMs automatically. Users have to manually add it to older VMs.
*qemu-server: add ability to fstrim a guest filesystem after moving a disk (during move_disk or migrate with local disks)
*Emulating ARM virtual machines (experimental, mostly useful for development purposes)
*vGPU/MDev and PCI passthrough: GUI for configuring PCI passthrough, which also enables the use of vGPUs (aka mediated devices) like Intel KVMGT (aka GVT-g) or NVIDIA vGPUs.
*pvesh rewrite to use the new CLI Formatter framework - providing a configurable output format for all API calls (formats include: json, yaml and a pleasant table output with borders). Provides unified support for json output for the PVE CLI Tools.
*Nesting for Containers (privileged & unprivileged containers): Allows running lxc, lxd or docker inside containers, also supports using AppArmor inside containers
*Mounting CIFS/NFS inside containers (privileged): Allows using samba or nfs shares directly from within containers
*Improved reference documentation
*Countless bug fixes and package updates (for all details see [https://bugzilla.proxmox.com/ bugtracker] and [https://git.proxmox.com/ GIT])
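The `pvesh` rewrite mentioned above exposes the whole REST API on the CLI with a selectable output format. A short sketch of typical invocations (to be run on a PVE node; the API paths shown are standard PVE endpoints):

```shell
# Default: human-readable table output with borders
pvesh get /nodes

# Machine-readable output for scripting
pvesh get /version --output-format json
pvesh get /cluster/resources --output-format yaml
```

Because all PVE CLI tools share this formatter framework, the same `--output-format` switch also yields JSON from other tools, which simplifies parsing in automation.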
==Proxmox VE 5.2==
Released 16.05.2018: See [[Downloads]]
*Based on Debian Stretch 9.4
*Kernel 4.15.17
*QEMU 2.11.1
*LXC 3.0.0
*Ceph 12.2.5 (Luminous LTS, stable), packaged by Proxmox
*ZFS 0.7.8
*Cloudinit GUI support
*Cluster create/join nodes via GUI
*Certificate management including Let's Encrypt GUI
*SMB/CIFS Storage plugin (supports backups, images, templates, iso and containers)
*Display IP for VM (using qemu-guest-agent)
*LXC: templates and clones, move volume/disk
*Create and edit new roles via GUI
*I/O bandwidth limits for restore operations (globally, per storage or per restore job)
*new and improved xterm.js integration including reconnect logic (on container reboots or restart migrations)
*Basic/Advanced GUI
*ebtables support
*Improved reference documentation
*Countless bug fixes and package updates (for all details see [https://bugzilla.proxmox.com/ bugtracker] and [https://git.proxmox.com/ GIT])
==Proxmox VE 5.1==
Released 24.10.2017: See [[Downloads]], updated ISO image with installer bug fixes and updated packages on 18.12.2017
*Based on Debian Stretch 9.2
*Kernel 4.13.3
*QEMU 2.9.1
*LXC: update to 2.1
*Ceph 12.2.1 (Luminous LTS, stable), packaged by Proxmox
*ZFS 0.7.2
*Improved reference documentation with screenshots
*Countless bug fixes and package updates (for all details see [https://bugzilla.proxmox.com/ bugtracker] and [https://git.proxmox.com/ GIT])
==Proxmox VE 5.0==
Released 04.07.2017: See [[Downloads]], updated ISO image with installer bug fixes and updated packages on 09.08.2017
*Based on Debian Stretch 9.0
*Kernel 4.10.15
*QEMU 2.9
*LXC: update to 2.0.8
*New asynchronous [[Storage Replication]] feature (needs ZFS, technology preview)
*New/updated LXC templates (Debian, Ubuntu, CentOS, Fedora, OpenSUSE, Arch Linux, Gentoo and Alpine)
*Updated/improved noVNC console
*Ceph v12.1.0 Luminous (technology preview), packaged by Proxmox
*live migration with local storage
*GUI improvements
**USB and Host PCI address visibility
**improved bulk and filtering options
*Improved installation ISO
*Importing virtual machines from foreign hypervisors, see [[Qemu/KVM_Virtual_Machines#_importing_virtual_machines_from_foreign_hypervisors]]
*improved reference documentation with screenshots
*countless bug fixes and package updates (for all details see [https://bugzilla.proxmox.com/ bugtracker] and [https://git.proxmox.com/ GIT])
==Proxmox VE 5.0 beta2==
Released 23.05.2017: See [[Downloads]]
*Based on Debian Stretch
*Kernel 4.10
*Update to QEMU 2.9
*Improved handling of re-used disks in installer
*Improved handling of network devices in installer
*Many small improvements and bug fixes
==Proxmox VE 5.0 beta1==
Released 22.03.2017: See [[Downloads]]
*Based on Debian Stretch
*Kernel 4.10
*Many small improvements and bug fixes
==Proxmox VE 4.4==
Released 13.12.2016: See [[Downloads]]
*update kernel to 4.4.35
*KVM: update to qemu 2.7.0
*LXC improvements
**update to LXC 2.0.6
**implement new restart migration
**unprivileged containers (now in the GUI)
**updated template repository (Debian, Ubuntu, CentOS, Fedora, Arch and Alpine)
*GUI improvements
**new ceph dashboard
**new cluster dashboard
**improved disk management, smart status supporting more SSD types
**HA improved GUI
*ISO installer supports advanced ZFS settings
*dedicated network for VM/CT migrations (CLI only)
*improved reference documentation, e.g. contains now also screenshots
*DRBD9 (removed from core, now maintained directly by Linbit, due to [https://forum.proxmox.com/threads/drbdmanage-license-change.30404/ license change])
*countless bug fixes and package updates (for all details see [https://bugzilla.proxmox.com/ bugtracker] and [https://git.proxmox.com/ GIT])
==Proxmox VE 4.3==
Released 27.09.2016: See [[Downloads]]
*GUI improvements
**new search (“ctrl-shift-f”)
**vertical menus with groups and icons
**double click to open VM/CT console
**new status overview for host, VM and containers, signal colors for overloaded CPU, etc.
**Task log window remembers its size
*The VM wizard now proposes optimal settings depending on the selected operating system (for Linux the defaults are "virtio scsi disk" and "virtio net")
*comprehensive reference documentation in html, pdf and epub format, including a contextual help function in several places - based on AsciiDoc
*new disk management, including SMART monitoring and SSD wearout level display (for Intel, Sandisk and Samsung SSD).
*based on Debian Jessie 8.6.0
*Linux kernel 4.4.19
*KVM/qemu 2.6.1
*LXC: update to 2.0.4
*many new and improved manual/help pages
*countless bug fixes and package updates (for all details see [https://bugzilla.proxmox.com/ bugtracker] and [https://git.proxmox.com/ GIT])
==Proxmox VE 4.2==
Released 27.04.2016: See [[Downloads]]
*GUI update to Sencha Ext JS 6, including new icon set and dynamic performance charts
*based on Debian Jessie 8.4.0
*Linux kernel 4.4.6
*KVM/qemu 2.5.1
*installer: LVM thin or ZFS plugin pre-configured
*LVM thin support
*[https://pve.proxmox.com/wiki/HTTPSCertificateConfiguration#Let.27s_Encrypt_using_acme.sh Let's Encrypt] support
*DRBD9: drbd 9.0.2 kernel module and drbdmanage 0.95
*LXC: improved container setup
*LXC: network rate limits
*LXC: adding mount-points via GUI
*LXC: pct can mount rootfs for read/write access of stopped or running containers
*LXC: backup/snapshot improvements, including proper backup and restore of ACL
*LXC: add support for Alpine Linux, Ubuntu 16.04
*HA manager improvements
*new confirmation dialog to prevent accidental removal of a virtual machine or container
*updated GUI translation (especially French, Italian and German)
*many new and improved manual pages
*countless bug fixes and package updates (for all details see [https://bugzilla.proxmox.com/ bugtracker] and [https://git.proxmox.com/ GIT])
==Proxmox VE 4.1==
Released 11.12.2015: See [[Downloads]]
*based on Debian Jessie 8.2.0
*Linux kernel 4.2.6
*improved startup/shutdown behavior (systemd)
*enable NTP by default
*installer: allow up to 8 disks for ZFS
*KVM: add qemu agent GUI option
*KVM: update network boot ROMs
*Improve HA GUI for users with restricted permissions
*add Galician language to GUI
*LXC: add rootfs resize to GUI
*LXC: add support for Fedora 22 and Debian stretch/sid, Ubuntu 15.10
*LXC: support unprivileged containers (technology preview)
*storage: added LVM thin support (technology preview)
*Support for Turnkey Linux LXC appliances
*added new pvereport command
*countless bug fixes and package updates (for all details see [https://bugzilla.proxmox.com/ bugtracker] and [https://git.proxmox.com/ GIT])
==Proxmox VE 4.0==
Released 05.10.2015: See [[Downloads]]
*based on Debian Jessie 8.2.0
*Linux kernel 4.2
*QEMU 2.4
*new high availability manager
*bash completion support
*full IPv6 support
*include LXC (and removed OpenVZ) - see [[Convert OpenVZ to LXC]]
*added [https://pve.proxmox.com/wiki/DRBD9 DRBD9] with drbdmanage (technology preview)
*countless bug fixes and package updates (for all details see [https://bugzilla.proxmox.com/ bugtracker] and [https://git.proxmox.com/ GIT])
==Proxmox VE 4.0 beta2==
Released 10.09.2015: See [[Downloads]]
*based on Debian Jessie 8.2.0
*Linux kernel 4.2
*QEMU 2.4
*LXC improvements [[Linux Container]]
**Containers can be created on local storage, NFS, Ceph RBD, DRBD9, ...
**Migration (offline)
*IPv6 support
*DRBD9
*Ceph Server packages
*countless bug fixes and package updates (for all details see [https://bugzilla.proxmox.com/ bugtracker] and [https://git.proxmox.com/ GIT])
==Proxmox VE 4.0 beta1==
Released 22.06.2015: See [[Downloads]]
*based on Debian Jessie 8.1
*use kernel 3.19.8
*new HA manager, see [https://pve.proxmox.com/wiki/High_Availability_Cluster_4.x High_Availability_Cluster_4.x]
*QEMU 2.3
*include LXC (and removed openvz), see [https://pve.proxmox.com/wiki/Linux_Container Linux Container]
*[https://pve.proxmox.com/wiki/DRBD9 DRBD9]
*countless bug fixes and package updates (for all details see [https://bugzilla.proxmox.com/ bugtracker] and [https://git.proxmox.com/ GIT])
==Proxmox VE 3.4==
Released 19.02.2015: See [[Downloads]] - updated ISO image with ZFS 0.6.4.1 on 27.05.2015
*add ZFS support
**installer supports all ZFS RAID levels.
*pve-manager (GUI) updates
**openvz: implement suspend/resume
**qemu GUI: support new pending changes API
**new 'disconnect' button on the network edit panel
**vzdump: new option --stop to abort running backup job
**add NUMA option to processor form
**add GUI for local [[ZFS]] storage
**add startall/stopall/migrateall menu buttons 
**add multiqueues field to NIC form
*update to qemu 2.1.3
*qemu-server updates
**shutdown by Qemu Guest Agent if the agent flag is set
**snapshot_create: call fsfreeze if agent flag is set
**vzdump: use qga freeze in vzdump in snapshot mode
**add custom NUMA topology support
**serial: allow to pass arbitrary device names
**add link_down flag to network config
**support additional e1000 variants for VM machines
**enable hotplug by default
**hotplug config: allow to enable specific features
**include memory hotplug patch (experimental)
*improved hotplug support
*based on Debian Wheezy 7.8
*kernel updates
*countless bug fixes and package updates, for all details see [https://bugzilla.proxmox.com/ bugtracker] and [https://git.proxmox.com/ GIT]
==Proxmox VE 3.3==
Released 15.09.2014: See [[Downloads]]
* improved security features
**Firewall support (new package pve-firewall)
**[[Two-Factor Authentication]] (Yubico and OATH)
* pve-manager (GUI) updates
**new [[Proxmox VE Firewall]]
**noVNC console
**openvz: add bridge vlan && firewall options to gui
**new [[Proxmox VE Mobile]], GUI for mobile devices
**add new 'Pool View'
**ZFS storage can now be configured on GUI
**glusterfs: new option to specify backup volfile server
**add new email_from option to datacenter.cfg
**add Persian (Farsi) translation.
**improved Spanish translation
**update Chinese translation
**Countless updates and fixes
* update to qemu 2.1.0
**pci passthrough improvements
**hotplug improvements
**migration: enable auto-converge capability
**add cpu_hotplug (and maxcpus config)
**add virtio-net multiqueue support
**new option smbios1 to specify SMBIOS type 1 fields
**set uuid for newly created machines
**support new q35 machine type
**add Broadwell cpu model
**compile with new libiscsi (1.12.0)
**use glusterfs 3.5.2 libraries
**support drive option 'discard'
**add support for new qemu throttling burst max parameters
**add 'vmxnet3', 'lsi53c810', and 'pvscsi' to the list of available network card models
* improved Console support
**HTML5 Console for shell, VM and container console (noVNC)
**noVNC console is now the default
**vncterm: new option -notls (for noVNC, which uses 'wss')
**vncterm: updated signature for java applet to avoid warnings
* pve-kernel-2.6.32-32-pve: 2.6.32-136
**update aacraid, arcmsr, netxtreme2, ixgbe, igb, megaraid_sas and e1000e drivers
**update to vzkernel-2.6.32-042stab093.4.src.rpm
**allow to use grub-efi-ia32 boot loader
* pve-kernel-3.10.0-4-pve: 3.10.0-17
**enable vfio xfga
**update arcmsr, netxtreme2, ixgbe, igb, e1000e drivers
**update to kernel-3.10.0-123.6.3.el7.src.rpm
**allow to use grub-efi-ia32 boot loader
**Note: there is still no OpenVZ support with this kernel
* update ceph packages to 'firefly' (0.80.5)
**Note: Please upgrade ceph packages first if you run ceph server on proxmox nodes (see ceph upgrade instructions).
* update gluster packages to 3.5.2
* fence-agents-pve: 4.0.10
**update to 4.0.10
**add fence_ovh and fence_amt
**remove baytech, bullpap, cpint, egenera, mcdata, nss_wrapper, rackswitch, vixel, xcat. These agents are no longer included in the upstream package.
**removed fence_scsi
**Note: This includes updates for fence_ipmilan (fence_ilo3, fence_ilo4, fence_imm, and fence_idrac), and some parameter names changed (see 'man fence_ipmilan'). Please verify that your fence device still works if you use HA.
* based on Debian Wheezy 7.6
* countless bug fixes and package updates, for all details see [https://bugzilla.proxmox.com/ bugtracker] and [https://git.proxmox.com/ GIT]
==Proxmox VE 3.2==
Released 10.03.2014: See [[Downloads]]
* improved SPICE support
** spiceterm: console for OpenVZ and host
** add new console option to datacenter.cfg (java applet vs. spice)
** add multi-monitor support
** GUI: use split-button to easily select SPICE or VNC
** more details on https://pve.proxmox.com/wiki/SPICE
* update qemu to 1.7.0
** add 'pvscsi' to the list of scsi controllers (emulate the VMware PVSCSI device)
** add 'lsi53c810' to the list of scsi controllers (supported on some very old Windows NT versions)
** add 'vmxnet3' to the list of available network card models (emulate VMware paravirtualized network card)
** add drive option 'discard'
** add support for new qemu throttling burst max parameters
** improved live backup
* pve-kernel-2.6.32-27-pve: 2.6.32-121
** update to vzkernel-2.6.32-042stab084.20.src.rpm
** update e1000, igb, ixgbe, netxtreme2, megaraid_sas
** include latest ARECA RAID drivers
** update Broadcom bnx2/bnx2x drivers to 7.6.62
** update aacraid to aacraid-1.2.1-30300.src.rpm
* Ceph Server (Technology Preview)
** new GUI to manage Ceph server running on PVE nodes
** more details on https://pve.proxmox.com/wiki/Ceph_Server
* added Open vSwitch support (Technology Preview)
* Optional 3.10 Kernel (based on RHEL7 beta, currently without OpenVZ support, for testing only)
* storage: new ZFS plugin (Technology Preview), see https://pve.proxmox.com/wiki/Storage:_ZFS
* storage: remove nexenta plugin (ZFS plugin is faster)
* updated GlusterFS to 3.4.2
* ISO installer now always uses a GPT partition table
** added 'gdisk' to manage and view partitions via CLI
* based on Debian Wheezy 7.4
* countless bug fixes and package updates (for all details see [https://bugzilla.proxmox.com bugtracker] and [https://git.proxmox.com GIT])
==Proxmox VE 3.1==
Released 21.08.2013: See [[Downloads]]
* pve-kernel-2.6.32-23-pve (2.6.32-109)
** update to vzkernel-2.6.32-042stab079.5.src.rpm
** remove fix-ipoib-add-missing-lock.patch (now upstream)
** include Highpoint 2710 RAID driver
** update megaraid_sas to 6.600.18.00
* pve-cluster (3.0-7)
** suppress another warning about uninitialized value at boot time
** avoid warning when adding a new node (can't create shared ssh key database...)
** avoid warning if /etc/pve/priv/ already exists
** add migration_unsecure param to datacenter.cfg
* pve-firmware (1.0-23)
** update for RHEL6.4 based kernels
* pve-manager (3.1-3)
** add new favicon.ico
** fix perl dependencies
** display subscription status after login
** add GUI for GlusterFS storage
** vzdump: wait one second before unmounting the snapshot (avoid device busy error)
** apt: try to detect new packages
** tell apt to use http_proxy settings
** support to view changelogs for available pve packages
** allow to configure cipher list for pveproxy
** automatically add pve repository to apt sources list
** allow to use serial terminal as display
** avoid apt autoremoval for pve-kernel packages
** pveproxy: use same cipher list as debian apache ssl.conf
** avoid apt autoremoval for pve-kernel packages
** avoid and fix several proxy loops
** correctly initialize $rpcenv in spiceproxy
** fix bug #437: include pve-libspice-server1 in package list
** improved spice proxy code
** add button to start SPICE console
** add spiceproxy server
** fix upgrade shell (quoting bug)
** new pveupgrade script, used to display additional information to the user (reboot required? database up to date? ...)
** add button to display package versions, to provide same info as 'pveversion -v' on GUI.
** fix bug #408: reset ENV before starting shell
** fix bug #379: allow to overwrite existing CTs if user has VM.Backup permissions
** Allow to send notification mail about new packages.
** package manager: run system upgrade inside VNC shell, group packages by 'Origin', remove apt/upgrade API completely.
** package manager: improve changelog viewer, bug fixes
** add package update manager to GUI
** update Norwegian translations
** rrd display: avoid double refresh
** avoid gzip http response for png & gif & jar
* pve-qemu-kvm (1.4-17)
** enable glusterfs backend
** add patch to use pce certs for spice by default
** disable spice-socket.patch
** Allow to query spice ticket
** add usbredir support to spice
** add experimental code to support spice
* qemu-server (3.1-1)
** allow pass through of usb parallel devices (--parallel0 /dev/usb/lp0)
** fix bugs in migration code (wrong qw() usage)
** vncproxy: load config from correct node
** allow to use a socket for serial devices
** implement 'qm terminal' to open terminal via serial device
** add ability to run without graphic card ('vga: serial[n]')
** add support for unsecure/fast migration (setting in datacenter.cfg)
** remove spice cert paths (depend on pve-qemu-kvm >= 1.4-16)
** implement spice seamless migration
** support usb redirection devices for spice (usb[n]: spice)
** disable tablet device by default for spice
** spiceproxy API: allow client to choose proxy address
** spiceproxy API: read cert subject name directly using Net::SSLeay
** spice: use TLS (encrypt whole traffic)
** allow to pass SCSI generic devices to guests, for example "scsi0: /dev/sg5"
** cpu flags optimization
** add support for SPICE
* vncterm (1.1-4)
** Allow to add intermediate certificates to /etc/pve/local/pve-ssl.pem (users previously used apache option SSLCertificateChainFile for that).
* pve-libspice-server1 (0.12.4-1)
** initial release and update to 0.12.4
* ceph (0.67.1-1~bpo70+1)
** New upstream release
* libpve-access-control (3.0-6)
** use shorter spiceproxy tickets
** add code to generate tickets for SPICE
* libpve-common-perl (3.0-6)
** auth.conf: avoid warning if file does not exist, fix file permissions
** add parser for "/etc/apt/auth.conf"
* libpve-storage-perl (3.0-10)
** add Glusterfs Plugin
** API: add scan method for glusterfs
** sheepdog: fixes for storeid != 'sheepdog'
* lvm2 (2.02.98-pve4)
** New upstream release (fix for udev rules)
* glusterfs
** support for glusterfs (technology preview, https://pve.proxmox.com/wiki/Storage:_GlusterFS)
* SPICE
** support for Spice protocol (technology preview, https://pve.proxmox.com/wiki/SPICE)
* based on Debian 7.1 (Wheezy)
* Countless bug fixes (for all details see [https://bugzilla.proxmox.com bugtracker] and [https://git.proxmox.com GIT])
==Proxmox VE 3.0==
Released 24.05.2013: See [[Downloads]]
* pve-kernel-2.6.32 2.6.32-100
** fix CVE-2013-2094
* update ceph packages to 0.61.2
* libpve-common-perl 3.0-4:
** fix bug #381: use persistent reservation file for ports
** new function PVE::Tools::next_migrate_port()
* libpve-storage-perl 3.0-6
** rbd: clone volume to the same pool as the base volume
** extend storage list API with useful parameters for clone
** fix iscsi session scan bug
* pve-cluster 3.0-4
** depend on fuse (it seems most utilities moved from fuse-utils to fuse)
** remove dependency on fuse-utils (this package is no longer required)
** fix warning about uninitialized value
** fix bug 383: restart pveproxy instead of apache2
* pve-manager 3.0-20
** fix login problem when http_proxy is set
** updated Chinese, German and French translation
** allow to upload files with spaces in filename (replace with '_')
** allow to delete pools with non-existent VMs/Storage
** use correct icons for templates
** sort templates after regular VMs
** StorageSelector: allow to use new target option, used by Clone
** fix bug #385: correctly handle Accept-Encoding
** fix bug #374: correctly remove destroyed container from pool
* pve-qemu-kvm 1.4-12
** vma create: only store basename of config file.
* qemu-server 3.0-15
** clone disk : keep source volume params
** clone: check if we can clone to target storage
** fix bug #381: use PVE::Tools::next_migrate_port()
** restore: do not restore template flag
* vncterm 1.1-3
** re-enable javascript-events.patch (needed by migrate)
* based on Debian 7.0 (Wheezy)
* new VM clone feature
* new event driven API server (pveproxy)
** completely replace apache2
** efficient support for HTTP keep-alive
* support bootlogd (boot log can be viewed on the GUI)
* update pve-qemu-kvm to 1.4.1
* update kernel to vzkernel-2.6.32-042stab076.7.src.rpm
** changed default [[IO Scheduler]] to 'deadline'
** updated Intel network drivers for e1000e, igb and ixgbe
* Countless bug fixes (for all details see [https://bugzilla.proxmox.com bugtracker] and [https://git.proxmox.com GIT])
==Proxmox VE 2.3==
Released 04.03.2013: See [[Downloads]]
* update qemu-kvm to 1.4.0
* new kvm backup implementation, see [[Backup and Restore]]
* added RBD (ceph) support on GUI
* update kernel to vzkernel-2.6.32-042stab072.10.src.rpm
* include latest Broadcom bnx2/bnx2x drivers
* include latest Adaptec aacraid driver 1.2-1[29900]
* update e1000e to 2.2.14
* update igb to 4.1.2
* update ixgbe to 3.12.6
* enable CONFIG_RT_GROUP_SCHED (also update corosync if you install this kernel)
* extend memory GUI to support ballooning
* implement auto-ballooning
* add HD resize feature to expand disks
* updated network drivers (bnx2/bnx2x/e1000e/igb/ixgbe)
* added omping binaries (for testing multicast between nodes)
* update to latest Debian version 6.0.7
* qcow2 as default storage format, cache=none (previously raw)
* KVM64 as default CPU type (previously qemu64)
* e1000 as default NIC (previously rtl8139)
* task history per VM
* Node Summary: added "KSM sharing" and "CPU Socket count"
* enable/disable tablet for VM on GUI without stop/start of VM (you can use vmmouse instead, for lower CPU usage, works on modern Linux and on all Windows VMs as long as you install the vmmouse drivers)
* bug fixes (for all details see [https://bugzilla.proxmox.com bugtracker] and [https://git.proxmox.com GIT])
==Proxmox VE 2.2==
Released 24.10.2012: See [[Downloads]]
* update kernel to vzkernel-2.6.32-042stab062.2.src.rpm
* update Intel NIC drivers (e1000e to 2.1.4, ixgbe to 3.11.33, igb to 4.0.17)
* update qemu-kvm to 1.2.0
* openvz: update vzctl to 4.0
* openvz: use real console instead of 'vzctl enter'
* add [[Live Snapshots]] support (qcow2)
* added Slovenian translation
* kvm: new option to select SCSI controller hardware
* kvm: support up to 32 network devices
* kvm: support up to 16 virtio devices
* kvm: add SATA to GUI
* updated cluster packages
* update to latest Debian version 6.0.6
* bug fixes (for all details see [https://bugzilla.proxmox.com bugtracker] and [https://git.proxmox.com GIT])
==Proxmox VE 2.1==
Released 27.04.2012: See [[Downloads]]
*simplify GUI for users without permissions
*implemented startup/shutdown ordering
*improved task logs on node startup/shutdown
*added Russian, Italian and Turkish translation
*updated corosync cluster stack to 1.4.3
*update LVM to 2.02.95 (fixes snapshot problems)
*bug fixes
==Proxmox VE 2.0==
Updated ISO 5.4.2012: See [[Downloads]]
*pve-kernel-2.6.32-11-pve (2.6.32-65)
**rebase on vzkernel-2.6.32-042stab053.5.src.rpm
*fix for windows timing issues ("args: -no-hpet -no-kvm-pit-reinjection" are now defaults for newer Windows guests)
*subscription key management (commercial subscribers)
*vzdump fixes (qmrestore)
Released 30.03.2012: See [[Downloads]]
(First public beta released on 30.09.2011, rc1 on 16.02.2012, final release on 30.03.2012)
[https://pve.proxmox.com/wiki/Category:Proxmox_VE_2.0 Proxmox VE 2.0 Documentation]
*Complete new GUI
**based on Ext JS 4 JavaScript framework
**fast search-driven interface, capable of handling hundreds and probably thousands of VMs
**secure VNC console, supporting external VNC viewer with SSL support
**role-based user and permission management for all objects (VMs, storages, nodes, etc.)
**Support for multiple authentication sources (MS ADS, LDAP, Linux PAM, Proxmox VE authentication)
*[[High Availability Cluster]] support for KVM guests and OpenVZ containers
**resource agents for KVM and OpenVZ
**GUI for managing KVM and OpenVZ HA settings
*Based on Debian 6.0 Squeeze
**long-term 2.6.32 Kernel (based on RHEL6.x) with KVM and OpenVZ as default
*New cluster communication based on corosync, including:
**[[Proxmox Cluster file system (pmxcfs)]]: Database-driven file system for storing configuration files, replicated in realtime on all nodes using corosync
**creates multi-master clusters (no single master anymore!)
**cluster-wide logging
**basis for HA setups with KVM guest and OpenVZ container config files
*RESTful web API [[Proxmox_VE_API]]
**Resource Oriented Architecture (ROA)
**declarative API definition using JSON Schema
**enable easy integration for third party management tools
*Backup/Restore via GUI
**GUI and CLI, works for OpenVZ containers and KVM VMs
**"Backup Now" via GUI
**Restore via GUI
**All jobs can be monitored as “Recent tasks”
*OpenVZ
**Appliance Downloader, including [https://www.turnkeylinux.org/partners TurnKey Linux] channel
**Multiple storages for OpenVZ container, no limit to /var/lib/vz anymore!!!
**vswap support
**Improved init.log (shows start-up logs of an OpenVZ container)
*Commitment to Free Software (FOSS): public code repository and bug tracker for the 2.x code base.
*Licensed under GNU Affero General Public License, version 3 https://www.gnu.org/licenses/agpl-3.0.html.
==Proxmox VE 1.9==
Updated ISO 27.9.2011, including the following, see [[Downloads]]
Release Notes:
*pve-kernel-2.6.32-6-pve (2.6.32-47)
**update to vzkernel-2.6.32-042stab037.1.src.rpm
**use bnx from upstream
**update igb driver to 3.2.9
**update e1000e driver to 1.6.2
**enable GDTH driver
**update ata_generic to support intel IDE-R
**use BIOS default settings for ASPM (fix-aspm-policy.patch)
*pve-firmware (1.0-14)
**include firmware for new bnx2 drivers
*vzctl (3.0.29-2pve1)
**update to 3.0.29.2
*vzdump (1.2-16) unstable; urgency=low
**use rsync option '--sparse' on first pass (take less disk space - as suggested by John Stevens)
Original 1.9 ISO, released 13.09.2011: See [[Downloads]]
Release Notes:
*pve-kernel-2.6.32 (2.6.32-43)
**rebase on vzkernel-2.6.32-042stab036.1.src.rpm (new stable Openvz kernel branch), updates for drivers including e1000e to 1.5.1, ARECA RAID driver, megaraid_sas, bnx2, igb to 3.1.16, ixgbe to 3.5.14-NAPI, drbd 8.3.10, ...
*vzctl (3.0.28-1pve5)
**update to latest upstream
**set default template to debian-6.0-standard
**set CONFIGFILE="pve.auto" in /etc/vz/vz.conf
**merge some fixes from upstream
*pve-manager (1.9-24)
**fix uptime display for 2.6.32 kernel with 1000Hz
**support newer vzctl versions.
**support 'maxfiles' backup option.
*pve-qemu-kvm (0.15.0-1)
**update to upstream 0.15.0
**use pxe roms from upstream qemu-kvm
*qemu-server (1.1-32)
**small fixes for new qemu-kvm 0.15.0
*libpve-storage-perl (1.0-19)
**set LC_ALL instead of LANG (avoid bug when user sets LC_ environment variables)
**iscsi: tolerate errors when not all portals are online.
*vzdump (1.2-15)
**run pre-restart hook after snapshot
==Proxmox VE 1.8==
Updated ISO 16.6.2011, includes current stable packages [[Downloads]]
*pve-kernel-2.6.32 (2.6.32-33)
*aacraid with support for Adaptec 6 series (e.g. 6805)
*KVM 0.14.1
*vzctl 3.0.27
*vzdump 1.2-13
*bug fixes
Released 30.03.2011: See [[Downloads]]
Release Notes:
*update to Debian Lenny 5.0.8
*update to kvm 0.14.0
*updated kernels with many fixes
*protect against Cross Site Request Forgery (added anti-CSRF tokens)
*bug fixes
Detailed change log:
*qemu-server (1.1-30)
**use 'cache=none' for raw images and devices to improve performance
*pve-qemu-kvm (0.14.0-2)
**also install vgabios-qxl.bin vgabios-stdvga.bin vgabios-vmware.bin
**update to 0.14.0
**add fix for eventfd support (msix-eventfd-fix.patch)
**removed kvmtrace (removed from upstream?)
**add vnc keyboard fixes for fr-ca (reported by Pierre-Yves)
*pve-kernel-2.6.32 (2.6.32-32)
**update to Debian Squeeze kernel 2.6.32-32
**remove patch use-init-task-credential.patch (fixed in upstream)
**enable CONFIG_SENSORS_K10TEMP=m
*pve-kernel-2.6.18 (2.6.18-14)
**update to ovzkernel-2.6.18-238.5.1.el5.028stab085.3.src.rpm
**increased kernel API version to '5-pve'
**update to SCST 2.0.0.1
**update to OCFS2 1.4.7
*pve-qemu-kvm-2.6.18 (0.9.1-12)
**update to kvm-83-224.el5.src.rpm
*pve-kernel-2.6.35 (2.6.35-10)
**update to Ubuntu-2.6.35-27.48
*pve-manager (1.8-15)
**protect against Cross Site Request Forgery (added anti-CSRF tokens)
**correctly encode HTML entities inside notes (avoid persistent XSS vulnerabilities)
**use http_proxy in cron APL download if configured
==Proxmox VE 1.7==
Released 30.11.2010: See [[Downloads]]
Change log:
*libpve-storage-perl (1.0-16)
**several fixes for multipath
*pve-kernel-2.6.18 (2.6.18-10)
**update to ovzkernel-2.6.18-194.26.1.el5.028stab070.14.src.rpm
*pve-kernel-2.6.32 (2.6.32-28)
**update to Debian Squeeze kernel 2.6.32-28 (openvz 2.6.32-dzhanibekov)
**update igb driver to version 2.4.8
**update ixgbe driver to version 3.0.14
*pve-manager (1.7-10)
**only display the warning about missing OpenVZ if KVM is unavailable.
**fix config change detection in pvectl
**on the create VM page, query capabilities of the remote host (KVM/OpenVZ enabled?) and select reasonable defaults.
*pve-qemu-kvm (0.13.0-2)
**do not install unnecessary blobs (manually install blobs, use --disable-blobs)
**update migration and vnc keymap patches for 0.13.0
*vzdump (1.2-9)
**fix veth numbers when restoring openvz containers (vzrestore)
*qemu-server (1.1-25)
**new --repeat option for qmrestore
**use new -netdev syntax for network devices. This enables tcp offload settings, and we can use 'vhost=on' for virtio-net
*Small bug fixes
==Proxmox VE 1.6 (updated) - ISO Installer with 2.6.32 Kernel with OpenVZ including KVM 0.12.5==
Released 18.10.2010: See [[Downloads]]
*Support new storage model
*Allow to specify MAC address (for KVM and OpenVZ in veth mode)
*Added iSCSI Tape support, see https://scst.sourceforge.net/
*Added SCST modules to the kernel
*Provide SCST userspace package

==Proxmox VE 0.9beta2==
Released 22.7.2008, see [https://forum.proxmox.com/threads/proxmox-ve-0-9beta2-released.533/ Proxmox VE 0.9beta2 released!]

Including:
*Extend web interface for KVM guests (add hard drives, network cards, etc.)
*Many bug fixes

==Proxmox VE 0.9==
First public release: 15.4.2008
[[Category: Installation]]

Latest revision as of 13:19, 15 October 2024

Roadmap

  • Offline updates (done)
  • Cross-cluster migration mechanism - foundation and CLI integration released with Proxmox VE 7.3
  • Enhancement and stabilization of the integrated Software Defined Network solution
    • Integrate local network devices into access control framework (released with Proxmox VE 8.0)
    • Stabilize VLAN and source NAT management as first parts of bringing Proxmox VE SDN out of tech preview.
  • Project "Cattle and Pets"
    • Improve user and virtual machine management experience on big setups
    • Add cluster-wide update and status control center
    • Assist on Ceph upgrades with semi-automated restarts of services and OSDs
    • Improve and extend notifications by allowing one to add more endpoints besides email, each with separate filters (released with Proxmox VE 8.1)
    • Automated/unattended installation (done with Proxmox VE 8.2)
  • Text based installer UI
    • Provide a simple text based installer as an alternative to the GTK one (released with Proxmox VE 8.0)
    • Make the text based installer ready to fully work even if just a serial console is present (released with Proxmox VE 8.1)
  • Cluster Resource Scheduling Improvements
    Short/Mid-Term:
    • Re-balance services on fresh start-up (request-stop to request-start configuration change; released with Proxmox VE 7.4)
    • Account for non-HA virtual guests
    Mid/Long-Term:
    • Add Dynamic-Load scheduling mode
    • Add option to schedule non-HA virtual guests too
  • Stabilizing the Software-Defined Network stack
    • Fully support core SDN stack (done with Proxmox VE 8.1)
    • Implement DHCP-based IP address management (tech preview with Proxmox VE 8.1)
    • Improve and polish user interface experience

Release History

See also Announcement forum

Proxmox VE 8.2

Released 24 April 2024: See Downloads (updated ISO release 2 on 22 August 2024, shipping a newer EFI Secure Boot shim).

  • Based on Debian Bookworm (12.5)
  • Latest 6.8 Kernel as new stable default
  • QEMU 8.1.5
  • LXC 6.0.0
  • ZFS 2.2.3
  • Ceph Reef 18.2.2
  • Ceph Quincy 17.2.7

Highlights

  • New import wizard to migrate guests directly from other hypervisors.
    Connect to other hypervisors using their public APIs and directly migrate guests to Proxmox VE.
    First implementation is for VMware ESXi.
    Guests can be started on Proxmox VE while their data is still being imported to the target storage in the background.
  • Support for automated and unattended installation of Proxmox VE.
    Proxmox VE now ships a tool that prepares a Proxmox VE ISO for automated installation.
    The prepared ISO retrieves all required settings for automated installation from an answer file.
    The answer file can be provided directly in the ISO, on an additional disk such as a USB flash drive, or over the network.
  • Backup fleecing (advanced feature).
    When creating a backup of a running VM, a slow backup target can negatively impact guest IO during the backup process.
    Fleecing can reduce this impact by using fast local storage as a buffer for data blocks.
    However, the fleecing approach requires the use of temporary local storage space.
    Backup fleecing can make sense when backing up IO-heavy guests to a remote Proxmox Backup Server with a slow network connection.
  • Modernized Proxmox VE firewall implementation based on nftables (opt-in technology preview).
    The new implementation is written in Rust and intended to replace the current firewall based on iptables in the future.
    Moving to nftables will improve robustness and make it possible to fix long-standing issues with the old implementation.
    The nftables firewall is provided as a technology preview to gather feedback from the community and needs to be enabled manually.
    With a few exceptions, its feature set is nearly on par with the old firewall implementation.
  • Seamless upgrade from Proxmox VE 7.4, see Upgrade from 7 to 8
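As an illustration of the unattended-installation flow described above, the sketch below writes a minimal answer file and shows how it could be baked into an installer ISO. The key names and the `proxmox-auto-install-assistant` invocation are assumptions based on these notes; check the official documentation for the authoritative schema.

```shell
# Hypothetical minimal answer file; the key names are assumptions,
# not the authoritative schema.
cat > answer.toml <<'EOF'
[global]
keyboard = "en-us"
country = "at"
fqdn = "pve-node1.example.com"
mailto = "admin@example.com"
timezone = "Europe/Vienna"
root_password = "change-me"

[network]
source = "from-dhcp"

[disk-setup]
filesystem = "ext4"
disk_list = ["sda"]
EOF

# Bake the answer file into a prepared ISO (requires the
# proxmox-auto-install-assistant package, hence commented out here):
# proxmox-auto-install-assistant prepare-iso proxmox-ve_8.2-1.iso \
#     --fetch-from iso --answer-file answer.toml
```

As the release notes mention, the answer file can alternatively be placed on a separate disk or fetched over the network instead of being embedded in the ISO.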

Changelog Overview

Enhancements in the web interface (GUI)

  • Fix an issue where the VM creation wizard would not allow creating a VM without an ISO.
  • Fix an issue that prevented users from seeing accessible replication jobs.
  • Fix an issue where the node summary page would not display the version of a running foreign kernel (issue 5121).
  • When editing LVM or directory storages, add an explanatory tooltip to the "Shared" checkbox.
  • Expose the saferemove setting for LVM storages in the GUI.
  • Add some missing packages to the "Package versions" window.
  • When editing ZFS storages, display 16k as the blocksize placeholder to reflect the current ZFS defaults.
  • Change "Syslog" button to "System Log" button to better reflect that the log is gathered from the systemd journal.
  • Fix an issue where the GUI used an incorrect language code for Korean, and provide a clean transition for users who still have a cookie with the incorrect language code set.
  • Improve documentation of the /nodes/{node}/status API call.
  • Fix an issue where the date picker would choose the wrong date after changing to a different month.
  • When granting permissions to a group, make the group selector searchable, similar to the user and token selectors.
  • Clarify the confirmation prompt for removing a certificate without a name.
  • Fix an issue where edit windows would not be correctly masked while loading.
  • Fix an issue where adding a storage to a pool would break edit windows of the same browser session, and guard against similar issues in the future.
  • Display the end-of-life message as a notice up until three weeks before the end-of-life date, and display it as a warning from that point on.
  • Move the "Reset" button for edit windows to an icon-only button in the title bar (issue 5277).
    This reduces the risk of misclicking and accidentally resetting form data.
  • Fix xterm.js not loading in certain OS and browser combinations, for example iOS (issue 5063).
  • Change the icon for editing guest notes from a gear to a more fitting pencil.
  • Allow selecting note contents by double-clicking.
    Previously, this was not possible because double-clicking on a note would always open the editor window.
    This behavior can now be optionally disabled by a browser-local GUI setting.
  • The TFA input field now sets an autocompletion hint for improved compatibility with password managers (issue 5251).
  • Fix an issue where the mobile UI did not allow logging in with TOTP.
  • Display the subscription reminder in the mobile UI as well, for improved consistency with the GUI.
  • The user edit window now disables the "Key IDs" field if the user has TFA configured.
  • The HTTP server now supports compressing responses using deflate in addition to gzip.
  • Improved translations, among others:
    • French
    • German
    • Italian
    • Japanese
    • Korean
    • Simplified Chinese
    • Spanish
    • Traditional Chinese
    • Ukrainian

Virtual machines (KVM/QEMU)

  • The new ESXi importer is available as a storage plugin.
    This allows importing guests from an ESXi node (preferred for performance reasons) or a vCenter instance.
    Guests need to be stopped on the VMware side before the import.
    Live import allows starting the guests on Proxmox VE immediately and migrating their disks in the background.
    Most settings are read from the source and translated to equivalent Proxmox VE configuration options.
    Overrides for configuration options can be defined manually.
  • New QEMU version 8.1.5
    See the upstream changelog for details.
  • Apply some upstream QEMU patches:
    • Fix a potential deadlock resizing disks with IO thread enabled.
    • Fix a potential deadlock when taking a snapshot or a suspend-mode backup with VirtIO block disks and IO thread.
    • Fix an issue where guest IO on a VirtIO block or VirtIO SCSI disk with IO thread could get stuck on backup, storage move or hotplug.
  • Increase the limit of open files for QEMU processes (issue 4507).
    This fixes issues where a VM with many network devices with multi-queue or many virtual RBD disks would exceed the limit, resulting in VM crashes or stuck guest IO.
  • Fix an issue where live-importing certain VMDK images would trigger an assertion failure.
  • If a backup fails, increase the chance of showing the relevant error message by ignoring irrelevant error messages.
  • Add CPU models introduced in QEMU 8.1.
  • Mark Windows Server 2025 as supported by the win11 OS type.
  • Add support for emulated IOMMU (vIOMMU), which enables hypervisor VMs to pass through PCI devices to nested VMs (issue 3784).
  • Allow up to 256 cores per socket.
  • Add Intel E1000E as a supported network device model in the GUI, as it is a popular option in VMware guests.
  • Allow specifying vendor and product information for virtual SCSI disks (issue 4957). Fix a regression that temporarily prevented creation of SCSI Cloud-init disks (issue 5363).
  • Make IP addresses selectable for copy & paste in the guest agent network information.
  • Fail live migration early if the VNC clipboard is enabled, as this is not yet supported by QEMU.
  • Prevent starting a VM with a 32-bit CPU type and a 64-bit OVMF BIOS as it is unsupported.
  • Prevent CPU hotplug for non-x86_64 VMs as it is unsupported.
  • Fix an issue where rebooting a VM with a passed-through mediated device would fail due to a race condition.
  • Fix an issue where insecure live migration of multiple VMs in parallel could cause some migrations to fail due to a port conflict (issue 4501).
  • Fix an issue where removing a snapshot of a running VM would fail if a disk was moved since VM start (issue 2258).
  • Fix an issue where migration would fail to deactivate volumes on the source node in case the volume has to be renamed on the target node.
  • Fix an issue where the storage for Cloud-init cicustom snippets would not be activated when starting a VM, causing the VM startup to fail (issue 4085).
  • Cloning a VM now succeeds even if deactivating volumes on the source node fails (issue 1734).
    Deactivating a volume can fail if a template is cloned multiple times in parallel, and previously caused the clone task to fail.
  • Increase the default timeout for drive-mirror operations to 10 minutes, to avoid timeouts when moving storage of running VMs.
  • When importing a disk, emit a warning when a fallback format is used instead of the requested format.
  • Allow configuring the VNC clipboard in the GUI.
  • Fix an issue where VMs that were suspended in-guest would be wrongly terminated after backup.
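Since the ESXi importer described at the top of this section is exposed as a storage plugin, a source hypervisor can also be attached from the CLI. The sketch below writes the commands to a helper script rather than executing them; hostnames, credentials, and the storage ID are placeholders, and the exact `pvesm` options should be verified against the current documentation.

```shell
# Written to a script instead of run directly, since the commands
# require a Proxmox VE node and a reachable ESXi host.
cat > import-from-esxi.sh <<'EOF'
#!/bin/sh
# Attach the ESXi host as an "esxi"-type storage (placeholder values):
pvesm add esxi esxi-source --server esxi.example.com \
    --username root --password 'secret'
# Afterwards the guests on that host appear under the new storage,
# where the import wizard (or a live import) can be started.
EOF
chmod +x import-from-esxi.sh
```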

Containers (LXC)

  • New LXC version 6.0.0
    See the upstream changelog for details.
  • Allow filtering firewall logs of containers, similarly to node and VM firewall logs.
  • Add a keep-env option to the pct enter and pct exec commands that controls whether the current environment is passed to the container.
    For backwards compatibility, keep-env currently defaults to enabled, and will default to disabled in the next major release.
  • Make host device passthrough for containers available in the GUI (issue 754).
    API and CLI already supported device passthrough since Proxmox VE 8.1.
  • Avoid passing environment variables for the TLS certificate store location from the host to the container (issue 5194).
    These variables did not leak any sensitive information, but could cause errors in the container if the TLS certificate store was at a different location.
  • When changing the power status of a container, lock its config asynchronously in the task worker to avoid blocking the API.
  • Fix an issue that prevented hotplug of mount points when running kernel 6.5 or later (issue 5160).
  • Change the vmid field type in the /nodes/{node}/lxc response from string to integer to conform with the schema.
  • Fix an issue where volumes could not be hotplugged due to missing volume activation.
  • Fix an issue where a device passthrough entry would be added to the container config even though the entry was invalid.
  • Fix an issue where executing pct fsck would fail due to missing volume activation, and deactivate volumes afterwards even if the command failed.
  • Avoid passing the deprecated noacl mount option for ext4 formatted volumes (issue 4846).
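The new keep-env option mentioned above can be passed explicitly today to opt into the default planned for the next major release. A hedged sketch follows; the container ID is a placeholder and the exact boolean flag syntax is an assumption based on the usual Proxmox VE CLI conventions.

```shell
# Written to a script, since pct only exists on a Proxmox VE node.
cat > enter-ct.sh <<'EOF'
#!/bin/sh
# Enter container 101 without passing the host environment along,
# matching the default planned for the next major release:
pct enter 101 --keep-env 0
EOF
chmod +x enter-ct.sh
```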

General improvements for virtual guests

  • Stopping a VM or container can now overrule active shutdown tasks (issue 4474).
    Previously, stopping a guest could fail if there was a running shutdown task for that guest.
    When stopping a guest, the GUI now offers to abort running shutdown tasks.
    Overruling shutdown tasks is currently not implemented for HA guests.
  • Allow moving unused disks to another storage.
  • Fractional rate limits for virtual network devices can now be entered without a leading zero.

Improved management for Proxmox VE clusters

  • Modernize handling of host keys for SSH connections between cluster nodes (issue 4886).
    Previously, /etc/ssh/ssh_known_hosts was a symlink to a shared file containing all node hostkeys.
    This could cause problems if conflicting hostkeys appeared in /root/.ssh/known_hosts, for example after re-joining a node to the cluster under its old name.
    Now, each node advertises its own host key over the cluster filesystem. When Proxmox VE initiates an SSH connection from one node to another, it pins the advertised host key.
    For existing clusters, pvecm updatecerts can optionally unmerge the existing /etc/ssh/ssh_known_hosts.
  • Fix an issue where UTF-8-encoded files under /etc/pve would be incorrectly re-encoded as UTF-8.
  • Increase key size for new SSH keys from 2048 to 4096 bit.
  • Improvements and fixes to the notification system introduced in Proxmox VE 8.1:
    • Fix an issue where notification recipients could not be removed from a sendmail or SMTP target (issue 5074).
    • Backup job notifications include the total backup size, which was previously missing (issue 5067).
    • Clarify notification fields in the backup job window.
    • Deprecate notification options that were only relevant in a short transition period before the final notification system was introduced.
    • As built-in targets and matchers cannot be removed, change the corresponding button labels from "Remove" to "Reset".
    • Fix an issue where notification emails would contain a localized Date header that caused issues with some mail clients.
    • A mail forwarded to an SMTP target by proxmox-mail-forward now contains the original message as the message body instead of a message part.
    • Notification mails now contain the Auto-Submitted header.
    • Support lists of values for the exact match-field mode.
    • Notification events for forwarded mails now include the hostname and type.
    • The Gotify notification target now also sets the X-Gotify-Key header for authentication, to improve compatibility with older versions of Gotify (issue 5274).
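The host-key handling change at the top of this list can be rolled out on existing clusters from the CLI. A hedged sketch, written to a script rather than executed; the flag name is an assumption based on the description above and should be checked against `pvecm help updatecerts`.

```shell
# Run on each node of an existing cluster (requires Proxmox VE,
# hence only written to a script here).
cat > unmerge-hostkeys.sh <<'EOF'
#!/bin/sh
# Unmerge the legacy shared /etc/ssh/ssh_known_hosts, switching to
# per-node advertised host keys:
pvecm updatecerts --unmerge-known-hosts
EOF
chmod +x unmerge-hostkeys.sh
```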

Backup/Restore

  • Support backup fleecing as an advanced feature.
    When creating a backup of a running VM, the backup process starts sending data blocks to the backup target in the background.
    When the guest attempts to write new data to a block that is not yet backed up, the old data needs to be backed up before the guest write can proceed. This delays the guest write.
    If the backup target storage is slow, for example because it is a Proxmox Backup Server with a slow network connection, this delay can negatively impact VM performance.
    With fleecing enabled, the old data can be buffered on a fast local "fleecing storage" instead, allowing the guest write operations to proceed with a reduced delay.
    Backups with fleecing can have lower impact on VM performance, but temporarily take up additional space on the fleecing storage.
    Fleecing can be configured for a datacenter-wide backup job in the GUI, and be used through the CLI and API.
  • Replace confusing output of null value when successfully removing a backup snapshot on a Proxmox Backup Server storage with a line that indicates success (issue 4971).
  • Show the full error context output in operations on a Proxmox Backup Server storage.
  • Fix an issue where the connection to a Proxmox Backup Server presenting a certificate signed by a CA not trusted by the Proxmox VE node would fail, even if a fingerprint is provided (issue 5248).
  • Advanced options for backup jobs, such as performance settings or bandwidth limits, can now be specified in the GUI (issue 4513). Previously, these settings had to be set via API or CLI.
  • Fallbacks for performance-related backup settings are now computed per setting. Previously, a fallback was computed for the performance setting as a whole.
  • Improve logging of failed hook scripts during backup.
  • Switch to modern ntfs3g driver for the single file-restore image, since it supports more features found in the filesystems of current Windows guests (issue 5259).
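For the fleecing feature described at the top of this section, a one-off invocation could look like the sketch below. The storage names are placeholders and the property-string syntax should be checked against the vzdump documentation for 8.2.

```shell
# Back up VM 100 to a remote Proxmox Backup Server storage while
# buffering copy-before-write blocks on fast local storage.
# Written to a script, since vzdump requires a Proxmox VE node.
cat > backup-with-fleecing.sh <<'EOF'
#!/bin/sh
vzdump 100 --storage pbs-remote \
    --fleecing enabled=1,storage=local-zfs
EOF
chmod +x backup-with-fleecing.sh
```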

Storage

  • Replication jobs now fail early with an informative error in case source and target volumes both exist, but at least one of them has no snapshots.
  • Fix an issue where a snapshot created by vzdump could not be deleted because it was wrongly assumed to be a replication snapshot.
  • Clarify the error message if a qemu-img invocation fails.
  • Fix an issue where insecure migration or replication would fail if it had to wait for a lock.
  • Support migration of VM templates with disks on LVM-thin storage (issue 1611).

Ceph

  • Fix an issue where the ceph-crash daemon could not post crash reports, and logged recurring warnings to the journal (issue 4759).
    Since ceph-crash does not run as root, it is not allowed to read the Ceph admin keyring.
    The issue is resolved by creating a client.crash keyring with appropriate rights and permissions both for new and existing Ceph clusters.
    The new keyring is created automatically on upgrade, if necessary.
  • Integrate patches that make it possible to manually install and use the upstream Ceph dashboard.
    Due to upstream changes, Proxmox VE 8.0 and 8.1 were incompatible with the Ceph dashboard.
  • Fix an issue where open TCP connections to VMs could freeze or timeout while upgrading Ceph if the firewall is active (issue 5213).
  • Integrate upstream patch that ensures RocksDB is built with compile-time optimizations for increased performance.
  • Fix permissions on subdirectories of /var/lib/ceph to avoid errors when posting crash reports.
  • Fix an issue where usage was reported as "NaN %" if no OSDs were created yet (issue 5293).
  • When expanding the Ceph status in the GUI, wrap the title to make sure it is completely visible (issue 5106).
  • Avoid edge case in which running pveceph purge would remove too many files.
  • Fix an issue where a monitor could not be deleted due to an incorrect existence check (issue 5198).

Access control

  • Keep ACL entries in the user.cfg file sorted, to ease tracking changes with tools like ansible or etckeeper (issue 5335).
  • Require non-root users to enter their current password on password change.
    This is to hedge against a scenario where an attacker has local or even physical access to a computer where a user is logged in.
  • Add dedicated Sys.AccessNetwork privilege to guard API calls that could be abused to gather information about the internal network, for example downloading ISOs (issue 5254).
    This new privilege can be granted to allow users to download ISOs without requiring the more powerful Sys.Modify privilege.
  • Restrict the values of legacy user-keys to the ones that were allowed in the deprecated initial TFA system.
  • Align the validation regular expression for the ACR value in OpenID Connect realms with the settings in Proxmox Backup Server (see related issue 5190).
  • Restrict the attributes and their values collected during LDAP sync to avoid issues with unexpected LDAP entries.
  • Add liberal length restrictions on user attributes (names, email addresses), to avoid bloating the user.cfg, which would needlessly restrict the number of ACL entries that can be stored.
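The Sys.AccessNetwork privilege mentioned above can be combined into a custom role that permits ISO downloads without granting Sys.Modify. A hedged sketch; the role name, user, and ACL path are placeholders, and the appropriate ACL path for download operations should be verified in the documentation.

```shell
# Written to a script, since pveum requires a Proxmox VE node.
cat > grant-iso-download.sh <<'EOF'
#!/bin/sh
# Custom role allowing template/ISO uploads plus outbound downloads:
pveum role add ISODownloader \
    --privs "Datastore.AllocateTemplate Sys.AccessNetwork"
# Grant it to a placeholder user on a placeholder storage path:
pveum acl modify /storage/local --users alice@pve --roles ISODownloader
EOF
chmod +x grant-iso-download.sh
```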

Firewall & Software-Defined Networking

  • Firewall based on nftables as an optional technology preview.
    Introduce a new modern Proxmox VE firewall implementation written in Rust using nftables.
    After installing and enabling the new implementation, firewall rules are enforced using nftables instead of iptables.
    Moving to nftables will make firewall bridges obsolete, which reduces complexity and allows fixing long-standing issues with the old implementation.
    The new implementation is very close to feature parity with the old implementation. Some minor differences, such as the handling of REJECT rules, still need to be addressed.
    The nftables firewall is provided as a technology preview to gather feedback from the community.
  • IP allocation for VNets with active DHCP plugin and using both IPv4 and IPv6 are now atomic, in the sense that a failure to obtain an allocation for one family removes a successful one for the other family.
  • Allow arbitrary names for bridges. Previously, bridges had to be named vmbrN to be recognized (issue 545).
  • Fix guest startup when a zone has DHCP enabled, but no DHCP ranges configured for any of the subnets.
  • Ensure that addresses for both IPv4 and IPv6 are requested in case a DHCP range was added for one family after a guest had already received an allocation for the other.
  • Add support for IPv6 prefix-list filtering in FRR.
  • Adapt the generated FRR config to work around an upstream issue and update to FRR version 8.5.2.
  • Use the correct ordering and add support for IPv6 routes in the EVPN controller.
  • Use the proxy configured for the datacenter in SDN for external API calls.
  • Improve interaction with the external IPAM plugins phpIPAM and NetBox.
  • Remove deprecated and now forbidden priority field from change request to PowerDNS (issue 5275).
  • Correctly detect array values, such as DHCP ranges, as changed for validation.
  • Handle VNets that are pending, but not yet committed, when creating the list of all VNets.
  • Fix an issue that triggered Perl's Taint Mode when an API daemon attempted to delete DHCP config files.
  • Improve the testing framework for the SDN stack, by refactoring the code to enable mocking interactions with the system.
  • Fix an issue where firewall rules using IPs/CIDRs could not be edited (issue 4963) and changes would not be detected properly.
  • Fix an issue where editing an EVPN zone would not accept an empty primary exit node.
  • Provide explicit type information for the pending field in the API returns.
  • Derive the BGP router-id from the interface MAC address in IPv6 underlay networks (issue 5364).
  • Fix the syntax for IPv6 route map entries in the generated FRR config (issue 5361).
  • Verify ISIS Network ID for syntactical correctness (issue 5344).
  • Add support for parsing and preserving BGP community-list entries in the FRR config (issue 5319).
  • Set the configured MTU of a network as DHCP option.
  • Keep aliases in cluster.fw file sorted, to ease tracking changes with tools like ansible or etckeeper (issue 5335).
  • Fix the interaction between ifupdown2 and packages which install hooks for network configuration (issue 5009).
    This fixes an issue where hosts could hang during network setup after a reboot if both ntpsec and ntpsec-ntpdate are installed.

Improved management of Proxmox VE nodes

  • A new meta package proxmox-secure-boot-support pulls in all the needed packages to enable secure-booting Proxmox systems with the default keys. This makes it easier to switch an existing system to Secure Boot.
  • The UEFI firmware update executable fwupd-amd64-signed is now signed with the Proxmox secure-boot keys, making it possible to use fwupd in Proxmox systems with secure-boot enabled.
  • Repackage the r8125-dkms driver needed for 2.5G Realtek NICs, which is not merged into the upstream kernel, to add compatibility with the new 6.8 kernel series.
  • Wake on LAN now allows configuring the bind interface and broadcast address used for sending the magic packet (issue 5255).
  • Automatically rewrite the LVM config to filter out RBD disks, if possible. Previously, LVM on the host would recognize LVM Physical Volumes located on VM disks backed by KRBD.
  • Order the ZFS pool specific import units created by Proxmox VE for new pools before the general zfs-import-cache and zfs-import-scan units, preventing harmless but misleading errors in the journal while booting (issue 4835).
  • Fix a network configuration issue where creating a new VLAN on a bridge would overwrite the vlan-id of an existing VLAN (issue 5141).
  • Clarify the description of the internal pvesubscription set-offline-key command.
  • Avoid misleading error message in pvesubscription in case the offline key activation was successful.
  • Avoid logging warnings on EFI systems without Secure Boot support.
  • Increase the maximum size before rotating the pveam logs from 50 KiB to 256 KiB, and warn if rotation fails.
  • Support for adding custom ACME-enabled CAs with optional authentication through External Account Binding (EAB) is now also present in the GUI (issue 4497, issue 5093).
  • Avoid spurious warnings about uninitialized values in various situations.
  • When a service restart fails while upgrading pve-manager or pve-ha-manager, avoid failing the upgrade.
  • Improved pvereport to provide a better status overview:
    • Add packet counter to iptables-save output and format it as a table.
    • Add the current kernel command line.
    • List currently defined jobs.
    • Include more details about recognized APT sources and packages that are held back.
    • Extend the list of reported packages with recently added library packages.
  • Improvements to Proxmox Offline Mirror:
    • Consider the subscription status when configuring mirrors for the ceph repositories provided by Proxmox.
    • Improve UX in proxmox-offline-mirror-helper when multiple subscription keys are available at the chosen mountpoint.
    • Add dark mode to the documentation.
    • Fix a wrong configuration setting for allowing weak RSA cryptographic parameters.
    • Improve path handling with command line arguments.
    • Support repositories that do not provide a Priority field (issue 5249).

Installation ISO

  • Support for automated and unattended installation of Proxmox VE.
    Introduce the proxmox-auto-install-assistant tool that prepares an ISO for automated installation.
    The automated installation ISO reads all required settings from an answer file in TOML format.
    One option to provide the answer file is to directly add it to the ISO. Alternatively, the installer can retrieve it from a specifically-labeled partition or via HTTPS from a specific URL.
    If the answer file is retrieved via HTTPS, URL and fingerprint can be directly added to the ISO, or obtained via DHCP or DNS.
    See the wiki page on Automated Installation for more details.
  • memtest86+ got updated to version 7.0.
  • Fix indefinite hangs during country detection (issue 4872).
  • The parsing of hostname options was fixed to catch invalid entries (issue 5230).
  • Fix an issue where setting ZFS compression to off did not have any effect, due to a change in upstream defaults.
  • Deprecated checksum options got removed in the advanced options for ZFS pools.
  • Improve the layout of widgets in the GTK-based installer for consistent margins and alignment.
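
As an illustration of the answer-file concept described above, a minimal sketch in TOML could look like the following. All values are placeholders and the exact field names should be verified against the Automated Installation wiki page:

```toml
# Minimal sketch of an answer file; all values are illustrative placeholders.
[global]
keyboard = "en-us"
country = "at"
fqdn = "pve-1.example.com"
mailto = "admin@example.com"
timezone = "Europe/Vienna"
root_password = "change-me"

[network]
source = "from-dhcp"

[disk-setup]
filesystem = "ext4"
disk_list = ["sda"]
```

An ISO prepared with the proxmox-auto-install-assistant tool then reads such a file to perform a fully unattended installation.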

Notable changes

  • Kernel 6.8 is reported to fix a long-standing issue where VMs running on hosts with multiple NUMA nodes could temporarily freeze for short durations. The issue was more likely to occur if KSM and/or NUMA balancing was active. See the corresponding forum thread for more information.
  • Previously, the LXC power management endpoints under /api2/json/nodes/{node}/lxc/{vmid}/status/ acquired the config lock synchronously in the API handler, meaning that a successful HTTP response implied that the task worker was able to acquire the lock. Now, they acquire the config lock asynchronously in the task worker, meaning that the task worker may still fail to acquire the lock despite the successful HTTP response. This change may require adjustment of tools that use the API directly and relied on the previous behavior.
  • When creating a backup and specifying custom performance settings, fallbacks for unspecified settings are now computed per performance setting. Previously, a fallback was computed for the performance setting as a whole. In a few edge cases, this may change the effective performance settings for backups. See the commit for details.

Known Issues & Breaking Changes

Kernel 6.8

The Proxmox VE 8.2 releases will install and use the 6.8 Linux kernel by default. A major kernel change can have a few hardware-specific side effects.

You can avoid installing the 6.8 kernel by pinning the proxmox-default-kernel package version before the upgrade. The last version to depend on kernel 6.5 is 1.0.1.

To pin the package to that version, create a file /etc/apt/preferences.d/proxmox-default-kernel with the following content. This will keep proxmox-default-kernel on the old version until that file is deleted, and a new upgrade is initiated:

Package: proxmox-default-kernel
Pin: version 1.0.1
Pin-Priority: 1000

Kernel: Change in Network Interface Names

Upgrading kernels always carries the risk of network interface names changing, which can lead to invalid network configurations after a reboot. In this case, you must either update the network configuration to reflect the name changes, or pin the network interface to its name beforehand.

See the reference documentation on how to pin the interface names based on MAC Addresses.

Currently, the following models are known to be affected at higher rates:

  • Models using i40e. Their names can get an additional port suffix like p0 added.
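
As a sketch, the MAC-address-based pin mentioned above can be expressed as a systemd.link(5) file. The file name, MAC address, and interface name below are placeholders:

```ini
# /etc/systemd/network/10-pin-eno1.link (hypothetical file name)
[Match]
MACAddress = aa:bb:cc:dd:ee:ff

[Link]
Name = eno1
```

After creating such a file, the initramfs needs to be refreshed and the node rebooted; see the reference documentation for the full procedure.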

Kernel: DKMS

Since the default kernel changes to 6.8, installed DKMS modules may no longer build. For instance, as of April 2024 this is the case for the latest NVIDIA GRID driver 17.1.

Aside from simply removing and not using those modules, one possible workaround is to pin the proxmox-default-kernel package to an older version before the upgrade. The last version to depend on kernel 6.5 is 1.0.1.

Kernel: Split Lock Detection Slowing Down VMs

If the host CPU supports split lock detection and a VM is performing misaligned memory accesses, the host kernel might artificially slow down the VM as of kernel version 5.19 or later.

There's now a wiki article on split lock detection with more information.
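
If the slowdown is not acceptable and the guest software performing the misaligned accesses cannot be changed, split lock detection can be turned off host-wide with the split_lock_detect=off kernel parameter. A sketch for a GRUB-booted system (keep any existing parameters and run update-grub afterwards):

```shell
# /etc/default/grub (sketch; merge with your existing parameters)
GRUB_CMDLINE_LINUX_DEFAULT="quiet split_lock_detect=off"
```

Note that this disables the detection for the whole host; see the wiki article for the trade-offs.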

Old Ceph Crash Reports

The Ceph crash service monitors other Ceph daemons for crashes and reports them as a health warning. This service did not work properly with Proxmox VE's ceph server integration due to some internal ceph restructuring. This has been fixed with the ceph packages uploaded with the Proxmox VE 8.2 release, but this also means that it will now catch up on reporting older crashes, even if they happened a few weeks or months ago.

If you see a Ceph health warning due to services being reported as crashed, please also check the date of the crash to avoid mistaking it for a recent crash caused by the upgrade.

Kernel: intel_iommu now defaults to on

The intel_iommu parameter defaults to on in the kernel 6.8 series. Enabling the IOMMU can cause problems on older hardware, or on systems with an outdated BIOS, due to bugs in the BIOS.

The issue can be fixed by explicitly disabling intel_iommu on the kernel command line (intel_iommu=off) following the reference documentation.
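
A sketch for a GRUB-booted system; edit /etc/default/grub and run update-grub afterwards. Systems booting via systemd-boot use /etc/kernel/cmdline and proxmox-boot-tool refresh instead, as described in the reference documentation:

```shell
# /etc/default/grub (sketch; merge with your existing parameters)
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=off"
```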

Kernel: Broadcom Infiniband driver issue

The bnxt_re module causes issues with some Broadcom NICs which have their InfiniBand functionality enabled. As InfiniBand is not used in most deployments, simply preventing the module from loading mitigates the issue. Create a file /etc/modprobe.d/bnxt-re-blacklist.conf containing:

blacklist bnxt_re

Afterwards, make sure to update the initramfs with update-initramfs -k all -u.

Alternatively, you can install Broadcom's niccli utility and the corresponding DKMS module to disable the InfiniBand functionality of the NIC permanently. See the relevant post in our community forum.

Proxmox VE 8.1

Released 23 November 2023: See Downloads (updated ISO release 2 with current package set, including updated kernel and ZFS 2.2.2, on 07 February 2024).

  • Based on Debian Bookworm (12.2)
  • Latest 6.5 Kernel as new stable default
  • QEMU 8.1.2 (8.1.2 with ISO refresh)
  • LXC 5.0.2
  • ZFS 2.2.0 with stable fixes backported (2.2.2 with ISO refresh)
  • Ceph Reef 18.2.0
  • Ceph Quincy 17.2.7

Highlights

  • Secure Boot support.
    Proxmox VE now includes a signed shim bootloader trusted by most hardware's UEFI implementations. All necessary components of the boot chain are available in variants signed by Proxmox.
    The Proxmox VE installer can now be run in environments where Secure Boot is required and enabled, and the resulting installation can boot in such environments.
    Existing Proxmox VE installations can be switched over to Secure Boot without reinstallation by executing some manual steps, see the documentation for details.
    How to use custom secure boot keys has been documented in the Secure Boot Setup wiki. For using DKMS modules with secure boot see the reference documentation.
  • The core of Proxmox VE's Software-Defined Network stack moved from experimental to supported and includes new features.
    Proxmox VE SDN allows fine-grained control of virtual guest networks at the datacenter level.
    The new automatic DHCP IP address management (IPAM) plugin can be used to transparently assign IPs to virtual guests in Simple zones. (tech-preview)
    The web UI now allows inspecting and editing DHCP leases managed by the built-in IPAM plugin.
  • New flexible notification system.
    Send notifications not only via the local Postfix MTA, but also via authenticated SMTP or to Gotify instances.
    Flexible notification routing with matcher-based rules to decide which targets receive notifications about which events.
  • Proxmox Server Solution GmbH, the company behind Proxmox VE development and infrastructure, was assigned an official Organizationally Unique Identifier (OUI) BC:24:11 from the IEEE to use as default MAC prefix for virtual guests.
    This OUI can be used for virtual guests inside private networks by all users and is set as new default MAC-Address prefix in the datacenter options.
  • Ceph Reef is now supported and the default for new installations.
    Reworked defaults bring improved performance and increased reading speed out of the box, with less tuning required.

Changelog Overview

Enhancements in the web interface (GUI)

  • Improvements to bulk actions:
    • Add a new "Bulk Suspend" action to suspend a selection of guests in one action.
    • Add a new section above the guest list for configuring guest filters and add a button for clearing filters.
    • Allow to filter guests by their tags.
    • Reorder fields and drop the obvious warning about local storage to improve screen-space utilization.
    • Reword the message displayed for bulk actions in the task log to "Bulk Start/Stop/Migrate". The message shown previously was "Start/Stop/Migrate all" and could be misleading in case not all guests were affected (issue 2336).
    • The "Bulk Migrate" action is now hidden on standalone nodes, as there is no valid migration target in that case.
  • Improvements to the node summary panel:
    • The summary now indicates whether the node was booted in legacy (BIOS) mode, EFI mode, or EFI mode with Secure Boot enabled.
    • The currently running kernel is now reported more compactly by indicating only the version and the build date.
  • Allow to automatically decompress an ISO file when downloading from a URL to a storage. The decompression algorithm can be set in the GUI (issue 4849).
  • Allow moving VMs and containers from one pool to a different pool in one operation.
  • Avoid needlessly reloading the GUI after ordering a certificate via ACME for a different cluster node.
  • The permission editor now also shows the ACL paths for notifications and PCI/USB mappings.
  • The resource tree now displays the usage in percent when hovering over a storage.
  • If the configured tree shape for tags is not "Full", the resource tree now displays a tooltip with the tag name when hovering over the configured shape.
  • Ensure the SPICE config is downloaded with the correct file extension on Safari to avoid problems with macOS application association (issue 4947).
  • Fix an issue where the "Migrate" button stayed disabled even if selecting a valid target node.
  • Fix a bug where the backup job editor window would add an invalid entry for VMID 0 when using selection mode "all".
  • Improve error message when creating a VM with an invalid hostname: Clarify that a valid hostname, not a valid DNS name, is expected (issue 4874).
  • When uploading a subscription, ignore surrounding whitespace in the subscription key to avoid confusing errors.
  • Improve the focus handling when editing tags to allow tabbing through editable tag fields.
  • Allow adding tags directly when creating VMs and containers.
  • Increase height of the VM and container creation wizard to obtain a 4:3 ratio.
  • When creating an IP/CIDR inside an IPSet, the GUI now requires that an IP/CIDR is provided. Previously, the GUI accepted an empty field, but the API threw an error.
  • Update external links to proxmox.com that changed during the website redesign.
  • Fix an issue where the OK button would stay disabled when editing an ACME DNS challenge plugin (issue 4531).
  • Fix an issue where clicking "Reset" in the zpool creation window could cause an error when re-opening the window (issue 4951).
  • Fix an issue where users could write notes with links containing JavaScript code. This JavaScript code would be executed when a (different) user clicked on such a link.
  • HTML-encode API results before rendering as additional hardening against XSS.
  • Improved translations, among others:
    • Croatian (NEW!)
    • Georgian (NEW!)
    • Arabic
    • Catalan
    • German
    • Italian
    • Polish
    • Simplified Chinese
    • Traditional Chinese
    • Ukrainian
    • Several remaining occurrences of the GiB unit in the GUI can now be translated (issue 4551).

Virtual machines (KVM/QEMU)

  • New QEMU version 8.1.
    See the upstream changelog for details.
  • Add clipboard support to the VNC console in the GUI. For now, the feature cannot be enabled in the GUI and has to be enabled manually via API or CLI. After installing the SPICE guest tools, text can be copied from/to the guest clipboard using the noVNC clipboard button.
  • When creating a Windows VM, allow adding a second CD/DVD drive directly in the creation wizard.
    This makes it easier to add the Windows VirtIO drivers ISO before starting the VM.
  • Remove the 10-minute timeout for allocating VM disks when restoring from backup, as this timeout may be exceeded if disks are large or network storage is involved (issue 2817).
  • Log a warning when starting a VM with a deprecated machine version.
  • Fix issues where shutdown and reboot commands would time out (instead of failing immediately) on ACPI-suspended VMs.
  • Enabling or disabling CPU hotplug for a running VM did not work reliably and is not allowed anymore.
  • Avoid leaving potentially large amounts of memory assigned to the QEMU process after backup.
  • Fix an issue where heavy network traffic or connection issues during a backup to Proxmox Backup Server could cause an unsolicited write to the first sector of a backed-up SATA disk, which usually contains the boot-sector (issue 2874).
  • Fix an issue where a race condition could cause a VM crash during backup if iothread is enabled.
  • Fix an issue where each pause and resume operation (for example when taking a snapshot) would increase the number of open file descriptors of the QEMU process, which could eventually lead to crashes.
  • Fix an issue where starting a VM with machine type q35 and multiple IDE drives would fail.
  • cloud-init: Fix issues where non-root users could not regenerate the cloud-init drive or set the ciupgrade option.
  • Start VMs using PCI passthrough with a higher timeout that is calculated from the configured memory. Previously, the timeout was reported to be too short when using PCI passthrough.
  • Fix an issue where qmeventd failed to retrieve VMIDs from processes on hybrid cgroup systems and logged errors to the journal.
  • Fix an issue where remote migration would fail for certain combinations of source/target storage, for example from qcow2 on directory to LVM-thin.
  • Fix an issue where backup of a VM template with a TPM would fail (issue 3963).
  • Fix an issue where the VNC proxy would fail if the LC_PVE_TICKET was not set (issue 4522).
  • Backports of several upstream kernel patches:
    • Fix an issue where VMs with a restricted CPU type could get stuck after live-migration from a host with kernel 5.15 to a host with kernel 6.2.
    • Fix an issue where VMs could get stuck after several days of uptime if KSM, ballooning, or both, were enabled.
    • The FLUSHBYASID flag is now exposed to nested VMs when running on an AMD CPU. This fixes an issue where some hypervisors running in a VM would fail to start nested VMs.
    • Fix an issue with recovering potential NX huge pages that resulted in a warning logged to the journal (issue 4833).
    • Fix an issue where only one NVMe device would be recognized even though multiple are present (issue 4770).
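
For the VNC clipboard feature mentioned above, a sketch of enabling it via the CLI could look like this. The VMID and display type are examples; clipboard is a sub-option of the vga property:

```shell
# Enable the VNC clipboard for VM 100 (sketch; adjust VMID and display type)
qm set 100 --vga type=std,clipboard=vnc
```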

Containers (LXC)

  • Support device passthrough for containers. The new dev0/dev1/... options take the path of a host device. Optionally, owner and permission settings for the device node inside the container can be given. For now, the option cannot be set in the GUI and has to be set manually via API or CLI.
  • Allow specifying multiple SSH keys in the container creation wizard (issue 4758).
  • Show privileged status as a separate row in the guest status view in the GUI.
  • Show distribution logo and name in the guest status view in the GUI.
  • Fix an issue where network would fail to come up for Fedora containers.
  • Add an API endpoint /nodes/{node}/lxc/{vmid}/interfaces for querying network interfaces of a running container.
  • Improve architecture detection for NixOS containers, which would previously produce a warning and default to x86_64 in case /bin/sh did not (yet) exist in the container.
  • The pct status command does not report guest CPU usage anymore, as there is currently no fast way to measure it (issue 4765).
  • Restoring a container from a PBS backup now honors the ignore-unpack-errors flag (issue 3460).
  • Fix an issue where Fedora containers would not have a container-getty on first boot.
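
Sketches of the new container features via the CLI; the VMID, node name, device path, and ownership values are all examples:

```shell
# Pass the host device /dev/ttyUSB0 into container 100, with owner and
# permission settings for the device node inside the container (sketch)
pct set 100 -dev0 path=/dev/ttyUSB0,uid=1000,gid=1000,mode=0660

# Query the network interfaces of the running container
# via the new API endpoint (replace "mynode" with the actual node name)
pvesh get /nodes/mynode/lxc/100/interfaces
```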

General improvements for virtual guests

  • Show progress of offline disk migration in the migration task log by use of dd's status=progress argument (issue 3004).
  • Proxmox VE now has an officially assigned OUI (BC:24:11) from the IEEE to be used as product-specific MAC prefix. It is now used by default instead of assigning purely random MACs (issue 4764).

HA Manager

  • Notifications for HA events, like fencing, are now configurable via the new modular notification system.
  • An issue with the target selection during service recovery, where a fenced node was selected as target, was fixed (issue 4984).

Improved management for Proxmox VE clusters

  • New flexible notification system.
    Allows sending notifications to different targets. The local Postfix MTA, previously the sole notification option, is now one of several target types available.
    Two new target types are available: smtp, which sends notification emails directly via authenticated SMTP, and gotify, which sends notifications to a Gotify instance.
    Flexible notification routing is possible through matcher-based rules that determine which targets receive notifications for specific events.
    Match rules can select events based on their severity, time of occurrence, or event-specific metadata fields (such as the event type). Multiple rules can be combined to implement more complex routing scenarios.
    Email notifications now contain an Auto-Submitted header to avoid triggering automated replies (issue 4162).
  • Name resolution to find an IP for a node's name now looks at all IPs associated with the name, only excluding loopback addresses. Additionally, a better warning is logged in case of a failed lookup.
  • pvecm updatecerts, which is used to ensure a consistent state of the certificates in a cluster, was reworked for increased robustness:
    Files not being stored in the cluster filesystem are now created earlier.
    The call now waits for the node to join the quorate partition of the cluster instead of failing. This is especially helpful during node-bootup, when running before starting pveproxy.service.
    The error message in case the call fails due to missing quorum was reworded for better clarity.
  • The MAC addresses of the guests under SDN are now cached in the cluster filesystem for improved lookup speed in a cluster.

Backup/Restore

  • Backups and backup jobs can now be configured with a notification mode for a smooth migration to the new notification system.
    The legacy-sendmail mode replicates the previous behavior of sending an email via the local Postfix MTA if an email is configured.
    The notification-system mode sends notifications exclusively using the new notification system.
    The default mode auto behaves like legacy-sendmail if an email address is configured, and like notification-system if no email address is configured.
    Thus, existing backup jobs without a configured email address will default to sending notification emails to the root user after the upgrade to Proxmox VE 8.1.
  • Allow setting the pbs-entries-max parameter. In order to prevent failing container backups with a huge number of files in a directory, it can help to set it to a higher value than the default (issue 3069).
  • Improvements to the vma CLI tool that handles VMA backup files:
    • The vma extract command now optionally takes a filter to only extract specific disks from the backup (issue 1534).
    • Fix an issue where the vma create command could not write to tmpfs (issue 4710).
  • Improvements to file restore:
    • Fix an issue where the settings for ZFS ARC minimum and maximum were not properly set for the temporary file-restore VM.
    • Fix an issue where debug log messages were not printed even though the PBS_QEMU_DEBUG environment variable was set.
  • Fix an issue with backups of diskless VMs to Proxmox Backup Server: Even though encryption was enabled, such backups would not be encrypted. Since the backup contained no disks, this did not reveal any VM data, but the VM configuration was stored in plaintext (issue 4822).
  • File restore now allows downloading .tar.zst archives as an alternative to .zip archives.
  • Improved handling of backups with master key:
    • Abort the backup if the running QEMU binary does not support master keys, instead of just printing a warning. Master keys are supported in QEMU builds of Proxmox VE since version 6.4.
    • If no encryption key is configured, the backup task will explicitly warn that the backup will be unencrypted.
    • The backup log now prints only one message that encryption is enabled, instead of previously two messages.
  • Allow configuring whether restore should overwrite existing symlinks or hard links when directly invoking proxmox-backup-client restore (issue 4761).
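
For the pbs-entries-max setting mentioned above, a sketch of raising it cluster-wide via the vzdump configuration file (the value is purely illustrative; it is a sub-option of the performance setting):

```shell
# /etc/vzdump.conf (sketch; the value is illustrative)
performance: pbs-entries-max=2097152
```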

Storage

  • Improvements to the iSCSI storage backend:
    • Try to log into all discovered portals for a target, instead of just the single portal initially configured for the storage. This way, the storage can now become available in a multipath setup if at least one portal is online, even if the single configured portal is offline.
    • The backend is now usable immediately after installing Open-iSCSI. Previously, some services needed to be restarted first.
  • Fix an issue where a replication job could not be run or deleted if it referred to a storage that does not exist anymore.
  • SMB/CIFS: Fix connection check in case an empty domain is provided.
  • The BTRFS plugin received a fix for creating base templates when falling back to the standard directory variant.

Ceph

  • Support installing Ceph 18.2 Reef and make it the default release for new setups.
  • Allow creating multiple OSDs per physical device via API and CLI, and display such setups properly in the GUI. Multiple OSDs on one device can be useful when using fast NVMe drives that would be bottle-necked by a single OSD service (issue 4631).
  • When creating a pool, read the default values for size/min_size from the Ceph configuration instead of using hard-coded default values 3/2 (issue 2515).
    There are use cases where different values for size/min_size make sense, for example 4/2 if a cluster spans two rooms.
  • The pveceph install command now asks the user to confirm the Ceph version to be installed (issue 4364).
  • Improve discoverability of Ceph warnings by providing a tabular view and a button to copy warning details.
  • Report OSD memory usage more accurately by using the Proportional Set Size (PSS) of the OSD process. Previously, memory usage was read from the OSD service and thus included the page cache, leading to extremely high values shown in the GUI.
  • Use snake_case when setting options in Ceph config files to ensure consistency within that file (issue 4808).
  • Mark global pg_bits setting as deprecated and make it a no-op. The setting has been deprecated since Ceph 13.
  • Improve reporting of cluster health:
    • Replace "Error" category for PG states with "Warning" and "Critical" categories to allow more fine-grained assessment of the cluster state.
    • Rename "Working" state to "Busy" state to better convey its meaning.
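
The multiple-OSDs-per-device feature above can be used from the CLI; a sketch, where the device path is an example:

```shell
# Create two OSDs on one fast NVMe device (sketch; adjust the device path)
pveceph osd create /dev/nvme0n1 --osds-per-device 2
```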

Access control

  • Support nested pools up to a nesting depth of 3 levels for greater flexibility in structuring VMs and containers (issue 1148).
    Pool names can now contain at most two slashes, allowing them to be structured as parent/child/grandchild.
    Permissions are inherited along the path according to the usual inheritance rules.
  • Improvements to LDAP/AD realms:
    • When adding/updating an LDAP/AD realm, there is now the option to directly check if the bind works, instead of having to wait for the first sync. This check is enabled by default in the GUI and can be disabled in the advanced options if needed.
    • Forbid specifying a Bind DN without a password in the GUI, which is already forbidden by the API.
    • Expose the mode option in the GUI that allows switching between LDAP, LDAPS and LDAP via STARTTLS. This option was already supported by the backend and succeeds the secure option which allowed switching between LDAP and LDAPS only.
    • Fix an issue with enforced TFA where certain sync settings would cause the TFA restriction to not be enforced.
    • It is now possible to update only the password field for the bind-user of an LDAP realm; previously this failed.
    • Allow setting the case-sensitive option of AD realms, which was previously only editable via CLI, in the GUI.
  • Existing-but-disabled TFA factors can no longer circumvent realm-mandated TFA.
  • The list of SDN-related ACL paths now reflects all SDN objects, ensuring that there are no settings which remain root@pam only.
  • A mistyped entry of pools instead of pool in the default set of ACL paths was fixed.
  • Unlocking a user now also resets the TFA failure count.

Firewall & Software-Defined Networking

  • The core of Proxmox VE's Software-Defined Network stack has been lifted from experimental to supported.
  • New DHCP plugin for SDN (tech preview).
    Enabling DHCP for a zone will start a DHCP server that can automatically assign IP addresses to associated virtual guests (VMs and containers).
    Currently, only Simple zones are supported, and dnsmasq is the only supported DHCP server.
    Each subnet of a Simple zone can now be configured with DHCP ranges.
    When a virtual guest associated to the zone starts, the DHCP plugin queries the zone's IPAM for an IP address and offers it to the virtual guest.
    If the built-in Proxmox VE IPAM is used, active DHCP leases can be viewed and edited conveniently on the web UI.
  • IS-IS was added as a further SDN controller, next to EVPN and BGP.
  • The interfaces section of the FRR configuration is now parsed in order to support multiple underlay networks (like IS-IS).
  • MAC learning on SDN bridges can now selectively be disabled for individual plugins. This is implemented for the EVPN plugin.
  • A warning is logged if the main network configuration (/etc/network/interfaces) does not source the SDN controlled configuration (/etc/network/interfaces.d/sdn), because the SDN configuration would be ignored in that case.
  • The error reporting for problems with vnet generation was improved, by pointing to the relevant task log.
  • The firewall log can now be also displayed for a specific timespan instead of showing the live-view (issue 4442).
  • Fix an issue where scoped alias resolution would fail with an error.
  • Enabling VLAN-awareness for an EVPN zone is unsupported and now fails instead of just printing a warning (issue 4917).
  • Fix an issue where an empty subnet could not be deleted if it has a gateway defined.
  • The IPAM selector, which is a required choice, is no longer hidden behind the Advanced checkbox in the UI.
  • The identifying CIDR for a vnet is now named Subnet to improve clarity.
  • A systemd.link(5) configuration is now shipped to keep bridges up even if no port is connected, and to prevent a random MAC address from being assigned to bridges or bond interfaces.
  • ethtool is now a hard dependency of ifupdown2, matching the common need to disable offloading features of certain NICs.
  • Prevent a crash in ifupdown2 caused by an error in a third-party plugin in /etc/network/ifup.d/.
  • The accept_ra and autoconf sysctl settings are now also applied for bridge interfaces.
  • ifupdown2 now correctly recognizes when remote IPs for vxlan are configured by external sources and does not remove them on reconfiguration.

Improved management of Proxmox VE nodes

  • Secure Boot support.
    Proxmox VE now ships a shim bootloader signed by a CA trusted by most hardware's UEFI implementation. In addition, it ships variants of the GRUB bootloader, MOK utilities and kernel images signed by Proxmox and trusted by the shim bootloader.
    New installations support Secure Boot out of the box if it is enabled.
    Existing installations can be adapted to Secure Boot by installing optional packages, and possibly reformatting and re-initializing the ESP(s), without the need for a complete reinstallation. See the wiki article for more details.
  • The kernel shipped by Proxmox is shared for all products. This is now reflected in the renaming from pve-kernel and pve-headers to proxmox-kernel and proxmox-headers respectively in all relevant packages.
  • The new proxmox-default-kernel and proxmox-default-headers meta-packages will depend on the currently recommended kernel-series.
  • Avoid logging benign but confusing warnings about a segfault in pverados.
  • Many edge-cases encountered during the upgrade from PVE 7.4 to 8 by our user-base are now detected and warned about in the improved pve7to8 checks:
    • Warn if DKMS modules are detected, as many of them do not upgrade smoothly to the newer kernel versions in PVE 8.
    • Warn if the PVE 7 system does not have the correct grub meta-package installed; the correct meta-package ensures that the installed bootloader is actually upgraded to the newest version.
    • The check for old cgroupv1 containers was adapted to not cause false positives on current containers (for example Fedora 38).
  • Support for adding custom ACME-enabled CAs which require authentication through External Account Binding (EAB) on the command line (issue 4497).
  • Using the Console/Shell on a PVE node is now possible for all users with the appropriate permissions (Sys.Console). The restriction to the pam realm was removed. The users will still need to log in as a system user on the shell, though.
  • Since the Proxmox repositories now support fetching changelogs directly, the changelogs for new package versions shown in the UI are all gathered with apt changelog.
  • The pvesh debug tool now also supports yielding output for streaming API calls, such as the syslog.
  • The documentation on firmware updates provided by the operating system has been extended and revised, helping administrators to identify if their setup is optimal.

Installation ISO

  • The ISO is able to run on Secure Boot enabled machines.
  • The text-based UI received significant improvements based on the feedback from its first release in PVE 8.0.
  • The current link-state of each network interface is now displayed in the network configuration view, helping in identifying the correct NIC for the management interface (issue 4869).
  • If provided by the DHCP server, the hostname field is already filled out with the information from the lease.
  • The arc_max parameter for installations on ZFS can now be set in the Advanced Options. If not explicitly set by the user, it is set to a value targeting 10% of system memory instead of 50%, which is a better fit for a virtualization workload (issue 4829).
  • The correct meta-package of grub is now installed based on the boot mode (grub-pc or grub-efi-amd64). This ensures that the bootloader on disk gets updated when there is an upgrade for the grub package.
  • The text-based UI is now also available over a serial console, for headless systems with a serial port.
  • /var/lib/vz backing the local storage is now created as separate dataset for installations on ZFS (issue 1410).
  • The root dataset on ZFS installations now uses acltype=posixacl in line with upstream's recommendation.
  • Kernel parameters passed on the commandline during install are now also set in the target system (issue 4747).
  • Fix the warning that is shown in case the address family (IPv4, IPv6) of the host IP and DNS server do not match.
  • The text-based UI now sets the correct disk-size for the selected disk, instead of limiting the installation to the size of the first disk in the list (issue 4856).
  • For better UX, the text-based UI now also displays a count-down before automatically rebooting.
  • The screensaver in the graphical installer is now disabled.
  • The graphical installer now displays the units used for disk-based options.
  • The kernel command line parameter vga=788 is now set for both the graphical debug and all text-based UI installation options. This improves compatibility of the installer with certain hardware combinations.

Other Notable changes

  • Existing backup jobs without a configured email address did not send email notifications before the upgrade, but will default to sending email notifications to the root user via the new notification system after the upgrade to Proxmox VE 8.1.
    In order to disable notification emails, either change the job's notification mode to legacy-sendmail or configure the notification system to ignore backup job notifications.
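    For example, the notification mode of an existing backup job could be switched back on the command line; a sketch, where the job ID backup-1234 is a placeholder for your actual job ID:

```
# Switch an existing backup job back to the pre-8.1 email behavior
# (the job ID 'backup-1234' is a placeholder)
pvesh set /cluster/backup/backup-1234 --notification-mode legacy-sendmail
```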

Known Issues & Breaking Changes

Kernel

  • With Kernel 6.5 and ZFS it can happen that the host hits a kernel bug when starting a VM with hugepages, and the host must be rebooted.
    More information can be found in the forum and in the bug reports for ZFS and Linux kernel.
  • Some users with Intel Wi-Fi cards, like the AX201 model, reported that initialization of the card failed with Linux kernel 6.5.
    This is still being investigated. You should avoid booting into the new kernel if you have no physical access to your server and an Intel Wi-Fi device is used as its only connection. See the documentation for how to pin a kernel version.
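    Pinning can be done with proxmox-boot-tool as described in the documentation; a sketch, where the exact kernel version string is just an example and should be taken from the list output:

```
# Show the kernels available on this host
proxmox-boot-tool kernel list
# Pin a known-good kernel (version string is an example)
proxmox-boot-tool kernel pin 6.2.16-19-pve
```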
  • Some SAS2008 controllers need a workaround to get detected since kernel 6.2, see the forum thread for details.
  • For certain Linux VMs with OVMF and guest kernels >= 6.5, there might be issues with SCSI disk hot(un)plug. This is a more general issue that is currently being investigated and will be fixed upstream in Linux.
  • The TPM (Trusted Platform Module) hardware random number generator (RNG) is now disabled on all AMD systems equipped with a firmware-based TPM (fTPM) device. This change was implemented due to such RNGs causing stutters in many systems. Affected systems should switch the RNG source from /dev/hwrng to an alternative, like /dev/urandom.
Reference: kernel commit "tpm: Disable RNG for all AMD fTPMs"
  • Some Dell models, which appear to include all those using a BCM5720 network card, have a compatibility issue with the tg3 driver in kernels based on version 6.5.11.
    From our current understanding, 14th-generation Dell servers (T140, R240, R640, ...) are affected, while others (e.g., R630, R620, R610, ...) do not seem to be. We are currently investigating this issue. In the meantime, we recommend pinning the kernel to version 6.2 on affected hosts.
    Some users report that disabling the X2APIC option in the BIOS resolved this issue as a workaround.

Network Configuration

  • Systems installed on top of Debian or those installed before Proxmox VE 7.0 will be switched by default from the ifupdown network configuration implementation to the modern ifupdown2.
    This switch occurs because the stabilized SDN package is now marked as a recommendation for various Proxmox VE packages. Consequently, it will be installed on all systems that have kept the APT::Install::Recommends config at its default true value, leading to the inclusion of the ifupdown2 package.
    While ifupdown2 aims to be backward compatible with the legacy ifupdown, some details may still differ. Currently, we are aware of one such difference, particularly regarding the default value for accepting IPv6 router advertisement requests (accept_ra). In the legacy ifupdown, accept_ra is set to 2 ("Accept Router Advertisements even if forwarding is enabled") as long as no gateway is configured. However, in ifupdown2, it always defaults to 0 ("Do not accept Router Advertisements") as a security measure, requiring administrators to actively opt-in.
    If you rely on router advertisements being accepted, you can simply add accept_ra 2 to the respective interface section in /etc/network/interfaces.
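    Such an opt-in might look like this in /etc/network/interfaces (a minimal sketch, assuming a bridge named vmbr0 with SLAAC-based IPv6 configuration):

```
iface vmbr0 inet6 auto
    # opt back in to accepting router advertisements under ifupdown2
    accept_ra 2
```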

Virtual Machines

  • The pve-edk2-firmware package, which provides (U)EFI firmware for virtual machines, has been split up into multiple packages. Users of the fully supported amd64/x86_64 architectures do not need to change anything.
    The OVMF variants, used for amd64/x86_64 based virtual machines, were moved into pve-edk2-firmware-ovmf and pve-edk2-firmware-legacy; these will always be installed automatically on upgrade.
    The AAVMF variants, used for the experimental ARM64 VM integration, have been moved to pve-edk2-firmware-aarch64. This package won't be installed automatically on upgrade; if you rely on the experimental ARM integration, you need to install it manually.
  • With the 8.1 machine version, QEMU switched to using SMBIOS 3.0 by default, utilizing a 64-bit entry point. Since the SMBIOS 32-bit and 64-bit entry points can coexist, and most modern operating systems set up both, the general impact should be minimal.
    However, certain operating systems or appliances, such as Juniper's vSRX, do not have a 64-bit entry point set up and might fail to boot with the new machine type.
    For affected VMs, you can explicitly pin the machine version to 8.0 in the web interface. Note that the machine version of VMs with a Windows OS type is automatically pinned to the most recent version at the time of creation.
    Upstream commit reference: QEMU commit
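    Pinning the machine version is also possible on the command line; a sketch, assuming a q35-based VM with the (example) VMID 100:

```
# Keep the 8.0 machine version so the guest still sees the 32-bit SMBIOS entry point
qm set 100 --machine pc-q35-8.0
```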

Proxmox VE 8.0

Released 22. June 2023: See Downloads

  • Based on Debian Bookworm (12.0)
  • Latest 6.2 Kernel as stable default
  • QEMU 8.0.2
  • LXC 5.0.2
  • ZFS 2.1.12
  • Ceph Quincy 17.2.6

Highlights

  • New major release based on the great Debian Bookworm.
  • Ceph Quincy enterprise repository.
    Access the most stable Ceph repository through any Proxmox VE subscription.
  • Add access realm sync jobs.
    Synchronize users and groups from an LDAP/AD server automatically at regular intervals.
  • Integrate host network bridge and VNet access when configuring virtual guests into the ACL system of Proxmox VE.
    With the new SDN.Use privilege and the new /sdn/zones/<zone>/<bridge-or-vnet>/<vlan-tag> ACL object path, you can give out fine-grained usage permissions for specific networks to users.
  • Create, manage and assign resource mappings for PCI and USB devices for use in virtual machines (VMs) via API and web UI.
    Mappings allow you to give out access to one or more specific device(s) to a user, without them requiring root access.
    For setups that require VMs with hardware passthrough to be able to (offline-)migrate to other nodes, mappings can be used to ensure that the VM also has a valid device for passthrough on the target node.
  • Add virtual machine CPU models based on the x86-64 psABI Micro-Architecture Levels and use the widely supported x86-64-v2-AES as default for new VMs created via the web UI.
    The x86-64-v2-AES model provides important extra features over qemu64/kvm64, like SSE3, SSE4 and SSE4.1, and was slightly adapted to also provide AES support, dramatically improving the performance of many computing operations.
    See the Virtual Machines (KVM/QEMU) section for details.
  • Add new text-based UI mode for the installation ISO, written in Rust using the Cursive TUI (Text User Interface) library.
    You can use the new TUI mode to workaround issues with launching the GTK based graphical installer, sometimes observed on both very new and rather old hardware.
    The new text mode shares the code executing the actual installation with the existing graphical mode.

Changelog Overview

Enhancements in the web interface (GUI)

  • The Ceph repository selection now takes into account the subscription status of the complete cluster and recommends the optimal version for the cluster.
  • Improved Dark color theme:
    The Dark color theme, introduced in Proxmox VE 7.4, received a lot of feedback from our community, which resulted in further improvements.
  • Set strict SameSite attribute on the Authorization cookie
  • The Markdown parser, used in notes, has been improved:
    • it allows setting the target for links, to make any link open in a new tab or window.
    • it allows providing URLs with a scheme different from HTTP/HTTPS;
      You can now directly link to resources like rdp://<rest-of-url>, providing convenience links in the guest notes.
    • tag-names and protocols are matched case-insensitively.
  • The mobile UI code was refactored to not suffer from incompatible changes made for the web-based GUI.
  • The generated CSR used by the built-in ACME client now sets the correct CSR version (0 instead of 2).
  • Uploading files now only computes the MD5 sum of the uploaded file, if it can be used for comparison with the user-provided one.
  • Firewall settings: Improve the alignment of permissions checked by the web UI with the permissions actually required by the API.
  • Explicitly disallow internal-only tmpfilename parameter for file uploads.
  • Fix multipart HTTP uploads without Content-Type header.
  • Show Ceph pool number in the web UI, as it is often mentioned in Ceph warnings and errors.
  • You can now set the subdir option of the CIFS storage type in the web interface, not only via API/CLI.
  • Improved translations, among others:
    • Ukrainian (NEW)
    • Japanese
    • Simplified Chinese
    • Traditional Chinese
    • The size units (Bytes, KB, MiB,...) are now passed through the translation framework as well, allowing localized variants (e.g., for French).
    • The language selection is now localized and displayed in the currently selected language

Virtual machines (KVM/QEMU)

  • New QEMU version 8.0:
    • The virtiofsd codebase was replaced by a new and improved implementation based on Rust, which is packaged separately.
    • QEMU Guest Agent now has initial support for NetBSD and OpenBSD.
    • Many more changes, see the upstream changelog for details.
  • Add virtual machine CPU models based on the x86-64 psABI Micro-Architecture Levels.
    The x86-64 levels provide a vendor-agnostic set of supported features and reported CPU flags.
    A model like x86-64-v2-AES provides important extra features over qemu64/kvm64, like SSE3, SSE4 and SSE4.1, and was slightly adapted to also provide AES support, dramatically improving the performance of many computing operations.
    This model is well-supported by all x86-64 hardware released in the last decade, specifically since Intel Westmere (launched in 2010) and the AMD Opteron 6200-series "Interlagos" (launched in 2011), enabling Proxmox VE to use it as the default CPU model when creating new VMs via the web UI.
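    Existing VMs can be switched to the new model as well; a sketch, with the VMID being an example:

```
# Use the new default CPU model for an existing VM
qm set 100 --cpu x86-64-v2-AES
```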
  • Create, manage and assign resource mappings for PCI and USB devices for use in VMs via API and web UI.
    Mappings allow you to give out access to one or more specific device(s) to a user, without them requiring root access.
    For setups that require VMs with hardware passthrough to be able to (offline-)migrate to other nodes, mappings can be used to ensure that the VM also has a valid device for passthrough on the target node.
    New ACL object paths: /mapping/pci/<id> and /mapping/usb/<id> refer to the defined PCI and USB mappings.
    New privileges: Mapping.Audit allows viewing resource mappings, Mapping.Modify allows creating or editing resource mappings, and Mapping.Use allows passing devices through to VMs using the mapping.
    New roles: PVEMappingUser, with the privilege to view and use mappings, and PVEMappingAdmin with the additional privilege to edit mappings.
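    Putting the new object paths, privileges and roles together, granting use of a mapping might look like this (the mapping name gpu1 and the user alice@pve are examples):

```
# Let alice@pve pass the PCI mapping 'gpu1' through to her VMs
pveum acl modify /mapping/pci/gpu1 --users alice@pve --roles PVEMappingUser
```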
  • Avoid invalid smm machine flag for aarch64 VM when using serial display and SeaBIOS.
  • Warn if a network interface is not connected to a bridge on VM startup. This can happen if the user manually edited the VM config.
  • Fix an issue with the improved code for volume size information gathering for passed through disks during backup preparation.
  • Work around breaking driver changes in newer Nvidia GRID drivers, which prevented mediated devices (mdev) from being reclaimed upon guest exit.
  • Prefer an explicitly configured SMBIOS UUID for Nvidia vGPU passthrough.
    If a uuid command line parameter is present, it is preferred over the default auto-generated UUID containing the VMID and mdev index.
    This fixes an issue with software inside the guest that relies on a specific and unique UUID setting.
  • Improved gathering of current setting for live memory unplugging.
  • Avoid sending a temporary size of zero to QEMU when resizing block devices. Previously, this was done when resizing RBD volumes, but it is not necessary anymore.
  • When resizing a disk, spawn a worker task to avoid HTTP request timeout (issue 2315).
  • Allow resizing qcow2 disk images with snapshots (issue 517).
  • cloud-init improvements:
    • Introduce ciupgrade option that controls whether machines should upgrade packages on boot (issue 3428).
    • Better align privilege checks in the web UI with the actual privileges required in the backend.
    • Fix an issue where the hostname was not properly set on Fedora/CentOS machines, by passing the hostname via the fqdn option.
    • Fix an issue where displaying pending changes via qm and pvesh caused an error.
    • Allow setting network options with VM.Config.Cloudinit privileges, instead of requiring the more powerful VM.Config.Network privilege.
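    The new ciupgrade option can be toggled with qm; a sketch, with the VMID being an example:

```
# Do not upgrade packages on first boot of this cloud-init VM
qm set 100 --ciupgrade 0
```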
  • Drop unused QMP commands for getting the link and creating/deleting internal snapshots.
  • Replace usages of deprecated -no-hpet QEMU option with the hpet=off machine flag.

Containers (LXC)

  • Improve handling of /etc/machine-id on clone operations - the file is now only truncated if the source did not explicitly set it to 'uninitialized' or remove it. Thus, the admin can decide whether they want first-boot semantics or not (see machine-id(5)).
  • Set memory.high cgroup limit to 99.6% of configured memory. This setting gives the container a chance to free memory before directly running into an Out-of-Memory (OOM) condition. It is applied on lxc.conf generation and on hot-plugging memory to a running container.
  • Warn users on conflicting, manual, lxc.idmap entries.
    Custom uid/gid map entries can become quite complicated and quickly cause overlaps.
    A warning issued upon container start helps the user find the offending entry directly.
  • When resizing a disk, perform plausibility checks already before spawning the worker task. This allows invalid requests to fail earlier.
  • General code improvements, adhering to best practices for Perl code.

General improvements for virtual guests

  • When cloning guests, the validation of the provided name of the clone is now happening in the frontend, improving UX.

HA Manager

  • Stability improvements of manual maintenance mode:
    • Fix an issue where a request for enabling maintenance mode on a node is lost, in case the rebooted node is the current active Cluster Resource Manager (CRM).
    • Fix an issue where a shutdown policy other than migrate could cause a node in maintenance mode to leave maintenance mode too early or fence itself.
    • Fix an issue where ha-rebalance-on-start could cause a newly added and already-running service to be shut down and migrated to another node.
      Now, ha-rebalance-on-start ignores services that are already running.
  • When enabling or disabling maintenance mode via the CLI, the ha-manager command now checks whether the provided node exists.
    This avoids misconfigurations, e.g., due to a typo in the node name.

Improved management for Proxmox VE clusters

  • The rsync invocation used when joining nodes via ssh, which is deprecated, has been adapted to changes in rsync CLI argument parsing in Bookworm.

Backup/Restore

  • Improve performance of backups that use zstd on fast disks, by invoking zstd without the --rsyncable flag (issue 4605).
  • Suppress harmless but confusing "storing login ticket failed" errors when backing up to Proxmox Backup Server.
  • When restoring from backups via the web interface, the VM/CT name is now validated client-side before sending an API request. This helps catch invalid names early.
  • The web UI now sorts backups by date, whereas it previously sorted backups first by VMID and then by date. The VMID is added as an extra column for users who would like to restore the previous sorting order (issue 4678).
  • Fix an issue where the backup job editor window occasionally did not show the selected guests (issue 4627).
  • The fs-freeze-on-backup option of the QEMU guest agent, which controls whether the filesystem should be frozen for backups, can now be set in the web UI.
  • Improve permission model for backup jobs: Editing backup jobs now generally requires the Datastore.Allocate privilege on the target storage, and editing backup jobs with dumpdir requires root privileges.
  • Clarify description of the ionice setting.

Storage

  • The file-based storage types have two new config options, create-base-path and create-subdirs. They replace the mkdir option and separate two different concepts:
    create-base-path decides if the path to the storage should be created if it does not exist,
    create-subdirs decides if the content-specific sub-directories (guest images, ISO, container template, backups) should be created.
    Conflating both settings in the single mkdir option caused a few unwanted effects in certain situations (issue 3214).
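    In /etc/pve/storage.cfg, the new options of a directory storage might look like this (storage name and path are examples):

```
dir: backup-dir
    path /mnt/backup
    content backup
    create-base-path 0
    create-subdirs 1
```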
  • The CIFS storage type can now be configured with custom mount options, as it was already possible for the NFS storage type.
  • The subdir option of the CIFS storage type can now be configured in the web interface. The option can be used to mount a subdirectory of a SMB/CIFS share and was previously only accessible via the API/CLI.
  • Improve API documentation for the upload method.
  • The API now also allows querying replication jobs that are disabled.
  • Allow @ in directory storage path, as it is often used to signify Btrfs subvolumes.
  • When resizing RBD volumes, always round up sizes to the nearest integer. This avoids errors due to passing a floating-point size to the RBD tooling.

Ceph

  • Add support for new Ceph enterprise repositories. When installing Ceph via pveceph install or the web UI, you can now choose between the test, no-subscription and enterprise (default) repositories. The -test-repository option of the pveceph install command was removed.
  • Add pveceph osddetails command to show information about OSDs on the command line, with a level of detail that is comparable to the web UI/API.
  • Drop support for Ceph Octopus and Pacific, as they are not supported in Proxmox VE 8.
  • Remove overly restrictive validation of public_network during monitor creation. Configuring a public network like 0::/0 or 0::/1 caused a superfluous "value does not look like a valid CIDR network" error.
  • The Ceph installation wizard in the web UI does not create monitors and managers called localhost anymore and uses the actual node name instead.

Access control

  • Add the possibility to define realm sync jobs in the web UI. These allow synchronizing users and groups from an LDAP server automatically at regular intervals.
  • Add TFA/TOTP lockout to protect against an attacker who has obtained the user password and attempts to guess the second factor:
    • If TFA fails too many times in a row, the user account is locked out of TFA for an hour. If TOTP fails too many times in a row, TOTP is disabled for the user account. Using a recovery key will unlock the account.
    • Add pveum tfa unlock command and /access/users/{userid}/unlock-tfa API endpoint for manually unlocking users.
    • Add TFA lockout status to responses of /access/tfa and /access/users endpoints.
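    The new commands can be combined when investigating a lockout; a sketch, with the user name being an example:

```
# List the second factors configured for a user
pveum tfa list alice@pve
# Manually clear the TFA lockout for that user
pveum tfa unlock alice@pve
```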
  • Fix validity check for LDAP base DNs that was overly strict starting from Proxmox VE 7.4. For example, the check rejected base DNs containing both dashes and spaces (issue 4609).
  • When authenticating via PAM, pass the PAM_RHOST item. With this, it is possible to manually configure PAM such that certain users (for example root@pam) can only log in from certain hosts.
  • Add pveum tfa list command for listing second factors on the command line.
  • The access/ticket API endpoint does not support the deprecated login API (using new-format=0) anymore.
  • Remove the Permissions.Modify privilege from the PVESysAdmin and PVEAdmin roles and restrict it to the Administrator role. This reduces the chances of accidentally granting privilege modification privileges.
  • Login with TFA: In order to improve UX, fix wording of messages related to recovery keys.
  • Forbid creating roles with names starting with PVE to reserve these role names for use in future upgrades.
  • SDN.Use is required on a bridge/vnet (or its zone) in order to configure it in a guest vNIC.
    • use /sdn/zones/localnetwork or /sdn/zones/localnetwork/<bridge> to allow usage of all or specific local bridges.
    • use /sdn/zones/<zone> or /sdn/zones/<zone>/<bridge> to allow usage of all or specific vnets in a given SDN zone.
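    For example, to let a user configure guest vNICs on one specific local bridge (the bridge vmbr0 and the user alice@pve are examples):

```
# Grant use of the local bridge vmbr0 via the new SDN.Use privilege (part of PVESDNUser)
pveum acl modify /sdn/zones/localnetwork/vmbr0 --users alice@pve --roles PVESDNUser
```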
  • Users with VM.Allocate/Datastore.Allocate/Pool.Allocate privileges, but without the Permissions.Modify privilege, can now only assign a subset of their own privileges to specific VM/storage/pool paths, instead of arbitrary roles.

Firewall & Software Defined Networking

  • Allow to distinguish IP sets and aliases with the same name defined on the datacenter level and on the guest level by providing an explicit prefix (issue 4556). Previously, the innermost IP set/alias took precedence, which is still the default behavior if no prefix is provided.
  • Fix an issue where an allowed special ICMP-type could accidentally be added as destination port for a layer 4 protocol, breaking firewall rule loading.
  • Fix setting the correct vlan-protocol for QinQ zones if the bridge is VLAN-aware (issue 4683).
  • Fix an issue where routing between zones was enabled by default in exit nodes. This has been fixed by adding null-routes for each other zone prefix to each zone (issue 4389).
  • Correctly order vrf and router bgp vrf entries by vrf name in the frr configuration (issue 4662).
  • For setups where a node is the primary exit node for one vrf and a secondary exit for a different vrf, the configuration now also adds the second vrf's default route (issue 4657).
  • Allow specifying a custom vxlan-tunnel port per interface.
  • Update the frr configuration generation to the version of frr shipped in Debian Bookworm.
  • Fix an issue where reloading the network configuration on a remote node created an error, which hid the actual issue with the network configuration.
  • Add support for IPv6 SLAAC and router advertisement configuration in /etc/network/interfaces to ifupdown2.
  • Fix live reloading when changing VLAN and VXLAN specific attributes.
  • Add support for creating an OVS bridge which tags traffic with a specific VLAN tag to ifupdown2, matching the existing possibility in ifupdown.

Improved management of Proxmox VE nodes

  • pve7to8 compatibility check script added.
    As with previous major upgrades, Proxmox VE 7 ships a script checking for issues with the current node/cluster. It should point out any issues which might prevent a successful major upgrade.
  • Outdated pve6to7 compatibility check script was removed.
  • Fix an issue where the web UI would display no APT repositories during a major upgrade.
  • The new version of grub2 provided by Debian Bookworm (2.06-13) fixes an issue where a host using LVM would fail to boot with a message disk `lvmid/...` not found, even though the LVM setup is healthy.

Installation ISO

  • Add new text-based UI mode for the installation ISO, written in Rust using the Cursive TUI (Text User Interface) library.
    You can use the new TUI mode to workaround issues with launching the GTK based graphical installer, sometimes observed on both very new and rather old hardware.
    The new text mode shares the code executing the actual installation with the existing graphical mode.
  • The version of BusyBox shipped with the ISO was updated to version 1.36.1.
  • The Ceph Quincy repository provided by Proxmox is configured by default to deliver updates for the Ceph client, even if no Proxmox Ceph hyper-converged server is set up.
  • Detection of an unreasonable system time.
    If the system time is older than the time the installer was created, the installer notifies the user with a warning.
  • ethtool is now shipped with the ISO and installed on all systems.
  • systemd-boot is provided by its own package instead of systemd in Debian Bookworm and is installed with the new ISO.

Notable bugfixes and general improvements

  • Most git repositories now have a dsc Makefile target to create a Debian Source Package and additionally a sbuild target to create the source package and build it using sbuild.

Known Issues & Breaking Changes

Browser Support

This Proxmox VE release uses some newer JavaScript, CSS and/or HTML features and requires the following browser version baseline:

  • Firefox 79 (released on 2020-07-28)
  • Chrome/Chromium 85 (released 2020-08-25)
  • Safari 14 (released 2020-09-16)

This is in line with our documented minimal required browser release. If you are using older browser versions, we strongly recommend that you upgrade to the latest available version for best compatibility and to avoid many (security) issues.

Storage

  • Storage activation now checks that every content type uses a different directory, in order to prevent unexpected interactions between different content types.
    This breaks setups in which the content-dirs option was set up to map different content types to the same directory, and setups in which some content directories were manually set up as symlinks to a common directory.
  • The mkdir option is considered deprecated; it was split into create-base-path and create-subdirs as a fine-grained replacement.
    While Proxmox VE will continue to support this option during the 8.x release series, we recommend switching over to the new options already now.

Kernel

  • Previous 6.2 kernels had problems with incoming live migrations when all of the following were true:
    • VM has a restricted CPU type (e.g., qemu64) – using CPU type host or Skylake-Server is ok!
    • the source host uses an Intel CPU from Skylake Server, Tiger Lake Desktop, or equivalent newer generation.
    • the source host is booted with a kernel version 5.15 or older (e.g., when upgrading from Proxmox VE 7.4)
    In this case, the VM could hang and use 100% of the CPU of one or more vCPUs.
    This was fixed with pve-kernel-6.2.16-4-pve in version 6.2.16-5. So make sure your target host is booted with this (or a newer) kernel version if the above points apply to your setup.
  • Kernels based on 6.2 have a degraded Kernel Samepage Merging (KSM) performance on multi-socket NUMA systems.
    • Depending on the workload this can result in a significant amount of memory that is not deduplicated anymore.
    • This issue went unnoticed for a few kernel releases, making a clean backport of the fixes made for 6.5 hard to do without some general fall-out.
    Until we either find a targeted fix for our kernel, or change the default kernel to a 6.5 based kernel (planned for 2023'Q4), the current recommendation is to keep multi-socket NUMA systems that rely on KSM on Proxmox VE 7 with its 5.15 based kernel.

QEMU

  • QEMU 8.0 removed some previously deprecated features. Proxmox VE 8 won't use the -chardev tty and -chardev parport aliases anymore, and none of the other removed features were used by the Proxmox VE stack. Thus, only installations using args inside their guest configs need to check compatibility. See the QEMU changelog on the topic for details.
    • The removed features in QEMU 8.0 also include the Proxmox VE-specific, but unused/deprecated QMP commands get_link_status, snapshot-drive and delete-drive-snapshot.
  • Only root@pam is now allowed to clone and restore guests with passed-through PCI/USB devices that are not using the new mapping feature. To allow regular users to clone and restore with PCI/USB devices, create a mapping and give the user the Mapping.Use privilege on it.
  • Trying to pass through the same PCI device multiple times in a single guest now fails earlier. A qm showcmd for example does not generate an output anymore in that case.
  • When a passed-through device is configured as multifunction (or 'All Functions' in the web UI) together with a set mediated device (mdev), this now generates an error instead of a warning. Use the specific function instead.
  • cloud-init: If the VM name is not a FQDN and no DNS search domain is configured, the automatically-generated cloud-init user data now contains an additional fqdn option. This fixes an issue where the hostname was not set properly for some in-guest distributions. However, the changed user data will change the instance ID, which may cause the in-guest cloud-init to re-run actions that trigger once-per-instance. For example, it may regenerate the in-guest SSH host keys.
  • Migration doesn't scan all local storages for orphaned volumes anymore. Instead, only the volumes referenced in the configuration (including snapshots) are picked up. This avoids unexpected errors where an unavailable local storage could fail migration even if no volume referenced that storage.

Container

  • The lxc.id_map configuration key has been deprecated for a long time by lxc and was replaced by lxc.idmap. With this release, its presence is considered an error. The key can only be present if it was manually added to a guest configuration.
  • The lxcfs is now built with fuse 3. This upgrade is done on a major release, since all running containers need to be restarted afterwards.
  • Migration doesn't scan all local storages for orphaned volumes anymore. Instead, only the volumes referenced in the configuration (including snapshots) are picked up. This avoids unexpected errors where an unavailable local storage could fail migration even if no volume referenced that storage.

Authentication & Permission System

  • There is a new SDN.Use privilege (and corresponding PVESDNUser role) that is required to configure virtual NICs in guests. See "Access control" section above for details!
  • The Permission.Modify privilege has been removed from the PVESysAdmin and PVEAdmin roles, in order to reduce the chances of accidentally granting the privilege to modify privileges. If a particular setup requires a role with this privilege, it is necessary to define a new custom role and use that instead of PVESysAdmin/PVEAdmin.
  • Users with VM.Allocate/Datastore.Allocate/Pool.Allocate privileges, but without the Permissions.Modify privilege, can now only assign a subset of their own privileges to specific VM/storage/pool paths. Previously they could assign any role to specific VM/storage/pool paths. As the privileges usable on specific VM/storage/pool paths were quite limited, this did not allow privilege escalation, but restricting the capabilities now allows adding more powerful privileges in future versions without breaking changes.
  • Editing backup jobs now generally requires the Datastore.Allocate privilege on the target storage, and editing backup jobs with dumpdir requires root privileges.
  • User accounts will now be locked after too many attempts to authenticate with a second factor. This is intended to protect against an attacker who has obtained the user password and attempts to guess the second factor. Unlocking requires either a successful login with a recovery key or a manual unlock by an administrator.

Node Management

  • Systems booting via UEFI from a ZFS on root setup should install the systemd-boot package after the upgrade.
    The systemd-boot package was split out from the systemd package for Debian Bookworm based releases. It won't get installed automatically upon upgrade from Proxmox VE 7.4, as it can cause trouble on systems not booting from UEFI with a ZFS on root setup created by the Proxmox VE installer.
    Systems which have ZFS on root and boot in UEFI mode will need to manually install it if they need to initialize a new ESP (see the output of proxmox-boot-tool status and the relevant documentation).
    Note that the system remains bootable even without the package installed (the bootloader that was copied to the ESPs during initialization remains untouched), so you can also install it after the upgrade has finished.
    Installing systemd-boot on systems which don't need it is not recommended, as it would replace grub as the bootloader in its postinst script.

API

  • The API can handle array-type data differently, while staying backward compatible.
    Instead of being able to pass the individual elements separated by null bytes, you can pass the data directly as array.
  • Before Proxmox VE 8, the API endpoint to trigger a QEMU guest agent (QGA) command execution (/nodes/{node}/qemu/{vmid}/agent/exec) allowed passing a command as a single string, which would then be automatically split at whitespace. This was deemed too brittle and is not supported anymore. You must now send the command and all its arguments as a proper array of strings.
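With pvesh, for example, each element of the command array is now passed explicitly by repeating the parameter; the node name and VMID below are placeholders:

```
# Old (no longer supported): one string, split at whitespace by the server
# pvesh create /nodes/pve1/qemu/100/agent/exec --command 'mkdir /tmp/my dir'

# New: pass the command and each argument as its own array element
pvesh create /nodes/pve1/qemu/100/agent/exec \
    --command mkdir --command '/tmp/my dir'
```

This also makes arguments containing whitespace unambiguous, which the old string-splitting behavior could not handle.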

OVS Network

  • The Open vSwitch (OVS) version shipped with Proxmox VE 8 / Debian Bookworm contains a bug that causes it to fail to provide a working network when run on a system with 2 CPU cores or less. This is caused by the handler responsible for filling the forward database (FDB) not being started, so forwarding doesn't work. OVS doesn't log any errors or warnings in this case. Proxmox VE is primarily targeted at server hardware, most of which now has 3 cores or more. If you're still affected on a production system, please report it to our Bugzilla instance.

Proxmox VE 8.0 beta1

Released 9. June 2023: See Downloads

Note: this is a test version not meant for production use yet.

  • Based on Debian 12 Bookworm (testing)
  • Latest 6.2 Kernel as stable default
  • QEMU 8.0.2
  • LXC 5.0.2
  • ZFS 2.1.11
  • Ceph Quincy 17.2.6

Highlights

  • New major release based on the great Debian Bookworm.
  • Seamless upgrade from Proxmox VE 7.4, see Upgrade from 7 to 8
  • Ceph Quincy enterprise repository.
    Access the most stable Ceph repository through any Proxmox VE subscription.
  • Add access realm sync jobs.
    Synchronize users and groups from an LDAP/AD server automatically at regular intervals.
  • Integrate access to host network bridges and VNets into Proxmox VE's ACL system when configuring virtual guests.
    With the new SDN.Use privilege and the new /sdn/zones/<zone>/<bridge-or-vnet>/<vlan-tag> ACL object path, one can give out fine-grained usage permissions for specific networks to users.

Changelog Overview

Enhancements in the web interface (GUI)

  • The Ceph repository selection now takes into account the subscription status of the complete cluster and recommends the optimal version for the cluster.
  • Improved Dark color theme:
The Dark color theme, introduced in Proxmox VE 7.4, received a lot of feedback from our community, which resulted in further improvements.
  • Set strict SameSite attribute on the Authorization cookie
  • The Markdown parser, used in notes, has been improved:
    • it allows setting the target for links, to make any link open in a new tab or window.
    • it allows providing URLs with a scheme different from HTTP/HTTPS.
      You can now directly link to resources like rdp://<rest-of-url>, providing convenience links in the guest notes.
    • tag names and protocols are matched case-insensitively.
  • The mobile UI code was refactored to not suffer from incompatible changes made for the web-based GUI.
  • The generated CSR used by the built-in ACME client now sets the correct CSR version (0 instead of 2).
  • Uploading files now only computes the MD5 sum of the uploaded file if it can be used for comparison with the user-provided one.
  • Firewall settings: Improve the alignment of permissions checked by the web UI with the permissions actually required by the API.
  • Explicitly disallow internal-only tmpfilename parameter for file uploads.
  • Fix multipart HTTP uploads without Content-Type header.
  • Show Ceph pool number in the web UI, as it is often mentioned in Ceph warnings and errors.
  • Improved translations, among others:
    • Ukrainian (NEW)
    • Japanese
    • Simplified Chinese
    • Traditional Chinese
    • The size units (Bytes, KB, MiB,...) are now passed through the translation framework as well, allowing localized variants (e.g., for French).
    • The language selection is now localized and displayed in the currently selected language

Virtual Machines (KVM/QEMU)

  • New QEMU Version 8.0:
    • The virtiofsd codebase was replaced by a new and improved implementation based on Rust, which is packaged separately.
    • QEMU Guest Agent now has initial support for NetBSD and OpenBSD.
    • Many more changes, see the upstream changelog for details.
  • Avoid invalid smm machine flag for aarch64 VM when using serial display and SeaBIOS.
  • Warn if a network interface is not connected to a bridge on VM startup. This can happen if the user manually edited the VM config.
  • Fix an issue with the improved code for volume size information gathering for passed through disks during backup preparation.
  • Work around breaking driver changes in newer Nvidia GRID drivers, which prevented mediated devices (mdev) from being reclaimed upon guest exit.
  • Prefer an explicitly configured SMBIOS UUID for Nvidia vGPU passthrough.
    If a uuid command line parameter is present, it is preferred over the default auto-generated UUID containing the VMID and mdev index.
    This fixes an issue with software inside the guest that relies on a specific and unique UUID setting.
  • Improved gathering of the current settings for live memory unplugging.
  • Avoid sending a temporary size of zero to QEMU when resizing block devices. Previously, this was done when resizing RBD volumes, but it is not necessary anymore.
  • When resizing a disk, spawn a worker task to avoid HTTP request timeout (issue 2315).
  • Allow resizing qcow2 disk images with snapshots (issue 517).
  • cloud-init improvements:
    • Introduce ciupgrade option that controls whether machines should upgrade packages on boot (issue 3428).
    • Better align privilege checks in the web UI with the actual privileges required in the backend.
    • Fix an issue where the hostname was not properly set on Fedora/CentOS machines, by passing the hostname via the fqdn option.
    • Fix an issue where displaying pending changes via qm and pvesh caused an error.
    • Allow setting network options with VM.Config.Cloudinit privileges, instead of requiring the more powerful VM.Config.Network privilege.
  • Drop unused QMP commands for getting the link and creating/deleting internal snapshots.
  • Replace usages of deprecated -no-hpet QEMU option with the hpet=off machine flag.

Containers (LXC)

  • Improve handling of /etc/machine-id on clone operations - the file is now only truncated if the source did not explicitly set it to 'uninitialized' or remove it. Thus, the admin can decide whether they want first-boot semantics or not (see machine-id(5)).
  • Set memory.high cgroup limit to 99.6% of configured memory. This setting gives the container a chance to free memory before directly running into an Out-of-Memory (OOM) condition. It is applied on lxc.conf generation and on hot-plugging memory to a running container.
  • Warn users about conflicting manual lxc.idmap entries.
    Custom mappings can become quite complicated and quickly cause overlaps.
    By issuing a warning upon container start, the user can spot the faulty entry directly.
  • When resizing a disk, perform plausibility checks already before spawning the worker task. This allows invalid requests to fail earlier.
  • General code improvements, adhering to best practices for Perl code.
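The memory.high computation described above can be sketched as follows. This is a simplified illustration only, assuming a plain 99.6 % factor; the actual value is computed and written into the cgroup hierarchy by the Proxmox VE container stack:

```python
def memory_high_bytes(configured_mib: int) -> int:
    """Return a memory.high soft limit of 99.6% of the configured memory.

    The small gap below the hard memory.max limit gives the container a
    chance to reclaim memory before running into an OOM condition.
    (Sketch only; not the actual pve-container implementation.)
    """
    configured_bytes = configured_mib * 1024 * 1024
    return int(configured_bytes * 0.996)

# A container configured with 1024 MiB gets a soft limit just under 1 GiB
print(memory_high_bytes(1024))
```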

General improvements for virtual guests

  • When cloning guests, the validation of the provided name of the clone now happens in the frontend, improving UX.
  • Add config files in /etc/pve/mapping and privileges Mapping.* in preparation for cluster-wide mapping of PCI/USB devices.

HA Manager

  • Stability improvements of manual maintenance mode:
    • Fix an issue where a request for enabling maintenance mode on a node is lost, in case the rebooted node is the current active Cluster Resource Manager (CRM).
    • Fix an issue where a shutdown policy other than migrate could cause a node in maintenance mode to leave maintenance mode too early or fence itself.
    • Fix an issue where ha-rebalance-on-start could cause a newly added and already-running service to be shut down and migrated to another node.
      Now, ha-rebalance-on-start ignores services that are already running.
  • When enabling or disabling maintenance mode via the CLI, the ha-manager command now checks whether the provided node exists.
    This avoids misconfigurations, e.g., due to a typo in the node name.
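On the CLI, the maintenance mode handling looks like the following sketch; the node name is a placeholder, and the sub-command names should be checked against the ha-manager documentation:

```
# Put a node into maintenance mode; active HA services are migrated away
ha-manager crm-command node-maintenance enable pve2

# Leave maintenance mode again; a mistyped node name is now rejected
ha-manager crm-command node-maintenance disable pve2
```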

Improved management for Proxmox VE clusters

  • The rsync invocation used when joining nodes via SSH (a deprecated method) has been adapted to changes in rsync CLI argument parsing in Bookworm.

Backup/Restore

  • Improve performance of backups that use zstd on fast disks, by invoking zstd without the --rsyncable flag (issue 4605).
  • Suppress harmless but confusing "storing login ticket failed" errors when backing up to Proxmox Backup Server.
  • When restoring from backups via the web UI, the VM/CT name is now validated client-side before sending an API request. This helps catch invalid names early.
  • The web UI now sorts backups by date, whereas it previously sorted backups first by VMID and then by date. The VMID is added as an extra column for users who would like to restore the previous sorting order (issue 4678).
  • Fix an issue where the backup job editor window occasionally did not show the selected guests (issue 4627).
  • The fs-freeze-on-backup option of the QEMU guest agent, which controls whether the filesystem should be frozen for backups, can now be set in the web UI.
  • Improve permission model for backup jobs: Editing backup jobs now generally requires the Datastore.Allocate privilege on the target storage, and editing backup jobs with dumpdir requires root privileges.
  • Clarify description of the ionice setting.

Storage

  • The file-based storage types have two new config options, create-base-path and create-subdirs. They replace the mkdir option and separate two different concepts:
    create-base-path controls whether the path to the storage should be created if it does not exist,
    create-subdirs controls whether the content-specific subdirectories (e.g., guest images, ISO images, container templates, or backups) should be created.
    Conflating both settings in the single mkdir option caused a few unwanted effects in certain situations (issue 3214).
  • The CIFS storage type can now be configured with custom mount options, as it was already possible for the NFS storage type.
  • The subdir option of the CIFS storage type can now be configured in the web interface. The option can be used to mount a subdirectory of a SMB/CIFS share and was previously only accessible via the API/CLI.
  • Improve API documentation for the upload method.
  • The API now also allows querying replication jobs that are disabled.
  • Allow @ in directory storage path, as it is often used to signify Btrfs subvolumes.
  • When resizing RBD volumes, always round up sizes to the nearest integer. This avoids errors due to passing a floating-point size to the RBD tooling.
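A directory storage entry in /etc/pve/storage.cfg using the new create-base-path/create-subdirs options could look like this (storage name and path are illustrative; a sketch, not a verified configuration):

```
dir: backup-dir
        path /mnt/backup
        content backup
        create-base-path 0
        create-subdirs 1
```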

Ceph

  • Add support for new Ceph enterprise repositories. When installing Ceph via pveceph install or the web UI, you can now choose between the test, no-subscription and enterprise (default) repositories. The -test-repository option of the pveceph install command was removed.
  • Add pveceph osddetails command to show information about OSDs on the command line, with a level of detail that is comparable to the web UI/API.
  • Drop support for hyper-converged Ceph Octopus and Pacific, as they are not supported in Proxmox VE 8.
    Proxmox VE 8 will support managing Quincy and newer Ceph server releases; setups still using Pacific can upgrade to Ceph Quincy before upgrading Proxmox VE from 7 to 8.
    The Ceph 17.2 Quincy client will still support accessing older Ceph server setups as a client.
  • Remove overly restrictive validation of public_network during monitor creation. Configuring a public network like 0::/0 or 0::/1 caused a superfluous "value does not look like a valid CIDR network" error.
  • The Ceph installation wizard in the web UI does not create monitors and managers called localhost anymore and uses the actual node name instead.
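On the command line, the repository selection and the new OSD details command described above can be used roughly like this (a sketch; check pveceph's help output for the exact flags):

```
# Install Ceph Quincy from the enterprise repository (the default)
pveceph install --repository enterprise

# Show per-OSD details comparable to the web UI/API
pveceph osddetails
```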

Access Control

  • Add the possibility to define realm sync jobs in the web UI. These allow synchronizing users and groups from an LDAP server automatically at regular intervals.
  • Add TFA/TOTP lockout to protect against an attacker who has obtained the user password and attempts to guess the second factor:
    • If TFA failed too many times in a row, lock this user account out of TFA for an hour. If TOTP failed too many times in a row, disable TOTP for the user account. Using a recovery key will unlock a user account.
    • Add pveum tfa unlock command and /access/users/{userid}/unlock-tfa API endpoint for manually unlocking users.
    • Add TFA lockout status to responses of /access/tfa and /access/users endpoints.
  • Fix validity check for LDAP base DNs that was overly strict starting from Proxmox VE 7.4. For example, the check rejected base DNs containing both dashes and spaces (issue 4609).
  • When authenticating via PAM, pass the PAM_RHOST item. With this, it is possible to manually configure PAM such that certain users (for example root@pam) can only log in from certain hosts.
  • Add pveum tfa list command for listing second factors on the command line.
  • The access/ticket API endpoint does not support the deprecated login API (using new-format=0) anymore.
  • Remove the Permission.Modify privilege from the PVESysAdmin and PVEAdmin roles and restrict it to the Administrator role. This reduces the chances of accidentally granting privilege modification privileges.
  • Login with TFA: In order to improve UX, fix wording of messages related to recovery keys.
  • Forbid creating roles with names starting with PVE to reserve these role names for use in future upgrades.
  • SDN.Use is required on a bridge/vnet (or its zone) in order to configure it in a guest vNIC.
    • use /sdn/zones/localnetwork or /sdn/zones/localnetwork/<bridge> to allow usage of all or specific local bridges.
    • use /sdn/zones/<zone> or /sdn/zones/<zone>/<bridge> to allow usage of all or specific vnets in a given SDN zone.
  • Users with VM.Allocate/Datastore.Allocate/Pool.Allocate privileges, but without the Permissions.Modify privilege, can now only assign a subset of their own privileges to specific VM/storage/pool paths, instead of arbitrary roles.
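Granting SDN.Use via the new ACL paths might look like the following; the user, bridge, and zone names are placeholders (a sketch, not verified against a live cluster):

```
# Allow alice to configure guest vNICs on the local bridge vmbr0
pveum acl modify /sdn/zones/localnetwork/vmbr0 --users alice@pve --roles PVESDNUser

# Allow alice to use every vnet in the SDN zone 'zone1'
pveum acl modify /sdn/zones/zone1 --users alice@pve --roles PVESDNUser
```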

Firewall & Software Defined Networking

  • Allow distinguishing IP sets and aliases with the same name defined on the datacenter level and on the guest level by providing an explicit prefix (issue 4556). Previously, the innermost IP set/alias took precedence, which is still the default behavior if no prefix is provided.
  • Fix an issue where an allowed special ICMP-type could accidentally be added as destination port for a layer 4 protocol, breaking firewall rule loading.
  • Fix setting the correct vlan-protocol for QinQ zones if the bridge is vlan-aware (issue 4683).
  • Fix an issue where routing between zones was enabled by default in exit nodes. This has been fixed by adding null-routes for each other zone prefix to each zone (issue 4389).
  • Correctly order vrf and router bgp vrf entries by vrf name in the frr configuration. (issue 4662)
  • For setups where a node is primary exit node for one vrf and secondary exit for a different vrf, the configuration now also adds the second vrf's default route. (issue 4657)
  • Allow specifying a custom vxlan-tunnel port per interface.
  • Update the frr configuration generation to the version of frr shipped in Debian Bookworm.
  • Fix an issue where reloading the network configuration on a remote node created an error, which hid the actual issue with the network configuration.
  • Add support for IPv6 SLAAC and router advertisement configuration in /etc/network/interfaces to ifupdown2.
  • Fix live reloading when changing VLAN and VXLAN specific attributes.
  • Add support to ifupdown2 for creating an OVS bridge which tags traffic with a specific VLAN tag, matching the existing capability of ifupdown.
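An /etc/network/interfaces stanza using the new SLAAC/router-advertisement support might look like this; the interface name is a placeholder and the option names are a sketch that should be checked against the ifupdown2 documentation:

```
auto eno1
iface eno1 inet6 auto
        accept_ra 2
```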

Improvements for the management of Proxmox VE Nodes

  • pve7to8 compatibility check script added.
As with previous major upgrades, Proxmox VE 7 ships a script checking for issues with the current node/cluster. It should point out any issues which might prevent a successful major upgrade.
  • Outdated pve6to7 compatibility check script was removed.
  • Fix an issue where the web UI would display no APT repositories during a major upgrade.
  • The new version of grub2 provided by Debian Bookworm (2.06-13) fixes an issue where a host using LVM would fail to boot with a message disk `lvmid/...` not found, even though the LVM setup is healthy.

Installation ISO

  • The version of BusyBox shipped with the ISO was updated to version 1.36.1.
  • The Proxmox-provided Ceph Quincy repo will be set up by default, providing updates for a modern Ceph client even if a hyper-converged Proxmox Ceph setup is not in use.
  • Detection of unreasonable system time.
If the system time is older than the time the installer was created, the system notifies the user with a warning.
  • ethtool is now shipped with the ISO and installed on all systems.
  • systemd-boot is provided by its own package instead of systemd in Debian Bookworm and is installed with the new ISO.

Notable bug fixes and general improvements

  • Most git repositories now have a dsc Makefile target to create a Debian Source Package and additionally a sbuild target to create the source package and build it using sbuild.

Known Issues & Breaking Changes

Browser Support

This Proxmox VE release uses some newer JavaScript, CSS and/or HTML features and requires the following browser version baseline:

  • Firefox 79 (released on 2020-07-28)
  • Chrome/Chromium 85 (released 2020-08-25)
  • Safari 14 (released 2020-09-16)

This baseline is well below our recommendation of using a browser released within the last twelve months. If you are using older browser versions, we strongly recommend that you upgrade to the latest available version for best compatibility and to avoid many (security) issues.

Storage

  • Storage activation now checks that every content type uses a different directory, in order to prevent unexpected interactions between different content types.
    This breaks setups in which the content-dirs option was set up to map different content types to the same directory, and setups in which some content directories were manually set up as symlinks to a common directory.
  • The mkdir option is considered deprecated; it was split into create-base-path and create-subdirs as a fine-grained replacement.
    While Proxmox VE will continue to support this option during the 8.x release series, we recommend switching over to the new options already now.

QEMU

  • QEMU 8.0 removed some previously deprecated features. Proxmox VE 8 won't use the -chardev tty and -chardev parport aliases anymore, and none of the other removed features were used by the Proxmox VE stack. Thus, only installations using args inside their guest configs need to check compatibility. See the QEMU changelog on the topic for details.
    • The removed features in QEMU 8.0 also include the Proxmox VE-specific, but unused/deprecated QMP commands get_link_status, snapshot-drive and delete-drive-snapshot.

Container

  • The lxc.id_map configuration key has long been deprecated by LXC and was replaced by lxc.idmap. With this release, its presence is considered an error. The key can only be present if it was manually added to a guest configuration.
  • The lxcfs is now built with fuse 3. This upgrade is done on a major release, since all running containers need to be restarted afterwards.

Authentication & Permission System

  • There is a new SDN.Use privilege (and corresponding PVESDNUser role) that is required to configure virtual NICs in guests. See SDN section above for details!
  • The Permission.Modify privilege has been removed from the PVESysAdmin and PVEAdmin roles, in order to reduce the chances of accidentally granting the privilege to modify privileges. If a particular setup requires a role with this privilege, it is necessary to define a new custom role and use that instead of PVESysAdmin/PVEAdmin.
  • Users with VM.Allocate/Datastore.Allocate/Pool.Allocate privileges, but without the Permissions.Modify privilege, can now only assign a subset of their own privileges to specific VM/storage/pool paths. Previously they could assign any role to specific VM/storage/pool paths. As the privileges usable on specific VM/storage/pool paths were quite limited, this did not allow privilege escalation, but restricting the capabilities now allows adding more powerful privileges in future versions without breaking changes.
  • Editing backup jobs now generally requires the Datastore.Allocate privilege on the target storage, and editing backup jobs with dumpdir requires root privileges.
  • User accounts will now be locked after too many attempts to authenticate with a second factor. This is intended to protect against an attacker who has obtained the user password and attempts to guess the second factor. Unlocking requires either a successful login with a recovery key or a manual unlock by an administrator.
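An administrator can clear such a lockout from the CLI; the user ID below is a placeholder (a sketch based on the command name documented above):

```
# Manually unlock a user's second factor after a lockout
# (also exposed via the /access/users/{userid}/unlock-tfa API endpoint)
pveum tfa unlock alice@pve
```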

Others

  • The API can handle array-type data differently, while staying backward compatible.
    Instead of being able to pass the individual elements separated by null bytes, you can pass the data directly as array.

Proxmox VE 7.4

Released 23. March 2023: See Downloads

  • Based on Debian Bullseye (11.6)
  • Latest 5.15 Kernel as stable default
  • Newer 6.2 kernel as opt-in
  • QEMU 7.2
  • LXC 5.0.2
  • ZFS 2.1.9
  • Ceph Quincy 17.2.5
  • Ceph Pacific 16.2.11

Highlights

  • Proxmox VE now provides a dark theme for the web interface.
  • Guests in resource tree can now be sorted by their name, not only VMID.
  • The HA Cluster Resource Scheduler (CRS) stack was expanded to rebalance VMs & CTs automatically on start, not only recovery.
  • Added a CRM command to the HA manager to switch an online node manually into maintenance mode (without reboot).

Changelog Overview

Enhancements in the web interface (GUI)

  • Add a fully-integrated "Proxmox Dark" color theme variant of the long-time Crisp light theme.
By default, the prefers-color-scheme media query from the Browser/OS will be used to decide the default color scheme.
Users can override the theme via a newly added Color Theme menu in the user menu.
  • Add "Proxmox Dark" color theme to the Proxmox VE reference documentation.
The prefers-color-scheme media query from the Browser/OS will be used to decide if the light or dark color scheme should be used.
The new dark theme is also available in the Proxmox VE API Viewer.
  • Local storage types that are located on other cluster nodes can be added.
A node selector was added to the Add Storage wizard for the ZFS, LVM, and LVM-Thin storage types.
  • Automatically redirect HTTP requests to HTTPS for convenience.
This avoids "Connection reset" browser errors that can be confusing, especially after setting up a Proxmox VE host the first time.
  • Task logs can now be downloaded directly as text files for further inspection.
  • It is now possible to choose the sort-order of the resource tree and to sort guests by name.
  • Fix loading of changelogs in case additional package repositories are configured.
  • Improve editing of backup jobs:
    • Add a filter to the columns of the guest selector.
    • Show selected, but non-existing, guests.
  • Remove the "Storage View" mode from the resource tree panel.
This mode only showed the storage of a cluster and did not provide additional information over the folder or server views.
  • The Proxmox Backup Server specific columns for verification and encryption status can now be used for sorting in the backup content view of a storage.
  • Polish the user experience of the backup schedule simulator by splitting the date and time into two columns and better check the validity of the input fields.
  • Improve accessibility for screens with our minimal required display resolution of 720p
    • add scrolling overflow handler for the toolbar of the backup job view
    • rework the layout of the backup job info window for better space usage and reduce its default size
  • Fix search in "Guests without backup" window.
  • Node and Datacenter resource summary panels now show the guest tag column by default.
  • Show role privileges when adding permissions.
  • Allow the use of the `-` character in snapshot names, as the backend has supported this for some time.
  • Update the noVNC guest viewer to upstream version 1.4.0.
  • Fix overly-strict permission check that prevented users with only the VM.Console privilege from accessing the noVNC console.
  • Align permission check for bulk actions with the ones enforced by the API.
Switch the check from the Sys.PowerMgmt privilege to the correct VM.PowerMgmt one.
  • Invalid entries in advanced fields now cause the advanced panel to unfold, providing direct feedback.
  • HTML-encode API results before rendering as additional hardening against XSS.
  • Fix preselection of tree elements based on the URL after login.
  • Fix a race condition when switching between the content panels of two storages before one of them has finished loading.
  • Metric server: Expose setting the verify-certificate option for InfluxDB as an advanced setting.
  • Replace non-clickable checkbox with icons for backup jobs, APT repositories, and replication jobs.
  • Fix error when editing LDAP sync setting and only a single parameter is not set to a non-default value.
  • Add missing online-help references for various panels and edit windows.
  • Improved translations, among others:
    • Arabic
    • French
    • German
    • Italian
    • Japanese
    • Russian
    • Slovenian
    • Simplified Chinese

Virtual Machines (KVM/QEMU)

  • New QEMU Version 7.2:
    • QEMU 7.2 fixes issues with Windows Guests, installed from a German ISO, during installation of the VirtIO drivers.
    • Fix crash of VMs with iSCSI disks on a busy target.
    • Fix rare hang of VMs with IDE/SATA during disk-related operations like backup and resize.
    • Many more changes, see the upstream changelog for details.
  • Taking a snapshot of a VM with large disks following a PBS backup was occasionally very slow. This has been fixed (issue #4476).
  • Running fsfreeze/fsthaw before starting a backup can now optionally be disabled in the QEMU guest agent options.
Note: Disabling this option can potentially lead to backups with inconsistent filesystems and should therefore only be disabled if you know what you are doing.
  • Cloning or moving a disk of an offline VM now also takes the configured bandwidth limits into consideration (issue #4249).
  • Fix an issue with EFI disks on ARM 64 VMs.
  • Add safeguards preventing the moving of disks of a VM using io_uring to storage types that have problems with io_uring in some kernel versions.
  • General improvements to error reporting. For example, the error messages from query-migrate are added when a migration fails, and a configured but non-existing physical CD-ROM drive now results in a descriptive error message.
  • Allow users to destroy a VM even if it's suspended.
  • Fix a race-condition when migrating VMs on highly loaded or slower clusters, where the move of the guest's config file to the target node directory might not have been propagated to the target node.
  • Rolling back a VM to a snapshot with state (memory) and still selecting to start the VM after the rollback does not cause an error anymore (rollbacks with state result in a running VM).
  • Deleting snapshots of running VMs, with a configured TPM on Ceph storages with krbd enabled, is now possible.
  • Fix command execution via pvesh and QEMU guest agent in VMs on other cluster nodes.
  • Update Linux OS version description to include 6.x kernels.

Containers (LXC)

  • Update to LXC 5.0.2 and lxcfs 5.0.3.
  • Allow riscv32 and riscv64 container architectures through the binfmt_misc kernel capability.
After installing the qemu-user-static and binfmt-support packages, one can run a RISC-V based rootfs image as a container directly on an x86_64/amd64 Proxmox VE host.
  • Create /etc/hostname file on Alma Linux, CentOS, and Rocky Linux containers. With this, DHCP requests sent by the container now include its hostname.
  • Add option to disconnect network interfaces of containers, similarly to network interfaces of VMs.
  • Make container start more resilient after OOM or node crash (empty AppArmor profile files do not cause a crash).
  • Improve cleanup upon failed restores (remove the container configuration if restore fails due to an invalid source archive, remove firewall configuration).
  • Ignore bind or read-only mount points when running pct fstrim.
  • During container shutdown, wait with a timeout in case lxc-stop fails. This prevents the shutdown task from running indefinitely and having to be aborted manually.
  • Templates:
    • Updated Debian Bullseye template from 11.3 to 11.6.
    • Updated Proxmox Mail Gateway template from 7.0 to 7.2.

General improvements for virtual guests

  • The "Bulk Stop" action was renamed to "Bulk Shutdown" to better describe its behavior.
  • Allow overriding timeout and force-stop settings for bulk shutdowns.
  • Allow bulk actions even if the user does not have the required privileges for all guests, as long as they have the privileges for each guest involved in the bulk action.

HA Manager

  • Add CRM command to switch an online node manually into maintenance (without reboot).
When a node goes into maintenance mode, all active HA services are moved to other nodes and automatically migrated back once maintenance mode is disabled again.
  • The HA Cluster Resource Scheduler (CRS) stack was expanded to rebalance VMs & CTs automatically on start, not only recovery.
One can now enable the ha-rebalance-on-start option in the datacenter.cfg or via the web UI to use Proxmox CRS to balance on service start up.
  • A new intermediate state request_started has been added for the stop -> start transitions of services.
  • Improve scheduling algorithm for some cases.
    • Make CPU load matter more if there is no memory load at all.
    This avoids boosting tiny relative differences over higher absolute loads.
    • Use a non-linear averaging algorithm when comparing loads.
    The previous algorithm was blind in cases where the static node stats are the same and there is (at least) one node that is over-committed when compared to the others.
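The ha-rebalance-on-start option mentioned above is set in the datacenter options; a minimal sketch of the relevant /etc/pve/datacenter.cfg line (the exact placement inside the crs property is an assumption, verify against the datacenter.cfg documentation for your version):

```
# /etc/pve/datacenter.cfg (sketch) -- enable the static CRS scheduler
# and rebalancing of HA services on start; option syntax is an assumption
crs: ha=static,ha-rebalance-on-start=1
```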

Improved management for Proxmox VE clusters

  • Ensure that the current working directory is not in /etc/pve when you set up the cluster using the pvecm CLI tool.
Since pmxcfs, which provides the mount point for /etc/pve, is restarted when you set up the cluster, a confusing "Transport endpoint is not connected" error message would be reported otherwise.
  • The proxmox-offline-mirror tool now supports fetching data through an HTTP proxy.
  • Fetching the changelog of package updates has been improved:
    • The correct changelog will be downloaded if repositories from multiple Proxmox projects are configured, for example if one has Proxmox VE and Proxmox Backup Server installed on the same host.
    • Support getting the changelog for packages coming from a Debian Backports repository.
  • You can now configure if you want to receive a notification mail for new available package updates.
  • The wrapper for acme.sh DNS-validation plugins received fixes for 2 small issues:
    • a renaming of parameters for the acmedns plugin was pulled from upstream.
    • a missing method was added to fix an issue with the dns_cf.sh plugin.
  • Improved pvereport: In order to provide a better status overview, add the following information:
    • /etc/pve/datacenter.cfg.
    • ceph health detail.
  • OpenSSL errors are now reported in full to ease troubleshooting when managing a node's certificates.
  • Add missing or newly added/split-out packages to the Proxmox VE apt version API, also used for the pveversion -v call:
    • proxmox-mail-forward
    • proxmox-kernel-helper
    • libpve-rs-perl

Backup/Restore

  • Suppress harmless but confusing "storing login ticket failed" errors when backing up to Proxmox Backup Server.

Storage

  • It is now possible to override the specific subdirectories for content (ISOs, container templates, backups, guest disks) to custom values through the content-dirs option.
  • The CIFS storage type can now also directly mount a specific subdirectory of a share, thus better integrating into already existing environments.
  • The availability check for the NFSv4 storage type was reworked in order to work with setups running without rpcbind.
  • Fix ISO upload via HTTP in a few edge cases (newlines in filenames, additional headers not sent by common browsers).
  • Fix caching volume information for systems which both have a local ZFS pool storage and a ZFS over iSCSI storage configured during guest disk rescan.
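The content-dirs override mentioned above lives in the storage configuration; a sketch of an /etc/pve/storage.cfg entry (storage name and paths are examples, verify the exact key syntax in the storage.cfg documentation):

```
# /etc/pve/storage.cfg (sketch) -- a directory storage whose ISO and
# backup content is kept in custom subdirectories below the storage path
dir: example-dir
	path /mnt/example
	content iso,backup,vztmpl
	content-dirs iso=custom/iso,backup=custom/backup
```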

Storage Replication

  • Extend support for online migration of replicated VM guests.
One can now also migrate VMs with snapshots, as long as those are only on replicated volumes.

Disk Management

  • Improve showing the SMART values for the correct NVMe devices.

Ceph

  • Expose more detailed OSD information through the API and use that to add an OSD Detail window in the web interface.
You can now check the backing device, logical volume info, front- and back-network addresses, and more using the new OSD Detail window.
  • Show placement groups per OSD in the web interface.
  • Improve schema description for various Ceph-related API endpoints.
This also improves the api-viewer and pvesh tool for various Ceph-related API endpoints.
  • Fix broken cmd-safety endpoint that made it impossible for non-root users to stop/destroy OSDs and monitors.
  • Allow admins to easily set up multiple MDS per node to increase redundancy if more than one CephFS is configured.

Access Control

  • ACL computation was refactored causing a significant performance improvement (up to a factor of 450) on setups with thousands of entries.
  • It is now possible to override the remove-vanished settings for a realm when actively syncing it in the GUI.
  • Allow quoted values in LDAP DN attributes when setting up an LDAP realm.

Firewall & Software Defined Networking

  • ipsets can be added even with set host-bits. For example, 192.0.2.5/24 is now a valid input. Host-bits get cleared upon parsing (resulting in 192.0.2.0/24 in the example).
  • Firewall logs can be restricted to a timeframe with the since and until parameters of the API call.
  • The conditional loading of nf_conntrack_helpers was dropped for compatibility with kernel 6.1.
  • Fixed a corner case in which link-local IPv6 addresses were still added on the internal guest-communication devices.
  • The MTU is now set to the value of the parent bridge on the automatically generated VLAN-bridge devices for non-VLAN-aware bridges.
  • The EVPN plugin now also merges a defined prefix-list from /etc/frr/frr.conf.local.
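The host-bit clearing described for ipsets above matches standard CIDR normalization; a small illustrative sketch (not Proxmox code) using Python's ipaddress module:

```python
import ipaddress

# An entry with host bits set (192.0.2.5/24) is normalized to its
# network address, just like pve-firewall now does when parsing ipsets.
entry = ipaddress.ip_network("192.0.2.5/24", strict=False)
print(entry)  # 192.0.2.0/24
```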

Installation ISO

  • The version of BusyBox shipped with the ISO was updated to 1.36.0.
  • The EFI System Partition (ESP) defaults to 1 GiB in size if the root disk (hdsize) is bigger than 100 GB.
  • UTC can now be selected as timezone during installation.

Notable bug fixes

  • An issue with OVS network configuration, where the node would lose connectivity when upgrading Open vSwitch, was fixed (see https://bugs.debian.org/1008684).
  • A race condition in the API servers causing failed tasks when running a lot of concurrent API requests was fixed.

Known Issues & Breaking Changes

  • In QEMU 7.2, it is a hard error if audio initialization fails rather than a warning.
This can happen, for example, if you have an audio device with SPICE driver configured but are not using SPICE display. To avoid the issue, make sure the configuration is valid.
  • With pve-edk2-firmware >= 3.20221111-1 we know of two issues affecting specific set ups:
    • virtual machines using OVMF/EFI with very little memory (< 1 GiB) and certain CPU types (e.g. host) might no longer boot.
    Possible workarounds are to assign more memory or to use kvm64 as the CPU type.
    The background for this problem is that OVMF << 3.20221111-1 used to guess the address (bit) width only from the available memory, but now there is more accurate detection that better matches what the configured CPU type provides. The more accurate address-width can lead to a larger space requirement for page tables.
    • The (non-default) PVSCSI disk controller might result in SCSI disks not being detected inside the guest.
    We're still investigating this, until then you might either evaluate if your VM really requires the non-standard PVSCSI controller, use the SATA bus instead, or keep using the older pve-edk2-firmware package.
  • Because of hardware errata, the XSAVES feature had to be disabled by the kernel for certain AMD CPUs, resulting in start issues with VMs and certain VM CPU types like EPYC-ROME.
For affected VMs you will find a message like host doesn't support requested feature: CPUID.0DH:EAX.xsaves [bit 3] logged in the kernel log.
As a workaround, use a different CPU model for the VM, like host, EPYC, or a custom CPU model that has the xsaves flag disabled.

Proxmox VE 7.3

Released 22. November 2022: See Downloads

  • Based on Debian Bullseye (11.5)
  • Latest 5.15 Kernel as stable default (5.15.74)
  • Newer 5.19 kernel as opt-in
  • QEMU 7.1
  • LXC 5.0.0
  • ZFS 2.1.6
  • Ceph Quincy 17.2.5
  • Ceph Pacific 16.2.10

Highlights

  • Ceph Quincy support. It is also the default for new installations
  • Initial Cluster Resource Scheduling (CRS) support
  • Tags for Virtual Guests in web-interface for better categorizing/searching/...
  • Support for Proxmox Offline Mirror to update and manage subscriptions of air gapped systems

Changelog Overview

Enhancements in the web interface (GUI)

  • Show tags for virtual guests in the resource tree and allow edits.
  • Improved UX for mediated PCIe devices - they now also show the name.
  • Improved Certificate View - for example for certificates with many SANs.
  • Node disk UI: gracefully handle adding the same local storage (e.g. a zpool with the same name) to multiple nodes.
  • Expose node configurations like wakeonlan and the delay for starting guests on boot for each node in the web interface.
  • Improved translations, among others:
    • Arabic
    • Dutch
    • German
    • Italian
    • Polish
    • Traditional Chinese
    • Turkish
  • Improve rendering of complex formats in the api-viewer widget

Virtual Machines (KVM/QEMU)

  • New major version of QEMU 7.1
  • Support for pinning a VM to certain CPU cores via taskset
  • In the web interface, new VMs default to iothread enabled and VirtIO SCSI-Single selected as SCSI controller (if supported by the guest OS)
  • New VMs use qemu-xhci USB controller, if supported by the guest OS (Windows >= 8, Linux >= 2.6)
USB devices can now be hot-plugged
Pass through up to 14 USB devices (previously 5) to a virtual machine
  • Align virtio-net parameters for the receive- (rx) and transmit- (tx) queue sizes with upstream best-practices
  • Use the more efficient packed format for multi-queues
  • Allow up to 64 rx and tx multi-queues (previously 16)
  • Cloud-init improvements: changes to the cloud-init settings now can be made available in the config-drive ISO inside the guest directly
  • Disable io_uring for CIFS backed VM disks by default - to mitigate an issue with CIFS and io_uring present since kernel 5.15
  • Improved handling for VMs with passed through PCIe-devices:
    • Cleanup of created mdev devices, even if the VM could not be started
    • Longer timeouts between sending SIGTERM and SIGKILL to allow for a cleanup upon termination
    • Prevent suspending a VM with passed through PCIe device, as the device's state cannot be saved

Containers (LXC)

  • New major LXC version 5.0.0
  • More robust cgroup mode detection, by explicitly checking the type of /sys/fs/cgroup
  • Support for newer distribution versions:
    • Fedora 37 and preparation for 38
    • Devuan 12 Daedalus
    • Preparation for Ubuntu 23.04
  • Bind-mounts are now also directly applied to a running container
  • Fix a bug when cloning a locked container: It does not create an empty config anymore, but fails correctly
  • Improvements to the systemd version detection inside containers
  • Volumes are now always deactivated upon successful move_volume, not only if the source volume is to be removed, preventing dangling krbd mappings
  • New pre-made templates available for:
    • AlmaLinux 9
    • Alpine 3.16
    • Centos 9 Stream
    • Fedora 36
    • Fedora 37
    • OpenSUSE 15.4
    • Rocky Linux 9
    • Ubuntu 22.10
  • Refreshed existing templates:
    • Gentoo (2022-06-22-openrc)
    • ArchLinux (2022-11-11)

General improvements for virtual guests

  • Add option to disable MAC learning on bridges (the guest MAC addresses are added statically, and no broadcast packets are flooded to those ports, so no spurious answers are sent, which previously broke the network setups of certain hosting providers)
  • Improve cleanup of backup-jobs upon purging the configuration for a removed VM
  • Optionally restart a guest after rollback to snapshot
  • Framework for remote migration to cluster-external Proxmox VE hosts

HA Manager

  • Cluster Resource Scheduling (CRS) tech-preview: Improve node selection when the HA Manager needs to find a new host node for an HA service, in the following cases:
    • recovering it after fencing its node
    • on node shutdown, if the migrate shutdown-policy is enabled
    • on HA group configuration changes, if the current node is not in the highest priority set anymore
Use the TOPSIS multi-criteria decision analysis method for finding a better target
Having established the CRS foundation, Proxmox developers plan to extend it with a dynamic load scheduler and live load balancing in future releases
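To illustrate the TOPSIS method mentioned above, here is a minimal, self-contained sketch; this is not the actual ha-manager implementation, and the criteria, weights, and node stats are invented for the example:

```python
import math

def topsis(matrix, weights, benefit):
    """Rank candidates with the TOPSIS multi-criteria decision method.

    matrix  -- one row per candidate node, one column per criterion
    weights -- relative importance of each criterion
    benefit -- True if a higher value is better for that criterion
    """
    cols = list(zip(*matrix))
    # vector-normalize each criterion column, then apply the weights
    norms = [math.sqrt(sum(x * x for x in col)) or 1.0 for col in cols]
    v = [[w * x / n for x, n, w in zip(row, norms, weights)] for row in matrix]
    vcols = list(zip(*v))
    # ideal best and worst value per criterion
    best = [max(c) if b else min(c) for c, b in zip(vcols, benefit)]
    worst = [min(c) if b else max(c) for c, b in zip(vcols, benefit)]
    scores = []
    for row in v:
        d_best = math.dist(row, best)    # distance to the ideal solution
        d_worst = math.dist(row, worst)  # distance to the anti-ideal one
        scores.append(d_worst / ((d_best + d_worst) or 1.0))
    return scores

# Invented example: candidate nodes scored by free CPU cores and free
# memory (GiB); both criteria are "more is better" and equally weighted.
scores = topsis([[8, 32], [2, 4], [4, 16]], [0.5, 0.5], [True, True])
print(scores.index(max(scores)))  # prints 0 -- node 0 has the most free resources
```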

Cluster

  • Fix a permission issue in the QDevice status API
The API call for obtaining the QDevice status needs privileged access, but was run directly in the unprivileged daemon, leading to spurious permission-denied errors
  • Fix race-condition between writing corosync.conf and reloading corosync on update

Backup/Restore

  • Improved namespace support for the Proxmox Backup Server storage type
  • Improvements to the parsing of the template variables of the backup notes
The notes template for backups, introduced in Proxmox VE 7.2, received a number of bug-fixes and improvements
  • Added option repeat-missed, allowing one to opt-out from the default behavior of running missed jobs on a new boot
  • The VM used for single-file-restore with QEMU guests now has support for increasing its memory (e.g. to handle many ZFS datasets inside the guest)
  • Improved configuration validation with Proxmox Backup Server encryption (for example, do not fall back to plain-text if the encryption key is missing)
  • When deleting vzdump backup files the corresponding notes and log are also removed.

Storage

  • Support ZFS dRAID vdevs when creating a zpool via the API & GUI. dRAID improves recovery times when a disk failure occurs.
A dRAID setup makes most sense for either a large (15+) amount of disks, or a medium+ amount of huge disks (15+ TB).
  • Align SMART status API with Proxmox Backup Server fields
  • Support Notes and the Protected setting for backups stored on BTRFS storage types.

Storage Replication

  • Don't send mails on bogus errors: e.g. when a replication could not be started because the guest is currently being migrated.
  • Upon replication failure, the first 3 retries are scheduled at shorter intervals before falling back to waiting 30 minutes between retries, improving consistency upon short network hiccups.
  • Clean up the replication state of guests running on another node, as can happen after an HA fence.
  • Make interaction of replication state and configuration changes more robust: e.g. in the case of first removing all volumes from one storage, and then removing the VM before the next replication was run.

pve-zsync

  • support --compressed option resulting in an already compressed dataset to be sent as is to the destination (thus removing the need to decompress and potentially re-compress on the target).

Ceph

  • Improved UX when creating new clusters
The network selection and duplicate IP checking was improved.
Selecting a different node for the first monitor than the one you are connected to no longer results in an error (creating a monitor on nodes without installed Ceph packages is prevented).
  • Added heuristic checks if it is OK to stop or remove a ceph MON, MDS, or OSD service.
The Web UI will now show a warning if the removal / stop of a service will affect the operation of the cluster.
  • Support for installing Ceph Quincy via Proxmox VE CLI and GUI.

Access Control

  • Improve naming of WebAuthn parameters in the GUI.
  • Increase OpenID code size - compatibility with Azure AD as OpenID provider.
  • Only require write-access (quorum) to TFA config for recovery keys.
All other TFA methods only need read-access to the config. This makes it possible to login to a node, which is not in the quorate partition, even if your user has TFA configured.
  • Fix a hard to trigger update issue with rotating the private key used for signing the access tickets, resulting in falsely rejected API calls.
  • Fix creation of tokens for other users by anyone except root@pam.
A bug prevented user A from creating a token for user B, despite having the relevant permissions.
  • Better logging for expired tokens.

Firewall, Networking & Software Defined Networking (tech-preview)

  • Fix setting MTU on setups using OVS.
  • ifupdown2 now handles point-to-point settings correctly
  • ifupdown2 can now add an OVS bridge with a VLAN tag as a port to another OVSBridge (fakebridge)
  • Fix updating MTU if a bridge-port is plugged into a different bridge.
  • Firewall security groups can now be renamed with the changes directly being picked up from pve-firewall
  • Stricter parsing of guest config files in pve-firewall, making it possible to actually disable the guest firewall while keeping the config file around.
  • Improved handling of externally added ebtables rules: if a rule was added to a table other than filter, pve-firewall still tried to parse and add it to the filter table upon rule compilation.

Improved management for Proxmox VE clusters:

  • Proxmox Offline Mirror: The tool supports subscriptions and repository mirrors for air-gapped clusters. The newly added proxmox-offline-mirror utility can be used to keep Proxmox VE nodes without access to the public internet up to date and running with a valid subscription.
  • New mail-forwarding binary proxmox-mail-forward: no functional change, but unifies the configuration for sending the system-generated mails to the email address configured for root@pam
  • Improvements to pvereport - providing a better overview of the status of a Proxmox VE node the following information was added/improved:
    • ceph-device-list
    • stable ordering of guests and network information
    • proxmox-boot-tool output
    • arcstat output

HTTP & REST-API Daemon

  • File-uploads now support filenames with spaces.
  • File-uploads now support files with a size smaller than 16 KB
  • Improved input sanitization of API URLs as additional security layer.

Installation ISO

  • Fixed the permissions of /tmp inside the installation environment (e.g. for the edge-case of users installing software manually from a debug-shell)
  • Make the size requirement of 8 GB a warning - most installations can run with less space, but might need adaptation after install (e.g. moving the log-destination to another device) - keep 2 GB as hard limit
  • Rework auto-sizing of root, guest-data and SWAP partitions & avoid creating the LVM-Thin pool in favor of root partition space on small setups.

Mobile App

  • update to flutter 3.0
  • support and target Android 13
  • fix buttons hidden behind Android's soft nav buttons
  • provide feedback about running backup tasks: a bug prevented any visual feedback in the app, when starting a backup (although the backup was started).

Known Issues & Breaking Changes

  • Virtual Guest Tags:
    Duplicate tags are now filtered out when updating the tag property of a virtual guest.
    Duplicate detection and sorting are handled case-insensitively by default; all tags are treated as lowercase. One can change that in the datacenter.cfg configuration using the case-sensitive boolean property of the tag-style option.
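A sketch of the corresponding /etc/pve/datacenter.cfg entry (assuming case-sensitive is the only tag-style sub-option you want to set):

```
# /etc/pve/datacenter.cfg (sketch) -- treat tags case-sensitively when
# sorting and detecting duplicates, instead of the lowercase default
tag-style: case-sensitive=1
```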

Proxmox VE 7.2

Released 4. May 2022: See Downloads

  • Based on Debian Bullseye (11.3)
  • Ceph Pacific 16.2.7
  • Ceph Octopus 15.2.16 (continued support until mid 2022)
  • Kernel 5.15.30
  • LXC 4.0
  • QEMU 6.2
  • ZFS 2.1.4

Changelog Overview

  • Enhancements in the web interface (GUI):
    • Enhance backup restore dialog by allowing administrators to override some basic settings for the virtual guest being restored.
    • Allow for safe reassignment of a VM disk or CT volume to another guest on the same node.
    • Support configuring the range that new VMIDs are selected from when creating a VM or CT.
    You can set the upper and lower boundaries in the datacenter's options panel. Setting lower equal to upper disables auto-suggestion completely.
    • Enable passing the keyring secret for external Ceph clusters in the web interface when adding an RBD or CephFS storage to Proxmox VE.
    • Render "ghost" OSDs in the Ceph dashboard to make admins aware of such leftover entries.
    • Provide the "next-run" time, i.e., the time a backup job will be executed next, in the datacenter's backup job overview.
    • Display a prominent Start Guest button in the Console view of guests if they are not running.
    • Improve performance for the task log viewer in the GUI.
    • Move the actions affecting a disk or volume of a virtual guest to its own submenu.
    • Switch some bitmap-based VM & CT resource icons over to vector based SVG.
    • Allow assigning up to 16 PCI(e) devices to a VM via the web interface.
    The previous limit of 5 had already been raised in the backend.
    • Improved translations, among others:
      • Arabic
      • Basque
      • German
      • Japanese
      • Polish
      • Traditional Chinese
      • Turkish
  • Virtual Machines (KVM/QEMU)
    • Add support for the accelerated virtio-gl (VirGL) display driver.
    • Enable SPICE support by default for the VirtIO and VirGL display types.
    • If VMs were paused before migration starts, keep them paused after migration finishes.
    • Support for importing disk-images via the API - currently not available in the GUI.
    • Support for reading OVF files via the API - currently not available in the GUI.
    • The PCI vendor and device IDs can be overridden for a configuration passthrough entry, and do not need to correspond to the actual IDs of the physical device.
    • Increase the timeout for the fsthaw operation issued through the QEMU guest agent.
    • Fixed issues that occur in certain storage layouts, when migrating local VM state volumes, such as the TPM state or cloud-init.
    • Improved clean-up of unneeded resources upon restore, for example, if a restore fails, drop the left-over snapshots of replaced disks and cloud-init disks.
    • Improved consistency when cloning from a snapshot (rather than from the current state) - the EFI disk's state is also taken from the snapshot.
  • Containers (LXC)
    • Support for NixOS container templates
    • Support Ubuntu 22.04, Devuan 4.0 and the upcoming Fedora 36
    • Provide new pre-made templates for:
      • Ubuntu 22.04
      • Devuan 4.0
      • Alpine 3.15
    • Refreshed some existing templates:
      • CentOS 8 Stream
      • Arch Linux
      • Debian 11 (to the 11.3 point release)
    • Containers with multiple mount points can now be successfully trimmed with pct fstrim, even if one of the mount points does not support trimming, for example, because it's on a ZFS subvolume.
    • Add target-storage support to migration.
    • Allow lazytime mount option for containers.
    • Fix an issue with wrong ownership for files/directories created by the container stack on unprivileged containers.
    Files on Red Hat and SUSE based templates were created as root on the host and showed up as owned by nobody inside the container, resulting in NetworkManager failing.
  • General improvements for virtual guests
    • Better handling of snapshot removal (for example, after finishing a backup) when storage replication is configured.
  • HA Manager
    • Improve handling of huge numbers of services per node, by improving the LRM scheduler that starts workers.
    ha-manager uses a statically configurable number of workers to handle the services, by scheduling jobs in repeating scheduling rounds with a time-slice of 10s max.
    Since the services that have already started must also be checked to ensure that they are still in the target state, it could happen during large deployments that the services starved at the end of the queue.
    The issue was fixed by additionally sorting the services by the amount of time in which they hadn't been scheduled. This increased the number of configurable services to be well above the previous implementation.
    • Handle an edge case where a node would get stuck in fence state, if all services were removed from it before the node actually fenced itself.
    • The ha-simulator can now better help to test races in scheduling (on the different nodes) by introducing a skip-round option.
  • Cluster
    • A segfault triggered with certain usage patterns was fixed.
    The issue was reported in the context of a cluster having many sessions opened with username and password (instead of API tokens) by a monitoring system.
    • The lower and upper boundaries to be considered when selecting the next free VMID can now be set in the datacenter.cfg.
    The settings are available over the web interface's Datacenter -> Options panel.
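A sketch of the datacenter.cfg entry for the VMID boundaries (the next-id option name is an assumption based on the Proxmox VE API; the range is an example):

```
# /etc/pve/datacenter.cfg (sketch) -- auto-suggest new VMIDs only from
# the range 1000-1999; setting lower equal to upper disables suggestion
next-id: lower=1000,upper=1999
```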
  • Backup/Restore
    • Allow for the configuration of a notes-template for backup jobs, which will be evaluated once the job runs and added to each resulting backup of said job.
    With this, useful meta-information can be added, allowing backups to be easier to distinguish, so that a relevant one can be found quickly if required.
    Note templates can contain template variables like {{guestname}} or {{cluster}}.
    • The next-event scheduling code was updated and now uses the more powerful one from Proxmox Backup Server via a Perl-to-Rust binding (perlmod).
    In addition to the existing weekday, time and time range support, you can now specify:
      • Dates: For example 2015-10-21 or *-12-31 23:50 (New Year's Eve, 10 minutes before midnight every year)
      • Date ranges: Sat *-1..7 15:00 (first Saturday every month at 15:00)
      • Repeating ranges Sat *-1..7 */30 (first Saturday every month, every half hour)
    • Add a 'max-protected-backups' property to the storage definitions, to address issues with running into the configured general limit, due to having protected backups on the storage which should not count towards it.
    • Add a job-init hook that is run once upfront, on each node that hosts guests which are included in the backup job.
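The calendar-event schedules and notes-template described above end up in the cluster-wide jobs configuration; a hedged sketch of an /etc/pve/jobs.cfg entry (section and field names are assumptions based on the options above):

```
# /etc/pve/jobs.cfg (sketch) -- back up all guests on the first Saturday
# of each month at 15:00 and attach a templated note to each backup
vzdump: job-monthly-example
	schedule sat *-1..7 15:00
	all 1
	storage local
	notes-template Backup of {{guestname}} in cluster {{cluster}}
```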
  • Storage
    • Increase the timeout of ZFS operations for synchronous and worker calls to 10 seconds and a minimum of 5 minutes respectively, based on a report of the current settings being too low.
  • pve-zsync
    • Improved resiliency of the crontab parser to better cope with manual modifications.
    • Support syncing TPM state.
  • Ceph
    • Support creating and destroying erasure coded pools, which can also be added as Proxmox VE Storage entries.
    • Support for selecting existing custom device classes when adding OSDs through the web interface.
    • Out/In commands for OSDs are now executed on the node the user is connected to, instead of the node where the OSDs are located.
    This makes it possible to disable OSDs in the GUI, in case the target node is not available (for example in a disaster recovery situation).
  • Access Control
    • More fine-grained control over how users or groups that vanished while syncing realms should be handled.
    Administrators can now choose if the user and all their configured values (for example TFA configurations, ACL entries) are set to the state in LDAP, or if only the users are added/removed, while their properties remain.
    This makes it possible to handle removed users, without losing the TFA configuration for all users who did not change since the last sync.
    • OpenID realms support configuring scopes, authentication context class reference (ACR) values, and whether the provider prompts the user for re-authentication and consent.
    • Fixed an issue with using Azure as an OpenID provider (unexpected Content-Type-Encoding leading to rejected logins).
    • WebAuthn TFA configurations now support only optionally providing the origin parameter for clustered systems.
    • Improved robustness in case of misconfigured U2F and WebAuthn settings.
    Logins with other TFA methods do not get rejected if the U2F/WebAuthn configuration is incorrect.
    • TFA settings can now be viewed by users with User.Modify and Sys.Audit permissions in addition to root@pam
  • Firewall & Software Defined Networking (tech preview)
    • Set the ICMP type in the firewall rules GUI directly.
    • Add support for configuring anycast addresses on guests, by advertising the whole subnet to the EVPN network.
    • Add configuration for allowing exit-nodes to reach guests inside the EVPN network, for example if they need to talk to a DNS server running inside a virtual guest.
    • Support for floating IPs on guests (by not suppressing ARP/ND packets).
    • Hide bridges that a user doesn't have access to, if vNets are in use.
  • Improvements for managing a Proxmox VE system
    • It is now possible to select a particular kernel version to boot persistently from a running system through proxmox-boot-tool kernel pin
    You can select a particular version to be used either indefinitely or just for the next boot. This removes the need to watch the boot process to select the desired version in the bootloader screen.
    • Administrators can now disable MAC learning on a bridge in /etc/network/interfaces with the bridge-disable-mac-learning flag.
    This reduces the number of packets flooded on all ports (for unknown MAC addresses), preventing issues with certain hosting providers (for example, Hetzner), which resulted in the Proxmox VE node getting disconnected.
    • The location of the private key-file used for pveproxy can now be set in /etc/default/pveproxy, supporting certain setups which need sensitive data to be stored in a different location due to regulatory requirements.
    • The TLS configuration options for pveproxy now support disabling TLS 1.2 (to run in a pure TLS 1.3 environment) and configuring the advertised TLS1.3 ciphers.
    • Client-initiated TLS renegotiation was disabled, following best-practices.
    • The pveschedule daemon was improved based on the feedback received from its initial release with Proxmox VE 7.1.
    • Data sent to a configured metrics server is now formatted in a more strict fashion to prevent type mismatches, which were possible if a guest had a numeric-only name.
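The bridge-disable-mac-learning flag mentioned above is set per bridge in the network configuration; a sketch of an /etc/network/interfaces stanza (addresses and port names are examples):

```
# /etc/network/interfaces (sketch) -- main bridge with MAC learning
# disabled: guest MACs are added statically and ports are not flooded
auto vmbr0
iface vmbr0 inet static
	address 192.0.2.10/24
	gateway 192.0.2.1
	bridge-ports eno1
	bridge-stp off
	bridge-fd 0
	bridge-disable-mac-learning 1
```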
  • Mobile App
    • Provide a simple dark theme and enable it if the system settings are configured to use dark designs.
    • Provide an inline console by relaying noVNC for VMs, and xterm.js for containers and the Proxmox VE node shell in the GUI.
    • Improved support for notches and full-screen mode.
  • Installation ISO
    • Updated memtest86+ to the completely rewritten 6.0b version, adding support for UEFI and modern memory like DDR5.
    • Support for restricting the partitioned space for BTRFS installs, in order to leave unpartitioned space, which can be used for swap, for example.
    • Improved accuracy of displayed disk size by adding a decimal point. Otherwise, a 1.8 TB disk, for example, was shown as being just 1 TB.
    • Support for creating RAID0 configurations with differently sized disks.
    • Improved UX when setting the hdsize for RAID setups.
    • ZFS installs can now be configured to use zstd or GZIP compression.
    • Better support for installing ZFS on Systems with certain bugs in their UEFI implementation (HP Thin clients).
    • Drop the kernel command line quiet flag when booting in the advanced mode to provide more verbosity.
  • Notable bug fixes
    • To prevent a network outage during the transition from ifupdown to ifupdown2, the ifupdown package was modified to not stop networking upon its removal.
    • For backups of running VMs with multiple disks, it was possible for write operations to non-primary disks, happening after the backup-start synchronization point, to be included in the backup.
    This has been fixed with QEMU 6.2.0 by initializing the relevant bitmap early enough.

Known Issues

  • QNAP NFS shares could not be mounted with NFS version 4.1 when using kernel pve-kernel-5.15.30-2-pve - the issue has been mitigated in kernels pve-kernel-5.15.35-2 and above.
    If your QNAP NFS share cannot be mounted, upgrade the kernel and reboot.
    As an alternative mitigation, you can explicitly set the NFS version to 4 (not 4.1 or auto).
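    The alternative mitigation above can be applied in the storage configuration. A minimal sketch, assuming a hypothetical NFS storage named qnap-backup (server address, export path and content types are placeholders):

    ```
    # /etc/pve/storage.cfg - pin the NFS version to 4 via mount options
    nfs: qnap-backup
            server 192.168.1.50
            export /share/backup
            path /mnt/pve/qnap-backup
            content backup
            options vers=4
    ```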
  • PCI(e) pass through related:
    • Systems passing through a GPU may be affected by the switch to the SYS_FB (system frame buffer) KConfig build option, which uses the simplefb module as driver in the new default 5.15 based kernel.
      The system frame buffer allows taking over the FB from the firmware/earlier boot stages. Note that Proxmox VE uses the legacy simplefb driver over the modern simpledrm one, due to regressions and issues we encountered when testing the latter.
      Most of those issues are already fixed in newer kernels, and Proxmox VE may switch to the modern, DRM based FB driver once it moves to 5.17, or newer, as its default kernel.
      If your system is configured to pass through the (i)GPU, and you had to avoid the host kernel claiming the device, you may now need to also add video=simplefb:off to the kernel boot command line.
    • Setups using vendor-reset for PCIe pass through need to adapt to changes in the new default 5.15 based kernel; see this issue for details.
      They must run the command echo 'device_specific' > /sys/bus/pci/devices/<PCI-ID>/reset_method before the VM is started. This can be automated by using a systemd service or an on-boot cron script.
      Alternatively, one can use a VM hook script with the pre-start hook.
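      A minimal sketch of the systemd-service variant mentioned above; the unit name and PCI ID are hypothetical examples and must be adapted to your device:

      ```
      # /etc/systemd/system/vendor-reset-fix.service (hypothetical name)
      # Replace 0000:0c:00.0 with the PCI ID of your passed-through device.
      [Unit]
      Description=Set device_specific reset method for vendor-reset
      After=multi-user.target

      [Service]
      Type=oneshot
      ExecStart=/bin/sh -c 'echo device_specific > /sys/bus/pci/devices/0000:0c:00.0/reset_method'

      [Install]
      WantedBy=multi-user.target
      ```

      Enable it once with systemctl enable vendor-reset-fix.service, so it runs on every boot before VMs are started.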
  • intel_iommu defaulted to on in 5.15 based kernels up to pve-kernel-5.15.35-3-pve with version 5.15.35-6. An enabled IOMMU can cause problems with older hardware - issues were reported with, for example, HP DL380 g8 and Dell R610 servers (both older than 10 years).
    The issue can be fixed by explicitly disabling intel_iommu on the kernel command line (intel_iommu=off), following the reference documentation - https://pve.proxmox.com/pve-docs/chapter-sysadmin.html#sysboot_edit_kernel_cmdline
    As the setting has been reverted in newer pve-kernel-5.15 packages, the issue is now mostly relevant during installation from the ISO.
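    A sketch of the kernel command line change described in the referenced documentation; which of the two mechanisms applies depends on how your system boots:

    ```shell
    # Systems booting via GRUB: append intel_iommu=off to
    # GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub, then:
    update-grub

    # Systems using proxmox-boot-tool (e.g. ZFS on UEFI): append
    # intel_iommu=off to /etc/kernel/cmdline, then:
    proxmox-boot-tool refresh
    ```

    Reboot afterwards for the new command line to take effect.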
  • Certain systems may need to explicitly enable iommu=pt (SR-IOV pass-through) on the kernel command line.
    There are some reports for this to solve issues with Avago/LSI RAID controllers, for example in a Dell R340 Server booted in legacy mode.
  • Certain systems may need to work around a hardware issue with TDP (two-dimensional paging), a technique, now enabled by default, to speed up switches from VM to host and vice versa.
    If you see a kernel log message containing KVM: entry failed, hardware error 0x80000021, you are affected.
    This has been addressed with the kernel package pve-kernel-5.15.39-3-pve with version 5.15.39-3 and newer, which is available in all repositories.

Proxmox VE 7.1

Released 17. November 2021: See Downloads (updated ISO release 2 with updated kernel and bug fixes 7. December 2021)

  • Based on Debian Bullseye (11.1)
  • Ceph Pacific 16.2.6
  • Ceph Octopus 15.2.15 (continued support until mid 2022)
  • Kernel 5.13 default
  • LXC 4.0
  • QEMU 6.1
  • ZFS 2.1

Changelog Overview

  • Enhancements in the web interface (GUI):
    • Improved, more flexible scheduling options for backup jobs.
    Backup jobs are now run by a new scheduler daemon (pvescheduler) and support the same more flexible calendar-event schedules already known from the verify and garbage-collection jobs in Proxmox Backup Server.
    • Support configuration of backup retention policy for each backup job.
    Brings the powerful and flexible retention settings from the API and CLI to the GUI, making it easier to set up smart backup policies.
    • Add Schedule Simulator tool to the backup job overview panel to help get the calendar event format right.
    • Simplify the creation of virtual guests with multiple disks.
    Additional disks can now be added from the creation wizard, eliminating the need to add them after creating the VM or Container.
    • Added a combo box under advanced options, for selecting the asynchronous IO mode for each disk of a virtual machine.
    • Add a Notes panel for the datacenter. It can interpret Markdown and render it as HTML.
    • Improvements to the UX for configuring metric servers.
    • Two-factor authentication improvements:
      • Support for WebAuthn (superseding U2F) and one-time recovery keys.
      • Support for configuring multiple factors for a single account.
    • Improved translations, among others:
      • Arabic
      • Basque
      • Brazilian Portuguese
      • French
      • German
      • Simplified Chinese
      • Traditional Chinese
      • Turkish
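    The more flexible calendar-event schedules mentioned above for backup jobs follow a systemd-like grammar; see the Proxmox VE reference documentation on calendar events for the exact syntax. The values below are illustrative sketches, not an exhaustive list:

    ```
    21:00            # every day at 21:00
    mon..fri 02:30   # weekdays at 02:30
    sat 03:00        # every Saturday at 03:00
    hourly           # alias for once per hour
    ```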
  • Virtual Machines (KVM/QEMU):
    • Support for Windows 11 OS type
    • UEFI Secure Boot
    Newly created VMs have support for secure boot verification, with an option to have Linux distribution and Microsoft keys pre-enrolled.
    • TPM support
    Support for adding a Trusted Platform Module to any VM, fulfilling the requirements of modern operating systems, for example Windows 11.
    • Store the current time and the installed QEMU version, when creating a VM.
    This information is also used to work around changes in the QEMU PCI/ACPI device hotplug layer, which might cause network interface name-changes otherwise.
    • SCSI and VirtIO VM disks can be marked read-only.
    • Allow re-assigning a disk to another VM.
    • qm move_disk has been renamed to qm move-disk.
    The old syntax is still supported for backwards compatibility.
    • Changed the initial default resolution from 800x600 to 1024x768 when using OVMF (EFI).
    Note that the resolution is saved in the EFIvars, so this mainly affects new VMs. You can override the preferred resolution in the OVMF settings menu (Device Manager -> Platform Config).
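    The disk re-assignment and renamed move-disk command mentioned above can be sketched as follows; VM IDs, disk slots and the storage name are hypothetical examples:

    ```shell
    # Move a disk of VM 100 to another storage (new command spelling):
    qm move-disk 100 scsi0 other-storage

    # Re-assign a disk from VM 100 to VM 200 (options as assumed from the
    # 7.1 feature description; check qm help move-disk on your host):
    qm move-disk 100 scsi1 --target-vmid 200 --target-disk scsi1
    ```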
  • Container
    • Support Fedora 35 and Ubuntu 21.10.
    • Support and provide templates for the AlmaLinux and Rocky Linux distributions, which claim to be 1:1 binary compatible with RHEL.
    • Support zstd-compressed templates.
    Leverage the highly efficient, high-ratio, state-of-the-art compression algorithm for CT templates.
    • Enable the nesting feature for new unprivileged containers created via the web interface.
      The nesting feature ensures better interoperability with modern systemd versions used in newer templates.
    • Better handling of unmanaged containers.
    • Allow re-assigning a mount point volume to another container.
    • pct move_volume has been renamed to pct move-volume.
    The old syntax is still supported for backwards compatibility.
    • Improved support for restoring backups of privileged containers as unprivileged ones (use of unprivileged containers is strongly recommended wherever possible).
    • Added support for container CPU-sets on hosts with cgroupv2 layout.
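    The mount point re-assignment and renamed move-volume command for containers can be sketched analogously; container IDs, mount point slots and the storage name are hypothetical examples:

    ```shell
    # Move a mount point volume of container 101 to another storage:
    pct move-volume 101 mp0 other-storage

    # Re-assign a volume from container 101 to container 102 (options as
    # assumed from the 7.1 feature description; check pct help move-volume):
    pct move-volume 101 mp0 --target-vmid 102 --target-volume mp0
    ```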
  • General improvements for virtual guests
    • Better interoperability between storage replication and rollback to an independent snapshot.
  • Cluster
    • Updated corosync to include bug-fixes for issues occurring during network recovery.
    This could otherwise have led to loss of quorum on all cluster nodes, which in turn would cause a cluster-wide fencing event in case HA was enabled.
  • Disk Management
    • Support wiping specific partitions.
    Added more fine-grained control over what part of a disk you want to clean up for re-use.
    • Support creating storages on a particular partition, in addition to using the whole disk.
    • Support using partitions for specific use-cases, like Ceph Journal.
    • Support for zstd compression when creating a new zpool via API/GUI.
  • Storage
    • Make pre-allocation policy configurable for file-based storage.
    Fully pre-allocated images can provide better performance, but take a longer time when being created - see qemu-img(1) for details.
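    A sketch of configuring the pre-allocation policy on a file-based storage; the storage name is a placeholder and the set of accepted values should be checked against pvesm help on your host (off, metadata, falloc and full are the documented policies):

    ```shell
    # Use metadata pre-allocation for new images on the storage "local":
    pvesm set local --preallocation metadata
    ```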
  • Backup/Restore
    • Augmented scheduling options for backups.
    Backup jobs are now run by a new scheduler daemon (pvescheduler) and support the more flexible schedule-options known from Proxmox Backup Server verify and garbage collection jobs (which in turn are inspired by systemd-timers).
    • Support for protected backups, which will not be pruned and cannot be removed manually, without removing the protected flag.
  • Storage Replication
    • Scheduling of replication jobs is now handled by the pvescheduler daemon instead of a systemd-timer running every minute, resulting in vastly reduced logging on the nodes.
  • Ceph
    • Full support for configuring external Ceph clusters via API.
    Before, the keyring needed to be put in the correct place via shell.
    • Multiple CephFS instances are supported.
  • Access Control
    • OpenID Authentication now honors the configured http-proxy in the datacenter settings.
    • Two-Factor Authentication
      • Add support for multiple 2nd factors.
      • Add recovery keys as 2nd factor.
      • Add WebAuthn 2nd factor support.
      • Rework TFA and login UI accordingly.
  • Improvements regarding the upgrade from version 6.4:
    • A bug in proxmox-boot-tool, which could make a Proxmox VE system running the 5.11 kernel unbootable upon upgrade, was fixed.
    • Privileged containers with old systemd versions can now run successfully - the Proxmox VE node still needs to be booted into the legacy cgroup layout.

Notable or Breaking Changes

  • API deprecations, moves and removals
    • The /version API method now only returns the console setting of the datacenter config, instead of all settings (use /cluster/options instead).
  • CLI changes
    • qm: rename move_disk command to move-disk with an alias for backward compatibility.
  • Config file changes
    • The /etc/sysctl.d/pve.conf snippet has been moved to /usr/lib/sysctl.d/10-pve.conf, in accordance with current recommendations (see sysctl.d(5)). Local modifications need to be re-enabled - they are preserved in /etc/sysctl.d/pve.conf.dpkg-old.
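    A sketch of re-enabling the preserved local modifications mentioned above; the target file name is a hypothetical example:

    ```shell
    # Move the preserved settings back into a sysctl.d snippet of your own:
    mv /etc/sysctl.d/pve.conf.dpkg-old /etc/sysctl.d/local-pve-overrides.conf
    # Re-apply all sysctl.d snippets in priority order:
    sysctl --system
    ```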
  • Two-Factor Authentication:
    • Keys stored by admins in the user entry (this is how it worked back before the users could add TOTP and U2F keys themselves, and how it worked for Yubico OTP until now) are now deprecated.
    • The two-step login procedure in the API has been changed to work like in the Proxmox Backup Server. For now, the new procedure is opt-in and the old one still works, but is deprecated.

Known Issues

  • Web-interface glitches
    • Please reload the web-interface after the upgrade to avoid various glitches or issues all around.
    Ensure that the browser cache is circumvented on reload, for example by using CTRL + SHIFT + R (use CMD instead of CTRL on macOS).
  • Two-Factor Authentication:
    • Keys stored by admins in the user entry (see above) still work, but will not be visible in the TFA panel until that user’s TFA configuration has been updated once, by registering any new TFA entry, such as by adding recovery keys.
    • When in a cluster, changes to TFA entries can only be done after all nodes have been upgraded, since the TFA configuration file format changed and will be automatically upgraded to the new format with the first change.
    • Old API clients which support 2nd factors via two-step logins may be unable to log in to accounts which have multiple TFA entries, or ones which were not supported previously.
  • With the addition of pvescheduler, new backup schedules are no longer configured in the cluster-wide /etc/pve/vzdump.cron file. While this file is still read and will remain supported for the foreseeable future, changing an existing backup schedule will cause the schedule to be stored in the new config file /etc/pve/jobs.cfg.
    This can be a problem if not all nodes in a cluster have been upgraded to the new version, since backups will only run on nodes which read the new config file.
    To avoid any problems, simply upgrade all nodes in your cluster before making any changes to scheduled backups.
  • Low potential for full-cluster fencing event when upgrading HA-enabled clusters using libknet1 version 1.21-pve1
    • Under certain network load situations, that is, one set of nodes having very little leftover bandwidth while another set has plenty of available bandwidth, a cluster (partly) running libknet1 version 1.21-pve1 can trigger a bug in corosync/kronosnet. With HA enabled, this can then lead to a fencing event for the entire cluster.
    If a separate network is used for corosync/kronosnet, the chance of triggering the bug is highly reduced.
    See bug #3672 for details.
    • As a precaution, it is advised to disable HA, in order to prevent a fencing event. This is done by stopping the `pve-ha-lrm` service on all nodes, then stopping the `pve-ha-crm` service on all nodes prior to upgrading. After all nodes have been upgraded, HA should be enabled again before rebooting any nodes, by first restarting `pve-ha-lrm` on all nodes, then restarting `pve-ha-crm` on all nodes.
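    The precaution described above can be sketched as the following sequence; each command must be run on every node before moving to the next step:

    ```shell
    # Before upgrading: disable HA to prevent a fencing event.
    systemctl stop pve-ha-lrm    # step 1: on all nodes
    systemctl stop pve-ha-crm    # step 2: on all nodes

    # After all nodes have been upgraded, and before rebooting any node:
    systemctl start pve-ha-lrm   # step 3: on all nodes
    systemctl start pve-ha-crm   # step 4: on all nodes
    ```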
  • Garbled output during installation of Debian in VMs when standard VGA display and OVMF/UEFI is configured
    • Debian's kernel package currently does not enable the simplefb module and thus uses the legacy efifb frame buffer; this can lead to glitches with the OVMF version that Proxmox VE 7.1 ships.
    As a workaround, you can either use QXL (SPICE) for the VM's display, use SeaBIOS, or disable Secure Boot in OVMF and use the non-graphical, text based installer.
  • VMs with SATA as disk bus may show issues when used with io_uring
    • For now this can be worked around by switching the disk's Async IO mode to threads (for cache = write back/through) or native (for cache = off, none or direct sync), respectively, in the Advanced options of the disk edit dialogue.
    • Note that SATA is generally not the best choice for a VM's disk bus; rather use (VirtIO-)SCSI for the best performance and feature set. For Windows, VirtIO support is available through the Windows VirtIO Drivers.
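    The Async IO workaround described above can also be applied on the CLI; VM ID and drive options are hypothetical, and the existing drive options must be repeated in full, as with the aio override documented for qm set:

    ```shell
    # Switch a SATA disk of VM 100 to aio=threads (for cache=writeback/writethrough);
    # use aio=native instead for cache=off, none or directsync.
    # Take the existing drive options from the output of: qm config 100
    qm set 100 --sata0 local-lvm:vm-100-disk-1,aio=threads
    ```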

Proxmox VE 7.0

Released 6. July 2021: See Downloads, updated ISO release 2 with updated base system to final Debian 11.0 Bullseye and bug fixes 31. August 2021

  • Based on Debian Bullseye (11)
  • Ceph Pacific 16.2 as new default
  • Ceph Octopus 15.2 continued support
  • Kernel 5.11 default
  • LXC 4.0
  • QEMU 6.0
  • ZFS 2.0.4

Changelog Overview

  • Installer:
    • Rework the installer environment to use switch_root instead of chroot, when transitioning from initrd to the actual installer.
      This improves module and firmware loading, and slightly reduces memory usage during installation.
    • Automatically detect HiDPI screens, and increase console font and GUI scaling accordingly. This improves UX for workstations with Proxmox VE (for example, for passthrough).
    • Improve ISO detection:
      • Support ISOs backed by devices using USB Attached SCSI (UAS), which modern USB3 flash drives often do.
      • Linearly increase the delay of subsequent scans for a device with an ISO image, bringing the total check time from 20s to 45s. This allows for the detection of very slow devices, while continuing faster in general.
    • Use zstd compression for the initrd image and the squashfs images.
    • Setup Btrfs as root file system through the Proxmox VE Installer (Technology preview)
    • Update to busybox 1.33.1 as the core-utils provider.
  • Enhancements in the web interface (GUI):
    • The node summary panel shows a high level status overview, while the separate Repository panel shows in-depth status and list of all configured repositories. Basic repository management, for example, activating or deactivating a repository, is also supported.
    • Notes panels for Guests and Nodes can now interpret Markdown and render it as HTML.
    • On manually triggered backups, you can now enable pruning with the backup-retention parameters of the target storage, if configured.
    • The storage overview now uses SI units (base 10) to be consistent with the units used in the graphs.
    • Support for security keys (like YubiKey) as SSH keys, when creating containers or preparing cloud-init images.
    • Improved rendering for IOMMU-groups when adding passthrough PCI devices to QEMU guests.
    • Improved translations, among others:
      • Arabic
      • French
      • German
      • Japanese
      • Polish
      • Turkish
  • Access Control:
    • Single-Sign-On (SSO) with the new OpenID Connect access realm type.
    You can integrate external authorization servers, either using existing public services or your own identity and access management solution, for example, Keycloak or LemonLDAP::NG.
    • Added new permission Pool.Audit to allow users to see pools, without permitting them to change the pool.
    See breaking changes below for some possible impact in custom created roles.
  • Virtual Machines (KVM/QEMU):
    • QEMU 6.0 has support for io_uring as an asynchronous I/O engine for virtual drives - this is now the default for newly started or migrated guests.
    The new default can be overridden in the guest config via qm set VMID --DRIVE EXISTING-DRIVE-OPTS,aio=native (where, for example, DRIVE would be scsi0 and the OPTS can be taken from the qm config VMID output).
    • EFI disks stored on Ceph now use the writeback caching-mode, improving boot times in case of slower or highly-loaded Ceph storages.
    • Unreferenced VM disks (not present in the configuration) are not destroyed automatically any more:
      • This was made opt-in in the GUI in Proxmox VE 6.4 and is now also opt-in in the API and with CLI tools.
      • Furthermore, if this clean-up option is enabled, only storages with content-types of VM or CT disk images, or rootdir will be scanned for unused disk-volumes.
    With this new default value, data loss is also prevented by default. This is especially beneficial in cases of dangerous and unsupported configurations, for example, where one backing storage is added twice to a Proxmox VE cluster with an overlapping set of content-types.
    • VM snapshot states are now always removed when a VM gets destroyed.
    • Improved logging during live restore.
  • Container
    • Support for containers on custom storages.
    • Clone: Clear the cloned container's `/etc/machine-id` when systemd is in use or that file exists. This ID must be unique, in order to prevent issues such as MAC address duplication on Linux bridges.
  • Migration
    • QEMU guests: The migration protocol for sending the Spice ticket changed in Proxmox VE 6.1. The code for backwards compatibility has now been dropped, prohibiting the migration of VMs from Proxmox VE 7.0 to hypervisors running Proxmox VE 6.1 and older.
    Always upgrade to the latest Proxmox VE 6.4, before starting the upgrade to Proxmox VE 7.
    • Containers: The force parameter to pct migrate, which enabled the migration of containers with bind mounts and device mounts, has been removed. Its functionality has been replaced by marking the respective mount-points as shared.
  • High Availability (HA):
    • Release LRM locks and disable watchdog protection if all services of the node the LRM is running on were removed, and no new ones were added for over 10 minutes.
    This reduces the possible subtle impact of an active watchdog after a node was cleared of HA services, for example, when HA services were previously only configured for evaluation.
    • Add a new HA service state recovery and transform the fence state into a transition to that new state.
    This gains a clear distinction between services that are yet to be fenced and services whose node has already been fenced and which are now awaiting recovery.
    • Continuously retry recovery, even if no suitable node was found.
    This improves recovery for services in restricted HA groups, since a quorate and working partition can exist without any available new node for a specific service.
    For example, if HA is used to ensure that a service bound to a local resource, like a VM using local storage, will be restarted and up as long as its node is running.
    • Allow manually disabling HA services that are currently in the recovery state, for more admin control in those situations.
  • Backup and Restore
    • Backups of QEMU guests now support encryption using a master key.
    • It is now possible to back up VM templates with SATA and IDE disks.
    • The maxfiles parameter has been deprecated in favor of the more flexible prune-options.
    • vzdump now defaults to keeping all backups, instead of keeping only the latest one.
    • Caching during live restore was reworked, significantly reducing the total restore time required and improving the time to a fully booted guest.
    • Support file-restore for VMs using ZFS or LVM for one, or more, storages in the guest OS.
  • Network:
    • Default to the modern ifupdown2 for new installations using the Proxmox VE official ISO. The legacy ifupdown is still supported in Proxmox VE 7, but may be deprecated in a future major release.
  • Time Synchronization:
    • Due to the design limitations of systemd-timesyncd, which make it problematic for server use, new installations will install chrony as the default NTP daemon.
    If you upgrade from a system using systemd-timesyncd, it's recommended that you manually install either chrony, ntp or openntpd.
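    A minimal sketch of the recommended manual switch, using chrony as the example replacement:

    ```shell
    # Install chrony; apt should remove the conflicting systemd-timesyncd
    # automatically (verify the proposed changes before confirming).
    apt update
    apt install chrony
    ```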
  • Ceph Server
    • Support for Ceph 16.2 Pacific
    • Ceph monitors with multiple networks can now be created using the CLI, provided you have multiple public_networks defined.
    Note that multiple public_networks are usually not needed, but in certain deployments, you might need to have monitors in different network segments.
    • Improved support for IPv6 and mixed setups, when creating a Ceph monitor.
    • Beginning with Ceph 16.2 Pacific, the balancer module is enabled by default for new clusters, leading to better distribution of placement groups among the OSDs.
    • Newly created Bluestore OSDs will benefit from the newly enabled sharding configuration for rocksdb, which should lead to better caching of frequently read metadata and less space needed during compaction.
  • Storage
    • Support for Btrfs as technology preview
      • Add an existing Btrfs file system as storage to Proxmox VE, using it for virtual machines, containers, as a backup target, or to store and serve ISO and container appliance images.
    • The outdated, deprecated, internal DRBD Storage plugin has been removed. A derived version targeting newer DRBD is maintained by Linbit[footnotes 7.0 1].
    • More use of content-type checks instead of checking a hard-coded storage-type list in various places.
    • Support downloading ISO and container appliance images directly from a URL to a storage, including optional checksum verification.
  • Disk Management
    • Wiping disks is now possible from the GUI, enabling you to clear disks which were previously in use and create new storages on them.
    Note that wiping a disk is a destructive operation: any data on the disk will be destroyed permanently.
  • pve-zsync
    • Separately configurable number of snapshots on source and destination, allowing you to keep a longer history on the destination, without the requirement to have the storage space available on the source.
  • Firewall
    • The sysctl settings needed by pve-firewall are now set on every update to prevent disadvantageous interactions during other operations (for example package installations).
  • Certificate management
    • The ACME standalone plugin has improved support for dual-stacked (IPv4 and IPv6) environments and no longer relies on the configured addresses to determine its listening interface.

Breaking Changes

  • Pool permissions
The old permission Pool.Allocate now only allows users to edit pools, not to see them. Therefore, Pool.Audit must be added to existing custom roles with the old Pool.Allocate to preserve the same behavior. All built-in roles are updated automatically.
  • VZDump
    • Hookscript: The TARFILE environment variable was deprecated in Proxmox VE 6, in favor of TARGET. In Proxmox VE 7, it has been removed entirely and thus, it is not exported to the hookscript anymore.
    • The size parameter of vzdump has been deprecated, and setting it is now an error.
  • API deprecations, moves and removals
    • The upgrade parameter of the /nodes/{node}/(spiceshell|vncshell|termproxy) API method has been replaced by providing upgrade as cmd parameter.
    • The /nodes/{node}/cpu API method has been moved to /nodes/{node}/capabilities/qemu/cpu
    • The /nodes/{node}/ceph/disks API method has been replaced by /nodes/{node}/disks/list
    • The /nodes/{node}/ceph/flags API method has been moved to /cluster/ceph/flags
    • The db_size and wal_size parameters of the /nodes/{node}/ceph/osd API method have been renamed to db_dev_size and wal_dev_size respectively.
    • The /nodes/<node>/scan/usb API method has been moved to /nodes/<node>/hardware/usb
  • CIFS credentials have been stored in the namespaced /etc/pve/priv/storage/<storage>.pw instead of /etc/pve/priv/<storage>.cred since Proxmox VE 6.2 - existing credentials will be moved during the upgrade, allowing the fallback code to be dropped.
  • The external storage plugin mechanism had an ABI-version bump that reset the ABI age, marking an incompatible breaking change; external plugins must adapt before they can be loaded again.
  • qm|pct status <VMID> --verbose, and the respective status API call, only include the template line if the guest is a template, instead of outputting template: for guests which are not templates.
  • qm vncproxy now always requires a password to be set via the LC_PVE_TICKET environment variable.
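    A sketch of the new qm vncproxy invocation; the ticket value and VM ID are hypothetical examples:

    ```shell
    # The password must now be provided via the LC_PVE_TICKET environment variable:
    LC_PVE_TICKET='my-vnc-ticket' qm vncproxy 100
    ```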

Known Issues

  • Network: Due to the updated systemd version, and for most upgrades, the newer kernel version (5.4 to 5.11), some network interfaces might change upon reboot:
    • Some may change their name. For example, due to newly supported functions, a change from enp33s0f0 to enp33s0f0np0 could occur.
      We observed such changes with high-speed Mellanox models.
    • Bridge MAC address selection has changed in Debian Bullseye - it is now generated based on the interface name and the machine-id (5) of the system.
    Systems installed using the Proxmox VE 4.0 to 5.4 ISO may have a non-unique machine-id. These systems will have their machine-id re-generated automatically on upgrade, to avoid a potentially duplicated bridge MAC.
If you do the upgrade remotely, make sure you have a backup method of connecting to the host (for example, IPMI/iKVM, tiny-pilot, another network accessible by a cluster node, or physical access), in case the network used for SSH access becomes unreachable, due to the network failing to come up after a reboot.
  • Container:
    • cgroupv2 support by the container’s OS is needed to run in a pure cgroupv2 environment. Containers running systemd version 231 or newer support cgroupv2 [1], as do containers that do not use systemd as init system in the first place (e.g., Alpine Linux or Devuan).
    CentOS 7 and Ubuntu 16.10 are two prominent examples of Linux distribution releases whose systemd version is too old to run in a cgroupv2 environment; for details and possible fixes see: https://pve.proxmox.com/pve-docs/chapter-pct.html#pct_cgroup_compat

Proxmox VE 7.0 beta1

Released 24. June 2021: See Downloads

  • Based on Debian Bullseye (11)
  • Ceph Pacific 16.2 as new default
  • Ceph Octopus 15.2 continued support
  • Kernel 5.11 default
  • LXC 4.0
  • QEMU 6.0
  • ZFS 2.0.4
  • Installer:
    • Rework the installer environment to use switch_root instead of chroot, when transitioning from initrd to the actual installer.
      This improves module and firmware loading, and slightly reduces memory usage during installation.
    • Automatically detect HiDPI screens, and increase console font and GUI scaling accordingly. This improves UX for workstations with Proxmox VE (for example, for passthrough).
    • Improve ISO detection:
      • Support ISOs backed by devices using USB Attached SCSI (UAS), which modern USB3 flash drives often do.
      • Linearly increase the delay of subsequent scans for a device with an ISO image, bringing the total check time from 20s to 45s. This allows for the detection of very slow devices, while continuing faster in general.
    • Use zstd compression for the initrd image and the squashfs images.
    • Update to busybox 1.33.1 as the core-utils provider.
  • Enhancements in the web interface (GUI)
    • Notes panels for Guests and Nodes can now interpret Markdown and render it as HTML.
    • On manually triggered backups, you can now enable pruning with the backup-retention parameters of the target storage, if configured.
    • The storage overview now uses SI units (base 10) to be consistent with the units used in the graphs.
    • Support for security keys (like YubiKey) as SSH keys, when creating containers or preparing cloud-init images.
    • Improved rendering for IOMMU-groups when adding passthrough PCI devices to QEMU guests.
    • Improved translations, among others:
      • Arabic
      • French
      • German
      • Polish
  • Virtual Machines (KVM/QEMU):
    • QEMU 6.0 has support for io_uring as an asynchronous I/O engine for virtual drives - this is now the default for newly started or migrated guests.
    The new default can be overridden in the guest config per drive via qm set VMID --DRIVE EXISTING-DRIVE-OPTS,aio=native.
    • EFI disks stored on Ceph now use the writeback caching-mode, improving boot times in case of slower or highly-loaded Ceph storages.
    • Unreferenced VM disks (not present in the configuration) are not destroyed automatically any more:
      • This was made opt-in in the GUI in Proxmox VE 6.4 and is now also opt-in in the API and with CLI tools.
      • Furthermore, if this clean-up option is enabled, only storages with content-types of VM or CT disk images, or rootdir will be scanned for unused disk-volumes.
    With this new default value, data loss is also prevented by default. This is especially beneficial in cases of dangerous and unsupported configurations, for example, where one backing storage is added twice to a Proxmox VE cluster with an overlapping set of content-types.
    • VM snapshot states are now always removed when a VM gets destroyed.
    • Improved logging during live restore.
  • Container
    • Support for containers on custom storages.
    • Clone: Clear the cloned container's `/etc/machine-id` when systemd is in use or that file exists. This ID must be unique, in order to prevent issues such as MAC address duplication on Linux bridges.
  • Migration
    • QEMU guests: The migration protocol for sending the Spice ticket changed in Proxmox VE 6.1. The code for backwards compatibility has now been dropped, prohibiting the migration of VMs from Proxmox VE 7.0 to hypervisors running Proxmox VE 6.1 and older.
    Always upgrade to the latest Proxmox VE 6.4, before starting the upgrade to Proxmox VE 7.
    • Containers: The force parameter to pct migrate, which enabled the migration of containers with bind mounts and device mounts, has been removed. Its functionality has been replaced by marking the respective mount-points as shared.
  • Backup and Restore
    • Backups of QEMU guests now support encryption using a master key.
    • It is now possible to back up VM templates with SATA and IDE disks.
    • The maxfiles parameter has been deprecated in favor of the more flexible prune-options.
    • vzdump now defaults to keeping all backups, instead of keeping only the latest one.
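The prune options can be set per storage, per backup job, or node-wide in /etc/vzdump.conf; a sketch of the node-wide variant (the retention values are examples):

```
# /etc/vzdump.conf (excerpt)
# Replaces the deprecated "maxfiles": keep the 3 most recent backups,
# plus one per day for a week and one per week for a month
prune-backups: keep-last=3,keep-daily=7,keep-weekly=4
```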
  • Network:
    • Default to the modern ifupdown2 for new installations using the Proxmox VE official ISO. The legacy ifupdown is still supported in Proxmox VE 7, but may be dropped in a future major release.
  • Time Synchronization:
    • Due to the design limitations of systemd-timesyncd, which make it problematic for server use, new installations will install chrony as the default NTP daemon.
    If you upgrade from a system using systemd-timesyncd, it's recommended that you manually install one of chrony, ntp or openntpd.
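On upgraded systems the switch is a single package installation (on Debian, chrony conflicts with systemd-timesyncd, so the latter is removed automatically):

```
# Run as root on the upgraded node
apt update
apt install chrony
```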
  • Ceph Server
    • Support for Ceph 16.2 Pacific
    • Ceph monitors with multiple networks can now be created using the CLI, provided you have multiple public_networks defined.
    Note that multiple public_networks are usually not needed, but in certain deployments, you might need to have monitors in different network segments.
    • Improved support for IPv6 and mixed setups, when creating a Ceph monitor
    • Beginning with Ceph 16.2 Pacific, the balancer module is enabled by default for new clusters, leading to better distribution of placement groups among the OSDs
    • Newly created Bluestore OSDs will benefit from the newly enabled sharding configuration for rocksdb, which should lead to better caching of frequently read metadata and less space needed during compaction
  • Storage
    • The outdated, deprecated, internal DRBD Storage plugin has been removed. A derived version targeting newer DRBD is maintained by Linbit[footnotes 7.0 1].
  • Disk Management
    • Wiping disks is now possible from the GUI, enabling you to clear disks which were previously in use and create new storages on them. Note that wiping a disk is a destructive operation; any data on the disk will be destroyed.
  • pve-zsync
    • Separately configurable number of snapshots on source and destination, allowing you to keep a longer history on the destination, without the requirement to have the storage space available on the source.
  • Firewall
    • The sysctl settings needed by pve-firewall are now set on every update to prevent disadvantageous interactions during other operations (for example package installations)
  • Certificate management
    • The ACME Standalone plugin has improved support for dual-stacked (IPv4 and IPv6) environments and no longer relies on the configured addresses to determine its listening interface

Breaking Changes

  • Pool permissions
Added new permission Pool.Audit to allow users to see pools, without permitting them to change the pool. The old permission Pool.Allocate now only allows users to edit pools, not to see them. Therefore, Pool.Audit must be added to custom roles with the old Pool.Allocate to preserve the same behavior. All built-in roles are updated automatically.
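For custom roles this means adding the new privilege once; a hedged sketch using a hypothetical role name:

```
# "PoolView" is an example custom role that previously relied on
# Pool.Allocate implying visibility; add Pool.Audit to restore the
# old behavior
pveum role modify PoolView -privs "Pool.Allocate Pool.Audit"
# Verify the resulting privilege set
pveum role list
```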
  • VZDump
    • Hookscript: The TARFILE environment variable was deprecated in Proxmox VE 6, in favor of TARGET. In Proxmox VE 7, it has been removed entirely and thus, it is not exported to the hookscript anymore.
    • The size parameter of vzdump has been deprecated, and setting it is now an error.
  • API deprecations, moves and removals
    • The upgrade parameter of the /nodes/{node}/(spiceshell|vncshell|termproxy) API method has been replaced by providing upgrade as cmd parameter.
    • The /nodes/{node}/cpu API method has been moved to /nodes/{node}/capabilities/qemu/cpu
    • The /nodes/{node}/ceph/disks API method has been replaced by /nodes/{node}/disks/list
    • The /nodes/{node}/ceph/flags API method has been moved to /cluster/ceph/flags
    • The db_size and wal_size parameters of the /nodes/{node}/ceph/osd API method have been renamed to db_dev_size and wal_dev_size respectively.
    • The /nodes/<node>/scan/usb API method has been moved to /nodes/<node>/hardware/usb
  • CIFS credentials have been stored in the namespaced /etc/pve/priv/storage/<storage>.pw instead of /etc/pve/<storage>.cred since Proxmox VE 6.2 - existing credentials will get moved during the upgrade, allowing the old fallback code to be dropped.
  • qm|pct status <VMID> --verbose, and the respective status API call, only include the template line if the guest is a template, instead of outputting template: for guests which are not templates.

Note that there is no API-stability guarantee during the beta.

Known Issues

  • Network: Due to the updated systemd version and, for most upgrades, the newer kernel version (5.4 to 5.11), some network interfaces might change upon reboot:
    • Some may change their name. For example, due to newly supported functions, a change from enp33s0f0 to enp33s0f0np0 could occur.
      We observed such changes with high-speed Mellanox models.
    • Bridge MAC address selection has changed in Debian Bullseye - it is now generated based on the interface name and the machine-id (5) of the system.
    Systems installed using the Proxmox VE 4.0 to 5.4 ISO may have a non-unique machine-id. These systems will have their machine-id re-generated automatically on upgrade, to avoid a potentially duplicated bridge MAC.
If you do the upgrade remotely, make sure you have a backup method of connecting to the host (for example, IPMI/iKVM, tiny-pilot, another network accessible by a cluster node, or physical access), in case the network used for SSH access becomes unreachable, due to the network failing to come up after a reboot.
  • Misc. bug fixes and improvements

Notes

Proxmox VE 6.4

Released 28. April 2021

  • Based on Debian Buster (10.9)
  • Ceph Octopus 15.2.11 and Ceph Nautilus 14.2.20
  • Kernel 5.4 default
  • Kernel 5.11 opt-in
  • LXC 4.0
  • QEMU 5.2
  • ZFS 2.0.4 - new major version
  • Virtual Machines (KVM/QEMU):
    • Support pinning a VM to a specific QEMU machine version.
    • Automatically pin VMs with Windows as OS type to the current QEMU machine on VM creation.
    This improves stability and guarantees that the hardware layout can stay the same even with newer QEMU versions.
    • Address issues with hanging QMP commands, which caused VMs to freeze on disk resize and in other non-deterministic edge cases.
    Note that some QMP timeout log messages are still being investigated; they are harmless and purely informative.
    • cloud-init: re-add Stateless Address Autoconfiguration (SLAAC) option to IPv6 configuration.
    • Improve output in task log for mirroring drives and VM live-migration.
  • Container
    • Improved cgroup v2 (control group) handling.
    • Support and provide appliance templates for Alpine Linux 3.13, Devuan 3, Fedora 34, Ubuntu 21.04.
  • Backup and Restore
    • Implement unified single-file restore for virtual machine (VM) and container (CT) backup archives located on a Proxmox Backup Server.
    The file-restore is available in the GUI and in a new command line tool proxmox-file-restore.
    • Live-Restore of VM backup archives located on a Proxmox Backup Server.
    No more watching the task log, waiting for a restore to finish; VMs can now be brought up while the restore runs in the background.
    • Consistent handling of excludes for container backups across the different backup modes and storage types.
    • Container restores now default to the privilege setting from the backup archive.
  • Ceph Server
    • Improve integration for placement group (PG) auto-scaler status and configuration.
      Allow configuration of the CRUSH-rule, Target Size and Target Ratio settings, and automatically calculating the optimal numbers of PGs based on this.
  • Storage
    • Support editing of backup notes on any CephFS, CIFS or NFS storage.
    • Support configuring a namespace for accessing a Ceph pool.
    • Improve handling ZFS pool by doing separate checks for imported and mounted.
    This separation helps in situations where a pool was imported but not mounted, and executing another import command failed.
  • Disk Management
    • Return partitions and display them in tree format.
    • Improve detection of disk and partition usage.
  • Enhancements in the web interface (GUI)
    • Show current usage of host memory and CPU resources by each guest in a node's search-view.
    • Use binary (1 KiB equals 1024 B instead of 1 KB equals 1000 B) as base in the node and guest memory usage graphs, providing consistency with the units used in the current usage gauge.
    • Make columns in the firewall rule view more responsive and flexible by default.
    • Improve Ceph pool view, show auto-scaler related columns.
    • Support editing existing Ceph pools, adapting the CRUSH-rule, Target Size and Target Ratio, among other things.
  • External metric servers:
    • Support the InfluxDB 1.8 and 2.0 HTTP(s) API.
    • Allow use of InfluxDB instances placed behind a reverse-proxy.
  • Proxmox VE API Proxy Daemon (pveproxy)
    • Make listening IP configurable (in /etc/default/pveproxy). This can help to limit exposure to the outside (e.g. by only binding to an internal IP).
    • pveproxy now listens for both IPv4 and IPv6, by default
  • Installation ISO:
    • Installation on ZFS:
      • if booted with legacy BIOS (non-UEFI), now also copy the kernel images to the second VFAT partition (ESP), allowing the system to boot from there with grub, making it possible to enable all ZFS features on such systems.
      • set up the boot-partition and boot-loader to all selected disks, instead of only to the first mirror vdev, improving the experience with hardware where the boot-device is not easily selectable.
    • The installer environment attempts to do an NTP time synchronization before actually starting the installation, avoiding telemetry and cluster issues, if the RTC had a huge time-drift.
  • pve-zsync
    • Improved snapshot handling allowing for multiple sync intervals for a source and destination pair.
    • Better detection of aborted syncs, which previously caused pve-zsync to stop the replication.
  • Firewall
    • Fixes in the API schema to prevent storing rules with a big IP-address list, which get rejected by iptables-restore due to its size limitations. We recommend creating and using IP-Sets for that use case.
    • Improvements to the command-line parameter handling.

Known Issues

  • Please avoid using zpool upgrade on the "rpool" (root pool) itself, when upgrading to ZFS 2.0 on a system booted by GRUB in legacy mode, as that will break pool import by GRUB.
    See the documentation for determining the bootloader used, if you're unsure.
    Setups installed with the Proxmox VE 6.4 ISO are not affected, as there the installer always sets up an easier-to-handle, vfat-formatted ESP to boot.
    See the ZFS: Switch Legacy-Boot to Proxmox Boot Tool article about how to switch over to a safer boot variant for legacy GRUB-booted setups with ZFS as root filesystem.
  • New default bind address for pveproxy and spiceproxy, unifying the default behavior with Proxmox Backup Server
    • With making the LISTEN_IP configurable, the daemon now binds to both wildcard addresses (IPv4 0.0.0.0:8006 and IPv6 [::]:8006) by default.
    Should you wish to prevent it from listening on IPv6, simply configure the IPv4 wildcard as LISTEN_IP in /etc/default/pveproxy:
    LISTEN_IP="0.0.0.0"
    • Additionally, the logged IP address format changed for IPv4 in pveproxy's access log (/var/log/pveproxy/access.log). They are now logged as IPv4-mapped IPv6 addresses instead of:
    192.168.16.68 - root@pam [10/04/2021:12:35:11 +0200] "GET /api2/json/cluster/tasks HTTP/1.1" 200 854
    the line now looks like:
    ::ffff:192.168.16.68 - root@pam [10/04/2021:12:35:11 +0200] "GET /api2/json/cluster/tasks HTTP/1.1" 200 854
    If you want to restore the old logging format, also set LISTEN_IP="0.0.0.0"
  • Resolving the Ceph `insecure global_id reclaim` Health Warning
    With Ceph Octopus 15.2.11 and Ceph Nautilus 14.2.20 we released an update to fix a security issue (CVE-2021-20288) where Ceph was not ensuring that reconnecting/renewing clients were presenting an existing ticket when reclaiming their global_id value.
    Updating from an earlier version will result in the above health warning.
    See the forum post here for more details and instructions to address this warning.

Proxmox VE 6.3

Released 26. November 2020

  • Based on Debian Buster (10.6)
  • Ceph Octopus 15.2.6 (first stable release) and Ceph Nautilus 14.2.15
  • Kernel 5.4 LTS
  • LXC 4.0
  • QEMU 5.1
  • ZFS 0.8.5
  • Proxmox Backup Server Integration
    • Stable Proxmox Backup Server integration: The stable version 1.0 of Proxmox Backup Server is now integrated and enterprise support is available from the Proxmox support team.
    • Data encrypted on client-side before backing up to Proxmox Backup Server.
  • Ceph
    • Stable integration of Ceph Octopus.
    • Add selector to choose which supported Ceph version to install in the GUI configuration wizard.
    • Recovery progress is displayed in the Ceph status panel.
    • Show and allow setting of Placement Group (PG) auto-scaling mode of Ceph pools.
    • Set device class when creating OSDs, especially if the auto-detection yields the wrong class.
  • Enhancements in the GUI
    • Improved VM boot device order editor:
      • It is now possible to select multiple devices per type (disk, network) for booting.
      • Booting from passed through PCI devices (e.g., NVMe drives) is supported.
      • Improved user experience with a drag-and-drop UI.
    • GUI for editing external metric servers: You can now connect your Proxmox VE nodes to InfluxDB or Graphite using the GUI, instead of having to manually edit /etc/pve/status.cfg
    • Optional TLS certificate verification for LDAP and AD authentication realms.
    • Improve high-DPI display and browser zoom compatibility.
    • Split up storage content view by type.
    • Backup/Restore:
      • Overview of all guests, which aren't included in any backup at all.
      • Detailed view per backup job, showing all covered guests and which of their disks are backed up.
    • Display optional comments for all storage types.
      • Proxmox Backup Server additionally displays the verification state of all backup snapshots.
    • Better usability for preventing accidental snapshot rollback
      • The GUI now makes it difficult to accidentally confuse snapshot removal with snapshot rollback.
  • Storage
    • Add highly flexible backup retention with "keep" settings: The new backup retention settings, which augment and replace the "Max Backups" setting, enable you to decide how many backups to keep per timeframe and implement enhanced retention policies per storage or backup job.
    • Better handling of container volume activation on ZFS.
    • Increased timeout for connecting to CIFS and NFS storage over slow links.
    • Improve querying SSD wear leveling.
    • Small improvements to the configuration handling of the LIO iSCSI provider for ZFS over iSCSI storage.
    • ZFS disk management: create a systemd service to unconditionally import a pool created using the GUI.
  • Container
    • Add support for current Devuan containers.
    • Add support for Kali Linux containers.
    • Update list of supported releases for Ubuntu, Fedora and CentOS.
    • Support setting a custom timezone per container.
    • Improve startup monitoring.
    • Add a debug parameter to pct start, to output the debug log of the container start.
    • Support systems with up to 8192 cores.
    • Optionally ignore mount points while running pct fstrim.
    • Fix aborting of backups on Ceph backed containers with a large IO load, by issuing fsfreeze before snapshotting.
  • QEMU
    • Fast, incremental backups to Proxmox Backup Server using dirty-bitmaps.
    • Handle guest shutdowns (power down from within a VM) during backups.
    • Improved boot order selection allowing booting from multiple virtual disks and from passed through PCI devices (e.g., NVMe drives).
    • Allow pass through of certain older Intel iGPU models with QEMU's 'legacy-igd' mode.
    • Support more CPU options, for example SSE4.2
    • Better support for hugepages across multiple NUMA nodes.
  • General improvements for virtual guests
    • Improved handling of replicated guests when migrating.
  • Clustering
    • Harden locking in the clustered configuration filesystem (pmxcfs), avoids a possible freeze when joining, messaging or leaving a closed-process-group.
  • User and permission management
    • Improved support for using client certificates/keys when connecting to AD/LDAP realms.
    • Optional support for case-insensitive logins with AD/LDAP realms.
    • Fine-grained permissions for SDN and CloudInit.
    • Better handling of clock jumps for rotating keys.
  • Firewall
    • Improved API for matching ICMP-types.
  • Documentation
    • Clarify qdevice requirements.
    • Add section about ZFS pool design choices.
    • Add documentation on requirement for encrypted ZFS datasets as storage.
    • Add manpage for cpu-models.conf(5).
  • Installer
    • Reboot automatically upon successful installation.
    • Drop ext3 as supported file system.
    • Start a shell on vt3 for debugging during installation.
  • Experimental: support for Software-Defined Networking (SDN)
    • Support for IPAM with a plugin framework.
    • Add support for internal IPAM management and PowerDNS.
  • Countless bug fixes and smaller improvements

Known Issues

  • VM guests with multiple boot disk setups (e.g., mdadm, LVM, etc...) need to be configured with the new boot order config so that all required disks are marked "bootable", with the one containing the boot loader placed first - otherwise, the guest may fail to boot after being restarted with the new QEMU 5.1 version.
  • The "exclude-path" option for vzdump now supports non-anchored paths for all backup modes. Non-anchored paths do not start with a '/' and will match in any subdirectory. Previously, such paths only had an effect for "suspend" mode backups, so please ensure that you don't have any unwanted non-anchored paths configured.

Proxmox VE 6.2

Released 12. May 2020

  • Based on Debian Buster (10.4)
  • Ceph Nautilus (14.2.9)
  • Kernel 5.4 LTS
  • LXC 4.0
  • Qemu 5.0
  • ZFS 0.8.3
  • Web interface
    • Allow deploying DNS-based Let's Encrypt (ACME) certificates
    • Allow admins to see the full granted permission/privilege tree for a user
    • Add experimental Software Defined Network (SDN) GUI and documentation, based on modern mainline Linux Kernel network technology
    • Allow collapsing the Notes sub-panel in the guest summary panels, either permanently or automatically when empty, through the user settings.
    • Add 'Creation Date' column for storage content, allowing one, for example, to comfortably find backups from a certain date
    • Add device node creation to Container feature selection
    • Full support for up to 8 corosync links
    • Automatic guest console reload on VM/CT startup
    • Add Arabic translation
    • Allow seamlessly changing the language, without logging out and logging in again
  • Container
    • LXC 4.0.2 and lxcfs 4.0.3 with initial full cgroupv2 support by Proxmox VE
    • Improve support for modern systemd based Containers
    • Improve default settings to support hundreds to thousands* of parallel running Containers per node (* thousands only with simple distributions like Alpine Linux)
    • Allow creating templates on directory-based storage
  • Backup/Restore
    • Support for the highly efficient and fast Zstandard (zstd) compression
  • Improvements to the HA stack
    • Allow destroying virtual guests under HA control when purge is set
  • QEMU
    • Fixed EFI disk behavior on block-based storage (see known issues)
    • VirtIO Random Number Generator (RNG) support for VMs
    • Custom CPU types with user-defined flags and QEMU/KVM settings
    • Improved machine type versioning and compatibility checks
    • Various stability fixes, especially for backups and IO-Threads
    • Migration:
      • Enable support for Live-Migration with replicated disks
      • Allow specifying target storage for offline mode
      • Allow specifying multiple source-target storage pairs (for now, CLI only)
      • Improve behavior with unused disks
      • Secure live-migration with local disks
  • General improvements for virtual guests
    • Handle ZFS volumes with non-standard mount point correctly
  • Cluster
    • Improve lock contention during high frequency config modifications
    • Add versioning for cluster join
    • Enable full support for up to 8 corosync links
  • Ceph
    • Easier uninstall process, with better information about which actions need to be taken.
  • Storage
    • Storage migration: introduce allow-rename option and return new volume ID if a volume with the same ID is already allocated on the target storage
    • Support the 'snippet' content type for GlusterFS storage
    • Bandwidth limitations are now also available for SAMBA/CIFS based storage
    • Handle ZFS volumes with non-standard mount point correctly
    • Improve metadata calculation when creating a new LVM-Thin pool
    • Improve parsing of NVMe wearout metrics
  • User and permission management
    • Automatically sync LDAP users and groups into Proxmox VE
    • LDAP mode LDAP+STARTTLS
    • Allow adding and managing authentication realms through the 'pveum' CLI tool
    • Full support and integration for API Tokens
      • Shared or separated privileges
      • Token lifetime
      • Revoke it anytime without impacting your user login
  • Documentation
    • Update hypervisor migration guide in the wiki
    • Document that reducing the number of Ceph placement groups is possible
    • Improve layout, use a left-side based table-of-content
  • Various improvements
    • Firewall: make config parsing more robust and improve ICMP-type filtering

Known Issues with OVMF/UEFI disks of VMs

A VM's EFI disk on a storage which doesn't allow for small (128 KB) images (for example: Ceph, ZFS, LVM(thin)) was not correctly mapped to the VM. While this is fixed now, existing setups may need manual intervention:

  • You do not have to do anything if your EFI disk uses qcow2 or "raw" on a file-based storage.
  • Before the upgrade, make sure that on your ESP, the EFI boot binary exists at \EFI\BOOT\BOOTX64.EFI (the default EFI Boot fallback).
    • Windows and some Linux VMs using systemd-boot should do that automatically
  • If you already upgraded and it does not boot, see OVMF/UEFI Boot Entries on how to recreate the boot entry via the OVMF Boot Menu

Proxmox VE 6.1

Released 4. December 2019: See Downloads, updated ISO release 2 with installer bug fixes and updated packages on 6. March 2020

  • Based on Debian Buster (10.2)
  • Ceph Nautilus (14.2.4.1)
  • Corosync 3.0
  • Kernel 5.3
  • LXC 3.2
  • Qemu 4.1.1
  • ZFS 0.8.2
  • Web interface
    • More datacenter configuration options are now editable via GUI:
      • U2F two-factor settings.
      • Cluster-wide bandwidth limit for the following traffic types: migration, backup-restore, clone, disk-move.
    • Improvements to two-factor authentication allowing to use a hardware-based TOTP key.
    • Mobile GUI: implemented login for TOTP-based two-factor authentication enabled user accounts.
    • More icons have been converted from bitmap to vectorized icons from Font Awesome.
    • The scale mode of noVNC can be changed in "My Settings".
    • Addition of Hebrew language translations.
    • New "Run now" button for cluster-wide backup jobs.
    • With ifupdown2 installed, it is now possible to change the network configuration and reload it from the GUI without a reboot.
  • Container
    • Pending changes for containers: changes can now be made to a running container and will be applied after its next reboot.
    • Reboot of a running container via GUI, API, and command-line interface.
    • Hot plugging of mount points by using the new mount API available with Linux Kernel 5.3.
    • Support of recent releases of GNU/Linux distributions like Fedora 31, CentOS 8, and Ubuntu 19.10.
  • SPICE
    • Audio devices can now be added via GUI (no need to edit the configuration file).
    • Folders can now be shared between SPICE client and VM (still experimental).
    • Video streaming can be enabled. This can help to improve the performance of fast-changing display areas like videos.
    • The SPICE USB device now supports USB3 (Qemu >= 4.1).
  • Backup/Restore
    • VMs with IOThreads enabled can now be backed up.
    • Run scheduled backup jobs manually from the Datacenter in the GUI.
  • Improvements to the HA stack
    • New 'migrate' shutdown policy. If enabled, running services will be migrated to another node on shutdown. Once the node comes back, and if the services have not been manually moved to another node in the meantime, those services will move back.
    • New 'crm-command stop'. Shuts down a VM/container with the specified timeout and performs a hard stop if the timeout is 0. A VM or container stop command will now be redirected to this new crm-command.
  • QEMU
    • Allow domains other than '0000' for PCI(e) passthrough.
    • New 'reboot' API call. This allows applying pending changes without having to wait for the guest to shut down before starting it again.
    • QEMU monitor timeout issue, which prevented successful backups in certain configurations, has been fixed.
    • PCI(e) passthrough supports up to 16 PCI(e) devices
    • Support for QEMU Guest Agents using an ISA serial port (not VirtIO) for communication, allowing the QGA to be used with FreeBSD, among others.
  • General improvements for virtual guests
    • Allow 'tags' in the guest configuration. This meta-information can be useful for things like configuration management (no GUI support yet).
    • VM/CT: Purge allows removing the corresponding virtual machine or container from replication and backup jobs on destruction.
  • Cluster stability
    • A number of bugs have been identified and fixed upstream in cooperation with corosync and kronosnet.
    • The initial issues some users have experienced with MTU changes are resolved.
    • pmxcfs has undergone a stability audit using ASAN (AddressSanitizer) and UBSAN (Undefined Behavior Sanitizer), fixing various possible issues for certain edge-cases.
  • Storage
    • Allow configuration of non-default 'mountpoint' properties for ZFS.
    • Allow .img files as alternative to .iso images.
    • Various iSCSI improvements.
    • Overhaul of ZFS on iSCSI with the LIO target provider.
    • Supports all features offered by newer kernels with Ceph and KRBD.
  • Various improvements
    • Firewall: Added support for 'raw' tables and use them to allow users to enable and configure Synflood-Attack protections.
    • Automatic certificate renewal for the self-signed certificate 2 weeks before expiration.
    • Reduced certificate lifetime of newly generated certificates (2 years instead of 10 years). This avoids warnings from some modern browsers about overly long certificate lifetimes.
  • Documentation
    • Proofreading of parts of the documentation (style and grammar).
    • Enhanced documentation for the Ceph administration.
  • Countless bug fixes and package updates (for all details see bugtracker and GIT)

Proxmox VE 6.0

Released 16. July 2019: See Downloads

  • Based on Debian Buster 10.0
  • Pre-upgrade checklist tool `pve5to6` - available on Proxmox VE 5.4 and 6.0
    • Running `pve5to6` checks for common pitfalls known to interfere with a clean upgrade process.
  • Corosync 3.0.2 using Kronosnet as transport
    • The default transport method now uses unicast; this can simplify setups where the network had issues with multicast.
    • New Web GUI Network selection widget avoids making typos when choosing the correct link address.
    • Currently, there is no multicast support available (it's on the kronosnet roadmap).
  • LXC 3.1
  • Ceph Nautilus 14.2.x
    • Better performance monitoring for rbd images through `rbd perf image iotop` and `rbd perf image iostat`.
    • OSD creation, based on ceph-volume: integrated support for full disk encryption of OSDs.
    • More robust handling of OSDs (no more mounting and unmounting to identify the OSD).
    • ceph-disk has been removed: After upgrading it is not possible to create new OSDs without upgrading to Ceph Nautilus.
    • Support for PG split and join: The number of placement groups per pool can now be increased and decreased. There is even an optional plugin in ceph-manager to automatically scale the number of PGs.
    • New messenger v2 protocol brings support for encryption on the wire (currently this is still experimental).
    • See https://docs.ceph.com/en/nautilus/releases/nautilus/ for the complete release notes.
  • Improved Ceph administration via GUI
    • A cluster-wide overview for Ceph is now displayed in the 'Datacenter View' too.
    • The activity and state of the placement groups (PGs) is visualized.
    • The version of all Ceph services is now displayed, making detection of outdated services easier.
    • Configuration settings from the config file and database are displayed.
    • You can now select the public and cluster networks in the GUI with a new network selector.
    • Easy encryption for OSDs with a checkbox.
  • ZFS 0.8.1
    • Native encryption for datasets with comfortable key-handling by integrating the encryption directly into the `zfs` utilities. Encryption is as flexible as volume creation and adding redundancy - the gained comfort w.r.t. dm-crypt is comparable to the difference between mdadm+lvm and zfs.
    • Allocation-classes for vdevs: you can add a dedicated fast device to a pool which is used for storing often accessed data (metadata, small files).
    • TRIM-support - use `zpool trim` to notify devices about unused sectors.
    • Checkpoints on pool level.
    • See https://github.com/zfsonlinux/zfs/releases/tag/zfs-0.8.0 and https://github.com/zfsonlinux/zfs/releases/tag/zfs-0.8.1 for the complete release notes.
  • Support for ZFS on UEFI and on NVMe devices in the installer
    • You can now install Proxmox VE with its root on ZFS on UEFI booted systems.
    • You can also install ZFS on NVMe devices directly from the installer.
    • By using `systemd-boot` as bootloader all pool-level features can be enabled on the root pool.
  • Qemu 4.0.0
    • Live migration of guests with disks backed by local storage via GUI.
    • Added support for more Hyper-V enlightenments improving Windows performance in a virtual machine under Qemu/KVM.
    • Mitigations for the performance impact of recent Intel CPU vulnerabilities.
    • More VM CPU-flags can be set in the web interface.
    • Newer virtual PCIe port hardware for machine type q35 in version >= 4.0. This fixes some passthrough issues.
    • Support for custom Cloudinit configurations:
      • You can create a custom Cloudinit configuration and store it as snippet on a storage.
      • The `qm cloudinit dump` command can be used to get the current Cloudinit configuration as a starting point for extensions.
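The workflow, sketched with a hypothetical VMID 100 and a storage named `local` that has the `snippets` content type enabled:

```
# Dump the currently generated user-data as a starting point
qm cloudinit dump 100 user > /var/lib/vz/snippets/userdata-100.yaml
# After editing the snippet, attach it to the VM
qm set 100 --cicustom "user=local:snippets/userdata-100.yaml"
```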
  • Firewall improvements
    • Improved detection of the local network so that all used corosync cluster networks get automatically whitelisted.
    • Improved firewall behavior during cluster filesystem restart, e.g. on package upgrade.
  • Mount options for container images
    • You can now set certain performance and security related mount options for each container mountpoint.
  • Linux Kernel
    • Updated 5.0 Kernel based off the Ubuntu 19.04 "Disco" kernel with ZFS.
    • Intel in-tree NIC drivers are used:
      • Many recent improvements to the kernel networking subsystem introduced incompatibilities with the out of tree drivers provided by Intel, which sometimes lag behind on support for new kernel versions. This can lead to a change of the predictable network interface names for Intel NICs.
  • Automatic cleanup of old kernel images
    • Old kernel images are no longer marked as NeverAutoRemove - preventing problems when /boot is mounted on a small partition.
    • By default the following images are kept installed (all others can be automatically removed with `apt autoremove`):
      • the currently running kernel
      • the version being newly installed on package updates
      • the two latest kernels
      • the latest version of each kernel series (e.g. 4.15, 5.0)
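Assuming a standard Proxmox VE 6 system, the effect of this policy can be previewed before committing to it:

```shell
# List the currently installed Proxmox kernel images...
dpkg --list 'pve-kernel-*' | grep '^ii'

# ...then simulate the cleanup without actually removing anything.
apt-get -s autoremove
```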
  • Guest status display in the tree view: Additional states for guests (migration, backup, snapshot, locked) are shown directly in the tree overview.
  • Improved ISO detection in the installer: The way how the installer detects the ISO was reworked to include more devices, alleviating problems of detection on certain hardware.
  • Pool level backup: It is now possible to create a backup task for backing up a whole pool. By selecting a pool as backup target instead of an explicit list of guests, new members of the pool are automatically included, and removed guests are automatically excluded from the backup task.
  • New User Settings and Logout menu.
  • Automatic rotation of authentication key every 24h: by limiting the key lifetime to 24h the impact of key leakage or a malicious administrator is reduced.
  • The node's Syslog view in the GUI was overhauled and is now faster.
  • Sheepdog is no longer maintained, and thus not supported anymore as Storage plugin.
  • `ceph-disk` has been removed in Ceph Nautilus - use `ceph-volume` instead.
  • Improved reference documentation
  • Upgrade documentation: https://pve.proxmox.com/wiki/Upgrade_from_5.x_to_6.0
  • Countless bug fixes and package updates (for all details see bugtracker and GIT)

Proxmox VE 6.0 beta1

Released 04.07.2019: See Downloads

  • Based on Debian Buster 10.0
  • Kernel 5.0.15
  • ZFS on EFI and/or NVMe installation
  • Ceph v14.2.1 Nautilus
  • ZFS 0.8.1
  • QEMU 4.0.0
  • LXC 3.1.0
  • Corosync 3
  • Misc. bug fixes and improvements

Proxmox VE 5.4

Released 11.04.2019: See Downloads

  • Based on Debian Stretch 9.8
  • Kernel 4.15.18
  • QEMU 2.12.1 (QEMU 3.0.1 since May 2019 via apt)
  • LXC 3.1.0
  • ZFS 0.7.13
  • Ceph 12.2.11 (Luminous LTS, stable), packaged by Proxmox
    • Ceph can now be installed via the GUI, thus closing the gap of needing the CLI for this initial step, before configuring a complete (hyperconverged) Ceph setup via PVE's GUI.
  • QDevice support via `pvecm`
    • Primarily for small 2-node clusters: adding a QDevice helps mitigate the downside of not being able to reboot one node without losing quorum (and thus the ability to make any changes in the cluster).
    • Can also help in clusters with a larger even number of nodes by providing a tie-break vote.
    • Integration into pvecm and the PVE stack vastly simplifies adding a QDevice (it was possible manually before as well).
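With the `pvecm` integration, the setup reduces to a few commands (the IP address is illustrative; the external QDevice host needs the corosync-qnetd package, and root SSH access to it is used during setup):

```shell
# On the external QDevice host (a small machine outside the cluster):
#   apt install corosync-qnetd

# On any cluster node, register the QDevice and verify the vote count:
pvecm qdevice setup 192.0.2.10
pvecm status
```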
  • Wake On Lan support via `pvenode`
    • It is now possible to associate a MAC-Address with a PVE-node.
    • This can be used to send a WOL (wake on LAN) packet to that node from any other node in the PVE cluster, and thus power it on on demand.
    • Provides a comfortable way of saving power by keeping only part of the nodes permanently online, and starting others on demand (e.g. on increased load, or for running guests during upgrades of the PVE nodes).
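A sketch of the two steps involved (MAC address and node name are examples):

```shell
# On the node that should be woken later, store its NIC's MAC address
# in the node configuration.
pvenode config set --wakeonlan aa:bb:cc:dd:ee:ff

# Later, from any other node in the cluster, power that node on:
pvenode wakeonlan pve2
```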
  • GUI Container wizard creates unprivileged containers by default
  • HA improvements and added flexibility
    • It is now possible to set a datacenter wide HA policy which can change the way guests are treated upon a Node shutdown or reboot. The choices are:
      • freeze: always freeze services, independent of the shutdown type (reboot, poweroff)
      • failover: never freeze services; this means that a service will get recovered to another node if possible and if the current node does not come back up within the grace period of 1 minute.
      • default: this is the current behavior, freeze on reboot but do not freeze on poweroff
  • Suspend to disk/hibernate support for Qemu/KVM guests
    • qemu guests can be 'hibernated' (have their RAM contents and internal state saved to permanent storage) and resumed on the next start.
    • This enables users to preserve the running state of the qemu-guests across most upgrades to and reboots of the PVE-node.
    • Additionally it can speed up the startup of guests running complex workloads/ workloads which take lots of resources to setup initially, but which need not run permanently.
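From the CLI this looks roughly as follows (the VM ID is an example):

```shell
# Suspend VM 100 to disk: RAM contents and internal state are written
# to a state volume on permanent storage.
qm suspend 100 --todisk 1

# On the next start the VM resumes where it left off.
qm start 100
```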
  • Support for guest (both Qemu and LXC) hookscripts:
    • Hook-scripts are small executables which can be configured for each guest, and are called at certain steps of the guest's lifetime ('pre-start', 'post-start', 'pre-stop', 'post-stop').
    • This gives Administrators great flexibility in the way they can prepare the environment for the guest (e.g. adding necessary network resources (routes, vlans), firewall-rules, unlocking encrypted files/devices,...) and cleaning them up when the guest is shutdown or stopped.
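A minimal hookscript sketch (the four phases are the ones named above; the per-phase actions here are placeholder `echo`s standing in for real setup and cleanup work):

```shell
#!/bin/sh
# Proxmox VE calls the configured hookscript with two arguments:
# the guest ID and the lifecycle phase.
hook() {
    vmid="$1"
    phase="$2"
    case "$phase" in
        pre-start)  echo "guest $vmid: preparing environment" ;;
        post-start) echo "guest $vmid: started" ;;
        pre-stop)   echo "guest $vmid: about to stop" ;;
        post-stop)  echo "guest $vmid: cleaning up" ;;
        *)          echo "unknown phase: $phase" >&2; return 1 ;;
    esac
}

# Dispatch only when invoked with both arguments (so the file can also
# be sourced without side effects).
if [ "$#" -eq 2 ]; then
    hook "$1" "$2"
fi
```

The script would be stored as an executable snippet (e.g. `local:snippets/hook.sh`) and attached with `qm set <vmid> --hookscript local:snippets/hook.sh` (or `pct set` for containers).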
  • Improved Qemu Guest creation wizard:
    • Certain often requested options (e.g. Machine-type (q35, pc-i440fx), Firmware (Seabios, UEFI), SCSI controller) can now be selected directly in the VM Creation Wizard, and dependent options get set to sensible values directly.
  • Clusterwide Subscription Check and Display
    • The Datacenter Summary page now gives Administrators a global overview of the subscription status of all nodes in a PVE cluster, thus helping to verify the support status of the complete cluster at a glance
  • Improved UI in the installer
    • It is now possible to go back to a previous screen in the pve-installer, and adapt choices made there, without the need to restart the complete installation process
    • Before the actual installation is performed an information-page is displayed containing all relevant information
  • U2F Authentication for the WebUI
    • PVE's WebUI now can optionally use two-factor authentication for users.
    • This is generally considered good security practice, and in certain domains and environments it is required.
    • The new U2F authentication and the TOTP second factor authentication can be configured by each user for themselves (before all 2FA solutions needed to be configured by a user with User.Modify permission).
  • Improved reference documentation
  • Countless bug fixes and package updates (for all details see bugtracker and GIT)

Proxmox VE 5.3

Released 4.12.2018: See Downloads, updated ISO image with installer and apt bug fixes and updated packages on 23.1.2019

  • Based on Debian Stretch 9.6
  • Kernel 4.15.18
  • QEMU 2.12.1
  • LXC 3.0.2
  • ZFS 0.7.12
  • Ceph 12.2.8 (Luminous LTS, stable), packaged by Proxmox
  • Installer with ZFS: no swap space is created by default, instead an optional limit of the used space in the advanced options can be defined, thus leaving unpartitioned space at the end for a swap partition.
  • Disk Management on GUI (ZFS, LVM, LVMthin, xfs, ext4)
  • Create CephFS via GUI (MDS)
  • CephFS Storage plugin (supports backups, templates and ISO images)
  • LIO support for ZFS over iSCSI storage plugin
  • ifupdown2 package and partial GUI support
  • Delete unused disk images on the storage view
  • Enable/disable the local cursor in noVNC
  • Enable/disable autoresize in noVNC
  • Edit /etc/hosts via GUI, which is necessary whenever the IP or hostname is changed
  • Editable VGA Memory for VM, enables higher resolutions (UHD) for e.g. SPICE
  • VM Generation ID: Users can set a 'vmgenid' device, which enables safe snapshots, backup and clone for various software inside the VM, like Microsoft Active Directory. This vmgenid will be added to all newly created VMs automatically. Users have to manually add it to older VMs.
  • qemu-server: add ability to fstrim a guest filesystem after moving a disk (during move_disk or migrate with local disks)
  • Emulating ARM virtual machines (experimental, mostly useful for development purposes)
  • vGPU/MDev and PCI passthrough: GUI for configuring PCI passthrough, which also enables the use of vGPUs (aka mediated devices) like Intel KVMGT (aka GVT-g) or Nvidia's vGPUs.
  • pvesh rewrite to use the new CLI Formatter framework - providing a configurable output format for all API calls (formats include: json, yaml and a pleasant table output with borders). Provides unified support for json output for the PVE CLI Tools.
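For example, the same API call can be rendered in different formats (the paths are standard PVE API endpoints):

```shell
# Same query, three output formats via the new CLI Formatter framework.
pvesh get /version --output-format json
pvesh get /version --output-format yaml
pvesh get /nodes                        # default: bordered table output
```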
  • Nesting for Containers (privileged & unprivileged containers): Allows running lxc, lxd or docker inside containers, also supports using AppArmor inside containers
  • Mounting CIFS/NFS inside containers (privileged): Allows using samba or nfs shares directly from within containers
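Both features are controlled via the container's `features` property, roughly like this (the container ID is an example; the mount feature requires a privileged container):

```shell
# Allow nesting and mounting of NFS/CIFS shares inside CT 102.
# The property string is quoted because it contains a ';' separator.
pct set 102 --features "nesting=1,mount=nfs;cifs"
```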
  • Improved reference documentation
  • Countless bug fixes and package updates (for all details see bugtracker and GIT)

Proxmox VE 5.2

Released 16.05.2018: See Downloads

  • Based on Debian Stretch 9.4
  • Kernel 4.15.17
  • QEMU 2.11.1
  • LXC 3.0.0
  • Ceph 12.2.5 (Luminous LTS, stable), packaged by Proxmox
  • ZFS 0.7.8
  • Cloudinit GUI support
  • Cluster create/join nodes via GUI
  • Certificate management including Let's Encrypt GUI
  • SMB/CIFS Storage plugin (supports backups, images, templates, iso and containers)
  • Display IP for VM (using qemu-guest-agent)
  • LXC: templates and clones, move volume/disk
  • Create and edit new roles via GUI
  • I/O bandwidth limits for restore operations (globally, per storage or per restore job)
  • new and improved xterm.js integration including reconnect logic (on container reboots or restart migrations)
  • Basic/Advanced GUI
  • ebtables support
  • Improved reference documentation
  • Countless bug fixes and package updates (for all details see bugtracker and GIT)

Proxmox VE 5.1

Released 24.10.2017: See Downloads, updated ISO image with installer bug fixes and updated packages on 18.12.2017

  • Based on Debian Stretch 9.2
  • Kernel 4.13.3
  • QEMU 2.9.1
  • LXC: update to 2.1
  • Ceph 12.2.1 (Luminous LTS, stable), packaged by Proxmox
  • ZFS 0.7.2
  • Improved reference documentation with screenshots
  • Countless bug fixes and package updates (for all details see bugtracker and GIT)

Proxmox VE 5.0

Released 04.07.2017: See Downloads, updated ISO image with installer bug fixes and updated packages on 09.08.2017

  • Based on Debian Stretch 9.0
  • Kernel 4.10.15
  • QEMU 2.9
  • LXC: update to 2.0.8
  • New asynchronous Storage Replication feature (needs ZFS, technology preview)
  • New/updated LXC templates (Debian, Ubuntu, CentOS, Fedora, OpenSUSE, Arch Linux, Gentoo and Alpine)
  • Updated/improved noVNC console
  • Ceph v12.1.0 Luminous (technology preview), packaged by Proxmox
  • live migration with local storage
  • GUI improvements
    • USB and Host PCI address visibility
    • improved bulk and filtering options
  • Improved installation ISO
  • Importing virtual machines from foreign hypervisors (see the "Qemu/KVM Virtual Machines" wiki page)
  • improved reference documentation with screenshots
  • countless bug fixes and package updates (for all details see bugtracker and GIT)

Proxmox VE 5.0 beta2

Released 23.05.2017: See Downloads

  • Based on Debian Stretch
  • Kernel 4.10
  • Update to QEMU 2.9
  • Improved handling of re-used disks in installer
  • Improved handling of network devices in installer
  • Many small improvements and bug fixes

Proxmox VE 5.0 beta1

Released 22.03.2017: See Downloads

  • Based on Debian Stretch
  • Kernel 4.10
  • Many small improvements and bug fixes

Proxmox VE 4.4

Released 13.12.2016: See Downloads

  • update kernel to 4.4.35
  • KVM: update to qemu 2.7.0
  • LXC improvements
    • update to LXC 2.0.6
    • implement new restart migration
    • unprivileged containers (now in the GUI)
    • updated template repository (Debian, Ubuntu, CentOS, Fedora, Arch and Alpine)
  • GUI improvements
    • new ceph dashboard
    • new cluster dashboard
    • improved disk management, smart status supporting more SSD types
    • HA improved GUI
  • ISO installer supports advanced ZFS settings
  • dedicated network for VM/CT migrations (CLI only)
  • improved reference documentation, which now also contains screenshots
  • DRBD9 (removed from core, now maintained directly by Linbit, due to license change)
  • countless bug fixes and package updates (for all details see bugtracker and GIT)

Proxmox VE 4.3

Released 27.09.2016: See Downloads

  • GUI improvements
    • new search (“ctrl-shift-f”)
    • vertical menus with groups and icons
    • double click to open VM/CT console
    • new status overview for host, VM and containers, signal colors for overloaded CPU, etc.
    • Task log window remembers its size
  • VM wizard now proposes optimal settings depending on the selected operating system (for Linux the default is “virtio scsi disk” and "virtio net")
  • comprehensive reference documentation in html, pdf and epub format, including contextual help function on several places - based on asciiDoc
  • new disk management, including SMART monitoring and SSD wearout level display (for Intel, Sandisk and Samsung SSD).
  • based on Debian Jessie 8.6.0
  • Linux kernel 4.4.19
  • KVM/qemu 2.6.1
  • LXC: update to 2.0.4
  • many new and improved manual/help pages
  • countless bug fixes and package updates (for all details see bugtracker and GIT)

Proxmox VE 4.2

Released 27.04.2016: See Downloads

  • GUI update to Sencha Ext JS 6, including new icon set and dynamic performance charts
  • based on Debian Jessie 8.4.0
  • Linux kernel 4.4.6
  • KVM/qemu 2.5.1
  • installer: LVM thin or ZFS plugin pre-configured
  • LVM thin support
  • Let's Encrypt support
  • DRBD9: drbd 9.0.2 kernel module and drbdmanage 0.95
  • LXC: improved container setup
  • LXC: network rate limits
  • LXC: adding mount-points via GUI
  • LXC: pct can mount rootfs for read/write access of stopped or running containers
  • LXC: backup/snapshot improvements, including proper backup and restore of ACL
  • LXC: add support for Alpine Linux, Ubuntu 16.04
  • HA manager improvements
  • new confirmation dialog to prevent accidentally removal of a virtual machine or container
  • updated GUI translation (especially French, Italian and German)
  • many new and improved manual pages
  • countless bug fixes and package updates (for all details see bugtracker and GIT)

Proxmox VE 4.1

Released 11.12.2015: See Downloads

  • based on Debian Jessie 8.2.0
  • Linux kernel 4.2.6
  • improved startup/shutdown behavior (systemd)
  • enable NTP by default
  • installer: allow up to 8 disks for ZFS
  • KVM: add qemu agent GUI option
  • KVM: update network boot ROMs
  • Improve HA GUI for users with restricted permissions
  • add Galician language to GUI
  • LXC: add rootfs resize to GUI
  • LXC: add support for Fedora 22 and Debian stretch/sid, Ubuntu 15.10
  • LXC: support unprivileged containers (technology preview)
  • storage: added LVM thin support (technology preview)
  • Support for Turnkey Linux LXC appliances
  • added new pvereport command
  • countless bug fixes and package updates (for all details see bugtracker and GIT)

Proxmox VE 4.0

Released 05.10.2015: See Downloads

  • based on Debian Jessie 8.2.0
  • Linux kernel 4.2
  • QEMU 2.4
  • new high availability manager
  • bash completion support
  • full IPv6 support
  • include LXC (and removed OpenVZ) - see Convert OpenVZ to LXC
  • added DRBD9 with drbdmanage (technology preview)
  • countless bug fixes and package updates (for all details see bugtracker and GIT)

Proxmox VE 4.0 beta2

Released 10.09.2015: See Downloads

  • based on Debian Jessie 8.2.0
  • Linux kernel 4.2
  • QEMU 2.4
  • LXC (Linux Containers) improvements
    • Containers can be created on local storage, NFS, Ceph RBD, DRBD9, ...
    • Migration (offline)
  • IPv6 support
  • DRBD9
  • Ceph Server packages
  • countless bug fixes and package updates (for all details see bugtracker and GIT)

Proxmox VE 4.0 beta1

Released 22.06.2015: See Downloads

Proxmox VE 3.4

Released 19.02.2015: See Downloads - updated ISO image with ZFS 0.6.4.1 on 27.05.2015

  • add ZFS support
    • installer supports all ZFS RAID levels.
  • pve-manager (GUI) updates
    • openvz: implement suspend/resume
    • qemu GUI: support new pending changes API
    • new 'disconnect' button on the network edit panel
    • vzdump: new option --stop to abort running backup job
    • add NUMA option to processor form
    • add GUI for local ZFS storage
    • add startall/stopall/migrateall menu buttons
    • add multiqueues field to NIC form
  • update to qemu 2.1.3
  • qemu-server updates
    • shutdown by Qemu Guest Agent if the agent flag is set
    • snapshot_create: call fsfreeze if agent flag is set
    • vzdump: use qga freeze in vzdump in snapshot mode
    • add custom NUMA topology support
    • serial: allow to pass arbitrary device names
    • add link_down flag to network config
    • support additional e1000 variants for VM machines
    • enable hotplug by default
    • hotplug config: allow to enable specific features
    • include memory hotplug patch (experimental)
  • improved hotplug support
  • based on Debian Wheezy 7.8
  • kernel updates
  • countless bug fixes and package updates, for all details see bugtracker and GIT

Proxmox VE 3.3

Released 15.09.2014: See Downloads

  • pve-manager (GUI) updates
    • new Proxmox VE Firewall
    • noVNC console
    • openvz: add bridge vlan && firewall options to gui
    • new Proxmox VE Mobile, GUI for mobile devices
    • add new 'Pool View'
    • ZFS storage can now be configured on GUI
    • glusterfs: new option to specify backup volfile server
    • add new email_from option to datacenter.cfg
    • add Persian (Farsi) translation.
    • improved Spanish translation
    • update Chinese translation
    • Countless updates and fixes
  • update to qemu 2.1.0
    • pci passthrough improvements
    • hotplug improvements
    • migration : enable auto-converge capability
    • add cpu_hotplug (and maxcpus config)
    • add virtio-net multiqueue support
    • new option smbios1 to specify SMBIOS type 1 fields
    • set uuid for newly created machines
    • support new q35 machine type
    • add Broadwell cpu model
    • compile with new libiscsi (1.12.0)
    • use glusterfs 3.5.2 libraries
    • support drive option 'discard'
    • add support for new qemu throttling burst max parameters
    • add 'vmxnet3' to the list of available network card models, and 'lsi53c810' and 'pvscsi' to the list of available SCSI controllers
  • improved Console support
    • HTML5 Console for shell, VM and container console (noVNC)
    • noVNC console is now the default
    • vncterm: new option -notls (for novnc, which use 'wss')
    • vncterm: updated signature for java applet to avoid warnings
  • pve-kernel-2.6.32-32-pve: 2.6.32-136
    • update aacraid, arcmsr, netxtreme2, ixgbe, igb, megaraid_sas and e1000e drivers
    • update to vzkernel-2.6.32-042stab093.4.src.rpm
    • allow to use grub-efi-ia32 boot loader
  • pve-kernel-3.10.0-4-pve: 3.10.0-17
    • enable vfio xfga
    • update arcmsr, netxtreme2, ixgbe, igb, e1000e drivers
    • update to kernel-3.10.0-123.6.3.el7.src.rpm
    • allow to use grub-efi-ia32 boot loader
    • Note: there is still no OpenVZ support with this kernel
  • update ceph packages to 'firefly' (0.80.5)
    • Note: Please upgrade ceph packages first if you run ceph server on proxmox nodes (see ceph upgrade instructions).
  • update gluster packages to 3.5.2
  • fence-agents-pve: 4.0.10
    • update to 4.0.10
    • add fence_ovh and fence_amt
    • remove baytech, bullpap, cpint, egenera, mcdata, nss_wrapper, rackswitch, vixel, xcat. Those agents are no longer included in the upstream package.
    • removed fence_scsi
    • Note: This includes updates for fence_ipmilan (fence_ilo3, fence_ilo4, fence_imm, and fence_idrac), and some parameter names changed (see 'man fence_ipmilan'). Please verify that your fence device still works if you use HA.
  • based on Debian Wheezy 7.6
  • countless bug fixes and package updates, for all details see bugtracker and GIT

Proxmox VE 3.2

Released 10.03.2014: See Downloads

  • improved SPICE support
    • spiceterm: console for OpenVZ and host
    • add new console option to datacenter.cfg (java applet vs. spice)
    • add multi-monitor support
    • GUI: use split-button to easily select SPICE or VNC
    • more details on https://pve.proxmox.com/wiki/SPICE
  • update qemu to 1.7.0
    • add 'pvscsi' to the list of scsi controllers (emulate the VMware PVSCSI device)
    • add 'lsi53c810' to the list of scsi controllers (supported on some very old Windows NT versions)
    • add 'vmxnet3' to the list of available network card models (emulate VMware paravirtualized network card)
    • add drive option 'discard'
    • add support for new qemu throttling burst max parameters
    • improved live backup
  • pve-kernel-2.6.32-27-pve: 2.6.32-121
    • update to vzkernel-2.6.32-042stab084.20.src.rpm
    • update e1000, igb, ixgbe, netxtreme2, megaraid_sas
    • include latest ARECA RAID drivers
    • update Broadcom bnx2/bnx2x drivers to 7.6.62
    • update aacraid to aacraid-1.2.1-30300.src.rpm
  • Ceph Server (Technology Preview)
  • added Open vSwitch support (Technology Preview)
  • Optional 3.10 Kernel (based on RHEL7 beta, currently without OpenVZ support, for testing only)
  • storage: new ZFS plugin (Technology Preview), see https://pve.proxmox.com/wiki/Storage:_ZFS
  • storage: remove nexenta plugin (ZFS plugin is faster)
  • updated GlusterFS to 3.4.2
  • ISO installer now always uses a GPT partition table
    • added 'gdisk' to manage and view partitions via CLI
  • based on Debian Wheezy 7.4
  • countless bug fixes and package updates (for all details see bugtracker and GIT)

Proxmox VE 3.1

Released 21.08.2013: See Downloads

  • pve-kernel-2.6.32-23-pve (2.6.32-109)
    • update to vzkernel-2.6.32-042stab079.5.src.rpm
    • remove fix-ipoib-add-missing-lock.patch (now upstream)
    • include Highpoint 2710 RAID driver
    • update megaraid_sas to 6.600.18.00
  • pve-cluster (3.0-7)
    • suppress another warning about uninitialized value at boot time
    • avoid warning when adding a new node (can't create shared ssh key database...)
    • avoid warning if /etc/pve/priv/ already exists
    • add migration_unsecure param to datacenter.cfg
  • pve-firmware (1.0-23)
    • update for RHEL6.4 based kernels
  • pve-manager (3.1-3)
    • add new favicon.ico
    • fix perl dependencies
    • display subscription status after login
    • add GUI for GlusterFS storage
    • vzdump: wait one second before unmounting the snapshot (avoid device busy error)
    • apt: try to detect new packages
    • tell apt to use http_proxy settings
    • support to view changelogs for available pve packages
    • allow to configure cipher list for pveproxy
    • automatically add pve repository to apt sources list
    • allow to use serial terminal as display
    • avoid apt autoremoval for pve-kernel packages
    • pveproxy: use same cipher list as debian apache ssl.conf
    • avoid apt autoremoval for pve-kernel packages
    • avoid and fix several proxy loops
    • correctly initialize $rpcenv in spiceproxy
    • fix bug #437: include pve-libspice-server1 in package list
    • improved spice proxy code
    • add button to start SPICE console
    • add spiceproxy server
    • fix upgrade shell (quoting bug)
    • new pveupgrade script, used to display additional information to the user (reboot required? database up to date? ...)
    • add button to display package versions, to provide same info as 'pveversion -v' on GUI.
    • fix bug #408: reset ENV before starting shell
    • fix bug #379: allow to overwrite existing CTs if user has VM.Backup permissions
    • Allow to send notification mail about new packages.
    • package manager: run system upgrade inside VNC shell, group packages by 'Origin', remove apt/upgrade API completely.
    • package manager: improve changelog viewer, bug fixes
    • add package update manager to GUI
    • update Norwegian translations
    • rrd display : avoid 2 refresh
    • avoid gzip http response for png & gif & jar
  • pve-qemu-kvm (1.4-17)
    • enable glusterfs backend
    • add patch to use pce certs for spice by default
    • disable spice-socket.patch
    • Allow to query spice ticket
    • add usbredir support to spice
    • add experimental code to support spice
  • qemu-server (3.1-1)
    • allow pass through of usb parallel devices (--parallel0 /dev/usb/lp0)
    • fix bugs in migration code (wrong qw() usage)
    • vncproxy: load config from correct node
    • allow to use a socket for serial devices
    • implement 'qm terminal' to open terminal via serial device
    • add ability to run without graphic card ('vga: serial[n]')
    • add support for unsecure/fast migration (setting in datacenter.cfg)
    • remove spice cert paths (depend on pve-qemu-kvm >= 1.4-16)
    • implement spice seamless migration
    • support usb redirection devices for spice (usb[n]: spice)
    • disable tablet device by default for spice
    • spiceproxy API: allow client to choose proxy address
    • spiceproxy API: read cert subject name directly using Net::SSLeay
    • spice: use TLS (encrypt whole traffic)
    • allow to pass SCSI generic devices to guests, for example "scsi0: /dev/sg5"
    • cpu flags optimization
    • add support for SPICE
  • vncterm (1.1-4)
    • Allow to add intermediate certificates to /etc/pve/local/pve-ssl.pem (users previously used apache option SSLCertificateChainFile for that).
  • pve-libspice-server1 (0.12.4-1)
    • initial release and update to 0.12.4
  • ceph (0.67.1-1~bpo70+1)
    • New upstream release
  • libpve-access-control (3.0-6)
    • use shorter spiceproxy tickets
    • add code to generate tickets for SPICE
  • libpve-common-perl (3.0-6)
    • auth.conf: avoid warning if file does not exist, fix file permissions
    • add parser for "/etc/apt/auth.conf"
  • libpve-storage-perl (3.0-10)
    • add Glusterfs Plugin
    • API: add scan method for glusterfs
    • sheepdog: fixes for storeid != 'sheepdog'
  • lvm2 (2.02.98-pve4)
    • New upstream release (fix for udev rules)
  • glusterfs
  • SPICE
  • based on Debian 7.1 (Wheezy)
  • Countless bug fixes (for all details see bugtracker and GIT)

Proxmox VE 3.0

Released 24.05.2013: See Downloads

  • pve-kernel-2.6.32 2.6.32-100
    • fix CVE-2013-2094
  • update ceph packages to 0.61.2
  • libpve-common-perl 3.0-4:
    • fix bug #381: use persistent reservation file for ports
    • new function PVE::Tools::next_migrate_port()
  • libpve-storage-perl 3.0-6
    • rbd : clone volume to same pool that base volume
    • extend storage list API with useful parameters for clone
    • fix iscsi session scan bug
  • pve-cluster 3.0-4
    • depend on fuse (seems most utilities moved from fuse-utils to fuse)
    • remove dependency on fuse-utils (this package is no longer required)
    • fix warning about uninitialized value
    • fix bug 383: restart pveproxy instead of apache2
  • pve-manager 3.0-20
    • fix login problem when http_proxy is set
    • updated Chinese, German and French translation
    • allow to upload files with spaces in filename (replace with '_')
    • allow to delete pools with non-existent VMs/Storage
    • use correct icons for templates
    • sort templates after regular VMs
    • StorageSelector: allow to use new target option, used by Clone
    • fix bug #385: correctly handle Accept-Encoding
    • fix bug #374: correctly remove destroyed container from pool
  • pve-qemu-kvm 1.4-12
    • vma create: only store basename of config file.
  • qemu-server 3.0-15
    • clone disk : keep source volume params
    • clone: check if we can clone to target storage
    • fix bug #381: use PVE::Tools::next_migrate_port()
    • restore: do not restore template flag
  • vncterm 1.1-3
    • re-enable javascript-events.patch (needed by migrate)
  • based on Debian 7.0 (Wheezy)
  • new VM clone feature
  • new event driven API server (pveproxy)
    • completely replace apache2
    • efficient support for HTTP keep-alive
  • support bootlogd (boot log can be viewed on the GUI)
  • update pve-qemu-kvm to 1.4.1
  • update kernel to vzkernel-2.6.32-042stab076.7.src.rpm
    • changed default IO Scheduler to 'deadline'
    • updated Intel network drivers for e1000e, igb and ixgbe
  • Countless bug fixes (for all details see bugtracker and GIT)

Proxmox VE 2.3

Released 04.03.2013: See Downloads

  • update qemu-kvm to 1.4.0
  • new kvm backup implementation, see Backup and Restore
  • added RBD (ceph) support on GUI
  • update kernel to vzkernel-2.6.32-042stab072.10.src.rpm
  • include latest Broadcom bnx2/bnx2x drivers
  • include latest Adaptec aacraid driver 1.2-1[29900]
  • update e1000e to 2.2.14
  • update igb to 4.1.2
  • update ixgbe to 3.12.6
  • enable CONFIG_RT_GROUP_SCHED (also update corosync if you install this kernel)
  • extend memory GUI to support ballooning
  • implement auto-ballooning
  • add HD resize feature to expand disks
  • updated network drivers (bnx2/bnx2x/e1000e/igb/ixgbe)
  • added omping binaries (for testing multicast between nodes)
  • update to latest Debian version 6.0.7
  • qcow2 as default storage format, cache=none (previously raw)
  • KVM64 as default CPU type (previously qemu64)
  • e1000 as default NIC (previously rtl8139)
  • task history per VM
  • Node Summary: added "KSM sharing" and "CPU Socket count"
  • enable/disable tablet for VM on GUI without stop/start of VM (you can use vmmouse instead, for lower CPU usage, works on modern Linux and on all Windows VMs as long as you install the vmmouse drivers)
  • bug fixes (for all details see bugtracker and GIT)

Proxmox VE 2.2

Released 24.10.2012: See Downloads

  • update kernel to vzkernel-2.6.32-042stab062.2.src.rpm
  • update Intel nics drivers (e1000e to 2.1.4, ixgbe to 3.11.33, igb to 4.0.17)
  • update qemu-kvm to 1.2.0
  • openvz: update vzctl to 4.0
  • openvz: use real console instead of 'vzctl enter'
  • add Live Snapshots support (qcow2)
  • added Slovenian translation
  • kvm: new option to select SCSI controller hardware
  • kvm: support up to 32 network devices
  • kvm: support up to 16 virtio devices
  • kvm: add SATA to GUI
  • updated cluster packages
  • update to latest Debian version 6.0.6
  • bug fixes (for all details see bugtracker and GIT)

Proxmox VE 2.1

Released 27.04.2012: See Downloads

  • simplify GUI for users without permissions
  • implemented startup/shutdown ordering
  • improved task logs on node startup/shutdown
  • added Russian, Italian and Turkish translation
  • updated corosync cluster stack to 1.4.3
  • update LVM to 2.02.95 (fixes snapshot problems)
  • bug fixes

Proxmox VE 2.0

Updated ISO 5.4.2012: See Downloads

  • pve-kernel-2.6.32-11-pve (2.6.32-65)
    • rebase on vzkernel-2.6.32-042stab053.5.src.rpm
  • fix for windows timing issues ("args: -no-hpet -no-kvm-pit-reinjection" are now defaults for newer Windows guests)
  • subscription key management (commercial subscribers)
  • vzdump fixes (qmrestore)

Released 30.03.2012: See Downloads

(First public beta released on 30.09.2011, rc1 on 16.02.2012, final release on 30.03.2012)

Proxmox VE 2.0 Documentation

  • Complete new GUI
    • based on Ext JS 4 JavaScript framework
    • fast search-driven interface, capable of handling hundreds and probably thousands of VMs
    • secure VNC console, supporting external VNC viewer with SSL support
    • role-based user and permission management for all objects (VMs, storages, nodes, etc.)
    • Support for multiple authentication sources (MS ADS, LDAP, Linux PAM, Proxmox VE authentication)
  • High Availability Cluster support for KVM guests and OpenVZ containers
    • resource agents for KVM and OpenVZ
    • GUI for managing KVM and OpenVZ HA settings
  • Based on Debian 6.0 Squeeze
    • long-term 2.6.32 Kernel (based on RHEL6.x) with KVM and OpenVZ as default
  • New cluster communication based on corosync, including:
    • Proxmox Cluster file system (pmxcfs): Database-driven file system for storing configuration files, replicated in realtime on all nodes using corosync
    • creates multi-master clusters (no single master anymore!)
    • cluster-wide logging
    • basis for HA setups with KVM guest and OpenVZ container config files
  • RESTful web API (see Proxmox_VE_API)
    • Resource Oriented Architecture (ROA)
    • declarative API definition using JSON Schema
    • enable easy integration for third party management tools
  • Backup/Restore via GUI
    • GUI and CLI, works for OpenVZ containers and KVM VMs
    • "Backup Now" via GUI
    • Restore via GUI
    • All jobs can be monitored as “Recent tasks”
  • OpenVZ
    • Appliance Downloader, including TurnKey Linux channel
    • Multiple storages for OpenVZ containers, no longer limited to /var/lib/vz
    • vswap support
    • Improved init.log (shows start-up logs of an OpenVZ container)
  • Commitment to Free Software (FOSS): public code repository and bug tracker for the 2.x code base.
  • Licensed under GNU Affero General Public License, version 3 https://www.gnu.org/licenses/agpl-3.0.html.

Proxmox VE 1.9

Updated ISO 27.9.2011, including the following, see Downloads

Release Notes:

  • pve-kernel-2.6.32-6-pve (2.6.32-47)
    • update to vzkernel-2.6.32-042stab037.1.src.rpm
    • use bnx from upstream
    • update igb driver to 3.2.9
    • update e1000e driver to 1.6.2
    • enable GDTH driver
    • update ata_generic to support intel IDE-R
    • use BIOS default settings for ASPM (fix-aspm-policy.patch)
  • pve-firmware (1.0-14)
    • include firmware for new bnx2 drivers
  • vzctl (3.0.29-2pve1)
    • update to 3.0.29.2
  • vzdump (1.2-16)
    • use rsync option '--sparse' on first pass (take less disk space - as suggested by John Stevens)

Original 1.9 ISO, released 13.09.2011: See Downloads

Release Notes:

  • pve-kernel-2.6.32 (2.6.32-43)
    • rebase on vzkernel-2.6.32-042stab036.1.src.rpm (new stable OpenVZ kernel branch), updates for drivers including e1000e to 1.5.1, Areca RAID driver, megaraid_sas, bnx2, igb to 3.1.16, ixgbe to 3.5.14-NAPI, drbd 8.3.10, ...
  • vzctl (3.0.28-1pve5)
    • update to latest upstream
    • set default template to debian-6.0-standard
    • set CONFIGFILE="pve.auto" in /etc/vz/vz.conf
    • merge some fixes from upstream
  • pve-manager (1.9-24)
    • fix uptime display for 2.6.32 kernel with 1000HZ
    • support newer vzctl versions.
    • support 'maxfiles' backup option.
  • pve-qemu-kvm (0.15.0-1)
    • update to upstream 0.15.0
    • use pxe roms from upstream qemu-kvm
  • qemu-server (1.1-32)
    • small fixes for new qemu-kvm 0.15.0
  • libpve-storage-perl (1.0-19)
    • set LC_ALL instead of LANG (avoid bug when user sets LC_ environment variables)
    • iscsi: tolerate errors when not all portals are online.
  • vzdump (1.2-15)
    • run pre-restart hook after snapshot

Proxmox VE 1.8

Updated ISO 16.6.2011, includes current stable packages, see Downloads

  • pve-kernel-2.6.32 (2.6.32-33)
  • aacraid with support for Adaptec 6 series (e.g. 6805)
  • KVM 0.14.1
  • vzctl 3.0.27
  • vzdump 1.2-13
  • bug fixes

Released 30.03.2011: See Downloads

Release Notes:

  • update to Debian Lenny 5.0.8
  • update to kvm 0.14.0
  • updated kernels with many fixes
  • protect against Cross Site Request Forgery (added anti-CSRF tokens)
  • bug fixes

Detailed change log:

  • qemu-server (1.1-30)
    • use 'cache=none' for raw images and devices to improve performance
  • pve-qemu-kvm (0.14.0-2)
    • also install vgabios-qxl.bin vgabios-stdvga.bin vgabios-vmware.bin
    • update to 0.14.0
    • add fix for eventfd support (msix-eventfd-fix.patch)
    • removed kvmtrace (removed from upstream?)
    • add vnc keyboard fixes for fr-ca (reported by Pierre-Yves)
  • pve-kernel-2.6.32 (2.6.32-32)
    • update to Debian Squeeze kernel 2.6.32-32
    • remove patch use-init-task-credential.patch (fixed in upstream)
    • enable CONFIG_SENSORS_K10TEMP=m
  • pve-kernel-2.6.18 (2.6.18-14)
    • update to ovzkernel-2.6.18-238.5.1.el5.028stab085.3.src.rpm
    • increased kernel API version to '5-pve'
    • update to SCST 2.0.0.1
    • update to OCFS2 1.4.7
  • pve-qemu-kvm-2.6.18 (0.9.1-12)
    • update to kvm-83-224.el5.src.rpm
  • pve-kernel-2.6.35 (2.6.35-10)
    • update to Ubuntu-2.6.35-27.48
  • pve-manager (1.8-15)
    • protect against Cross Site Request Forgery (added anti-CSRF tokens)
    • correctly encode html entities inside notes (avoid persistent XSS vulnerabilities)
    • use http_proxy in cron APL download if configured

Proxmox VE 1.7

Released 30.11.2010: See Downloads

Change log:

  • libpve-storage-perl (1.0-16)
    • several fixes for multipath
  • pve-kernel-2.6.18 (2.6.18-10)
    • update to ovzkernel-2.6.18-194.26.1.el5.028stab070.14.src.rpm
  • pve-kernel-2.6.32 (2.6.32-28)
    • update to Debian Squeeze kernel 2.6.32-28 (openvz 2.6.32-dzhanibekov)
    • update igb driver to version 2.4.8
    • update ixgbe driver to version 3.0.14
  • pve-manager (1.7-10)
    • only display the warning about missing OpenVZ if KVM is also unavailable
    • fix config change detection in pvectl
    • on the create VM page, query capabilities of the remote host (KVM/OpenVZ enabled?) and select reasonable defaults
  • pve-qemu-kvm (0.13.0-2)
    • do not install unnecessary blobs (manually install blobs, use --disable-blobs)
    • update migration and vnc keymap patches for 0.13.0
  • vzdump (1.2-9)
    • fix veth numbers when restoring openvz containers (vzrestore)
  • qemu-server (1.1-25)
    • new --repeat option for qmrestore
    • use new -netdev syntax for network devices. This enables tcp offload settings, and we can use 'vhost=on' for virtio-net
  • Small bug fixes

Proxmox VE 1.6 (updated) - ISO Installer with 2.6.32 Kernel with OpenVZ including KVM 0.12.5

Released 18.10.2010: See Downloads

Change log:

  • pve-kernel-2.6.32 (2.6.32-24)
    • include fix for OpenVZ bug #1603
    • ixgbe driver update (3.0.12)
    • upgrade to Debian Squeeze kernel 2.6.32-24 (OpenVZ 2.6.32-dyomin)
    • strip debug info
    • remove default initrd image (no longer needed)
    • do not set CONFIG_DMAR_DEFAULT_ON
    • enable CAPIDRV module
  • pve-qemu-kvm (0.12.5-2)
    • enable up to 32 NICs (up to 29 are usable)
  • pve-manager (1.6-5)
    • updated Japanese translations
    • minor fixes in bin/pveversion (adjust for proxmox-ve-2.6.35)
    • update Czech translations.
  • iscsi-scst (1.0.1.1-4)
    • create /var/lib/scst/pr (required by the kernel module)
  • qemu-server (1.1-22)
    • use larger block size for dd (bs=256k) to speed up restore.
    • new backup=yes|no option for drives. VZDump ignores drives with option backup=no (many thanks to draguz)
    • new qmrestore option '--unique': create unique MAC address on restore (many thanks to Seth)
  • vzdump (1.2-8)
    • correctly escape output from 'find' (escape backslash using sed). It is now possible to backup files which contain a backslash character.
    • added new hook 'log-end' (see hook-script.pl example)
  • dab (1.1-9)
    • correct update-rc.d invocation (add '.' after runlevel)
    • fix postgres startup script for squeeze
    • support dependency based startup (squeeze)
    • use postgresql 8.4 for squeeze
  • pve-kernel-2.6.35 (2.6.35-6)
    • first public release (KVM only, KSM enabled)
  • pve-kernel-2.6.18 (2.6.18-8)
    • update to ovzkernel-2.6.18-194.17.1.el5.028stab070.7.src.rpm
    • remove initrd code (no longer needed)
    • strip debug symbols (reduce kernel package size)
  • pve-qemu-kvm-2.6.18 (0.9.1-8)
    • update to kvm-83-164.el5_5.23.src.rpm
    • enable all (previously disabled) kvm features (scsi, usb, ...)
  • pve-firmware (1.0-9)
    • include firmware for new 2.6.35 kernels
  • Small bug fixes

Proxmox VE 1.6 - ISO Installer with 2.6.32 Kernel with OpenVZ including KVM 0.12.5

Released 30.08.2010: See Downloads

Change log:

  • New 2.6.32 kernel including OpenVZ support, based on latest Debian Squeeze Kernel
  • Removed KSM (OpenVZ does not support KSM)
  • DRBD tools: update to drbd 8.3.7
  • New KVM user-space tools (0.12.5)
  • New OpenVZ user-space tools (vzctl 3.0.24)
  • vzdump: use ionice to set io priority. By default, we now set "ionice -c2 -n7" to limit io (instead of using a default bwlimit)
  • vzrestore: new --force option to overwrite existing data
  • dab bug fix: only print a warning for packages installed directly with 'dpkg' (when there is no information in 'available' list)
  • dab: only mount /proc/ if not already mounted (avoid problems with vzctl 3.0.24)
  • Bug fixes

Update 20.9.2010:

2.6.32 Kernel:

  • pve-kernel-2.6.32 (2.6.32-19)
  • include latest Broadcom bnx2, bnx2x drivers
  • update to Debian kernel 2.6.32-22
  • fix CVE-2010-3081 kernel
  • include latest ixgbe driver

2.6.24 Kernel:

  • pve-kernel-2.6.24 (2.6.24-25)
  • update to Ubuntu-2.6.24-28.7 (fix CVE-2010-3081)

2.6.18 Kernel:

  • pve-kernel-2.6.18 (2.6.18-6)
  • fix CVE-2010-3081

Proxmox VE 1.5 - New Kernel 2.6.24 and 2.6.32, including KVM 0.12.4 and gPXE

Released 17.05.2010: See Downloads

Change log 2.6.24 (KVM and OpenVZ):

  • update to qemu-kvm-0.12.4
  • Seabios
  • include gPXE 1.0 network boot (WDS is working now)
  • update e1000e to e1000e-1.1.2.tar.gz
  • update igb to igb-2.1.9.tar.gz
  • update e1000 to e1000-8.0.19.tar.gz
  • update bnx2 to netxtreme2-5.2.50.tar.gz
  • update tg3 to tg3-3.105h.tar.gz
  • include atl1e (Atheros NIC) driver
  • kernel update (latest 2.6.24)
  • Fixed drive order bug (KVM)
  • Allow up to 4094 bridges
  • update to the latest vzctl
  • Use option --sparse for qmigrate

Change log 2.6.32 (KVM only):

  • update to qemu-kvm-0.12.4
  • Seabios
  • include gPXE 1.0 network boot (WDS is working now)
  • kernel update (latest 2.6.32)
  • Fixed drive order bug (KVM)
  • Allow up to 4094 bridges
  • Use option --sparse for qmigrate

Proxmox VE 1.5

Released 20.01.2010: See Downloads

Including:

  • many Linux kernel improvements
  • support different kernel versions
  • new 2.6.18 Linux kernel - default (OpenVZ, KVM)
  • new 2.6.24 Linux kernel (OpenVZ, KVM)
  • new 2.6.32 Linux kernel (KVM only, KSM support)
  • use update-initramfs
  • DRBD: update to drbd 8.3.4
  • AOE: update to AOE6-73
  • SCST: update to latest SCST modules
  • new KVM user-space tools
  • we now use the qemu-kvm stable branch (0.11.1)
  • new virtual packages for better kernel dependency handling - proxmox-ve-2.6.18, proxmox-ve-2.6.24 and proxmox-ve-2.6.32
  • updated most appliances
  • installation CD now uses/installs proxmox-ve-2.6.18
  • installer uses UUID for /boot partition
  • updated vzctl, with improved init-logger patch
  • updated Russian and Japanese translation
  • many other small improvements and bug fixes

Proxmox VE 1.4

Released 19.10.2009: See Downloads

Including:

  • First release with new storage model
  • iSCSI support
  • NFS support
  • LVM (managed storage, shared LVM devices like FC)
  • DRBD storage replication with active/active mode
  • Multiple ISO storages (NFS)
  • Multiple backup locations (including NFS)
  • Zero downtime live migration (KVM); over ssh channel - all traffic is encrypted
  • Updated web interface (pve-manager)
  • Support new storage model
  • Allow to specify MAC address (for KVM and OpenVZ in veth mode)
  • Added iSCSI Tape support, see https://scst.sourceforge.net/
  • Added SCST modules to the kernel
  • Provide SCST userspace package
  • Updated kernel: updated Broadcom bnx2 and tg3 drivers; updated Areca RAID (arcmsr) driver; updated Realtek network drivers; updated Intel igb driver (igb-1.3.28.4.tar.gz); updated Intel e1000e driver (e1000e-1.0.2.5.tar.gz); cciss driver updates; ich10 updates for Nehalem (Core i7) based systems
  • Updated kvm modules (2.6.30.1)
  • Update DRBD to 8.3.2
  • New KVM userspace tools
  • New feature: assign Sockets/Cores to KVM guests
  • We now use the qemu-kvm stable branch (0.11.0)
  • Rename package pve-kvm to pve-qemu-kvm
  • Qemu-server updates
  • Support up to 1000 vlans
  • New migrate_speed and migrate_downtime settings
  • New VZDump: complete rewrite to support the new storage architecture; no longer wastes space in 'stop' mode
  • Slightly changed naming of backup files: added date and time to file name; added VM type (OpenVZ/QEMU) to file name; new plugin architecture; support for hook scripts


Proxmox VE 1.4 beta2

Released 29.09.2009: See Downloads

Including:

  • Updated vzdump web interface with support for the new storage model
  • Updated kernel: updated Broadcom bnx2 and tg3 drivers, updated kvm modules (2.6.30.1), updated Realtek network drivers, more ich10 updates for Nehalem (Core i7) based systems
  • New kvm userspace tools
  • New feature: assign Sockets/Cores to KVM guests
  • We now use the qemu-kvm stable branch (0.11.0)
  • Rename package pve-kvm to pve-qemu-kvm
  • Qemu-server updates: support up to 1000 vlans
  • New migrate_speed and migrate_downtime settings

Note: the sources.list does not point to the pvetest repo, so please adapt this accordingly if you want to update/install packages from pvetest
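The pvetest repository line would then need to be added by hand; a sketch of the expected entry in /etc/apt/sources.list (the 'lenny' suite is an assumption for this 1.x-era release, adjust it to your base Debian release):

```
deb http://download.proxmox.com/debian lenny pvetest
```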

Proxmox VE 1.4 beta1

Released 11.09.2009: See Downloads

Including:

  • First release with new storage model
  • iSCSI support
  • NFS support
  • LVM (managed storage, shared LVM devices)
  • DRBD storage replication with active/active mode (DRBD 8.3.2)
  • Multiple ISO storages (NFS)
  • Zero downtime live migration (KVM) over ssh channel - all traffic is encrypted
  • Updated kernel: updated Intel igb driver (igb-1.3.28.4.tar.gz), Intel e1000e driver (e1000e-1.0.2.5.tar.gz), cciss and ich10 updates, DRBD updated to 8.3.2
  • new VZDump: complete rewrite to support the new storage architecture; no longer wastes space in 'stop' mode; new plugin architecture

Note: the sources.list does not point to the pvetest repo, so please adapt this accordingly if you want to update/install packages from pvetest

Proxmox VE 1.3

Released 04.06.2009: See Downloads

Including:

  • Updated Kernel
  • Support for Highpoint RR3120
  • Removed OpenVZ limit of 50 machines
  • Update to kvm-86
  • Vzdump: minor bug fixes
  • Qemu-server: added new 'tablet' option

Proxmox VE 1.2

Released 12.05.2009: See Downloads

Including:

  • Based on new Debian Lenny (Debian 5.0)
  • New KVM with many improvements (kvm-85)
  • New Kernel (still 2.6.24 based but with a lot of backported drivers)
  • Update to aoe6-71
  • Update to drbd-8.3.1
  • Include HighPoint RocketRAID 3xxx/4xxx Controller Driver
  • Update to busybox 1.14.0
  • Use busybox mdev as firmware loader
  • Compiled with gcc 4.3.2 (default on Debian Lenny)
  • Load virtio_blk (if you run Proxmox VE inside Proxmox VE as a KVM guest)
  • New OpenVZ vzctl (mainly bug fixes)
  • Vncterm: better terminal emulation
  • Everything updated to Debian Lenny
  • Many bug fixes

Proxmox VE 1.1

Released 15.01.2009: See Downloads

Including:

  • Updated Kernel - new e1000/e1000e drivers
  • New KVM with many improvements (KVM-83)
  • New Chinese translations
  • Minor pve-manager updates and bug fixes

Proxmox VE 1.0 - First stable release

Released 29.10.2008: See Downloads

Including:

  • KVM and OpenVZ live migration
  • Web interface for vzdump backups
  • Improved vzdump, now supports KVM guests
  • New templates available
  • Support multiple languages (see Translations for details)
  • Improved installer (use grub as boot loader, automatic country/keyboard detection)
  • Support up to 16 CPUs
  • Based on Debian 4.0r5
  • KVM updates (kvm-75)
  • OpenVZ updates, Fedora support added
  • Improved Kernel (includes drivers for iSCSI, AoE 6-64, DRBD 8.2.6, QLogic firmware for iSCSI/FC HBAs)
  • Many bugfixes

Proxmox VE 0.9beta2

Released 22.7.2008, see Proxmox VE 0.9beta2 released!

Including:

  • Extend pre-built Virtual Appliances downloads
  • Multi-bridge support
  • Bonding
  • VLAN support
  • Extend web interface for KVM guests (add hard drives, network cards, etc.)
  • Many bug fixes

Proxmox VE 0.9

First public release: 15.4.2008