Hotplug (qemu disk,nic,cpu,memory)

Proxmox VE supports hotplugging of QEMU devices:


- Disks (hotplug/unplug)

- NICs (hotplug/unplug)

- USB devices (hotplug/unplug)

- CPU (hotplug/unplug)

- Memory (hotplug/unplug)


== Linux Guests Notes ==

=== Kernel Modules Needed ===
You need to have two modules loaded in your Linux guest:


<pre>
# modprobe acpiphp
# modprobe pci_hotplug
</pre>
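To have these modules loaded automatically at boot, you can, for example, list them in a file under <code>/etc/modules-load.d/</code> (this assumes a systemd-based guest; the file name below is just an example):

<pre>
# /etc/modules-load.d/hotplug.conf  (example file name)
acpiphp
pci_hotplug
</pre>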


=== CPU and Memory Hotplug ===

==== Kernel older than 4.7 ====
You need a recent kernel (> 3.10) and a udev rules file to auto-enable hot-plugged CPUs and memory.


You can add the file:

<code>/lib/udev/rules.d/80-hotplug-cpu-mem.rules</code>
<pre>
SUBSYSTEM=="cpu", ACTION=="add", TEST=="online", ATTR{online}=="0", ATTR{online}="1"
SUBSYSTEM=="memory", ACTION=="add", TEST=="state", ATTR{state}=="offline", ATTR{state}="online"
</pre>
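If you prefer not to use a udev rule, you can achieve the same effect manually after each hotplug by writing to sysfs, which is exactly what the rule above does (the CPU index and memory block number below are examples; adjust them to your guest):

<pre>
# bring a hot-plugged vCPU online
echo 1 > /sys/devices/system/cpu/cpu1/online

# bring a hot-plugged memory block online
echo online > /sys/devices/system/memory/memory40/state
</pre>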


==== Kernel older than 5.15 ====
For Linux kernel >= 4.7, you don't need the udev rule for memory hotplug; you only need to add this kernel parameter at boot:


<pre>
memhp_default_state=online
</pre>
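For example, on a Debian/Ubuntu guest booting with GRUB you could append the parameter to the kernel command line and regenerate the GRUB configuration (standard Debian tooling is assumed):

<pre>
# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet memhp_default_state=online"
</pre>

Then run <code>update-grub</code> and reboot the guest.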


==== Kernel 5.15 and newer ====
For Linux kernel >= 5.15 (e.g. Ubuntu 22.04) you only need the udev rule for CPU hotplug:

<code>/lib/udev/rules.d/80-hotplug-cpu.rules</code>
<pre>
SUBSYSTEM=="cpu", ACTION=="add", TEST=="online", ATTR{online}=="0", ATTR{online}="1"
</pre>
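After hot-plugging, you can verify inside the guest that the new vCPUs actually came online, e.g. with standard sysfs/util-linux tools:

<pre>
# range of online CPUs
cat /sys/devices/system/cpu/online

# total online CPU count
lscpu | grep "^CPU(s):"
</pre>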


== Windows Guests Notes ==

Disk and NIC hotplug work out of the box for all Windows versions newer than 2003.

CPU and memory hotplug work on Windows Server 2008 Enterprise/Datacenter and newer; from Windows Server 2012 on, the Standard edition is also supported.
 
== Disk and NIC Hotplug/unplug ==


By default, hot-plug is already enabled for virtual network devices, virtual disks and virtual USB devices.


Just add/remove your disks, USB devices or NICs as usual.


For disk unplug, verify that your disk is not mounted in your Linux guest or enabled in Windows Disk Management.
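From the host command line, hotplug and unplug can also be done with <code>qm set</code>; a sketch (the VMID, storage name and device slots below are examples, adapt them to your setup):

<pre>
# hot-plug a new 32 GB SCSI disk from storage "local-lvm" into VM 100
qm set 100 -scsi1 local-lvm:32

# hot-plug an additional virtio NIC on bridge vmbr0
qm set 100 -net1 virtio,bridge=vmbr0

# inside the Linux guest: make sure the disk is not mounted before unplugging
lsblk
findmnt /dev/sdb

# detach the disk again from the host
qm set 100 -delete scsi1
</pre>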


== CPU Hotplug ==


# In the VM Options, edit the "Hotplug" value and activate "CPU Hotplug".
# In the Hardware section of your VM, specify the maximum number of Sockets/Cores.
# You can now use the "CPU Options" dialog to change the number of vCPUs while the machine is running.


The maximum number of CPUs equals sockets * cores.
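The number of online vCPUs can also be changed from the host command line while the VM is running, for example (VMID and count are placeholders):

<pre>
# with sockets=2 and cores=2 (maximum 4 vCPUs), bring a third vCPU online
qm set <VMID> -vcpus 3
</pre>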


== Memory Hotplug ==


Memory hotplug is available since Proxmox VE 3.4.


To enable memory hotplug you need to enable both NUMA (web UI: VM -> Hardware -> CPU) and Memory hotplug (web UI: VM -> Options -> Hotplug).


Then you can use the VM's memory options in the web UI as usual.
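From the host CLI, enabling the prerequisites and growing the memory can be sketched with <code>qm set</code> (the VMID, sizes and hotplug list below are examples):

<pre>
# enable NUMA and memory hotplug for VM 100
qm set 100 -numa 1 -hotplug disk,network,usb,memory,cpu

# then grow the memory while the VM is running
qm set 100 -memory 8192
</pre>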


== Memory Hot Unplug ==


- Memory unplug does not work on Windows (up to and including Windows 10, at the time of writing)


- Memory unplug can be unstable on Linux (kernel 4.10 and older, at the time of writing)


For Linux memory unplug, you need the movable zone enabled in the guest kernel config (not enabled by default on Debian/Ubuntu):


<pre>
CONFIG_MOVABLE_NODE=y
</pre>


and the <code>movable_node</code> boot kernel parameter enabled.
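To check whether the running guest kernel was booted with the parameter and has a Movable zone, you can inspect the standard procfs files:

<pre>
# the boot command line should contain movable_node
cat /proc/cmdline

# a Movable zone should be listed
grep Movable /proc/zoneinfo
</pre>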


Some kernel patches to further improve unplug are discussed here: http://www.spinics.net/lists/linux-mm/msg125359.html


== Update max_mem_regions when hot-plugging more than 64 GiB ==


For the error:
<pre>
a used vhost backend has no free memory slots left
</pre>


The kernel limits how many memory DIMMs each vhost device supports; the default is 64.
You can increase the limit by creating the file <code>/etc/modprobe.d/vhost.conf</code> containing <code>options vhost max_mem_regions=128</code> (you can try increasing the value further if necessary), and then rebooting the Proxmox VE host or reloading the vhost modules as shown below:
<pre>
echo "options vhost max_mem_regions=128" > /etc/modprobe.d/vhost.conf
rmmod vhost_net
rmmod vhost
modprobe vhost_net
cat /sys/module/vhost/parameters/max_mem_regions
# 128
</pre>
Reference forum thread: https://forum.proxmox.com/threads/problem-with-hotplug-and-64gb-ram.73621/page-3#post-600595




[[Category:HOWTO]] [[Category:Installation]]
