Hotplug (qemu disk,nic,cpu,memory)

Proxmox supports hotplugging of Qemu devices:


- Disks (hotplug/unplug)

- NICs (hotplug/unplug)

- USB devices (hotplug/unplug)

- CPU (hotplug/unplug)

- Memory (hotplug/unplug)


== Linux Guests Notes ==

=== Kernel Modules Needed ===
You need to have 2 modules loaded in your Linux guest:


<pre>
# modprobe acpiphp
# modprobe pci_hotplug
</pre>
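
To load these modules automatically on every boot of the guest, you can, for example, list them in a modules-load.d snippet (the file name below is just an illustration):

<pre>
# /etc/modules-load.d/hotplug.conf
acpiphp
pci_hotplug
</pre>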


=== CPU and Memory Hotplug ===


==== Kernel older than 4.7 ====
You need a recent kernel (> 3.10) and a udev rules file to auto-enable hot-plugged CPUs and memory.


You can add the file:

<code>/lib/udev/rules.d/80-hotplug-cpu-mem.rules</code>
<pre>
SUBSYSTEM=="cpu", ACTION=="add", TEST=="online", ATTR{online}=="0", ATTR{online}="1"
SUBSYSTEM=="memory", ACTION=="add", TEST=="state", ATTR{state}=="offline", ATTR{state}="online"
</pre>
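
After hot-plugging, you can quickly verify inside the guest that the new CPUs and memory blocks were brought online (illustrative commands):

<pre>
# list online CPUs
lscpu | grep -i "on-line"
# list any memory blocks that are still offline (no output means all are online)
grep -l offline /sys/devices/system/memory/memory*/state
</pre>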


==== Kernel older than 5.15 ====
For Linux kernel >= 4.7, you don't need the udev rule for memory hotplug; you only need to add this kernel parameter at boot:


<pre>
memhp_default_state=online
</pre>
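
On a Debian or Ubuntu guest, for example, the parameter is typically added to the kernel command line via GRUB (excerpt shown as an illustration; adapt to your distribution):

<pre>
# /etc/default/grub (excerpt)
GRUB_CMDLINE_LINUX_DEFAULT="quiet memhp_default_state=online"
</pre>

followed by running <code>update-grub</code> and rebooting the guest.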


==== Kernel 5.15 and newer ====
For Linux kernel >= 5.15 (e.g. Ubuntu 22.04) you only need the udev rule for CPU hotplug:

<code>/lib/udev/rules.d/80-hotplug-cpu.rules</code>
<pre>
SUBSYSTEM=="cpu", ACTION=="add", TEST=="online", ATTR{online}=="0", ATTR{online}="1"
</pre>


== Windows Guests Notes ==


Disk and NIC hotplug work out of the box for all Windows versions newer than 2003.

CPU and memory hotplug work on Windows Server 2008 and newer (Enterprise/Datacenter editions), and from Windows Server 2012 onward also on the Standard edition.


== Disk and NIC Hotplug/unplug ==


By default, hot-plug is already enabled for virtual network devices, virtual disks and virtual USB devices.


Just add/remove your disks, USB devices or NICs as usual.
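
This can also be done from the command line on the Proxmox host with <code>qm set</code>; the VM ID (100), bridge (vmbr0) and storage (local-lvm) below are only examples:

<pre>
# hotplug an additional virtio NIC to the running VM 100
qm set 100 -net1 virtio,bridge=vmbr0
# hotplug a new 10 GiB disk on the scsi1 slot
qm set 100 -scsi1 local-lvm:10
</pre>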


For disk unplug, verify that your disk is neither mounted in your Linux guest nor online in Windows Disk Management.
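
On a Linux guest you can check this, for example, with the following commands (the device name /dev/sdb is just an illustration):

<pre>
# show the disk, its partitions and any mount points
lsblk /dev/sdb
# show where a partition is mounted, if at all
findmnt --source /dev/sdb1
</pre>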


== CPU Hotplug ==

# In the VM Options, edit the "Hotplug" Value and activate "CPU Hotplug".
# In the Hardware Section of your VM, specify the maximum number of Sockets/Cores.
# You can now use the "CPU Options" dialog to change the number of vCPUs, while the machine is running.


The maximum number of vCPUs equals sockets * cores.
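
The number of active vCPUs can also be changed from the host command line; for example, for a running VM with ID 100 configured with 2 sockets and 2 cores:

<pre>
# hotplug up to the configured maximum of sockets * cores = 4 vCPUs
qm set 100 -vcpus 3
</pre>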


== Memory Hotplug ==


Memory hotplug is available since Proxmox VE 3.4.


To enable memory hotplug you need to enable both NUMA (web UI: VM -> Hardware -> CPU) and Memory hotplug (web UI: VM -> Options -> Hotplug).
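
In the VM configuration file this corresponds to entries along these lines (illustrative excerpt; the exact hotplug list depends on what you enabled):

<pre>
# /etc/pve/qemu-server/<VMID>.conf (excerpt)
numa: 1
hotplug: disk,network,usb,memory,cpu
memory: 4096
</pre>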


Then you can use the VM's memory options in the web UI as usual.
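
Inside a Linux guest you can then verify that the hot-added memory is visible, e.g.:

<pre>
free -h
grep MemTotal /proc/meminfo
</pre>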
== Memory Hot Unplug ==

- Memory unplug currently does not work on Windows (up to and including Windows 10).


- Memory unplug can be unstable on Linux (at least up to kernel 4.10).


For Linux memory unplug, you need the movable zone enabled in the guest kernel config (not enabled by default on Debian/Ubuntu):


<pre>
CONFIG_MOVABLE_NODE=y
</pre>


and the "movable_node" boot kernel parameter enabled.


Here are some kernel patches intended to further improve unplug: http://www.spinics.net/lists/linux-mm/msg125359.html


== Update max_mem_regions when hot-plugging more than 64 GiB ==


If you see the error:

<pre>
a used vhost backend has no free memory slots left
</pre>


The kernel sets how many memory DIMMs each vhost device supports; the default is 64. You can raise the limit by creating the file <code>/etc/modprobe.d/vhost.conf</code> containing <code>options vhost max_mem_regions=128</code> (you can try increasing it further if necessary) and then rebooting the Proxmox machine:
<pre>
echo "options vhost max_mem_regions=128" > /etc/modprobe.d/vhost.conf
rmmod vhost_net
rmmod vhost
modprobe vhost_net
cat /sys/module/vhost/parameters/max_mem_regions
# 128
</pre>
 
Reference forum thread: https://forum.proxmox.com/threads/problem-with-hotplug-and-64gb-ram.73621/page-3#post-600595


[[Category:HOWTO]] [[Category:Installation]]
