PCI Passthrough

Introduction

PCI passthrough allows you to use a physical PCI device (graphics card, network card) inside a VM (KVM virtualization only). If you "PCI passthrough" a device, the device is not available to the host anymore.

Note:

PCI passthrough is an experimental feature in Proxmox VE


Enable IOMMU

You need to enable the IOMMU by editing the kernel command line.

First, open the kernel command line config file of your bootloader. For GRUB:

nano /etc/default/grub

or, for systemd-boot:

nano /etc/kernel/cmdline

For GRUB, find the line with "GRUB_CMDLINE_LINUX_DEFAULT". For systemd-boot, create the file if it does not exist; its format is a single line with all options.
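
For example, on a Proxmox VE installation with ZFS on root, /etc/kernel/cmdline could end up looking like the following (the root= part is only an illustration; keep whatever your installation already uses and only append the IOMMU option):

root=ZFS=rpool/ROOT/pve-1 boot=zfs quiet intel_iommu=on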

Intel CPU

For Intel CPUs add "intel_iommu=on", for example:

GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on" 

Save the changes and update GRUB:

update-grub

or, if you are using systemd-boot:

pve-efiboot-tool refresh


Then reboot; after that, run "dmesg | grep -e DMAR -e IOMMU" from the command line. If there is no output, something is wrong.
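
On Intel systems with a working setup you typically see, among other lines, the following one (the exact messages vary with hardware and kernel version):

DMAR: IOMMU enabled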

AMD CPU

For AMD CPUs add "amd_iommu=on", for example:

GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on" 

Save the changes and update GRUB:

update-grub

or, if you are using systemd-boot:

pve-efiboot-tool refresh

Then reboot; after that, run "dmesg | grep -e DMAR -e IOMMU" from the command line. If there is no output, something is wrong.

Required modules

Add the following to /etc/modules:

vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd

Note that in the 5.4 based kernel (which will be used for Proxmox VE 6.2 in Q2/2020) some of these modules are already built directly into the kernel.
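
After editing /etc/modules (and, later on, any file under /etc/modprobe.d/), refresh the initramfs and reboot so the changes take effect; on Proxmox VE this is done with:

update-initramfs -u -k all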

IOMMU interrupt remapping

It will not be possible to use PCI passthrough without interrupt remapping.

Device assignment will fail with a 'Failed to assign device "[device name]": Operation not permitted' error for users of KVM, and an 'Interrupt Remapping hardware not found, passing devices to unprivileged domains is insecure' warning.

Systems which don't support interrupt remapping:


  • All systems using an AMD processor and chipset that have AMD I/O Virtualization (AMD-Vi) support. Such hardware has interrupt remapping support; however, the software support is not yet available upstream.
  • All systems using an Intel processor and chipset that have support for Intel Virtualization Technology for Directed I/O (VT-d), but do not have support for interrupt remapping. Interrupt remapping support is provided in newer processors and chipsets. To identify if your system has support for interrupt remapping:

1) Run the "dmesg | grep ecap" command.

2) On the IOMMU lines, the hexadecimal value after "ecap" indicates whether interrupt remapping is supported. If the last character of this value is an 8, 9, a, b, c, d, e, or an f, interrupt remapping is supported. For example, "ecap 1000" indicates there is no interrupt remapping support. "ecap 10207f" indicates interrupt remapping support, as the last character is an "f".

Interrupt remapping will only be enabled if every IOMMU supports it.

Alternatively, run the following script to determine if your system has interrupt remapping support:

#!/bin/sh
# No "ecap" lines in dmesg at all means no (enabled) IOMMU was found
if [ $(dmesg | grep ecap | wc -l) -eq 0 ]; then
  echo "No interrupt remapping support found"
  exit 1
fi

# Bit 3 of each IOMMU's ecap value indicates interrupt remapping support
for i in $(dmesg | grep ecap | awk '{print $NF}'); do
  if [ $(( (0x$i & 0xf) >> 3 )) -ne 1 ]; then
    echo "Interrupt remapping not supported"
    exit 1
  fi
done
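
For example, save the script as check_remapping.sh (the file name is arbitrary), make it executable and run it; it only prints a message when support is missing, so the extra echo below confirms the good case:

chmod +x check_remapping.sh
./check_remapping.sh && echo "Interrupt remapping supported"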

If your system doesn't support interrupt remapping, you can allow unsafe interrupts with:

echo "options vfio_iommu_type1 allow_unsafe_interrupts=1" > /etc/modprobe.d/iommu_unsafe_interrupts.conf

Verify IOMMU isolation

For PCI passthrough to work reliably, you need a dedicated IOMMU group for the PCI devices you want to pass through.

You should see something like this:

# find /sys/kernel/iommu_groups/ -type l
/sys/kernel/iommu_groups/0/devices/0000:00:00.0
/sys/kernel/iommu_groups/1/devices/0000:00:01.0
/sys/kernel/iommu_groups/1/devices/0000:01:00.0
/sys/kernel/iommu_groups/1/devices/0000:01:00.1
/sys/kernel/iommu_groups/2/devices/0000:00:02.0
/sys/kernel/iommu_groups/3/devices/0000:00:16.0
/sys/kernel/iommu_groups/4/devices/0000:00:1a.0
/sys/kernel/iommu_groups/5/devices/0000:00:1b.0
/sys/kernel/iommu_groups/6/devices/0000:00:1c.0
/sys/kernel/iommu_groups/7/devices/0000:00:1c.5
/sys/kernel/iommu_groups/8/devices/0000:00:1c.6
/sys/kernel/iommu_groups/9/devices/0000:00:1c.7
/sys/kernel/iommu_groups/9/devices/0000:05:00.0
/sys/kernel/iommu_groups/10/devices/0000:00:1d.0
/sys/kernel/iommu_groups/11/devices/0000:00:1f.0
/sys/kernel/iommu_groups/11/devices/0000:00:1f.2
/sys/kernel/iommu_groups/11/devices/0000:00:1f.3
/sys/kernel/iommu_groups/12/devices/0000:02:00.0
/sys/kernel/iommu_groups/12/devices/0000:02:00.1
/sys/kernel/iommu_groups/13/devices/0000:03:00.0
/sys/kernel/iommu_groups/14/devices/0000:04:00.0
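
If you want to see the devices in each group with human-readable names, a small helper like the following can be used (a sketch; it relies on lspci from pciutils, which is installed by default on Proxmox VE):

#!/bin/sh
# print every IOMMU group together with the devices it contains
for g in /sys/kernel/iommu_groups/*; do
  echo "IOMMU group ${g##*/}:"
  for d in "$g"/devices/*; do
    lspci -nns "${d##*/}"
  done
done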


To get separate IOMMU groups, your processor needs to support a feature called ACS (Access Control Services).

All Xeon processors (E3, E5) support it, excluding the Xeon E3-1200 series.

For Intel Core it's different: only some processors support ACS:

Haswell-E (LGA2011-v3)
i7-5960X (8-core, 3/3.5GHz)
i7-5930K (6-core, 3.2/3.8GHz)
i7-5820K (6-core, 3.3/3.6GHz)

Ivy Bridge-E (LGA2011)
i7-4960X (6-core, 3.6/4GHz)
i7-4930K (6-core, 3.4/3.6GHz)
i7-4820K (4-core, 3.7/3.9GHz)

Sandy Bridge-E (LGA2011)
i7-3960X (6-core, 3.3/3.9GHz)
i7-3970X (6-core, 3.5/4GHz)
i7-3930K (6-core, 3.2/3.8GHz)
i7-3820 (4-core, 3.6/3.8GHz)

UPDATE ME : AMD processors ?

If you don't have dedicated IOMMU groups, you can try:


1) moving the card to another PCI slot

2) adding "pcie_acs_override=downstream" to the kernel boot command line (GRUB or systemd-boot) options, which can help on some setups with a bad ACS implementation; an example follows below.
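
For example, with GRUB the resulting line could look like this (a sketch combining it with the IOMMU option from above; keep any other options you already have):

GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on pcie_acs_override=downstream"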

Check out the documentation about editing the kernel command line.

More info:

http://vfio.blogspot.be/2015/10/intel-processors-with-acs-support.html

http://vfio.blogspot.be/2014/08/iommu-groups-inside-and-out.html

Determine your PCI card address, and configure your VM

Locate your card using "lspci".  The address should be in the form of: 01:00.0

Manually edit the VM configuration file. It is located at /etc/pve/qemu-server/<vmid>.conf.

Add this line to the end of the file:

hostpci0: 01:00.0

If you have a multi-function device (like a VGA card with an embedded audio chipset), you can pass all functions manually with:

hostpci0: 01:00.0;01:00.1

or, to pass all functions automatically:

hostpci0: 01:00
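
Instead of editing the file by hand, you can also set the option via the qm command line tool, for example for a (hypothetical) VM with id 100:

qm set 100 -hostpci0 01:00.0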

PCI EXPRESS PASSTHROUGH

For simple PCI Express passthrough, add the following to /etc/pve/qemu-server/<vmid>.conf:

machine: q35
hostpci0: 01:00.0,pcie=1

GPU PASSTHROUGH

Note: See http://blog.quindorian.org/2018/03/building-a-2u-amd-ryzen-server-proxmox-gpu-passthrough.html/ if you would like an article with a HOWTO approach.
  • AMD RADEON 5xxx, 6xxx, 7xxx and NVIDIA GEFORCE 7, 8, 4xx, 5xx, 6xx, 7xx have been reported working.
  • You may need to set some specific options in grub.cfg or other tuning values.
  • Here is a good forum thread from the Arch Linux forums: https://bbs.archlinux.org/viewtopic.php?id=162768

For GPU passthrough, it is best if the host does not try to use the GPU at all; this avoids issues with the host driver unbinding and re-binding to the device.

First, find the vendor and device IDs of your VGA card:

$ lspci -n -s 01:00
01:00.0 0300: 10de:1381 (rev a2)
01:00.1 0403: 10de:0fbc (rev a1)

The Vendor:Device IDs for my GPU and audio functions are therefore 10de:1381, 10de:0fbc.

Then, create a file

echo "options vfio-pci ids=10de:1381,10de:0fbc" > /etc/modprobe.d/vfio.conf

Then blacklist the host drivers:

echo "blacklist radeon" >> /etc/modprobe.d/blacklist.conf 
echo "blacklist nouveau" >> /etc/modprobe.d/blacklist.conf 
echo "blacklist nvidia" >> /etc/modprobe.d/blacklist.conf 

For the VM configuration, there are 4 possible setups:

GPU OVMF PCI PASSTHROUGH (recommended)

OVMF replaces the BIOS with a UEFI boot. You need to install your guest OS with UEFI support (for Windows, use Windows 8 or newer).

When using OVMF, you can also add disable_vga=1 to the vfio-pci module options, which tries to opt the devices out of VGA arbitration if possible:

echo "options vfio-pci ids=10de:1381,10de:0fbc disable_vga=1" > /etc/modprobe.d/vfio.conf

and your graphics card needs to have a UEFI-bootable ROM: http://vfio.blogspot.fr/2014/08/does-my-graphics-card-rom-support-efi.html

bios: ovmf
scsihw: virtio-scsi-pci
bootdisk: scsi0
scsi0: .....
hostpci0: 01:00,x-vga=on

GPU OVMF PCI EXPRESS PASSTHROUGH

OVMF replaces the BIOS with a UEFI boot.

You need to install your guest OS with UEFI support (for Windows, use Windows 8 or newer),

and your graphics card needs to have a UEFI-bootable ROM: http://vfio.blogspot.fr/2014/08/does-my-graphics-card-rom-support-efi.html

bios: ovmf
scsihw: virtio-scsi-pci
bootdisk: scsi0
scsi0: .....
machine: q35
hostpci0: 01:00,pcie=1,x-vga=on

GPU Seabios PCI PASSTHROUGH

hostpci0: 01:00,x-vga=on

GPU Seabios PCI EXPRESS PASSTHROUGH

machine: q35
hostpci0: 01:00,pcie=1,x-vga=on

How to know if a card is UEFI (OVMF) compatible

Get and compile the "rom-parser" tool:

$ git clone https://github.com/awilliam/rom-parser
$ cd rom-parser
$ make

Then dump the ROM of your VGA card:


# cd /sys/bus/pci/devices/0000:01:00.0/
# echo 1 > rom
# cat rom > /tmp/image.rom
# echo 0 > rom

and test it with

./rom-parser /tmp/image.rom

Valid ROM signature found @0h, PCIR offset 190h
 PCIR: type 0, vendor: 10de, device: 1280, class: 030000
 PCIR: revision 0, vendor revision: 1
Valid ROM signature found @f400h, PCIR offset 1ch
 PCIR: type 3, vendor: 10de, device: 1280, class: 030000
 PCIR: revision 3, vendor revision: 0
  EFI: Signature Valid
 Last image

To be UEFI compatible, you need a "type 3" in the result.
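
If you just want a quick yes/no answer, a simple one-liner (an example, not part of rom-parser itself) is:

./rom-parser /tmp/image.rom | grep -q "type 3" && echo "UEFI image found" || echo "no UEFI image"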

Nvidia tips

Some applications like GeForce Experience, Passmark Performance Test and SiSoftware Sandra can crash the VM. You need to add:

echo "options kvm ignore_msrs=1" > /etc/modprobe.d/kvm.conf

Users have reported that the Nvidia Kepler K80 needs this in the vmid.conf:

args: -machine pc,max-ram-below-4g=1G


romfile

http://lime-technology.com/forum/index.php?topic=43644.msg482110#msg482110

Some motherboards can't do GPU passthrough on the first PCI slot by default, because the card's vBIOS is shadowed during boot. So we need to capture the vBIOS while the card is installed in another slot and working "normally"; then, after moving the card to slot 1, we can start the VM using the dumped vBIOS.

To dump the vBIOS:

cd /sys/bus/pci/devices/0000:01:00.0/
echo 1 > rom
cat rom > /usr/share/kvm/vbios.bin
echo 0 > rom

Then you can pass the vBIOS file (it must be located in /usr/share/kvm/) with:

hostpci0: 01:00,x-vga=on,romfile=vbios.bin

(The romfile option has been available in Proxmox since January 2017.)

BAR 3: can't reserve [mem] error

If you get this error when trying to use a card that is being used by the host:

vfio-pci 0000:04:00.0: BAR 3: can't reserve [mem 0xca000000-0xcbffffff 64bit]

you can try adding the following kernel option to GRUB:

video=efifb:off
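
For example, the GRUB line could then look like this (a sketch that combines it with the IOMMU option from earlier; adjust it to your existing options, then run update-grub and reboot):

GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on video=efifb:off"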

Troubleshooting

SPICE

SPICE may cause trouble when passing through a GPU, as it presents a "virtual" PCI graphics card to the guest, and some drivers have problems with that even when both cards show up. If something fails, it is always worth a try to disable SPICE and check again.

Verify Operation

Start the VM from the UI.

Enter the qm monitor: "qm monitor <vmid>"

Verify that your card is listed here: "info pci"
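
For example, for a (hypothetical) VM with id 100:

qm monitor 100
qm> info pci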

Then install drivers on your guest OS.  

NOTE: Card support might be limited to 2 or 3 devices.

NOTE: This process will remove the card from the proxmox host OS.  

Editorial Note: Using PCI passthrough to present drives directly to a ZFS (FreeNAS, Openfiler, OmniOS) virtual machine is OK for testing, but not recommended for production use. Specific FreeNAS warnings can be found here: http://forums.freenas.org/threads/absolutely-must-virtualize-freenas-a-guide-to-not-completely-losing-your-data.12714/


USB PASSTHROUGH

If you need to pass through USB devices (keyboard, mouse), please follow this wiki:

https://pve.proxmox.com/wiki/USB_physical_port_mapping