PCI Passthrough
Introduction
PCI passthrough allows you to use a physical PCI device (graphics card, network card) inside a VM (KVM virtualization only). If you "PCI passthrough" a device, the device is not available to the host anymore.
Note:
PCI passthrough is an experimental feature in Proxmox VE
Intel CPU
Edit:
# vi /etc/default/grub
change
GRUB_CMDLINE_LINUX_DEFAULT="quiet"
to
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"
then
# update-grub
Then run "dmesg | grep -e DMAR -e IOMMU" from the command line. If there is no output, then something is wrong.
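If you prefer a single check that prints a verdict, a minimal sketch built on the same dmesg command could look like this:
#!/bin/sh
# Sketch: report whether the kernel logged any DMAR/IOMMU activity.
if dmesg | grep -q -e DMAR -e IOMMU; then
    echo "IOMMU/DMAR messages found - the IOMMU appears to be enabled"
else
    echo "No IOMMU/DMAR messages - re-check the grub change and the BIOS/UEFI VT-d setting"
fi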
AMD CPU
Edit:
# vi /etc/default/grub
Change:
GRUB_CMDLINE_LINUX_DEFAULT="quiet"
To:
GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on"
Then, as in the Intel section:
# update-grub
Required modules
Add to /etc/modules:
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd
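For example, they can be appended from the shell like this (a sketch; it does not check for existing entries):
# printf '%s\n' vfio vfio_iommu_type1 vfio_pci vfio_virqfd >> /etc/modules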
IOMMU interrupt remapping
It will not be possible to use PCI passthrough without interrupt remapping.
Device assignment will fail with a 'Failed to assign device "[device name]": Operation not permitted' error for KVM users, and the kernel will warn with 'Interrupt Remapping hardware not found, passing devices to unprivileged domains is insecure'.
Systems which don't support interrupt remapping:
- All systems using an AMD processor and chipset that have AMD I/O Virtualization (AMD-Vi) support. Such hardware has interrupt remapping support; however, the software support is not yet available upstream.
- All systems using an Intel processor and chipset that have support for Intel Virtualization Technology for Directed I/O (VT-d), but do not have support for interrupt remapping. Interrupt remapping support is provided in newer processors and chipsets. To identify if your system has support for interrupt remapping:
1) Run the "dmesg | grep ecap" command.
2) On the IOMMU lines, the hexadecimal value after "ecap" indicates whether interrupt remapping is supported. If the last character of this value is an 8, 9, a, b, c, d, e, or an f, interrupt remapping is supported. For example, "ecap 1000" indicates there is no interrupt remapping support. "ecap 10207f" indicates interrupt remapping support, as the last character is an "f".
Interrupt remapping will only be enabled if every IOMMU supports it.
Alternatively, run the following script to determine if your system has interrupt remapping support:
#!/bin/sh
# Fail if the kernel did not report any IOMMU extended capabilities at all.
if [ $(dmesg | grep ecap | wc -l) -eq 0 ]; then
    echo "No interrupt remapping support found"
    exit 1
fi

# Bit 3 of each ecap value indicates interrupt remapping support;
# every IOMMU must have it set.
for i in $(dmesg | grep ecap | awk '{print $NF}'); do
    if [ $(( (0x$i & 0xf) >> 3 )) -ne 1 ]; then
        echo "Interrupt remapping not supported"
        exit 1
    fi
done
If your system doesn't support interrupt remapping, you can allow unsafe interrupts with:
echo "options vfio_iommu_type1 allow_unsafe_interrupts=1" > /etc/modprobe.d/iommu_unsafe_interrupts.conf
Verify IOMMU isolation
For PCI passthrough to work reliably, the PCI device you pass through needs to be in its own dedicated IOMMU group.
You should see something like this:
# find /sys/kernel/iommu_groups/ -type l
/sys/kernel/iommu_groups/0/devices/0000:00:00.0
/sys/kernel/iommu_groups/1/devices/0000:00:01.0
/sys/kernel/iommu_groups/1/devices/0000:01:00.0
/sys/kernel/iommu_groups/1/devices/0000:01:00.1
/sys/kernel/iommu_groups/2/devices/0000:00:02.0
/sys/kernel/iommu_groups/3/devices/0000:00:16.0
/sys/kernel/iommu_groups/4/devices/0000:00:1a.0
/sys/kernel/iommu_groups/5/devices/0000:00:1b.0
/sys/kernel/iommu_groups/6/devices/0000:00:1c.0
/sys/kernel/iommu_groups/7/devices/0000:00:1c.5
/sys/kernel/iommu_groups/8/devices/0000:00:1c.6
/sys/kernel/iommu_groups/9/devices/0000:00:1c.7
/sys/kernel/iommu_groups/9/devices/0000:05:00.0
/sys/kernel/iommu_groups/10/devices/0000:00:1d.0
/sys/kernel/iommu_groups/11/devices/0000:00:1f.0
/sys/kernel/iommu_groups/11/devices/0000:00:1f.2
/sys/kernel/iommu_groups/11/devices/0000:00:1f.3
/sys/kernel/iommu_groups/12/devices/0000:02:00.0
/sys/kernel/iommu_groups/12/devices/0000:02:00.1
/sys/kernel/iommu_groups/13/devices/0000:03:00.0
/sys/kernel/iommu_groups/14/devices/0000:04:00.0
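The raw find output is hard to read; a small helper script along these lines (a sketch) prints each group together with the lspci description of its devices:
#!/bin/sh
# Sketch: print every IOMMU group and the devices it contains.
for g in /sys/kernel/iommu_groups/*; do
    echo "IOMMU group ${g##*/}:"
    for d in "$g"/devices/*; do
        printf '  '
        lspci -nns "${d##*/}"
    done
done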
To get separate IOMMU groups, your processor needs to support
a feature called ACS (Access Control Services).
All Xeon processors (E3, E5) support it, excluding the Xeon E3-1200 series.
For Intel Core processors it's different: only some of them support ACS:
Haswell-E (LGA2011-v3)
  i7-5960X (8-core, 3/3.5GHz)
  i7-5930K (6-core, 3.2/3.8GHz)
  i7-5820K (6-core, 3.3/3.6GHz)
Ivy Bridge-E (LGA2011)
  i7-4960X (6-core, 3.6/4GHz)
  i7-4930K (6-core, 3.4/3.6GHz)
  i7-4820K (4-core, 3.7/3.9GHz)
Sandy Bridge-E (LGA2011)
  i7-3960X (6-core, 3.3/3.9GHz)
  i7-3970X (6-core, 3.5/4GHz)
  i7-3930K (6-core, 3.2/3.8GHz)
  i7-3820 (4-core, 3.6/3.8GHz)
UPDATE ME : AMD processors ?
If you don't have dedicated IOMMU groups, you can try:
1) moving the card to another PCI slot
2) adding "pcie_acs_override=downstream" to the grub boot options, which can help on some setups with a bad ACS implementation (see the example below).
More info:
http://vfio.blogspot.be/2015/10/intel-processors-with-acs-support.html
http://vfio.blogspot.be/2014/08/iommu-groups-inside-and-out.html
Determine your PCI card address, and configure your VM
Locate your card using "lspci". The address should be in the form of: 01:00.0
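For example, a quick way to list graphics devices with their addresses and IDs:
# lspci -nn | grep -i vga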
Manually edit the VM configuration file, located at /etc/pve/qemu-server/<vmid>.conf.
Add this line to the end of the file:
hostpci0: 01:00.0
If you have a multi-function device (like a VGA card with an embedded audio chipset),
you can pass through both functions by removing the ".0" from the PCI address:
hostpci0: 01:00
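Instead of editing the file by hand, the same option can normally be set through the qm CLI (replace <vmid> with your VM id):
# qm set <vmid> -hostpci0 01:00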
PCI EXPRESS PASSTHROUGH
/etc/pve/qemu-server/<vmid>.conf
Simple PCI Express passthrough:
machine: q35
hostpci0: 01:00.0,pcie=1
GPU PASSTHROUGH
- AMD RADEON 5xxx, 6xxx, 7xxx and NVIDIA GEFORCE 7, 8, 4xx, 5xx, 6xx, 7xx have been reported working.
- You may need to set some specific options in grub.cfg or other tuning values.
- Here is a good Arch Linux forum thread: https://bbs.archlinux.org/viewtopic.php?id=162768
For GPU passthrough, it's best that the host does not try to use the GPU, which avoids issues with the host driver unbinding and re-binding to the device.
First, find the vendor and device IDs of your VGA card:
$ lspci -n -s 01:00
01:00.0 0300: 10de:1381 (rev a2)
01:00.1 0403: 10de:0fbc (rev a1)
The Vendor:Device IDs for my GPU and audio functions are therefore 10de:1381, 10de:0fbc.
Then, create a modprobe.d config file with these IDs:
echo "options vfio-pci ids=10de:1381,10de:0fbc" > /etc/modprobe.d/vfio.conf
Then blacklist drivers
echo "blacklist radeon" >> /etc/modprobe.d/blacklist.conf echo "blacklist nouveau" >> /etc/modprobe.d/blacklist.conf echo "blacklist nvidia" >> /etc/modprobe.d/blacklist.conf
For the VM configuration, there are 4 possible setups:
GPU OVMF PCI PASSTHROUGH (recommended)
OVMF replaces the BIOS with a UEFI boot. You need to install your guest OS with UEFI support (for Windows, try Windows 8 or newer).
When using OVMF, you can also add disable_vga=1 to the vfio-pci module options, which tries to opt the devices out of VGA arbitration if possible:
echo "options vfio-pci ids=10de:1381,10de:0fbc disable_vga=1" > /etc/modprobe.d/vfio.conf
and your graphics card needs a UEFI-bootable ROM: http://vfio.blogspot.fr/2014/08/does-my-graphics-card-rom-support-efi.html
bios: ovmf
scsihw: virtio-scsi-pci
bootdisk: scsi0
scsi0: .....
hostpci0: 01:00,x-vga=on
GPU OVMF PCI EXPRESS PASSTHROUGH
OVMF replaces the BIOS with a UEFI boot.
You need to install your guest OS with UEFI support (for Windows, try Windows 8 or newer),
and your graphics card needs a UEFI-bootable ROM: http://vfio.blogspot.fr/2014/08/does-my-graphics-card-rom-support-efi.html
bios: ovmf
scsihw: virtio-scsi-pci
bootdisk: scsi0
scsi0: .....
machine: q35
hostpci0: 01:00,pcie=1,x-vga=on
GPU Seabios PCI PASSTHROUGH
hostpci0: 01:00,x-vga=on
GPU Seabios PCI EXPRESS PASSTHROUGH
machine: q35
hostpci0: 01:00,pcie=1,x-vga=on
Troubleshooting
SPICE
Spice may give trouble when passing through a GPU as it presents a "virtual" PCI graphic card to the guest and some drivers have problems with that even when both cards show up. It's always worth a try to disable SPICE and check again if something fails.
Verify Operation
Start the VM from the UI.
Enter the qm monitor: "qm monitor <vmid>"
Verify that your card is listed there: "info pci"
Then install drivers on your guest OS.
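On the host side you can additionally confirm that the card is bound to vfio-pci and no longer claimed by a host driver (adjust the PCI address to yours); the "Kernel driver in use" line reported by lspci should show vfio-pci:
# lspci -nnk -s 01:00.0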
NOTE: Card support might be limited to 2 or 3 devices.
NOTE: This process will remove the card from the proxmox host OS.
Editorial Note: Using PCI passthrough to present drives direct to a ZFS (FreeNAS, Openfiler, OmniOS) virtual machine is OK for testing, but not recommended for production use. Specific FreeNAS warnings can be found here: http://forums.freenas.org/threads/absolutely-must-virtualize-freenas-a-guide-to-not-completely-losing-your-data.12714/
USB PASSTHROUGH
If you need to pass through USB devices (keyboard, mouse), please follow this wiki: