PCI Passthrough
Introduction
PCI passthrough allows you to use a physical PCI device (graphics card, network card) inside a VM (KVM virtualization only). Once a device is passed through, it is no longer available on the host.
Note:
PCI passthrough is an experimental feature in Proxmox VE.
Intel CPU
Edit:
# vi /etc/default/grub
change
GRUB_CMDLINE_LINUX_DEFAULT="quiet"
to
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"
then
# update-grub
# reboot
Then run "dmesg | grep -e DMAR -e IOMMU" from the command line. If there is no output, then something is wrong.
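On a working system this prints the IOMMU/DMAR initialization messages. The exact lines vary with hardware and kernel version; the following is only an illustration:
# dmesg | grep -e DMAR -e IOMMU
[    0.000000] ACPI: DMAR 0x00000000bdcb1cb0 0000B8 (v01 INTEL  ...)
[    0.000000] DMAR: IOMMU enabled
[    0.028879] DMAR: Host address width 39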
AMD CPU
Edit:
# vi /etc/default/grub
Change:
GRUB_CMDLINE_LINUX_DEFAULT="quiet"
To:
GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on"
Then:
# update-grub
# echo "options kvm allow_unsafe_assigned_interrupts=1" > /etc/modprobe.d/kvm_iommu_map_guest.conf
# reboot
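As on Intel, you can check after the reboot that the IOMMU came up. The exact wording varies by kernel version; an illustrative example:
# dmesg | grep -e AMD-Vi -e IOMMU
[    1.248343] AMD-Vi: Found IOMMU at 0000:00:00.2 cap 0x40
[    1.250197] AMD-Vi: Lazy IO/TLB flushing enabled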
Determine your PCI card address, and configure your VM
Locate your card using "lspci". The address should be in the form of: 04:00.0
Manually edit the VM configuration file. It can be located at: /etc/pve/qemu-server/<vmid>.conf.
Add this line to the end of the file: "hostpci0: 04:00.0"
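For example, a minimal (hypothetical) /etc/pve/qemu-server/101.conf could then look like the following; only the hostpci0 line is the passthrough-specific part, the other values are placeholders:
boot: cdn
bootdisk: virtio0
cores: 2
memory: 4096
name: passthrough-test
net0: virtio=XX:XX:XX:XX:XX:XX,bridge=vmbr0
virtio0: local:101/vm-101-disk-1.qcow2,size=32G
hostpci0: 04:00.0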
Verify IOMMU
For PCI passthrough to work reliably, your PCI device needs its own dedicated IOMMU group.
You should see something like:
# find /sys/kernel/iommu_groups/ -type l
/sys/kernel/iommu_groups/0/devices/0000:00:00.0
/sys/kernel/iommu_groups/1/devices/0000:00:01.0
/sys/kernel/iommu_groups/1/devices/0000:01:00.0
/sys/kernel/iommu_groups/1/devices/0000:01:00.1
/sys/kernel/iommu_groups/2/devices/0000:00:02.0
/sys/kernel/iommu_groups/3/devices/0000:00:16.0
/sys/kernel/iommu_groups/4/devices/0000:00:1a.0
/sys/kernel/iommu_groups/5/devices/0000:00:1b.0
/sys/kernel/iommu_groups/6/devices/0000:00:1c.0
/sys/kernel/iommu_groups/7/devices/0000:00:1c.5
/sys/kernel/iommu_groups/8/devices/0000:00:1c.6
/sys/kernel/iommu_groups/9/devices/0000:00:1c.7
/sys/kernel/iommu_groups/9/devices/0000:05:00.0
/sys/kernel/iommu_groups/10/devices/0000:00:1d.0
/sys/kernel/iommu_groups/11/devices/0000:00:1f.0
/sys/kernel/iommu_groups/11/devices/0000:00:1f.2
/sys/kernel/iommu_groups/11/devices/0000:00:1f.3
/sys/kernel/iommu_groups/12/devices/0000:02:00.0
/sys/kernel/iommu_groups/12/devices/0000:02:00.1
/sys/kernel/iommu_groups/13/devices/0000:03:00.0
/sys/kernel/iommu_groups/14/devices/0000:04:00.0
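To also see which device sits in which group, you can combine this listing with lspci. This is only a small shell sketch relying on the standard sysfs layout; adapt it as needed:
# for dev in /sys/kernel/iommu_groups/*/devices/*; do n=${dev#*/iommu_groups/}; n=${n%%/*}; echo "IOMMU group $n: $(lspci -nns ${dev##*/})"; done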
To get separate IOMMU groups, your processor needs to support
a feature called ACS (Access Control Services).
All Xeon processors (E3, E5) support it. For Intel Core i processors it is different: only some models support ACS:
Haswell-E (LGA2011-v3)
  i7-5960X (8-core, 3/3.5GHz)
  i7-5930K (6-core, 3.2/3.8GHz)
  i7-5820K (6-core, 3.3/3.6GHz)
Ivy Bridge-E (LGA2011)
  i7-4960X (6-core, 3.6/4GHz)
  i7-4930K (6-core, 3.4/3.6GHz)
  i7-4820K (4-core, 3.7/3.9GHz)
Sandy Bridge-E (LGA2011)
  i7-3960X (6-core, 3.3/3.9GHz)
  i7-3970X (6-core, 3.5/4GHz)
  i7-3930K (6-core, 3.2/3.8GHz)
  i7-3820 (4-core, 3.6/3.8GHz)
UPDATE ME : AMD processors ?
If you don't have a dedicated IOMMU group, you can try:
1) moving the card to another PCI slot
2) adding "pcie_acs_override=downstream" to the kernel boot options in grub, which can help on some setups with a bad ACS implementation
3) adding this module option:
echo "options vfio_iommu_type1 allow_unsafe_interrupts=1" > /etc/modprobe.d/iommu_unsafe_interrupts.conf
More info:
http://vfio.blogspot.be/2015/10/intel-processors-with-acs-support.html
http://vfio.blogspot.be/2014/08/iommu-groups-inside-and-out.html
Verify Operation
Start the VM from the UI.
Enter the qm monitor: "qm monitor <vmid>"
Verify that your card is listed here: "info pci" (an example session is shown below)
Then install drivers on your guest OS.
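Such a monitor session could look roughly like this. The output is abbreviated and depends entirely on your hardware, so treat it purely as an illustration:
# qm monitor 101
qm> info pci
  ...
  Bus  1, device   0, function 0:
    VGA controller: PCI device 10de:0f02
  ...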
NOTE: Card support might be limited to 2 or 3 devices.
NOTE: This process will remove the card from the proxmox host OS.
Editorial Note: Using PCI passthrough to present drives direct to a ZFS (FreeNAS, Openfiler, OmniOS) virtual machine is OK for testing, but not recommended for production use. Specific FreeNAS warnings can be found here: http://forums.freenas.org/threads/absolutely-must-virtualize-freenas-a-guide-to-not-completely-losing-your-data.12714/
PROXMOX 4.X
Proxmox VE 4.x uses vfio by default with the 4.x kernel, so you no longer need to append
,driver=vfio
to the hostpciX line.
Users have reported problems with kernel 4.x that seem to be related to SeaBIOS.
A workaround is to boot with UEFI + virtio-scsi; simply add:
bios: ovmf
scsihw: virtio-scsi-pci
bootdisk: scsi0
scsi0: .....
(You need to install your guest OS with UEFI support.)
PCI EXPRESS PASSTHROUGH
/etc/pve/qemu-server/<vmid>.conf
Simple PCI Express passthrough:
machine: q35
hostpci0: 04:00.0,pcie=1
GPU PASSTHROUGH
- AMD RADEON 5xxx, 6xxx, 7xxx and NVIDIA GEFORCE 7, 8, 4xx, 5xx, 6xx, 7xx have been reported working.
- You may need to add specific options in grub.cfg or set other tuning values.
- Here is a good ArchLinux forum thread: https://bbs.archlinux.org/viewtopic.php?id=162768
There are 4 possible configurations:
GPU Seabios PCI PASSTHROUGH
hostpci0: 04:00.0,x-vga=on
GPU Seabios PCI EXPRESS PASSTHROUGH
machine: q35
hostpci0: 04:00.0,pcie=1,x-vga=on
GPU OVMF PCI PASSTHROUGH
OVMF replaces the legacy BIOS with UEFI boot. You need to install your guest OS with UEFI support,
and your graphics card must have a UEFI-bootable ROM.
This is the recommended mode.
bios: ovmf
scsihw: virtio-scsi-pci
bootdisk: scsi0
scsi0: .....
hostpci0: 04:00.0,x-vga=on
GPU OVMF PCI EXPRESS PASSTHROUGH
bios: ovmf
scsihw: virtio-scsi-pci
bootdisk: scsi0
scsi0: .....
machine: q35
hostpci0: 04:00.0,pcie=1,x-vga=on
WORKING NVIDIA SETUP
I've been able to get this working with an NVIDIA GTX 750 Ti card using driver version 344.75 (newer versions inconsistently cause Code 43 Errors) by using the following setup:
Install the pve-kernel-3.10.0-5-pve kernel package.
Add to /etc/modules:
pci_stub
vfio
vfio_iommu_type1
vfio_pci
kvm
kvm_intel
Add the following options to /etc/default/grub on the GRUB_CMDLINE_LINUX_DEFAULT line:
intel_iommu=on vfio_iommu_type1.allow_unsafe_interrupts=1 rootdelay=10 scsi_mod.scan=sync
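Combined with the Intel setting from earlier, the resulting line could look like this (illustrative):
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on vfio_iommu_type1.allow_unsafe_interrupts=1 rootdelay=10 scsi_mod.scan=sync"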
Run: update-grub
Add the following to /etc/initramfs-tools/modules (find the PCI stub IDs for your card by running lspci -nn | grep NVIDIA):
pci_stub ids=10de:0f02,10de:0bea
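The IDs are the bracketed vendor:device values at the end of each lspci line, for example (illustrative output; device names elided):
# lspci -nn | grep NVIDIA
05:00.0 VGA compatible controller [0300]: NVIDIA Corporation ... [10de:0f02]
05:00.1 Audio device [0403]: NVIDIA Corporation ... [10de:0bea]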
Run: update-initramfs -u and then reboot into the new kernel.
Boot into the VM using the Proxmox web interface, and install the OS (I had better luck with Windows 8.1 - can't remember specifics though).
Add the following options to /etc/pve/qemu-server/<vmid>.conf (get the PCI address from the lspci command; I also added the USB device address of my Avocent KVM DSRIQ USB module, and you can do the same for a physical keyboard and mouse):
hostpci0: 05:00,x-vga=on,pcie=1,driver=vfio
machine: q35
usb0: host=0624:0307