PCI Passthrough

PCI passthrough allows you to use a physical PCI device (graphics card, network card, etc.) inside a VM (KVM virtualization only). If you pass a device through to a VM, the device is no longer available on the host.

To enable PCI passthrough, you need to configure the IOMMU on the host; the steps depend on whether you have an Intel or AMD CPU:

INTEL CPU


Edit:

# vi /etc/default/grub

Change:

GRUB_CMDLINE_LINUX_DEFAULT="quiet"

To:

GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"

Then:

# update-grub
# reboot


Then run "dmesg | grep -e DMAR -e IOMMU" from the command line.  If there is no output, then something is wrong.

AMD CPU

Edit:

# vi /etc/default/grub

Change:

GRUB_CMDLINE_LINUX_DEFAULT="quiet"

To:

GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on"

Then:

# update-grub
# echo "options kvm allow_unsafe_assigned_interrupts=1" > /etc/modprobe.d/kvm_iommu_map_guest.conf 
# reboot
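
As with the Intel setup, you can verify after the reboot (assuming the same check applies; on AMD the kernel messages mention AMD-Vi rather than DMAR):

# dmesg | grep -e DMAR -e IOMMU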

Determine your PCI card address, and configure your VM

Locate your card using "lspci".  The address should be in the form of: 04:00.0

Manually edit the VM configuration file. It can be found at /etc/pve/nodes/proxmox3/qemu-server/vmnumber.conf, where proxmox3 is the name of your node and vmnumber is the ID of your VM.

Add this line to the end of the file: "hostpci0: 04:00.0"
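
You can confirm the entry afterwards, for example (using the node name and VM number placeholders from above):

# grep hostpci /etc/pve/nodes/proxmox3/qemu-server/vmnumber.conf
hostpci0: 04:00.0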

Verify Operation

Start the VM from the UI.

Enter the qm monitor.  "qm monitor vmnumber"

Verify that your card is listed here: "info pci"
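
For example, with an illustrative VM ID of 101 (the info pci output is abbreviated and will look different for your hardware):

# qm monitor 101
qm> info pci
  ...
  Bus  1, device   0, function 0:
    VGA controller: PCI device 10de:0f02
  ...

The passed-through device should appear in the list with its vendor:device ID.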

Then install drivers on your guest OS.  


NOTE: Card support might be limited to 2 or 3 devices.

NOTE: This process will remove the card from the Proxmox host OS.

Editorial Note: Using PCI passthrough to present drives directly to a ZFS (FreeNAS, Openfiler, OmniOS) virtual machine is OK for testing, but not recommended for production use. Specific FreeNAS warnings can be found here: http://forums.freenas.org/threads/absolutely-must-virtualize-freenas-a-guide-to-not-completely-losing-your-data.12714/

PCI EXPRESS PASSTHROUGH

Since Proxmox VE 3.3, it is possible to pass through PCI Express devices (including NVIDIA/AMD graphics cards).

You need to run pve-kernel 3.10.


/etc/pve/qemu-server/<vmid>.conf

Simple PCI Express passthrough:

machine: q35
hostpci0: 04:00.0,pcie=1,driver=vfio

VGA PCI Express passthrough:

machine: q35
hostpci0: 04:00.0,x-vga=on,pcie=1,driver=vfio

Multi-function PCI Express device (such as a VGA card with an embedded audio chipset): remove the .0 from the PCI address.

machine: q35
hostpci0: 04:00,x-vga=on,pcie=1,driver=vfio

GPU PASSTHROUGH NOTES

AMD RADEON 5xxx, 6xxx, 7xxx and NVIDIA GEFORCE 7, 8, 4xx, 5xx, 6xx, 7xx have been reported working.

Intel IGDs currently do NOT work with the Proxmox 3.10 kernel; try a Debian kernel > 3.16.

You may need to set some specific options in grub.cfg or other tuning values. Here is a good Arch Linux forum thread:

https://bbs.archlinux.org/viewtopic.php?id=162768


AMD passthrough error

kvm: -device vfio-pci,host=01:00.0,id=hostpci0.0,bus=ich9-pcie-port-1,addr=0x0.0,x-vga=on,multifunction=on: vfio: error opening /dev/vfio/1: No such file or directory
kvm: -device vfio-pci,host=01:00.0,id=hostpci0.0,bus=ich9-pcie-port-1,addr=0x0.0,x-vga=on,multifunction=on: vfio: failed to get group 1
kvm: -device vfio-pci,host=01:00.0,id=hostpci0.0,bus=ich9-pcie-port-1,addr=0x0.0,x-vga=on,multifunction=on: Device initialization failed.
kvm: -device vfio-pci,host=01:00.0,id=hostpci0.0,bus=ich9-pcie-port-1,addr=0x0.0,x-vga=on,multifunction=on: Device 'vfio-pci' could not be initialized

If you see this error, you need to add "pcie_acs_override=downstream" to the GRUB options to get the IOMMU groups working correctly.
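
For example, combined with the Intel options from above, the line in /etc/default/grub would become the following (run update-grub and reboot afterwards):

GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on pcie_acs_override=downstream"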


You can also try adding this option:

echo "options vfio_iommu_type1 allow_unsafe_interrupts=1" > /etc/modprobe.d/iommu_unsafe_interrupts.conf


WORKING NVIDIA SETUP

I've been able to get this working with an NVIDIA GTX 750 Ti card using driver version 344.75 (newer versions inconsistently cause Code 43 errors) by using the following setup:

Install pve-kernel-3.10.0-5-pve

Add to /etc/modules:
pci_stub
vfio
vfio_iommu_type1
vfio_pci
kvm
kvm_intel

Add the following options to /etc/default/grub on the GRUB_CMDLINE_LINUX_DEFAULT line:

intel_iommu=on vfio_iommu_type1.allow_unsafe_interrupts=1 rootdelay=10 scsi_mod.scan=sync
Run: update-grub
Add the following to /etc/initramfs-tools/modules (find the PCI stub IDs for your card by running lspci -nn | grep NVIDIA; see the example after these steps):
pci_stub ids=10de:0f02,10de:0bea
Run: update-initramfs -u and then reboot into the new kernel.
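
To illustrate the ID lookup mentioned above, lspci -nn prints the vendor:device pair in brackets at the end of each line (output abbreviated and purely illustrative; your card will report different names and IDs):

# lspci -nn | grep NVIDIA
01:00.0 VGA compatible controller [0300]: NVIDIA Corporation Device [10de:0f02] (rev a1)
01:00.1 Audio device [0403]: NVIDIA Corporation Device [10de:0bea] (rev a1)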

Boot into the VM using the Proxmox web interface, and install the OS (I had better luck with Windows 8.1 - can't remember specifics though).


Add the following options to /etc/pve/nodes/<host>/qemu-server/<vmid>.conf (get the PCI address from the lspci command; I also added the USB device address for my Avocent KVM DSRIQ USB module, and you can do the same for a physical keyboard and mouse):

hostpci0: 05:00,x-vga=on,pcie=1,driver=vfio
machine: q35
usb0: host=0624:0307