Pci passthrough

From Proxmox VE
Revision as of 12:02, 31 October 2015

Introduction

PCI passthrough allows you to use a physical PCI device (graphics card, network card) inside a VM (KVM virtualization only). If you pass a device through, it is no longer available to the host.

Note:

PCI passthrough is an experimental feature in Proxmox VE.

Intel CPU


Edit the GRUB configuration:

# vi /etc/default/grub

Change:

GRUB_CMDLINE_LINUX_DEFAULT="quiet"

To:

GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"

Then:

# update-grub
# reboot


Then run "dmesg | grep -e DMAR -e IOMMU" from the command line.  If there is no output, then something is wrong.
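The GRUB edit above can also be scripted. A minimal sketch, assuming the stock single-line GRUB_CMDLINE_LINUX_DEFAULT="quiet" entry; the sed pattern and scratch-file approach are illustrative, not part of the original instructions:

```shell
# Append intel_iommu=on to GRUB_CMDLINE_LINUX_DEFAULT without opening an editor.
# Works on a scratch copy here; point it at /etc/default/grub on a real host.
tmp=$(mktemp)
printf 'GRUB_CMDLINE_LINUX_DEFAULT="quiet"\n' > "$tmp"
sed -i 's/^\(GRUB_CMDLINE_LINUX_DEFAULT="[^"]*\)"/\1 intel_iommu=on"/' "$tmp"
cat "$tmp"    # -> GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"
rm -f "$tmp"
```

The same pattern applies with amd_iommu=on on AMD hosts; remember to run update-grub afterwards.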

AMD CPU

Edit:

# vi /etc/default/grub

Change:

GRUB_CMDLINE_LINUX_DEFAULT="quiet"

To:

GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on"

Then:

# update-grub
# echo "options kvm allow_unsafe_assigned_interrupts=1" > /etc/modprobe.d/kvm_iommu_map_guest.conf 
# reboot
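After the reboot you can confirm the flag actually made it onto the running kernel's command line. A sketch, with a sample string standing in for the real /proc/cmdline contents:

```shell
# On the host, read the real command line with: cat /proc/cmdline
# The sample string below is illustrative.
cmdline='BOOT_IMAGE=/boot/vmlinuz ro quiet amd_iommu=on'
case " $cmdline " in
    *" amd_iommu=on "*) echo "amd_iommu=on present" ;;
    *)                  echo "amd_iommu=on missing" ;;
esac
```

Check for intel_iommu=on in the same way on Intel hosts.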

Determine your PCI card address, and configure your VM

Locate your card using "lspci".  The address should be in the form of: 04:00.0

Manually edit the VM configuration file.  It can be located at: /etc/pve/nodes/proxmox3/qemu-server/vmnumber.conf (where "proxmox3" is the node name and "vmnumber" is the VM ID).

Add this line to the end of the file: "hostpci0: 04:00.0"
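Extracting the address from lspci output can be sketched as follows; the sample line stands in for real lspci output on your host, and the device name is illustrative:

```shell
# lspci prints the address as the first field of each line, e.g. 04:00.0.
sample='04:00.0 VGA compatible controller: NVIDIA Corporation GK107 [GeForce GTX 650]'
addr=$(printf '%s\n' "$sample" | awk '{print $1}')
echo "hostpci0: $addr"    # -> hostpci0: 04:00.0
```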

Verify Operation

Start the VM from the UI.

Enter the qm monitor.  "qm monitor vmnumber"

Verify that your card is listed here: "info pci"

Then install drivers on your guest OS.  

NOTE: Card support might be limited to 2 or 3 devices.

NOTE: This process will remove the card from the Proxmox host OS.

Editorial Note: Using PCI passthrough to present drives directly to a ZFS (FreeNAS, Openfiler, OmniOS) virtual machine is OK for testing, but not recommended for production use. Specific FreeNAS warnings can be found here: http://forums.freenas.org/threads/absolutely-must-virtualize-freenas-a-guide-to-not-completely-losing-your-data.12714/

PCI EXPRESS PASSTHROUGH

Since Proxmox VE 3.3, it is possible to pass through a PCI Express device (including NVIDIA/AMD graphics cards).

You need to run pve-kernel 3.10.

Edit /etc/pve/qemu-server/<vmid>.conf:

Simple PCI Express passthrough:

machine: q35
hostpci0: 04:00.0,pcie=1,driver=vfio

VGA PCI Express passthrough:

machine: q35
hostpci0: 04:00.0,x-vga=on,pcie=1,driver=vfio

Multi-function PCI Express device (like a VGA card with an embedded audio chipset): remove the .0 from the PCI address.

machine: q35
hostpci0: 04:00,x-vga=on,pcie=1,driver=vfio
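Dropping the function suffix for a multi-function device can be done with plain shell parameter expansion; a small sketch (the address is illustrative):

```shell
addr='04:00.0'
multi=${addr%.*}    # strip the trailing .0 function number -> 04:00
echo "hostpci0: $multi,x-vga=on,pcie=1,driver=vfio"
```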

GPU PASSTHROUGH NOTES

  • AMD RADEON 5xxx, 6xxx, 7xxx and NVIDIA GEFORCE 7, 8, 4xx, 5xx, 6xx, 7xx have been reported working.
  • Intel IGDs currently do not work with the Proxmox 3.10 kernel; try a Debian kernel > 3.16.
  • You may need to set some specific options in grub.cfg or other tuning values.
  • Here is a good Arch Linux forum thread: https://bbs.archlinux.org/viewtopic.php?id=162768

AMD passthrough error

kvm: -device vfio-pci,host=01:00.0,id=hostpci0.0,bus=ich9-pcie-port-1,addr=0x0.0,x-vga=on,multifunction=on: vfio: error opening /dev/vfio/1: No such file or directory
kvm: -device vfio-pci,host=01:00.0,id=hostpci0.0,bus=ich9-pcie-port-1,addr=0x0.0,x-vga=on,multifunction=on: vfio: failed to get group 1
kvm: -device vfio-pci,host=01:00.0,id=hostpci0.0,bus=ich9-pcie-port-1,addr=0x0.0,x-vga=on,multifunction=on: Device initialization failed.
kvm: -device vfio-pci,host=01:00.0,id=hostpci0.0,bus=ich9-pcie-port-1,addr=0x0.0,x-vga=on,multifunction=on: Device 'vfio-pci' could not be initialized

If you get this error, you need to pass "pcie_acs_override=downstream" in the GRUB options to get the IOMMU groups working correctly.

You can also try adding this option:

echo "options vfio_iommu_type1 allow_unsafe_interrupts=1" > /etc/modprobe.d/iommu_unsafe_interrupts.conf

WORKING NVIDIA SETUP

I've been able to get this working with an NVIDIA GTX 750 Ti card using driver version 344.75 (newer versions inconsistently cause Code 43 Errors) by using the following setup:

Install pve-kernel-3.10.0-5-pve.

Add to /etc/modules:
pci_stub
vfio
vfio_iommu_type1
vfio_pci
kvm
kvm_intel
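Appending those modules can be scripted idempotently; a sketch working on a scratch file instead of the real /etc/modules:

```shell
modfile=$(mktemp)
for m in pci_stub vfio vfio_iommu_type1 vfio_pci kvm kvm_intel; do
    # only add a module if it is not already listed
    grep -qx "$m" "$modfile" || echo "$m" >> "$modfile"
done
cat "$modfile"
rm -f "$modfile"
```

On AMD hosts, kvm_amd would replace kvm_intel.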

Add the following options to /etc/default/grub on the GRUB_CMDLINE_LINUX_DEFAULT line:

intel_iommu=on vfio_iommu_type1.allow_unsafe_interrupts=1 rootdelay=10 scsi_mod.scan=sync
Run: update-grub
Add the following to /etc/initramfs-tools/modules (find the PCI stub IDs for your card by running lspci -nn | grep NVIDIA):
pci_stub ids=10de:0f02,10de:0bea
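The ids= list can be derived from that lspci -nn output; a sketch, with sample lines standing in for real output (the device IDs shown are illustrative, not necessarily the ones from the article):

```shell
# Sample stands in for: lspci -nn | grep NVIDIA
sample='05:00.0 VGA compatible controller [0300]: NVIDIA Corporation GM107 [GeForce GTX 750 Ti] [10de:1380]
05:00.1 Audio device [0403]: NVIDIA Corporation GM107 HDMI Audio [10de:0fbc]'
ids=$(printf '%s\n' "$sample" | grep -o '\[10de:[0-9a-f]\{4\}\]' | tr -d '[]' | paste -sd, -)
echo "pci_stub ids=$ids"    # -> pci_stub ids=10de:1380,10de:0fbc
```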
Run: update-initramfs -u and then reboot into the new kernel.

Boot into the VM using the Proxmox web interface, and install the OS (I had better luck with Windows 8.1 - can't remember specifics though).

Add the following options to /etc/pve/host/qemu-server/vmid.conf (get the PCI address from the lspci command; I added the USB device address for my Avocent KVM DSRIQ USB module, and you can do the same for a physical keyboard and mouse):

hostpci0: 05:00,x-vga=on,pcie=1,driver=vfio
machine: q35
usb0: host=0624:0307