Qemu trim/discard and virtio scsi

From Proxmox VE


This page describes how to set up discard on a QEMU virtual disk, so that the guest can use fstrim or the discard mount option.

The storage does not need to be an SSD. It can also be thinly provisioned storage; fstrim will then 'hole punch' the unused blocks, returning that space to the underlying storage system.
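The hole-punching effect can be demonstrated on any filesystem that supports sparse files, such as ext4. A minimal sketch, assuming util-linux fallocate is available (the file name demo.img is a placeholder):

```shell
# Write 4 MiB of zero blocks, then punch the zero blocks back out.
dd if=/dev/zero of=demo.img bs=1M count=4 status=none
du -k demo.img                   # on ext4: 4096 KiB actually allocated
fallocate --dig-holes demo.img   # deallocate blocks that contain only zeroes
du -k demo.img                   # 0 KiB: the space is back in the filesystem
rm demo.img
```

This is the same mechanism fstrim triggers on thin storage: the logical size of the disk stays unchanged, only the allocated space shrinks.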


Prerequisites

  • PVE 4.x
  • storage with discard support (such as a raw disk file or Ceph RBD)
  • guest driver: virtio_scsi

Instructions for Web GUI (new disk only)

  • In the proxmox web gui, go to the "Options" section of your VM.
  • Click "SCSI Controller Type" and "Edit".
  • Select Type "VirtIO SCSI".
  • Click "OK".
  • Go to "Hardware" section of your VM.
  • Click "Add", then in the "Bus/Device" drop-down select "SCSI", and optionally tick the "Discard" checkbox.
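The same steps can be done from the node's shell with qm. A sketch assuming vmid 100, a storage named "ceph", and a free scsi1 slot; the 4 GiB size is a placeholder:

```shell
qm set 100 --scsihw virtio-scsi-pci    # switch the controller type to VirtIO SCSI
qm set 100 --scsi1 ceph:4,discard=on   # allocate a new 4 GiB disk with discard enabled
```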

Instructions for Command Line (new or existing disk)

For example, node "testnode", vmid "100", and storage named "ceph" (discard=on is optional and not shown in this example):

  • sudo vim /etc/pve/nodes/testnode/qemu-server/100.conf
scsi0: ceph:vm-100-disk-1,size=4G
scsihw: virtio-scsi-pci
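With discard enabled, the same disk line would carry the discard flag (a sketch; size and storage name follow the example above):

scsi0: ceph:vm-100-disk-1,discard=on,size=4G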

Instructions for converting a Guest (Debian/Ubuntu)

  • sudo vim /etc/initramfs-tools/modules and add a line containing virtio_scsi, so the guest driver is available at boot
  • sudo update-initramfs -u
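After rebuilding, you can confirm inside the guest that the module actually made it into the initramfs (assuming the standard Debian/Ubuntu initramfs-tools layout):

```shell
# Should print the path of virtio_scsi.ko inside the rebuilt initramfs.
lsinitramfs /boot/initrd.img-"$(uname -r)" | grep virtio_scsi
```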


Testing the host (details may be specific to qemu 2.6 - 2.7 and pve 4.3):

 you@testnode:~ $ vmid=100
 you@testnode:~ $ ps -ef | grep "kvm -id $vmid" | sed -r 's/ -/\n-/g' | grep scsi
 -device virtio-scsi-pci,id=scsihw0,bus=pci.0,addr=0x5
 -drive file=rbd:proxmox/vm-109-disk-1:[...],if=none,id=drive-scsi0,format=raw,cache=none,aio=native,detect-zeroes=unmap
 -device scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0,id=scsi0,bootindex=100

Above you can see that the "virtio-scsi-pci" device exists, that the drive has "if=none" and "id=drive-scsi0", and that a "scsi-hd" device references it via "drive=drive-scsi0". If all of this looks correct but you are still having issues, the problem is probably in the guest rather than in qemu.
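On thin storage you can also watch the reclaim happen from the host side. For Ceph RBD, rbd du reports the space an image actually occupies; run it before and after a trim in the guest and the usage should drop. A sketch using the example pool and image names from above (substitute your own):

```shell
# Actual allocated size of the image in the pool, before and after fstrim in the guest.
rbd du proxmox/vm-100-disk-1
```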

Testing the guest:

 you@vm100:~ # fstrim /
 you@vm100:~ #
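No output and a zero exit status means the trim succeeded. Passing -v makes fstrim report how much was discarded:

```shell
# Prints the amount of space trimmed on the filesystem mounted at /.
fstrim -v /
```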