[PVE-User] fstrim reports trimmed space but qcow2 not shrunk

Markos Vakondios mvakondios at gmail.com
Fri Sep 21 11:10:15 CEST 2018


Hello list,

I have the same situation on two different PVE installations (hosts):
fstrim inside the guest reports that it trims several TB of data, but
the qcow2 file on the host is not shrunk.

Although the setup is the same on the two PVE hosts (no cluster), the
behaviour is slightly different: fstrim on the first guest reports that
it frees space, and when run again it reports that space was freed again
(a little more than in the previous run), without any reduction in the
backing qcow2 file on the first host.

On the second guest, fstrim reports freed space on the first run, and no
more space is reported trimmed in subsequent runs. In this case too, the
qcow2 file on the host is not shrunk at all.

The guest filesystem on the 10TB volume is XFS on guest1 (note that on
guest2 the mount output below shows ext4).

Both hosts are Dell R710s with the following storage setup:
/     ZFS on hardware RAID1 (I know it is not recommended)
/12TB XFS on hardware RAID10 (mounted as a dir)

There is one guest on each host, each with discard enabled on a 10TB
qcow2-backed virtio-scsi volume.
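
For completeness, discard is set on the PVE side in the VM config; a
sketch of how I would verify it end to end (VMID 101 and the storage
name are illustrative, the "expect" comments are what I would look for,
not pasted output):

host1# qm config 101 | grep -E '^(scsihw|scsi0)'
# expect: scsihw: virtio-scsi-pci
# expect: scsi0: <storage>:101/vm-101-disk-1.qcow2,discard=on,size=...
host1# qm showcmd 101 | grep -o 'discard=[a-z]*'
# expect: discard=on (or discard=unmap) in the generated -drive string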

Trying to trim the 10TB qcow2-backed volume mounted under /10TB on both guests:

host1# uname -a
Linux nikiscctvpve 4.15.18-4-pve #1 SMP PVE 4.15.18-23 (Thu, 30 Aug 2018
13:04:08 +0200) x86_64 GNU/Linux

host1# df -h
Filesystem        Size  Used Avail Use% Mounted on
udev               16G     0   16G   0% /dev
tmpfs             3,2G   11M  3,2G   1% /run
rpool/ROOT/pve-1  302G   29G  273G  10% /
tmpfs              16G   40M   16G   1% /dev/shm
tmpfs             5,0M     0  5,0M   0% /run/lock
tmpfs              16G     0   16G   0% /sys/fs/cgroup
/dev/sdb1          11T   11T   50G 100% /12TB
rpool             273G     0  273G   0% /rpool
rpool/ROOT        273G     0  273G   0% /rpool/ROOT
rpool/data        273G     0  273G   0% /rpool/data
/dev/fuse          30M   20K   30M   1% /etc/pve
tmpfs             3,2G     0  3,2G   0% /run/user/0

host1:/12TB/images/101# mount
....
/dev/sdb1 on /12TB type xfs (rw,relatime,attr2,inode64,noquota)
....

host1:/12TB/images/101# file vm-101-disk-1.qcow2
vm-101-disk-1.qcow2: QEMU QCOW Image (v3), 10737418240000 bytes

host1:/12TB/images/101# du -sh vm-101-disk-1.qcow2
11T     vm-101-disk-1.qcow2
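
Note that du reports allocated blocks here; as a cross-check between the
apparent size and the allocated size (a sketch, output omitted):

host1:/12TB/images/101# qemu-img info vm-101-disk-1.qcow2
host1:/12TB/images/101# stat -c '%s bytes apparent, %b blocks of %B bytes allocated' vm-101-disk-1.qcow2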

On guest 1:

guest1# uname -a
Linux cctvnikisbc 4.4.0-135-generic #161-Ubuntu SMP Mon Aug 27 10:45:01 UTC
2018 x86_64 x86_64 x86_64 GNU/Linux

guest1:/# mount
...
/dev/sda1 on /10TB type xfs (rw,relatime,attr2,discard,inode64,noquota)
...

guest1:/10TB# lsblk --discard
NAME   DISC-ALN DISC-GRAN DISC-MAX DISC-ZERO
sda           0        4K       1G         0
└─sda1        0        4K       1G         0
sr0           0        0B       0B         0
vda           0        0B       0B         0
├─vda1        0        0B       0B         0
├─vda2        0        0B       0B         0
└─vda5        0        0B       0B         0
vdb           0        0B       0B         0


guest1:/10TB# fstrim -v /10TB
/10TB: 3,9 TiB (4294376734720 bytes) trimmed
guest1:/10TB# fstrim -v /10TB
/10TB: 3,9 TiB (4294376734720 bytes) trimmed
guest1:/10TB# fstrim -v /10TB
/10TB: 3,9 TiB (4294376734720 bytes) trimmed

But on host1 the qcow2 has the same size (even bigger, because it is
constantly being written to):
host1:/12TB/images/101# du -sh vm-101-disk-1.qcow2
11T     vm-101-disk-1.qcow2
host1:/12TB/images/101# du -sk vm-101-disk-1.qcow2
11025071168     vm-101-disk-1.qcow2
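
(I understand that the qcow2's apparent size will never shrink and that
reclaimed space should appear as holes punched in the file, so du rather
than ls -l is what I am watching; a sketch comparing allocated vs
apparent size:)

host1:/12TB/images/101# du -sh vm-101-disk-1.qcow2
host1:/12TB/images/101# du -sh --apparent-size vm-101-disk-1.qcow2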

On host2:

host2# uname -a
Linux rivieracctvpve 4.15.18-4-pve #1 SMP PVE 4.15.18-23 (Thu, 30 Aug 2018
13:04:08 +0200) x86_64 GNU/Linux

host2# mount
...
/dev/sdb on /10TB type xfs (rw,relatime,attr2,inode64,noquota)

host2:/10TB/images/112# du -hs vm-112-disk-1.qcow2
9.6T    vm-112-disk-1.qcow2

On guest 2:

guest2# uname -a
Linux rivieracctvbcvm.internal.net 4.4.0-135-generic #161-Ubuntu SMP Mon
Aug 27 10:45:01 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux

guest2# mount
...
/dev/sda1 on /10TB type ext4 (rw,relatime,discard,data=ordered)
...

guest2:~# lsblk --discard
NAME   DISC-ALN DISC-GRAN DISC-MAX DISC-ZERO
sda           0        4K       1G         0
└─sda1        0        4K       1G         0
sr0           0        0B       0B         0
vda           0        0B       0B         0
├─vda1        0        0B       0B         0
├─vda2        0        0B       0B         0
└─vda5        0        0B       0B         0


guest2:~# fstrim -v /10TB
/10TB: 1,9 TiB (2070317821952 bytes) trimmed
guest2:~# fstrim -v /10TB
/10TB: 0 B (0 bytes) trimmed
guest2:~# fstrim -v /10TB
/10TB: 0 B (0 bytes) trimmed
guest2:~# fstrim -v /10TB
/10TB: 0 B (0 bytes) trimmed
guest2:~# df -h
Filesystem      Size  Used Avail Use% Mounted on
udev            7,9G     0  7,9G   0% /dev
tmpfs           1,6G  8,7M  1,6G   1% /run
/dev/vda1        16G  3,2G   12G  22% /
tmpfs           7,9G     0  7,9G   0% /dev/shm
tmpfs           5,0M     0  5,0M   0% /run/lock
tmpfs           7,9G     0  7,9G   0% /sys/fs/cgroup
/dev/sda1       9,4T  7,6T  1,9T  80% /10TB
tmpfs           1,6G     0  1,6G   0% /run/user/0

But again on host2:
host2:/10TB/images/112# du -hs vm-112-disk-1.qcow2
9.6T    vm-112-disk-1.qcow2

I tried to unmount/remount and to restart all guests and hosts, but no luck.

My questions: since XFS and qcow2 support trimming, the guests are
trim-capable, and fstrim reports freed space, why is the qcow2 not
shrunk? Maybe because of the hosts' XFS on which the qcow2 resides?
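
To rule the host filesystem out, I suppose a direct hole-punching test
on the hosts' XFS would tell (the scratch file name is illustrative):

host1:/12TB# dd if=/dev/zero of=holetest bs=1M count=100
host1:/12TB# du -sh holetest    # expect ~100M allocated
host1:/12TB# fallocate --punch-hole --offset 0 --length 100M holetest
host1:/12TB# du -sh holetest    # should drop towards 0 if XFS can punch holes
host1:/12TB# rm holetest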

And also, why do repeated fstrim runs report differently on two guests
with the same underlying config?

Thanks a lot and sorry for the length of the post. I tried to include as
much information as possible to answer in advance questions that may arise.

Markos


