[PVE-User] Understanding write caching in proxmox

Chris Murray chrismurray84 at gmail.com
Tue Feb 4 14:43:39 CET 2014


Hi,

I'm experimenting with a single VM which isn't exhibiting the behaviour
I'd expect from writethrough caching. Writes appear to be buffered
somewhere. I may have stumbled upon the cause after taking the VM out of
the loop, but I need help understanding where write caching comes into
play here.

This single VM is running on an otherwise quiet Proxmox host. Its disk
is on NFS storage presented from FreeNAS, which currently isn't shared
with anything else. That has helped with troubleshooting and should give
some clarity to the results below.

On FreeNAS, I "cat /dev/urandom" into a file on the ZFS pool. Writes
start almost immediately (verified with "zpool iostat").
If I then Ctrl+C, writes continue for some time afterwards.
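
For reference, the FreeNAS-side test is roughly the following (pool and
dataset names are illustrative, not my actual layout):

  # shell 1: watch pool I/O once per second
  zpool iostat tank 1

  # shell 2: generate writes, then Ctrl+C after a few seconds
  cat /dev/urandom > /mnt/tank/vmstore/testfile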

Setting "sync=always" on the dataset corrects this behaviour: writes
start immediately and stop as soon as I Ctrl+C.
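
The sync change was simply (dataset name illustrative again):

  zfs set sync=always tank/vmstore
  zfs get sync tank/vmstore    # confirm the property is now 'always'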

So far, so good. I figure that at this point, writes will be as
'synchronous' as they can be. Performance is terrible, of course, but
right now I'm more concerned with guaranteeing that writes truly reach
the disks.

On the Proxmox host, if I "cat /dev/urandom" into a file within the NFS
mount point, there is no write activity on the FreeNAS side for about 30
seconds. During that time, the cached figure in "free -m" grows to
around 400 MB. Writes then start. If I Ctrl+C the cat, it takes minutes
to return to the prompt, and only then do the writes stop.
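
The host-side test is roughly this (the test file name is arbitrary;
/mnt/pve/foxpool is the NFS mount for this storage):

  # on the Proxmox host: write into the NFS mount
  cat /dev/urandom > /mnt/pve/foxpool/cachetest

  # in another shell: the 'cached' figure grows by a few hundred MB
  # before 'zpool iostat' on FreeNAS shows any writes
  free -m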

Is it possible to turn off write caching on the host? I'm conscious that
I've set my VM disks to writethrough, yet that's not what's happening
further down the stack. I've read about a mount option called
"forcedirectio". Would this help, and is there somewhere in Proxmox I
can set it? I'm also wondering if this explains why moving a disk within
the same 'Target Storage' produces a bursty read..write..read..write
pattern rather than reading and writing concurrently, if that makes
sense?
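
For what it's worth, the disk is configured along these lines (VM ID and
volume name are examples, not my actual ones), and I assume a direct-I/O
write from the host would be one way to compare against the page-cache
behaviour above:

  # example disk line from /etc/pve/qemu-server/100.conf
  virtio0: foxpool:100/vm-100-disk-1.qcow2,cache=writethrough

  # possible comparison: an O_DIRECT write from the host, bypassing the page cache
  dd if=/dev/zero of=/mnt/pve/foxpool/directtest bs=1M count=100 oflag=direct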

Thank you,
Chris



FreeNAS-9.2.0-RELEASE-x64


root@pm02:/mnt/pve/foxpool# pveversion -v
proxmox-ve-2.6.32: 3.1-114 (running kernel: 2.6.32-26-pve)
pve-manager: 3.1-24 (running version: 3.1-24/060bd5a6)
pve-kernel-2.6.32-20-pve: 2.6.32-100
pve-kernel-2.6.32-24-pve: 2.6.32-111
pve-kernel-2.6.32-25-pve: 2.6.32-113
pve-kernel-2.6.32-22-pve: 2.6.32-107
pve-kernel-2.6.32-26-pve: 2.6.32-114
pve-kernel-2.6.32-23-pve: 2.6.32-109
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.5-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.0-2
pve-cluster: 3.0-8
qemu-server: 3.1-8
pve-firmware: 1.0-23
libpve-common-perl: 3.0-9
libpve-access-control: 3.0-8
libpve-storage-perl: 3.0-18
pve-libspice-server1: 0.12.4-2
vncterm: 1.1-6
vzctl: 4.0-1pve4
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 1.4-17
ksm-control-daemon: 1.1-1
glusterfs-client: 3.4.1-1
