[PVE-User] Understanding write caching in proxmox

Chris Murray chrismurray84 at gmail.com
Tue Feb 4 15:14:51 CET 2014


Sorry, I might have misled you with my question; it probably looked like another general and quite common one.

My troubleshooting was performed on the proxmox *host*, therefore I was under the impression that cache=[whatever] wouldn't make a difference since that's a property of a virtual machine disk.

For the *VM*, I'm using writethrough, yet I'm observing exactly what you explained for 'none' or 'writeback'. My VM is Windows Server 2008 R2, and I'm using sdelete to fill it with data. The writes from 'sdelete -z' don't appear in 'zpool iostat' for quite some time, while the cached figure on the proxmox host increases. That's despite the disk being set to writethrough.
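
For reference, the commands involved are roughly these ('tank' stands in for my actual pool name):

  (inside the guest)     sdelete -z c:
  (on FreeNAS)           zpool iostat tank 1
  (on the proxmox host)  free -m        <- the 'cached' figure grows here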

Apologies if I'm not being clear.

-----Original Message-----
From: Alexandre DERUMIER [mailto:aderumier at odiso.com] 
Sent: 04 February 2014 14:02
To: Chris Murray
Cc: pve-user at pve.proxmox.com
Subject: Re: [PVE-User] Understanding write caching in proxmox

>>Is it possible to turn off write-caching?

if you set cache=directsync or cache=writethrough, each write will be committed to disk.

if you use cache=none or cache=writeback, the VM will cache some I/Os until a flush (FUA) is sent by the guest OS filesystem.
(you can't break the filesystem, but you can lose the last X seconds of data)
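
for example, to switch a disk to writethrough (the vmid and volume name here are just placeholders):

  qm set 100 -virtio0 foxpool:vm-100-disk-1,cache=writethrough

(the same cache option is available in the GUI when editing the disk)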



----- Original message ----- 

From: "Chris Murray" <chrismurray84 at gmail.com>
To: pve-user at pve.proxmox.com
Sent: Tuesday 4 February 2014 14:43:39
Subject: [PVE-User] Understanding write caching in proxmox 

Hi, 

I'm experimenting with a single VM which isn't exhibiting the behaviour I'd expect from writethrough caching. Writes appear to be getting buffered in some way. I may have stumbled upon the cause after taking the VM out of the loop, but I need help understanding where write caching plays a part here. 

This single VM is running on an otherwise quiet proxmox host. Its disk is on some NFS storage presented from FreeNAS, which currently isn't shared with anything else. This has helped troubleshoot and I hope given some clarity to my results. 

On FreeNAS, I "cat /dev/urandom" into a file on the ZFS pool. Writes start almost immediately (verified with "zpool iostat"). 
If I then Ctrl+C the cat, writes can continue for some time afterwards. 
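
Roughly like this, with 'tank' standing in for my pool name:

  cat /dev/urandom > /mnt/tank/junk
  zpool iostat tank 1        (in a second shell)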

Changing "sync=always" corrects this behaviour; writes start immediately and then stop as soon as I control+c. 

So far, so good. I figure at this point writes will be as 'synchronous' as they can be. Performance is terrible, of course, but I'm more concerned with guaranteeing that writes truly are being written to disk at this stage. 

On the proxmox host, I "cat /dev/urandom" into a file within the mount point and there is no write activity for about 30 seconds. During this time, the cached figure in "free -m" increases to around 400MB. Writes then start. If I Ctrl+C the cat, it can take minutes to return to the prompt, and only then do the writes stop. 
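
I assume an O_DIRECT write would bypass this cache; if so, something like the following should start writing immediately (the path and sizes are just examples, and I haven't verified this yet):

  dd if=/dev/urandom of=/mnt/pve/foxpool/ddtest bs=1M count=500 oflag=direct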

Is it possible to turn off write caching? I'm conscious that I've set my VM disks to writethrough, and yet that's not what's happening further down the stack. I've read about a mount option called "forcedirectio". Would this help? Is there somewhere I can set it? I'm also wondering if this explains why moving a disk within the same 'Target Storage' produces a 'bursty' pattern of read..write..read..write rather than doing both concurrently, if that makes sense? 

Thank you,
Chris 



FreeNAS-9.2.0-RELEASE-x64 


root at pm02:/mnt/pve/foxpool# pveversion -v 
proxmox-ve-2.6.32: 3.1-114 (running kernel: 2.6.32-26-pve) 
pve-manager: 3.1-24 (running version: 3.1-24/060bd5a6) 
pve-kernel-2.6.32-20-pve: 2.6.32-100 
pve-kernel-2.6.32-24-pve: 2.6.32-111 
pve-kernel-2.6.32-25-pve: 2.6.32-113 
pve-kernel-2.6.32-22-pve: 2.6.32-107 
pve-kernel-2.6.32-26-pve: 2.6.32-114 
pve-kernel-2.6.32-23-pve: 2.6.32-109 
lvm2: 2.02.98-pve4 
clvm: 2.02.98-pve4 
corosync-pve: 1.4.5-1 
openais-pve: 1.1.4-3 
libqb0: 0.11.1-2 
redhat-cluster-pve: 3.2.0-2 
resource-agents-pve: 3.9.2-4 
fence-agents-pve: 4.0.0-2 
pve-cluster: 3.0-8 
qemu-server: 3.1-8 
pve-firmware: 1.0-23 
libpve-common-perl: 3.0-9 
libpve-access-control: 3.0-8 
libpve-storage-perl: 3.0-18 
pve-libspice-server1: 0.12.4-2 
vncterm: 1.1-6 
vzctl: 4.0-1pve4 
vzprocps: 2.0.11-2 
vzquota: 3.1-2 
pve-qemu-kvm: 1.4-17 
ksm-control-daemon: 1.1-1 
glusterfs-client: 3.4.1-1 


