[PVE-User] Understanding write caching in proxmox

Pongrácz István pongracz.istvan at gmail.com
Tue Feb 4 16:31:12 CET 2014


You should:
- force your server to export in sync mode
- force your client to mount in sync mode
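
For example, something along these lines should do it (a rough sketch only; the export path, mount point and server address are taken from the mount output quoted further down, and the allowed client network is a guess, so adjust them to your setup):

  # on the NFS server: add the sync flag to the export in /etc/exports
  # (192.168.0.0/24 is an assumption - use your own client network)
  /mnt/foxpool  192.168.0.0/24(rw,sync,no_subtree_check)

  # reload the export table
  exportfs -ra

  # on the proxmox host: remount the share with the sync mount option
  # (or umount and mount it again with -o sync)
  mount -o remount,sync 192.168.0.231:/mnt/foxpool /mnt/pve/foxpool

  # check that sync now shows up in the mount options
  mount -l | grep foxpool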

From the nfs(5) manpage:

The NFS client treats the sync mount option differently than some other file systems (refer to mount(8) for a description of the generic sync and async mount options). If neither sync nor async is specified (or if the async option is specified), the NFS client delays sending application writes to the server until any of these events occur:

- Memory pressure forces reclamation of system memory resources.
- An application flushes file data explicitly with sync(2), msync(2), or fsync(3).
- An application closes a file with close(2).
- The file is locked/unlocked via fcntl(2).

In other words, under normal circumstances, data written by an application may not immediately appear on the server that hosts the file.

If the sync option is specified on a mount point, any system call that writes data to files on that mount point causes that data to be flushed to the server before the system call returns control to user space. This provides greater data cache coherence among clients, but at a significant performance cost.
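
You can see this behaviour from the client side with something like the following (just an illustration, using the mount point and interface from the thread; oflag=sync is a standard GNU dd flag):

  # with the default (async) mount, most of this data sits in the client's
  # page cache and only reaches the server when dd closes the file, on an
  # explicit fsync, or when memory pressure forces writeback
  dd if=/dev/zero of=/mnt/pve/foxpool/testfile bs=1M count=512

  # oflag=sync opens the file O_SYNC, so every write is committed to the
  # server before dd continues; throughput drops accordingly
  dd if=/dev/zero of=/mnt/pve/foxpool/testfile bs=1M count=512 oflag=sync

  # in a second shell, watch the traffic actually leaving the host
  sar -n DEV 1 | grep eth0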

Good luck!

 
----------------original message-----------------
From: "Chris Murray" chrismurray84 at gmail.com
To: "Pongrácz István", "Alexandre DERUMIER" aderumier at odiso.com
CC: "pve-user pve.proxmox.com"
Date: Tue, 4 Feb 2014 14:58:06 -0000
-------------------------------------------------
 
 
> I'm not sure I get a chance to specify?
> 
> root@pm02:/# mount -l | grep 231
> 192.168.0.231:/mnt/foxpool on /mnt/pve/foxpool type nfs (rw,relatime,vers=3,rsize=65536,wsize=65536,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=192.168.0.231,mountvers=3,mountport=918,mountproto=udp,local_lock=none,addr=192.168.0.231)
> 
> It was set up using the web interface, where I can't see an option for this.
> 
> NFS client certainly feels asynchronous to me. If I run "sar -n DEV 1 | grep eth0" 
> while observing this 'buffering' effect, I see next to no network activity until the 
> instant that "zpool iostat" shows activity too, i.e. the proxmox host is holding onto 
> data and choosing to flush it at a later stage.
> 
> Thanks again,
> Chris
> 
> -----Original Message-----
> From: Pongrácz István [mailto:pongracz.istvan at gmail.com ] 
> Sent: 04 February 2014 14:39
> To: Chris Murray; Alexandre DERUMIER
> Cc: pve-user pve.proxmox.com
> Subject: Re: [PVE-User] Understanding write caching in proxmox
> 
> 
> How about your NFS setup? Sync, async?
> 
> István 
> 
> ----------------original message-----------------
> From: "Chris Murray" chrismurray84 at gmail.com 
> To: "Alexandre DERUMIER" aderumier at odiso.com 
> CC: pve-user at pve.proxmox.com 
> Date: Tue, 4 Feb 2014 14:14:51 -0000
> -------------------------------------------------
> 
> 
>> Sorry, I might have misled you with my question; it probably looks like 
>> a more general and quite common one.
>> 
>> My troubleshooting was performed on the proxmox *host*, therefore I 
>> was under the impression that cache=[whatever] wouldn't make a 
>> difference since that's a property of a virtual machine disk.
>> 
>> For the *VM*, I'm using writethrough ... however I'm observing exactly 
>> what you explained for 'none' or 'writeback'. My VM is Windows Server 
>> 2008 R2, and I'm using sdelete to fill it with data. The writes from 
>> 'sdelete -z' don't appear in 'zpool iostat' for quite some time, while the 
>> cached figure on the proxmox host increases. That's despite the disk being 
>> set to writethrough.
>> 
>> Apologies if I'm not being clear.
>> 
> 
> 
> 
> 




