[pve-devel] Default cache mode for VM hard drives

Stanislav German-Evtushenko ginermail at gmail.com
Thu Nov 20 08:02:16 CET 2014


On Nov 20, 2014 8:39 AM, "Dietmar Maurer" <dietmar at proxmox.com> wrote:
>
> > Again, the migration code flushes all changes to disk, so there are
> > no "out of sync" blocks after migration. What do I miss?
> >
> > I'll try to explain in more detail.
> > When the write cache is enabled, the KVM process just writes, writes
> > and writes. It doesn't care what really happens to this data after it
> > goes to the buffer.
>
> Sorry, but we use "cache=none", so the cache is not enabled (what cache
> do you talk about, exactly?).

Cache=none means no host page cache, but the backend's cache is still in
use. In the case of DRBD this is a buffer inside DRBD, so an O_DIRECT
write returns OK when the data reaches this buffer, not when it reaches
the RAID controller cache.
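
A minimal sketch of this behaviour (my own illustration, not code from
this thread; the device path and block size are just examples): an
O_DIRECT write bypasses the host page cache but completes as soon as the
layer below accepts the data, and only an explicit flush is a real
barrier.

    /* O_DIRECT skips the page cache, but the kernel completes the write
       as soon as the next layer down (for DRBD, its internal buffer)
       accepts the data. Only fdatasync() acts as a barrier. */
    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/dev/drbd0", O_WRONLY | O_DIRECT); /* example path */
        if (fd < 0)
            return 1;

        void *buf;
        if (posix_memalign(&buf, 4096, 4096))  /* O_DIRECT needs alignment */
            return 1;
        memset(buf, 0xab, 4096);

        /* Returns once the backend accepts the data -- with DRBD this
           does not yet mean both nodes hold identical copies. */
        if (write(fd, buf, 4096) != 4096)
            return 1;

        /* Only this guarantees the data is committed to stable storage. */
        fdatasync(fd);

        free(buf);
        close(fd);
        return 0;
    }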

> > What DRBD does is write the data from the buffer to both nodes, but
> > DRBD can't do this simultaneously. So it writes the data to the local
> > node first and then to the other node. Between these two writes the
> > data in the buffer can be changed, and nobody knows it was changed.
> > So every time data is written, DRBD can't be sure that the data
> > written locally and the data written remotely are identical.
> > Why do barriers usually help? Because the OS inside the VM doesn't
> > write anything until the 'data is committed' message is received from
> > DRBD, and 'data is committed' only comes from DRBD when the data is
> > really committed to both nodes, local and remote.
>
> Right. Both the VM kernel and the KVM migration code issue a flush when
> required, so you never end up in an inconsistent state.
>
> Migration would never work if the above theory were correct. But it
> works perfectly with iSCSI, NFS, glusterfs, ceph, ...
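
To make the quoted scenario concrete, here is a hypothetical
single-process simulation of the race (names and sizes invented for
illustration; this is not DRBD code). The buffer is mutated between the
local and the remote copy, so the two replicas end up different even
though each individual write succeeded:

    #include <stdio.h>
    #include <string.h>

    /* Stand-ins for the two DRBD backends. */
    static char local_node[8];
    static char remote_node[8];

    int main(void)
    {
        char buffer[8] = "AAAAAAA";  /* the guest's in-flight write buffer */

        memcpy(local_node, buffer, sizeof buffer);  /* DRBD writes locally..  */
        buffer[0] = 'B';                            /* guest reuses buffer    */
        memcpy(remote_node, buffer, sizeof buffer); /* ..then sends to peer   */

        printf("replicas are %s\n",
               memcmp(local_node, remote_node, sizeof buffer) ? "DIVERGED"
                                                              : "identical");
        return 0;
    }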

The difference is that iSCSI and NFS have only one backend device, and
the buffer is not local but remote, whereas DRBD has at least two
backends connected over the network. I don't know how ceph and glusterfs
work in detail; I have never used or studied them.
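
A hypothetical reproducer for that difference (again my own sketch, not
code from this thread; the device path is an example, build with
-pthread): a guest may legally keep mutating a page until it issues a
flush. If the layer below transfers the buffer more than once, as
described above for DRBD's local and remote copies, the copies can
disagree, while a single iSCSI/NFS target always ends up with some one
consistent version.

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <pthread.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    static char *buf;
    static volatile int running = 1;

    /* Keep flipping the first byte while the write is in flight --
       legal for a guest that has not issued a flush yet. */
    static void *mutator(void *arg)
    {
        (void)arg;
        while (running)
            buf[0] ^= 0xff;
        return NULL;
    }

    int main(void)
    {
        int fd = open("/dev/drbd0", O_WRONLY | O_DIRECT); /* example path */
        if (fd < 0)
            return 1;
        if (posix_memalign((void **)&buf, 4096, 4096))
            return 1;
        memset(buf, 0, 4096);

        pthread_t t;
        pthread_create(&t, NULL, mutator, NULL);

        /* If the layer below reads this buffer twice (local disk plus
           peer), the two copies may capture different contents. */
        write(fd, buf, 4096);

        running = 0;
        pthread_join(t, NULL);
        close(fd);
        return 0;
    }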