[pve-devel] Default cache mode for VM hard drives

Stanislav German-Evtushenko ginermail@gmail.com
Thu Dec 4 16:56:26 CET 2014


Hi Dietmar, hi Alexandre,

I managed to reproduce the issue with no hardware dependency and without
LVM, in order to rule out influence from other components.

My environment:
- two virtual Proxmox hosts installed on the same physical Proxmox host
-- pxtest1, 4 virtual cores, two virtual disks /dev/sda and /dev/sdb
-- pxtest2, 4 virtual cores, two virtual disks /dev/sda and /dev/sdb
- DRBD r0 master-master backed by /dev/sdb1 on both pxtest1 and pxtest2
- vm1 (Ubuntu Server 14.04) installed on /dev/drbd0

Steps to reproduce:
- Create a VM with 512 MB RAM (KVM is disabled because nested KVM is not
yet stable, so VM installation is slow, sorry)
- Configure ide0 to /dev/drbd0
- Install Ubuntu Server 14.04
- Stop Ubuntu and reduce RAM from 512 MB to 128 MB (to force more
intensive swap usage)
- Schedule "drbdadm verify" to run once a day (see the cron sketch below)
- Start the VM and wait for about a week; eventually we get:
root@pxtest1:~# zgrep "Out of sync" /var/log/kern.log* | grep drbd0
/var/log/kern.log:Dec  3 00:59:15 pxtest1 kernel: block drbd0: Out of sync: start=18232672, size=8 (sectors)
/var/log/kern.log:Dec  4 00:59:01 pxtest1 kernel: block drbd0: Out of sync: start=18232672, size=8 (sectors)
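
One way to schedule the daily verify (a sketch; the cron file name is
hypothetical, and the minute is guessed from the log timestamps above):

# /etc/cron.d/drbd-verify (hypothetical name): run the online
# verification of resource r0 shortly before 01:00 every night
55 0 * * * root /sbin/drbdadm verify r0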

Config files

1. DRBD

root@pxtest1:/etc/drbd.d# cat global_common.conf
global { usage-count no; }
common {
    syncer { rate 20M; verify-alg md5; }
    handlers { out-of-sync "/usr/lib/drbd/notify-out-of-sync.sh 'admins@company.ru -- -f notifications@company.ru'"; }
}

root@pxtest1:/etc/drbd.d# cat r0.res
resource r0 {
    protocol C;
    startup {
        wfc-timeout 0;                # non-zero might be dangerous
        degr-wfc-timeout 600;
        become-primary-on both;
    }
    net {
        cram-hmac-alg sha1;
        shared-secret "ijaGush9aeShait6keeyaivo9uh5thae2eixee";    # up to 64 characters
        allow-two-primaries;
        after-sb-0pri discard-zero-changes;
        after-sb-1pri discard-secondary;
        after-sb-2pri disconnect;
        #data-integrity-alg crc32c;    # enable only for testing; disable for production use
    }
    on pxtest1 {
        device /dev/drbd0;
        disk /dev/sdb1;
        address 10.133.27.81:7790;
        meta-disk internal;
    }
    on pxtest2 {
        device /dev/drbd0;
        disk /dev/sdb1;
        address 10.133.27.82:7790;
        meta-disk internal;
    }
    disk {
        no-disk-barrier;
        no-disk-flushes;
    }
}
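
For completeness, a dual-primary resource like this is typically brought
up as follows with DRBD 8.3 (a sketch of standard drbdadm usage, not my
exact command history; the initial-sync direction is an assumption):

root@pxtest1:~# drbdadm create-md r0      # on both nodes: write internal metadata
root@pxtest1:~# drbdadm up r0             # on both nodes: attach and connect
root@pxtest1:~# drbdadm -- --overwrite-data-of-peer primary r0   # pxtest1 only: start initial sync
root@pxtest2:~# drbdadm primary r0        # on pxtest2, after the sync: second primary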

root@pxtest1:/etc/pve/qemu-server# modinfo drbd | grep ^version:
version:        8.3.13

root@pxtest1:/etc/pve/qemu-server# pveversion -v
proxmox-ve-2.6.32: 3.3-139 (running kernel: 2.6.32-34-pve)
pve-manager: 3.3-5 (running version: 3.3-5/bfebec03)
pve-kernel-2.6.32-32-pve: 2.6.32-136
pve-kernel-2.6.32-34-pve: 2.6.32-139
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.7-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.10-1
pve-cluster: 3.0-15
qemu-server: 3.3-3
pve-firmware: 1.1-3
libpve-common-perl: 3.0-19
libpve-access-control: 3.0-15
libpve-storage-perl: 3.0-25
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-8
vzctl: 4.0-1pve6
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 2.1-10
ksm-control-daemon: 1.1-1
glusterfs-client: 3.5.2-1


2. VM1

root@pxtest1:/etc/pve/qemu-server# cat 101.conf
bootdisk: ide0
cores: 2
ide0: /dev/drbd0
ide2: none,media=cdrom
kvm: 0
memory: 128
name: vm1
net0: e1000=F2:8E:67:5E:BA:CC,bridge=vmbr0
onboot: 1
ostype: l26
sockets: 1
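
Note that the ide0 line carries no explicit cache option, so the disk
runs with the qemu-server default cache mode, which is what this thread
is about. To test other modes, the cache can be set per disk, e.g. (a
sketch; writethrough is just one candidate mode):

root@pxtest1:~# qm set 101 --ide0 /dev/drbd0,cache=writethrough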


Best regards,
Stanislav German-Evtushenko