[PVE-User] NFS-Performance

Holger Hampel | RA Consulting h.hampel at rac.de
Sun Jul 19 18:17:16 CEST 2015


Hello,

Last weekend the backup stalled (as had the backups before, but nobody was around to look into it):

657: Jul 12 10:37:13 INFO: status: 77% (57894961152/75161927680), sparse 3% (2493112320), duration 14059, 15/15 MB/s
657: Jul 12 10:45:29 INFO: status: 78% (58630930432/75161927680), sparse 3% (2497314816), duration 14555, 1/1 MB/s
657: Jul 12 10:50:34 INFO: status: 79% (59411005440/75161927680), sparse 3% (2497314816), duration 14860, 2/2 MB/s
657: Jul 12 10:54:53 INFO: status: 80% (60162506752/75161927680), sparse 3% (2509971456), duration 15119, 2/2 MB/s
657: Jul 12 10:55:15 INFO: status: 81% (60890480640/75161927680), sparse 3% (2518409216), duration 15141, 33/32 MB/s
657: Jul 12 11:03:05 INFO: status: 82% (61646045184/75161927680), sparse 3% (2518618112), duration 15611, 1/1 MB/s
657: Jul 12 11:06:07 INFO: status: 83% (62387257344/75161927680), sparse 3% (2518822912), duration 15793, 4/4 MB/s
657: Jul 12 11:09:22 INFO: status: 84% (63160647680/75161927680), sparse 3% (2519183360), duration 15988, 3/3 MB/s
657: Jul 12 11:13:33 INFO: status: 85% (63918243840/75161927680), sparse 3% (2519449600), duration 16239, 3/3 MB/s
657: Jul 12 11:19:26 INFO: status: 86% (64648904704/75161927680), sparse 3% (2524684288), duration 16592, 2/2 MB/s
657: Jul 12 11:23:00 INFO: status: 87% (65413185536/75161927680), sparse 3% (2542366720), duration 16806, 3/3 MB/s
657: Jul 12 11:25:41 INFO: status: 88% (66164162560/75161927680), sparse 3% (2704044032), duration 16967, 4/3 MB/s
657: Jul 12 11:28:49 INFO: status: 89% (66912124928/75161927680), sparse 3% (2730418176), duration 17155, 3/3 MB/s
657: Jul 12 11:32:25 INFO: status: 90% (67659366400/75161927680), sparse 3% (2752782336), duration 17371, 3/3 MB/s
657: Jul 12 11:34:40 INFO: status: 91% (68420632576/75161927680), sparse 3% (2766213120), duration 17506, 5/5 MB/s
657: Jul 12 11:39:33 INFO: status: 92% (69151490048/75161927680), sparse 3% (2789863424), duration 17799, 2/2 MB/s
657: Jul 12 11:42:29 INFO: status: 93% (69910659072/75161927680), sparse 3% (2809716736), duration 17975, 4/4 MB/s
657: Jul 12 11:52:13 INFO: status: 94% (70690799616/75161927680), sparse 3% (2822275072), duration 18559, 1/1 MB/s
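The per-interval rate vzdump prints can be recomputed from the bytes-written and duration counters; a small awk sketch (field positions assume exactly the status-line format shown above, and the log path is up to you):

```shell
# Sketch: recompute the per-interval write rate from vzdump status lines.
# The awk program is kept in a variable so it can be reused on a full log.
prog='/INFO: status:/ {
    pct = $7; sub(/%$/, "", pct)                     # "78%" -> "78"
    bytes = $8; gsub(/[(),]/, "", bytes); split(bytes, b, "/")   # b[1] = bytes written so far
    dur = $13; sub(/,/, "", dur)                     # "14555," -> "14555"
    if (prev_dur != "" && dur + 0 > prev_dur + 0)
        printf "%s%%: %.1f MB/s\n", pct, (b[1] - prev_bytes) / (dur - prev_dur) / 1048576
    prev_bytes = b[1]; prev_dur = dur
}'

# Demo on the first two entries from the log above:
printf '%s\n' \
  '657: Jul 12 10:37:13 INFO: status: 77% (57894961152/75161927680), sparse 3% (2493112320), duration 14059, 15/15 MB/s' \
  '657: Jul 12 10:45:29 INFO: status: 78% (58630930432/75161927680), sparse 3% (2497314816), duration 14555, 1/1 MB/s' \
  | awk "$prog"        # prints "78%: 1.4 MB/s"
```

Run over the whole log this makes the drop after 77% (15 MB/s down to 1-5 MB/s) easy to eyeball.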

After the backup completed I tested the speed of NFS access (also with different block sizes, without significant change):

root@gaia:~# dd if=/dev/sda of=/mnt/pve/Backup-Chaos/test bs=8192 count=10000
10000+0 records in
10000+0 records out
81920000 bytes (82 MB) copied, 16.086 s, 3.1 MB/s
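The block-size comparison can be scripted; a sketch (the helper name is mine, and on the real node the target would be a file on the NFS mount such as /mnt/pve/Backup-Chaos/test — a temp file is used here only so the sketch runs anywhere):

```shell
# Sketch: repeat the dd write test over several block sizes and print
# only dd's summary line for each run (~8 MiB written per block size).
sweep_bs() {                            # sweep_bs FILE
    for bs in 4096 8192 65536 1048576; do
        count=$((8388608 / bs))         # keep total bytes constant per run
        printf 'bs=%s: ' "$bs"
        dd if=/dev/zero of="$1" bs="$bs" count="$count" conv=fsync 2>&1 | tail -n 1
    done
}

tmp=$(mktemp)                           # demo target; use the NFS path instead
sweep_bs "$tmp"
rm -f "$tmp"
```

conv=fsync forces the data out before dd reports, which matters for meaningful numbers on an NFS mount.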

On another node (same hardware and config) I'm testing the 3.10 kernel:

root@tartaros:~# dd if=/dev/sda of=/mnt/pve/Backup-Chaos/test bs=8192 count=10000
10000+0 records in
10000+0 records out
81920000 bytes (82 MB) copied, 0.950729 s, 86.2 MB/s

So I upgraded the kernel on the first node and:

root@nyx:~# dd if=/dev/sda of=/mnt/pve/Backup-Chaos/test bs=8192 count=10000
10000+0 records in
10000+0 records out
81920000 bytes (82 MB) copied, 0.912615 s, 89.8 MB/s


It seems independent of IPv4 vs. IPv6, and another machine (on pve-test, different hardware) without bonding has no problems accessing the same NFS server.

Does anyone have similar experience, or an explanation?

Regards
Holger Hampel

-----------------------------------------------

root@nyx:~# pveversion -v
proxmox-ve-2.6.32: 3.4-157 (running kernel: 3.10.0-10-pve)
pve-manager: 3.4-6 (running version: 3.4-6/102d4547)
pve-kernel-2.6.32-39-pve: 2.6.32-157
pve-kernel-3.10.0-10-pve: 3.10.0-34
pve-kernel-2.6.32-37-pve: 2.6.32-150
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.7-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.10-2
pve-cluster: 3.0-18
qemu-server: 3.4-6
pve-firmware: 1.1-4
libpve-common-perl: 3.0-24
libpve-access-control: 3.0-16
libpve-storage-perl: 3.0-33
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-8
vzctl: 4.0-1pve6
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 2.2-10
ksm-control-daemon: 1.1-1
glusterfs-client: 3.5.2-1

Network:
iface eth0 inet manual
        mtu 9000

iface eth1 inet manual
        mtu 9000

auto bond0
iface bond0 inet manual
        slaves eth0 eth1
        bond_miimon 100
        bond_mode 802.3ad
        bond_xmit_hash_policy layer2+3
        mtu 9000

auto vmbr0
iface vmbr0 inet static
        address  x.x.x.x
        netmask  y.y.y.y
        gateway  z.z.z.z
        bridge_ports bond0
        bridge_stp off
        bridge_fd 0
        mtu 9000
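Since the unbonded pve-test machine is unaffected, the bond's LACP state is worth ruling out. A minimal check, assuming the standard bonding-driver proc interface (the helper name is mine):

```shell
# Sketch: summarize bond/LACP state from the bonding driver's proc file.
bond_status() {                         # bond_status /proc/net/bonding/bond0
    if [ -r "$1" ]; then
        grep -E 'Bonding Mode|MII Status|Aggregator ID|Slave Interface' "$1"
    else
        echo "no bonding info at $1"
    fi
}

bond_status /proc/net/bonding/bond0
```

With 802.3ad, both slaves should report the same Aggregator ID and MII Status up. Given the mtu 9000 everywhere, it may also be worth verifying that jumbo frames survive the whole path to the NFS server, e.g. `ping -M do -s 8972 <nfs-server>`.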
