[PVE-User] LVM configuration and blocksizes

Shain Miley SMiley at npr.org
Sat Sep 1 18:13:46 CEST 2012


I was doing some digging into our LVM configuration, trying to find out if there was any room for improvement.

This system has 6 x 300GB SAS drives, set up in a 3 x RAID1 configuration.


I used the following commands to get some more information on the setup.


root@proxmox14:~# blockdev --report
RO    RA   SSZ   BSZ   StartSec            Size   Device
rw   256   512  4096          0   299439751168   /dev/sda
rw   256   512  1024       2048      535822336   /dev/sda1
rw   256   512  4096    1048576   298902880256   /dev/sda2
rw   256   512  4096          0   299439751168   /dev/sdb
rw   256   512   512         63   299433060864   /dev/sdb1
rw   256   512  4096          0   299439751168   /dev/sdc
rw   256   512   512         63   299433060864   /dev/sdc1
rw   256   512  4096          0    32212254720   /dev/dm-0
rw   256   512  4096          0    37312528384   /dev/dm-1
rw   256   512  4096          0   817788616704   /dev/dm-2



root@proxmox14:~# pvs -o+pe_start --units s
  PV         VG   Fmt  Attr PSize      PFree      1st PE
  /dev/sda2  pve  lvm2 a--  583786496S        0S  2048S
  /dev/sdb1  pve  lvm2 a--  584826880S        0S  2048S
  /dev/sdc1  pve  lvm2 a--  584826880S 20406272S  2048S


root@proxmox14:~# pveperf /var/lib/vz
CPU BOGOMIPS:      185574.40
REGEX/SECOND:      1581609
HD SIZE:           749.67 GB (/dev/mapper/pve-data)
BUFFERED READS:    191.78 MB/sec
AVERAGE SEEK TIME: 4.35 ms
FSYNCS/SECOND:     6225.49
DNS EXT:           39.06 ms
DNS INT:           0.83 ms (npr.org)



I noticed that the block sizes (BSZ) listed in the output above differ between devices (4096 vs 512):

a) Is this difference something I should be concerned about, and should I look to change it?
b) If so, how should I go about setting this up correctly (using 4096) when I add LVM disks to a system in the first place? (A rough sketch of what I have in mind follows below.)
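For b), this is roughly what I was planning to try on the next disk I add -- just a sketch, and the device name /dev/sdd and the 1 MiB alignment are placeholders, not something from this box:

# check what sector sizes the drive reports
cat /sys/block/sdd/queue/logical_block_size
cat /sys/block/sdd/queue/physical_block_size

# create a single partition starting at 1MiB so it is aligned
# for both 512-byte and 4K sectors
parted -s /dev/sdd mklabel gpt mkpart primary 1MiB 100%
parted /dev/sdd align-check optimal 1

# create the PV with its data area aligned to 1MiB as well,
# then add it to the existing VG
pvcreate --dataalignment 1m /dev/sdd1
vgextend pve /dev/sdd1

# confirm the first PE starts at 2048 sectors (1MiB)
pvs -o+pe_start --units s /dev/sdd1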

I also noticed that in several posts around the internet people are setting the RA (read-ahead) values to something like '16384':

c) Has anyone else had success increasing performance by making such a change?
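For reference, the change I have seen suggested looks like the following -- again just a sketch; the 16384-sector value and the pve/data LV are examples, and as far as I know blockdev --setra does not persist across reboots:

# current read-ahead, in 512-byte sectors
blockdev --getra /dev/sda
blockdev --getra /dev/mapper/pve-data

# bump read-ahead to 16384 sectors (8 MiB) for a quick test;
# this setting is lost on reboot
blockdev --setra 16384 /dev/mapper/pve-data

# or record it persistently in the LVM metadata for that LV
lvchange --readahead 16384 pve/data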


Thanks in advance,

Shain


