[PVE-User] LVM configuration and blocksizes

Shain Miley SMiley at npr.org
Sat Sep 1 19:48:30 CEST 2012


Thank you very much...I was actually just running some benchmarks after making the changes suggested on your site.

I went from pveperf reporting:

BUFFERED READS:    192.70 MB/sec

to

BUFFERED READS:    272.48 MB/sec

after these changes...so...so far so good.

However, what I really need to know now is...is there any reason to be concerned about the fact that the block size is '512' for sdb1 and sdc1?

I see that the block sizes ARE set correctly for 'sdb' and 'sdc', as well as for 'dm-2' (/var/lib/vz)...so maybe, since they are correct for the other devices, I don't have to worry after all...however I would like to confirm this before I put these into production next week (the cross-checks I used are below the numbers).

RO RA SSZ BSZ StartSec Size Device
rw 256 512 4096 0 299439751168 /dev/sdb
rw 256 512 512 63 299433060864 /dev/sdb1

rw 256 512 4096 0 299439751168 /dev/sdc
rw 256 512 512 63 299433060864 /dev/sdc1

rw 256 512 4096 0 32212254720 /dev/dm-0
rw 256 512 4096 0 37312528384 /dev/dm-1
rw 256 512 4096 0 817788616704 /dev/dm-2
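
For reference, this is how I have been cross-checking those numbers (plain util-linux blockdev queries and their sysfs equivalents; sdb/sdb1 here is just the example device):

blockdev --getss /dev/sdb1     # logical sector size (the SSZ column)
blockdev --getpbsz /dev/sdb1   # physical sector size the drive reports
blockdev --getbsz /dev/sdb1    # kernel soft block size (the BSZ column above)

cat /sys/block/sdb/queue/logical_block_size    # same information straight from sysfs
cat /sys/block/sdb/queue/physical_block_size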


Thanks again,

Shain
________________________________________
From: pve-user-bounces at pve.proxmox.com [pve-user-bounces at pve.proxmox.com] On Behalf Of admin extremeshok.com [admin at extremeshok.com]
Sent: Saturday, September 01, 2012 1:32 PM
To: pve-user at pve.proxmox.com
Subject: Re: [PVE-User] LVM configuration and blocksizes

http://extremeshok.com/blog/benchmarks/lsi-megaraid-9260-on-debian-6-squeeze-proxmox-2-megacli-fimware-update-performance-tuning/

^^ might help you.
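
A rough sketch of the kind of read-ahead and alignment tweaks people mean (the device names and values below are examples only, not taken from your setup, so adapt them):

# check and raise the read-ahead on the data disks; the value is in
# 512-byte sectors, so 16384 = 8 MiB, and it is lost on reboot, so
# persist it in something like /etc/rc.local
blockdev --getra /dev/sdb
blockdev --setra 16384 /dev/sdb
blockdev --setra 16384 /dev/sdc

# when adding a new LVM disk, start the partition at 1MiB and align the
# first physical extent, e.g. on a hypothetical new disk /dev/sdd:
parted -s -a optimal /dev/sdd mklabel gpt
parted -s -a optimal /dev/sdd mkpart primary 1MiB 100%
pvcreate --dataalignment 1m /dev/sdd1
vgextend pve /dev/sdd1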

On 2012/09/01 06:13 PM, Shain Miley wrote:
> I was doing some digging into our LVM configuration, trying to find out if there was any room for improvement.
>
> This system has 6 x 300GB SAS drives, set up in a 3 x RAID1 configuration.
>
>
> I used the following commands to get some more information on the setup.
>
>
> root at proxmox14:~# blockdev --report
> RO RA SSZ BSZ StartSec Size Device
> rw 256 512 4096 0 299439751168 /dev/sda
> rw 256 512 1024 2048 535822336 /dev/sda1
> rw 256 512 4096 1048576 298902880256 /dev/sda2
> rw 256 512 4096 0 299439751168 /dev/sdb
> rw 256 512 512 63 299433060864 /dev/sdb1
> rw 256 512 4096 0 299439751168 /dev/sdc
> rw 256 512 512 63 299433060864 /dev/sdc1
> rw 256 512 4096 0 32212254720 /dev/dm-0
> rw 256 512 4096 0 37312528384 /dev/dm-1
> rw 256 512 4096 0 817788616704 /dev/dm-2
>
>
>
> root at proxmox14:~# pvs -o+pe_start --units s
> PV VG Fmt Attr PSize PFree 1st PE
> /dev/sda2 pve lvm2 a-- 583786496S 0S 2048S
> /dev/sdb1 pve lvm2 a-- 584826880S 0S 2048S
> /dev/sdc1 pve lvm2 a-- 584826880S 20406272S 2048S
>
>
> root at proxmox14:~# pveperf /var/lib/vz
> CPU BOGOMIPS: 185574.40
> REGEX/SECOND: 1581609
> HD SIZE: 749.67 GB (/dev/mapper/pve-data)
> BUFFERED READS: 191.78 MB/sec
> AVERAGE SEEK TIME: 4.35 ms
> FSYNCS/SECOND: 6225.49
> DNS EXT: 39.06 ms
> DNS INT: 0.83 ms (npr.org)
>
>
>
> I noticed that the blocksizes (BSZ) listed in the output above for several of the devices are different (4096 vs 512):
>
> a) Is this difference something I should be concerned about...and be looking to change?
> b) If so...how should I go about setting this up correctly (using 4096) when I am adding lvm disks to a system in the first place?
>
> I noticed that in several posts around the internet people are also setting the RA values to something like '16384':
>
> c) Has anyone else had success with increased performance after making such a change?
>
>
> Thanks in advance,
>
> Shain

_______________________________________________
pve-user mailing list
pve-user at pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user



