[PVE-User] LVM configuration and blocksizes

admin extremeshok.com admin at extremeshok.com
Sun Sep 2 01:12:17 CEST 2012


It's not going to cause any problems. The BSZ column only shows the kernel's current soft block size for each device, not the drive's sector size or the filesystem's block size, so a 512 there on the LVM partitions is harmless.

root at prox:~# blockdev --report
RO    RA   SSZ   BSZ   StartSec            Size   Device
rw 16384   512  4096          0   1199168290816   /dev/sda
rw 16384   512  1024       2048       535822336   /dev/sda1
rw 16384   512  4096    1048576   1198631419904   /dev/sda2
rw  1024   512  4096          0   3000592982016   /dev/sdb
rw  1024   512  4096       2048   3000591450112   /dev/sdb1
rw 16384   512  4096          0    103079215104   /dev/dm-0
rw   256   512  4096          0    107374182400   /dev/dm-1
rw   256   512  4096          0   2893214121984   /dev/dm-2
rw   256   512  4096          0     68719476736   /dev/dm-3
rw 16384   512  4096          0   1009652858880   /dev/dm-4
rw 16384   512  4096          0   1009652858880   /dev/dm-5
rw 16384   512  4096          0   1009652858880   /dev/dm-6
rw 16384   512  4096          0      1073741824   /dev/dm-7
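
If you want to double-check what a disk itself reports, the sector sizes
and alignment offset are a quick sanity check (device names here are just
examples, use your own):

# logical and physical sector size of the underlying disk
blockdev --getss --getpbsz /dev/sdb

# alignment offset of a partition relative to the physical sector size
# (0 means the partition start is aligned)
blockdev --getalignoff /dev/sdb1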

Remember to increase the RA (read-ahead) value.
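
Something along these lines (a sketch; the device names are placeholders,
adjust to your own disks and test with pveperf before and after):

# check the current read-ahead, in 512-byte sectors
blockdev --getra /dev/sda

# bump it to 16384 sectors (8 MB) on the disk and on the LV that
# holds /var/lib/vz
blockdev --setra 16384 /dev/sda
blockdev --setra 16384 /dev/mapper/pve-data

# the setting is lost on reboot, so re-apply it at boot, for example
# by adding the same commands to /etc/rc.local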

Are you using a RAID controller? If so, spend the little bit extra on a
BBU (battery backup unit), otherwise you risk corrupted writes on power loss.
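
If it is an LSI MegaRAID like in the MegaCli guide I sent earlier,
something like this shows the battery and cache state (assuming the
MegaCli64 binary is installed; adjust for your controller):

# battery status
MegaCli64 -AdpBbuCmd -GetBbuStatus -aALL

# current write cache policy of the logical drives
MegaCli64 -LDGetProp -Cache -LAll -aAll

# with a healthy BBU you can run write-back, and fall back to
# write-through automatically if the battery goes bad
MegaCli64 -LDSetProp WB -LAll -aAll
MegaCli64 -LDSetProp NoCachedBadBBU -LAll -aAll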

I suggest you check out these two guides on optimizing the Proxmox host a
little further.

http://extremeshok.com/blog/debian/proxmox-2-1-debian-6-0-squeeze-server-preparation-post-installation/

http://extremeshok.com/blog/debian/proxmox-2-1-debian-6-0-squeeze-server-security-post-installation-fail2ban-sysctl-hosts-ip-spoofing/

Node Optimization:

http://extremeshok.com/blog/ubuntu/proxmox-openvz-ubuntu-12-04-tuning-optimising-optimizing-for-reduced-memory-usage-ssh-getty/

http://extremeshok.com/blog/centos/proxmox-openvz-container-centos-6-server-preparation/
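
Regarding your question (b), adding new LVM disks so everything lines up
on 4096-byte boundaries: a rough sketch, assuming a new blank disk
/dev/sdX (placeholder name), is to start the partition at 1MiB:

# label the disk and create one partition starting at 1MiB
parted -s /dev/sdX mklabel gpt
parted -s -a optimal /dev/sdX mkpart primary 1MiB 100%

# create the PV with 1MiB data alignment and add it to the pve VG
pvcreate --dataalignment 1m /dev/sdX1
vgextend pve /dev/sdX1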

On 2012/09/01 07:48 PM, Shain Miley wrote:
> Thank you very much...I was actually just running some benchmarks after making the changes suggested by your site.
>
> I went from pveperf reporting:
>
> BUFFERED READS:    192.70 MB/sec
>
> to
>
> BUFFERED READS:    272.48 MB/sec
>
> after these changes...so...so far so good.
>
> However, what I really need to know now is...is there any reason to be concerned about the fact that the block size is '512' for sdb1 and sdc1?
>
> I see that the block sizes ARE set correctly for 'sdb' and 'sdc' as well as 'dm-2' (/var/lib/vz)...so maybe since they are correct for the other values...I don't have to worry after all...however I would like to confirm this before I put these into production next week.
>
> RO  RA SSZ  BSZ StartSec         Size Device
> rw 256 512 4096        0 299439751168 /dev/sdb
> rw 256 512  512       63 299433060864 /dev/sdb1
>
> rw 256 512 4096        0 299439751168 /dev/sdc
> rw 256 512  512       63 299433060864 /dev/sdc1
>
> rw 256 512 4096        0  32212254720 /dev/dm-0
> rw 256 512 4096        0  37312528384 /dev/dm-1
> rw 256 512 4096        0 817788616704 /dev/dm-2
>
>
> Thanks again,
>
> Shain
> ________________________________________
> From: pve-user-bounces at pve.proxmox.com [pve-user-bounces at pve.proxmox.com] On Behalf Of admin extremeshok.com [admin at extremeshok.com]
> Sent: Saturday, September 01, 2012 1:32 PM
> To: pve-user at pve.proxmox.com
> Subject: Re: [PVE-User] LVM configuration and blocksizes
>
> http://extremeshok.com/blog/benchmarks/lsi-megaraid-9260-on-debian-6-squeeze-proxmox-2-megacli-fimware-update-performance-tuning/
>
> ^^ might help you.
>
> On 2012/09/01 06:13 PM, Shain Miley wrote:
>> I was doing some digging around in our LVM configuration, trying to find out if there was any room for improvement.
>>
>> This system has 6 x 300GB SAS drives, set up in a 3 x RAID1 configuration.
>>
>>
>> I used the following commands to get some more information on the setup.
>>
>>
>> root at proxmox14:~# blockdev --report
>> RO  RA SSZ  BSZ StartSec         Size Device
>> rw 256 512 4096        0 299439751168 /dev/sda
>> rw 256 512 1024     2048    535822336 /dev/sda1
>> rw 256 512 4096  1048576 298902880256 /dev/sda2
>> rw 256 512 4096        0 299439751168 /dev/sdb
>> rw 256 512  512       63 299433060864 /dev/sdb1
>> rw 256 512 4096        0 299439751168 /dev/sdc
>> rw 256 512  512       63 299433060864 /dev/sdc1
>> rw 256 512 4096        0  32212254720 /dev/dm-0
>> rw 256 512 4096        0  37312528384 /dev/dm-1
>> rw 256 512 4096        0 817788616704 /dev/dm-2
>>
>>
>>
>> root at proxmox14:~# pvs -o+pe_start --units s
>> PV         VG  Fmt  Attr PSize       PFree      1st PE
>> /dev/sda2  pve lvm2 a--   583786496S         0S  2048S
>> /dev/sdb1  pve lvm2 a--   584826880S         0S  2048S
>> /dev/sdc1  pve lvm2 a--   584826880S  20406272S  2048S
>>
>>
>> root at proxmox14:~# pveperf /var/lib/vz
>> CPU BOGOMIPS:      185574.40
>> REGEX/SECOND:      1581609
>> HD SIZE:           749.67 GB (/dev/mapper/pve-data)
>> BUFFERED READS:    191.78 MB/sec
>> AVERAGE SEEK TIME: 4.35 ms
>> FSYNCS/SECOND:     6225.49
>> DNS EXT:           39.06 ms
>> DNS INT:           0.83 ms (npr.org)
>>
>>
>>
>> I noticed that the block sizes (BSZ) listed in the output above differ between some of the devices (4096 vs 512):
>>
>> a) Is this difference something I should be concerned about...and look to change?
>> b) If so...how should I go about setting this up correctly (using 4096) when I am adding LVM disks to a system in the first place?
>>
>> I noticed in several posts around the internet that people are also setting the RA value to something like '16384':
>>
>> c) Has anyone else had success with increased performance after making such a change?
>>
>>
>> Thanks in advance,
>>
>> Shain
>> _______________________________________________
>> pve-user mailing list
>> pve-user at pve.proxmox.com
>> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
> _______________________________________________
> pve-user mailing list
> pve-user at pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>



