[pve-devel] gluster patch

FinnTux finntux at gmail.com
Wed Jan 8 21:16:44 CET 2014


Are you using XFS for your bricks? I had those timeouts too, but mounting the
XFS bricks with allocsize=64k helped.
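For reference, a sketch of where that option would go, assuming a typical brick device and mount point (both are placeholders, not taken from this thread):

```
# /etc/fstab entry for an XFS brick, with allocsize=64k added to limit
# speculative preallocation (device and mount point are placeholders)
/dev/sdb1  /export/brick1  xfs  defaults,allocsize=64k  0  0
```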




2014/1/8 Alexandre DERUMIER <aderumier at odiso.com>

> >>Even after a total reboot of the server, the same lock/timeout occurred
> for both new VM creation and restore.
>
> I'll run a test with qemu 1.4 tomorrow to see if I can reproduce it.
>
> What is your GlusterFS setup? (hardware, RAID, replication level)
>
>
> ----- Original message -----
>
> From: "Ferrara" <aixsyd at gmail.com>
> To: "Alexandre DERUMIER" <aderumier at odiso.com>
> Cc: "pve-devel" <pve-devel at pve.proxmox.com>
> Sent: Wednesday, 8 January 2014 03:58:05
> Subject: Re: gluster patch
>
>
> Both.
>
>
> Even after a total reboot of the server, the same lock/timeout occurred
> for both new VM creation and restore.
>
>
> Sam
>
>
>
> On Tue, Jan 7, 2014 at 9:56 PM, Alexandre DERUMIER < aderumier at odiso.com> wrote:
>
>
>
> >>I think I may have found a solution. It's shockingly simple. There's a
> line that reads:
> >>push @$cmd, '-o', 'preallocation=metadata' if $fmt eq 'qcow2';
>
> Sure,
>
> but I would like to get it working with preallocation=metadata ;)
> (performance is better)
>
> I have done a test on my test cluster (3 servers with simple RAID1 with
> battery-backed cache):
> it takes around 10s to create a 500GB qcow2 file with preallocated metadata.
>
> It takes 0.5s to create the 500GB qcow2 without preallocation.
>
> (But I have tested with qemu 1.7.)
>
>
> Do you have the problem only on backup restore, or also when you create a
> new disk?
>
>
> ----- Original message -----
>
> From: "Ferrara" <aixsyd at gmail.com>
> To: "Alexandre DERUMIER" <aderumier at odiso.com>
> Sent: Wednesday, 8 January 2014 03:51:39
> Subject: Re: gluster patch
>
>
>
> I think I may have found a solution. It's shockingly simple. There's a line
> that reads:
>
>
> push @$cmd, '-o', 'preallocation=metadata' if $fmt eq 'qcow2';
>
>
>
> I changed it to:
>
>
> # push @$cmd, '-o', 'preallocation=metadata' if $fmt eq 'qcow2';
>
>
>
> And it seems to have worked for VM creation. The qcow2 image it made is
> 194KB. I'm going to try to boot/install to it. I'll let you know what I find.
>
>
> Sam
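As a side note, a minimal sketch of what that conditional push amounts to on the command line (the path and size are made-up placeholders, not from this thread):

```shell
# Builds the qemu-img invocation the same way the Perl line above does:
# the preallocation option is appended only for the qcow2 format.
FMT="qcow2"
CMD="qemu-img create -f $FMT"
[ "$FMT" = "qcow2" ] && CMD="$CMD -o preallocation=metadata"
CMD="$CMD /tmp/test.qcow2 500G"
echo "$CMD"
# prints: qemu-img create -f qcow2 -o preallocation=metadata /tmp/test.qcow2 500G
```

Commenting the Perl line out simply drops `-o preallocation=metadata` from this command, which is why the resulting image starts out so small.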
>
>
>
> On Tue, Jan 7, 2014 at 9:27 PM, Ferrara < aixsyd at gmail.com > wrote:
>
>
>
>
> Not sure what I did, but I tried it again, and it seems to be doing the
> same thing - lock timeout, aborting. It didn't throw the previous error.
>
>
>
> Sam
>
>
>
>
>
> On Tue, Jan 7, 2014 at 9:23 PM, Ferrara < aixsyd at gmail.com > wrote:
>
>
>
> Restoring from a snapshot produces the following error:
>
>
> Parameter verification failed. (400)
>
> storage : invalid format - storage ID '' contains illegal characters
>
>
>
>
>
>
> On Tue, Jan 7, 2014 at 9:10 PM, Alexandre DERUMIER < aderumier at odiso.com> wrote:
>
>
> Hi,
>
> Here is the gluster plugin with the patch included.
>
> Just replace /usr/share/perl5/PVE/Storage/GlusterfsPlugin.pm
>
> and restart:
>
> /etc/init.d/pvedaemon restart
> /etc/init.d/pveproxy restart
> /etc/init.d/pvestatd restart
>
>
>
> (The plugin creates the volume with qemu-img gluster://, instead of using
> the fuse mount point.
>
> So this also applies on backup restore, when the file is created.)
>
> Tell me if it works better for you.
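As an illustration of the difference between the two code paths, assuming made-up server, volume, and image names:

```shell
# Same qcow2 creation, two targets: a path on the fuse mount versus a
# gluster:// URL that qemu-img opens natively via libgfapi, bypassing
# fuse. All names here are placeholders, not taken from this thread.
FUSE_TARGET="/mnt/pve/mystore/images/100/vm-100-disk-1.qcow2"
GFAPI_TARGET="gluster://10.0.0.1/myvolume/images/100/vm-100-disk-1.qcow2"
echo "fuse:     qemu-img create -f qcow2 $FUSE_TARGET 500G"
echo "libgfapi: qemu-img create -f qcow2 $GFAPI_TARGET 500G"
```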
>
>
>
>
>
>
>
>
> _______________________________________________
> pve-devel mailing list
> pve-devel at pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
>

