[PVE-User] qmrestore to external iscsi storage fails

Alain Péan alain.pean at lpp.polytechnique.fr
Fri May 6 21:26:58 CEST 2011


Hi Sascha,

I have already done this kind of restore, and it worked fine for me, even 
if I don't remember exactly how I did it. But I wonder whether using the 
logical volume name is the correct way to do it. If I remember correctly, 
I used the storage name that appears in the web interface (under Storage). 
I don't use the LUN directly but a logical volume for each VM, as described 
in the link you gave...
What name does your iSCSI storage have in your web interface?
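A quick way to check which names Proxmox expects is to look at the storage IDs in /etc/pve/storage.cfg -- the --storage argument of qmrestore must match one of those IDs. A minimal sketch (the file below is a made-up example written to /tmp so it is self-contained; the IDs, target, and portal are hypothetical, not your real config):

```shell
# Write a sample storage.cfg to a temp file so the sketch is
# self-contained; the IDs, target, and portal are hypothetical.
cat > /tmp/storage.cfg <<'EOF'
iscsi: MSA2312i
	target iqn.1986-03.com.hp:storage.msa2312i
	portal 192.168.100.10
lvm: vd01_v001
	vgname vd01_v001
EOF

# Extract the storage IDs -- these are the names shown in the
# web interface, and the only values --storage accepts.
awk -F'[: \t]+' '/^(iscsi|lvm|dir|nfs):/ {print $2}' /tmp/storage.cfg
```

On a real node you would run the awk line against /etc/pve/storage.cfg itself, then pass one of the printed IDs to qmrestore --storage.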

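Also, since the error says "interrupted by signal" and you start qmrestore with a trailing '&', one thing worth ruling out is the shell delivering SIGHUP to the backgrounded job when your session closes. A small sketch of a hangup-proof way to launch a long restore (the sleep/echo pair stands in for the real qmrestore command):

```shell
# Run the long job under nohup so a closed SSH session cannot kill
# it with SIGHUP; "sleep 1; echo restore-done" stands in for the
# real qmrestore invocation.
nohup sh -c 'sleep 1; echo restore-done' > /tmp/restore.log 2>&1 &
wait $!               # wait for the background job to finish
cat /tmp/restore.log  # inspect the captured output afterwards
```

Running the restore inside screen would serve the same purpose; if it still dies under nohup, the signal comes from somewhere else (the kernel log via dmesg may then say more).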
Hope that helps...

Alain

On 06/05/2011 19:22, Sascha Zucca wrote:
> Hi list,
> I most desperately need your help with a very odd issue I have here.
> I would like to qmrestore a backup file to an external storage,
> but no matter what I try, it always fails.
>
> First of all:
> I have an MSA2312i with 12 x 300 GB SAS disks (full enclosure) configured 
> as RAID10.
> In the Management Utility of the MSA2312i it is configured as follows:
>
> Vdisks
>
>   - vd01 (RAID10)
>       - Volume vd01_v001 (1005.9 GB)
>       - Volume vd01_v002 (792.0 GB)
>
>
>
> Here's what I do next:
>
> 1. Install multipath-tools as described in 
> http://pve.proxmox.com/wiki/ISCSI_Multipath
>
> 2. Configure iSCSI storage as described in 
> http://pve.proxmox.com/wiki/Storage_Model#LVM_Groups_with_Network_Backing
> Name       Type   Enabled  Active  Shared  Used(GB)  Total(GB)  Usage
> MSA2312i   ISCSI  Yes      Yes     Yes     n/a       n/a        n/a
> vd01_v001  LVM    Yes      Yes     Yes     600.00    936.91     64.04%
> vg01_v002  LVM    Yes      Yes     Yes     0.00      737.70     0.00%
>
> 3. multipath -ll shows
> 3600c0ff0001118d861fac34d01000000 dm-3 HP      ,MSA2312i
> [size=937G][features=0][hwhandler=0]
> \_ round-robin 0 [prio=1][active]
>  \_ 17:0:0:1 sdh 8:112 [active][ready]
> \_ round-robin 0 [prio=1][enabled]
>  \_ 14:0:0:1 sdb 8:16  [active][ready]
> \_ round-robin 0 [prio=1][enabled]
>  \_ 15:0:0:1 sdd 8:48  [active][ready]
> \_ round-robin 0 [prio=1][enabled]
>  \_ 16:0:0:1 sdf 8:80  [active][ready]
> 3600c0ff0001118d86bfac34d01000000 dm-4 HP      ,MSA2312i
> [size=738G][features=0][hwhandler=0]
> \_ round-robin 0 [prio=1][active]
>  \_ 17:0:0:2 sdi 8:128 [active][ready]
> \_ round-robin 0 [prio=1][enabled]
>  \_ 14:0:0:2 sdc 8:32  [active][ready]
> \_ round-robin 0 [prio=1][enabled]
>  \_ 15:0:0:2 sde 8:64  [active][ready]
> \_ round-robin 0 [prio=1][enabled]
>  \_ 16:0:0:2 sdg 8:96  [active][ready]
> looks good...
>
> 4. Now I start the command:
> qmrestore --storage vd01_v001 vzdump-qemu-101-2011_05_05-20_05_02.tgz 101 &
> The dump consists of two virtual disks (200 GB and 600 GB).
>
> 5. The first vd is being restored to the external storage; I can see 
> it in the PM storage GUI and through lvdisplay.
>
> 6. After a while I get the message that vd1 has been restored all right 
> and the restore of vd2 has started.
> I can then see both vds (PM storage GUI and through lvdisplay).
>
> 7. After another while qmrestore stops; the only message I can find 
> is this one in the syslog:
> vzdump[8439]: ERROR: restore QemuServer backup 
> 'vzdump-qemu-101-2011_05_05-20_05_02.tgz' failed - interrupted by signal
>
> 8. vd1 has vanished; only an (obviously broken) vd2 can be seen 
> (PM storage GUI and through lvdisplay).
>
> 9. No VM is shown in the virtual machine part of the PM GUI.
>
> I have tried nearly everything to get it working...
> In the test phase of this environment it worked absolutely flawlessly...
> Now that we have built everything in its final place, it no longer does...
>
>
> Please also consider that qmrestore --storage local 
> vzdump-qemu-101-2011_05_05-20_05_02.tgz 101 &
> works like a charm!
>
>
> What am I doing that is so wrong???
>
> Please help me...
>
>
> Thank you very, very much!
> Sascha
>
>
>
> _______________________________________________
> pve-user mailing list
> pve-user at pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


