[pve-devel] [PATCH] add with-local-disks option for live storage migration

Alexandre DERUMIER aderumier at odiso.com
Tue Jan 10 08:19:29 CET 2017


Thanks Wolfgang,


>>Basically storage plugins should have to define 
>>- a set of formats they can export as and import from 
>>- whether these formats can include snapshots 
>>- a priority 

that's exactly what I have in mind.

>>- possibly other things? (sparse/zero-detection/local special cases to 
>>use 'cp'...?) 

I think we could also add special cases there, like a ceph rbd copy, for example.
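
A hypothetical illustration of such a special case (these method and field names are made up for the example, not existing PVE::Storage plugin API): the RBD plugin could short-circuit the generic stream when both sides talk to the same cluster and let Ceph copy the image server-side:

    # Hypothetical sketch only - not part of the current PVE::Storage plugin API.
    sub migrate_special_case {
        my ($class, $src_scfg, $dst_scfg, $volname) = @_;

        # same Ceph cluster on both ends -> let Ceph copy the image itself
        # (a real implementation would also need --id/keyring handling)
        if ($src_scfg->{monhost} && $dst_scfg->{monhost}
            && $src_scfg->{monhost} eq $dst_scfg->{monhost}) {
            return ['rbd', 'cp', "$src_scfg->{pool}/$volname", "$dst_scfg->{pool}/$volname"];
        }

        return undef; # no shortcut, fall back to the generic export/import path
    }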


>>(I'll start polishing up the documentation and push some code into a 
>>git repo soon-ish...) 

OK, great. I'll wait a bit for your patches.

Thanks for the hard work on this!


----- Original Message -----
From: "Wolfgang Bumiller" <w.bumiller at proxmox.com>
To: "aderumier" <aderumier at odiso.com>
Cc: "dietmar" <dietmar at proxmox.com>, "pve-devel" <pve-devel at pve.proxmox.com>
Sent: Monday, January 9, 2017 12:36:12
Subject: Re: [pve-devel] [PATCH] add with-local-disks option for live storage migration

On Sat, Jan 07, 2017 at 03:16:22PM +0100, Alexandre DERUMIER wrote: 
> >>I think wolfgang has some experimental code to implement 
> >>kind of send/receive for most storage types .. I guess this 
> >>could be useful here. 
> 
> maybe it could be great to have something in pve-storage plugins. 
> 
> for example, qcow2 -> qcow2 uses rsync to keep snapshots, zfs -> zfs uses zfs send|receive to keep snapshots, qcow2 -> zfs uses qemu-img + nbd (and loses snapshots), .... 
> 
> Currently we have a big PVE::Storage::storage_migrate with a lot of conditions for different plugins, 
> I think it could be better to move the code into each plugin. 

Yes, this function should die ;-) 

If you're interested in working on this: 
The idea of a generic 'import/export' (or send/receive) interface has 
been floating around and I think we should start working on this as it 
will not only untangle that huge spaghetti if/else function but also 
allow easier maintenance and improvements: 

Basically storage plugins should have to define 
- a set of formats they can export as and import from 
- whether these formats can include snapshots 
- a priority 
- possibly other things? (sparse/zero-detection/local special cases to 
use 'cp'...?) 
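
To make that concrete, a plugin could declare something along these lines (purely a hypothetical sketch; none of these names exist in PVE::Storage today): 

    # Hypothetical sketch only - illustrating what a plugin could declare.
    sub export_formats {
        my ($class, $scfg, $volname) = @_;
        # format name => whether it can carry snapshots, and how strongly we prefer it
        return {
            'zfs' => { snapshots => 1, priority => 10 },  # zfs send stream
            'raw' => { snapshots => 0, priority => 1 },   # plain dd-able stream
        };
    }

    # import_formats() would look the same on the receiving side.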

When a disk is to be migrated, the source storage's 'export' formats are 
matched against the destination storage's 'import' formats, and the "best" 
one they both have in common will be used, taking into account whether 
snapshots should be included or not. 
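
The matching step itself could then stay tiny, e.g. (sketch, assuming the hypothetical export_formats/import_formats declarations above): 

    sub negotiate_format {
        my ($export, $import, $need_snapshots) = @_;

        # formats both sides understand
        my @common = grep { exists $import->{$_} } keys %$export;

        # drop formats that cannot carry snapshots if we need them
        @common = grep { $export->{$_}->{snapshots} && $import->{$_}->{snapshots} } @common
            if $need_snapshots;

        # highest priority wins; undef means no common format
        my ($best) = sort { $export->{$b}->{priority} <=> $export->{$a}->{priority} } @common;
        return $best;
    }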

Every storage would have to at least support the 'raw' type - a simple 
'dd' stream without snapshots, where the receiving end would probably 
use a 4k block size with sparse detection. 
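
The receiving end of such a 'raw' stream could be little more than this (sketch; $target_path stands for the destination volume's path, and 'conv=sparse' is the GNU dd way to seek over all-zero blocks instead of writing them): 

    # sketch: write the incoming raw stream into the target volume,
    # 4k blocks, skipping zero blocks on the destination
    my $recv_cmd = ['dd', "of=$target_path", 'bs=4k', 'conv=sparse'];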

Naturally zfs would define the 'zfs' type which would be the best choice 
if both storages are zfs - and would use zfs-send/receive. (Likewise 
btrfs would have the 'btrfs' type...) 
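
i.e. roughly (sketch; dataset and snapshot names are placeholders): 

    # sketch: -R sends the dataset including its snapshots,
    # -F lets the receiving side roll back/overwrite as needed
    my $send_cmd = ['zfs', 'send', '-R', "$src_dataset\@$snapname"];
    my $recv_cmd = ['zfs', 'receive', '-F', $dst_dataset];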

As for the experimental code Dietmar mentioned: 
I'm currently working on an experimental tool implementing a sort of 
send/receive - or more accurately a copy-on-write/clone/dedup and 
sparse aware streaming archive. 
In theory it can already restore *to* every type of storage we have, and 
I can currently read *from* qcow2 files, lvm thin volumes and raw files 
from btrfs/xfs into such a stream *with* snapshots. (For qcow2 I have a 
qemu-img patch, for lvm-thin I read the thin-pool's metadata to get the 
allocated blocks.) 
I've done some successful tests lately moving VMs + snapshots around between 
qcow2 and lvm-thin, and I've also moved them to ZFS zvols. 
The big question is how many storage types I'll be able to cover; we'll 
just have to wait and see ;-). 
(I'll start polishing up the documentation and push some code into a 
git repo soon-ish...) 



