[pve-devel] savevm tests

Alexandre DERUMIER aderumier at odiso.com
Fri Aug 31 09:43:01 CEST 2012


>>Why not possible? We can simply allocate a normal volume, and store state there?

I meant: not without hacking the qemu code ;) Maybe it's possible, but it would be really difficult.

It would require writing a nexenta block driver for qemu (forking libiscsi), plus a volume-management layer to allocate space, save the vmstate, and so on.
That seems really difficult to do, and to maintain over time.


>>We have VMs which use qcow2, sheepdog, rbd and nexenta at the same time. How should that work?
Yes, indeed.
But I really don't think users will mix all these storages on the same VM (or have enough money to build and maintain all those storage clusters ;).
But if they really want to do it,

I see two kinds of storage for snapshots:

- qemu "internal" storage, using qemu block drivers:
    - qcow2
    - rbd
    - sheepdog

- external storage:
    - nexenta
  and maybe other future storages like:
    - lvm (dm-thinp, for local snapshots for example)
    - btrfs, why not, in the future
    - netapp storage (I say that because I'm planning to build a big netapp cluster for a customer in the coming months)
    - ...


The qemu internal storages must have the same features (rbd vmstate is still missing, but it should come soon).
They also have the same snapshot/rollback capabilities (multi-branch tree, read/write snapshots).
So we can mix them on the same VM and still use snapshots.
But we can't mix them with other storages: savevm exits if it detects a storage without savevm capability (raw file, block device, iscsi device).


External storages:
They may not have the same snapshot capabilities, and there is no vmstate.
So these storages cannot be mixed with any other storage if we want to use snapshots.
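For an external storage, the flow discussed earlier in the thread (freeze the guest filesystems via qemu-ga, snapshot each disk through the storage's own API, then thaw) could be sketched roughly like this. This is only an illustration: `guest_agent` and `storage_api` are hypothetical stand-ins for the real interfaces, though `guest-fsfreeze-freeze` and `guest-fsfreeze-thaw` are the actual qemu guest agent commands.

```python
def snapshot_external(guest_agent, storage_api, volumes, snapname):
    """Snapshot all volumes of a VM on an external storage.

    guest_agent: hypothetical qemu-ga wrapper (None if the agent
                 is not available in the guest).
    storage_api: hypothetical wrapper around the storage's own
                 snapshot API (e.g. nexenta).
    """
    frozen = False
    try:
        if guest_agent is not None:
            # freeze guest filesystems only if qemu-ga is running
            guest_agent.command("guest-fsfreeze-freeze")
            frozen = True
        for vol in volumes:
            # snapshot via the storage API, not via qemu savevm
            storage_api.snapshot(vol, snapname)
    finally:
        if frozen:
            # always thaw, even if a snapshot call failed
            guest_agent.command("guest-fsfreeze-thaw")
```

Note there is no vmstate here: this gives a crash-consistent (or fs-consistent, with the agent) disk snapshot only.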


We just need to check the different storages used by the VM before allowing a snapshot.
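That check could look something like the following sketch: internal (savevm-capable) storages may be mixed with each other, an external storage may not be mixed with anything else, and any storage without snapshot support forbids snapshots entirely. The type names and the `SNAPSHOT_CLASS` mapping are illustrative assumptions, not actual Proxmox identifiers.

```python
# Hypothetical classification of storage types by snapshot support.
SNAPSHOT_CLASS = {
    # qemu "internal": block drivers with savevm support
    "qcow2": "internal",
    "rbd": "internal",
    "sheepdog": "internal",
    # external: snapshots done through the storage's own API
    "nexenta": "external",
    # no snapshot capability at all
    "raw": None,
    "iscsi": None,
}

def snapshot_allowed(drive_types):
    """Return True if all drives of a VM can be snapshotted together."""
    classes = [SNAPSHOT_CLASS.get(t) for t in drive_types]
    if None in classes:
        return False  # raw file, block device, iscsi: savevm would exit
    if all(c == "internal" for c in classes):
        return True   # savevm-capable storages can be mixed freely
    # an external storage must not be mixed with anything else
    return len(set(drive_types)) == 1
```

So `snapshot_allowed(["qcow2", "rbd"])` would pass, while mixing nexenta with qcow2 would be refused before any snapshot is attempted.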

What do you think ?


----- Mail original -----

From: "Dietmar Maurer" <dietmar at proxmox.com>
To: "Alexandre DERUMIER" <aderumier at odiso.com>
Cc: pve-devel at pve.proxmox.com
Sent: Thursday, 30 August 2012 17:30:55
Subject: RE: [pve-devel] savevm tests

> here how I see things:
>
>
> for nexenta:
> no vmstate (not possible)

Why not possible? We can simply allocate a normal volume, and store state there?

> ----------
> - send freeze-fs via qemu-ga if available
> - use nexenta api to snapshot all disks
> - unfreeze freeze-fs via qemu-ga if available
>
> for rbd:
> no vmstate (not available)
> ----------
> - send freeze-fs via qemu-ga if available
> - savevm (or the rbd command, it works too)
> - unfreeze freeze-fs via qemu-ga if available
>
> for sheepdog:
> vmstate
> -------
> savevm
>
> novmstate
> ---------
> - send freeze-fs via qemu-ga if available
> - savevm (or the rbd command, it works too)
> - unfreeze freeze-fs via qemu-ga if available
>
>
> for qcow2:
> vmstate
> -------
> savevm
>
> novmstate
> ---------
> - send freeze-fs via qemu-ga if available
> - savevm (or the rbd command, it works too)
> - unfreeze freeze-fs via qemu-ga if available


We have VMs which use qcow2, sheepdog, rbd and nexenta at the same time. How should that work?



--

Alexandre Derumier

Systems and Networks Engineer


Phone: 03 20 68 88 85

Fax: 03 20 68 90 88


45 Bvd du Général Leclerc 59100 Roubaix
12 rue Marivaux 75002 Paris


