[pve-devel] [PATCH v5 0/3] RFC: cloud-init update

Wolfgang Bumiller w.bumiller at proxmox.com
Tue Jun 30 10:18:22 CEST 2015


> 1) Does it work with Windows? (since we expose the config drive as a disk and not a CD-ROM)
Well, it shows up, and according to the code it should work, but my
experience with administering Windows systems is limited, and my
patience has hit its limit with all that lengthy point&click work...
You don't happen to have a working base cloud-init config file for
Windows I could copy? The installer doesn't let me choose the network
device, and finding the configuration variables and getting them into
the VM is hurting my brain :-P

> 2) Since we attach it as ide3, it could break the guest's disk ordering if the guest doesn't use disk UUIDs in /etc/fstab
Personally I'm a fan of always using UUID= or LABEL=, but you're right,
this could be an issue.
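
(Just to spell out what I mean: mount by filesystem identity instead of
by device node. The UUIDs/labels below are of course made up:)

  # /etc/fstab
  UUID=c1b9d5a2-f162-4ae2-a8d6-c0e51f39a2d4  /     ext4  defaults  0 1
  LABEL=var                                  /var  ext4  defaults  0 2
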
However, I found we could easily turn it into a CD-ROM drive again with
this little change:

-	    my $drive = parse_drive($ds, "$storeid:$vmid/vm-$vmid-cloudinit.qcow2");
+	    my $drive = parse_drive($ds, "$storeid:$vmid/vm-$vmid-cloudinit.qcow2,media=cdrom");

This is the only change required to make it use 'ide-cd' and thus show
up as /dev/srX.
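
For illustration, the generated commandline should then end up looking
roughly like this (hand-written sketch, not copied from an actual run,
so the details may differ slightly):

  -drive file=/mnt/pve/store/images/123/vm-123-cloudinit.qcow2,if=none,id=drive-ide3,media=cdrom \
  -device ide-cd,bus=ide.1,unit=1,drive=drive-ide3,id=ide3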

> 3) Do we really need to define a special storage for hosting the qcow2? Maybe always store it on local storage and rsync it on live migration.
>    (not everybody has NFS shared storage)

You can simply use `cloudinit: local`. AFAIK we already need shared
storage for migration, and more importantly, using storage-backed qcow2
images automagically adds snapshot support. Otherwise we need to
separately implement some way of storing the image when making a
snapshot (which becomes a pain when we add template support, since we
need to include the template and cannot simply back up the configuration
variables in the vmid.conf).
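
Just so it's written down somewhere, I'd expect the relevant part of a
guest config to end up looking something like this (purely illustrative,
the exact lines may differ):

  # /etc/pve/qemu-server/123.conf
  cloudinit: local
  scsi0: nfs-store:123/vm-123-disk-1.qcow2,size=32G

The generated image itself then lives at local:123/vm-123-cloudinit.qcow2
and is managed like any other volume of the VM.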

On Tue, Jun 30, 2015 at 08:44:17AM +0200, Alexandre DERUMIER wrote:
> Hi Wolfgang,
> 
> I have started testing your patches;
> they seem to work fine here.
> 
> I have some questions:
> 
> 1) Does it work with Windows? (since we expose the config drive as a disk and not a CD-ROM)
> 
> 2) Since we attach it as ide3, it could break the guest's disk ordering if the guest doesn't use disk UUIDs in /etc/fstab
>  for example:
>  the user has 2 virtio-scsi disks:
>   /dev/sda
>   /dev/sdb
>  
>  with /etc/fstab
>  /dev/sda  /
>  /dev/sdb  /var
> 
> Now, with the drive on ide3, it's going to end up as /dev/sda, since AFAIK the devices are assigned in PCI slot order.
> 
> Maybe creating a SATA controller on the last PCI slot/bridge could avoid that. (Need to test qemu 2.4 SATA migration.)
> 
> 3) Do we really need to define a special storage for hosting the qcow2? Maybe always store it on local storage and rsync it on live migration.
>    (not everybody has NFS shared storage)
> 
> 
> ----- Original Message -----
> From: "aderumier" <aderumier at odiso.com>
> To: "Wolfgang Bumiller" <w.bumiller at proxmox.com>
> Cc: "pve-devel" <pve-devel at pve.proxmox.com>
> Sent: Friday, June 26, 2015 14:17:38
> Subject: Re: [pve-devel] [PATCH v5 0/3] RFC: cloud-init update
> 
> Hi, 
> 
> >>For the UI enabling and disabling cloudinit would then be 
> >>adding/removing a cloudinit device. It would then also not have to be 
> >>hardcoded to ide3 but be configurable to any block device like when 
> >>adding a hard disk. 
> 
> I'm not sure, but I think the sata/ahci controller is now migratable in qemu master (so for qemu 2.4) 
> 
> http://git.qemu.org/?p=qemu.git;a=commit;h=04329029a8c539eb5f75dcb6d8b016f0c53a031a 
> 
> Maybe we could add a dedicated SATA controller for the cloudinit drive? 
> 
> 
> 
> ----- Original Message ----- 
> From: "Wolfgang Bumiller" <w.bumiller at proxmox.com> 
> To: "pve-devel" <pve-devel at pve.proxmox.com> 
> Sent: Friday, June 26, 2015 12:36:52 
> Subject: Re: [pve-devel] [PATCH v5 0/3] RFC: cloud-init update 
> 
> We just talked this over a bit again. 
> 
> If we keep going with this approach we could actually remove the 
> cloudinit config parameter and, similar to what Alexandre did in the 
> first patches, have an `ideX: cloudinit,storage=STOREID` parameter 
> enable cloudinit (but have it fixed in the config rather than added after 
> doing `cloudinit: 1`). 
> For the UI, enabling and disabling cloudinit would then be 
> adding/removing a cloudinit device. It would then also not have to be 
> hardcoded to ide3 but be configurable to any block device like when 
> adding a hard disk. 
> 
> We'd then still need a parameter for templates. (Either a new one like 
> `cloudinit-template: xyz`, or if we plan on adding more cloud-init 
> parameters we could keep `cloudinit: various,comma=separated,args`.) 
> 
> Another TODO before I forget about it again: physical cdrom drives 
> probably don't need `media=cdrom` in the code. Should check that. 
> 
> On Fri, Jun 26, 2015 at 12:06:31PM +0200, Wolfgang Bumiller wrote: 
> > Changes since [PATCH v4]: 
> > 
> > Instead of generating a separate ISO image file we now generate the 
> > ISO into a qcow2 device which is storage-managed. 
> > This not only means we don't need to rsync the file for 
> > live migrations, but also that we can use the live-snapshot feature 
> > out of the box. 
> > 
> > It also allowed me to remove the code to generate the commandline 
> > parameters by simply making foreach_drive include the cloud-init drive 
> > (if it exists). 
> > In order to do that I had to add a $vmid parameter to it. Since it 
> > already takes the VM's config as a parameter, this seemed like a sane 
> > thing to do. I grepped the rest of the repositories for code affected 
> > by this change; it all appears to be isolated in qemu-server. 
> > 
> > Please test and comment. 
> > 
> > Alexandre Derumier (1): 
> > implement cloudinit v2 
> > 
> > Wolfgang Bumiller (2): 
> > cloud-init changes 
> > cloudinit: use qcow2 for future snapshot support 
> > 
> > PVE/API2/Qemu.pm | 16 +-- 
> > PVE/QemuMigrate.pm | 8 +- 
> > PVE/QemuServer.pm | 364 +++++++++++++++++++++++++++++++++++++++++++---- 
> > PVE/VZDump/QemuServer.pm | 2 +- 
> > control.in | 2 +- 
> > 5 files changed, 353 insertions(+), 39 deletions(-) 
> > 
> > -- 
> > 2.1.4 
> > 
> > 



