Cloud-Init Support

<!--PVE_IMPORT_START_MARKER-->
<!-- Do not edit - this is autogenerated content -->
{{#pvedocs:qm-cloud-init-plain.html}}
[[Category:Reference Documentation]]
<pvehide>
Cloud-Init is the de facto multi-distribution package that handles early initialization of a virtual machine instance. Using Cloud-Init, you can configure network devices and SSH keys on the hypervisor side. When the VM starts for the first time, the Cloud-Init software inside the VM applies those settings.
Many Linux distributions provide ready-to-use Cloud-Init images, mostly designed for OpenStack. These images also work with Proxmox VE. While it may seem convenient to use such ready-made images, we generally recommend preparing the images yourself. The advantage is that you know exactly what you have installed, which later makes it easier to customize the image for your needs.
Once you have created such a Cloud-Init image, we recommend converting it into a VM template. From a VM template you can quickly create linked clones, so this is a fast method to roll out new VM instances. You only need to configure the network (and maybe the SSH keys) before you start the new VM.
We recommend using SSH key-based authentication to log in to the VMs provisioned by Cloud-Init. It is also possible to set a password, but this is not as safe as SSH key-based authentication, because Proxmox VE needs to store an encrypted version of that password inside the Cloud-Init data.
Proxmox VE generates an ISO image to pass the Cloud-Init data to the VM. For that purpose, all Cloud-Init VMs need an assigned CDROM drive. Also, many Cloud-Init images assume a serial console is present, so it is recommended to add a serial console and use it as the display for those VMs.
Preparing Cloud-Init Templates
The first step is to prepare your VM. Basically any VM will do: simply install the Cloud-Init packages inside the VM that you want to prepare. On Debian/Ubuntu based systems this is as simple as:
apt-get install cloud-init
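To confirm the installation worked, you can run a quick check from inside the VM. This is only a suggested sanity check, assuming a systemd-based guest:
# inside the VM: verify Cloud-Init is installed and its service is known to systemd
cloud-init --version
systemctl status cloud-init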
Many distributions already provide ready-to-use Cloud-Init images (as .qcow2 files), so alternatively you can simply download and import such an image. The following example uses the cloud image provided by Ubuntu at https://cloud-images.ubuntu.com.
# download the image
wget https://cloud-images.ubuntu.com/bionic/current/bionic-server-cloudimg-amd64.img
# create a new VM
qm create 9000 --memory 2048 --net0 virtio,bridge=vmbr0
# import the downloaded disk to local-lvm storage
qm importdisk 9000 bionic-server-cloudimg-amd64.img local-lvm
# finally attach the new disk to the VM as a SCSI drive
qm set 9000 --scsihw virtio-scsi-pci --scsi0 local-lvm:vm-9000-disk-0
Ubuntu Cloud-Init images require the virtio-scsi-pci controller type for SCSI drives.
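As an optional sanity check, you can print the VM configuration and confirm that the scsihw and scsi0 entries look as expected:
# show the configuration of VM 9000
qm config 9000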
Add Cloud-Init CDROM drive
The next step is to configure a CDROM drive, which will be used to pass the Cloud-Init data to the VM.
qm set 9000 --ide2 local-lvm:cloudinit
To be able to boot directly from the Cloud-Init image, set the bootdisk parameter to scsi0 and restrict the BIOS to boot from disk only. This speeds up booting, because the VM BIOS then skips the check for a bootable CDROM.
qm set 9000 --boot c --bootdisk scsi0
Also configure a serial console and use it as the display. Many Cloud-Init images rely on this, as it is a requirement for OpenStack images.
qm set 9000 --serial0 socket --vga serial0
As a last step, it is helpful to convert the VM into a template. From this template you can then quickly create linked clones. Deployment from VM templates is much faster than creating a full clone (copy).
qm template 9000
Deploying Cloud-Init Templates
You can easily deploy such a template by cloning:
qm clone 9000 123 --name ubuntu2
Then configure the SSH public key used for authentication, and configure the IP setup:
qm set 123 --sshkey ~/.ssh/id_rsa.pub
qm set 123 --ipconfig0 ip=10.0.10.123/24,gw=10.0.10.1
You can also configure all of the Cloud-Init options with a single command; the example above is only split into separate commands to keep the lines short. Also make sure to adapt the IP setup to your specific environment.
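For illustration, the two commands above collapse into a single invocation (same example values as before; adapt them to your environment):
qm set 123 --sshkey ~/.ssh/id_rsa.pub --ipconfig0 ip=10.0.10.123/24,gw=10.0.10.1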
Cloud-Init specific Options
cipassword: <string>
Password to assign to the user. Using this is generally not recommended; use SSH keys instead. Also note that older Cloud-Init versions do not support hashed passwords.
citype: <configdrive2 | nocloud>
Specifies the Cloud-Init configuration format. The default depends on the configured operating system type (ostype). We use the nocloud format for Linux, and configdrive2 for Windows.
ciuser: <string>
User name to change SSH keys and password for, instead of the image's configured default user.
ipconfig[n]: [gw=<GatewayIPv4>] [,gw6=<GatewayIPv6>] [,ip=<IPv4Format/CIDR>] [,ip6=<IPv6Format/CIDR>]
Specify IP addresses and gateways for the corresponding interface.
IP addresses use CIDR notation; gateways are optional, but need an IP of the same type specified.
The special string dhcp can be used for IP addresses to use DHCP, in which case no explicit gateway should be provided.
For IPv6, the special string auto can be used to select stateless autoconfiguration.
If Cloud-Init is enabled and neither an IPv4 nor an IPv6 address is specified, it defaults to using DHCP on IPv4.
gw=<GatewayIPv4>
Default gateway for IPv4 traffic.
Requires option(s): ip
gw6=<GatewayIPv6>
Default gateway for IPv6 traffic.
Requires option(s): ip6
ip=<IPv4Format/CIDR> (default = dhcp)
IPv4 address in CIDR format.
ip6=<IPv6Format/CIDR> (default = dhcp)
IPv6 address in CIDR format.
nameserver: <string>
Sets the DNS server IP address for a container. Create will automatically use the setting from the host if neither searchdomain nor nameserver are set.
searchdomain: <string>
Sets DNS search domains for a container. Create will automatically use the setting from the host if neither searchdomain nor nameserver are set.
sshkeys: <string>
Set up public SSH keys (one key per line, OpenSSH format).
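To make the reference above more concrete, here are a few of these options in use. The VM ID and all values are purely illustrative:
# use DHCP for IPv4 and stateless autoconfiguration for IPv6
qm set 123 --ipconfig0 ip=dhcp,ip6=auto
# override the image's configured default user
qm set 123 --ciuser admin
# set DNS server and search domain
qm set 123 --nameserver 10.0.10.1 --searchdomain example.com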
See Also
Qemu/KVM Virtual Machines
</pvehide>
<!--PVE_IMPORT_END_MARKER-->
* [[Cloud-Init_FAQ]]


Upgrade from 3.x to 4.0

== Introduction ==

Proxmox VE 4.0 introduces major new features, so the upgrade must be carefully planned and tested. Depending on your existing configuration, several manual steps are required, including some downtime. NEVER start the upgrade process without a valid backup, and without first testing the procedure in a test lab setup.

Major changes in V4.0:
*OpenVZ is removed; a conversion to LXC via backup/restore is needed
*New corosync version, therefore clusters have to be re-established
*New HA manager (replacing RGmanager, involving a complete HA re-configuration)

If you run a customized installation and/or you installed additional packages, for example for distributed storage like Ceph or Sheepdog, DRBD, or any other third-party packages, you need to make sure that you also upgrade these packages to Debian Jessie.

V4.0 supports only the new '''DRBD9, which is not backwards compatible with the 8.x version''', and is considered a technology preview only.
Generally speaking, there are two ways to move from 3.x to 4.0:

*In-place upgrade via apt, step by step
*New installation on new hardware (and restore VMs from backup) - the safest way.

In both cases, you should clear the browser cache after the upgrade and reload the GUI page; otherwise you may see a lot of display glitches.

== In-place upgrade ==

In-place upgrades are done with apt, so make sure that you are familiar with apt before you start.

=== Preconditions ===

* upgraded to the latest V3.4 version
* reliable access to all configured storages
* healthy cluster
* no VM or CT running (note: VM live migration from a 3.4 to a 4.0 node, or vice versa, is NOT possible)
* valid backup of all OpenVZ containers (needed for the conversion to LXC)
* valid backup of all VMs (only needed if something goes wrong)
* correct repository configuration (both wheezy and jessie repositories accessible)
* at least 1GB free disk space at the root mount point
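
A few commands that can help to verify some of these preconditions (a suggested checklist, not an official one):

pveversion -v   # should report the latest 3.4 packages
pvecm status    # the cluster should be quorate and healthy
df -h /         # at least 1GB free at the root mount point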

=== Actions Step by Step ===

Everything has to be done on each Proxmox node's command line (via console or ssh; preferably via console, in order to rule out interrupted ssh connections). Some of the steps are optional. If a whole cluster is to be upgraded, keep a note of the cluster name and the HA configuration (failover domains, fencing, etc.), since these have to be restored after the upgrade via the new web GUI. Again, make sure that you have a valid backup of all CTs and VMs before you start.

'''Tip''': ''It is advisable to perform a dry-run of the upgrade first. Install the PVE 3.4 ISO on testing hardware, then upgrade this installation to the latest minor version of PVE 3.4 using the test repo (see [[Package repositories]]), then copy/create the relevant configurations on the test machine to replicate your production setup as closely as possible.''

==== Remove Proxmox VE 3.x packages in order to avoid dependency errors ====

First make sure that your current installation is up to date; run

apt-get update && apt-get dist-upgrade

Then start the removal:

apt-get remove proxmox-ve-2.6.32 pve-manager corosync-pve openais-pve redhat-cluster-pve pve-cluster pve-firmware

Adapt the repository locations to point to jessie, and update the apt database, e.g.:

sed -i 's/wheezy/jessie/g' /etc/apt/sources.list
sed -i 's/wheezy/jessie/g' /etc/apt/sources.list.d/pve-enterprise.list
apt-get update
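
After the sed calls, /etc/apt/sources.list.d/pve-enterprise.list should contain a line similar to the following (shown for orientation; verify against [[Package repositories]]):

deb https://enterprise.proxmox.com/debian jessie pve-enterprise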
'''If there is a backports line, remove it.'''
Currently, ''pve-manager'' and ''ceph-common'' have unmet dependencies with regard to package versions in the jessie ''backports'' repo.

If the Ceph server is used: Ceph repositories for jessie can be found at http://download.ceph.com, so /etc/apt/sources.list.d/ceph.list will contain e.g.:

deb http://download.ceph.com/debian-hammer jessie main

You also need to add the Ceph repository key to apt; for details, check the wiki on ceph.com.
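
At the time of writing, one common way to add that key looks like this (verify the current procedure on ceph.com before running it):

wget -q -O- 'https://download.ceph.com/keys/release.asc' | apt-key add -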

==== Install the new kernel ====

First check what the current new kernel version is:

apt-cache search pve-kernel | sort

At the moment (February 2016) it is 4.2.8-1. Install it:

apt-get install pve-kernel-4.2.8-1-pve pve-firmware

==== Upgrade the basic system to Debian Jessie ====

This step will take some time. Depending on the system's performance, it can take up to 60 minutes or even more; on an SSD, the dist-upgrade can finish in 5 minutes.

apt-get dist-upgrade

Reboot the system in order to activate the new kernel.

==== Install Proxmox VE 4.0 ====
Finally, install the new Proxmox VE 4.0 packages with one single command:
apt-get install proxmox-ve

Then purge the configuration files of packages which are no longer needed (assuming you already saved your OpenVZ containers):

dpkg --purge vzctl
dpkg --purge redhat-cluster-pve
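
If you want to find other removed packages that still have leftover configuration files, dpkg can list them (an optional check; review the output before purging anything):

dpkg -l | awk '/^rc/ {print $2}'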

'''Remove the old kernel''' (not a must, but recommended). The kernel version has to be adapted to the currently installed one, and there can be several old kernels; use dpkg --list | grep pve-kernel to find any 2.6.* kernels to remove. For example:

apt-get remove pve-kernel-2.6.*

Finally, reboot and test that everything is working as expected.

==== Optional: OpenVZ conversion ====

Convert the previously backed up containers to LXC, following the HowTo on [[Convert OpenVZ to LXC]].

You can also remove the obsolete OpenVZ container data from your local storage:

rm -f /etc/pve/openvz/<ct-id>.conf
rm -R <storage-path>/private/*

==== Cluster upgrade ====
'''It is not possible to mix Proxmox VE 3.x (and earlier) with Proxmox VE 4.0 in the same cluster.'''

Due to the new corosync 2.x, the cluster has to be re-established. Please use the same cluster name.

* on the first node:

pvecm create <clustername>

* on all other nodes:

pvecm add <first-node's-IP> -force

The HA configuration (fail-over, fencing, etc.) has to be re-configured manually; this is now supported via the web GUI, see [[High Availability Cluster 4.x]].

After upgrading the last node, remove the V3.x cluster data:

rm /etc/pve/cluster.conf
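
Afterwards you can verify the state of the rebuilt cluster from any node (a quick check, not a required step):

pvecm status
pvecm nodes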

=== Troubleshooting ===

* Failing upgrade to the latest Proxmox VE 3.x, or failing removal of old packages:

Make sure that the original repository configuration (for wheezy) is correct. The change to the "jessie" repositories has to be done '''after''' the removal of the old Proxmox VE packages.

If Ceph is used: note that the repository URL recently changed to http://download.ceph.com/

* Failing upgrade to "jessie":

Make sure that the repository configuration for jessie is correct.

If a network failure left the upgrade only partially completed, try to repair the situation with

apt-get -fy install

=== External links ===

*[https://www.debian.org/releases/jessie/amd64/release-notes/ Release Notes for Debian 8.0 (jessie), 64-bit PC]

== New installation ==

* Backup all VMs and containers to external media (see [[Backup and Restore]])
* Backup all files in /etc (see the example after this list). You will need various files from /etc/pve, as well as /etc/passwd, /etc/network/interfaces, /etc/resolv.conf, and others, depending on what has been changed from the defaults.
* Install Proxmox VE from the ISO (this will wipe all data on the existing host)
* Rebuild the cluster, if you had one
* Restore the file /etc/pve/storage.cfg (this will re-map and make available any external media you used for backup)
* Restore the firewall configs /etc/pve/firewall/ and /etc/pve/nodes/<node>/host.fw (if relevant)
* Restore full VMs from backups (see [[Backup and Restore]])
* Restore/convert containers (see [[Convert OpenVZ to LXC]])
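
As an example of the /etc backup mentioned above, a simple tar archive written to your external backup media is sufficient; the target path below is a placeholder:

tar czf /path/to/external/etc-backup.tar.gz /etc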

[[Category:HOWTO]] [[Category:Installation]]