https://pve.proxmox.com/mediawiki/api.php?action=feedcontributions&user=Simon+Smith&feedformat=atomProxmox VE - User contributions [en]2024-03-28T14:47:16ZUser contributionsMediaWiki 1.35.11https://pve.proxmox.com/mediawiki/index.php?title=Cloud-Init_Support&diff=10692Cloud-Init Support2020-04-02T08:47:52Z<p>Simon Smith: </p>
<hr />
<div><!--PVE_IMPORT_START_MARKER--><br />
<!-- Do not edit - this is autogenerated content --><br />
{{#pvedocs:qm-cloud-init-plain.html}}<br />
[[Category:Reference Documentation]]<br />
<pvehide><br />
Cloud-Init is the de facto<br />
multi-distribution package that handles early initialization of a<br />
virtual machine instance. Using Cloud-Init, configuration of network<br />
devices and ssh keys on the hypervisor side is possible. When the VM<br />
starts for the first time, the Cloud-Init software inside the VM will<br />
apply those settings.<br />
Many Linux distributions provide ready-to-use Cloud-Init images, mostly<br />
designed for OpenStack. These images will also work with Proxmox VE. While<br />
it may seem convenient to get such ready-to-use images, we usually<br />
recommend preparing the images yourself. The advantage is that you<br />
will know exactly what you have installed, and this helps you later to<br />
easily customize the image for your needs.<br />
Once you have created such a Cloud-Init image, we recommend converting it<br />
into a VM template. From a VM template you can quickly create linked<br />
clones, so this is a fast method to roll out new VM instances. You just<br />
need to configure the network (and maybe the ssh keys) before you start<br />
the new VM.<br />
We recommend using SSH key-based authentication to log in to the VMs<br />
provisioned by Cloud-Init. It is also possible to set a password, but<br />
this is not as safe as using SSH key-based authentication because Proxmox VE<br />
needs to store an encrypted version of that password inside the<br />
Cloud-Init data.<br />
Proxmox VE generates an ISO image to pass the Cloud-Init data to the VM. For<br />
that purpose all Cloud-Init VMs need to have an assigned CDROM drive.<br />
Also, many Cloud-Init images assume a serial console to be present, so it is<br />
recommended to add a serial console and use it as the display for those VMs.<br />
Preparing Cloud-Init Templates<br />
The first step is to prepare your VM. Basically you can use any VM.<br />
Simply install the Cloud-Init packages inside the VM that you want to<br />
prepare. On Debian/Ubuntu based systems this is as simple as:<br />
apt-get install cloud-init<br />
Many distributions already provide ready-to-use Cloud-Init images (provided<br />
as .qcow2 files), so alternatively you can simply download and<br />
import such images. For the following example, we will use the cloud<br />
image provided by Ubuntu at https://cloud-images.ubuntu.com.<br />
# download the image<br />
wget https://cloud-images.ubuntu.com/bionic/current/bionic-server-cloudimg-amd64.img<br />
# create a new VM<br />
qm create 9000 --memory 2048 --net0 virtio,bridge=vmbr0<br />
# import the downloaded disk to local-lvm storage<br />
qm importdisk 9000 bionic-server-cloudimg-amd64.img local-lvm<br />
# finally attach the new disk to the VM as scsi drive<br />
qm set 9000 --scsihw virtio-scsi-pci --scsi0 local-lvm:vm-9000-disk-0<br />
Ubuntu Cloud-Init images require the virtio-scsi-pci<br />
controller type for SCSI drives.<br />
Add Cloud-Init CDROM drive<br />
The next step is to configure a CDROM drive which will be used to pass<br />
the Cloud-Init data to the VM.<br />
qm set 9000 --ide2 local-lvm:cloudinit<br />
To be able to boot directly from the Cloud-Init image, set the<br />
bootdisk parameter to scsi0, and restrict BIOS to boot from disk<br />
only. This will speed up booting, because the VM BIOS then skips the test for<br />
a bootable CDROM.<br />
qm set 9000 --boot c --bootdisk scsi0<br />
Also configure a serial console and use it as a display. Many Cloud-Init<br />
images rely on this, as it is a requirement for OpenStack images.<br />
qm set 9000 --serial0 socket --vga serial0<br />
In a last step, it is helpful to convert the VM into a template. From<br />
this template you can then quickly create linked clones.<br />
The deployment from VM templates is much faster than creating a full<br />
clone (copy).<br />
qm template 9000<br />
Deploying Cloud-Init Templates<br />
You can easily deploy such a template by cloning:<br />
qm clone 9000 123 --name ubuntu2<br />
Then configure the SSH public key used for authentication, and configure<br />
the IP setup:<br />
qm set 123 --sshkey ~/.ssh/id_rsa.pub<br />
qm set 123 --ipconfig0 ip=10.0.10.123/24,gw=10.0.10.1<br />
You can also configure all the Cloud-Init options using a single command<br />
only. We have simply split the above example into separate commands to<br />
reduce the line length. Also make sure to adapt the IP setup to your<br />
specific environment.<br />
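For example, the two settings above could be applied with a single command (same VM ID and addresses as in the example; adjust them to your environment):<br />
qm set 123 --sshkey ~/.ssh/id_rsa.pub --ipconfig0 ip=10.0.10.123/24,gw=10.0.10.1<br />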
Custom Cloud-Init Configuration<br />
The Cloud-Init integration also allows custom config files to be used instead<br />
of the automatically generated configs. This is done via the cicustom<br />
option on the command line:<br />
qm set 9000 --cicustom "user=&lt;volume&gt;,network=&lt;volume&gt;,meta=&lt;volume&gt;"<br />
The custom config files have to be on a storage that supports snippets and have<br />
to be available on all nodes the VM is going to be migrated to. Otherwise the<br />
VM won&#8217;t be able to start.<br />
For example:<br />
qm set 9000 --cicustom "user=local:snippets/userconfig.yaml"<br />
There are three kinds of configs for Cloud-Init. The first one is the user<br />
config as seen in the example above. The second is the network config and<br />
the third the meta config. They can all be specified together or mixed<br />
and matched however needed.<br />
The automatically generated config will be used for any that don&#8217;t have a<br />
custom config file specified.<br />
The generated config can be dumped to serve as a base for custom configs:<br />
qm cloudinit dump 9000 user<br />
The same command exists for network and meta.<br />
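For example, to dump the generated network config instead:<br />
qm cloudinit dump 9000 network<br />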
Cloud-Init specific Options<br />
cicustom: [meta=&lt;volume&gt;] [,network=&lt;volume&gt;] [,user=&lt;volume&gt;]<br />
Specify custom files to replace the automatically generated ones at start.<br />
meta=&lt;volume&gt;<br />
Specify a custom file containing all meta data passed to the VM via cloud-init. This is provider-specific, meaning configdrive2 and nocloud differ.<br />
network=&lt;volume&gt;<br />
Specify a custom file containing all network data passed to the VM via cloud-init.<br />
user=&lt;volume&gt;<br />
Specify a custom file containing all user data passed to the VM via cloud-init.<br />
cipassword: &lt;string&gt;<br />
Password to assign to the user. Using this is generally not recommended. Use ssh keys instead. Also note that older cloud-init versions do not support hashed passwords.<br />
citype: &lt;configdrive2 | nocloud&gt;<br />
Specifies the cloud-init configuration format. The default depends on the configured operating system type (ostype). We use the nocloud format for Linux, and configdrive2 for Windows.<br />
ciuser: &lt;string&gt;<br />
User name to change ssh keys and password for instead of the image&#8217;s configured default user.<br />
ipconfig[n]: [gw=&lt;GatewayIPv4&gt;] [,gw6=&lt;GatewayIPv6&gt;] [,ip=&lt;IPv4Format/CIDR&gt;] [,ip6=&lt;IPv6Format/CIDR&gt;]<br />
Specify IP addresses and gateways for the corresponding interface.<br />
IP addresses use CIDR notation, gateways are optional but need an IP of the same type specified.<br />
The special string dhcp can be used for IP addresses to use DHCP, in which case no explicit gateway should be provided.<br />
For IPv6 the special string auto can be used to use stateless autoconfiguration.<br />
If cloud-init is enabled and neither an IPv4 nor an IPv6 address is specified, it defaults to using dhcp on IPv4.<br />
gw=&lt;GatewayIPv4&gt;<br />
Default gateway for IPv4 traffic.<br />
Requires option(s): ip<br />
gw6=&lt;GatewayIPv6&gt;<br />
Default gateway for IPv6 traffic.<br />
Requires option(s): ip6<br />
ip=&lt;IPv4Format/CIDR&gt; (default = dhcp)<br />
IPv4 address in CIDR format.<br />
ip6=&lt;IPv6Format/CIDR&gt; (default = dhcp)<br />
IPv6 address in CIDR format.<br />
nameserver: &lt;string&gt;<br />
Sets DNS server IP address for a container. Create will automatically use the setting from the host if neither searchdomain nor nameserver are set.<br />
searchdomain: &lt;string&gt;<br />
Sets DNS search domains for a container. Create will automatically use the setting from the host if neither searchdomain nor nameserver are set.<br />
sshkeys: &lt;string&gt;<br />
Setup public SSH keys (one key per line, OpenSSH format).<br />
See Also<br />
Qemu/KVM Virtual Machines<br />
</pvehide><br />
<!--PVE_IMPORT_END_MARKER--><br />
* [[Cloud-Init_FAQ]]</div>Simon Smithhttps://pve.proxmox.com/mediawiki/index.php?title=LVM2&diff=10016LVM22017-11-29T16:24:27Z<p>Simon Smith: /* Create a extra LV for /var/lib/vz */</p>
<hr />
<div>== Introduction ==<br />
storage pool type: lvm<br />
<br />
LVM is a thin software layer on top of hard disks and partitions. It can be used to split available disk space<br />
into smaller logical volumes. LVM is widely used on Linux and makes managing hard drives easier.<br />
<br />
Another use case is to put LVM on top of a big iSCSI LUN. That way you can easily manage space on<br />
that iSCSI LUN, which would not be possible otherwise, because the iSCSI specification does not define a<br />
management interface for space allocation.<br />
<br />
=== Configuration ===<br />
The LVM backend supports the common storage properties content, nodes, disable, and the following LVM specific properties:<br />
* vgname<br />
** LVM volume group name. This must point to an existing volume group.<br />
* base<br />
** Base volume. This volume is automatically activated before accessing the storage. This is mostly useful when the LVM volume group resides on a remote iSCSI server.<br />
* saferemove<br />
**Zero-out data when removing LVs. When removing a volume, this makes sure that all data gets erased.<br />
* saferemove_throughput<br />
**Wipe throughput (cstream -t parameter value).<br />
<br />
=== Configuration Example (/etc/pve/storage.cfg)===<br />
lvm: myspace<br />
vgname myspace<br />
content rootdir,images<br />
<br />
=== General LVM advantages ===<br />
<br />
LVM is a typical block storage, but this backend does not support snapshots and clones. Unfortunately,<br />
normal LVM snapshots are quite inefficient, because they interfere with all writes on the whole volume group<br />
for the lifetime of the snapshot.<br />
<br />
One big advantage is that you can use it on top of a shared storage, for example an iSCSI LUN. The backend<br />
itself implements proper cluster-wide locking.<br />
<br />
{{Note|The newer LVM-thin backend allows snapshots and clones, but does not support shared storage.}}<br />
<br />
== Standard installation ==<br />
On a default installation, Proxmox VE will use LVM.<br />
The layout looks like the following.<br />
{| class="wikitable"<br />
|-<br />
! VG !! LV !! Mountpoint !! Note<br />
|-<br />
| pve || swap || || used as swap partition <br />
|-<br />
| pve || root || / || Example <br />
|-<br />
| pve || data || /var/lib/vz/ || Proxmox VE < 4.2<br />
|-<br />
| pve || data || || Proxmox VE >= 4.2<br />
|}<br />
In Proxmox VE 4.2 we changed the LV data to a thin pool, to provide snapshots and native performance of<br />
the disk. /var/lib/vz is now included in the LV root.<br />
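<br />
You can inspect this layout on an installed system with the standard LVM tools, for example:<br />
<br />
 lvs -o vg_name,lv_name,lv_size,lv_attr<br />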
<br />
==LVM-Thin==<br />
storage pool type: lvmthin<br />
<br />
LVM normally allocates blocks when you create a volume. LVM thin pools instead allocate blocks when<br />
they are written. This behavior is called thin-provisioning, because volumes can be much larger than the<br />
physically available space.<br />
<br />
You can use the normal LVM command line tools to manage and create LVM thin pools (see man lvmthin<br />
for details). Assuming you already have an LVM volume group called pve, the following commands create<br />
a new LVM thin pool (size 100G) called data:<br />
<br />
lvcreate -L 100G -n data pve<br />
lvconvert --type thin-pool pve/data<br />
<br />
Caution:<br />
<br />
Under certain circumstances, LVM does not correctly calculate the metadata pool/chunk size.<br />
Please check that the metadata pool is big enough.<br />
The formula which has to be satisfied is:<br />
<br />
PoolSize / ChunkSize * 64 B = MetadataPoolSize<br />
<br />
You can get this information via:<br />
<br />
lvs -a -o name,size,chunk_size<br />
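<br />
For example, assuming a 100 GiB thin pool with a 64 KiB chunk size (values chosen purely for illustration), the required metadata pool size works out to:<br />
<br />
 100 GiB / 64 KiB * 64 B = 1638400 * 64 B = 100 MiB<br />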
<br />
=== Configuration ===<br />
The LVM thin backend supports the common storage properties content, nodes, disable, and the<br />
following LVM specific properties:<br />
* vgname<br />
** LVM volume group name. This must point to an existing volume group.<br />
* thinpool<br />
** The name of the LVM thin pool.<br />
<br />
=== Configuration Example (/etc/pve/storage.cfg) ===<br />
<br />
lvmthin: local-lvm<br />
thinpool data<br />
vgname pve<br />
content rootdir,images<br />
<br />
=== General LVM-Thin advantages ===<br />
<br />
LVM-Thin is a block storage, but it fully and efficiently supports snapshots and clones. New volumes are automatically initialized with zeros.<br />
<br />
It must be mentioned that LVM thin pools cannot be shared across multiple nodes, so you can only use<br />
them as local storage.<br />
<br />
===Create an extra LV for /var/lib/vz===<br />
<br />
This can easily be done by creating a new thin LV, which is thin provisioned.<br />
<br />
lvcreate -n <Name> -V <Size[M,G,T]> <VG>/<LVThin_pool><br />
<br />
A real-world example looks like this:<br />
<br />
lvcreate -n vz -V 10G pve/data<br />
<br />
Now a filesystem must be created on the LV.<br />
<br />
mkfs.ext4 /dev/pve/vz<br />
<br />
As a last step, this has to be mounted.<br />
<br />
{{Note|Be sure that /var/lib/vz is empty. On a default installation it isn’t.}}<br />
<br />
To make it always accessible, add the following line to /etc/fstab and then run '''mount -a''' to mount it.<br />
<br />
echo '/dev/pve/vz /var/lib/vz ext4 defaults 0 2' >> /etc/fstab<br />
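<br />
After running '''mount -a''' you can verify the new mount, for example:<br />
<br />
 df -h /var/lib/vz<br />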
<br />
===Resize metadata pool===<br />
<br />
{{Note|If the thin pool is extended, it is also necessary to extend the metadata pool.<br />
This can be achieved with the following command.}}<br />
<br />
lvresize --poolmetadatasize +<size[M,G]> <VG>/<LVThin_pool><br />
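<br />
A concrete example, assuming the default pve/data thin pool and adding 1 GiB of metadata space:<br />
<br />
 lvresize --poolmetadatasize +1G pve/data<br />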
<br />
== LVM vs LVM-Thin ==<br />
{| class="wikitable"<br />
|-<br />
! Type !! Content types !! Image formats !! Shared !! Snapshots !! Clones<br />
|-<br />
| LVM || images,rootdir || raw || possible || no || no<br />
|-<br />
| LVM-Thin || images,rootdir || raw || no || yes || yes<br />
|}<br />
<br />
== Administration ==<br />
==== Create a Volume Group ====<br />
Let's assume we have an empty disk /dev/sdb, on which we want to create a Volume Group named vmdata.<br />
<br />
First create a partition.<br />
<br />
sgdisk -N 1 /dev/sdb<br />
<br />
pvcreate --metadatasize 250k -y -ff /dev/sdb1<br />
vgcreate vmdata /dev/sdb1<br />
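<br />
If the new volume group should also be used as a Proxmox VE storage, it can be added to the storage configuration, for example (a sketch; adjust the storage name and content types as needed):<br />
<br />
 pvesm add lvm vmdata --vgname vmdata --content rootdir,images<br />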
<br />
== Troubleshooting and known issues ==<br />
<br />
<br />
=== Thin Overprovisioning ===<br />
<br />
<br />
In an LVM-thin pool there is no space limit when defining LVM volumes in it, regardless of whether these volumes are virtual disks for containers or virtual machines, or just volumes for any other purpose created with "lvcreate". If the total size of all logical volumes defined within a thin pool exceeds the physical size of the pool, this is called "overprovisioning".<br />
<br />
Attention! You can never use more space for data than is physically available! Unfortunately, no direct warning or error message occurs when the space limit is reached. At the user interface, e.g. inside a virtual machine, it looks as if all logical space can be used; but once the physical limit is exceeded, the data becomes corrupt!<br />
<br />
Therefore it is recommended:<br />
* to avoid overprovisioning; or at least, if that is not possible,<br />
* to check the actual physical usage regularly via:<br />
<br />
 lvs<br />
<br />
See also "Automatically extend thin pool LV" in <br />
<br />
man lvmthin<br />
<br />
[[Category: HOWTO]]</div>Simon Smithhttps://pve.proxmox.com/mediawiki/index.php?title=HTTPS_Certificate_Configuration_(Version_4.x,_5.0_and_5.1)&diff=8730HTTPS Certificate Configuration (Version 4.x, 5.0 and 5.1)2016-06-28T13:18:41Z<p>Simon Smith: </p>
<hr />
<div>== Introduction ==<br />
This is a howto for changing the web server certificate used by Proxmox VE, in order to enable the usage of publicly trusted certificates issued by a CA of your choice (like Let's Encrypt or a commercial CA).<br />
It has been tested on a Proxmox VE 4.1 installation, using certificates from https://www.letsencrypt.org.<br />
<br />
''Note: the previous, outdated version of this HowTo is archived at [[HTTPSCertificateConfigurationOld]]''<br />
<br />
== Revert to default configuration ==<br />
If you have used the previous HowTo and replaced any of the certificate or key files generated by PVE, you need to revert to the default state before proceeding.<br />
<br />
Delete or move the following files:<br />
<br />
* /etc/pve/pve-root-ca.pem<br />
* /etc/pve/priv/pve-root-ca.key<br />
* /etc/pve/nodes/<node>/pve-ssl.pem<br />
* /etc/pve/nodes/<node>/pve-ssl.key<br />
<br />
The latter two need to be repeated for all nodes if you have a cluster.<br />
<br />
Afterwards, run the following command on each node of the cluster to re-generate the certificates and keys:<br />
<br />
pvecm updatecerts -f<br />
<br />
== CAs other than Let's Encrypt ==<br />
<br />
=== Install certificate chain and key ===<br />
<br />
Since pve-manager 4.1-20, it is possible to provide alternative SSL files for each node's web interface. The following steps need to be repeated for each node where you want to use alternative certificate files.<br />
<br />
First check your version of pve-manager and upgrade if necessary:<br />
<br />
pveversion<br />
<br />
You will need the following two files provided by your CA:<br />
<br />
* fullchain.pem (your certificate and all intermediate certificates, excluding the root certificate, in PEM format)<br />
* private-key.pem (your private key, in PEM format, without a password)<br />
<br />
Now copy those files to the override locations in /etc/pve/nodes/<node> (make sure to use the correct certificate files and node!):<br />
<br />
cp fullchain.pem /etc/pve/nodes/<node>/pveproxy-ssl.pem<br />
cp private-key.pem /etc/pve/nodes/<node>/pveproxy-ssl.key<br />
<br />
and restart the web interface:<br />
<br />
systemctl restart pveproxy<br />
<br />
The system log should inform you about the usage of the alternative SSL certificate ("Using '/etc/pve/local/pveproxy-ssl.pem' as certificate for the web interface."):<br />
<br />
journalctl -b -u pveproxy.service<br />
<br />
When accessing the web interface on this node, you should be presented with the new certificate. Note that the alternative certificate is only used by the web interface (including noVNC), but not by the Spice Console/Shell.<br />
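<br />
One quick way to check which certificate is actually being served (optional; replace <node> with your node's address):<br />
<br />
 echo | openssl s_client -connect <node>:8006 -servername <node> 2>/dev/null | openssl x509 -noout -subject -issuer -dates<br />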
<br />
== Let's Encrypt using acme.sh ==<br />
<br />
=== Prerequisites ===<br />
<br />
Let's Encrypt enables everyone with a publicly resolvable domain name to be issued SSL certificates for free.<br />
<br />
Your domain name needs to be publicly resolvable both ways. (Check with `<tt>$ drill -x Your.Ip.Address</tt>` or `<tt>$ dig -x Your.Ip.Address</tt>`)<br />
<br />
The following steps show how to achieve this using the acme.sh bash script and standalone HTTP authentication.<br />
<br />
These steps need to be repeated on each node where you want to use Let's Encrypt certificates.<br />
<br />
You need at least <tt>pve-manager >= 4.1-20</tt> (see `<tt>$ pveversion</tt>`), so upgrade if necessary.<br />
<br />
=== Install certificate chain and key ===<br />
<br />
==== 0) Upgrade from le.sh to acme.sh ====<br />
<br />
If you followed a previous version of this HowTo using le.sh, please uninstall le.sh and proceed with "Install acme.sh":<br />
<br />
le.sh uninstall<br />
<br />
acme.sh is the 2.x release of le.sh; the existing configuration should be migrated automatically when installing acme.sh.<br />
<br />
==== 1) Install acme.sh ====<br />
<br />
Install the acme.sh script from https://github.com/Neilpang/acme.sh (this howto was tested with commit 2d39b3df8893cd256257fe1f32ca6b0485a90dcf):<br />
<br />
Via git:<br />
<br />
git clone https://github.com/Neilpang/acme.sh.git acme.sh-master<br />
<br />
Or direct download:<br />
<br />
wget 'https://github.com/Neilpang/acme.sh/archive/master.zip'<br />
unzip master.zip<br />
<br />
==== 2) Run the install script ====<br />
<br />
You must do this from within the script's directory, otherwise it won't find <tt>acme.sh</tt>! Take care to replace <tt>$EMAIL</tt> with the address that you want to register with at Let's Encrypt. Let's Encrypt will send automatic expiration reminders to this address!<br />
<br />
mkdir /etc/pve/.le<br />
cd /root/acme.sh-master<br />
./acme.sh --install --accountconf /etc/pve/.le/account.conf --accountkey /etc/pve/.le/account.key --accountemail "$EMAIL"<br />
<br />
==== 3) Check the account config ====<br />
<br />
Check the config file in <tt>/etc/pve/.le/account.conf</tt> and verify:<br />
* the <tt>ACCOUNT_EMAIL</tt> variable should be set to your email address<br />
* the <tt>ACCOUNT_KEY_PATH</tt> variable should be set to "<tt>/etc/pve/.le/account.key</tt>"<br />
<br />
You can edit this file with your favourite text editor if either of those is incorrect.<br />
<br />
==== 4) Make sure port 80 is open from the public ====<br />
<br />
As part of the certificate creation process, acme.sh will listen for a confirmation from Let's Encrypt's servers on port 80. Check that this port is therefore not blocked by any firewall between the machine you are certifying and the public internet.<br />
<br />
You can close the port once you're done issuing all certificates for your cluster. However, be aware that as part of the certificate renewal process (managed with a cron job that acme.sh installs), port 80 must also be open. You may therefore need to work out an automated way (not covered in this guide) of opening up port 80 for the renewal process.<br />
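<br />
Since acme.sh in standalone mode needs to bind port 80 itself, it is also worth checking that no other service on the node is already listening there; a quick check could look like this:<br />
<br />
 ss -tlnp | grep -w ':80'<br />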
<br />
==== 5) Issue your first certificate ====<br />
<br />
Now you can issue your first certificate, replacing <tt>$DOMAIN</tt> with your node's fully qualified domain:<br />
<br />
acme.sh --issue --standalone --keypath /etc/pve/local/pveproxy-ssl.key --fullchainpath /etc/pve/local/pveproxy-ssl.pem --reloadcmd "systemctl restart pveproxy" -d $DOMAIN<br />
<br />
Warnings like "cp: preserving permissions for ‘/etc/pve/local/pveproxy-ssl.pem.bak’: Function not implemented" can be safely ignored. <br />
<br />
By appending <tt>--test</tt> to the previous command, you can issue a certificate using the staging (i.e., testing) CA instead of the production CA:<br />
acme.sh --issue --standalone --keypath /etc/pve/local/pveproxy-ssl.key --fullchainpath /etc/pve/local/pveproxy-ssl.pem --reloadcmd "systemctl restart pveproxy" -d $DOMAIN --test<br />
<br />
To "upgrade" to a production certificate, you need to rerun the issue command with an appended <tt>--force</tt> instead of <tt>--test</tt>, in order to replace the existing (test) certificate even though it is not yet expired. This can also be used to force a premature renewal in case the node's domain name has changed:<br />
acme.sh --issue --standalone --keypath /etc/pve/local/pveproxy-ssl.key --fullchainpath /etc/pve/local/pveproxy-ssl.pem --reloadcmd "systemctl restart pveproxy" -d $DOMAIN --force<br />
<br />
==== 6. Check it's working ====<br />
<br />
If necessary, close the firewall port again.<br />
<br />
The system log should inform you about the usage of the alternative SSL certificate ("Using '/etc/pve/local/pveproxy-ssl.pem' as certificate for the web interface."):<br />
<br />
journalctl -b -u pveproxy.service<br />
<br />
When accessing the web interface on this node, you should be presented with the new certificate. Note that the alternative certificate is only used by the web interface (including noVNC), but not by the Spice Console/Shell.<br />
<br />
==== 7. Set up automatic renewal ====<br />
<br />
acme.sh installs a cron job that checks the installed certificate(s) and automatically renews them before they expire. <br />
<br />
The crontab entry should look like this (<tt>crontab -l</tt>):<br />
<br />
0 0 * * * "/root/.acme.sh"/acme.sh --cron --home "/root/.acme.sh" > /dev/null<br />
<br />
It's a good idea to test the cron entry by running it manually from the command line to check that it's working OK:<br />
"/root/.acme.sh"/acme.sh --cron --home "/root/.acme.sh"<br />
<br />
NOTE: The requirements for issuing certificates apply for renewals as well: the configured domain name '''must be resolvable and reachable on port 80 from the public internet when the renewal cron job runs'''.<br />
<br />
=== Updating acme.sh ===<br />
<br />
acme.sh can be updated with the following commands when installed from the git repository:<br />
<br />
cd /root/acme.sh-master<br />
git pull<br />
./acme.sh --install --accountconf /etc/pve/.le/account.conf --accountkey /etc/pve/.le/account.key --accountemail "YOUR@EMAIL.ADDRESS"<br />
<br />
=== Account key ===<br />
<br />
It is recommended to keep an off-site/offline backup of the account key file in <tt>/etc/pve/.le/account.key</tt>. In case one of your certificate private key files is lost or compromised, the account key can be used to revoke the associated certificate.<br />
<br />
== Let's Encrypt using other clients ==<br />
<br />
It should also be possible to use other Let's Encrypt clients, as long as care is taken that the issued as well as renewed certificates and the associated keys are copied to the correct locations, and the pveproxy service is restarted afterwards. <br />
<br />
[[Category: HOWTO]]</div>Simon Smithhttps://pve.proxmox.com/mediawiki/index.php?title=OpenVZ_Console&diff=8063OpenVZ Console2015-12-31T11:23:03Z<p>Simon Smith: </p>
<hr />
<div>== Introduction ==<br />
Beginning with Proxmox VE 2.2, we introduced a new console view (with login capability). Especially for beginners it is not that easy to understand and manage containers, but with the new console this is a big step forward. The OpenVZ and KVM consoles now look quite similar.<br />
<br />
But as most OpenVZ templates have terminals disabled, you need to enable them first. This article describes the changes needed for an already running OpenVZ container.<br />
<br />
'''Note:'''<br />
<br />
All Debian templates created with the latest [[Debian Appliance Builder]] already include this set of changes; just download them via the GUI to your Proxmox VE storage (Debian 6 and 7 templates are up to date, 32 and 64 bit)<br />
<br />
== Debian 5,6,7 ==<br />
This will also work in Debian 8 if you use sysv-init instead of systemd.<br />
<br />
=== Editing the config file via the host ===<br />
You can do this on the host without entering CT (but the CT must be running). Just log in to the Proxmox VE host and:<br />
<br />
Edit all inittabs under /var/lib/vz/root/:<br />
<pre><br />
nano /var/lib/vz/root/*/etc/inittab<br />
<br />
# add this<br />
1:2345:respawn:/sbin/getty 38400 tty1<br />
</pre><br />
<br />
=== Editing the configuration file inside the container ===<br />
[[Image:Screen-Debian-5-OpenVZ-console.png|thumb]] <br />
Login via SSH (or use the VNC "Shell") to your Proxmox VE host and 'vzctl enter CTID' the container:<br />
<br />
List all running containers:<br />
<br />
proxmox-ve:~# vzlist<br />
CTID NPROC STATUS IP_ADDR HOSTNAME<br />
108 23 running 192.168.9.20 ubuntu-1204.proxmox.com<br />
109 18 running 192.168.9.21 centos63-64.proxmox.com<br />
111 15 running 192.168.9.23 centos5-64.proxmox.com<br />
114 14 running 192.168.9.30 deb6-32.proxmox.com<br />
115 15 running 192.168.9.31 deb7-32.proxmox.com<br />
122 14 running 192.168.9.36 deb5.proxmox.com<br />
<br />
Enter the container:<br />
proxmox-ve:~# vzctl enter 122<br />
<br />
root@debian:/# nano /etc/inittab<br />
<br />
On the bottom of /etc/inittab just add the following line:<br />
1:2345:respawn:/sbin/getty 38400 tty1<br />
<br />
Save the changes and shutdown/start the container via Console.<br />
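<br />
Alternatively, you can restart the container from the host shell, for example:<br />
<br />
 vzctl restart 122<br />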
<br />
== Ubuntu ==<br />
=== Ubuntu 10.04, 12.04, 14.04 ===<br />
[[Image:Screen-Ubuntu-12.04-OpenVZ-console.png|thumb]] <br />
Login via SSH (or use the VNC "Shell") to your Proxmox VE host and 'vzctl enter CTID' the container:<br />
<br />
List all running containers:<br />
<br />
proxmox-ve:~# vzlist<br />
CTID NPROC STATUS IP_ADDR HOSTNAME<br />
108 23 running 192.168.9.20 ubuntu-1404.proxmox.com<br />
109 18 running 192.168.9.21 centos63-64.proxmox.com<br />
111 15 running 192.168.9.23 centos5-64.proxmox.com<br />
114 14 running 192.168.9.30 deb6-32.proxmox.com<br />
115 15 running 192.168.9.31 deb7-32.proxmox.com<br />
122 14 running 192.168.9.36 deb5.proxmox.com<br />
<br />
Enter the container:<br />
proxmox-ve:~# vzctl enter 108<br />
<br />
root@ubuntu-1404:/# nano /etc/init/tty1.conf<br />
<br />
Change/create the file so that it looks exactly like this:<br />
<br />
# tty1 - getty<br />
#<br />
# This service maintains a getty on tty1 from the point the system is<br />
# started until it is shut down again.<br />
<br />
start on stopped rc RUNLEVEL=[2345]<br />
<br />
stop on runlevel [!2345]<br />
<br />
respawn<br />
exec /sbin/getty -8 38400 tty1<br />
<br />
Save the changes and shutdown/start the container via Console.<br />
<br />
== Centos ==<br />
=== Centos 5 / 7 ===<br />
[[Image:Screen-Centos-5-OpenVZ-console.png|thumb]] <br />
Login via SSH (or use the VNC "Shell") to your Proxmox VE host and 'vzctl enter CTID' the container:<br />
<br />
List all running containers:<br />
<br />
proxmox-ve:~# vzlist<br />
CTID NPROC STATUS IP_ADDR HOSTNAME<br />
108 23 running 192.168.9.20 ubuntu-1204.proxmox.com<br />
109 18 running 192.168.9.21 centos63-64.proxmox.com<br />
111 15 running 192.168.9.23 centos5-64.proxmox.com<br />
114 14 running 192.168.9.30 deb6-32.proxmox.com<br />
115 15 running 192.168.9.31 deb7-32.proxmox.com<br />
122 14 running 192.168.9.36 deb5.proxmox.com<br />
<br />
Enter the container:<br />
proxmox-ve:~# vzctl enter 111<br />
<br />
root@centos5-64:/# nano /etc/inittab<br />
<br />
On the bottom of /etc/inittab just add the following line:<br />
1:2345:respawn:/sbin/agetty tty1 38400 linux<br />
<br />
Save the changes and shutdown/start the container via Console.<br />
=== Centos 6 ===<br />
[[Image:Screen-Centos-6-OpenVZ-console.png|thumb]] <br />
Login via SSH (or use the VNC "Shell") to your Proxmox VE host and 'vzctl enter CTID' the container:<br />
<br />
List all running containers:<br />
<br />
proxmox-ve:~# vzlist<br />
CTID NPROC STATUS IP_ADDR HOSTNAME<br />
108 23 running 192.168.9.20 ubuntu-1204.proxmox.com<br />
109 18 running 192.168.9.21 centos63-64.proxmox.com<br />
111 15 running 192.168.9.23 centos5-64.proxmox.com<br />
114 14 running 192.168.9.30 deb6-32.proxmox.com<br />
115 15 running 192.168.9.31 deb7-32.proxmox.com<br />
122 14 running 192.168.9.36 deb5.proxmox.com<br />
<br />
Enter the container:<br />
proxmox-ve:~# vzctl enter 109<br />
<br />
root@centos63-64:/# nano /etc/init/tty.conf<br />
<br />
Change/create the file so that it looks exactly like this:<br />
# This service maintains a getty on tty1 from the point the system is<br />
# started until it is shut down again.<br />
<br />
start on stopped rc RUNLEVEL=[2345]<br />
<br />
stop on runlevel [!2345]<br />
<br />
respawn<br />
exec /sbin/agetty -8 tty1 38400<br />
<br />
Either run "start tty" without rebooting the container, or save the changes and shutdown/start the container via Console.<br />
<br />
== Troubleshooting ==<br />
If you still want to use the previous method (vzctl enter CTID), you can open the host "Shell" and just type 'vzctl enter CTID' to manage your containers.<br />
==Java browser plugin==<br />
The console is using a Java applet, therefore you need the latest Oracle (Sun) Java browser plugin installed and enabled in your browser (Google Chrome and Firefox preferred). If you are on a Windows desktop, just go to java.com; if you run a Linux desktop, you need to make sure that you run the Oracle (Sun) Java plugin instead of the default OpenJDK. For Debian/Ubuntu based desktops, see [[Java_Console_(Ubuntu)]] <br />
<br />
== Modifying your templates ==<br />
<br />
If you don't want to commit the changes above for every single CT you create, you can simply update the templates accordingly. For this, place the file you want to insert into your template (like etc/inittab for Debian containers) into your template folder and update the template. <br />
The following is specific to CentOS 6; just replace the filename/path and contents with the appropriate contents found above.<br />
<pre>cd [TEMPLATE LOCATION] #Modify this<br />
<br />
mkdir -p etc/init<br />
<br />
cat <<EOF >etc/init/tty.conf<br />
# This service maintains a getty on tty1 from the point the system is<br />
# started until it is shut down again.<br />
<br />
start on stopped rc RUNLEVEL=[2345]<br />
<br />
stop on runlevel [!2345]<br />
<br />
respawn<br />
exec /sbin/agetty -8 tty1 38400<br />
EOF<br />
<br />
gunzip centos-6-standard_6.3-1_amd64.tar.gz<br />
tar -rf centos-6-standard_6.3-1_amd64.tar etc<br />
gzip centos-6-standard_6.3-1_amd64.tar<br />
<br />
rm etc/init/tty.conf<br />
rmdir -p etc/init</pre><br />
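<br />
You can verify that the file really ended up in the template archive, for example:<br />
<br />
 tar -tzf centos-6-standard_6.3-1_amd64.tar.gz | grep tty.conf<br />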
<br />
[[Category: HOWTO]] [[Category: Technology]]</div>Simon Smithhttps://pve.proxmox.com/mediawiki/index.php?title=Windows_VirtIO_Drivers&diff=7034Windows VirtIO Drivers2015-02-19T19:58:14Z<p>Simon Smith: </p>
<hr />
<div>==Introduction==<br />
VirtIO Drivers are paravirtualized drivers for [[KVM|kvm]]/Linux (see http://www.linux-kvm.org/page/Virtio). In short, they give virtual machines direct (paravirtualized) access to devices and peripherals, instead of slower, emulated ones. <br><br />
A quite detailed explanation of VirtIO drivers can be found here: http://www.ibm.com/developerworks/library/l-virtio.<br />
<br />
At the moment the following kinds of devices are supported:<br />
* block (disks drives), see [[Paravirtualized Block Drivers for Windows]]<br />
* network (ethernet cards), see [[Paravirtualized Network Drivers for Windows|Paravirtualized Network Drivers for Windows]]<br />
* balloon (dynamic memory management), see [[Dynamic Memory Management]]<br />
<br />
Using VirtIO drivers you can usually maximize performance, but this depends on the availability and status of guest VirtIO drivers for your guest OS and platform.<br />
<br />
== Windows OS support ==<br />
<br />
Recent Linux kernels already include those drivers, so any distribution running in a kvm VM should recognize the virtio devices exposed by the kvm hypervisor. All current Windows OSes, however, need special drivers to use virtio devices. Microsoft does not provide them, so someone kindly managed to make virtio drivers available for Windows systems as well. <br />
<br />
See <br />
<br />
*http://www.linux-kvm.org/page/WindowsGuestDrivers <br />
*http://www.linux-kvm.org/page/WindowsGuestDrivers/Download_Drivers<br />
<br />
Following the info on those pages you can find: <br />
<br />
*a git repository: https://github.com/YanVugenfirer/kvm-guest-drivers-windows <br />
*:this is the source for the Windows drivers and is hosted in a repository on GitHub. Anonymous users can clone the repository <br />
<br />
=== Packaged sets of drivers ===<br />
<br />
Each of those "packaged" sets of drivers available is labelled with a numeric release, and differs in features &amp; bugs as it improves over time. <br />
*Most recent set is virtio-win-0.1-100, with updates to virtio drivers as of 13 Jan 2015. (see [[Windows_VirtIO_Drivers/Changelog|changelog]])<br />
*Previous versions could still be useful when, as it happens, some Windows VM shows instability or incompatibility with latest drivers set.<br />
*a web repository http://alt.fedoraproject.org/pub/alt/virtio-win/ <br />
*:here you can find both stable and latest sets of drivers <br />
*:*in source format (.zip) <br />
*:*in compiled format (.iso) <br />
*:*'''Those binary drivers are digitally signed, and will work on 64-bit versions of Windows'''<br />
<br />
==== Choose the right driver ====<br />
<br />
{{Template:VirtIOFedoraISOFolderNames}}<br />
<br />
==See also==<br />
* [[Windows_VirtIO_Drivers/Changelog]]<br />
* [[Paravirtualized Block Drivers for Windows]]<br />
* [[Paravirtualized Network Drivers for Windows]]<br />
* [[Dynamic Memory Management]]</div>Simon Smithhttps://pve.proxmox.com/mediawiki/index.php?title=Upgrade_from_2.3_to_3.0&diff=5590Upgrade from 2.3 to 3.02013-05-15T19:07:58Z<p>Simon Smith: </p>
<hr />
<div>'''Note''': 3.0 is currently in RC2 status<br />
<br />
=Introduction=<br />
<br />
There are two possibilities to move from 2.3 to 3.0:<br />
<br />
*In-place upgrade via script (recommended)<br />
*New installation on new hardware (and restore VMs from backup)<br />
<br />
=In-place upgrade via script=<br />
Before you start, make sure you have a valid backup of all your settings, VMs and CTs. If the upgrade fails, you should be able to do a clean ISO installation and restore all VMs and CTs from backup.<br />
<br />
We provide an upgrade script which does the following:<br />
*Dist-upgrade from Squeeze to Wheezy<br />
*Installation of Proxmox VE 3.0<br />
*Optional: Purge obsolete packages<br />
<br />
==Requirements==<br />
*Up-to-date Proxmox VE 2.3<br />
*Backup of all VMs and CTs<br />
*No running VMs or CTs<br />
*Enough free space in your /root file-system<br />
<br />
==Start the upgrade==<br />
*Make sure you have an up-to-date 2.3 system<br />
<br />
*Login to your Proxmox VE host with SSH and download the script:<br />
<br />
wget http://download.proxmox.com/debian/dists/wheezy/pve-upgrade-2.3-to-3.0<br />
<br />
*make the script executable:<br />
<br />
chmod +x pve-upgrade-2.3-to-3.0<br />
<br />
*Stop all your VMs and Containers<br />
<br />
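If you have many guests, stopping them from the host shell can be scripted; a rough sketch using the standard qm and vzctl tools (adjust to your needs and verify everything is stopped afterwards):<br />
<br />
 for vmid in $(qm list | awk 'NR>1 {print $1}'); do qm shutdown $vmid; done<br />
 for ctid in $(vzlist -H -o ctid); do vzctl stop $ctid; done<br />
<br />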
Depending on your Internet connection and hardware the upgrade will take some time (if everything is fast it will take about 10 minutes). The script is idempotent, so you can safely restart it if something goes wrong. <br />
<br />
The script can download the packages (around 265 MB) before the packages are installed, so this is a good method to minimize downtime, especially if your internet connection is slow. <br />
<br />
./pve-upgrade-2.3-to-3.0 --download-only<br />
<br />
If you are really ready for the upgrade, run the script:<br />
<br />
./pve-upgrade-2.3-to-3.0<br />
<br />
The script writes a detailed log to 'pve-upgrade.log'.<br />
<br />
*Reboot.<br />
<br />
*Optional: Purge obsolete packages to save disk space (this removes all non-default packages, so run this ONLY if you did not install anything else. Do not run this on customized installations like OVH or similar)<br />
<br />
./pve-upgrade-2.3-to-3.0 --purge<br />
<br />
=Cluster upgrades=<br />
<br />
*stop or migrate all VM/CT to another node (live or offline)<br />
*run the upgrade and reboot<br />
*migrate or start the VM/CT on this 3.0 node and check if everything is working as expected. Live migration should also work.<br />
*do the same with all other nodes, step by step<br />
<br />
=Troubleshooting=<br />
==Multipath==<br />
If you use multipath, you need to adapt your multipath.conf - 'selector' is now called 'path_selector'<br />
<br />
==GUI/Browser shows old version==<br />
Reload the page and/or empty browser cache.<br />
<br />
==Minor Grub2 issue==<br />
After running the upgrade, the grub menu on reboot still shows the 1.98 grub version from Squeeze. To upgrade this to 1.99, just run:<br />
grub-install '(hd0)'<br />
<br />
==apache2 warnings==<br />
Apache2 is not needed anymore and can be removed.<br />
<br />
apt-get purge apache2*<br />
<br />
=New installation on new hardware=<br />
Install 3.0 on new servers and move your VMs via backup/restore. If you choose this method, you can do the move step by step and with minimum risk.<br />
=External links=<br />
*[http://www.debian.org/releases/stable/amd64/release-notes/ Release Notes for Debian 7.0 (wheezy), 64-bit PC]<br />
[[Category: HOWTO]][[Category: Installation]]</div>Simon Smithhttps://pve.proxmox.com/mediawiki/index.php?title=Win7_Guest_DHCP_not_working_fix&diff=5150Win7 Guest DHCP not working fix2013-02-19T14:06:56Z<p>Simon Smith: </p>
<hr />
<div>{{Note|Article about Proxmox VE 2.0}} <br />
<br />
If you have problems with a Win7 or Win2012 KVM guest where DHCP is not working, you can try to disable checksum offloading <br />
<pre>Disabling the checksum offloading the following way on your Windows PC solved the problem.<br />
<br />
1. Click Start - Search and type “regedit”.<br />
<br />
2. Go to the following registry key<br />
<br />
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters<br />
<br />
HKEY_LOCAL_MACHINE<br />
\SYSTEM<br />
\CurrentControlSet<br />
\Services<br />
\Tcpip<br />
\Parameters<br />
<br />
3. Add a DWORD(32bit) Value named "DisableTaskOffload" and set it to "1".<br />
4. Restart the Windows PC to make the changes happen.<br />
</pre> <br />
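<br />
The same value can also be set from an elevated command prompt instead of regedit; a minimal sketch (same key and DWORD value as above):<br />
<pre>reg add "HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters" /v DisableTaskOffload /t REG_DWORD /d 1 /f</pre><br />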
[[Category:Proxmox_VE_2.0]] [[Category:HOWTO]]</div>Simon Smithhttps://pve.proxmox.com/mediawiki/index.php?title=User:Simon_Smith&diff=5136User:Simon Smith2013-02-13T08:11:16Z<p>Simon Smith: </p>
<hr />
<div>I work in the uk and have been using virtualisation for many years now<br />
All our servers are virtual and soon our work PCs will be :)<br />
<br />
I currently develop applications in PHP, JavaScript, CSS, HTML, Java, iOS, Android and many more...<br />
<br />
Proxmox has been the heart of my development and testing...<br />
<br />
With the virtualisation of Windows machines, it saves time and money...</div>Simon Smithhttps://pve.proxmox.com/mediawiki/index.php?title=OpenVZ_Console&diff=4877OpenVZ Console2012-11-01T12:13:05Z<p>Simon Smith: </p>
<hr />
<div>=Introduction=<br />
Beginning with Proxmox VE 2.2, we introduced a new console view (with login capability). Especially for beginners it is not that easy to understand and manage containers, but with the new console this is a big step forward. The OpenVZ and KVM consoles now look quite similar.<br />
<br />
But as most OpenVZ templates have terminals disabled, you need to enable them first. This article describes the changes needed for an already running OpenVZ container.<br />
<br />
'''Note:'''<br />
<br />
All Debian templates created with the latest [[Debian Appliance Builder]] already include these changes; just download them via the GUI to your Proxmox VE storage (Debian 6 and 7 templates are up to date, 32 and 64 bit)<br />
<br />
=Debian=<br />
* this will work for all Debian releases:<br />
log in to the Proxmox host.<br />
<br />
edit all inittabs under /var/lib/vz/root/ :<br />
<pre><br />
nano /var/lib/vz/root/*/etc/inittab<br />
<br />
# add this<br />
1:2345:respawn:/sbin/getty 38400 tty1<br />
</pre><br />
<br />
<br />
== Debian Lenny 5.0 ==<br />
[[Image:Screen-Debian-5-OpenVZ-console.png|thumb]] <br />
Login via SSH (or use the VNC "Shell") to your Proxmox VE host and 'vzctl enter CTID' the container:<br />
<br />
List all running container:<br />
<br />
proxmox-ve:~# vzlist<br />
CTID NPROC STATUS IP_ADDR HOSTNAME<br />
108 23 running 192.168.9.20 ubuntu-1204.proxmox.com<br />
109 18 running 192.168.9.21 centos63-64.proxmox.com<br />
111 15 running 192.168.9.23 centos5-64.proxmox.com<br />
114 14 running 192.168.9.30 deb6-32.proxmox.com<br />
115 15 running 192.168.9.31 deb7-32.proxmox.com<br />
122 14 running 192.168.9.36 deb5.proxmox.com<br />
<br />
Enter the container:<br />
proxmox-ve:~# vzctl enter 122<br />
<br />
root@debian:/# nano /etc/inittab<br />
<br />
On the bottom of /etc/inittab just add the following line:<br />
1:2345:respawn:/sbin/getty 38400 tty1<br />
<br />
Save the changes and shutdown/start the container via Console.<br />
<br />
== Debian Squeeze 6.0 ==<br />
Same as Debian Lenny 5.0<br />
== Debian Wheezy 7.0 ==<br />
Same as Debian Lenny 5.0<br />
<br />
=Ubuntu=<br />
== Ubuntu 12.04 ==<br />
[[Image:Screen-Ubuntu-12.04-OpenVZ-console.png|thumb]] <br />
Login via SSH (or use the VNC "Shell") to your Proxmox VE host and 'vzctl enter CTID' the container:<br />
<br />
List all running container:<br />
<br />
proxmox-ve:~# vzlist<br />
CTID NPROC STATUS IP_ADDR HOSTNAME<br />
108 23 running 192.168.9.20 ubuntu-1204.proxmox.com<br />
109 18 running 192.168.9.21 centos63-64.proxmox.com<br />
111 15 running 192.168.9.23 centos5-64.proxmox.com<br />
114 14 running 192.168.9.30 deb6-32.proxmox.com<br />
115 15 running 192.168.9.31 deb7-32.proxmox.com<br />
122 14 running 192.168.9.36 deb5.proxmox.com<br />
<br />
Enter the container:<br />
proxmox-ve:~# vzctl enter 108<br />
<br />
root@ubuntu-1204:/# nano /etc/init/tty1.conf<br />
<br />
Change/Create the file that it looks exactly like this:<br />
<br />
# tty1 - getty<br />
#<br />
# This service maintains a getty on tty1 from the point the system is<br />
# started until it is shut down again.<br />
<br />
start on stopped rc RUNLEVEL=[2345]<br />
<br />
stop on runlevel [!2345]<br />
<br />
respawn<br />
exec /sbin/getty -8 38400 tty1<br />
<br />
Save the changes and shutdown/start the container via Console.<br />
<br />
== Ubuntu 10.04 ==<br />
Same as Ubuntu 12.04<br />
<br />
=Centos=<br />
== Centos 5 ==<br />
[[Image:Screen-Centos-5-OpenVZ-console.png|thumb]] <br />
Login via SSH (or use the VNC "Shell") to your Proxmox VE host and 'vzctl enter CTID' the container:<br />
<br />
List all running container:<br />
<br />
proxmox-ve:~# vzlist<br />
CTID NPROC STATUS IP_ADDR HOSTNAME<br />
108 23 running 192.168.9.20 ubuntu-1204.proxmox.com<br />
109 18 running 192.168.9.21 centos63-64.proxmox.com<br />
111 15 running 192.168.9.23 centos5-64.proxmox.com<br />
114 14 running 192.168.9.30 deb6-32.proxmox.com<br />
115 15 running 192.168.9.31 deb7-32.proxmox.com<br />
122 14 running 192.168.9.36 deb5.proxmox.com<br />
<br />
Enter the container:<br />
proxmox-ve:~# vzctl enter 111<br />
<br />
root@centos5-64:/# nano /etc/inittab<br />
<br />
On the bottom of /etc/inittab just add the following line:<br />
1:2345:respawn:/sbin/agetty tty1 38400 linux<br />
<br />
Save the changes and shutdown/start the container via Console.<br />
== Centos 6 ==<br />
[[Image:Screen-Centos-6-OpenVZ-console.png|thumb]] <br />
Login via SSH (or use the VNC "Shell") to your Proxmox VE host and 'vzctl enter CTID' the container:<br />
<br />
List all running container:<br />
<br />
proxmox-ve:~# vzlist<br />
CTID NPROC STATUS IP_ADDR HOSTNAME<br />
108 23 running 192.168.9.20 ubuntu-1204.proxmox.com<br />
109 18 running 192.168.9.21 centos63-64.proxmox.com<br />
111 15 running 192.168.9.23 centos5-64.proxmox.com<br />
114 14 running 192.168.9.30 deb6-32.proxmox.com<br />
115 15 running 192.168.9.31 deb7-32.proxmox.com<br />
122 14 running 192.168.9.36 deb5.proxmox.com<br />
<br />
Enter the container:<br />
proxmox-ve:~# vzctl enter 109<br />
<br />
root@centos63-64:/# nano /etc/init/tty.conf<br />
<br />
Change/Create the file that it looks exactly like this:<br />
# This service maintains a getty on tty1 from the point the system is<br />
# started until it is shut down again.<br />
<br />
start on stopped rc RUNLEVEL=[2345]<br />
<br />
stop on runlevel [!2345]<br />
<br />
respawn<br />
exec /sbin/agetty -8 tty1 38400<br />
<br />
Save the changes and shutdown/start the container via Console.<br />
=Troubleshooting=<br />
If you still want to use the previous method (vzctl enter CTID), you can open the host "Shell" and just type 'vzctl enter CTID' to manage your containers.<br />
==Java browser plugin==<br />
The console is using a Java applet, therefore you need latest Oracle (Sun) Java browser plugin installed and enabled in your browser (Google Chrome and Firefox preferred). If you are on Windows desktop, just go to java.com, if you run a Linux desktop you need to make sure that you run Oracle (Sun) Java plugin instead of the default openjdk. For Debian/Ubuntu based desktops, see [[Java_Console_(Ubuntu)]] <br />
[[Category: HOWTO]][[Category: Technology]][[Category: Proxmox VE 2.0]]</div>Simon Smith