FAQ

From Proxmox VE

Where possible, detailed answers should link to internal wiki pages.
==General==

===What is a container, CT, VE, Virtual Private Server, VPS?===
:See [[Container and Full Virtualization]]

===What is a KVM guest (KVM VM)?===
:A KVM guest (or KVM VM) is a guest system running virtualized under Proxmox VE with KVM.

===What is a Virtual Appliance?===
:See [[Overview]]

===What distribution is Proxmox VE based on?===
:Proxmox VE is based on Debian GNU/Linux.

==Installation and upgrade==

===Where can I find installation instructions?===
:See [[Installation]]

===Proxmox VE command line tools===
:See [[Command line tools]]

==Hardware==

===CPU===

====Will Proxmox VE run on a 32-bit processor?====
:Proxmox VE works only on 64-bit CPUs (AMD or Intel). There is no plan to support 32-bit platforms.

===Supported CPU chips===

====Intel====
* [[Media:Intel-VT-LG775.jpg|Intel VT - LG775 chips]]

====AMD====

==Networking==

===How do I configure bridged networking in an OpenVZ Ubuntu/Debian container?===
<ol>
<li>In the web GUI, under the Virtual Machine configuration, go to the Network tab.
<li>Remove the IP address for venet and save. ("Bridged Ethernet Devices" will appear.)
<li>SSH into your host system and enter the container you want to set up bridged networking for:
 # vzctl enter <VMID>
<li>Edit /etc/network/interfaces using the following format and save (replace with the settings for your network):
<pre>
  auto lo
  iface lo inet loopback

  auto eth0
  iface eth0 inet static
        address 10.0.0.17
        netmask 255.255.255.0
        network 10.0.0.0
        broadcast 10.0.0.255
        gateway 10.0.0.10
</pre>
<li>Shut down the container.
<li>Go back to the web GUI and, under "Bridged Ethernet Devices", configure eth0 to vmbr0 and save. (A MAC address will be assigned automatically.)
<li>Start the container.
</ol>
Finally, check that networking is working by entering the guest and viewing the output of ifconfig, as sketched below.
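
A minimal check (a sketch; the VMID 101 and the addresses are simply the example values used above):
<pre>
# on the Proxmox VE host: enter the container
vzctl enter 101

# inside the container: verify that eth0 has the configured address
ifconfig eth0

# verify connectivity to the gateway and to the outside
ping -c 3 10.0.0.10
ping -c 3 proxmox.com
</pre>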
==Virtualization==

===Why do you recommend 32-bit guests over 64-bit guests?===
:64-bit makes sense only if you need more than 4 GB of memory.
:32-bit guests use less memory in certain situations
::e.g. a standard installation of apache2 in a 64-bit container consumes much more memory than in a 32-bit one.

==Troubleshooting==

===I can't switch virtual consoles in Linux KVM guests with alt-F1, alt-F2...===

The VNC viewer does not pass some key combinations, or they may be intercepted by your operating system.

To send custom key combinations to the guest, go to "Monitor" in the Virtual Machine Configuration for the given guest and use the "sendkey" command.

For example, to switch to the third console (tty3) you would use:

 sendkey alt-f3

===How can I send sysrq to Linux KVM guests?===

Similarly to the above, go to "Monitor" in the Virtual Machine Configuration for the given guest and use the "sendkey" command.

For example, to issue an "Emergency Sync", you would use:

 sendkey alt-sysrq-s

In the VNC viewer for the given guest you should see:

 SysRq : Emergency Sync

You should also see this entry if you run "dmesg" in the guest.

See also http://en.wikipedia.org/wiki/Magic_SysRq_key for a full reference of possible combinations.

===How can I access Linux guests through a serial console?===

Sometimes it is necessary to access the guest through a serial console:
* you lost network access to the guest and VNC is either too slow for you or doesn't have the features you need (e.g. easy copy/paste between other terminals)
* your guest freezes or kernel panics and you want to debug it, but it's impossible to capture all messages on the VNC screen
* you are familiar with <code>xm console <guest></code> from Xen and want to use a similar feature here

The necessary steps are:
* on the Proxmox VE host, in the guest's configuration file <code>/etc/qemu-server/<VMID>.conf</code>, add:

 args: -serial unix:/var/run/qemu-server/<VMID>.serial,server,nowait

This will expose the console as the socket file <code>/var/run/qemu-server/<VMID>.serial</code>, which can be accessed with minicom or another serial communication program.

An alternative is to add:

 args: -serial tcp:localhost:6000,server,nowait

With this, you can connect to the guest's serial console with telnet. Note that with telnet, any passwords will be visible on the screen.
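
For example, from the Proxmox VE host (a minimal sketch, using the port 6000 configured in the line above):

 # attach to the guest's serial console over TCP
 telnet localhost 6000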

* on the guest, in /etc/inittab, look for lines similar to the ones below and make sure you have "ttyS0" there - this will be your serial console:

 5:2345:respawn:/sbin/mingetty tty5
 6:2345:respawn:/sbin/mingetty ttyS0

If you want kernel messages to be shown on both the serial and VGA consoles, you have to add kernel parameters to your bootloader's configuration. For GRUB, these are the "console" entries:

 kernel (hd0,0)/boot/vmlinuz root=LABEL=guest-root console=ttyS0 console=tty0

To access the guest from minicom, configure it to use a path such as <code>unix#/var/run/qemu-server/<VMID>.serial</code> in "cOnfigure Minicom -> Serial port setup -> Serial Device".

To use multiple minicom configurations for several guests, create a file like /etc/minicom/minirc.someguest for each of your guests, with contents:

 pu port            unix#/var/run/qemu-server/<VMID>.serial
 pu minit
 pu mreset

Then, start the console with:

 minicom someguest

See also http://www.proxmox.com/forum/showthread.php?p=5615
 
 
 
===How can I assign a physical disk to a VM?===
 
Add it first in the web interface, then use:

 qm set <vmid> -ide# /dev/sdb

Or, preferably:

 qm set <vmid> -ide# /dev/disk/by-id/[your disk ID]

The by-id form is preferred, since the plain drive letter can change (should you add a drive), which might have unintended consequences.
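
To find a stable disk ID, list the by-id names on the host (a sketch, assuming the disk in question is currently /dev/sdb):

 ls -l /dev/disk/by-id/ | grep sdb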
 
 
 
Also see /etc/qemu-server/<vmid>.conf
 
 
 
===How can I assign a physical disk to a CT?===
 
See http://wiki.openvz.org/Bind_mounts
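
In short, the approach described there is to bind-mount a host directory from a per-container mount script that OpenVZ runs when the container starts. A minimal sketch (the VMID 101 and the /mnt/disk paths are placeholders for your own values):

<pre>
#!/bin/bash
# /etc/vz/conf/101.mount - executed on the host when CT 101 starts
source /etc/vz/vz.conf
source ${VE_CONFFILE}
# make the host's /mnt/disk visible inside the container at /mnt/disk
# (the target directory must already exist inside the container)
mount -n --bind /mnt/disk ${VE_ROOT}/mnt/disk
</pre>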
 

Latest revision as of 11:23, 16 July 2019

Note: New FAQs are appended to the bottom of this section.
  1. What distribution is Proxmox VE based on?

    Proxmox VE is based on Debian GNU/Linux

  2. What license does the Proxmox VE project use?

    Proxmox VE code is licensed under the GNU Affero General Public License, version 3.

  3. Will Proxmox VE run on a 32-bit processor?

    Proxmox VE works only on 64-bit CPUs (AMD or Intel). There is no plan for 32-bit for the platform.

    Note: VMs and Containers can be either 32-bit or 64-bit.
  4. Does my CPU support virtualization?

    To check if your CPU is virtualization compatible, check for the vmx or svm tag in this command output:

    egrep '(vmx|svm)' /proc/cpuinfo
  5. Supported Intel CPUs

    64-bit processors with Intel Virtualization Technology (Intel VT-x) support. (List of processors with Intel VT and 64-bit)

  6. Supported AMD CPUs

    64-bit processors with AMD Virtualization Technology (AMD-V) support.

  7. What is a container, CT, VE, Virtual Private Server, VPS?

    Operating-system-level virtualization is a server-virtualization method where the kernel of an operating system allows for multiple isolated user-space instances, instead of just one. We call such instances containers. As containers use the host’s kernel they are limited to Linux guests.

  8. What is a QEMU/KVM guest (or VM)?

    A QEMU/KVM guest (or VM) is a guest system running virtualized under Proxmox VE using QEMU and the Linux KVM kernel module.

  9. What is QEMU?

    QEMU is a generic and open source machine emulator and virtualizer. QEMU uses the Linux KVM kernel module to achieve near native performance by executing the guest code directly on the host CPU. It is not limited to Linux guests but allows arbitrary operating systems to run.

  10. How long will my Proxmox VE version be supported?

    Proxmox VE versions are supported at least as long as the corresponding Debian Version is oldstable. Proxmox VE uses a rolling release model and using the latest stable version is always recommended.

    Proxmox VE Version   Debian Version       First Release   Debian EOL   Proxmox EOL
    Proxmox VE 6.x       Debian 10 (Buster)   2019-07         tba          tba
    Proxmox VE 5.x       Debian 9 (Stretch)   2017-07         2020-07      2020-07
    Proxmox VE 4.x       Debian 8 (Jessie)    2015-10         2018-06      2018-06
    Proxmox VE 3.x       Debian 7 (Wheezy)    2013-05         2016-04      2017-02
    Proxmox VE 2.x       Debian 6 (Squeeze)   2012-04         2014-05      2014-05
    Proxmox VE 1.x       Debian 5 (Lenny)     2008-10         2012-03      2013-01

  11. How can I upgrade Proxmox VE to the next release?

    Minor version upgrades, for example upgrading from Proxmox VE 5.1 to 5.2, can be done just like any normal update, either through the Web GUI Node → Updates panel or through the CLI with:

    apt update
    apt full-upgrade
    Note: Always ensure you have correctly set up the package repositories, and only continue with the actual upgrade if apt update did not hit any error.

    Major version upgrades, for example going from Proxmox VE 4.4 to 5.0, are also supported. They must be carefully planned and tested and should never be started without having a current backup ready. Although the specific upgrade steps depend on your respective setup, we provide general instructions and advice on how an upgrade should be performed:

    Upgrade from Proxmox VE 5.x to 6.0
    Upgrade from Proxmox VE 4.x to 5.0
    Upgrade from Proxmox VE 3.x to 4.0
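
    Before starting a major upgrade, it helps to record the currently installed package versions and to refresh the package index (a minimal sketch; pveversion ships with Proxmox VE):

    # show the installed Proxmox VE packages and their versions
    pveversion -v
    # refresh the configured repositories; resolve any errors before upgrading
    apt update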

  12. LXC vs LXD vs Proxmox Containers vs Docker

    LXC is a userspace interface for the Linux kernel containment features. Through a powerful API and simple tools, it lets Linux users easily create and manage system containers. LXC, as well as the former OpenVZ, aims at system virtualization, i.e. it allows you to run a complete OS inside a container, where you log in via ssh, add users, run apache, etc.

    LXD builds on top of LXC to provide a new, better user experience. Under the hood, LXD uses LXC through liblxc and its Go binding to create and manage the containers. It’s basically an alternative to LXC’s tools and distribution template system, with the added features that come from being controllable over the network.

    Proxmox Containers also aims at system virtualization, and thus uses LXC as the basis of its own container offering. The Proxmox Container Toolkit is called pct, and is tightly coupled with Proxmox VE. That means that it is aware of the cluster setup, and it can use the same network and storage resources as fully virtualized VMs. You can even use the Proxmox VE firewall, create and restore backups, or manage containers using the HA framework. Everything can be controlled over the network using the Proxmox VE API.
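
    For illustration, a minimal pct workflow could look like this (a sketch; the VMID 101 and the template file name are placeholders for whatever is available on your storage):

    # create a container from a downloaded template, attached to bridge vmbr0
    pct create 101 local:vztmpl/debian-10-standard_10.0-1_amd64.tar.gz \
        --hostname ct101 --memory 512 \
        --net0 name=eth0,bridge=vmbr0,ip=dhcp
    # start it and open a shell inside
    pct start 101
    pct enter 101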

    Docker aims at running a single application in a contained environment. Hence you’re managing a Docker instance from the host with the Docker toolkit. It is not recommended to run Docker directly on your Proxmox VE host.

    Note: You can, however, install and use Docker inside a Proxmox VE QEMU VM, and thus get the benefit of software containerization combined with the very strong isolation that VMs provide.