FAQ

==General==

===What is a container, CT, VE, Virtual Private Server, VPS?===
:See [[Container and Full Virtualization]]

===What is a KVM guest (KVM VM)?===
:A KVM guest or KVM VM is a guest system running virtualized under Proxmox VE with KVM.

===What is a Virtual Appliance?===
:See [[Overview]]

==Installation and upgrade==

===Where can I find installation instructions?===
:See [[Installation]]

===Proxmox VE command line tools===
:See [[Command line tools]]

==Hardware==

===CPU===

====Will Proxmox VE run on a 32-bit processor?====
:Proxmox VE works only on 64-bit CPUs (AMD or Intel). There is no plan for a 32-bit version of the platform.

===Supported CPU chips===

====Intel====
:64-bit processors with Intel Virtualization Technology (Intel VT-x) support.
* [[Media:Intel-VT-LG775.jpg|Intel VT - LG775 chips]]

====AMD====
:64-bit processors with AMD Virtualization Technology (AMD-V) support.

==Networking==

===How do I configure bridged networking in an OpenVZ Ubuntu/Debian container?===
<ol>
<li>In the web GUI, under Virtual Machine configuration, go to the network tab.</li>
<li>Remove the IP address for venet and save ("Bridged Ethernet Devices" will appear).</li>
<li>SSH into your host system and enter the container you want to set bridged networking for:
<pre>
# vzctl enter <VMID>
</pre>
</li>
<li>Edit /etc/network/interfaces using the following format and save (replace with the settings for your network):
<pre>
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
        address 10.0.0.17
        netmask 255.255.255.0
        network 10.0.0.0
        broadcast 10.0.0.255
        gateway 10.0.0.10
</pre>
</li>
<li>Shut down the container.</li>
<li>Go back to the web GUI and, under "Bridged Ethernet Devices", configure eth0 to vmbr0 and save (a MAC address will be assigned automatically).</li>
<li>Start the container.</li>
</ol>
Finally, check that networking is working by entering the guest and viewing the output of ifconfig.
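
A quick way to verify from the host, assuming vzctl and the example addresses above (10.0.0.17 on eth0, gateway 10.0.0.10):
<pre>
# show the container's eth0 configuration
vzctl exec <VMID> ifconfig eth0
# check that the gateway is reachable from inside the container
vzctl exec <VMID> ping -c 3 10.0.0.10
</pre>
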
==Troubleshooting==

===I can't switch virtual consoles in Linux KVM guests with alt-F1, alt-F2...===

The VNC viewer does not pass some key combinations, or they may be intercepted by your operating system.

To send custom key combinations to the guest, go to "Monitor" in the Virtual Machine Configuration for the given guest and use the "sendkey" command.

For example, to switch to the third console (tty3) you would use:

 sendkey alt-f3
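
The same keystroke can also be sent from the host shell with the qm sendkey subcommand (a short sketch; <VMID> is your guest's ID):

 # send alt-f3 to the guest from the Proxmox VE host
 qm sendkey <VMID> alt-f3
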
===How can I send sysrq to Linux KVM guests?===

Similarly to the above, go to "Monitor" in the Virtual Machine Configuration for the given guest and use the "sendkey" command.

For example, to issue an "Emergency Sync", you would use:

 sendkey alt-sysrq-s

In the VNC viewer for the given guest you should see:

 SysRq : Emergency Sync

You should also see this entry if you run "dmesg" in this guest.

See also http://en.wikipedia.org/wiki/Magic_SysRq_key for a full reference of possible combinations.
===How can I access Linux guests through a serial console?===

Sometimes it is necessary to access the guest through a serial console:
* you lost network access to the guest, and VNC is either too slow for you or doesn't have the features you need (e.g. easy copy/paste between other terminals)
* your guest freezes or kernel panics and you want to debug it, but it's impossible to capture all messages on the VNC screen
* you are familiar with <code>xm console <guest></code> from Xen and you want to use a similar feature here

The necessary steps are:
* on the Proxmox VE host, in the guest's configuration file <code>/etc/qemu-server/<VMID>.conf</code>, add:

 args: -serial unix:/var/run/qemu-server/<VMID>.serial,server,nowait

This will open a console on the <code>/var/run/qemu-server/<VMID>.serial</code> socket file, which can be accessed by minicom or another serial communication program.

An alternative is to add:

 args: -serial tcp:localhost:6000,server,nowait

With this, you can connect to the guest's serial console with telnet. Note that with telnet, any passwords will be visible on the screen.

* on the guest, in /etc/inittab, look for lines similar to the ones below and make sure one of them uses "ttyS0" - this will be your serial console:

 5:2345:respawn:/sbin/mingetty tty5
 6:2345:respawn:/sbin/mingetty ttyS0

If you want kernel messages to be shown on both the serial and VGA consoles, you have to add a kernel parameter in your bootloader's configuration. For grub, it would be these "console" entries:

 kernel (hd0,0)/boot/vmlinuz root=LABEL=guest-root console=ttyS0 console=tty0

To access the guest from minicom, configure it to use a path such as <code>unix#/var/run/qemu-server/<VMID>.serial</code> in "cOnfigure Minicom -> Serial port setup -> Serial Device".

To use multiple minicom configurations for several guests, create a file like /etc/minicom/minirc.someguest for each of your guests, with contents:

 pu port            unix#/var/run/qemu-server/<VMID>.serial
 pu minit
 pu mreset

Then start the console with:

 minicom someguest

See also http://www.proxmox.com/forum/showthread.php?p=5615
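
As an alternative to minicom, a generic relay tool such as socat can attach your terminal to the serial socket (a sketch, assuming socat is installed on the host):

 # connect the current terminal to the guest's serial socket
 socat STDIO,raw,echo=0 UNIX-CONNECT:/var/run/qemu-server/<VMID>.serial
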
===How can I assign a physical disk to a VM?===
Add it first in the web interface, then use:

 qm set <vmid> -ide# /dev/sdb

Or, preferably, use the persistent device path:

 qm set <vmid> -ide# /dev/disk/by-id/[your disk ID]

...since having the drive letter change (should you add a drive) might have unintended consequences.

Also see /etc/qemu-server/<vmid>.conf
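
A hypothetical worked example (the VM ID 101, slot ide1, and the disk ID below are placeholders - list the IDs available on your host first):

 # list stable device paths, then attach the one matching your disk
 ls -l /dev/disk/by-id/
 qm set 101 -ide1 /dev/disk/by-id/ata-WDC_WD2003FYYS-02W0B0_WD-WMAY01234567
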
===How can I assign a physical disk to a CT?===
See http://wiki.openvz.org/Bind_mounts

==Frequently Asked Questions==

Note: New FAQs are appended to the bottom of this section.
  1. What distribution is Proxmox VE based on?

    Proxmox VE is based on Debian GNU/Linux.

  2. What license does the Proxmox VE project use?

    Proxmox VE code is licensed under the GNU Affero General Public License, version 3.

  3. Will Proxmox VE run on a 32-bit processor?

    Proxmox VE works only on 64-bit CPUs (AMD or Intel). There is no plan for a 32-bit version of the platform.

    Note: VMs and containers can be both 32-bit and 64-bit.
  4. Does my CPU support virtualization?

    To check if your CPU is virtualization-capable, look for the vmx or svm flag in the output of this command:

    egrep '(vmx|svm)' /proc/cpuinfo
  5. Supported Intel CPUs

    64-bit processors with Intel Virtualization Technology (Intel VT-x) support. (List of processors with Intel VT and 64-bit)

  6. Supported AMD CPUs

    64-bit processors with AMD Virtualization Technology (AMD-V) support.

  7. What is a container/virtual environment (VE)/virtual private server (VPS)?

    In the context of containers, these terms all refer to the concept of operating-system-level virtualization. Operating-system-level virtualization is a method of virtualization in which the kernel of an operating system allows for multiple isolated instances that all share that kernel. When referring to LXC, we call such instances containers. Because containers use the host’s kernel rather than emulating a full operating system, they require less overhead but are limited to Linux guests.
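
    The shared kernel is easy to observe directly (a sketch, assuming a container with ID 101 managed by pct): the kernel version reported inside the container matches the host's.

    # on the Proxmox VE host
    uname -r
    # inside the container - prints the same version
    pct exec 101 -- uname -r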

  8. What is a QEMU/KVM guest (or VM)?

    A QEMU/KVM guest (or VM) is a guest system running virtualized under Proxmox VE using QEMU and the Linux KVM kernel module.

  9. What is QEMU?

    QEMU is a generic and open source machine emulator and virtualizer. QEMU uses the Linux KVM kernel module to achieve near native performance by executing the guest code directly on the host CPU. It is not limited to Linux guests but allows arbitrary operating systems to run.
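
    To confirm that a running VM is actually using KVM acceleration, you can query the QEMU monitor (a sketch, assuming a running VM with ID 100):

    qm monitor 100
    # at the qm> prompt:
    info kvm
    # expected output: kvm support: enabled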

  10. How long will my Proxmox VE version be supported?

    Proxmox VE versions are supported at least as long as the corresponding Debian version is oldstable. Proxmox VE uses a rolling release model, and using the latest stable version is always recommended.

{| class="wikitable"
! Proxmox VE Version !! Debian Version !! First Release !! Debian EOL !! Proxmox EOL
|-
| Proxmox VE 8 || Debian 12 (Bookworm) || 2023-06 || tba || tba
|-
| Proxmox VE 7 || Debian 11 (Bullseye) || 2021-07 || 2024-07 || 2024-07
|-
| Proxmox VE 6 || Debian 10 (Buster) || 2019-07 || 2022-09 || 2022-09
|-
| Proxmox VE 5 || Debian 9 (Stretch) || 2017-07 || 2020-07 || 2020-07
|-
| Proxmox VE 4 || Debian 8 (Jessie) || 2015-10 || 2018-06 || 2018-06
|-
| Proxmox VE 3 || Debian 7 (Wheezy) || 2013-05 || 2016-04 || 2017-02
|-
| Proxmox VE 2 || Debian 6 (Squeeze) || 2012-04 || 2014-05 || 2014-05
|-
| Proxmox VE 1 || Debian 5 (Lenny) || 2008-10 || 2012-03 || 2013-01
|}

  11. How can I upgrade Proxmox VE to the next point release?

    Minor version upgrades, for example upgrading from Proxmox VE 7.1 to 7.2 or 7.3, can be done just like any normal update. But you should still check the release notes for any relevant notable or breaking changes.

    For the update itself, use either the Web UI (Node → Updates panel) or the CLI:

    apt update
    apt full-upgrade

    Note: Always ensure you have correctly set up the package repositories, and only continue with the actual upgrade if apt update did not report any errors.
  12. How can I upgrade Proxmox VE to the next major release?

    Major version upgrades, for example going from Proxmox VE 4.4 to 5.0, are also supported. They must be carefully planned and tested and should never be started without having a current backup ready.

    Although the specific upgrade steps depend on your respective setup, we provide general instructions and advice on how an upgrade should be performed; see the upgrade guides:

    * Upgrade from Proxmox VE 7 to 8
    * Upgrade from Proxmox VE 6 to 7
    * Upgrade from Proxmox VE 5 to 6
    * Upgrade from Proxmox VE 4 to 5
    * Upgrade from Proxmox VE 3 to 4
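
    Before starting a major upgrade, it is worth running the matching checklist script (a sketch, assuming the pve7to8 script shipped with Proxmox VE 7; earlier releases provide analogous pve6to7 and pve5to6 scripts):

    # report potential problems before upgrading from Proxmox VE 7 to 8
    pve7to8 --full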

  13. LXC vs LXD vs Proxmox Containers vs Docker

    LXC is a userspace interface for the Linux kernel containment features. Through a powerful API and simple tools, it lets Linux users easily create and manage system containers. LXC, as well as the former OpenVZ, aims at system virtualization. Thus, it allows you to run a complete OS inside a container, where you log in using ssh, add users, run apache, etc…

    LXD is built on top of LXC to provide a new, better user experience. Under the hood, LXD uses LXC through liblxc and its Go binding to create and manage the containers. It’s basically an alternative to LXC’s tools and distribution template system with the added features that come from being controllable over the network.

    Proxmox Containers are how we refer to containers that are created and managed using the Proxmox Container Toolkit (pct). They also target system virtualization and use LXC as the basis of the container offering. The Proxmox Container Toolkit (pct) is tightly coupled with Proxmox VE. This means that it is aware of cluster setups, and it can use the same network and storage resources as QEMU virtual machines (VMs). You can even use the Proxmox VE firewall, create and restore backups, or manage containers using the HA framework. Everything can be controlled over the network using the Proxmox VE API.

    Docker aims at running a single application in an isolated, self-contained environment. These are generally referred to as “Application Containers”, rather than “System Containers”. You manage a Docker instance from the host, using the Docker Engine command-line interface. It is not recommended to run Docker directly on your Proxmox VE host.

    Note: If you want to run application containers, for example Docker images, it is best to run them inside a Proxmox QEMU VM.
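
    For example, a VM intended as a Docker host could be created from the CLI (a hypothetical sketch - the VM ID 105, the name, the storage "local-lvm", and the ISO path are placeholders for your own values):

    # create a Debian VM to serve as an isolated Docker host
    qm create 105 --name docker-host --memory 4096 --cores 2 \
      --net0 virtio,bridge=vmbr0 --scsi0 local-lvm:32 \
      --cdrom local:iso/debian-12.iso --ostype l26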