FAQ

From Proxmox VE

Latest revision as of 14:11, 23 November 2023

Note: New FAQs are appended to the bottom of this section.
  1. What distribution is Proxmox VE based on?

    Proxmox VE is based on Debian GNU/Linux.

  2. What license does the Proxmox VE project use?

    Proxmox VE code is licensed under the GNU Affero General Public License, version 3.

  3. Will Proxmox VE run on a 32-bit processor?

    Proxmox VE works only on 64-bit CPUs (AMD or Intel). There are no plans for 32-bit support of the platform.

    Note: VMs and containers can be either 32-bit or 64-bit.
  4. Does my CPU support virtualization?

    To check whether your CPU supports virtualization, look for the vmx (Intel) or svm (AMD) flag in the output of:

    grep -E '(vmx|svm)' /proc/cpuinfo
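The one-liner above can be wrapped into a small sketch that prints a human-readable verdict. The function name `has_virt` is purely illustrative, not a Proxmox VE tool:

```shell
# Illustrative helper (not part of Proxmox VE): report whether the host
# CPU advertises hardware virtualization. Intel exposes the "vmx" flag,
# AMD exposes "svm"; both are readable from /proc/cpuinfo on Linux.
has_virt() {
    grep -Eq '(vmx|svm)' /proc/cpuinfo 2>/dev/null
}

if has_virt; then
    echo "CPU supports hardware virtualization (vmx/svm found)"
else
    echo "No vmx/svm flag found - check BIOS/UEFI virtualization settings"
fi
```

Note that the flag can also be absent because virtualization is disabled in the firmware, even on a CPU that supports it.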
  5. Supported Intel CPUs

    64-bit processors with Intel Virtualization Technology (Intel VT-x) support. (List of processors with Intel VT and 64-bit)

  6. Supported AMD CPUs

    64-bit processors with AMD Virtualization Technology (AMD-V) support.

  7. What is a container/virtual environment (VE)/virtual private server (VPS)?

    In the context of containers, these terms all refer to operating-system-level virtualization: a method of virtualization in which the kernel of an operating system allows for multiple isolated instances that all share that kernel. When referring to LXC, we call such instances containers. Because containers use the host’s kernel rather than emulating a full operating system, they require less overhead, but are limited to Linux guests.

  8. What is a QEMU/KVM guest (or VM)?

    A QEMU/KVM guest (or VM) is a guest system running virtualized under Proxmox VE using QEMU and the Linux KVM kernel module.

  9. What is QEMU?

    QEMU is a generic and open source machine emulator and virtualizer. QEMU uses the Linux KVM kernel module to achieve near native performance by executing the guest code directly on the host CPU. It is not limited to Linux guests but allows arbitrary operating systems to run.
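Whether KVM acceleration is actually usable on a given host can be checked via the /dev/kvm device node, which only appears when the kvm kernel module is loaded. A minimal sketch (the function name is illustrative, not a Proxmox VE tool):

```shell
# Illustrative check (not a Proxmox VE tool): QEMU can only use KVM
# acceleration when the kvm kernel module is loaded, which creates the
# /dev/kvm device node.
kvm_available() {
    [ -e /dev/kvm ]
}

if kvm_available; then
    echo "KVM acceleration is available"
else
    echo "/dev/kvm missing - QEMU would fall back to slow software emulation"
fi
```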

  10. How long will my Proxmox VE version be supported?

    Proxmox VE versions are supported at least as long as the corresponding Debian version is oldstable. Proxmox VE follows a rolling release model, and using the latest stable version is always recommended.

    Proxmox VE Version | Debian Version       | First Release | Debian EOL | Proxmox EOL
    -------------------|----------------------|---------------|------------|------------
    Proxmox VE 8       | Debian 12 (Bookworm) | 2023-06       | tba        | tba
    Proxmox VE 7       | Debian 11 (Bullseye) | 2021-07       | 2024-07    | 2024-07
    Proxmox VE 6       | Debian 10 (Buster)   | 2019-07       | 2022-09    | 2022-09
    Proxmox VE 5       | Debian 9 (Stretch)   | 2017-07       | 2020-07    | 2020-07
    Proxmox VE 4       | Debian 8 (Jessie)    | 2015-10       | 2018-06    | 2018-06
    Proxmox VE 3       | Debian 7 (Wheezy)    | 2013-05       | 2016-04    | 2017-02
    Proxmox VE 2       | Debian 6 (Squeeze)   | 2012-04       | 2014-05    | 2014-05
    Proxmox VE 1       | Debian 5 (Lenny)     | 2008-10       | 2012-03    | 2013-01

  11. How can I upgrade Proxmox VE to the next point release?

    Minor version upgrades, for example from Proxmox VE 7.1 to 7.2 or 7.3, can be done just like any normal update. You should still check the release notes for any notable or breaking changes.

    For the update itself, use either the web UI (Node → Updates panel) or the CLI:

    apt update
    apt full-upgrade
    Note: Always make sure the package repositories are set up correctly, and only continue with the actual upgrade if apt update did not hit any errors.
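The rule in the note above can be sketched as a small wrapper that only proceeds when the index refresh succeeded. It is shown as a function only: it would have to be run as root on the Proxmox VE node, and it is not executed here.

```shell
# Sketch of the advice above: refresh the package index first and only
# run the actual upgrade when "apt update" exited successfully.
safe_upgrade() {
    if apt update; then
        apt full-upgrade
    else
        echo "apt update failed - check your package repositories" >&2
        return 1
    fi
}
```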
  12. How can I upgrade Proxmox VE to the next major release?

    Major version upgrades, for example going from Proxmox VE 4.4 to 5.0, are also supported. They must be carefully planned and tested, and should never be started without a current backup ready.

    Although the specific upgrade steps depend on your setup, we provide general instructions and advice on how an upgrade should be performed:

    Upgrade from Proxmox VE 7 to 8
    Upgrade from Proxmox VE 6 to 7
    Upgrade from Proxmox VE 5 to 6
    Upgrade from Proxmox VE 4 to 5
    Upgrade from Proxmox VE 3 to 4
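Each major hop ships a dedicated checklist script on the node (pve6to7, pve7to8, and so on) that warns about known blockers before you touch any repository files. A hedged sketch for the 7 → 8 hop, defined as a function only since pve7to8 exists only on a Proxmox VE 7 node:

```shell
# Hedged sketch: run the upgrade checklist before changing anything.
# The pve7to8 command is only present on a Proxmox VE 7 node; the
# function is defined here but not executed.
preflight_check() {
    # --full runs all checks, including the more expensive ones
    pve7to8 --full
}
```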

  13. LXC vs LXD vs Proxmox Containers vs Docker

    LXC is a userspace interface for the Linux kernel containment features. Through a powerful API and simple tools, it lets Linux users easily create and manage system containers. LXC, as well as the former OpenVZ, aims at system virtualization. Thus, it allows you to run a complete OS inside a container, where you log in using ssh, add users, run apache, etc…

    LXD is built on top of LXC to provide a new, better user experience. Under the hood, LXD uses LXC through liblxc and its Go binding to create and manage the containers. It’s basically an alternative to LXC’s tools and distribution template system with the added features that come from being controllable over the network.

    Proxmox Containers are how we refer to containers that are created and managed using the Proxmox Container Toolkit (pct). They also target system virtualization and use LXC as the basis of the container offering. The Proxmox Container Toolkit (pct) is tightly coupled with Proxmox VE. This means that it is aware of cluster setups, and it can use the same network and storage resources as QEMU virtual machines (VMs). You can even use the Proxmox VE firewall, create and restore backups, or manage containers using the HA framework. Everything can be controlled over the network using the Proxmox VE API.

    Docker aims at running a single application in an isolated, self-contained environment. These are generally referred to as “Application Containers”, rather than “System Containers”. You manage a Docker instance from the host, using the Docker Engine command-line interface. It is not recommended to run Docker directly on your Proxmox VE host.

    Note: If you want to run application containers, for example Docker images, it is best to run them inside a Proxmox VE QEMU VM.