FAQ

From Proxmox VE

TBD: where possible, detailed answers should link to internal wiki pages.

General

What is a container, CT, VE, Virtual Private Server, VPS?

See Container and Full Virtualization

What is a KVM guest (KVM VM)?

A KVM guest or KVM VM is a guest system running virtualized under Proxmox VE with KVM.

What distribution is Proxmox VE based on?

Proxmox VE is based on Debian GNU/Linux. The Proxmox VE kernel is based on the RHEL6 kernel with OpenVZ patches.

Installation and upgrade

Where can I find installation instructions?

See Installation

Proxmox VE command line tools

See Command line tools

How long will my Proxmox VE version be supported?

According to Wolfgang, Proxmox VE 3.4 is supported at least as long as the corresponding Debian version is oldstable. Proxmox VE uses a rolling release model, and use of the latest stable version is always recommended.

Proxmox VE Version   Debian Version       First Release   Debian EOL   Proxmox EOL
Proxmox VE 4.x       Debian 8 (Jessie)    2015-10         2018-05      tba
Proxmox VE 3.x       Debian 7 (Wheezy)    2013-05         2016-04      2017-02
Proxmox VE 2.x       Debian 6 (Squeeze)   2012-04         2014-05      2014-05
Proxmox VE 1.x       Debian 5 (Lenny)     2008-10         2012-03      2013-01

Hardware

CPU

Will Proxmox VE run on a 32bit processor?

Proxmox VE runs only on 64-bit CPUs (AMD or Intel). There are no plans to support 32-bit platforms.

There are, however, unofficial (and unsupported) instructions for manually installing Proxmox on 32-bit systems:

Supported CPU chips

To check whether your CPU supports hardware virtualization, look for the "vmx" or "svm" flag in the output of this command:

egrep '(vmx|svm)' /proc/cpuinfo
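The same check can be wrapped in a small script. A minimal sketch, assuming a POSIX shell; `has_virt_flags` is a helper name invented for this example, not a Proxmox tool:

```shell
#!/bin/sh
# Hedged sketch: report whether a CPU "flags" line indicates hardware
# virtualization support. "vmx" = Intel VT-x, "svm" = AMD-V.
has_virt_flags() {
    # $1: space-separated flags string, e.g. the "flags" line of /proc/cpuinfo
    case " $1 " in
        *" vmx "*|*" svm "*) return 0 ;;
        *) return 1 ;;
    esac
}

# On a live system, feed it the real flags line:
#   has_virt_flags "$(grep -m1 '^flags' /proc/cpuinfo)"
if has_virt_flags "fpu vme de pse vmx ssse3"; then
    echo "virtualization supported"
else
    echo "no vmx/svm flag found"
fi
```

The `case` patterns match whole words only, so a flag such as "svmx" would not produce a false positive.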

Intel

64-bit processors with Intel Virtualization Technology (Intel VT-x) support

List of processors with Intel VT and 64-bit

AMD

64-bit processors with AMD Virtualization Technology (AMD-V) support

Networking

How do I configure bridged networking in an OpenVZ Ubuntu/Debian container?

  1. In the web GUI, under Virtual Machine configuration, go to the Network tab.
  2. Remove the IP address for venet and save ("Bridged Ethernet Devices" will appear).
  3. SSH into your host system and enter the container you want to set up bridged networking for: # vzctl enter <VMID>
  4. Edit /etc/network/interfaces using the following format and save (replace with settings for your network):
     auto lo
     iface lo inet loopback
     
     auto eth0
     iface eth0 inet static
            address 10.0.0.17
            netmask 255.255.255.0
            network 10.0.0.0
            broadcast 10.0.0.255
            gateway 10.0.0.10
    
  5. Shut down the container.
  6. Go back to the web GUI and, under "Bridged Ethernet Devices", configure eth0 to vmbr0 and save (a MAC address will be assigned automatically).
  7. Start the container.

Finally, verify that networking works by entering the guest and checking the output of ifconfig.
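The interfaces stanza from step 4 can also be generated with a script rather than typed by hand. A minimal sketch using the placeholder addresses from above; `write_interfaces` is a helper name invented here, and inside the container you would redirect its output to /etc/network/interfaces:

```shell
#!/bin/sh
# Hedged sketch: emit the static /etc/network/interfaces stanza from step 4.
# These are the placeholder values used above -- replace them with the
# settings for your own network.
ADDR=10.0.0.17
MASK=255.255.255.0
NET=10.0.0.0
BCAST=10.0.0.255
GW=10.0.0.10

write_interfaces() {
    cat <<EOF
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
        address $ADDR
        netmask $MASK
        network $NET
        broadcast $BCAST
        gateway $GW
EOF
}

# Inside the container: write_interfaces > /etc/network/interfaces
write_interfaces
```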

  • In a CentOS/RHEL container, check that the gateway device is set correctly.

Edit /etc/sysconfig/network:

NETWORKING="yes"
#GATEWAYDEV="venet0"              # comment this and add line below
GATEWAYDEV="eth0"
HOSTNAME="hostname"    # should be set by proxmox
GATEWAY=123.123.123.123         # CHANGE (and remove from ifcfg-eth0)
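The same edit can be scripted with sed. A hedged sketch that works on a temporary copy rather than the live /etc/sysconfig/network; 123.123.123.123 is the placeholder gateway address from above:

```shell
#!/bin/sh
# Hedged sketch: apply the CentOS/RHEL gateway change from above with sed,
# against a temporary copy instead of the real /etc/sysconfig/network.
f=$(mktemp)
cat > "$f" <<'EOF'
NETWORKING="yes"
GATEWAYDEV="venet0"
HOSTNAME="hostname"
EOF

# Point the gateway device at eth0 and append the gateway address
# (123.123.123.123 is the placeholder used above).
sed -i 's/^GATEWAYDEV=.*/GATEWAYDEV="eth0"/' "$f"
echo 'GATEWAY=123.123.123.123' >> "$f"

cat "$f"
```

Note that `sed -i` edits in place without a backup; on a real system you may want `sed -i.bak` to keep a copy of the original file.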

Virtualization

Why do you recommend 32-bit guests over 64-bit guests?

64-bit guests make sense only if you need more than 4 GB of memory.
32-bit guests use less memory in certain situations and are less resource intensive, owing to the shorter memory addressing scheme.
For example, a standard installation of Apache 2 in a 64-bit container consumes much more memory than in a 32-bit one.

Troubleshooting

See the Troubleshooting page.