Installation

From Proxmox VE
<!--PVE_IMPORT_START_MARKER-->
<!-- Do not edit - this is autogenerated content -->
{{#pvedocs:pve-installation-plain.html}}
[[Category:Reference Documentation]]
<pvehide>
Proxmox VE is based on Debian. This is why the install disk images (ISO files) provided by Proxmox include a complete Debian system as well as all necessary Proxmox VE packages.
See the support table in the FAQ for the relationship between Proxmox VE releases and Debian releases.
The installer will guide you through the setup, allowing you to partition the local disk(s), apply basic system configurations (for example, timezone, language, network) and install all required packages. This process should not take more than a few minutes. Installing with the provided ISO is the recommended method for new and existing users.
Alternatively, Proxmox VE can be installed on top of an existing Debian system. This option is only recommended for advanced users because detailed knowledge about Proxmox VE is required.
Using the Proxmox VE Installer
The installer ISO image includes the following:
*Complete operating system (Debian Linux, 64-bit)
*The Proxmox VE installer, which partitions the local disk(s) with ext4, XFS, BTRFS (technology preview), or ZFS and installs the operating system
*Proxmox VE Linux kernel with KVM and LXC support
*Complete toolset for administering virtual machines, containers, the host system, clusters and all necessary resources
*Web-based management interface
All existing data on the selected drives will be removed during the installation process. The installer does not add boot menu entries for other operating systems.
Please insert the prepared installation media (for example, USB flash drive or CD-ROM) and boot from it.
Make sure that booting from the installation medium (for example, USB) is enabled in your server's firmware settings. Secure Boot needs to be disabled when booting an installer prior to Proxmox VE version 8.1.
After choosing the correct entry (for example, Boot from USB) the Proxmox VE menu will be displayed, and one of the following options can be selected:
Install Proxmox VE (Graphical)
Starts the normal installation. It's possible to use the installation wizard with a keyboard only. Buttons can be clicked by pressing the ALT key combined with the underlined character from the respective button. For example, ALT + N to press a Next button.
Install Proxmox VE (Terminal UI)
Starts the terminal-mode installation wizard. It provides the same overall installation experience as the graphical installer, but has generally better compatibility with very old and very new hardware.
Install Proxmox VE (Terminal UI, Serial Console)
Starts the terminal-mode installation wizard, additionally setting up the Linux kernel to use the (first) serial port of the machine for in- and output. This can be used if the machine is completely headless and only has a serial console available.
Both modes use the same code base for the actual installation process to benefit from more than a decade of bug fixes and ensure feature parity. The Terminal UI option can be used in case the graphical installer does not work correctly, due to e.g. driver issues. See also adding the nomodeset kernel parameter.
Advanced Options: Install Proxmox VE (Graphical, Debug Mode)
Starts the installation in debug mode. A console will be opened at several installation steps. This helps to debug the situation if something goes wrong. To exit a debug console, press CTRL-D. This option can be used to boot a live system with all basic tools available. You can use it, for example, to repair a degraded ZFS rpool or fix the bootloader for an existing Proxmox VE setup.
Advanced Options: Install Proxmox VE (Terminal UI, Debug Mode)
Same as the graphical debug mode, but preparing the system to run the terminal-based installer instead.
Advanced Options: Install Proxmox VE (Serial Console Debug Mode)
Same as the terminal-based debug mode, but additionally sets up the Linux kernel to use the (first) serial port of the machine for in- and output.
Advanced Options: Install Proxmox VE (Automated)
Starts the installer in unattended mode, even if the ISO has not been appropriately prepared for an automated installation. This option can be used to gather hardware details or might be useful to debug an automated installation setup. See Unattended Installation for more information.
Advanced Options: Rescue Boot
With this option you can boot an existing installation. It searches all attached hard disks. If it finds an existing installation, it boots directly into that disk using the Linux kernel from the ISO. This can be useful if there are problems with the bootloader (GRUB/systemd-boot) or the BIOS/UEFI is unable to read the boot block from the disk.
Advanced Options: Test Memory (memtest86+)
Runs memtest86+. This is useful to check if the memory is functional and free of errors. Secure Boot must be turned off in the UEFI firmware setup utility to run this option.
You normally select Install Proxmox VE (Graphical) to start the installation.
The first step is to read our EULA (End User License Agreement). Following this, you can select the target hard disk(s) for the installation.
By default, the whole server is used and all existing data is removed. Make sure there is no important data on the server before proceeding with the installation.
The Options button lets you select the target file system, which defaults to ext4. The installer uses LVM if you select ext4 or xfs as a file system, and offers additional options to restrict LVM space (see below).
Proxmox VE can also be installed on ZFS. As ZFS offers several software RAID levels, this is an option for systems that don't have a hardware RAID controller. The target disks must be selected in the Options dialog. More ZFS specific settings can be changed under Advanced Options.
ZFS on top of any hardware RAID is not supported and can result in data loss.
The next page asks for basic configuration options like your location, time zone, and keyboard layout. The location is used to select a nearby download server, in order to increase the speed of updates. The installer is usually able to auto-detect these settings, so you only need to change them in rare situations when auto-detection fails, or when you want to use a keyboard layout not commonly used in your country.
Next, the password of the superuser (root) and an email address need to be specified. The password must consist of at least 5 characters. It's highly recommended to use a stronger password. Some guidelines are:
*Use a minimum password length of at least 12 characters.
*Include lowercase and uppercase alphabetic characters, numbers, and symbols.
*Avoid character repetition, keyboard patterns, common dictionary words, letter or number sequences, usernames, relative or pet names, romantic links (current or past), and biographical information (for example ID numbers, ancestors' names or dates).
The email address is used to send notifications to the system administrator. For example:
*Information about available package updates.
*Error messages from periodic cron jobs.
All those notification mails will be sent to the specified email address.
The last step is the network configuration. Network interfaces that are UP show a filled circle in front of their name in the drop down menu. Please note that during installation you can either specify an IPv4 or IPv6 address, but not both. To configure a dual stack node, add additional IP addresses after the installation.
The next step shows a summary of the previously selected options. Please re-check every setting and use the Previous button if a setting needs to be changed.
After clicking Install, the installer will begin to format the disks and copy packages to the target disk(s). Please wait until this step has finished; then remove the installation medium and restart your system.
Copying the packages usually takes several minutes, mostly depending on the speed of the installation medium and the target disk performance.
When copying and setting up the packages has finished, you can reboot the server. This will be done automatically after a few seconds by default.
Installation Failure
If the installation failed, check out specific errors on the second TTY (CTRL + ALT + F2) and ensure that the system meets the minimum requirements. If the installation is still not working, look at the how to get help chapter.
Accessing the Management Interface Post-Installation
After a successful installation and reboot of the system you can use the Proxmox VE web interface for further configuration.
*Point your browser to the IP address given during the installation and port 8006, for example: https://youripaddress:8006
*Log in using the root (realm PAM) username and the password chosen during installation.
*Upload your subscription key to gain access to the Enterprise repository. Otherwise, you will need to set up one of the public, less tested package repositories to get updates for security fixes, bug fixes, and new features.
*Check the IP configuration and hostname.
*Check the timezone.
*Check your Firewall settings.
Advanced LVM Configuration Options
The installer creates a Volume Group (VG) called pve, and additional Logical Volumes (LVs) called root, data, and swap, if ext4 or xfs is used. To control the size of these volumes use:
hdsize
Defines the total hard disk size to be used. This way you can reserve free space on the hard disk for further partitioning (for example for an additional PV and VG on the same hard disk that can be used for LVM storage).
swapsize
Defines the size of the swap volume. The default is the size of the installed memory, minimum 4 GB and maximum 8 GB. The resulting value cannot be greater than hdsize/8. If set to 0, no swap volume will be created.
maxroot
Defines the maximum size of the root volume, which stores the operating system. The maximum limit of the root volume size is hdsize/4.
maxvz
Defines the maximum size of the data volume. The actual size of the data volume is:
datasize = hdsize - rootsize - swapsize - minfree
Where datasize cannot be bigger than maxvz. In case of LVM thin, the data pool will only be created if datasize is bigger than 4GB. If set to 0, no data volume will be created and the storage configuration will be adapted accordingly.
minfree
Defines the amount of free space that should be left in the LVM volume group pve. With more than 128GB storage available, the default is 16GB, otherwise hdsize/8 will be used. LVM requires free space in the VG for snapshot creation (not required for lvmthin snapshots).
Advanced ZFS Configuration Options
The installer creates the ZFS pool rpool, if ZFS is used. No swap space is created but you can reserve some unpartitioned space on the install disks for swap. You can also create a swap zvol after the installation, although this can lead to problems (see ZFS swap notes).
ashift
Defines the ashift value for the created pool. The ashift needs to be set at least to the sector-size of the underlying disks (2 to the power of ashift is the sector-size), or any disk which might be put in the pool (for example the replacement of a defective disk).
compress
Defines whether compression is enabled for rpool.
checksum
Defines which checksumming algorithm should be used for rpool.
copies
Defines the copies parameter for rpool. Check the zfs(8) manpage for the semantics, and why this does not replace redundancy on disk-level.
ARC max size
Defines the maximum size the ARC can grow to and thus limits the amount of memory ZFS will use. See also the section on how to limit ZFS memory usage for more details.
hdsize
Defines the total hard disk size to be used. This is useful to save free space on the hard disk(s) for further partitioning (for example to create a swap-partition). hdsize is only honored for bootable disks, that is only the first disk or mirror for RAID0, RAID1 or RAID10, and all disks in RAID-Z[123].
Advanced BTRFS Configuration Options
No swap space is created when BTRFS is used but you can reserve some unpartitioned space on the install disks for swap. You can either create a separate partition, a BTRFS subvolume, or a swapfile using the btrfs filesystem mkswapfile command.
compress
Defines whether compression is enabled for the BTRFS subvolume. Different compression algorithms are supported: on (equivalent to zlib), zlib, lzo and zstd. Defaults to off.
hdsize
Defines the total hard disk size to be used. This is useful to save free space on the hard disk(s) for further partitioning (for example, to create a swap partition).
ZFS Performance Tips
ZFS works best with a lot of memory. If you intend to use ZFS make sure to have enough RAM available for it. A good calculation is 4GB plus 1GB RAM for each TB RAW disk space.
ZFS can use a dedicated drive as write cache, called the ZFS Intent Log (ZIL). Use a fast drive (SSD) for it. It can be added after installation with the following command:
# zpool add <pool-name> log </dev/path_to_fast_ssd>
Adding the nomodeset Kernel Parameter
Problems may arise on very old or very new hardware due to graphics drivers. If the installation hangs during boot, you can try adding the nomodeset parameter. This prevents the Linux kernel from loading any graphics drivers and forces it to continue using the BIOS/UEFI-provided framebuffer.
On the Proxmox VE bootloader menu, navigate to Install Proxmox VE (Terminal UI) and press e to edit the entry. Using the arrow keys, navigate to the line starting with linux, move the cursor to the end of that line and add the parameter nomodeset, separated by a space from the pre-existing last parameter. Then press Ctrl-X or F10 to boot the configuration.
Unattended Installation
It is possible to install Proxmox VE automatically in an unattended manner. This enables you to fully automate the setup process on bare-metal. Once the installation is complete and the host has booted up, automation tools like Ansible can be used to further configure the installation.
The necessary options for the installer must be provided in an answer file. This file allows the use of filter rules to determine which disks and network cards should be used.
To use the automated installation, it is first necessary to prepare an installation ISO. Visit our wiki for more details and information on the unattended installation.
Video Tutorials
See the list of all official tutorials on our Proxmox VE YouTube Channel
See Also
*Prepare Installation Media
*Install Proxmox VE on Debian 12 Bookworm
*System Requirements
*Package Repositories
*Host System Administration
*Network Configuration
*Installation: Tips and Tricks
</pvehide>
<!--PVE_IMPORT_END_MARKER-->

== Introduction ==
Proxmox VE installs the complete operating system and management tools in 3 to 5 minutes (depending on the hardware used), including the following:
*Complete operating system (Debian Linux, 64-bit)
*Partition the hard drive with ext4 (alternatively ext3 or xfs) or ZFS
*[[Proxmox VE Kernel]] with LXC and KVM support
*Complete toolset
*Web based management interface

Please note, by default the complete server is used and all existing data is removed.

If you want to set custom options for the installer, or need to debug the installation process on your server, you can use some [[Debugging_Installation|special boot options]].

=== Video tutorials ===
*List of all official tutorials on our [http://www.youtube.com/proxmoxve Proxmox VE YouTube Channel]
*Tutorials in Spanish language on [http://www.youtube.com/playlist?list=PLUULBIhA5QDBdNf1pcTZ5UXhek63Fij8z ITexperts.es YouTube Play List]

== System requirements ==
For production servers, high quality server equipment is needed. Keep in mind, if you run 10 virtual servers on one machine and then experience a hardware failure, 10 services are lost. Proxmox VE supports clustering, which means that multiple Proxmox VE installations can be centrally managed thanks to the included cluster functionality.

Proxmox VE can use local storage (DAS), SAN, NAS and also distributed storage (Ceph RBD). For details see [[Storage Model]].

=== Minimum requirements, for evaluation ===
*CPU: 64bit (Intel EMT64 or AMD64), [[FAQ#Supported_CPU_chips|Intel VT/AMD-V capable CPU]]/Mainboard (for KVM Full Virtualization support)
*RAM: 1 GB RAM
*Hard drive
*One NIC

=== Recommended system requirements ===
*CPU: 64bit (Intel EMT64 or AMD64), Multi core CPU recommended, [[FAQ#Supported_CPU_chips|Intel VT/AMD-V capable CPU]]/Mainboard (for KVM Full Virtualization support)
*RAM: 8 GB is good, more is better
*[[Raid controller|Hardware RAID]] with battery-protected write cache (BBU) or flash based protection ([[Software RAID]] is not supported)
*Fast hard drives, best results with 15k rpm SAS, Raid10
*At least two NICs; depending on the storage technology used, you may need more

=== Certified hardware ===
Basically, you can use any 64-bit hardware that supports RHEL6. If you are unsure, post in the [http://forum.proxmox.com/ forum].

As the browser will be used to manage the Proxmox VE server, it would be prudent to follow [[Safe Browsing settings in Firefox | safe browsing practices]].

== Steps to get your Proxmox VE up and running ==

=== Install Proxmox VE server ===
See [[Quick installation]]

[http://youtu.be/ckvPt1Bp9p0 Proxmox VE installation (Video Tutorial)]

If you need to install the outdated 1.9 release, check [[Installing Proxmox VE v1.9 post Lenny retirement]]

=== Optional: Install Proxmox VE on Debian 6 Squeeze (64 bit) ===
EOL. See [[Install Proxmox VE on Debian Squeeze]]

=== Optional: Install Proxmox VE on Debian 7 Wheezy (64 bit) ===
EOL April 2016. See [[Install Proxmox VE on Debian Wheezy]]

=== Optional: Install Proxmox VE on Debian 8 Jessie (64 bit) ===
See [[Install Proxmox VE on Debian Jessie]]

=== [[Developer_Workstations_with_Proxmox_VE_and_X11]] ===
This page will cover the install of X11 and a basic desktop on top of Proxmox. [[Developer_Workstations_with_Proxmox_VE_and_X11#Optional:_Linux_Mint_Mate_Desktop | Optional:_Linux_Mint_Mate_Desktop]] is also available.

=== Optional: Install Proxmox VE over iSCSI ===
See [[Proxmox ISCSI installation]]

=== Proxmox VE web interface ===
Configuration is done via the Proxmox web interface; just point your browser to the IP address given during installation (<nowiki>https://youripaddress:8006</nowiki>). Please make sure that your browser has the latest Oracle Java browser plugin installed. Proxmox VE is tested with IE9, Firefox 10 and higher, and the latest Google Chrome.

'''Default login is "root" and the root password is defined during the installation process.'''

==== Configure basic system settings ====
Please review the NIC setup, IP and hostname.

'''Note: changing the IP or hostname after cluster creation is not possible (unless you know exactly what you are doing).'''

==== Get Appliance Templates ====

===== Download =====
Just go to the content tab of your storage (e.g. "local") and [[Get Virtual Appliances|download pre-built Virtual Appliances]] directly to your server. This list is maintained by the Proxmox VE team and more and more appliances will become available. This is the easiest way and a good place to start.

===== Use an NFS share for ISOs =====
If you have an NFS server you can use an NFS share for storing ISO images. To start, configure the NFS ISO store on the web interface (Configuration/Storage).

===== Upload from your desktop =====
If you already have Virtual Appliances you can upload them via the upload button. To install a virtual machine from an ISO image (using KVM full virtualization) just upload the ISO file via the upload button.

===== Directly to file system =====
Templates and ISO images are stored on the Proxmox VE server (see /var/lib/vz/template/cache for OpenVZ templates and /var/lib/vz/template/iso for ISO images). You can also transfer templates and ISO images via secure copy (scp) to these directories. If you work on a Windows desktop, you can use a graphical scp client like [http://winscp.net WinSCP].

=== Optional: Reverting Thin-LVM to "old" Behavior of <code>/var/lib/vz</code> (Proxmox 4.2 and later) ===
If you installed Proxmox 4.2 (or later), you will find a changed layout of your data: there is no mounted <code>/var/lib/vz</code> LVM volume anymore; instead you find a thin-provisioned volume. This is technically the right choice, but sometimes one wants the old behavior back. This section describes the steps to revert to the "old" layout on a freshly installed Proxmox 4.2:

* After the installation your storage configuration in <code>/etc/pve/storage.cfg</code> will look like this:
<pre>
dir: local
        path /var/lib/vz
        content iso,vztmpl,backup

lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images
</pre>
* You can delete the thin volume via the GUI or manually, and have to set the local directory to store images and containers as well. You should have such a config in the end:
<pre>
dir: local
        path /var/lib/vz
        maxfiles 0
        content backup,iso,vztmpl,rootdir,images
</pre>
* Now you need to recreate <code>/var/lib/vz</code>:
<pre>
root@pve-42 ~ > lvs
  LV  VG  Attr      LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  data pve  twi-a-tz-- 16.38g            0.00  0.49
  root pve  -wi-ao----  7.75g
  swap pve  -wi-ao----  3.88g

root@pve-42 ~ > lvremove pve/data
Do you really want to remove active logical volume data? [y/n]: y
  Logical volume "data" successfully removed

root@pve-42 ~ > lvcreate --name data -l +100%FREE pve
  Logical volume "data" created.

root@pve-42 ~ > mkfs.ext4 /dev/pve/data
mke2fs 1.42.12 (29-Aug-2014)
Discarding device blocks: done
Creating filesystem with 5307392 4k blocks and 1327104 inodes
Filesystem UUID: 310d346a-de4e-48ae-83d0-4119088af2e3
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
</pre>
* Then add the new volume to your <code>/etc/fstab</code>:
<pre>
/dev/pve/data /var/lib/vz ext4 defaults 0 1
</pre>
* Restart to check if everything survives a reboot.

You should end up with a working "old-style" configuration where you "see" your files as they were before Proxmox 4.2.

== Create Virtual Machines ==

=== Linux Containers (LXC) ===
See [[Linux Container]] for a detailed description. Get LXC images from [http://images.linuxcontainers.org/ images.linuxcontainers.org] and [http://images.linuxcontainers.org/images/ read their descriptions].

=== Container (OpenVZ) ===
[[Image:Screen-create-container-mailgateway.png|thumb]] [[Image:Screen-create-container-mailgateway-log.png|thumb]] [[Image:Screen-virtual-machine-detail1.png|thumb]]

First [[#Get_Appliance_Templates|get the appropriate appliance template(s)]]. Then just click "Create CT":

'''General'''
*Node: If you have several Proxmox VE servers, select the node where you want to create the new container
*VM ID: choose a virtual machine identification number, just use the given ID or overwrite the suggested one
*Hostname: give a unique server name for the new container
*Resource Pool: select a previously created resource pool (optional)
*Storage: select the storage for your container
*Password: set the root password for your container

'''Template'''
*Storage: select your template data store (you need to download templates before you can select them here)
*Template: choose the template

'''Resources'''
*Memory (MB): set the memory (RAM)
*Swap (MB): set the swap
*Disk size (GB): set the total disk size
*CPUs: set the number of CPUs (if you run Java inside your container, choose at least 2 here)

'''Network'''
*Routed mode (venet): the default ([http://wiki.openvz.org/Venet venet]); give a unique IP
*Bridged mode: only in some cases do you need bridged Ethernet ([http://wiki.openvz.org/Veth veth]); see [http://wiki.openvz.org/Differences_between_venet_and_veth Differences_between_venet_and_veth] on the OpenVZ wiki for details. If you select bridged Ethernet, the IP configuration has to be done inside the container, as you would do on a physical server.

'''DNS'''
*DNS Domain: e.g. yourdomain.com
*First/Second DNS Servers: enter DNS servers

'''Confirm'''
This tab shows a summary; please check that everything is set as needed. If you need to change a setting, you can jump back to a previous tab just by clicking on it.

After you click "Finish", all settings are applied. Wait for completion (this process can take between a few seconds and a minute, depending on the template used and your hardware).

'''CentOS 7'''

If you can't PING a CentOS 7 container: in order to get rid of problems with venet0 when running a CentOS 7 container (OpenVZ), activate the patch redhat-add_ip.sh-patch as follows:

1. transfer the patch file to a working directory on the Proxmox VE host (e.g. /root) and extract it (unzip)
2. perform this command:
  # patch -p0 < redhat-add_ip.sh-patch
3. stop and start the respective container

Patch file: http://forum.proxmox.com/threads/22770-fix-for-centos-7-container-networking

=== Video Tutorials ===
*See [http://www.youtube.com/proxmoxve Proxmox VE YouTube Channel]

== Virtual Machines (KVM) ==
Just click "Create VM":

=== General ===
*Node: If you have several Proxmox VE servers, select the node where you want to create the new VM
*VM ID: choose a virtual machine identification number, just use the given ID or overwrite the suggested one
*Name: choose a name for your VM (this is not the hostname), can be changed any time
*Resource Pool: select a previously created resource pool (optional)

=== OS ===
Select the Operating System (OS) of your VM

=== CD/DVD ===


*Use CD/DVD disc image file (iso): select the storage where you previously uploaded your ISO images and choose the file
*Use physical CD/DVD Drive: choose this to use the CD/DVD from your Proxmox VE node
*Do not use any media: choose this if you do not want any media
=== Hard disk ===
* Bus/Device: choose the bus type; as long as your guest supports it, go for ''virtio''
* Storage: select the storage where you want to store the disk
* Disk size (GB): define the size
* Format: define the disk image format. For good performance, go for raw. If you plan to use snapshots, go for qcow2.
* Cache: define the cache policy for the virtual disk
* Limits: (if necessary) set the maximum transfer speeds
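For orientation, the disk settings chosen here end up as a single line in the VM's configuration file (<code>/etc/pve/qemu-server/&lt;vmid&gt;.conf</code>). A hypothetical example only; the VM ID, storage name and size are illustrative:
<pre>
virtio0: local:100/vm-100-disk-1.qcow2,cache=writeback,size=32G
</pre>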
=== CPU ===
*Sockets: set the number of CPU sockets
*Cores: set the number of CPU Cores per socket
*CPU type: select CPU type
*Total cores: never use more CPU cores than physically available on the Proxmox VE host
=== Memory ===
*Memory (MB): set the memory (RAM) for your VM
=== Network ===
*Bridged mode: this is the default setting, just choose the bridge where you want to connect your VM. If you want to use VLAN, you can define the VLAN tag for the VM
*NAT mode
*No network device
*Model: choose the emulated network device; as long as your guest supports it, go for virtio
*MAC address: use 'auto' or overwrite with a valid and unique MAC address
*Rate limit (MB/s): set a speed limit for this network adapter
=== Confirm ===
This tab shows a summary; please check that everything is set as needed. If you need to change a setting, you can jump back to a previous tab just by clicking on it. After you click "Finish", all settings are applied. Wait for completion (this just takes a second).
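The wizard can also be bypassed entirely; roughly the same VM can be created from the host shell with <code>qm create</code>. A sketch only, the VM ID, names and sizes below are illustrative; check <code>man qm</code> for the exact options:
<pre>
qm create 100 --name testvm --memory 2048 --sockets 1 --cores 2 \
  --net0 virtio,bridge=vmbr0 --cdrom local:iso/debian.iso --virtio0 local:32
</pre>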
== Video Tutorials ==
*See [http://www.youtube.com/proxmoxve Proxmox VE YouTube Channel]
== Managing Virtual Machines ==
Go to "VM Manager/Virtual Machines" to see a list of your Virtual Machines.
Basic tasks can be done by clicking on the red arrow drop-down menu:
*start, restart, shutdown, stop
*migrate: migrate a Virtual Machine to another physical host (you need at least two Proxmox VE servers - see [[Proxmox VE Cluster]])
*console: for containers, the VNC console automatically logs in as root; for KVM Virtual Machines, the console shows the screen of the fully virtualized machine
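The same basic tasks are also available from the command line on the Proxmox VE host; a short sketch (the VM ID 100 and the target node name are illustrative, see <code>man qm</code> and <code>man vzctl</code>):
<pre>
qm start 100          # start a KVM virtual machine
qm shutdown 100       # clean shutdown via ACPI
qm stop 100           # hard stop
qm migrate 100 node2  # migrate to another cluster node
vzctl enter 100       # open a shell in a running container
</pre>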
For a '''detailed view''' and '''configuration changes''' just click on a Virtual Machine row in the list of VMs.
"Logs" on a container Virtual Machine:
*Boot/Init: shows the Boot/Init logs generated during start or stop
*Command: see the current/last executed task
*Syslog: see the real time syslog of the Virtual Machine
== Networking and Firewall ==
See [[Network Model]] and [[Proxmox VE Firewall]]
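For orientation, the installer's default network setup is a single Linux bridge connecting the VMs to the first physical NIC; a typical <code>/etc/network/interfaces</code> stanza looks roughly like this (addresses and interface names are examples):
<pre>
auto vmbr0
iface vmbr0 inet static
        address 192.168.1.10
        netmask 255.255.255.0
        gateway 192.168.1.1
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0
</pre>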


[[Category:HOWTO]] [[Category:Installation]]

Latest revision as of 12:09, 28 November 2024

Proxmox VE is based on Debian. This is why the install disk images (ISO files) provided by Proxmox include a complete Debian system as well as all necessary Proxmox VE packages.

Tip See the support table in the FAQ for the relationship between Proxmox VE releases and Debian releases.

The installer will guide you through the setup, allowing you to partition the local disk(s), apply basic system configurations (for example, timezone, language, network) and install all required packages. This process should not take more than a few minutes. Installing with the provided ISO is the recommended method for new and existing users.

Alternatively, Proxmox VE can be installed on top of an existing Debian system. This option is only recommended for advanced users because detailed knowledge about Proxmox VE is required.

Using the Proxmox VE Installer

The installer ISO image includes the following:

  • Complete operating system (Debian Linux, 64-bit)

  • The Proxmox VE installer, which partitions the local disk(s) with ext4, XFS, BTRFS (technology preview), or ZFS and installs the operating system

  • Proxmox VE Linux kernel with KVM and LXC support

  • Complete toolset for administering virtual machines, containers, the host system, clusters and all necessary resources

  • Web-based management interface

Note All existing data on the selected drives will be removed during the installation process. The installer does not add boot menu entries for other operating systems.

Please insert the prepared installation media (for example, USB flash drive or CD-ROM) and boot from it.

Tip Make sure that booting from the installation medium (for example, USB) is enabled in your server’s firmware settings. Secure boot needs to be disabled when booting an installer prior to Proxmox VE version 8.1.
screenshot/pve-grub-menu.png

After choosing the correct entry (for example, Boot from USB) the Proxmox VE menu will be displayed, and one of the following options can be selected:

Install Proxmox VE (Graphical)

Starts the normal installation.

Tip It’s possible to use the installation wizard with a keyboard only. Buttons can be clicked by pressing the ALT key combined with the underlined character from the respective button. For example, ALT + N to press a Next button.
Install Proxmox VE (Terminal UI)

Starts the terminal-mode installation wizard. It provides the same overall installation experience as the graphical installer, but has generally better compatibility with very old and very new hardware.

Install Proxmox VE (Terminal UI, Serial Console)

Starts the terminal-mode installation wizard, additionally setting up the Linux kernel to use the (first) serial port of the machine for in- and output. This can be used if the machine is completely headless and only has a serial console available.

screenshot/pve-tui-installer.png

Both modes use the same code base for the actual installation process to benefit from more than a decade of bug fixes and ensure feature parity.

Tip The Terminal UI option can be used in case the graphical installer does not work correctly, for example due to driver issues. See also adding the nomodeset kernel parameter.
Advanced Options: Install Proxmox VE (Graphical, Debug Mode)

Starts the installation in debug mode. A console will be opened at several installation steps. This helps to debug the situation if something goes wrong. To exit a debug console, press CTRL-D. This option can be used to boot a live system with all basic tools available. You can use it, for example, to repair a degraded ZFS rpool or fix the bootloader for an existing Proxmox VE setup.

Advanced Options: Install Proxmox VE (Terminal UI, Debug Mode)

Same as the graphical debug mode, but preparing the system to run the terminal-based installer instead.

Advanced Options: Install Proxmox VE (Serial Console Debug Mode)

Same as the terminal-based debug mode, but additionally sets up the Linux kernel to use the (first) serial port of the machine for input and output.

Advanced Options: Install Proxmox VE (Automated)

Starts the installer in unattended mode, even if the ISO has not been appropriately prepared for an automated installation. This option can be used to gather hardware details or might be useful to debug an automated installation setup. See Unattended Installation for more information.

Advanced Options: Rescue Boot

With this option you can boot an existing installation. It searches all attached hard disks. If it finds an existing installation, it boots directly into that disk using the Linux kernel from the ISO. This can be useful if there are problems with the bootloader (GRUB/systemd-boot) or the BIOS/UEFI is unable to read the boot block from the disk.

Advanced Options: Test Memory (memtest86+)

Runs memtest86+. This is useful to check if the memory is functional and free of errors. Secure Boot must be turned off in the UEFI firmware setup utility to run this option.

You normally select Install Proxmox VE (Graphical) to start the installation.

screenshot/pve-select-target-disk.png

The first step is to read our EULA (End User License Agreement). Following this, you can select the target hard disk(s) for the installation.

Caution By default, the whole server is used and all existing data is removed. Make sure there is no important data on the server before proceeding with the installation.

The Options button lets you select the target file system, which defaults to ext4. The installer uses LVM if you select ext4 or xfs as a file system, and offers additional options to restrict LVM space (see below).

Proxmox VE can also be installed on ZFS. As ZFS offers several software RAID levels, this is an option for systems that don’t have a hardware RAID controller. The target disks must be selected in the Options dialog. More ZFS specific settings can be changed under Advanced Options.

Warning ZFS on top of any hardware RAID is not supported and can result in data loss.
screenshot/pve-select-location.png

The next page asks for basic configuration options like your location, time zone, and keyboard layout. The location is used to select a nearby download server, in order to increase the speed of updates. The installer is usually able to auto-detect these settings, so you only need to change them in rare situations when auto-detection fails, or when you want to use a keyboard layout not commonly used in your country.

screenshot/pve-set-password.png

Next, the password for the superuser (root) and an email address need to be specified. The password must consist of at least 5 characters. It’s highly recommended to use a stronger password. Some guidelines are:

  • Use a minimum password length of at least 12 characters.

  • Include lowercase and uppercase alphabetic characters, numbers, and symbols.

  • Avoid character repetition, keyboard patterns, common dictionary words, letter or number sequences, usernames, relative or pet names, romantic links (current or past), and biographical information (for example ID numbers, ancestors' names or dates).

The email address is used to send notifications to the system administrator. For example:

  • Information about available package updates.

  • Error messages from periodic cron jobs.

screenshot/pve-setup-network.png

All those notification mails will be sent to the specified email address.

The last step is the network configuration. Network interfaces that are UP show a filled circle in front of their name in the drop-down menu. Please note that during installation you can specify either an IPv4 or an IPv6 address, but not both. To configure a dual stack node, add additional IP addresses after the installation.
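For example, a dual stack setup could later be persisted in /etc/network/interfaces along the lines of the following sketch. The interface names and the addresses (taken from documentation ranges) are placeholders; adapt them to your network:

```
auto vmbr0
iface vmbr0 inet static
        address 192.0.2.10/24
        gateway 192.0.2.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0

iface vmbr0 inet6 static
        address 2001:db8::10/64
        gateway 2001:db8::1
```

After editing the file, `ifreload -a` (from ifupdown2, the default on Proxmox VE) applies the new configuration.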

screenshot/pve-installation.png

The next step shows a summary of the previously selected options. Please re-check every setting and use the Previous button if a setting needs to be changed.

After clicking Install, the installer will begin to format the disks and copy packages to the target disk(s). Please wait until this step has finished; then remove the installation medium and restart your system.

screenshot/pve-install-summary.png

Copying the packages usually takes several minutes, mostly depending on the speed of the installation medium and the target disk performance.

When copying and setting up the packages has finished, you can reboot the server. This will be done automatically after a few seconds by default.

Installation Failure

If the installation failed, check out specific errors on the second TTY (CTRL + ALT + F2) and ensure that the system meets the minimum requirements.

If the installation is still not working, look at the how to get help chapter.

Accessing the Management Interface Post-Installation

screenshot/gui-login-window.png

After a successful installation and reboot of the system you can use the Proxmox VE web interface for further configuration.

  1. Point your browser to the IP address given during the installation and port 8006, for example: https://youripaddress:8006

  2. Log in using the root (realm PAM) username and the password chosen during installation.

  3. Upload your subscription key to gain access to the Enterprise repository. Otherwise, you will need to set up one of the public, less tested package repositories to get updates for security fixes, bug fixes, and new features.

  4. Check the IP configuration and hostname.

  5. Check the timezone.

  6. Check your Firewall settings.

Advanced LVM Configuration Options

The installer creates a Volume Group (VG) called pve, and additional Logical Volumes (LVs) called root, data, and swap, if ext4 or xfs is used. To control the size of these volumes use:

hdsize

Defines the total hard disk size to be used. This way you can reserve free space on the hard disk for further partitioning (for example for an additional PV and VG on the same hard disk that can be used for LVM storage).

swapsize

Defines the size of the swap volume. The default is the size of the installed memory, minimum 4 GB and maximum 8 GB. The resulting value cannot be greater than hdsize/8.

Note If set to 0, no swap volume will be created.
maxroot

Defines the maximum size of the root volume, which stores the operating system. The maximum limit of the root volume size is hdsize/4.

maxvz

Defines the maximum size of the data volume. The actual size of the data volume is:

datasize = hdsize - rootsize - swapsize - minfree

Where datasize cannot be bigger than maxvz.

Note In case of LVM thin, the data pool will only be created if datasize is bigger than 4GB.
Note If set to 0, no data volume will be created and the storage configuration will be adapted accordingly.
minfree

Defines the amount of free space that should be left in the LVM volume group pve. With more than 128GB storage available, the default is 16GB, otherwise hdsize/8 will be used.

Note LVM requires free space in the VG for snapshot creation (not required for lvmthin snapshots).
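Putting these sizing rules together, here is a minimal shell sketch with assumed values: a 100 GB disk, 16 GB of RAM, and the root volume assumed to use its full hdsize/4 maximum.

```shell
# Assumed inputs (hypothetical): 100 GB disk, 16 GB of RAM.
hdsize=100
ram=16

# swapsize defaults to the RAM size, clamped to [4, 8] GB
# and never greater than hdsize/8.
swap=$ram
[ "$swap" -lt 4 ] && swap=4
[ "$swap" -gt 8 ] && swap=8
[ "$swap" -gt $((hdsize / 8)) ] && swap=$((hdsize / 8))

# maxroot caps the root volume at hdsize/4 (assumed fully used here).
root=$((hdsize / 4))

# minfree: 16 GB with more than 128 GB available, otherwise hdsize/8.
if [ "$hdsize" -gt 128 ]; then minfree=16; else minfree=$((hdsize / 8)); fi

# datasize = hdsize - rootsize - swapsize - minfree
datasize=$((hdsize - root - swap - minfree))
echo "swap=${swap}G root=${root}G minfree=${minfree}G data=${datasize}G"
```

With these inputs the sketch prints swap=8G root=25G minfree=12G data=55G, i.e. the 16 GB of RAM is clamped to an 8 GB swap volume and 55 GB remain for the data volume.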

Advanced ZFS Configuration Options

The installer creates the ZFS pool rpool, if ZFS is used. No swap space is created but you can reserve some unpartitioned space on the install disks for swap. You can also create a swap zvol after the installation, although this can lead to problems (see ZFS swap notes).

ashift

Defines the ashift value for the created pool. The ashift needs to be set at least to the sector-size of the underlying disks (2 to the power of ashift is the sector-size), or any disk which might be put in the pool (for example the replacement of a defective disk).

compress

Defines whether compression is enabled for rpool.

checksum

Defines which checksumming algorithm should be used for rpool.

copies

Defines the copies parameter for rpool. Check the zfs(8) manpage for the semantics, and why this does not replace redundancy on disk-level.

ARC max size

Defines the maximum size the ARC can grow to and thus limits the amount of memory ZFS will use. See also the section on how to limit ZFS memory usage for more details.

hdsize

Defines the total hard disk size to be used. This is useful to save free space on the hard disk(s) for further partitioning (for example to create a swap-partition). hdsize is only honored for bootable disks, that is only the first disk or mirror for RAID0, RAID1 or RAID10, and all disks in RAID-Z[123].
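The note above that 2 to the power of ashift is the sector size can be illustrated directly; the common values are ashift=9 for 512-byte and ashift=12 for 4 KiB sectors:

```shell
# 2^ashift gives the sector size in bytes (shown for ashift 9 and 12)
echo "ashift=9  -> $((1 << 9)) bytes"
echo "ashift=12 -> $((1 << 12)) bytes"
```

On a running system, `lsblk -o NAME,PHY-SEC,LOG-SEC` shows the physical and logical sector sizes of the disks, and `zpool get ashift rpool` shows the value the pool was created with.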

Advanced BTRFS Configuration Options

No swap space is created when BTRFS is used, but you can reserve some unpartitioned space on the install disks for swap. You can create a separate partition, a BTRFS subvolume, or a swapfile using the btrfs filesystem mkswapfile command.
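A sketch of the swapfile route, assuming btrfs-progs 6.1 or newer and /swap as a hypothetical subvolume path:

```
# btrfs subvolume create /swap
# btrfs filesystem mkswapfile --size 4g /swap/swapfile
# swapon /swap/swapfile
```

To activate the swapfile automatically on boot, add a matching entry to /etc/fstab.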

compress

Defines whether compression is enabled for the BTRFS subvolume. Different compression algorithms are supported: on (equivalent to zlib), zlib, lzo and zstd. Defaults to off.

hdsize

Defines the total hard disk size to be used. This is useful to save free space on the hard disk(s) for further partitioning (for example, to create a swap partition).

ZFS Performance Tips

ZFS works best with a lot of memory. If you intend to use ZFS, make sure to have enough RAM available for it. A good rule of thumb is 4 GB plus 1 GB of RAM for each TB of raw disk space.
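For example, applying this rule of thumb to a hypothetical node with 8 TB of raw disk space:

```shell
# 4 GB base plus 1 GB per TB of raw disk space (8 TB assumed here)
tb_raw=8
echo "$((4 + tb_raw)) GB RAM recommended for ZFS"
```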

ZFS can use a dedicated drive as write cache, called the ZFS Intent Log (ZIL). Use a fast drive (SSD) for it. It can be added after installation with the following command:

# zpool add <pool-name> log </dev/path_to_fast_ssd>

Adding the nomodeset Kernel Parameter

Problems may arise on very old or very new hardware due to graphics drivers. If the installation hangs during boot, you can try adding the nomodeset parameter. This prevents the Linux kernel from loading any graphics drivers and forces it to continue using the BIOS/UEFI-provided framebuffer.

On the Proxmox VE bootloader menu, navigate to Install Proxmox VE (Terminal UI) and press e to edit the entry. Using the arrow keys, navigate to the line starting with linux, move the cursor to the end of that line and add the parameter nomodeset, separated by a space from the pre-existing last parameter.

Then press Ctrl-X or F10 to boot the configuration.

Unattended Installation

It is possible to install Proxmox VE automatically in an unattended manner. This enables you to fully automate the setup process on bare-metal. Once the installation is complete and the host has booted up, automation tools like Ansible can be used to further configure the installation.

The necessary options for the installer must be provided in an answer file. This file allows the use of filter rules to determine which disks and network cards should be used.

To use the automated installation, it is first necessary to prepare an installation ISO. Visit our wiki for more details and information on the unattended installation.
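As a rough sketch, the answer file is a TOML document similar to the following. The exact schema is maintained in the wiki and may change between releases, so treat all field names and values here as illustrative only:

```
[global]
keyboard = "en-us"
country = "us"
fqdn = "pve.example.com"
mailto = "admin@example.com"
timezone = "UTC"
root_password = "change-me"

[network]
source = "from-dhcp"

[disk-setup]
filesystem = "ext4"
disk_list = ["sda"]
```

The prepared ISO is then built with the proxmox-auto-install-assistant tool shipped in the Proxmox VE repositories, which embeds or references the answer file.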

Video Tutorials

See the list of all official tutorials on our Proxmox VE YouTube Channel