Installation

From Proxmox VE

{{#pvedocs:sysadmin-pve-installation-plain.html}}

== Introduction ==

Proxmox VE installs the complete operating system and management tools in 3 to 5 minutes (depending on the hardware used), including the following:

*Complete operating system (Debian Linux, 64-bit)
*Partitioning of the hard drive with ext4 (alternatively ext3 or xfs) or ZFS
*[[Proxmox VE Kernel]] with LXC and KVM support
*Complete toolset
*Web-based management interface

Please note: by default the complete server is used and all existing data is removed.

If you want to set custom options for the installer, or need to debug the installation process on your server, you can use some [[Debugging_Installation|special boot options]].
=== Video tutorials ===

*List of all official tutorials on our [http://www.youtube.com/proxmoxve Proxmox VE YouTube Channel]
*Tutorials in Spanish language on [http://www.youtube.com/playlist?list=PLUULBIhA5QDBdNf1pcTZ5UXhek63Fij8z ITexperts.es YouTube Play List]
== System requirements ==

For production servers, high quality server equipment is needed. Keep in mind that if you run 10 virtual servers on one machine and then experience a hardware failure, 10 services are lost. Proxmox VE supports clustering, which means that multiple Proxmox VE installations can be centrally managed thanks to the included cluster functionality.

Proxmox VE can use local storage (DAS), SAN, NAS and also distributed storage (Ceph RBD). For details see [[Storage Model]].

=== Minimum requirements, for evaluation ===

*CPU: 64bit (Intel EMT64 or AMD64), [[FAQ#Supported_CPU_chips|Intel VT/AMD-V capable CPU]]/mainboard (for KVM full virtualization support)
*RAM: 1 GB RAM
*Hard drive
*One NIC

=== Recommended system requirements ===

*CPU: 64bit (Intel EMT64 or AMD64), multi-core CPU recommended, [[FAQ#Supported_CPU_chips|Intel VT/AMD-V capable CPU]]/mainboard (for KVM full virtualization support)
*RAM: 8 GB is good, more is better
*[[Raid controller|Hardware RAID]] with battery-protected write cache (BBU) or flash-based protection ([[Software RAID]] is not supported)
*Fast hard drives, best results with 15k rpm SAS, RAID10
*At least two NICs; depending on the storage technology used, you may need more

=== Certified hardware ===

Basically you can use any hardware supporting RHEL6, 64 bit. If you are unsure, post in the [http://forum.proxmox.com/ forum].

As the browser will be used to manage the Proxmox VE server, it would be prudent to follow [[Safe Browsing settings in Firefox | safe browsing practices]].
== Steps to get your Proxmox VE up and running ==

=== Install Proxmox VE server ===

See [[Quick installation]]

[http://youtu.be/ckvPt1Bp9p0 Proxmox VE installation (Video Tutorial)]

If you need to install the outdated 1.9 release, check [[Installing Proxmox VE v1.9 post Lenny retirement]]

=== Optional: Install Proxmox VE on Debian 6 Squeeze (64 bit) ===

EOL.

See [[Install Proxmox VE on Debian Squeeze]]

=== Optional: Install Proxmox VE on Debian 7 Wheezy (64 bit) ===

EOL April 2016

See [[Install Proxmox VE on Debian Wheezy]]

=== Optional: Install Proxmox VE on Debian 8 Jessie (64 bit) ===

See [[Install Proxmox VE on Debian Jessie]]

=== [[Developer_Workstations_with_Proxmox_VE_and_X11]] ===

This page covers the installation of X11 and a basic desktop on top of Proxmox. [[Developer_Workstations_with_Proxmox_VE_and_X11#Optional:_Linux_Mint_Mate_Desktop | Optional:_Linux_Mint_Mate_Desktop]] is also available.

=== Optional: Install Proxmox VE over iSCSI ===

See [[Proxmox ISCSI installation]]

=== Proxmox VE web interface ===

Configuration is done via the Proxmox web interface; just point your browser to the IP address given during installation (<nowiki>https://youripaddress:8006</nowiki>). Please make sure that your browser has the latest Oracle Java browser plugin installed. Proxmox VE is tested with IE9, Firefox 10 and higher, and Google Chrome (latest).

'''Default login is "root" and the root password is defined during the installation process.'''

==== Configure basic system setting ====

Please review the NIC setup, IP and hostname.

'''Note: changing the IP or hostname after cluster creation is not possible (unless you know exactly what you are doing).'''
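As an illustration of what the NIC setup looks like on disk, a typical Proxmox VE network configuration in <code>/etc/network/interfaces</code> uses a bridge named vmbr0. The interface name eno1 and the addresses below are placeholders for your own values:

```
auto lo
iface lo inet loopback

iface eno1 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.0.2.10/24
        gateway 192.0.2.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
```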
==== Get Appliance Templates ====

===== Download =====

Just go to the content tab of your storage (e.g. "local") and [[Get Virtual Appliances|download pre-built Virtual Appliances]] directly to your server. This list is maintained by the Proxmox VE team and more and more appliances will become available. This is the easiest way and a good place to start.

===== Use a NFS share for ISOs =====

If you have a NFS server you can use a NFS share for storing ISO images. To start, configure the NFS ISO store on the web interface (Configuration/Storage).

===== Upload from your desktop =====

If you already have virtual appliances you can upload them via the upload button. To install a virtual machine from an ISO image (using KVM full virtualization), just upload the ISO file via the upload button.

===== Directly to file system =====

Templates and ISO images are stored on the Proxmox VE server (see /var/lib/vz/template/cache for OpenVZ templates and /var/lib/vz/template/iso for ISO images). You can also transfer templates and ISO images via secure copy (scp) to these directories. If you work on a Windows desktop, you can use a graphical scp client like [http://winscp.net WinSCP].
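The same can be done from the command line with the pveam tool and scp. This is a sketch: the template file name shown is an example and may not match what is currently available, and pve-host stands for your server's address:

```shell
# Refresh the list of available appliances and download one to the "local" storage
pveam update
pveam available --section system
pveam download local debian-10-standard_10.7-1_amd64.tar.gz

# Or copy a template/ISO from your desktop straight into the directories
scp ./mytemplate.tar.gz root@pve-host:/var/lib/vz/template/cache/
scp ./myimage.iso root@pve-host:/var/lib/vz/template/iso/
```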
=== Optional: Reverting Thin-LVM to "old" Behavior of <code>/var/lib/vz</code> (Proxmox 4.2 and later) ===

If you installed Proxmox 4.2 (or later), you are confronted with a changed layout of your data: there is no mounted <code>/var/lib/vz</code> LVM volume anymore; instead you find a thin-provisioned volume. This is technically the right choice, but sometimes one wants the old behavior back. This section describes the steps to revert to the "old" layout on a freshly installed Proxmox 4.2:

* After the installation your storage configuration in <code>/etc/pve/storage.cfg</code> will look like this:
<pre>
dir: local
        path /var/lib/vz
        content iso,vztmpl,backup

lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images
</pre>
* You can delete the thin volume via the GUI or manually, and you have to set the local directory to store images and containers as well. You should have such a config in the end:
<pre>
dir: local
        path /var/lib/vz
        maxfiles 0
        content backup,iso,vztmpl,rootdir,images
</pre>
* Now you need to recreate <code>/var/lib/vz</code>:
<pre>
root@pve-42 ~ > lvs
  LV   VG   Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  data pve  twi-a-tz-- 16.38g             0.00   0.49
  root pve  -wi-ao----  7.75g
  swap pve  -wi-ao----  3.88g

root@pve-42 ~ > lvremove pve/data
Do you really want to remove active logical volume data? [y/n]: y
  Logical volume "data" successfully removed

root@pve-42 ~ > lvcreate --name data -l +100%FREE pve
  Logical volume "data" created.

root@pve-42 ~ > mkfs.ext4 /dev/pve/data
mke2fs 1.42.12 (29-Aug-2014)
Discarding device blocks: done
Creating filesystem with 5307392 4k blocks and 1327104 inodes
Filesystem UUID: 310d346a-de4e-48ae-83d0-4119088af2e3
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
</pre>
* Then add the new volume in your <code>/etc/fstab</code>:
<pre>
/dev/pve/data /var/lib/vz ext4 defaults 0 1
</pre>
* Restart to check if everything survives a reboot.

You should end up with a working "old-style" configuration where you "see" your files as before Proxmox 4.2.
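Before rebooting, the new fstab entry can also be checked immediately. A sketch (run as root on the Proxmox VE host):

```shell
# Mount everything from fstab that is not mounted yet, then verify the mount
mount -a
df -h /var/lib/vz
```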
 
 
== Create Virtual Machines ==
 
 
 
=== Linux Containers (LXC) ===
 
 
 
See [[Linux Container]] for a detailed description. Get LXC images from [http://images.linuxcontainers.org/ images.linuxcontainers.org] | [http://images.linuxcontainers.org/images/ Read their descriptions].
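LXC containers can also be created from the command line with the pct tool. This is a sketch: the VMID, storage names and template file are examples, and the template must already be available on the storage (see Get Appliance Templates):

```shell
# Create and start a container from a downloaded template
pct create 200 local:vztmpl/debian-10-standard_10.7-1_amd64.tar.gz \
    --hostname ct1 --memory 1024 --swap 512 \
    --rootfs local-lvm:8 --net0 name=eth0,bridge=vmbr0,ip=dhcp
pct start 200
```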
 
 
 
=== Container (OpenVZ) ===
 
 
 
[[Image:Screen-create-container-mailgateway.png|thumb]] [[Image:Screen-create-container-mailgateway-log.png|thumb]] [[Image:Screen-virtual-machine-detail1.png|thumb]]
 
 
 
First [[#Get_Appliance_Templates|get the appropriate appliance template(s)]].
 
 
 
Then just click "Create CT":
 
 
 
'''General'''
 
 
 
*Node: If you have several Proxmox VE servers, select the node where you want to create the new container
 
*VM ID: choose a virtual machine identification number, just use the given ID or overwrite the suggested one
 
*Hostname: give a unique server name for the new container
 
*Resource Pool: select a previously created resource pool (optional)
 
*Storage: select the storage for your container
 
*Password: set the root password for your container
 
 
 
'''Template'''
 
 
 
*Storage: select your template data store (you need to download templates before you can select them here)
 
*Template: choose the template
 
 
 
'''Resources'''
 
 
 
*Memory (MB): set the memory (RAM)
 
*Swap (MB): set the swap
 
*Disk size (GB): set the total disk size
 
*CPUs: set the number of CPUs (if you run java inside your container, choose at least 2 here)
 
 
 
'''Network'''
 
 
 
*Routed mode ([http://wiki.openvz.org/Venet venet]): the default; assigns a unique IP address

*Bridged mode ([http://wiki.openvz.org/Veth veth]): only needed in some cases (see [http://wiki.openvz.org/Differences_between_venet_and_veth Differences_between_venet_and_veth] on the OpenVZ wiki for details). If you select bridged Ethernet, the IP configuration has to be done inside the container, as you would do on a physical server.
 
 
 
'''DNS'''
 
 
 
*DNS Domain: e.g. yourdomain.com
 
*First/Second DNS Servers: enter DNS servers
 
 
 
'''Confirm'''
 
 
 
This tab shows a summary; please check that everything is set as needed. If you need to change a setting, you can jump back to the previous tabs just by clicking.
 
 
 
After you click "Finish", all settings are applied. Wait for completion (this can take between a few seconds and a minute, depending on the template used and your hardware).
 
 
 
'''CentOS 7'''
 
 
 
If you can't ping a CentOS 7 container: to get rid of problems with venet0 when running a CentOS 7 container (OpenVZ), activate the patch redhat-add_ip.sh-patch as follows:
 
 
 
1. transfer the patch file to a working directory on the Proxmox VE host (e.g. /root) and extract it (unzip)
 
2. perform this command:
 
  # patch -p0 < redhat-add_ip.sh-patch
 
3. stop and start the respective container
 
 
 
Patch file: http://forum.proxmox.com/threads/22770-fix-for-centos-7-container-networking
 
 
 
=== Video Tutorials ===
 
 
 
*See [http://www.youtube.com/proxmoxve Proxmox VE YouTube Channel]
 
 
 
== Virtual Machines (KVM) ==
 
 
 
Just click "Create VM":
 
 
 
=== General ===
 
 
 
*Node: If you have several Proxmox VE servers, select the node where you want to create the new VM
 
*VM ID: choose a virtual machine identification number, just use the given ID or overwrite the suggested one
 
*Name: choose a name for your VM (this is not the hostname); it can be changed at any time
 
*Resource Pool: select a previously created resource pool (optional)
 
 
 
=== OS ===
 
 
 
Select the Operating System (OS) of your VM
 
 
 
=== CD/DVD ===
 
  
*Use CD/DVD disc image file (iso): Select the storage where you previously uploaded your iso images and choose the file
 
*Use physical CD/DVD Drive: choose this to use the CD/DVD from your Proxmox VE node
 
*Do not use any media: choose this if you do not want any media
 
 
=== Hard disk ===
 
 
* Bus/Device: choose the bus type; as long as your guest supports it, go for ''virtio''
 
* Storage: select the storage where you want to store the disk

* Disk size (GB): define the size
 
* Format: Define the disk image format. For good performance, go for raw. If you plan to use snapshots, go for qcow2.
 
* Cache: define the cache policy for the virtual disk
 
* Limits: (if necessary) set the maximum transfer speeds
 
 
=== CPU ===
 
 
*Sockets: set the number of CPU sockets
 
*Cores: set the number of CPU Cores per socket
 
*CPU type: select CPU type
 
*Total cores: never use more CPU cores than physically available on the Proxmox VE host
 
 
=== Memory ===
 
 
*Memory (MB): set the memory (RAM) for your VM
 
 
=== Network ===
 
 
*Bridged mode: this is the default setting; just choose the bridge where you want to connect your VM. If you want to use a VLAN, you can define the VLAN tag for the VM
 
*NAT mode
 
*No network device
 
*Model: choose the emulated network device; as long as your guest supports it, go for virtio
 
*MAC address: use 'auto' or overwrite with a valid and unique MAC address
 
*Rate limit (MB/s): set a speed limit for this network adapter
 
 
=== Confirm ===
 
 
This tab shows a summary; please check that everything is set as needed. If you need to change a setting, you can jump back to the previous tabs just by clicking. After you click "Finish", all settings are applied. Wait for completion (this just takes a second).
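The same VM can also be created from the command line with the qm tool. This is a sketch: the VMID, storage names and ISO file are examples:

```shell
# Create a VM with 2 GB RAM, 2 cores, a virtio NIC on vmbr0,
# an installer ISO in the virtual CD drive and a 32 GB virtio disk
qm create 100 --name myvm --memory 2048 --sockets 1 --cores 2 \
    --net0 virtio,bridge=vmbr0 \
    --ide2 local:iso/debian-10.0.0-amd64-netinst.iso,media=cdrom \
    --virtio0 local-lvm:32 --ostype l26
qm start 100
```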
 
 
== Video Tutorials ==
 
 
*See [http://www.youtube.com/proxmoxve Proxmox VE YouTube Channel]
 
 
== Managing Virtual Machines ==
 
 
Go to "VM Manager/Virtual Machines" to see a list of your Virtual Machines.
 
 
Basic tasks can be done by clicking on the red arrow - drop down menu:
 
 
*start, restart, shutdown, stop
 
*migrate: migrate a Virtual Machine to another physical host (you need at least two Proxmox VE servers - see [[Proxmox VE Cluster]])
 
*console: for container virtualization, the VNC console automatically logs in as root. For a KVM Virtual Machine, the console shows the screen of the fully virtualized machine.
 
 
For a '''detailed view''' and '''configuration changes''' just click on a Virtual Machine row in the list of VMs.
 
 
"Logs" on a container Virtual Machine:
 
 
*Boot/Init: shows the Boot/Init logs generated during start or stop
 
*Command: see the current/last executed task
 
*Syslog: see the real time syslog of the Virtual Machine
 
 
== Networking and Firewall ==
 
 
See [[Network Model]] and [[Proxmox VE Firewall]]
 
  
 
[[Category:HOWTO]] [[Category:Installation]]
 
[[Category:Reference Documentation]]
 

Revision as of 10:23, 16 July 2019

Proxmox VE is based on Debian. This is why the install disk images (ISO files) provided by Proxmox include a complete Debian system (Debian 10 Buster for Proxmox VE version 6.x) as well as all necessary Proxmox VE packages.

The installer will guide through the setup, allowing you to partition the local disk(s), apply basic system configurations (for example, timezone, language, network) and install all required packages. This process should not take more than a few minutes. Installing with the provided ISO is the recommended method for new and existing users.

Alternatively, Proxmox VE can be installed on top of an existing Debian system. This option is only recommended for advanced users because detailed knowledge about Proxmox VE is required.

Using the Proxmox VE Installer

The installer ISO image includes the following:

  • Complete operating system (Debian Linux, 64-bit)

  • The Proxmox VE installer, which partitions the local disk(s) with ext4, ext3, xfs or ZFS and installs the operating system.

  • Proxmox VE Linux kernel with KVM and LXC support

  • Complete toolset for administering virtual machines, containers, the host system, clusters and all necessary resources

  • Web-based management interface

Note All existing data on the drives selected for installation will be removed during the installation process. The installer does not add boot menu entries for other operating systems.

Please insert the prepared installation media (for example, USB flash drive or CD-ROM) and boot from it.
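On Linux, a USB flash drive can be prepared from the downloaded ISO with something like the following. A sketch: /dev/sdX is a placeholder, so double-check the device name with lsblk first, as all data on that device will be destroyed:

```shell
# Write the installer ISO to the USB stick (destroys all data on /dev/sdX!)
dd bs=1M conv=fdatasync if=./proxmox-ve_*.iso of=/dev/sdX
```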

Tip Make sure that booting from the installation medium (for example, USB) is enabled in your server's firmware settings.

[Screenshot: pve-grub-menu.png]

After choosing the correct entry (e.g. Boot from USB) the Proxmox VE menu will be displayed and one of the following options can be selected:

Install Proxmox VE

Starts the normal installation.

Tip It’s possible to use the installation wizard with a keyboard only. Buttons can be clicked by pressing the ALT key combined with the underlined character from the respective button. For example, ALT + N to press a Next button.
Install Proxmox VE (Debug mode)

Starts the installation in debug mode. A console will be opened at several installation steps. This helps to debug the situation if something goes wrong. To exit a debug console, press CTRL-D. This option can be used to boot a live system with all basic tools available. You can use it, for example, to repair a degraded ZFS rpool or fix the bootloader for an existing Proxmox VE setup.

Rescue Boot

With this option you can boot an existing installation. It searches all attached hard disks. If it finds an existing installation, it boots directly into that disk using the Linux kernel from the ISO. This can be useful if there are problems with the boot block (grub) or the BIOS is unable to read the boot block from the disk.

Test Memory

Runs memtest86+. This is useful to check if the memory is functional and free of errors.

[Screenshot: pve-select-target-disk.png]

After selecting Install Proxmox VE and accepting the EULA, the prompt to select the target hard disk(s) will appear. The Options button opens the dialog to select the target file system.

The default file system is ext4. The Logical Volume Manager (LVM) is used when ext3, ext4 or xfs is selected. Additional options to restrict LVM space can be set (see below).

Proxmox VE can be installed on ZFS. As ZFS offers several software RAID levels, this is an option for systems that don’t have a hardware RAID controller. The target disks must be selected in the Options dialog. More ZFS specific settings can be changed under Advanced Options (see below).

Warning ZFS on top of any hardware RAID is not supported and can result in data loss.
[Screenshot: pve-select-location.png]

The next page asks for basic configuration options like the location, the time zone, and keyboard layout. The location is used to select a download server close by to speed up updates. The installer usually auto-detects these settings. They only need to be changed in the rare case that auto detection fails or a different keyboard layout should be used.

[Screenshot: pve-set-password.png]

Next, the password of the superuser (root) and an email address need to be specified. The password must consist of at least 5 characters. It’s highly recommended to use a stronger password. Some guidelines are:

  • Use a minimum password length of 12 to 14 characters.

  • Include lowercase and uppercase alphabetic characters, numbers, and symbols.

  • Avoid character repetition, keyboard patterns, common dictionary words, letter or number sequences, usernames, relative or pet names, romantic links (current or past), and biographical information (for example ID numbers, ancestors' names or dates).
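One quick way to generate a password that follows these guidelines is to draw random characters from /dev/urandom. A sketch; adjust the length and character set to taste:

```shell
# Draw 14 random characters from letters, digits and a few symbols
pw=$(tr -dc 'A-Za-z0-9@%_+=-' </dev/urandom | head -c 14)
echo "$pw"
```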

The email address is used to send notifications to the system administrator. For example:

  • Information about available package updates.

  • Error messages from periodic CRON jobs.

[Screenshot: pve-setup-network.png]

The last step is the network configuration. Please note that during installation you can either use an IPv4 or IPv6 address, but not both. To configure a dual stack node, add additional IP addresses after the installation.

[Screenshot: pve-installation.png]

The next step shows a summary of the previously selected options. Re-check every setting and use the Previous button if a setting needs to be changed. To accept, press Install. The installation starts to format disks and copies packages to the target. Please wait until this step has finished; then remove the installation medium and restart your system.

[Screenshot: pve-install-summary.png]

If the installation failed, check for specific errors on the second TTY (CTRL + ALT + F2) and ensure that the system meets the minimum requirements. If the installation is still not working, look at the how to get help chapter.

Further configuration is done via the Proxmox web interface. Point your browser to the IP address given during installation (https://youripaddress:8006).

Note Default login is "root" (realm PAM) and the root password is defined during the installation process.

Advanced LVM Configuration Options

The installer creates a Volume Group (VG) called pve, and additional Logical Volumes (LVs) called root, data, and swap. To control the size of these volumes use:

hdsize

Defines the total hard disk size to be used. This way you can reserve free space on the hard disk for further partitioning (for example for an additional PV and VG on the same hard disk that can be used for LVM storage).

swapsize

Defines the size of the swap volume. The default is the size of the installed memory, minimum 4 GB and maximum 8 GB. The resulting value cannot be greater than hdsize/8.

Note If set to 0, no swap volume will be created.
maxroot

Defines the maximum size of the root volume, which stores the operating system. The maximum limit of the root volume size is hdsize/4.

maxvz

Defines the maximum size of the data volume. The actual size of the data volume is:

datasize = hdsize - rootsize - swapsize - minfree

Where datasize cannot be bigger than maxvz.

Note In case of LVM thin, the data pool will only be created if datasize is bigger than 4GB.
Note If set to 0, no data volume will be created and the storage configuration will be adapted accordingly.
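A worked example of the sizing rules above, under these assumptions: a 100 GB disk, 8 GB of installed memory, no explicit maxroot/maxvz limits, and the root volume taking its hdsize/4 maximum:

```shell
hdsize=100                   # total disk size in GB (assumed)
swapsize=8                   # installed memory, clamped to 4..8 GB and hdsize/8
minfree=$((hdsize / 8))      # disk is under 128 GB, so hdsize/8 applies -> 12
rootsize=$((hdsize / 4))     # assuming root grows to its hdsize/4 maximum -> 25
datasize=$((hdsize - rootsize - swapsize - minfree))
echo "datasize=${datasize}GB"
```

With these numbers, 55 GB is left for the data volume.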
minfree

Defines the amount of free space left in the LVM volume group pve. With more than 128GB storage available the default is 16GB, else hdsize/8 will be used.

Note LVM requires free space in the VG for snapshot creation (not required for lvmthin snapshots).

Advanced ZFS Configuration Options

The installer creates the ZFS pool rpool. No swap space is created, but you can reserve some unpartitioned space on the install disks for swap. You can also create a swap zvol after the installation, although this can lead to problems (see ZFS swap notes).

ashift

Defines the ashift value for the created pool. The ashift needs to be set at least to the sector-size of the underlying disks (2 to the power of ashift is the sector-size), or any disk which might be put in the pool (for example the replacement of a defective disk).

compress

Defines whether compression is enabled for rpool.

checksum

Defines which checksumming algorithm should be used for rpool.

copies

Defines the copies parameter for rpool. Check the zfs(8) manpage for the semantics, and why this does not replace redundancy on disk-level.

hdsize

Defines the total hard disk size to be used. This is useful to save free space on the hard disk(s) for further partitioning (for example to create a swap partition). hdsize is only honored for bootable disks, that is, only the first disk or mirror for RAID0, RAID1 or RAID10, and all disks in RAID-Z[123].

ZFS Performance Tips

ZFS works best with a lot of memory. If you intend to use ZFS make sure to have enough RAM available for it. A good calculation is 4GB plus 1GB RAM for each TB RAW disk space.

ZFS can use a dedicated drive as write cache, called the ZFS Intent Log (ZIL). Use a fast drive (SSD) for it. It can be added after installation with the following command:

# zpool add <pool-name> log </dev/path_to_fast_ssd>

Video Tutorials