Difference between revisions of "Installation"

From Proxmox VE
{{#pvedocs:sysadmin-pve-installation-plain.html}}

== Steps to get your Proxmox VE up and running ==

=== Install Proxmox VE server ===

See [[Quick installation]]

[http://youtu.be/ckvPt1Bp9p0 Proxmox VE installation (Video Tutorial)]

If you need to install the outdated 1.9 release, check [[Installing Proxmox VE v1.9 post Lenny retirement]]

=== Optional: Install Proxmox VE on Debian 6 Squeeze (64 bit) ===

EOL.

See [[Install Proxmox VE on Debian Squeeze]]

=== Optional: Install Proxmox VE on Debian 7 Wheezy (64 bit) ===

EOL April 2016

See [[Install Proxmox VE on Debian Wheezy]]
 
 
=== Optional: Install Proxmox VE on Debian 8 Jessie (64 bit) ===
 
 
See [[Install Proxmox VE on Debian Jessie]]
 
 
=== [[Developer_Workstations_with_Proxmox_VE_and_X11]] ===
 
 
That page covers the installation of X11 and a basic desktop on top of Proxmox VE. [[Developer_Workstations_with_Proxmox_VE_and_X11#Optional:_Linux_Mint_Mate_Desktop | Optional:_Linux_Mint_Mate_Desktop]] is also available.
 
 
=== Optional: Install Proxmox VE over iSCSI ===
 
 
See [[Proxmox ISCSI installation]]
 
 
==== Get Appliance Templates ====
 
 
===== Download =====
 
 
Just go to the content tab of your storage (e.g. "local") and [[Get Virtual Appliances|download pre-built Virtual Appliances]] directly to your server. The list is maintained by the Proxmox VE team, and more appliances are added over time. This is the easiest way and a good place to start.
 
 
===== Use an NFS share for ISOs =====
 
 
If you have an NFS server, you can use an NFS share for storing ISO images. To start, configure the NFS ISO store on the web interface (Configuration/Storage).
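After adding the storage on the web interface, the resulting entry in <code>/etc/pve/storage.cfg</code> might look like the following sketch. The storage ID, server address, and export path are placeholder examples:

<pre>
nfs: nfs-iso
        path /mnt/pve/nfs-iso
        server 192.168.1.20
        export /srv/isos
        content iso,vztmpl
        options vers=3,soft
</pre>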
 
 
===== Upload from your desktop =====
 
 
If you already have Virtual Appliances, you can upload them via the upload button. To install a virtual machine from an ISO image (using KVM full virtualization), just upload the ISO file via the upload button.
 
 
===== Directly to file system =====
 
 
Templates and ISO images are stored on the Proxmox VE server (see /var/lib/vz/template/cache for OpenVZ templates and /var/lib/vz/template/iso for ISO images). You can also transfer templates and ISO images via secure copy (scp) to these directories. If you work on a Windows desktop, you can use a graphical scp client like [http://winscp.net WinSCP].
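From a Linux or macOS shell, the transfers can be sketched as follows. The host and file names are placeholders, and the scp commands are echoed rather than executed so the sketch stays self-contained:

```shell
# Target directories on the Proxmox VE host:
iso_dir=/var/lib/vz/template/iso      # ISO images for KVM guests
tmpl_dir=/var/lib/vz/template/cache   # container templates
# The actual transfers would be, for example:
echo "scp my-distro.iso root@pve.example.com:${iso_dir}/"
echo "scp my-template.tar.gz root@pve.example.com:${tmpl_dir}/"
```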
 
 
=== Optional: Reverting Thin-LVM to "old" Behavior of <code>/var/lib/vz</code> (Proxmox 4.2 and later) ===
 
 
If you installed Proxmox VE 4.2 (or later), you will find a changed layout of your data: there is no mounted <code>/var/lib/vz</code> LVM volume anymore; instead there is a thin-provisioned volume. This is technically the better choice, but sometimes you may want the old behavior back. This section describes the steps to revert to the "old" layout on a freshly installed Proxmox VE 4.2:
 
 
* After the Installation your storage configuration in <code>/etc/pve/storage.cfg</code> will look like this:
 
<pre>
 
dir: local
 
        path /var/lib/vz
 
        content iso,vztmpl,backup
 
 
lvmthin: local-lvm
 
        thinpool data
 
        vgname pve
 
        content rootdir,images
 
</pre>
 
* You can delete the thin volume via the GUI or manually; the local directory must then be configured to store images and containers as well. You should have a config like this in the end:
 
<pre>
 
dir: local
 
        path /var/lib/vz
 
        maxfiles 0
 
        content backup,iso,vztmpl,rootdir,images
 
</pre>
 
* Now you need to recreate <code>/var/lib/vz</code>
 
<pre>
 
root@pve-42 ~ > lvs
 
  LV  VG  Attr      LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
 
  data pve  twi-a-tz-- 16.38g            0.00  0.49
 
  root pve  -wi-ao----  7.75g
 
  swap pve  -wi-ao----  3.88g
 
 
root@pve-42 ~ > lvremove pve/data
 
Do you really want to remove active logical volume data? [y/n]: y
 
  Logical volume "data" successfully removed
 
 
root@pve-42 ~ > lvcreate --name data -l +100%FREE pve
 
  Logical volume "data" created.
 
 
root@pve-42 ~ > mkfs.ext4 /dev/pve/data
 
mke2fs 1.42.12 (29-Aug-2014)
 
Discarding device blocks: done
 
Creating filesystem with 5307392 4k blocks and 1327104 inodes
 
Filesystem UUID: 310d346a-de4e-48ae-83d0-4119088af2e3
 
Superblock backups stored on blocks:
 
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
 
        4096000
 
 
Allocating group tables: done
 
Writing inode tables: done
 
Creating journal (32768 blocks): done
 
Writing superblocks and filesystem accounting information: done
 
</pre>
 
* Then add the new volume in your <code>/etc/fstab</code>:
 
<pre>
 
/dev/pve/data /var/lib/vz ext4 defaults 0 1
 
</pre>
 
* Restart to check if everything survives a reboot.
 
 
You should end up with a working "old-style" configuration where you "see" your files as they were before Proxmox VE 4.2.
 
  
 
[[Category:HOWTO]] [[Category:Installation]]
 
[[Category:Reference Documentation]]
 

Revision as of 10:27, 17 November 2021

Proxmox VE is based on Debian. This is why the install disk images (ISO files) provided by Proxmox include a complete Debian system as well as all necessary Proxmox VE packages.

Tip See the support table in the FAQ for the relationship between Proxmox VE releases and Debian releases.

The installer will guide you through the setup, allowing you to partition the local disk(s), apply basic system configurations (for example, timezone, language, network) and install all required packages. This process should not take more than a few minutes. Installing with the provided ISO is the recommended method for new and existing users.

Alternatively, Proxmox VE can be installed on top of an existing Debian system. This option is only recommended for advanced users because detailed knowledge about Proxmox VE is required.

Using the Proxmox VE Installer

The installer ISO image includes the following:

  • Complete operating system (Debian Linux, 64-bit)

  • The Proxmox VE installer, which partitions the local disk(s) with ext4, XFS, BTRFS (technology preview), or ZFS and installs the operating system.

  • Proxmox VE Linux kernel with KVM and LXC support

  • Complete toolset for administering virtual machines, containers, the host system, clusters and all necessary resources

  • Web-based management interface

Note All existing data on the drives selected for installation will be removed during the installation process. The installer does not add boot menu entries for other operating systems.

Please insert the prepared installation media (for example, USB flash drive or CD-ROM) and boot from it.

Tip Make sure that booting from the installation medium (for example, USB) is enabled in your server's firmware settings and that Secure Boot is disabled.

[Screenshot: Proxmox VE boot menu]

After choosing the correct entry (e.g. Boot from USB) the Proxmox VE menu will be displayed and one of the following options can be selected:

Install Proxmox VE

Starts the normal installation.

Tip It’s possible to use the installation wizard with a keyboard only. Buttons can be clicked by pressing the ALT key combined with the underlined character from the respective button. For example, ALT + N to press a Next button.
Advanced Options: Install Proxmox VE (Debug mode)

Starts the installation in debug mode. A console will be opened at several installation steps. This helps to debug the situation if something goes wrong. To exit a debug console, press CTRL-D. This option can be used to boot a live system with all basic tools available. You can use it, for example, to repair a degraded ZFS rpool or fix the bootloader for an existing Proxmox VE setup.

Advanced Options: Rescue Boot

With this option you can boot an existing installation. It searches all attached hard disks. If it finds an existing installation, it boots directly into that disk using the Linux kernel from the ISO. This can be useful if there are problems with the boot block (grub) or the BIOS is unable to read the boot block from the disk.

Advanced Options: Test Memory

Runs memtest86+. This is useful to check if the memory is functional and free of errors.

[Screenshot: target disk selection]

After selecting Install Proxmox VE and accepting the EULA, the prompt to select the target hard disk(s) will appear. The Options button opens the dialog to select the target file system.

The default file system is ext4. The Logical Volume Manager (LVM) is used when ext4 or xfs is selected. Additional options to restrict LVM space can also be set (see below).

Proxmox VE can be installed on ZFS. As ZFS offers several software RAID levels, this is an option for systems that don’t have a hardware RAID controller. The target disks must be selected in the Options dialog. More ZFS specific settings can be changed under Advanced Options (see below).

Warning ZFS on top of any hardware RAID is not supported and can result in data loss.
[Screenshot: location and time zone selection]

The next page asks for basic configuration options like the location, the time zone, and keyboard layout. The location is used to select a download server close by to speed up updates. The installer usually auto-detects these settings. They only need to be changed in the rare case that auto detection fails or a different keyboard layout should be used.

[Screenshot: password and email settings]

Next, the password of the superuser (root) and an email address need to be specified. The password must consist of at least 5 characters. It’s highly recommended to use a stronger password. Some guidelines are:

  • Use a minimum password length of 12 to 14 characters.

  • Include lowercase and uppercase alphabetic characters, numbers, and symbols.

  • Avoid character repetition, keyboard patterns, common dictionary words, letter or number sequences, usernames, relative or pet names, romantic links (current or past), and biographical information (for example ID numbers, ancestors' names or dates).
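As one possible way to meet these guidelines, a random password can be generated on most Linux systems. This assumes OpenSSL is installed; any comparable random generator works just as well:

```shell
# 16 random bytes, base64-encoded, trimmed to 14 characters.
openssl rand -base64 16 | cut -c1-14
```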

The email address is used to send notifications to the system administrator. For example:

  • Information about available package updates.

  • Error messages from periodic CRON jobs.

[Screenshot: network configuration]

The last step is the network configuration. Please note that during installation you can use either an IPv4 or an IPv6 address, but not both. To configure a dual stack node, add additional IP addresses after the installation.
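As a sketch of such a post-install change on Debian's ifupdown, an extra address family stanza can be added in /etc/network/interfaces. The interface name and all addresses below are placeholder examples, not values from this document:

```
auto vmbr0
iface vmbr0 inet static
        address 192.168.1.10/24
        gateway 192.168.1.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0

# Added after installation for dual stack (example addresses):
iface vmbr0 inet6 static
        address 2001:db8::10/64
        gateway 2001:db8::1
```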

[Screenshot: installation in progress]

The next step shows a summary of the previously selected options. Re-check every setting and use the Previous button if a setting needs to be changed. To accept, press Install. The installation starts to format disks and copies packages to the target. Please wait until this step has finished; then remove the installation medium and restart your system.

[Screenshot: installation summary]

If the installation failed, check out specific errors on the second TTY (‘CTRL + ALT + F2’) and ensure that the system meets the minimum requirements. If the installation is still not working, look at the how to get help chapter.

Further configuration is done via the Proxmox web interface. Point your browser to the IP address given during installation (https://youripaddress:8006).

Note Default login is "root" (realm PAM) and the root password was defined during the installation process.

Advanced LVM Configuration Options

The installer creates a Volume Group (VG) called pve, and additional Logical Volumes (LVs) called root, data, and swap. To control the size of these volumes use:

hdsize

Defines the total hard disk size to be used. This way you can reserve free space on the hard disk for further partitioning (for example for an additional PV and VG on the same hard disk that can be used for LVM storage).

swapsize

Defines the size of the swap volume. The default is the size of the installed memory, minimum 4 GB and maximum 8 GB. The resulting value cannot be greater than hdsize/8.

Note If set to 0, no swap volume will be created.
maxroot

Defines the maximum size of the root volume, which stores the operating system. The maximum limit of the root volume size is hdsize/4.

maxvz

Defines the maximum size of the data volume. The actual size of the data volume is:

datasize = hdsize - rootsize - swapsize - minfree

Where datasize cannot be bigger than maxvz.

Note In case of LVM thin, the data pool will only be created if datasize is bigger than 4GB.
Note If set to 0, no data volume will be created and the storage configuration will be adapted accordingly.
minfree

Defines the amount of free space left in the LVM volume group pve. With more than 128 GB of storage available, the default is 16 GB; otherwise hdsize/8 will be used.

Note LVM requires free space in the VG for snapshot creation (not required for lvmthin snapshots).
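As a worked example of these rules, the sizing for a hypothetical 256 GB disk can be checked with shell arithmetic. The variable names are illustrative, not installer option names, and the root volume is assumed to reach its maxroot limit:

```shell
hdsize=256                    # GB given to the installer
swapsize=8                    # GB: RAM-sized, clamped to 4..8 and hdsize/8
maxroot=$(( hdsize / 4 ))     # upper limit for the root volume: 64 GB
minfree=16                    # GB: default when more than 128 GB is available
# datasize = hdsize - rootsize - swapsize - minfree (root assumed = maxroot)
datasize=$(( hdsize - maxroot - swapsize - minfree ))
echo "root<=${maxroot} GB, swap=${swapsize} GB, minfree=${minfree} GB, data=${datasize} GB"
```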

Advanced ZFS Configuration Options

The installer creates the ZFS pool rpool. No swap space is created, but you can reserve some unpartitioned space on the install disks for swap. You can also create a swap zvol after the installation, although this can lead to problems (see the ZFS swap notes).

ashift

Defines the ashift value for the created pool. The ashift value needs to be at least the sector size of the underlying disks (2 to the power of ashift is the sector size), and of any disk which might later be put in the pool (for example, the replacement of a defective disk).
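Since 2 to the power of ashift is the sector size, the common values can be checked directly: ashift=9 corresponds to 512-byte sectors and ashift=12 to 4K-sector disks. A quick sanity check:

```shell
# Print the sector size implied by each common ashift value.
for ashift in 9 12; do
  echo "ashift=${ashift} -> sector size $(( 1 << ashift )) bytes"
done
```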

compress

Defines whether compression is enabled for rpool.

checksum

Defines which checksumming algorithm should be used for rpool.

copies

Defines the copies parameter for rpool. Check the zfs(8) manpage for the semantics, and why this does not replace redundancy on disk-level.

hdsize

Defines the total hard disk size to be used. This is useful to save free space on the hard disk(s) for further partitioning (for example to create a swap-partition). hdsize is only honored for bootable disks, that is only the first disk or mirror for RAID0, RAID1 or RAID10, and all disks in RAID-Z[123].

ZFS Performance Tips

ZFS works best with a lot of memory. If you intend to use ZFS, make sure to have enough RAM available for it. A good rule of thumb is 4 GB plus 1 GB of RAM for each TB of raw disk space.
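The rule of thumb can be written out for a hypothetical pool size (the pool size here is an example, not a value from this document):

```shell
# Rule of thumb: base 4 GB plus 1 GB of RAM per TB of raw disk capacity.
raw_tb=8                      # hypothetical pool: 8 TB of raw disk space
ram_gb=$(( 4 + raw_tb ))
echo "recommended RAM: ${ram_gb} GB"
```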

ZFS can use a dedicated drive as write cache, called the ZFS Intent Log (ZIL). Use a fast drive (SSD) for it. It can be added after installation with the following command:

# zpool add <pool-name> log </dev/path_to_fast_ssd>

Video Tutorials