Host Bootloader

Proxmox VE currently uses one of two bootloaders depending on the disk setup selected in the installer.

For EFI systems installed with ZFS as the root filesystem, systemd-boot is used. All other deployments use the standard grub bootloader (this usually also applies to systems installed on top of Debian).
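
If you are unsure which bootloader a given installation uses, a quick check (a generic approach, not part of any Proxmox VE tooling) is whether the system booted in EFI mode and whether the root filesystem is ZFS:

ls -d /sys/firmware/efi    # this directory only exists when booted in EFI mode
findmnt -n -o FSTYPE /     # prints "zfs" for a ZFS root filesystem

If both checks match (EFI mode and ZFS root), the installation uses systemd-boot as described above; otherwise it uses grub.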

Partitioning scheme used by the installer

The Proxmox VE installer creates 3 partitions on the bootable disks selected for installation. The bootable disks are:

  • For installations with ext4 or xfs, the selected disk

  • For ZFS installations, all disks belonging to the first vdev:

    • The first disk for RAID0

    • All disks for RAID1, RAIDZ1, RAIDZ2, RAIDZ3

    • The first two disks for RAID10

The created partitions are:

  • a 1 MB BIOS Boot Partition (gdisk type EF02)

  • a 512 MB EFI System Partition (ESP, gdisk type EF00)

  • a third partition, used for the chosen storage type, spanning either the space set by the hdsize parameter or the remaining disk space

grub in BIOS mode (--target i386-pc) is installed onto the BIOS Boot Partition of all bootable disks for supporting older systems.
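
As an illustration, the resulting partition table of a bootable disk can be inspected with standard tools; the device /dev/sda is only an example. sgdisk -p prints the partition table including the type codes mentioned above, while lsblk shows the corresponding GPT type GUIDs:

sgdisk -p /dev/sda
lsblk -o NAME,SIZE,PARTTYPE /dev/sda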

Grub

grub has been the de-facto standard for booting Linux systems for many years and is quite well documented (Grub Manual: https://www.gnu.org/software/grub/manual/grub/grub.html).

The kernel and initrd images are taken from /boot, and the grub configuration file /boot/grub/grub.cfg is updated by the kernel installation process.

Configuration

Changes to the grub configuration are done via the defaults file /etc/default/grub or via config snippets in /etc/default/grub.d. To regenerate /boot/grub/grub.cfg after a change to the configuration, run:

update-grub
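
As a minimal sketch (the snippet name grub-timeout.cfg and the GRUB_TIMEOUT value are only examples), a change via a config snippet could look like this:

echo 'GRUB_TIMEOUT=5' > /etc/default/grub.d/grub-timeout.cfg
update-grub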

Systemd-boot

systemd-boot is a lightweight EFI bootloader. It reads the kernel and initrd images directly from the EFI System Partition (ESP) where it is installed. The main advantage of loading the kernel directly from the ESP is that it does not need to reimplement the drivers for accessing the storage. With ZFS as the root filesystem this means that you can use all optional features on your root pool, instead of being limited to the subset also present in the ZFS implementation in grub, or having to create a separate small boot pool (Booting ZFS on root with grub: https://github.com/zfsonlinux/zfs/wiki/Debian-Stretch-Root-on-ZFS).

In setups with redundancy (RAID1, RAID10, RAIDZ*) all bootable disks (those being part of the first vdev) are partitioned with an ESP. This ensures the system boots even if the first boot device fails. The ESPs are kept in sync by a kernel postinstall hook script, /etc/kernel/postinst.d/zz-pve-efiboot. The script copies the kernel and initrd images of certain kernel versions to EFI/proxmox/ on the root of each ESP and creates the appropriate config files in loader/entries/proxmox-*.conf. The pve-efiboot-tool script assists in managing both the synced ESPs themselves and their contents.
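
As an illustration, the relevant contents of a synced ESP then look roughly like this (the kernel version is only an example; paths are relative to the root of the ESP):

loader/loader.conf
loader/entries/proxmox-5.0.15-1-pve.conf
EFI/proxmox/5.0.15-1-pve/vmlinuz-5.0.15-1-pve
EFI/proxmox/5.0.15-1-pve/initrd.img-5.0.15-1-pve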

The following kernel versions are configured by default:

  • the currently running kernel

  • the version being newly installed on package updates

  • the two latest already installed kernels

  • the latest version of the second-to-last kernel series (e.g. 4.15, 5.0), if applicable

  • any manually selected kernels (see below)

The ESPs are not kept mounted during regular operation, in contrast to grub, which keeps an ESP mounted on /boot/efi. This helps to prevent filesystem corruption to the vfat formatted ESPs in case of a system crash, and removes the need to manually adapt /etc/fstab in case the primary boot device fails.

Configuration

systemd-boot is configured via the file loader/loader.conf in the root directory of an EFI System Partition (ESP). See the loader.conf(5) manpage for details.
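
A minimal loader.conf could look like this (the values are only examples; loader.conf(5) documents all available options):

timeout 3
default proxmox-*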

Each bootloader entry is placed in a file of its own in the directory loader/entries/.

An example entry.conf looks like this (/ refers to the root of the ESP):

title    Proxmox
version  5.0.15-1-pve
options  root=ZFS=rpool/ROOT/pve-1 boot=zfs
linux    /EFI/proxmox/5.0.15-1-pve/vmlinuz-5.0.15-1-pve
initrd   /EFI/proxmox/5.0.15-1-pve/initrd.img-5.0.15-1-pve

Manually keeping a kernel bootable

Should you wish to add a certain kernel and initrd image to the list of bootable kernels, use pve-efiboot-tool kernel add.

For example run the following to add the kernel with ABI version 5.0.15-1-pve to the list of kernels to keep installed and synced to all ESPs:

pve-efiboot-tool kernel add 5.0.15-1-pve

pve-efiboot-tool kernel list will list all kernel versions currently selected for booting:

# pve-efiboot-tool kernel list
Manually selected kernels:
5.0.15-1-pve

Automatically selected kernels:
5.0.12-1-pve
4.15.18-18-pve

Run pve-efiboot-tool kernel remove to remove a kernel from the list of manually selected kernels, for example:

pve-efiboot-tool kernel remove 5.0.15-1-pve

Note: It is required to run pve-efiboot-tool refresh to update all EFI System Partitions (ESPs) after a manual kernel addition or removal as described above.

Setting up a new partition for use as synced ESP

To format and initialize a partition as synced ESP, e.g., after replacing a failed vdev in an rpool, or when converting an existing system that pre-dates the sync mechanism, pve-efiboot-tool from pve-kernel-helpers can be used.

Warning: the format command will format the <partition>; make sure to pass in the right device/partition!

For example, to format an empty partition /dev/sda2 as ESP, run the following:

pve-efiboot-tool format /dev/sda2

To set up an existing, unmounted ESP located on /dev/sda2 for inclusion in Proxmox VE’s kernel update synchronization mechanism, use the following:

pve-efiboot-tool init /dev/sda2

Afterwards /etc/kernel/pve-efiboot-uuids should contain a new line with the UUID of the newly added partition. The init command will also automatically trigger a refresh of all configured ESPs.
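
To check which ESPs are currently part of the synchronization mechanism, simply print that file:

cat /etc/kernel/pve-efiboot-uuids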

Updating the configuration on all ESPs

To copy and configure all bootable kernels and keep all ESPs listed in /etc/kernel/pve-efiboot-uuids in sync, you just need to run:

 pve-efiboot-tool refresh

(This is the equivalent of running update-grub on systems booted with grub.)

This is necessary should you make changes to the kernel commandline, or want to sync all kernels and initrds after regenerating the latter.

Editing the kernel commandline

You can modify the kernel commandline in the following places, depending on the bootloader used:

Grub

The kernel commandline needs to be placed in the variable GRUB_CMDLINE_LINUX_DEFAULT in the file /etc/default/grub. Running update-grub appends its content to all linux entries in /boot/grub/grub.cfg.
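
As a minimal sketch, after setting for example

GRUB_CMDLINE_LINUX_DEFAULT="quiet"

in /etc/default/grub (the quiet option is purely illustrative), regenerate the config with:

update-grub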

Systemd-boot

The kernel commandline needs to be placed as a single line in /etc/kernel/cmdline. Running /etc/kernel/postinst.d/zz-pve-efiboot sets it as the option line for all config files in loader/entries/proxmox-*.conf.
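
As a minimal sketch, reusing the options from the entry example above (the added quiet option is purely illustrative):

echo "root=ZFS=rpool/ROOT/pve-1 boot=zfs quiet" > /etc/kernel/cmdline
pve-efiboot-tool refresh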