ZFS on Linux

ZFS is a combined file system and logical volume manager designed by Sun Microsystems. Starting with Proxmox VE 3.4, the native Linux kernel port of the ZFS file system is introduced as an optional file system and also as an additional selection for the root file system. There is no need to manually compile ZFS modules - all packages are included.

By using ZFS, it’s possible to achieve maximum enterprise features with low budget hardware, but also to build high performance systems by leveraging SSD caching or even SSD-only setups. ZFS can replace costly hardware RAID cards at the price of moderate CPU and memory load, combined with easy management.

General ZFS advantages
  • Easy configuration and management with Proxmox VE GUI and CLI.

  • Reliable

  • Protection against data corruption

  • Data compression on file system level

  • Snapshots

  • Copy-on-write clone

  • Various raid levels: RAID0, RAID1, RAID10, RAIDZ-1, RAIDZ-2 and RAIDZ-3

  • Can use SSD for cache

  • Self healing

  • Continuous integrity checking

  • Designed for high storage capacities

  • Asynchronous replication over network

  • Open Source

  • Encryption

Hardware

ZFS depends heavily on memory, so you need at least 8GB to start. In practice, use as much as you can get for your hardware/budget. To prevent data corruption, we recommend the use of high quality ECC RAM.

If you use a dedicated cache and/or log disk, you should use an enterprise class SSD (e.g. Intel SSD DC S3700 Series). This can increase the overall performance significantly.

Important Do not use ZFS on top of a hardware RAID controller which has its own cache management. ZFS needs to communicate directly with the disks. An HBA adapter is the way to go, or something like an LSI controller flashed in “IT” mode.

If you are experimenting with an installation of Proxmox VE inside a VM (Nested Virtualization), don’t use virtio for the disks of that VM, since virtio disks are not supported by ZFS. Use IDE or SCSI instead (this also works with the virtio SCSI controller type).
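
For example, such a nested test VM could be given a SCSI disk on the virtio SCSI controller from the command line (the VM ID, storage name and disk size are placeholders, adjust them to your setup):

# qm set <vmid> --scsihw virtio-scsi-pci
# qm set <vmid> --scsi0 <storage>:<size_in_GB>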

Installation as Root File System

When you install using the Proxmox VE installer, you can choose ZFS for the root file system. You need to select the RAID type at installation time:

RAID0

Also called “striping”. The capacity of such a volume is the sum of the capacities of all disks. But RAID0 does not add any redundancy, so the failure of a single drive makes the volume unusable.

RAID1

Also called “mirroring”. Data is written identically to all disks. This mode requires at least 2 disks with the same size. The resulting capacity is that of a single disk.

RAID10

A combination of RAID0 and RAID1. Requires at least 4 disks.

RAIDZ-1

A variation on RAID-5, single parity. Requires at least 3 disks.

RAIDZ-2

A variation on RAID-5, double parity. Requires at least 4 disks.

RAIDZ-3

A variation on RAID-5, triple parity. Requires at least 5 disks.

The installer automatically partitions the disks, creates a ZFS pool called rpool, and installs the root file system on the ZFS subvolume rpool/ROOT/pve-1.

Another subvolume called rpool/data is created to store VM images. In order to use that with the Proxmox VE tools, the installer creates the following configuration entry in /etc/pve/storage.cfg:

zfspool: local-zfs
        pool rpool/data
        sparse
        content images,rootdir
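
To verify that the new storage is known to Proxmox VE, you can list all configured storages and their status:

# pvesm status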

After installation, you can view your ZFS pool status using the zpool command:

# zpool status
  pool: rpool
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            sda2    ONLINE       0     0     0
            sdb2    ONLINE       0     0     0
          mirror-1  ONLINE       0     0     0
            sdc     ONLINE       0     0     0
            sdd     ONLINE       0     0     0

errors: No known data errors

The zfs command is used to configure and manage your ZFS file systems. The following command lists all file systems after installation:

# zfs list
NAME               USED  AVAIL  REFER  MOUNTPOINT
rpool             4.94G  7.68T    96K  /rpool
rpool/ROOT         702M  7.68T    96K  /rpool/ROOT
rpool/ROOT/pve-1   702M  7.68T   702M  /
rpool/data          96K  7.68T    96K  /rpool/data
rpool/swap        4.25G  7.69T    64K  -

Bootloader

Depending on whether the system is booted in EFI or legacy BIOS mode, the Proxmox VE installer sets up either grub or systemd-boot as the main bootloader. See the chapter on Proxmox VE host bootloaders for details.

ZFS Administration

This section gives you some usage examples for common tasks. ZFS itself is really powerful and provides many options. The main commands to manage ZFS are zfs and zpool. Both commands come with great manual pages, which can be read with:

# man zpool
# man zfs
Create a new zpool

To create a new pool, at least one disk is needed. The ashift value should match the sector size of the underlying disk (2 to the power of ashift is the sector size in bytes), or be larger.

zpool create -f -o ashift=12 <pool> <device>
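
For example, disks with 4096 byte (4K) sectors need ashift=12, since 2^12 = 4096. You can check which sector sizes a disk reports before choosing a value:

lsblk -o NAME,PHY-SEC,LOG-SEC <device>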

To activate compression

zfs set compression=lz4 <pool>
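
To see how well already written data compresses, you can query the compressratio property of the pool:

zfs get compressratio <pool>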
Create a new pool with RAID-0

Minimum 1 Disk

zpool create -f -o ashift=12 <pool> <device1> <device2>
Create a new pool with RAID-1

Minimum 2 Disks

zpool create -f -o ashift=12 <pool> mirror <device1> <device2>
Create a new pool with RAID-10

Minimum 4 Disks

zpool create -f -o ashift=12 <pool> mirror <device1> <device2> mirror <device3> <device4>
Create a new pool with RAIDZ-1

Minimum 3 Disks

zpool create -f -o ashift=12 <pool> raidz1 <device1> <device2> <device3>
Create a new pool with RAIDZ-2

Minimum 4 Disks

zpool create -f -o ashift=12 <pool> raidz2 <device1> <device2> <device3> <device4>
Create a new pool with cache (L2ARC)

It is possible to use a dedicated cache drive partition to increase the performance (use SSD).

As <device>, it is possible to use more devices, as shown in "Create a new pool with RAID*".

zpool create -f -o ashift=12 <pool> <device> cache <cache_device>
Create a new pool with log (ZIL)

It is possible to use a dedicated drive partition as a log device to increase the performance (use SSD).

As <device>, it is possible to use more devices, as shown in "Create a new pool with RAID*".

zpool create -f -o ashift=12 <pool> <device> log <log_device>
Add cache and log to an existing pool

If you have a pool without cache and log, first partition the SSD into two partitions with parted or gdisk.

Important Always use GPT partition tables.

The maximum size of a log device should be about half the size of physical memory, so this is usually quite small. The rest of the SSD can be used as cache.
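
As a rough example, on a host with 16 GiB of RAM the SSD could be partitioned like this (the sizes are only an illustration, adjust them to your system):

sgdisk -n1:0:+8G <device>   # partition 1: log device (about half the RAM size)
sgdisk -n2:0:0 <device>     # partition 2: remaining space for cache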

zpool add -f <pool> log <device-part1> cache <device-part2>
Changing a failed device
zpool replace -f <pool> <old device> <new device>
Changing a failed bootable device when using systemd-boot
sgdisk <healthy bootable device> -R <new device>
sgdisk -G <new device>
zpool replace -f <pool> <old zfs partition> <new zfs partition>
pve-efiboot-tool format <new disk's ESP>
pve-efiboot-tool init <new disk's ESP>
Note ESP stands for EFI System Partition, which is set up as partition #2 on bootable disks set up by the Proxmox VE installer since version 5.4. For details, see Setting up a new partition for use as synced ESP.

Activate E-Mail Notification

ZFS comes with an event daemon, which monitors events generated by the ZFS kernel module. The daemon can also send emails on ZFS events like pool errors. Newer ZFS packages ship the daemon in a separate package, and you can install it using apt-get:

# apt-get install zfs-zed

To activate the daemon it is necessary to edit /etc/zfs/zed.d/zed.rc with your favourite editor, and uncomment the ZED_EMAIL_ADDR setting:

ZED_EMAIL_ADDR="root"

Please note that Proxmox VE forwards mails addressed to root to the email address configured for the root user.

Important The only setting that is required is ZED_EMAIL_ADDR. All other settings are optional.
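
For a changed setting to take effect, the daemon has to be restarted; on a standard installation the corresponding service is called zfs-zed:

# systemctl restart zfs-zed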

Limit ZFS Memory Usage

It is good to use at most 50 percent (which is the default) of the system memory for the ZFS ARC, to prevent performance degradation of the host. Use your preferred editor to change the configuration in /etc/modprobe.d/zfs.conf and insert:

options zfs zfs_arc_max=8589934592

This example setting limits the usage to 8GB.
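
The value is given in bytes (8 * 1024^3 = 8589934592). Once active, the limit can be read back from the module parameter, where a value of 0 means that the built-in default of half the system memory is used:

cat /sys/module/zfs/parameters/zfs_arc_max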

Important

If your root file system is ZFS you must update your initramfs every time this value changes:

update-initramfs -u

SWAP on ZFS

Swap space created on a zvol may cause problems, such as blocking the server or generating a high IO load, which is often seen when starting a backup to an external storage.

We strongly recommend using enough memory, so that you normally do not run into low memory situations. Should you need or want to add swap, it is preferred to create a partition on a physical disk and use it as a swap device. You can leave some space free for this purpose in the advanced options of the installer. Additionally, you can lower the “swappiness” value. A good value for servers is 10:

sysctl -w vm.swappiness=10

To make the swappiness persistent, open /etc/sysctl.conf with an editor of your choice and add the following line:

vm.swappiness = 10
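
To apply the settings from /etc/sysctl.conf immediately, without a reboot, you can reload the file:

sysctl -p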
Table 1. Linux kernel swappiness parameter values
Value Strategy

vm.swappiness = 0

The kernel will swap only to avoid an out of memory condition.

vm.swappiness = 1

Minimum amount of swapping without disabling it entirely.

vm.swappiness = 10

This value is sometimes recommended to improve performance when sufficient memory exists in a system.

vm.swappiness = 60

The default value.

vm.swappiness = 100

The kernel will swap aggressively.

Encrypted ZFS Datasets

ZFS on Linux version 0.8.0 introduced support for native encryption of datasets. After an upgrade from previous ZFS on Linux versions, the encryption feature can be enabled per pool:

# zpool get feature@encryption tank
NAME  PROPERTY            VALUE            SOURCE
tank  feature@encryption  disabled         local

# zpool set feature@encryption=enabled tank

# zpool get feature@encryption tank
NAME  PROPERTY            VALUE            SOURCE
tank  feature@encryption  enabled         local
Warning There is currently no support for booting from pools with encrypted datasets using Grub, and only limited support for automatically unlocking encrypted datasets on boot. Older versions of ZFS without encryption support will not be able to decrypt stored data.
Note It is recommended to either unlock storage datasets manually after booting, or to write a custom unit to pass the key material needed for unlocking on boot to zfs load-key.
Warning Establish and test a backup procedure before enabling encryption of production data. If the associated key material/passphrase/keyfile has been lost, accessing the encrypted data is no longer possible.

Encryption needs to be set up when creating datasets/zvols, and is inherited by child datasets by default. For example, to create an encrypted dataset tank/encrypted_data and configure it as storage in Proxmox VE, run the following commands:

# zfs create -o encryption=on -o keyformat=passphrase tank/encrypted_data
Enter passphrase:
Re-enter passphrase:

# pvesm add zfspool encrypted_zfs -pool tank/encrypted_data

All guest volumes/disks created on this storage will be encrypted with the shared key material of the parent dataset.

To actually use the storage, the associated key material needs to be loaded with zfs load-key:

# zfs load-key tank/encrypted_data
Enter passphrase for 'tank/encrypted_data':

It is also possible to use a (random) keyfile instead of prompting for a passphrase by setting the keylocation and keyformat properties, either at creation time or with zfs change-key on existing datasets:

# dd if=/dev/urandom of=/path/to/keyfile bs=32 count=1

# zfs change-key -o keyformat=raw -o keylocation=file:///path/to/keyfile tank/encrypted_data
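
The same properties can also be set directly when creating a new encrypted dataset (pool and dataset names are placeholders):

# zfs create -o encryption=on -o keyformat=raw -o keylocation=file:///path/to/keyfile <pool>/<dataset>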
Warning When using a keyfile, special care needs to be taken to secure the keyfile against unauthorized access or accidental loss. Without the keyfile, it is not possible to access the plaintext data!

A guest volume created underneath an encrypted dataset will have its encryptionroot property set accordingly. The key material only needs to be loaded once per encryptionroot to be available to all encrypted datasets underneath it.
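
For example, to check which encryptionroot a guest volume belongs to and whether its key is currently loaded, you can query the corresponding properties (the volume name is just an illustration):

# zfs get encryptionroot,keystatus tank/encrypted_data/vm-100-disk-0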

See the encryptionroot, encryption, keylocation, keyformat and keystatus properties, the zfs load-key, zfs unload-key and zfs change-key commands and the Encryption section from man zfs for more details and advanced usage.