Storage: ZFS
Introduction
ZFS is a combined file system and logical volume manager designed by Sun Microsystems. Starting with Proxmox VE 3.4, the native Linux kernel port of the ZFS file system is introduced as an optional file system and also as an additional selection for the root file system. There is no need to compile ZFS manually; all packages are included (for both kernel branches, 2.6.32 and 3.10).
By using ZFS it is possible to achieve enterprise-grade features with low-budget hardware, but also to build high-performance systems by leveraging SSD caching or even SSD-only setups. ZFS can replace expensive hardware RAID cards with moderate CPU and memory load, combined with easy management.
In the first release, there are two ways to use ZFS on Proxmox VE:
- as a local directory, supporting all storage content types (instead of ext3 or ext4)
- as zvol block storage, currently supporting KVM images in raw format (new ZFS storage plugin); the advantage of zvols is the snapshot capability at the file-system level (fast)
This article describes how to use ZFS on Proxmox VE.
General ZFS advantages
- Easy configuration and management with Proxmox VE GUI and CLI.
- Reliable
- Protection against data corruption
- Data compression on file-system level
- Snapshots
- Copy-on-write clone
- Various RAID levels: RAID0, RAID1, RAID10, RAIDZ-1, RAIDZ-2 and RAIDZ-3
- Can use SSD for cache
- Self healing
- Continuous integrity checking
- Designed for high storage capacities
- Asynchronous replication over the network
- Open Source
- Encryption
- ...
Hardware
ZFS depends heavily on memory, so you need at least 4GB to start. In practice, use as much as you can get for your hardware/budget. To prevent data corruption, the use of high quality ECC RAM is strongly recommended.
If you use a dedicated cache and/or log disk, you should use an enterprise-class SSD (e.g. Intel SSD DC S3700 Series). This can increase the overall performance quite significantly.
If you are experimenting with an installation of Proxmox inside a VM (Nested_Virtualization), don't use VirtIO for the disks of that VM, since they are not supported by ZFS; use IDE or SCSI instead (a VirtIO-type controller is fine).
IMPORTANT: Do not use ZFS on top of a hardware controller which has its own cache management. ZFS needs to communicate directly with the disks; an HBA adapter is the way to go (or something like an LSI controller flashed to 'IT' mode).
Installation as root file-system
When you install with a Proxmox VE installer of version 3.4 or later, you can choose which file system you prefer.
It is not possible to use ZFS as the rpool (root partition) with UEFI boot.
Administration
Create a new ZPool
To create a zpool, at least one disk is needed. The ashift should be chosen so that the resulting sector size (2 to the power of ashift) is the same as or larger than that of the underlying disk.
zpool create -f -o ashift=12 <pool-name> <device>
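Since 2^12 = 4096, ashift=12 suits disks with 4 KiB sectors. To look up the physical sector size of a disk beforehand (the device name here is just an example):
blockdev --getpbsz /dev/sda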
To activate compression:
zfs set compression=lz4 <pool-name>
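You can verify the setting, and later the achieved compression ratio, via the standard ZFS properties:
zfs get compression,compressratio <pool-name>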
Create a new pool with RAID-0
Minimum 1 Disk
zpool create -f -o ashift=12 <pool-name> <device1> <device2>
Create a new pool with RAID-1
Minimum 2 Disks
zpool create -f -o ashift=12 <pool-name> mirror <device1> <device2>
Create a new pool with RAID-10
Minimum 4 Disks
zpool create -f -o ashift=12 <pool-name> mirror <device1> <device2> mirror <device3> <device4>
Create a new pool with RAIDZ-1
Minimum 3 Disks
zpool create -f -o ashift=12 <pool-name> raidz1 <device1> <device2> <device3>
Create a new pool with RAIDZ-2
Minimum 4 Disks
zpool create -f -o ashift=12 <pool-name> raidz2 <device1> <device2> <device3> <device4>
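Whichever layout you choose, you can check the newly created pool afterwards with:
zpool status <pool-name>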
Create a new pool with Cache (L2ARC)
It is possible to use a dedicated cache drive partition to increase the performance (use SSD).
As <device> it is possible to use more devices, as shown in "Create a new pool with RAID*".
zpool create -f -o ashift=12 <pool-name> <device> cache <cache_device>
Create a new pool with Log (ZIL)
It is possible to use a dedicated drive or partition as log device to increase the performance (use SSD).
As <device> it is possible to use more devices, as shown in "Create a new pool with RAID*".
zpool create -f -o ashift=12 <pool-name> <device> log <log_device>
Create a new pool with Cache and Log on one Disk
It is possible to put the ZIL and L2ARC on one SSD. First partition the SSD into 2 partitions with parted or gdisk (important: use a GPT partition table).
As <device> it is possible to use more devices, as shown in "Create a new pool with RAID*". Important: identify the devices with /dev/disk/by-id/scsi-*<device>.
zpool create -f -o ashift=12 <pool-name> <device> cache <ssd.part1> log <ssd.part2>
Activate Email notification
ZFS on Linux >= 0.6.4 has an email notification daemon (ZED). The daemon sends an email on ZFS events like pool errors.
To activate the daemon it is necessary to edit /etc/zfs/zed.d/zed.rc with your favorite editor.
Important: the only setting that is required is ZED_EMAIL; all others are optional.
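A minimal edit could look like this (the address is just a placeholder):
ZED_EMAIL="root@example.com"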
Add Cache and Log to existing pool
If you have a pool without cache and log, first partition the SSD into 2 partitions with parted or gdisk (important: use a GPT partition table).
Important: identify device with /dev/disk/by-id/scsi-*<device>
zpool add -f <pool-name> cache <device1.part1> log <device1.part2>
Example: Proxmox with zfs as rootFS
zpool add -f rpool cache /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi2-part1 \
  log /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi2-part2
Changing a failed Device
zpool replace -f <pool-name> <old device> <new-device>
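In practice, first identify the failed device in the zpool status output, then replace it using the stable /dev/disk/by-id/ names (the placeholders below stand for the actual device IDs), and run zpool status again to watch the resilver:
zpool status <pool-name>
zpool replace -f <pool-name> /dev/disk/by-id/<old-device-id> /dev/disk/by-id/<new-device-id>
zpool status <pool-name>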
Using ZFS Storage Plugin (via Proxmox VE GUI or shell)
Once the zpool is created, you can use it via the Proxmox VE GUI and CLI.
Adding a ZFS storage via GUI
Go to Datacenter/Storage and add your zpool via the ZFS storage plugin (select ZFS).
- ID is for identification of the storage
- the ZFS Pool selection shows all existing pools (use the CLI to create more)
- Thin provisioning: do not allocate all space immediately when creating virtual disks
Adding a ZFS storage via CLI
To create it via the CLI, use:
pvesm add zfspool <storage-ID> -pool <pool-name>
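Afterwards you can verify that the storage was added with:
pvesm status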
Adding ZFS root file-system as storage with Plugin
If you install Proxmox with ZFS and then create qcow2 VMs, you will have a "copy on write" disk image (qcow2) that writes on a "copy on write" file system (ZFS), which is not ideal for performance. To avoid this, thanks to the flexibility of ZFS, you can create an additional file system on the default storage pool and, thanks to the ZFS plugin, use it as storage, so that VM disks are created as volumes (block devices) on top of ZFS. To do this, create a new file system, which we will call "zfsdisks" in this example, from the shell:
zfs create rpool/zfsdisks
Now add it to the storage (Datacenter -> [Storage] -> Add, choose "ZFS"); as ID let's call it, for example, "zfsvols"; as "ZFS Pool" choose "rpool/zfsdisks"; enable "Thin provisioning" and you are done. When you create a VM, choose "zfsvols" as storage.
cat /etc/pve/storage.cfg
zfspool: zfsvols
        pool rpool/zfsdisks
        content images,rootdir
        sparse
If you want to add the storage from the shell, you will get the same result (even if /etc/pve/storage.cfg does not exist yet) with:
pvesm add zfspool zfsvols -pool rpool/zfsdisks -content images,rootdir -sparse
With a VM 100 that has one disk, you will see something like this:
# zfs list
NAME                           USED  AVAIL  REFER  MOUNTPOINT
rpool                         4.78G  26.0G    96K  /rpool
rpool/ROOT                     676M  26.0G    96K  /rpool/ROOT
rpool/ROOT/pve-1               676M  26.0G   676M  /
rpool/swap                    4.12G  30.1G    64K  -
rpool/zfsdisks                 160K  26.0G    96K  /rpool/zfsdisks
rpool/zfsdisks/vm-100-disk-1    64K  26.0G    64K  -
So you can see that it is a block device and that thin provisioning is in use (USED 64K).
Limit ZFS memory usage
It is good to use at most 50 percent of the system memory (which is the default) for the ZFS ARC, to prevent performance shortage on the host.
Use your preferred editor to change the config in /etc/modprobe.d/zfs.conf and insert:
options zfs zfs_arc_max=4294967296
This example setting limits the usage to 4GB.
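The value is given in bytes; 4GB = 4 * 1024^3 = 4294967296 bytes, which a shell can compute for you:
echo $((4 * 1024**3))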
IMPORTANT: If your root fs is ZFS you must update your initramfs every time this value changes.
update-initramfs -u
If your server has enough memory (i.e. you do not overcommit memory) and you have a SWAP partition on ZFS, lower the swappiness value to avoid problems and hang-ups; a good value for servers is 10:
sysctl -w vm.swappiness=10
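To make the value persistent across reboots (plain sysctl mechanics, nothing ZFS-specific), add it to /etc/sysctl.conf:
echo "vm.swappiness = 10" >> /etc/sysctl.conf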
Misc
QEMU tuning
See this thread on the Proxmox forum, per user Nemesiz:
- pool:
zfs set primarycache=all tank
- kvm config:
- change cache to Write Back
- You can do it using web GUI or manually. Example:
ide0: data_zfs:100/vm-100-disk-1.raw,cache=writeback
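As a sketch, the same change could also be made from the shell with qm, reusing the VM ID and volume name from the example above:
qm set 100 -ide0 data_zfs:100/vm-100-disk-1.raw,cache=writeback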
If cache is not set to Write Back, this can happen:
qm start 4016
kvm: -drive file=/data/pve-storage/images/4016/vm-4016-disk-1.raw,if=none,id=drive-virtio1,aio=native,cache=none: could not open disk image /data/pve-storage/images/4016/vm-4016-disk-1.raw: Invalid argument
Example configurations for running Proxmox VE with ZFS
Install on a high performance system
As of 2013, high performance servers have 16-64 cores, 256GB-1TB RAM and potentially many 2.5" disks and/or a PCIe based SSD with half a million IOPS. High performance systems benefit from a number of custom settings; for example, enabling compression typically improves performance.
- If you have a good number of disks, keep them organized by using aliases. Edit /etc/zfs/vdev_id.conf to prepare aliases for the disk devices found in /dev/disk/by-id/ :
# run 'udevadm trigger' after updating this file
alias a0 scsi-36848f690e856b10018cdf39854055206
alias b0 scsi-36848f690e856b10018cdf3ce573fdeb6
alias a1 scsi-36848f690e856b10018cdf40f5b277cbc
alias b1 scsi-36848f690e856b10018cdf43a5db1b99b
alias a2 scsi-36848f690e856b10018cdf4575f652ad0
alias b2 scsi-36848f690e856b10018cdf47761587cec
- Use flash for caching/logs. If you have only one SSD, use parted or gdisk to create a small partition for the ZIL (ZFS intent log) and a larger one for the L2ARC (ZFS read cache on disk). Make sure that the ZIL is on the first partition. In our case we have an Express Flash PCIe SSD with 175GB capacity and set up a 25GB ZIL and a 150GB L2ARC cache partition.
- edit /etc/modprobe.d/zfs.conf to apply several tuning options for high performance servers:
# ZFS tuning for a proxmox machine that reserves 64GB for ZFS
#
# Don't let ZFS use less than 4GB and more than 64GB
options zfs zfs_arc_min=4294967296
options zfs zfs_arc_max=68719476736
#
# disabling prefetch is no longer required
options zfs l2arc_noprefetch=0
- create a zpool of striped mirrors (equivalent to RAID10) with log device and cache and always enable compression:
zpool create -o compression=on -f tank mirror a0 b0 mirror a1 b1 mirror a2 b2 log /dev/rssda1 cache /dev/rssda2
- check the status of the newly created pool:
root@proxmox:/# zpool status
  pool: tank
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            a0      ONLINE       0     0     0
            b0      ONLINE       0     0     0
          mirror-1  ONLINE       0     0     0
            a1      ONLINE       0     0     0
            b1      ONLINE       0     0     0
          mirror-2  ONLINE       0     0     0
            a2      ONLINE       0     0     0
            b2      ONLINE       0     0     0
        logs
          rssda1    ONLINE       0     0     0
        cache
          rssda2    ONLINE       0     0     0

errors: No known data errors
Using PVE 2.3 on a 2013 high performance system with ZFS you can install Windows Server 2012 Datacenter Edition with GUI in just under 4 minutes.
How to prevent lvm of scanning zvols
This is not strictly necessary, but when using LVM it is good practice because it reduces confusion.
Open the config file /etc/lvm/lvm.conf and insert the following line:
filter = [ "r|/dev/zd*|" ]
Troubleshooting and known issues
ZFS packages are not installed
If you upgraded to 3.4 or later from an older version, the zfsutils package is not installed. You can install it with apt:
apt-get install zfsutils zfs-initramfs
Grub boot ZFS problem
- Symptoms: stuck at boot with a blinking prompt.
- Reason: with a ZFS RAID it can happen that your mainboard does not initialize all your disks correctly and GRUB will wait for all RAID disk members - and fail. This can happen with more than 2 disks in a ZFS RAID configuration; we saw this on some boards with ZFS RAID-0/RAID-10.
ZFS mounting workaround
The default ZFS mount -a script runs too late in the boot process for most system scripts. The following helps to mount ZFS correctly. This is only necessary if you do not use ZFS as the root file system but do use ZFS as an additional directory storage.
2014-01-22: the info below came from this excellent wiki page: http://wiki.complete.org/ConvertingToZFS
- Edit /etc/default/zfs and set ZFS_MOUNT='yes'
- Edit /etc/insserv.conf and, at the end of the $local_fs line, add zfs-mount (without a plus):
#
# All local file-systems are mounted (done during boot phase)
#
$local_fs +mountall +mountall-bootclean +mountoverflowtmp +umountfs zfs-mount
- Edit /etc/init.d/zfs-mount and find three lines near the top, changing them like this:
# Required-Start:
# Required-Stop:
# Default-Start:    S
Note the emptied Required-Start and Required-Stop entries.
- To activate the init.d changes, run:
insserv -v -d zfs-mount
I had an issue with PVE storage on ZFS: PVE would start before ZFS and create directories at the ZFS mount point. To fix that, boot into single user mode and remove the directories (make sure they are empty).
Also see https://github.com/zfsonlinux/pkg-zfs/issues/101
SWAP on ZFS
SWAP on ZFS on Linux may cause trouble, like blocking the server or generating a high IO load, often seen when starting a backup to an external storage. To work around this, follow #Limit ZFS memory usage and especially lower the vm.swappiness value.
Snapshot of LXC on ZFS
If you can't create a snapshot of an LXC container on ZFS and you get the following message:
INFO: rsync: set_acl: sys_acl_set_file(archiv, ACL_TYPE_DEFAULT): Operation not supported (95)
you can run the following commands:
zfs create -o mountpoint=/mnt/vztmp rpool/vztmp
zfs set acltype=posixacl rpool/vztmp
Now set /mnt/vztmp as the tmp directory in your /etc/vzdump.conf.
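In /etc/vzdump.conf the relevant option is tmpdir; the entry would look like this:
tmpdir: /mnt/vztmp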
Glossary
- ZPool is the logical unit built from the underlying disks that ZFS uses.
- ZVol is an emulated block device provided by ZFS.
- ZIL is the ZFS Intent Log.
- ARC is the Adaptive Replacement Cache and is located in RAM.
- L2ARC is the Level 2 Adaptive Replacement Cache and should be on a fast device (like an SSD).
Further readings about ZFS
- http://wiki.illumos.org/download/attachments/1146951/zfs_last.pdf
- http://zfsonlinux.org/faq.html
- http://wiki.complete.org/ConvertingToZFS
- https://www.freebsd.org/doc/handbook/zfs.html (even if written for FreeBSD, of course, I found this doc extremely clear even for less "techie" admins [note by m.ardito])
- https://pthree.org/2012/04/17/install-zfs-on-debian-gnulinux/ (and all other pages linked there); this has some very important information to know before implementing ZFS on a production system.
Very well written manual pages
man zfs
man zpool