Native ZFS for Linux on Proxmox 2.X
- 2012-02-25. Check http://zfsonlinux.org/ and see the Documentation and Community Resources there for more information and help.
- 2013-03-17. 0.6.0-rc14 is stable; the official ZFS on Linux stable release 0.6.1 is around the corner.
I had tried 3 other operating systems, including Debian GNU/kFreeBSD, and for each was able to recover all the ZFS file systems from previous installs. Even with device name changes (BSD uses ad11, Linux sdb) no data was lost. Today I set up ZFS on Linux on two Proxmox 2.0 systems using the following steps.
Note (2013-03-17): this procedure worked well for PVE 2.3
# apt key:
apt-key adv --keyserver keyserver.ubuntu.com --recv-keys F6B0FC61
# sources:
echo "deb http://ppa.launchpad.net/zfs-native/stable/ubuntu lucid main" > /etc/apt/sources.list.d/zfs.list
aptitude update
aptitude install dkms pve-headers-$(uname -r)
- this is needed for dkms to work:
ln -s /lib/modules/2.6.32-10-pve/build /lib/modules/2.6.32-10-pve/source
Maybe someone who knows how to write a patch could submit one so this is done automatically when the kernel is upgraded. Otherwise, when the Proxmox kernel is upgraded, the dkms build may fail if this symlink is not re-created (using the new pve-headers).
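Until such a patch exists, something like the following could be run after each kernel upgrade; a minimal sketch (not part of the original procedure) that re-creates the symlink for whatever kernel is currently running:
# re-create the build -> source symlink for the running kernel, if missing
KVER=$(uname -r)
[ -e /lib/modules/$KVER/source ] || ln -s /lib/modules/$KVER/build /lib/modules/$KVER/source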
Then install ZFS:
aptitude install ubuntu-zfs
Finally, make sure that ZFS is started at system boot by editing /etc/default/zfs:
# ZoL userland configuration.
# Run `zfs mount -a` during system start?
# This should be 'no' if zfs-mountall or a systemd generator is available.
ZFS_MOUNT='yes'
Currently, the following needs to be done whenever a kernel upgrade occurs.
- Before rebooting into the new kernel, install the new headers and re-create the symlink. This example is for an upgrade from 2.6.32-10-pve to 2.6.32-11-pve; change 2.6.32-11-pve if your version differs.
aptitude install pve-headers-2.6.32-11-pve
ln -s /lib/modules/2.6.32-11-pve/build /lib/modules/2.6.32-11-pve/source
- this will force the modules to be built (there is probably a better way to do this, so correct this if you know how to make dkms do the build).
aptitude reinstall spl-dkms zfs-dkms
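A possibly cleaner alternative is driving dkms directly instead of reinstalling the packages; a sketch, where the 0.6.1 version numbers are placeholders (run dkms status to see the versions actually installed):
dkms status
dkms install -m spl -v 0.6.1 -k 2.6.32-11-pve
dkms install -m zfs -v 0.6.1 -k 2.6.32-11-pve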
- install these
aptitude install build-essential gawk alien fakeroot zlib1g-dev uuid uuid-dev libssl-dev parted pve-headers-$(uname -r)
- get, make, and install spl:
cd /usr/src
git clone https://github.com/zfsonlinux/spl.git
cd spl
./configure
make deb
dpkg -i *.deb
modprobe spl
- get, make, and install zfs:
cd /usr/src
git clone https://github.com/zfsonlinux/zfs.git
cd zfs
./configure
make deb
dpkg -i *.deb
- make sure the module can load:
modprobe zfs
- installing the debs does not enable the boot script, so do it manually:
update-rc.d zfs defaults
- note: the deb files made above can be used to install ZFS on another system.
- if there is a pre-existing ZFS pool, simply import it to recover it, as sketched below. This can happen if you re-install Proxmox and have ZFS disks from a previous install.
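The import command itself is not shown above; a minimal sketch, assuming the old pool was named tank:
zpool import          # list pools found on the attached disks
zpool import -f tank  # -f is needed if the pool was never exported cleanly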
- If there is a need or want, I could set up a repository with pre-built debs. Just send an email.
Creating Pools and Filesystems
Use raidz2 or mirror on main data systems.
Create a raidz pool named “tank”
zpool create -f -o ashift=12 tank raidz /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh
- Note: the following happened with 2 or 3 disks the first time; these were disks from a 3ware RAID.
zpool create -f -o ashift=12 tank raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh
invalid vdev specification
the following errors must be manually repaired:
cannot stat /dev/sdg2: No such file or directory
I had to use fdisk to add partitions 1, 2, etc. to those disks to fix this. I forget whether I left the partitions in place afterwards, but adding them got past the error.
NOTE: This is quite risky if you intend to deploy ZFS on a server. Any change to the operating system's device layout will break your pool. Don't create a pool using device names like /dev/sda; for persistence, use the devices under /dev/disk/by-id. - Bryan Baldwin
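A sketch of the same raidz2 create using persistent names; scsi-DISK1 through scsi-DISK8 are placeholders for the real IDs shown by ls -l /dev/disk/by-id/:
zpool create -f -o ashift=12 tank raidz2 \
    /dev/disk/by-id/scsi-DISK1 /dev/disk/by-id/scsi-DISK2 \
    /dev/disk/by-id/scsi-DISK3 /dev/disk/by-id/scsi-DISK4 \
    /dev/disk/by-id/scsi-DISK5 /dev/disk/by-id/scsi-DISK6 \
    /dev/disk/by-id/scsi-DISK7 /dev/disk/by-id/scsi-DISK8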
Create directories and mount points
zfs create tank/data
zfs set mountpoint=/data tank/data
- mount all
zfs mount -a
To start over:
zpool destroy zfs-pool
- Automatically NFS-export all home directories (I do not know how to use ZFS NFS sharing yet, or whether it works yet on Linux):
zfs set sharenfs=rw tank/home
- check sharenfs
zfs get sharenfs tank/home
NAME       PROPERTY  VALUE  SOURCE
tank/home  sharenfs  rw     local
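Note that on Linux the sharenfs property relies on the kernel NFS server being installed (nfs-kernel-server on Debian). A client would then mount the share roughly like this; the server address is a placeholder and /tank/home assumes the default mountpoint:
mount -t nfs 192.168.1.10:/tank/home /mnt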
- Turn on compression for everything in the pool
zfs set compression=on tank
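You can later check what the setting achieves; the compressratio property reports the ratio actually reached:
zfs get compressratio tank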
See this thread on the Proxmox forum, per user Nemesiz:
zfs set primarycache=all tank
- kvm config:
- change cache to Write Back
- You can do it using the web GUI or by editing the VM config manually, as sketched below.
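A hypothetical manual edit of /etc/pve/qemu-server/4016.conf; the storage name pve-storage and the volume name are guessed from the error message below, so adjust them to your setup:
virtio1: pve-storage:4016/vm-4016-disk-1.raw,cache=writeback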
- if not set, this happened:
qm start 4016
kvm: -drive file=/data/pve-storage/images/4016/vm-4016-disk-1.raw,if=none,id=drive-virtio1,aio=native,cache=none: could not open disk image /data/pve-storage/images/4016/vm-4016-disk-1.raw: Invalid argument
Install on a high performance system
As of 2013, high performance servers have 16-64 cores, 256GB-1TB RAM and potentially many 2.5" disks and/or a PCIe-based SSD with half a million IOPS. High performance systems benefit from a number of custom settings; for example, enabling compression typically improves performance.
- If you have a good number of disks, keep organized by using aliases. Edit /etc/zfs/vdev_id.conf to define aliases for the disk devices found in /dev/disk/by-id/:
# run 'udevadm trigger' after updating this file
alias a0 scsi-36848f690e856b10018cdf39854055206
alias b0 scsi-36848f690e856b10018cdf3ce573fdeb6
alias a1 scsi-36848f690e856b10018cdf40f5b277cbc
alias b1 scsi-36848f690e856b10018cdf43a5db1b99b
alias a2 scsi-36848f690e856b10018cdf4575f652ad0
alias b2 scsi-36848f690e856b10018cdf47761587cec
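After editing the file, trigger udev and check that the alias links appeared; they should show up under /dev/disk/by-vdev/:
udevadm trigger
ls -l /dev/disk/by-vdev/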
Use flash for caching/logs. If you have only one SSD, use cfdisk to create a small partition for the ZIL (ZFS intent log) and a larger one for the L2ARC (ZFS read cache on disk). Make sure that the ZIL is on the first partition. In our case we have an Express Flash PCIe SSD with 175GB capacity and set up a 25GB ZIL partition and a 150GB L2ARC cache partition.
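The pool creation command further below attaches the log and cache at creation time; on an existing pool they can be added afterwards instead. A sketch using the partition names from this example:
zpool add tank log /dev/rssda1
zpool add tank cache /dev/rssda2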
- edit /etc/modprobe.d/zfs.conf to apply several tuning options for high performance servers:
# ZFS tuning for a proxmox machine that reserves 64GB for ZFS
#
# Don't let ZFS use less than 4GB and more than 64GB
options zfs zfs_arc_min=4294967296
options zfs zfs_arc_max=68719476736
#
# disabling prefetch is no longer required
options zfs l2arc_noprefetch=0
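After a reboot (or after reloading the zfs module) the effective ARC limits can be verified at runtime:
grep -E '^c_min|^c_max' /proc/spl/kstat/zfs/arcstats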
- create a zpool of striped mirrors (equivalent to RAID10) with log device and cache and always enable compression:
zpool create -o compression=on -f tank mirror a0 b0 mirror a1 b1 mirror a2 b2 log /dev/rssda1 cache /dev/rssda2
- check the status of the newly created pool:
root@proxmox:/# zpool status
  pool: tank
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            a0      ONLINE       0     0     0
            b0      ONLINE       0     0     0
          mirror-1  ONLINE       0     0     0
            a1      ONLINE       0     0     0
            b1      ONLINE       0     0     0
          mirror-2  ONLINE       0     0     0
            a2      ONLINE       0     0     0
            b2      ONLINE       0     0     0
        logs
          rssda1    ONLINE       0     0     0
        cache
          rssda2    ONLINE       0     0     0

errors: No known data errors
Using PVE 2.3 on a 2013 high performance system with ZFS you can install Windows Server 2012 Datacenter Edition with GUI in just under 4 minutes.
These well-written manual pages contain some very important information to know before implementing ZFS on a production system:
man zfs
man zpool
How to install ZFS-fuse under Proxmox
(Tested with Kernel 2.6.32)
apt-get install build-essential libaio-dev libattr1-dev libacl1-dev libz-dev libfuse-dev libfuse2 scons libssl-dev
wget -nd http://zfs-fuse.net/releases/0.6.0/zfs-fuse-0.6.0.tar.bz2
tar jxvf zfs-fuse-0.6.0.tar.bz2
cd zfs-fuse-0.6.0
cd src
scons
scons install
Note: zfs-fuse-0.6.0 is now (Dec 2012) deprecated. Use zfs-fuse-0.7.0.tar.bz2 instead.
Fire up the ZFS-fuse daemon:
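The start command is not given here; a minimal sketch, assuming scons install placed the binary on the default PATH (zfs-fuse should fork into the background on its own):
zfs-fuse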
zpool create -m /var/lib/vz/images2 images2 /dev/sdb
zfs create images2/109
ZFS-clone VM 109 to VM 110:
zfs snapshot images2/109@master
zfs clone images2/109@master images2/110
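If VM 110 should later stand on its own, the clone can be promoted; this reverses its dependency on the @master snapshot so the original could eventually be destroyed:
zfs promote images2/110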
Debian GNU/kFreeBSD
As of 2012-02 the Squeeze version does not have NFS support, so AFAIK it cannot be used for Proxmox storage. Once Debian/kFreeBSD gets NFS support I'd use this for storage of ISOs and backups to start. In the meantime it works as a Samba server.
ZFS works much better under Debian GNU/kFreeBSD than under FUSE.
After installing the system to KVM or hardware:
- add some disks. In KVM I was only able to get IDE to work.
- the disks look like:
ls /dev/ad*
/dev/ad10  /dev/ad10s1  /dev/ad10s2  /dev/ad11  /dev/ad12  /dev/ad13
- per http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide (if I read it right), use entire disks, not slices, for ZFS (I assume slices = partitions).
- if the disks have partition tables, clear them (just not on the operating system disk!). I used fdisk to clear the tables. This may not be necessary; if it is not, please delete this part.
fdisk /dev/ad11
o
w
Then do the same for ad12 and ad13.
- install this
aptitude install zfsutils
- make a pool
zpool create -f zfs-pool /dev/ad11 /dev/ad12 /dev/ad13
df | grep zfs-pool
zfs-pool  zfs  1.4T  18K  1.4T  1%  /zfs-pool