Storage: ZFS
Introduction
tbd. (Running ZFS on Proxmox VE is not officially supported.)
Native ZFS for Linux on Proxmox
2013-03-27: With release 0.6.1, ZFS on Linux (ZoL) is now ready for wide-scale deployment on everything from desktops to supercomputers. See the announcement.
Check http://zfsonlinux.org/ and its Documentation and Community Resources sections for more information and help.
using Debian Wheezy packages from zfsonlinux
Important: on each PVE kernel upgrade you will need to manually install the headers, create the symlink and rebuild the modules.
For PVE on Wheezy [3.0+].
2013-05-29 - Debian Packages: DKMS-style packages for Debian are available from the zfsonlinux.org repository. These packages track the latest official upstream tag and are refreshed as new releases become available; see http://zfsonlinux.org/debian.html .
Also check http://pthree.org/2012/04/17/install-zfs-on-debian-gnulinux/
- Make sure the PVE headers are installed. If not:
aptitude install pve-headers-$(uname -r)
- I think the ln symlink is still needed:
ln -s /lib/modules/$(uname -r)/build /lib/modules/$(uname -r)/source
- Then follow the instructions from http://zfsonlinux.org/debian.html . They are copied here, but check the link in case they have changed:
su -
wget http://archive.zfsonlinux.org/debian/pool/main/z/zfsonlinux/zfsonlinux_1%7Ewheezy_all.deb
dpkg -i zfsonlinux_1~wheezy_all.deb
apt-get update
apt-get install debian-zfs
You should see the spl and zfs modules being built... check that they were installed OK:
# ls -l /lib/modules/$(uname -r)/updates/dkms/
total 3128
-rw-r--r-- 1 root root  340944 May 29 10:25 splat.ko
-rw-r--r-- 1 root root  302104 May 29 10:25 spl.ko
-rw-r--r-- 1 root root   13392 May 29 10:27 zavl.ko
-rw-r--r-- 1 root root   71232 May 29 10:27 zcommon.ko
-rw-r--r-- 1 root root 1935120 May 29 10:27 zfs.ko
-rw-r--r-- 1 root root  130408 May 29 10:27 znvpair.ko
-rw-r--r-- 1 root root   40424 May 29 10:27 zpios.ko
-rw-r--r-- 1 root root  330368 May 29 10:27 zunicode.ko
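As an additional check (a minimal sketch; the exact module names and kernel messages vary with the ZoL version), load the module and confirm it registered:

# load the freshly built module and list what is loaded
modprobe zfs
lsmod | grep -E 'zfs|spl'
# the kernel log should mention the ZFS pool version that was loaded
dmesg | grep -i zfs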
TBD:
- Test the above again and update the instructions as necessary.
- Does DKMS auto-build the modules when the PVE kernel is upgraded? For that to happen the headers would need to be installed, and the ln symlink may be needed.
pve kernel upgrade
Currently this needs to be done whenever a kernel upgrade is done.
- Before rebooting:
#
# 1- you need to replace the XX with the numbers from the new kernel version:
#
aptitude install pve-headers-2.6.32-XX-pve
#
# 2- you need to replace the XX with the numbers from the new kernel version:
#
ln -s /lib/modules/2.6.32-XX-pve/build /lib/modules/2.6.32-XX-pve/source
#
# 3- this will force the modules to be rebuilt
#
aptitude reinstall spl-dkms zfs-dkms
#
# 4- make sure the modules are installed OK:
#
ls -l /lib/modules/2.6.32-XX-pve/updates/dkms/
#
# 5- now reboot to use the new kernel
#
Some might want to put that in the motd or save it as a text file under /root/.
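For example (a sketch only; the file name and wording are just suggestions), the steps can be kept as a note under /root/ with a one-line reminder appended to /etc/motd:

# keep the checklist in a note (file name is an example)
cat > /root/zfs-kernel-upgrade.txt <<'EOF'
After every pve-kernel upgrade, BEFORE rebooting:
  aptitude install pve-headers-<new-version>-pve
  ln -s /lib/modules/<new-version>-pve/build /lib/modules/<new-version>-pve/source
  aptitude reinstall spl-dkms zfs-dkms
  ls -l /lib/modules/<new-version>-pve/updates/dkms/
EOF
# and add a one-line reminder to the login banner
echo "ZFS: rebuild spl/zfs dkms modules after every kernel upgrade - see /root/zfs-kernel-upgrade.txt" >> /etc/motd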
kvm tuning
See the thread on the Proxmox forum, per user Nemesiz:
- pool:
zfs set primarycache=all tank
- kvm config:
- Change the cache mode to Write Back.
- You can do it using the web GUI or manually. Example:
ide0: data_zfs:100/vm-100-disk-1.raw,cache=writeback
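The same change can also be made from the command line (a sketch, reusing VMID 100 and the data_zfs storage name from the example above):

# set writeback caching on the ide0 disk of VM 100
qm set 100 --ide0 data_zfs:100/vm-100-disk-1.raw,cache=writeback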
If cache=writeback is not set, this happened:
qm start 4016
kvm: -drive file=/data/pve-storage/images/4016/vm-4016-disk-1.raw,if=none,id=drive-virtio1,aio=native,cache=none: could not open disk image /data/pve-storage/images/4016/vm-4016-disk-1.raw: Invalid argument
Install on a high performance system
As of 2013, high performance servers have 16-64 cores, 256 GB-1 TB RAM and potentially many 2.5" disks and/or a PCIe-based SSD with half a million IOPS. High performance systems benefit from a number of custom settings; for example, enabling compression typically improves performance.
- If you have a good number of disks, keep them organized by using aliases. Edit /etc/zfs/vdev_id.conf to prepare aliases for the disk devices found in /dev/disk/by-id/ :
# run 'udevadm trigger' after updating this file
alias a0  scsi-36848f690e856b10018cdf39854055206
alias b0  scsi-36848f690e856b10018cdf3ce573fdeb6
alias a1  scsi-36848f690e856b10018cdf40f5b277cbc
alias b1  scsi-36848f690e856b10018cdf43a5db1b99b
alias a2  scsi-36848f690e856b10018cdf4575f652ad0
alias b2  scsi-36848f690e856b10018cdf47761587cec
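After editing the file, reload the udev rules and check that the aliases appear (the zfs udev rules create them under /dev/disk/by-vdev/):

udevadm trigger
ls -l /dev/disk/by-vdev/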
Use flash for caching/logs. If you have only one SSD, use cfdisk to create a small partition for the ZIL (ZFS intent log) and a larger one for the L2ARC (ZFS read cache on disk). Make sure that the ZIL is on the first partition. In our case we have an Express Flash PCIe SSD with 175 GB capacity, and set up a 25 GB ZIL and a 150 GB L2ARC cache partition.
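If you prefer a scriptable alternative to cfdisk, parted can create the two partitions (a sketch only; /dev/rssda and the 25 GB split are taken from the example above and will differ on your hardware):

# ZIL first, L2ARC on the rest of the device
parted -s /dev/rssda mklabel gpt
parted -s /dev/rssda mkpart zil 1MiB 25GiB
parted -s /dev/rssda mkpart l2arc 25GiB 100%
parted -s /dev/rssda print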
- Edit /etc/modprobe.d/zfs.conf to apply several tuning options for high performance servers:
# ZFS tuning for a proxmox machine that reserves 64GB for ZFS
#
# Don't let ZFS use less than 4GB and more than 64GB
options zfs zfs_arc_min=4294967296
options zfs zfs_arc_max=68719476736
#
# disabling prefetch is no longer required
options zfs l2arc_noprefetch=0
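The two byte values are simply 4 GiB and 64 GiB; if you reserve a different amount of RAM for ZFS, the values can be computed the same way:

echo $((4  * 1024 * 1024 * 1024))    # 4294967296  -> zfs_arc_min
echo $((64 * 1024 * 1024 * 1024))    # 68719476736 -> zfs_arc_max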
- Create a zpool of striped mirrors (equivalent to RAID10) with a log device and a cache device, and always enable compression:
zpool create -O compression=on -f tank mirror a0 b0 mirror a1 b1 mirror a2 b2 log /dev/rssda1 cache /dev/rssda2
- Check the status of the newly created pool:
root@proxmox:/# zpool status
  pool: tank
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            a0      ONLINE       0     0     0
            b0      ONLINE       0     0     0
          mirror-1  ONLINE       0     0     0
            a1      ONLINE       0     0     0
            b1      ONLINE       0     0     0
          mirror-2  ONLINE       0     0     0
            a2      ONLINE       0     0     0
            b2      ONLINE       0     0     0
        logs
          rssda1    ONLINE       0     0     0
        cache
          rssda2    ONLINE       0     0     0

errors: No known data errors
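To confirm that compression is active on the pool (the compressratio only becomes meaningful once some data has been written):

zfs get compression,compressratio tank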
Using PVE 2.3 on a 2013 high performance system with ZFS you can install Windows Server 2012 Datacenter Edition with GUI in just under 4 minutes.
zfs links and docs
- http://zfsonlinux.org/faq.html
- http://wiki.complete.org/ConvertingToZFS
- http://hub.opensolaris.org/bin/download/Community+Group+zfs/docs/zfslast.pdf
The last of these links has some very important information to know before implementing ZFS on a production system.
Check these well-written manual pages:
man zfs
man zpool
zfs mounting workaround
The default 'zfs mount -a' script runs too late in the boot process for most system scripts. The following makes ZFS mounts happen on time.
2014-01-22: the info below came from this excellent wiki page: http://wiki.complete.org/ConvertingToZFS
- Edit /etc/default/zfs and set ZFS_MOUNT='yes'
- Edit /etc/insserv.conf and, at the end of the $local_fs line (shown below), add zfs-mount (without a plus):
#
# All local filesystems are mounted (done during boot phase)
#
$local_fs       +mountall +mountall-bootclean +mountoverflowtmp +umountfs
- Edit /etc/init.d/zfs-mount and find three lines near the top, changing them like this:
# Required-Start:
# Required-Stop:
# Default-Start:  S
Note: the entries after Required-Start and Required-Stop are removed, and Default-Start is set to S.
- To activate the init.d changes, run:
insserv -v -d zfs-mount
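To verify that the script is now part of the boot sequence (with Default-Start set to S, insserv should have created a symlink in /etc/rcS.d/):

ls -l /etc/rcS.d/ | grep zfs-mount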
I had an issue with PVE storage on ZFS: PVE would start before ZFS and create directories at the ZFS mount point. To fix that, start in single user mode and remove the directories [make sure they are empty...].
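For example (a sketch, assuming the pool is mounted at /tank and the stray directories are typical PVE storage subdirectories; rmdir refuses to remove anything non-empty, which is the safety check you want here):

zfs umount tank                             # expose the underlying mount point on the root filesystem
ls -lA /tank                                # only the leftover directories created by PVE should be visible
rmdir /tank/images /tank/private /tank/dump # directory names are examples; rmdir fails if any is not empty
zfs mount tank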