=Introduction=

tbd. (running ZFS on Proxmox VE is not officially supported)

=Native ZFS for Linux on Proxmox=

2013-03-27: With 0.6.1, ZFSOnLinux (ZoL) is now ready for wide-scale deployment on everything from desktops to supercomputers. [https://groups.google.com/a/zfsonlinux.org/group/zfs-announce/topics?lnk See announcement]

Check http://zfsonlinux.org/ and see its Documentation and Community Resources sections for more information and help.

== using Debian Wheezy packages from zfsonlinux ==

For PVE 3.0+ (based on Wheezy).

2013-05-29 - ''DKMS-style packages for Debian are available from the zfsonlinux.org repository. These packages track the latest official upstream tag and are refreshed as new releases are made available'', from http://zfsonlinux.org/debian.html .

Also check http://pthree.org/2012/04/17/install-zfs-on-debian-gnulinux/

*If you are upgrading from the Ubuntu PPA method, remove the Ubuntu packages first:
 aptitude remove ubuntu-zfs

*If this is a new install, or you are repeating a previously failed attempt:
 dpkg --purge zfsonlinux zfsutils libzfs1 zfs-dkms dkms
:and move the old zfs.list out of /etc/apt/sources.list.d

*Make sure the pve headers are installed; if not:
 aptitude install pve-headers-$(uname -r)

*I think the ln is still needed:
 ln -s /lib/modules/$(uname -r)/build /lib/modules/$(uname -r)/source

:Then follow the instructions from http://zfsonlinux.org/debian.html . They are copied here, but check the link in case they have changed:
<pre>
su -
wget http://archive.zfsonlinux.org/debian/pool/main/z/zfsonlinux/zfsonlinux_1%7Ewheezy_all.deb
dpkg -i zfsonlinux_1~wheezy_all.deb
apt-get update
apt-get install debian-zfs
</pre>

You should see the spl and zfs modules being built; check that everything is OK:
<pre>
# ls -l /lib/modules/$(uname -r)/updates/dkms/
total 3128
-rw-r--r-- 1 root root  340944 May 29 10:25 splat.ko
-rw-r--r-- 1 root root  302104 May 29 10:25 spl.ko
-rw-r--r-- 1 root root   13392 May 29 10:27 zavl.ko
-rw-r--r-- 1 root root   71232 May 29 10:27 zcommon.ko
-rw-r--r-- 1 root root 1935120 May 29 10:27 zfs.ko
-rw-r--r-- 1 root root  130408 May 29 10:27 znvpair.ko
-rw-r--r-- 1 root root   40424 May 29 10:27 zpios.ko
-rw-r--r-- 1 root root  330368 May 29 10:27 zunicode.ko
</pre>

TBD:
:test the above again and update the instructions as necessary.
:does DKMS auto-build the modules when the pve kernel is upgraded? For that to happen the headers would need to be installed, and the ln may be needed. A quick way to verify is shown below.
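To verify, list what DKMS has registered and built for the running kernel; dkms status is a standard DKMS command and only reports state, it does not build anything:
 dkms status | grep "$(uname -r)"
If spl and zfs are reported as installed for the new kernel version, the auto-build worked.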
== kernel upgrade ==

Currently this needs to be done whenever a kernel upgrade occurs.

*'''before reboot''':
:For a 2.6.32-11 upgrade from 2.6.32-10, the following was done before rebooting (change 2.6.32-11-pve if your version differs):
 aptitude install pve-headers-2.6.32-11-pve
 ln -s /lib/modules/2.6.32-11-pve/build /lib/modules/2.6.32-11-pve/source

*This will force the modules to be built (there is probably a better way to do this, so correct this if you know how to make dkms do the build; one possibility is sketched below):
 aptitude reinstall spl-dkms zfs-dkms
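A possibly cleaner alternative, untested here and based only on standard DKMS usage, is to ask DKMS to build all registered modules for the new kernel directly:
 # build and install spl/zfs for the new kernel in one step
 dkms autoinstall -k 2.6.32-11-pve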
== using git ==

*Install these:
 aptitude install build-essential gawk alien fakeroot zlib1g-dev uuid uuid-dev libssl-dev parted pve-headers-$(uname -r)

*Get, build and install spl:
<pre>
cd /usr/src
git clone https://github.com/zfsonlinux/spl.git
cd spl
./configure
make deb
dpkg -i *.deb
modprobe spl
</pre>

*Get, build and install zfs:
<pre>
cd /usr/src
git clone https://github.com/zfsonlinux/zfs.git
cd zfs
./configure
make deb
dpkg -i *.deb
</pre>

*Make sure the module can load:
 modprobe zfs

*Installing the debs does not set up the init script, so do it yourself:
 update-rc.d zfs defaults

*'''Note: the deb files built above can be used to install zfs on another system.'''

*If there is a pre-existing zfs pool, simply use the next line to recover it. This can happen if you re-install Proxmox and have zfs disks from before:
 zpool import -f <poolname>
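Running zpool import with no arguments first is a safe way to see which pools ZFS finds on the attached disks before forcing an import by name:
 zpool import   # lists importable pools and their member disks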
*If there is a need or want, I could set up a repository with pre-built debs. Just send an email.

=Creating Pools and Filesystems=

'''See "Install on a high performance system" below for suggestions on creating the pool.''' I'd set up an alias for each disk to make commands like the following, and zpool status reports, easier to read.

<pre>
zpool create -f tank -o ashift=12 raidz2 scsi-1AMCC_F310KNWD4989B600434A \
  scsi-1AMCC_F310WSKD4989B6001C42 scsi-1AMCC_F31122MD4989B6009010 \
  scsi-1AMCC_F312LZKD4989B600445E scsi-1AMCC_F312XYYD4989B600DB04 \
  scsi-1AMCC_F312ZWBD4989B600B860 scsi-1AMCC_F314VEDD4989B600F2F0 \
  scsi-1AMCC_Y38318404989B6000158 scsi-1AMCC_Z240WX6XB8C465007F42 \
  scsi-1AMCC_Z340BGN3B9782E00EC8E scsi-1AMCC_Z340BHW7B8C47900D174 \
  scsi-1AMCC_Z340BJ880001D10074A4 scsi-1AMCC_Z340BJSPB8C456001B6A \
  scsi-1AMCC_Z340BK0L0001CC00DE3A scsi-1AMCC_Z340BM4Z0001D1003858 \
  scsi-1AMCC_Z340BPWDB8C488001BC8 \
  log scsi-3600050e000017c000a9c000092fd0000 \
  cache scsi-3600050e000017c0075c000000ad50000
</pre>

*atime:
 zfs set atime=off tank

==Create directories and mount points==
<pre>
zfs create tank/data
zfs set mountpoint=/data tank/data

zfs create tank/pve
zfs set mountpoint=/pve tank/pve

zfs create tank/bkup
zfs set mountpoint=/bkup tank/bkup
</pre>
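To verify the new file systems and their mount points (an optional sanity check):
 zfs list -o name,mountpoint tank/data tank/pve tank/bkup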
==destroy==
To start over:
 zpool destroy zfs-pool

==Setting Properties==
*atime:
 zfs set atime=off tank

*Automatically NFS-export all home directories [I do not know how to use zfs nfs yet, or whether it works yet on Linux...]:
 zfs set sharenfs=rw tank/home
*Check sharenfs:
<pre>
zfs get sharenfs tank/home
NAME       PROPERTY  VALUE  SOURCE
tank/home  sharenfs  rw     local
</pre>
*Turn on compression for everything in the pool:
 zfs set compression=on tank
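How well compression is working can be checked later through the compressratio property:
 zfs get compressratio tank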
==kvm tuning==
See the thread on the Proxmox forum, per user Nemesiz:

*pool:
 zfs set primarycache=all tank

*kvm config: change the disk cache mode to Write Back.
:You can do this using the web GUI or manually. Example:
 ide0: data_zfs:100/vm-100-disk-1.raw,cache=writeback

If this is not set, the following happened:
<pre>
qm start 4016
kvm: -drive file=/data/pve-storage/images/4016/vm-4016-disk-1.raw,if=none,id=drive-virtio1,aio=native,cache=none: could not open disk image /data/pve-storage/images/4016/vm-4016-disk-1.raw: Invalid argument
</pre>
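The same cache mode can also be set from the command line with qm; a sketch assuming VMID 100 and the volume name from the example above:
 qm set 100 -ide0 data_zfs:100/vm-100-disk-1.raw,cache=writeback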
==Install on a high performance system==

As of 2013, high performance servers have 16-64 cores, 256GB-1TB RAM and potentially many 2.5" disks and/or a PCIe-based SSD with half a million IOPS. High performance systems benefit from a number of custom settings; for example, enabling compression typically improves performance.

*If you have a good number of disks, keep organized by using aliases. Edit /etc/zfs/vdev_id.conf to prepare aliases for the disk devices found in /dev/disk/by-id/ :
 # run 'udevadm trigger' after updating this file
 alias a0 scsi-36848f690e856b10018cdf39854055206
 alias b0 scsi-36848f690e856b10018cdf3ce573fdeb6
 alias a1 scsi-36848f690e856b10018cdf40f5b277cbc
 alias b1 scsi-36848f690e856b10018cdf43a5db1b99b
 alias a2 scsi-36848f690e856b10018cdf4575f652ad0
 alias b2 scsi-36848f690e856b10018cdf47761587cec

Use flash for caching/logs. If you have only one SSD, use cfdisk to create a small partition for the ZIL (ZFS intent log) and a larger one for the L2ARC (ZFS read cache on disk). Make sure that the ZIL is on the first partition. In our case we have an Express Flash PCIe SSD with 175GB capacity and set up a 25GB ZIL and a 150GB L2ARC cache partition.
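The text above suggests cfdisk; parted does the same job and is easier to reproduce, so here is a sketch assuming the SSD shows up as /dev/rssda (adjust the device name and sizes to your hardware):
 parted -s /dev/rssda mklabel gpt
 parted -s /dev/rssda mkpart zil 1MiB 25GiB      # first partition: ZIL
 parted -s /dev/rssda mkpart l2arc 25GiB 100%    # remainder: L2ARC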
*Edit /etc/modprobe.d/zfs.conf to apply several tuning options for high performance servers:

 # ZFS tuning for a Proxmox machine that reserves 64GB for ZFS
 #
 # Don't let ZFS use less than 4GB or more than 64GB of RAM for the ARC
 options zfs zfs_arc_min=4294967296
 options zfs zfs_arc_max=68719476736
 #
 # disabling prefetch is no longer required
 options zfs l2arc_noprefetch=0
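After the zfs module has been (re)loaded with these options, you can confirm that the limits took effect; the values are reported in bytes:
 cat /sys/module/zfs/parameters/zfs_arc_min
 cat /sys/module/zfs/parameters/zfs_arc_max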
*Create a zpool of striped mirrors (equivalent to RAID10) with log device and cache, and always enable compression:

 zpool create -o compression=on -f tank mirror a0 b0 mirror a1 b1 mirror a2 b2 log /dev/rssda1 cache /dev/rssda2

*Check the status of the newly created pool:

<pre>
root@proxmox:/# zpool status
  pool: tank
 state: ONLINE
  scan: none requested
config:

	NAME        STATE     READ WRITE CKSUM
	tank        ONLINE       0     0     0
	  mirror-0  ONLINE       0     0     0
	    a0      ONLINE       0     0     0
	    b0      ONLINE       0     0     0
	  mirror-1  ONLINE       0     0     0
	    a1      ONLINE       0     0     0
	    b1      ONLINE       0     0     0
	  mirror-2  ONLINE       0     0     0
	    a2      ONLINE       0     0     0
	    b2      ONLINE       0     0     0
	logs
	  rssda1    ONLINE       0     0     0
	cache
	  rssda2    ONLINE       0     0     0

errors: No known data errors
</pre>

Using PVE 2.3 on a 2013 high performance system with ZFS, you can install Windows Server 2012 Datacenter Edition with GUI in just under 4 minutes.

=zfs links and docs=
*http://zfsonlinux.org/faq.html
*http://hub.opensolaris.org/bin/download/Community+Group+zfs/docs/zfslast.pdf

And this one has some very important information to know before implementing zfs on a production system:
*http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide

Also check these well-written manual pages:
 man zfs
 man zpool
=How to install ZFS-fuse under Proxmox=

(Tested with kernel 2.6.32)

 apt-get install build-essential libaio-dev libattr1-dev libacl1-dev libz-dev libfuse-dev libfuse2 scons libssl-dev
 wget -nd http://zfs-fuse.net/releases/0.6.0/zfs-fuse-0.6.0.tar.bz2
 tar jxvf zfs-fuse-0.6.0.tar.bz2
 cd zfs-fuse-0.6.0
 cd src
 scons
 scons install

'''Note:''' zfs-fuse-0.6.0 is now (Dec 2012) deprecated. Use [http://zfs-fuse.net/releases/0.7.0/zfs-fuse-0.7.0.tar.bz2 zfs-fuse-0.7.0.tar.bz2] instead.

Fire up the ZFS-fuse daemon:
 /usr/local/sbin/zfs-fuse

Create a zpool named images2, mounted at /var/lib/vz/images2:
 zpool create -m /var/lib/vz/images2 images2 /dev/sdb

Create a ZFS file system on it:
 zfs create images2/109

ZFS-clone vm 109 to vm 110:
 zfs snapshot images2/109@master
 zfs clone images2/109@master images2/110
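Note that the clone only duplicates the disk data; the VM definition itself lives in a separate config file that has to be copied as well. A sketch, assuming the PVE 2.x config path and that you fix up the disk entries in the new file afterwards:
 cp /etc/pve/qemu-server/109.conf /etc/pve/qemu-server/110.conf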
=zfs mounting workaround=

'''The default zfs mount -a script runs too late in the boot process for most system scripts. The following makes zfs mounts start on time.'''

2014-01-22: the info below came from this excellent wiki page: http://wiki.complete.org/ConvertingToZFS

*Edit /etc/default/zfs and set ZFS_MOUNT='yes'

*Edit /etc/insserv.conf,
:and at the end of the $local_fs line,
:add zfs-mount (without a plus).

*Edit /etc/init.d/zfs-mount and find three lines near the top, changing them like this (remove the existing Required-Start and Required-Stop entries):
<pre>
# Required-Start:
# Required-Stop:
# Default-Start:    S
</pre>

*Activate the init.d changes by running:
 insserv -v -d zfs-mount
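To confirm that zfs-mount is now scheduled early in the boot sequence, check for its symlink in the S runlevel (an optional sanity check):
 ls -l /etc/rcS.d/ | grep zfs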
I had an issue with pve storage on zfs: pve would start before zfs and create directories at the zfs mount point. To fix that, boot into single user mode and remove the directories [make sure they are empty...].

Also see https://github.com/zfsonlinux/pkg-zfs/issues/101