Storage: ZFS: Difference between revisions

=Introduction=
tbd. (running ZFS on Proxmox VE is not officially supported)

=Native ZFS for Linux on Proxmox =
2013-03-27: 0.6.1 ZFSOnLinux (ZoL) is now ready for wide scale deployment on everything from desktops to super computers. [https://groups.google.com/a/zfsonlinux.org/group/zfs-announce/topics?lnk See announcement]

Check http://zfsonlinux.org/ and see its Documentation and Community Resources for more information and help.

== using Debian Wheezy packages from zfsonlinux ==
'''Important: on each PVE kernel upgrade you'll need to manually install the headers, make the symlink and rebuild the modules.'''

For PVE 3.0+ on Wheezy.

2013-05-29 - ''DKMS-style packages for Debian are available from the zfsonlinux.org repository. These packages track the latest official upstream tag and are refreshed as new releases are made available'' - see http://zfsonlinux.org/debian.html .

Also check http://pthree.org/2012/04/17/install-zfs-on-debian-gnulinux/

*Make sure the pve headers are installed. If not:
 aptitude install pve-headers-$(uname -r)

*The ln symlink is probably still needed:
 ln -s /lib/modules/$(uname -r)/build /lib/modules/$(uname -r)/source

:Then follow the instructions from http://zfsonlinux.org/debian.html (copied here, but check the link in case they have changed):
<pre>
su -
wget http://archive.zfsonlinux.org/debian/pool/main/z/zfsonlinux/zfsonlinux_1%7Ewheezy_all.deb
dpkg -i zfsonlinux_1~wheezy_all.deb
apt-get update
apt-get install debian-zfs
</pre>

You should see the zfs and spl modules being built. Check that they were installed correctly:
<pre>
# ls -l /lib/modules/$(uname -r)/updates/dkms/
total 3128
-rw-r--r-- 1 root root  340944 May 29 10:25 splat.ko
-rw-r--r-- 1 root root  302104 May 29 10:25 spl.ko
-rw-r--r-- 1 root root  13392 May 29 10:27 zavl.ko
-rw-r--r-- 1 root root  71232 May 29 10:27 zcommon.ko
-rw-r--r-- 1 root root 1935120 May 29 10:27 zfs.ko
-rw-r--r-- 1 root root  130408 May 29 10:27 znvpair.ko
-rw-r--r-- 1 root root  40424 May 29 10:27 zpios.ko
-rw-r--r-- 1 root root  330368 May 29 10:27 zunicode.ko
</pre>
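
Since these packages build the modules through DKMS, dkms itself can also report whether the build succeeded for the running kernel (a quick additional check, assuming the dkms tooling pulled in by debian-zfs):
<pre>
# lists every DKMS module and the kernels it was built for;
# spl and zfs should show up as "installed" for the current kernel
dkms status
</pre>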

TBD:
:Test the above again and update the instructions as necessary.
:Does dkms auto-build the modules when the PVE kernel is upgraded? For that to happen the headers would need to be installed, and the ln may be needed.

==pve kernel upgrade==
Currently the following needs to be done after each kernel upgrade.

*Before rebooting:
<pre>
#
# 1- you need to replace the XX with numbers from new kernel version:
#
aptitude install pve-headers-2.6.32-XX-pve

#
# 2- you need to replace the XX with numbers from new kernel version:
#
ln -s /lib/modules/2.6.32-XX-pve/build /lib/modules/2.6.32-XX-pve/source
 
#
# 3- this will force modules to be  built
#    the more pve kernels installed the longer this takes. [ remove old ones ].
#
aptitude  reinstall spl-dkms  zfs-dkms
 
#
# 4- make sure modules are installed ok:
#
ls -l /lib/modules/2.6.32-XX-pve/updates/dkms/
 
#
# 5- reboot to use new kernel
#
</pre>
 
Some might want to put these steps in the motd or save them as a text file under /root/, for example as a small helper script like the one sketched below.
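
A minimal sketch of such a helper script, assuming the packages and paths used above; pass the new kernel version (the 2.6.32-XX-pve string with XX filled in) as the first argument:
<pre>
#!/bin/sh
# rebuild-zfs.sh <new-pve-kernel-version>
# Re-runs the post-kernel-upgrade steps documented above.
set -e
KVER="$1"

# 1+2 - headers and the build/source symlink for the new kernel
aptitude install pve-headers-"$KVER"
ln -sf /lib/modules/"$KVER"/build /lib/modules/"$KVER"/source

# 3 - force the DKMS modules to be rebuilt
aptitude reinstall spl-dkms zfs-dkms

# 4 - confirm the modules were installed for the new kernel
ls -l /lib/modules/"$KVER"/updates/dkms/
</pre>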
 
== kvm tuning==
See the thread on the Proxmox forum, per user Nemesiz:
*pool:
 zfs set primarycache=all tank
*kvm config:
*change the cache mode to Write Back
:You can do this in the web GUI or manually. Example:
 ide0: data_zfs:100/vm-100-disk-1.raw,cache=writeback
If the cache mode is not set, the following can happen:
<pre>
qm start 4016
kvm: -drive file=/data/pve-storage/images/4016/vm-4016-disk-1.raw,if=none,id=drive-virtio1,aio=native,cache=none: could not open disk image /data/pve-storage/images/4016/vm-4016-disk-1.raw: Invalid argument
</pre>
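
The same change can also be made from the command line with qm set (a sketch; the VM ID, storage name and volume are just the ones from the example above):
<pre>
# switch the existing ide0 disk of VM 100 to writeback caching
qm set 100 --ide0 data_zfs:100/vm-100-disk-1.raw,cache=writeback
</pre>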
 
== Install on a high performance system==
 
As of 2013, high performance servers have 16-64 cores, 256GB-1TB RAM, and potentially many 2.5" disks and/or a PCIe-based SSD with half a million IOPS. Such systems benefit from a number of custom settings; for example, enabling compression typically improves performance.
 
* If you have a good number of disks, keep them organized by using aliases. Edit /etc/zfs/vdev_id.conf to prepare aliases for the disk devices found in /dev/disk/by-id/:
<pre>
# run 'udevadm trigger' after updating this file
alias a0        scsi-36848f690e856b10018cdf39854055206
alias b0        scsi-36848f690e856b10018cdf3ce573fdeb6
alias a1        scsi-36848f690e856b10018cdf40f5b277cbc
alias b1        scsi-36848f690e856b10018cdf43a5db1b99b
alias a2        scsi-36848f690e856b10018cdf4575f652ad0
alias b2        scsi-36848f690e856b10018cdf47761587cec
</pre>
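
After editing the file, a quick way to confirm the aliases took effect (vdev_id creates the links under /dev/disk/by-vdev/):
<pre>
udevadm trigger
ls -l /dev/disk/by-vdev/
</pre>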
 
Use flash for caching/logs. If you have only one SSD, use cfdisk to create a small partition for the ZIL (ZFS intent log) and a larger one for the L2ARC (ZFS read cache on disk). Make sure that the ZIL is on the first partition. In our case we have an Express Flash PCIe SSD with 175GB capacity and set up a 25GB ZIL and a 150GB L2ARC cache partition, as sketched below.
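
A non-interactive sketch of that partitioning, assuming the SSD shows up as /dev/rssda as in the zpool command further down (cfdisk works just as well interactively; the sizes match the 25GB/150GB split described above):
<pre>
# GPT label, then a small first partition for the ZIL and the rest for the L2ARC
parted -s /dev/rssda mklabel gpt
parted -s /dev/rssda mkpart zil 1MiB 25GiB
parted -s /dev/rssda mkpart l2arc 25GiB 100%
</pre>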
 
* Edit /etc/modprobe.d/zfs.conf to apply several tuning options for high performance servers:
 
<pre>
# ZFS tuning for a proxmox machine that reserves 64GB for ZFS
#
# Don't let ZFS use less than 4GB and more than 64GB
options zfs zfs_arc_min=4294967296
options zfs zfs_arc_max=68719476736
#
# disabling prefetch is no longer required
options zfs l2arc_noprefetch=0
</pre>
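
After a reboot (or after reloading the zfs module) you can verify which ARC limits are actually in effect; the module parameters are exposed under /sys:
<pre>
cat /sys/module/zfs/parameters/zfs_arc_min
cat /sys/module/zfs/parameters/zfs_arc_max
</pre>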
 
* Create a zpool of striped mirrors (equivalent to RAID10) with a log device and a cache device, and always enable compression (note -O rather than -o: compression is a file system property, not a pool property):

 zpool create -O compression=on -f tank mirror a0 b0 mirror a1 b1 mirror a2 b2 log /dev/rssda1 cache /dev/rssda2

* Check the status of the newly created pool:
 
<pre>
root@proxmox:/# zpool status
  pool: tank
state: ONLINE
  scan: none requested
config:
 
        NAME        STATE    READ WRITE CKSUM
        tank        ONLINE      0    0    0
          mirror-0  ONLINE      0    0    0
            a0      ONLINE      0    0    0
            b0      ONLINE      0    0    0
          mirror-1  ONLINE      0    0    0
            a1      ONLINE      0    0    0
            b1      ONLINE      0    0    0
          mirror-2  ONLINE      0    0    0
            a2      ONLINE      0    0    0
            b2      ONLINE      0    0    0
        logs
          rssda1    ONLINE      0    0    0
        cache
          rssda2    ONLINE      0    0    0
 
errors: No known data errors
</pre>
 
Using PVE 2.3 on a 2013 high performance system with ZFS, you can install Windows Server 2012 Datacenter Edition with GUI in just under 4 minutes.
 
=zfs links and docs=
*http://zfsonlinux.org/faq.html
*http://wiki.complete.org/ConvertingToZFS
*http://hub.opensolaris.org/bin/download/Community+Group+zfs/docs/zfslast.pdf
 
This one has some very important information to know before implementing ZFS on a production system:
*http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
 
Also check these well-written manual pages:
 man zfs
 man zpool
 
=zfs mounting workaround =
'''The default zfs mount -a script runs too late in the boot process for most system scripts. The following makes ZFS mounts start on time.'''
 
2014-01-22  the info below came from this excellent wiki page:  http://wiki.complete.org/ConvertingToZFS
 
*Edit /etc/default/zfs and set ZFS_MOUNT='yes'
 
*Edit /etc/insserv.conf,
:and at the end of the $local_fs line,
:add zfs-mount (without a plus).
<pre>
#
# All local filesystems are mounted (done during boot phase)
#
$local_fs      +mountall +mountall-bootclean +mountoverflowtmp +umountfs
</pre>
 
Edit /etc/init.d/zfs-mount, find these three lines near the top, and change them like this:
<pre>
# Required-Start:
# Required-Stop:
# Default-Start: S
</pre>
''Note: the Required-Start and Required-Stop values are removed, leaving those entries empty.''
 
 
*To activate the init.d changes, run:
<pre>
insserv -v -d zfs-mount
</pre>
 
There was an issue with PVE storage on ZFS: PVE would start before ZFS and create directories at the ZFS mount point, which then kept the dataset from mounting there. To fix that, boot into single user mode and remove those directories (make sure they are empty first), as sketched below.
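
A minimal sketch of that cleanup, assuming the pool is called tank and is mounted at /tank; the directory names are only an illustration, adjust them to whatever PVE created at your mount point:
<pre>
# from single user mode, before the pool is mounted:
# rmdir refuses to delete non-empty directories, so nothing real is lost
rmdir /tank/vmdata/images /tank/vmdata
zfs mount -a
</pre>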
 
Also see https://github.com/zfsonlinux/pkg-zfs/issues/101

= Latest revision as of 08:08, 6 December 2019 =

Storage pool type: zfspool

This backend allows you to access local ZFS pools (or ZFS file systems inside such pools).

Configuration

The backend supports the common storage properties content, nodes, disable, and the following ZFS specific properties:

pool

Select the ZFS pool/filesystem. All allocations are done within that pool.

blocksize

Set ZFS blocksize parameter.

sparse

Use ZFS thin-provisioning. A sparse volume is a volume whose reservation is not equal to the volume size.

mountpoint

The mount point of the ZFS pool/filesystem. Changing this does not affect the mountpoint property of the dataset seen by zfs. Defaults to /<pool>.

Configuration Example (/etc/pve/storage.cfg)
zfspool: vmdata
        pool tank/vmdata
        content rootdir,images
        sparse
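
A variant of the same entry with the optional properties described above spelled out (a sketch; the blocksize and mountpoint values are only illustrative, not recommendations):

zfspool: vmdata
        pool tank/vmdata
        content rootdir,images
        blocksize 8k
        mountpoint /tank/vmdata
        sparse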

File naming conventions

The backend uses the following naming scheme for VM images:

vm-<VMID>-<NAME>      // normal VM images
base-<VMID>-<NAME>    // template VM image (read-only)
subvol-<VMID>-<NAME>  // subvolumes (ZFS filesystem for containers)
<VMID>

This specifies the owner VM.

<NAME>

This can be an arbitrary name (ascii) without white space. The backend uses disk[N] as default, where [N] is replaced by an integer to make the name unique.

Storage Features

ZFS is probably the most advanced storage type regarding snapshot and cloning. The backend uses ZFS datasets for both VM images (format raw) and container data (format subvol). ZFS properties are inherited from the parent dataset, so you can simply set defaults on the parent dataset.
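
For example, setting a property once on the parent dataset makes every volume below it inherit that value (a sketch; the child volume name is just one that the naming scheme above could produce):

# zfs set compression=on tank/vmdata
# zfs get compression tank/vmdata/vm-100-disk-0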

Table 1. Storage features for backend zfs

Content types    Image formats    Shared    Snapshots    Clones
images rootdir   raw subvol       no        yes          yes

Examples

It is recommended to create an extra ZFS file system to store your VM images:

# zfs create tank/vmdata

To enable compression on that newly allocated file system:

# zfs set compression=on tank/vmdata

You can get a list of available ZFS filesystems with:

# pvesm zfsscan