=Introduction=

Running ZFS on Proxmox VE is not officially supported.

=Native ZFS for Linux on Proxmox=
2013-03-27: ZFSOnLinux (ZoL) 0.6.1 is now "ready for wide scale deployment on everything from desktops to super computers". [https://groups.google.com/a/zfsonlinux.org/group/zfs-announce/topics?lnk See the announcement]

See http://zfsonlinux.org/ and its Documentation and Community Resources for more information and help.
== Using Debian Wheezy packages from zfsonlinux ==

For PVE 3.0+ on Debian Wheezy.

2013-05-29: ''DKMS-style packages for Debian are available from the zfsonlinux.org repository. These packages track the latest official upstream tag and are refreshed as new releases are made available.'' See http://zfsonlinux.org/debian.html.

Also check http://pthree.org/2012/04/17/install-zfs-on-debian-gnulinux/
*If you are upgrading from the Ubuntu PPA method, remove the Ubuntu packages first:
aptitude remove ubuntu-zfs
:Also move the old zfs.list out of /etc/apt/sources.list.d.
*Make sure the pve headers are installed. The following assumes kernel 2.6.32-20-pve; check first with 'uname -r' and adjust the version in the lines below:
aptitude install pve-headers-2.6.32-20-pve
*I think the symlink is still needed:
ln -s /lib/modules/2.6.32-20-pve/build /lib/modules/2.6.32-20-pve/source
:Then follow the instructions from http://zfsonlinux.org/debian.html, copied here; check the link in case they have changed:
<pre>
su -
wget http://archive.zfsonlinux.org/debian/pool/main/z/zfsonlinux/zfsonlinux_1%7Ewheezy_all.deb
dpkg -i zfsonlinux_1~wheezy_all.deb
apt-get update
apt-get install debian-zfs
</pre>
You should see the zfs and spl module builds. Check that the modules are in place:
<pre>
# ls -l /lib/modules/2.6.32-20-pve/updates/dkms/
total 3128
-rw-r--r-- 1 root root  340944 May 29 10:25 splat.ko
-rw-r--r-- 1 root root  302104 May 29 10:25 spl.ko
-rw-r--r-- 1 root root   13392 May 29 10:27 zavl.ko
-rw-r--r-- 1 root root   71232 May 29 10:27 zcommon.ko
-rw-r--r-- 1 root root 1935120 May 29 10:27 zfs.ko
-rw-r--r-- 1 root root  130408 May 29 10:27 znvpair.ko
-rw-r--r-- 1 root root   40424 May 29 10:27 zpios.ko
-rw-r--r-- 1 root root  330368 May 29 10:27 zunicode.ko
</pre>
TBD:
:Test the above again and update the instructions as necessary.
:Does DKMS automatically rebuild the modules when the pve kernel is upgraded? For that to happen the matching headers would need to be installed, and the build/source symlink may be needed. A quick way to check is sketched below.
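A minimal sketch for checking whether the DKMS modules exist for the running kernel and rebuilding them if not (assuming the spl/zfs DKMS packages from zfsonlinux.org are installed and the matching pve-headers package is present):
<pre>
# show which kernels the spl/zfs modules are built for
dkms status

# build and install any registered modules that are missing for the running kernel
dkms autoinstall -k $(uname -r)
</pre>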
== Using Ubuntu PPA with Debian Squeeze ==

Note (2013-03-17): this procedure worked well for PVE 2.3.
<pre>
# add the repository key
apt-key adv --keyserver keyserver.ubuntu.com --recv-keys F6B0FC61

# add the package source
echo "deb http://ppa.launchpad.net/zfs-native/stable/ubuntu lucid main" > /etc/apt/sources.list.d/zfs.list

aptitude update
aptitude install dkms pve-headers-$(uname -r)
</pre>
 
 
 
*This symlink is needed for DKMS to work:
 
<pre>
 
ln -s /lib/modules/2.6.32-10-pve/build /lib/modules/2.6.32-10-pve/source
 
</pre>
 
'''Maybe someone who knows how to write a patch could submit one and suggest that this be done automatically when the kernel is upgraded.''' Otherwise, when the Proxmox kernel is upgraded, the DKMS build may fail if this symlink is not created (using the new pve-headers).
 
 
 
 
 
Then install ZFS:
 
<pre>
 
aptitude install ubuntu-zfs
 
</pre>
 
 
 
Finally, make sure that ZFS is started at system boot by editing /etc/default/zfs:
 
<pre>
 
# ZoL userland configuration.
 
 
 
# Run `zfs mount -a` during system start?
 
# This should be 'no' if zfs-mountall or a systemd generator is available.
 
ZFS_MOUNT='yes'
 
</pre>
 
 
 
 
 
 
 
== Kernel upgrade ==

Currently the following needs to be done whenever a kernel upgrade occurs.

*'''Before reboot''':

:Example for an upgrade from 2.6.32-10-pve to 2.6.32-11-pve; change the version below if yours is different. Run this before rebooting:
 
aptitude install pve-headers-2.6.32-11-pve
 
ln -s /lib/modules/2.6.32-11-pve/build /lib/modules/2.6.32-11-pve/source
 
 
 
*This will force the modules to be rebuilt (there is probably a better way to do this; correct it if you know how to make DKMS do the build). A DKMS-based alternative is sketched after the command below.
 
aptitude reinstall spl-dkms  zfs-dkms
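As an alternative sketch using DKMS directly, the modules can be built for the new kernel before rebooting. The module versions below are placeholders; use the versions reported by dkms status on your system:
<pre>
# show the registered module versions and the kernels they are built for
dkms status

# build and install spl/zfs for the newly installed kernel (adjust versions and kernel)
dkms install -m spl -v 0.6.1 -k 2.6.32-11-pve
dkms install -m zfs -v 0.6.1 -k 2.6.32-11-pve
</pre>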
 
 
 
== Using git ==

*Install the build dependencies:
 
aptitude install build-essential gawk alien fakeroot zlib1g-dev uuid uuid-dev libssl-dev parted  pve-headers-$(uname -r)
 
*Get, build and install SPL:
 
<pre>
 
cd /usr/src
 
git clone https://github.com/zfsonlinux/spl.git
 
cd spl
 
./configure
 
make deb
 
dpkg -i *.deb
 
modprobe spl
 
</pre>
 
*Get, build and install ZFS:
 
<pre>
 
cd /usr/src
 
git clone https://github.com/zfsonlinux/zfs.git
 
cd zfs
 
./configure
 
make deb
 
dpkg -i *.deb
 
</pre>
 
 
 
*Make sure the module can load:
 
modprobe zfs
 
 
 
*Installing the debs does not enable ZFS at boot, so do it manually:
 
update-rc.d zfs defaults
 
 
 
*'''Note: the deb files built above can be used to install ZFS on another system.'''
 
 
 
*If there is a pre-existing ZFS pool, simply import it to recover it. This can happen if you re-install Proxmox and still have ZFS disks from before. Run zpool import with no arguments to list importable pools, then import by name:
zpool import
zpool import <poolname>
 
 
 
 
 
*If there is a need or want, I could set up a repository with pre-built debs. Just send an email.
 
 
 
=Creating Pools and Filesystems=
 
'''Use raidz2 or mirror on main data systems.'''
 
 
 
==Create a raidz pool named “tank”==
 
zpool create  -f -o ashift=12 tank  raidz  /dev/sda  /dev/sdb  /dev/sdc  /dev/sdd  /dev/sde  /dev/sdf  /dev/sdg  /dev/sdh
 
 
 
*Note: the following error happened on two or three of the disks the first time; these were disks from a 3ware RAID.
 
<pre>
 
zpool create  -f -o ashift=12 tank  raidz2  /dev/sda  /dev/sdb  /dev/sdc  /dev/sdd  /dev/sde  /dev/sdf  /dev/sdg  /dev/sdh
 
invalid vdev specification
 
the following errors must be manually repaired:
 
cannot stat /dev/sdg2: No such file or directory
 
</pre>
 
I had to use fdisk to add partitions (1, 2, and so on) to those disks to get past this. I do not remember whether I left the partitions in place afterwards, but adding them fixed the error.
 
<blockquote>
 
'''''NOTE''': This is quite risky if you intend to deploy ZFS on a server. Any changes to the operating system's devices layout will break your pool. Don't create a pool using device names like ''<code>/dev/sda</code>''. For persistence, use devices under ''<code>/dev/disk/by-id</code>''.'' - [[User:Bryan Baldwin|Bryan Baldwin]]
 
</blockquote>
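Following the note above, the same pool can be created with persistent device names. A minimal sketch; the by-id names below are placeholders, use the ones actually listed on your system:
<pre>
ls -l /dev/disk/by-id/
zpool create -f -o ashift=12 tank raidz2 \
  /dev/disk/by-id/scsi-DISK1 \
  /dev/disk/by-id/scsi-DISK2 \
  /dev/disk/by-id/scsi-DISK3 \
  /dev/disk/by-id/scsi-DISK4
</pre>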
 
 
 
==Create directories and mount points==
 
<pre>
 
zfs create tank/data
 
zfs set mountpoint=/data tank/data
 
</pre>
 
 
 
*Mount all ZFS filesystems:
 
zfs mount -a
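A quick check that the dataset exists and is mounted where expected (name, mountpoint and mounted are standard zfs list columns):
<pre>
zfs list -o name,mountpoint,mounted tank/data
</pre>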
 
 
 
==Destroy==

To start over:
 
zpool destroy zfs-pool
 
 
 
==Setting Properties==
 
*Automatically NFS-export all home directories (I do not know how to use ZFS NFS sharing yet, or whether it works on Linux yet):
 
zfs set sharenfs=rw tank/home
 
*Check sharenfs:
 
<pre>
 
zfs get sharenfs tank/home
 
NAME       PROPERTY  VALUE  SOURCE
tank/home  sharenfs  rw     local
 
</pre>
 
*Turn on compression for everything in the pool
 
zfs set compression=on tank
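To confirm the setting and, once data has been written, see the achieved ratio (compressratio is a standard read-only ZFS property):
<pre>
zfs get compression,compressratio tank
</pre>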
 
 
 
== KVM tuning ==

See the thread on the Proxmox forum, per user Nemesiz:
 
*pool:
 
zfs set primarycache=all tank
 
*KVM config: change the disk cache mode to Write Back.
:You can do this using the web GUI or manually. Example:
ide0: data_zfs:100/vm-100-disk-1.raw,cache=writeback
:If this is not set, the following error occurred:
 
<pre>
 
qm start 4016
 
kvm: -drive file=/data/pve-storage/images/4016/vm-4016-disk-1.raw,if=none,id=drive-virtio1,aio=native,cache=none: could not open disk image /data/pve-storage/images/4016/vm-4016-disk-1.raw: Invalid argument
 
</pre>
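As a command-line alternative to the GUI, the cache mode can be set with qm set by re-specifying the drive with its options. This is a sketch assuming VM 100 with its disk on a storage named data_zfs; adjust the VM ID, bus and volume name to your setup:
<pre>
qm set 100 --ide0 data_zfs:100/vm-100-disk-1.raw,cache=writeback
</pre>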
 
== Install on a high performance system==
 
 
 
As of 2013, high performance servers have 16-64 cores, 256 GB-1 TB RAM and potentially many 2.5" disks and/or a PCIe-based SSD with half a million IOPS. High performance systems benefit from a number of custom settings; for example, enabling compression typically improves performance.
 
 
 
*If you have a good number of disks, keep them organized by using aliases. Edit /etc/zfs/vdev_id.conf to define aliases for the disk devices found in /dev/disk/by-id/:
 
# run 'udevadm trigger' after updating this file
 
alias a0        scsi-36848f690e856b10018cdf39854055206
 
alias b0        scsi-36848f690e856b10018cdf3ce573fdeb6
 
alias a1        scsi-36848f690e856b10018cdf40f5b277cbc
 
alias b1        scsi-36848f690e856b10018cdf43a5db1b99b
 
alias a2        scsi-36848f690e856b10018cdf4575f652ad0
 
alias b2        scsi-36848f690e856b10018cdf47761587cec
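After updating the file, trigger udev and check that the aliases appear; with the vdev_id udev rules shipped by ZFS on Linux, the aliased names should show up under /dev/disk/by-vdev/ (a quick sanity check, path assumed from the ZoL defaults):
<pre>
udevadm trigger
ls -l /dev/disk/by-vdev/
</pre>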
 
 
 
*Use flash for caching/logs. If you have only one SSD, use cfdisk to create a small partition for the ZIL (ZFS intent log) and a larger one for the L2ARC (ZFS read cache on disk). Make sure that the ZIL is on the first partition. In our case we have an Express Flash PCIe SSD with 175 GB capacity and set up a 25 GB ZIL and a 150 GB L2ARC cache partition. The sketch below shows how such devices can be attached to an existing pool.
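A sketch for attaching the log and cache devices to a pool that already exists (rssda1/rssda2 are the SSD partitions from the example above):
<pre>
zpool add tank log /dev/rssda1
zpool add tank cache /dev/rssda2
</pre>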
 
 
 
*Edit /etc/modprobe.d/zfs.conf to apply several tuning options for high performance servers:
 
 
 
# ZFS tuning for a proxmox machine that reserves 64GB for ZFS
 
#
 
# Don't let ZFS use less than 4GB or more than 64GB of RAM
 
options zfs zfs_arc_min=4294967296
 
options zfs zfs_arc_max=68719476736
 
#
 
# disabling prefetch is no longer required
 
options zfs l2arc_noprefetch=0
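On ZFS on Linux the ARC limits are also exposed as module parameters, so the current value can be inspected and changed at runtime without a reboot (values are in bytes; 68719476736 = 64 GiB):
<pre>
cat /sys/module/zfs/parameters/zfs_arc_max
echo 68719476736 > /sys/module/zfs/parameters/zfs_arc_max
</pre>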
 
 
 
* create a zpool of striped mirrors (equivalent to RAID10) with log device and cache and always enable compression:
 
 
 
zpool create -o compression=on -f tank mirror a0 b0 mirror a1 b1 mirror a2 b2 log /dev/rssda1 cache /dev/rssda2
 
 
 
*Check the status of the newly created pool:
 
 
 
<pre>
 
root@proxmox:/# zpool status
 
  pool: tank
 
state: ONLINE
 
  scan: none requested
 
config:
 
 
 
        NAME        STATE    READ WRITE CKSUM
 
        tank        ONLINE      0    0    0
 
          mirror-0  ONLINE      0    0    0
 
            a0      ONLINE      0    0    0
 
            b0      ONLINE      0    0    0
 
          mirror-1  ONLINE      0    0    0
 
            a1      ONLINE      0    0    0
 
            b1      ONLINE      0    0    0
 
          mirror-2  ONLINE      0    0    0
 
            a2      ONLINE      0    0    0
 
            b2      ONLINE      0    0    0
 
        logs
 
          rssda1    ONLINE      0    0    0
 
        cache
 
          rssda2    ONLINE      0    0    0
 
 
 
errors: No known data errors
 
</pre>
 
 
 
Using PVE 2.3 on a 2013 high performance system with ZFS you can install Windows Server 2012 Datacenter Edition with GUI in just under 4 minutes.
 
 
 
=ZFS links and docs=
 
*http://zfsonlinux.org/faq.html
 
 
 
*http://hub.opensolaris.org/bin/download/Community+Group+zfs/docs/zfslast.pdf
 
 
 
This guide has some very important information to know before implementing ZFS on a production system:
 
*http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
 
 
 
Check these well-written manual pages:
 
man zfs
 
 
man zpool
 
 
 
= How to install ZFS-fuse under Proxmox =
 
 
 
(Tested with Kernel 2.6.32)
 
 
 
  apt-get install build-essential libaio-dev libattr1-dev libacl1-dev libz-dev libfuse-dev libfuse2 scons libssl-dev
 
  wget -nd http://zfs-fuse.net/releases/0.6.0/zfs-fuse-0.6.0.tar.bz2
 
  tar jxvf zfs-fuse-0.6.0.tar.bz2
 
  cd zfs-fuse-0.6.0
 
  cd src
 
  scons
 
  scons install
 
 
 
'''Note:''' zfs-fuse-0.6.0 is now (Dec 2012) deprecated. Use [http://zfs-fuse.net/releases/0.7.0/zfs-fuse-0.7.0.tar.bz2 zfs-fuse-0.7.0.tar.bz2] instead.
 
 
 
Fire up the ZFS-fuse daemon:
 
  /usr/local/sbin/zfs-fuse
 
 
 
Create a zpool (the pool is named images2 and mounted at /var/lib/vz/images2):

  zpool create -m /var/lib/vz/images2 images2 /dev/sdb
 
 
 
Create a ZFS filesystem for VM 109:
 
  zfs create images2/109
 
 
 
Clone VM 109 to VM 110 using ZFS:
 
  zfs snapshot images2/109@master
 
  zfs clone images2/109@master images2/110
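The clone remains dependent on the snapshot it was created from. If VM 110 should keep working even when the origin image is destroyed, the clone can be promoted (standard ZFS behaviour):
<pre>
zfs promote images2/110
</pre>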
 
 
 
= Debian GNU/kFreeBSD =
 
'''As of 2012-02 the Squeeze version does not have NFS support, so as far as I know it can not be used for Proxmox storage.''' Once Debian GNU/kFreeBSD has NFS support, I would use it for storage of ISOs and backups to start. In the meantime it works as a Samba server.

ZFS works much better on Debian GNU/kFreeBSD than with FUSE.
 
 
 
After installing the system on KVM or on hardware:
 
 
 
*Add some disks. In KVM I was only able to get IDE to work.

*The disks look like this:
 
<pre>
 
ls /dev/ad*
 
/dev/ad10  /dev/ad10s1  /dev/ad10s2  /dev/ad11  /dev/ad12  /dev/ad13
 
</pre>
 
 
 
*Per http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide (if I read it right), use the entire disk, not slices, for ZFS (slices are the BSD/Solaris equivalent of partitions).
 
 
 
*If the disks have partition tables, clear them first (but not on the operating system disk!). I used fdisk to clear the tables. This may not be necessary; if it is not, please delete this part.
 
<pre>
 
fdisk /dev/ad11
 
o
 
w
 
</pre>
 
Then do the same for ad12 and ad13.
 
 
 
 
*Install the ZFS utilities:
 
aptitude install zfsutils
 
 
 
*Make a pool:
 
zpool create -f  zfs-pool /dev/ad11 /dev/ad12  /dev/ad13
 
 
 
df | grep zfs-pool
 
zfs-pool      zfs    1.4T  18K  1.4T  1% /zfs-pool
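The pool can also be checked with the ZFS tools themselves:
<pre>
zpool status zfs-pool
zpool list zfs-pool
</pre>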
 
 
 
*http://wiki.debian.org/Debian_GNU/kFreeBSD
 

=Latest revision as of 10:08, 6 December 2019=

Storage pool type: zfspool

This backend allows you to access local ZFS pools (or ZFS file systems inside such pools).

==Configuration==

The backend supports the common storage properties content, nodes, disable, and the following ZFS specific properties:

;pool
:Select the ZFS pool/filesystem. All allocations are done within that pool.
;blocksize
:Set ZFS blocksize parameter.
;sparse
:Use ZFS thin-provisioning. A sparse volume is a volume whose reservation is not equal to the volume size.
;mountpoint
:The mount point of the ZFS pool/filesystem. Changing this does not affect the mountpoint property of the dataset seen by zfs. Defaults to /<pool>.

==Configuration Example (/etc/pve/storage.cfg)==
<pre>
zfspool: vmdata
        pool tank/vmdata
        content rootdir,images
        sparse
</pre>

==File naming conventions==

The backend uses the following naming scheme for VM images:

<pre>
vm-<VMID>-<NAME>      // normal VM images
base-<VMID>-<NAME>    // template VM image (read-only)
subvol-<VMID>-<NAME>  // subvolumes (ZFS filesystem for containers)
</pre>
;<VMID>
:This specifies the owner VM.
;<NAME>
:This can be an arbitrary name (ascii) without white space. The backend uses disk[N] as default, where [N] is replaced by an integer to make the name unique.

==Storage Features==

ZFS is probably the most advanced storage type regarding snapshot and cloning. The backend uses ZFS datasets for both VM images (format raw) and container data (format subvol). ZFS properties are inherited from the parent dataset, so you can simply set defaults on the parent dataset.

Table 1. Storage features for backend zfs
{| class="wikitable"
! Content types !! Image formats !! Shared !! Snapshots !! Clones
|-
| images rootdir || raw subvol || no || yes || yes
|}

==Examples==

It is recommended to create an extra ZFS file system to store your VM images:

# zfs create tank/vmdata

To enable compression on that newly allocated file system:

# zfs set compression=on tank/vmdata

You can get a list of available ZFS filesystems with:

# pvesm zfsscan
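As a sketch, the same storage definition can also be created from the command line instead of editing /etc/pve/storage.cfg by hand; the option names follow the properties listed above, but verify them with pvesm help add on your version:
<pre>
# pvesm add zfspool vmdata --pool tank/vmdata --content rootdir,images --sparse 1
</pre>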