Difference between revisions of "Storage: ZFS"

=Native ZFS for Linux on Proxmox 2.X=

2013-03-27:  0.6.1  ZFSOnLinux (ZoL) is now ready for wide scale deployment on everything from desktops to super computers. [https://groups.google.com/a/zfsonlinux.org/group/zfs-announce/topics?lnk Check announcement]

Check http://zfsonlinux.org/ and see the Documentation and Community Resources there for more information and help.

I had tried 3 other operating systems, including Debian GNU/kFreeBSD, and for each I was able to recover all the ZFS file systems from previous installs. Even with disk name changes (BSD uses ad11, Linux sdb) no data was lost. Today I set up zfsonlinux using the following info on 2 Proxmox 2.0 systems.

== using dkms ==

Note (2013-03-17): this procedure worked well for PVE 2.3

<pre>
# apt key
apt-key adv --keyserver keyserver.ubuntu.com --recv-keys F6B0FC61

# sources:
echo "deb http://ppa.launchpad.net/zfs-native/stable/ubuntu lucid main" > /etc/apt/sources.list.d/zfs.list

aptitude update

aptitude install dkms pve-headers-$(uname -r)
</pre>

*this is needed for dkms to work:
<pre>
ln -s /lib/modules/2.6.32-10-pve/build /lib/modules/2.6.32-10-pve/source
</pre>

'''Maybe someone who knows how to write a patch could submit one and suggest this be done when the kernel is upgraded.''' Otherwise I think that when the Proxmox kernel is upgraded, the dkms build may fail if this is not done (using the new pve-headers).

then install zfs
<pre>
aptitude install ubuntu-zfs
</pre>

finally make sure that zfs is started at system boot, edit /etc/default/zfs
<pre>
# ZoL userland configuration.

# Run `zfs mount -a` during system start?
# This should be 'no' if zfs-mountall or a systemd generator is available.
ZFS_MOUNT='yes'
</pre>

=== kernel upgrade  ===
currently this needs to be done when a kernel upgrade occurs.

*'''before reboot''':
:for the 2.6.32-11 upgrade from 2.6.32-10, this was done before rebooting (change 2.6.32-11-pve if your kernel version is different):
aptitude install pve-headers-2.6.32-11-pve
ln -s /lib/modules/2.6.32-11-pve/build /lib/modules/2.6.32-11-pve/source

*this will force the modules to be built for the new kernel (there is probably a better way to do this; a possible dkms-based alternative is sketched below - correct this if you know how to make dkms do the build):
aptitude reinstall spl-dkms  zfs-dkms
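
*a possible dkms-based alternative (untested here; the module version numbers are only examples - check the output of "dkms status" for the real ones):
<pre>
# list the modules and versions registered with dkms
dkms status
# build and install them explicitly for the new kernel
dkms install -m spl -v 0.6.1 -k 2.6.32-11-pve
dkms install -m zfs -v 0.6.1 -k 2.6.32-11-pve
</pre>
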
== using git ==
*install these
aptitude install build-essential gawk alien fakeroot zlib1g-dev uuid uuid-dev libssl-dev parted  pve-headers-$(uname -r)

*get, make and install spl
<pre>
cd /usr/src
git clone https://github.com/zfsonlinux/spl.git
cd spl
./configure
make deb
dpkg -i *.deb
modprobe spl
</pre>
 
*get, make and install zfs
 
<pre>
cd /usr/src
git clone https://github.com/zfsonlinux/zfs.git
cd zfs
./configure
make deb
dpkg -i *.deb
</pre>
 
 
 
* make sure the module can load:
 
modprobe zfs
 
 
 
* installing the debs does not do this, so enable zfs at boot yourself:
 
update-rc.d zfs defaults
 
 
 
*'''note: the deb files built above can be used to install zfs on another system'''.
 
 
 
*if there is a pre-existing zfs pool then simply import it to recover it. This can happen if you re-install Proxmox and have zfs disks from before.

zpool import
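
*a short sketch of the recovery steps (the pool name "tank" here is only an example):
<pre>
zpool import            # with no arguments, lists pools that are available for import
zpool import -f tank    # import a listed pool by name; -f may be needed if it was not exported cleanly
</pre>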
 
 
 
 
 
*If there is a need or want, I could set up a repository with pre-built debs. Just send an email.
 
 
 
=Creating Pools and Filesystems=
 
'''use raidz2 or mirror in main data systems.'''
 
 
 
==Create a raidz pool named “tank”==
 
zpool create  -f -o ashift=12 tank  raidz  /dev/sda  /dev/sdb  /dev/sdc  /dev/sdd  /dev/sde  /dev/sdf  /dev/sdg  /dev/sdh
 
 
 
*note: this happened to 2 or 3 disks the first time. These were disks from a 3ware RAID.
 
<pre>
zpool create  -f -o ashift=12 tank  raidz2  /dev/sda  /dev/sdb  /dev/sdc  /dev/sdd  /dev/sde  /dev/sdf  /dev/sdg  /dev/sdh
invalid vdev specification
the following errors must be manually repaired:
cannot stat /dev/sdg2: No such file or directory
</pre>
 
I had to use fdisk to add partitions 1, 2 etc. to those disks. I forget if I left the partitions there or not, but they had to be added to get this fixed.
 
<blockquote>
 
'''''NOTE''': This is quite risky if you intend to deploy ZFS on a server. Any changes to the operating system's devices layout will break your pool. Don't create a pool using device names like ''<code>/dev/sda</code>''. For persistence, use devices under ''<code>/dev/disk/by-id</code>''.'' - [[User:Bryan Baldwin|Bryan Baldwin]]
 
</blockquote>
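
*example of the same kind of pool created with persistent /dev/disk/by-id paths (the device IDs below are placeholders - use the ones listed on your own system):
<pre>
ls -l /dev/disk/by-id/
zpool create -f -o ashift=12 tank raidz2 \
  /dev/disk/by-id/scsi-DISK_ID_1 \
  /dev/disk/by-id/scsi-DISK_ID_2 \
  /dev/disk/by-id/scsi-DISK_ID_3 \
  /dev/disk/by-id/scsi-DISK_ID_4
</pre>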
 
 
 
==Create directories and mount points==
 
<pre>
zfs create tank/data
zfs set mountpoint=/data tank/data
</pre>
 
 
 
*mount all
 
zfs mount -a
 
 
 
==destroy==
 
to start over
 
zpool destroy zfs-pool
 
 
 
==Setting Properties==
 
*Automatically NFS-export all home directories (I do not know how to use ZFS NFS sharing yet, or whether it works yet on Linux).
 
zfs set sharenfs=rw tank/home
 
*check sharenfs 
 
<pre>
zfs get sharenfs tank/home
NAME       PROPERTY  VALUE  SOURCE
tank/home  sharenfs  rw     local
</pre>
 
*Turn on compression for everything in the pool
 
zfs set compression=on tank
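
*to check later how well the data compresses, you can look at the compressratio property (sketch):
zfs get compressratio tank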
 
 
 
== kvm tuning==
 
see the thread on the Proxmox forum, per user Nemesiz:
 
*pool:
 
zfs set primarycache=all tank
 
*kvm config:
 
* change cache to Write Back
 
:You can do it using web GUI or manually. Example:
 
ide0: data_zfs:100/vm-100-disk-1.raw,cache=writeback
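
:the same setting can also be applied from the command line (a sketch; VM 100 and the volume name are taken from the example above):
qm set 100 --ide0 data_zfs:100/vm-100-disk-1.raw,cache=writeback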
 
if the cache mode is not set to writeback, this error occurred:
<pre>
qm start 4016
kvm: -drive file=/data/pve-storage/images/4016/vm-4016-disk-1.raw,if=none,id=drive-virtio1,aio=native,cache=none: could not open disk image /data/pve-storage/images/4016/vm-4016-disk-1.raw: Invalid argument
</pre>
 
== Install on a high performance system==
 
 
 
As of 2013, high performance servers have 16-64 cores, 256GB-1TB RAM and potentially many 2.5" disks and/or a PCIe based SSD with half a million IOPS. High performance systems benefit from a number of custom settings; for example, enabling compression typically improves performance.
 
 
 
* If you have a good number of disks, keep things organized by using aliases. Edit /etc/zfs/vdev_id.conf to prepare aliases for the disk devices found in /dev/disk/by-id/:
 
# run 'udevadm trigger' after updating this file
 
alias a0        scsi-36848f690e856b10018cdf39854055206
 
alias b0        scsi-36848f690e856b10018cdf3ce573fdeb6
 
alias a1        scsi-36848f690e856b10018cdf40f5b277cbc
 
alias b1        scsi-36848f690e856b10018cdf43a5db1b99b
 
alias a2        scsi-36848f690e856b10018cdf4575f652ad0
 
alias b2        scsi-36848f690e856b10018cdf47761587cec
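
* after editing the file, trigger udev and check that the aliases showed up (/dev/disk/by-vdev is where vdev_id normally creates them):
udevadm trigger
ls -l /dev/disk/by-vdev/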
 
 
 
* Use flash for caching/logs. If you have only one SSD, use cfdisk to create a small partition for the ZIL (ZFS intent log) and a larger one for the L2ARC (ZFS read cache on disk). Make sure that the ZIL is on the first partition. In our case we have an Express Flash PCIe SSD with 175GB capacity and set up a ZIL of 25GB and an L2ARC cache partition of 150GB.
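
* a non-interactive sketch of that partitioning with parted instead of cfdisk (assuming the SSD shows up as /dev/rssda, as in the pool layout below):
<pre>
parted -s /dev/rssda mklabel gpt
parted -s /dev/rssda mkpart zil 1MiB 25GiB       # first partition -> rssda1, used as ZIL
parted -s /dev/rssda mkpart l2arc 25GiB 100%     # second partition -> rssda2, used as L2ARC
</pre>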
 
 
 
* edit /etc/modprobe.d/zfs.conf to apply several tuning options for high performance servers:
 
 
 
# ZFS tuning for a proxmox machine that reserves 64GB for ZFS
 
#
 
# Don't let ZFS use less than 4GB and more than 64GB
 
options zfs zfs_arc_min=4294967296
 
options zfs zfs_arc_max=68719476736
 
#
 
# disabling prefetch is no longer required
 
options zfs l2arc_noprefetch=0
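
* the modprobe options take effect when the zfs module is loaded; on a running system the ARC limits can also be changed on the fly (sketch, same values as above):
<pre>
echo 68719476736 > /sys/module/zfs/parameters/zfs_arc_max
echo 4294967296  > /sys/module/zfs/parameters/zfs_arc_min
grep -E '^c_(min|max)' /proc/spl/kstat/zfs/arcstats
</pre>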
 
 
 
* create a zpool of striped mirrors (equivalent to RAID10) with log device and cache and always enable compression:
 
 
 
zpool create -O compression=on -f tank mirror a0 b0 mirror a1 b1 mirror a2 b2 log /dev/rssda1 cache /dev/rssda2

* check the status of the newly created pool:
 
 
 
<pre>
root@proxmox:/# zpool status
  pool: tank
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            a0      ONLINE       0     0     0
            b0      ONLINE       0     0     0
          mirror-1  ONLINE       0     0     0
            a1      ONLINE       0     0     0
            b1      ONLINE       0     0     0
          mirror-2  ONLINE       0     0     0
            a2      ONLINE       0     0     0
            b2      ONLINE       0     0     0
        logs
          rssda1    ONLINE       0     0     0
        cache
          rssda2    ONLINE       0     0     0

errors: No known data errors
</pre>
 
 
 
Using PVE 2.3 on a 2013 high performance system with ZFS you can install Windows Server 2012 Datacenter Edition with GUI in just under 4 minutes.
 
 
 
=zfs links and docs=
 
*http://zfsonlinux.org/faq.html
 
 
 
*http://hub.opensolaris.org/bin/download/Community+Group+zfs/docs/zfslast.pdf
 
 
 
and this has some very important information to know before implementing zfs on a production system:
 
*http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
 
 
 
check these well written manual pages:
 
man zfs
 
 
man zpool
 
 
 
= How to install ZFS-fuse under Proxmox =
 
 
 
(Tested with Kernel 2.6.32)
 
 
 
  apt-get install build-essential libaio-dev libattr1-dev libacl1-dev libz-dev libfuse-dev libfuse2 scons libssl-dev
 
  wget -nd http://zfs-fuse.net/releases/0.6.0/zfs-fuse-0.6.0.tar.bz2
 
  tar jxvf zfs-fuse-0.6.0.tar.bz2
 
  cd zfs-fuse-0.6.0
 
  cd src
 
  scons
 
  scons install
 
 
 
'''Note:''' zfs-fuse-0.6.0 is now (Dec 2012) deprecated. Use [http://zfs-fuse.net/releases/0.7.0/zfs-fuse-0.7.0.tar.bz2 zfs-fuse-0.7.0.tar.bz2] instead.
 
 
 
Fire up ZFS-fuse daemon
 
  /usr/local/sbin/zfs-fuse
 
 
 
Create Zpool
 
  zpool create -m /var/lib/vz/images2 images2 /dev/sdb
 
 
 
Create ZFS
 
  zfs create images2/109
 
 
 
ZFS clone a vm 109 to vm 110
 
  zfs snapshot images2/109@master
 
  zfs clone images2/109@master images2/110
 
 
 
= Debian GNU/kFreeBSD =
 
'''As of 2012-02 the squeeze version does not have NFS, so AFAIK it can not be used for Proxmox storage.''' When Debian GNU/kFreeBSD has NFS support I'd use it for storage of ISOs and backups to start. In the meantime it works as a Samba server.

zfs works much better using Debian GNU/kFreeBSD than with fuse.
 
 
 
after installing the system to kvm or hardware:
 
 
 
*add some disks. In KVM I was only able to get IDE to work.
 
 
 
*the scsi disks look like:
 
<pre>
ls /dev/ad*
/dev/ad10  /dev/ad10s1  /dev/ad10s2  /dev/ad11  /dev/ad12  /dev/ad13
</pre>
 
 
 
* per http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide - if I read it right - use the entire disk, not slices, for zfs (I assume slices = partitions).
 
 
 
*if the disks have partition tables, then clear them - just not the operating system disk! I used fdisk to clear the tables. This may not be necessary; if it is not, then please delete this part.
 
<pre>
fdisk /dev/ad11
o
w
</pre>
 
then same for ad12 and ad13
 
 
 
 
*install this
 
aptitude install zfsutils
 
 
 
*make a pool
 
zpool create -f  zfs-pool /dev/ad11 /dev/ad12  /dev/ad13
 
 
 
df | grep zfs-pool
 
zfs-pool      zfs    1.4T  18K  1.4T  1% /zfs-pool
 
 
 
*http://wiki.debian.org/Debian_GNU/kFreeBSD
 

Latest revision as of 08:08, 6 December 2019

Storage pool type: zfspool

This backend allows you to access local ZFS pools (or ZFS file systems inside such pools).

Configuration

The backend supports the common storage properties content, nodes, disable, and the following ZFS specific properties:

pool

Select the ZFS pool/filesystem. All allocations are done within that pool.

blocksize

Set ZFS blocksize parameter.

sparse

Use ZFS thin-provisioning. A sparse volume is a volume whose reservation is not equal to the volume size.

mountpoint

The mount point of the ZFS pool/filesystem. Changing this does not affect the mountpoint property of the dataset seen by zfs. Defaults to /<pool>.

Configuration Example (/etc/pve/storage.cfg)
zfspool: vmdata
        pool tank/vmdata
        content rootdir,images
        sparse

File naming conventions

The backend uses the following naming scheme for VM images:

vm-<VMID>-<NAME>      // normal VM images
base-<VMID>-<NAME>    // template VM image (read-only)
subvol-<VMID>-<NAME>  // subvolumes (ZFS filesystem for containers)
<VMID>

This specifies the owner VM.

<NAME>

This can be an arbitrary name (ascii) without white space. The backend uses disk[N] as default, where [N] is replaced by an integer to make the name unique.

Storage Features

ZFS is probably the most advanced storage type regarding snapshot and cloning. The backend uses ZFS datasets for both VM images (format raw) and container data (format subvol). ZFS properties are inherited from the parent dataset, so you can simply set defaults on the parent dataset.

Table 1. Storage features for backend zfs
Content types   Image formats   Shared   Snapshots   Clones
images rootdir  raw subvol      no       yes         yes
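
A small illustration of the snapshot and clone support described above (VMID 100 and the dataset names are examples; the actual names depend on your pool and VM):

# zfs list -r tank/vmdata
# qm snapshot 100 before_upgrade
# zfs list -t snapshot -r tank/vmdata

A snapshot taken through Proxmox VE shows up as an ordinary ZFS snapshot on the underlying dataset.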

Examples

It is recommended to create an extra ZFS file system to store your VM images:

# zfs create tank/vmdata

To enable compression on that newly allocated file system:

# zfs set compression=on tank/vmdata

You can get a list of available ZFS filesystems with:

# pvesm zfsscan