Storage: ZFS
Revision as of 20:30, 25 February 2012

zfs notes

Native ZFS for Linux on Proxmox 2.0

I had tried 3 other operating systems, including Debian/kFreeBSD, and with each one was able to recover all the ZFS file systems from previous installs. Even with disk name changes [BSD uses ad11, Linux sdb] no data was lost. Today I set up zfsonlinux using the following steps on 2 Proxmox 2.0 systems

  • install these build dependencies
aptitude install build-essential gawk alien fakeroot zlib1g-dev uuid uuid-dev libssl-dev parted pve-headers-$(uname -r)
  • get, make and install spl
cd /usr/src
git clone https://github.com/zfsonlinux/spl.git
cd spl
./configure
make deb
sudo dpkg -i *.deb
modprobe spl
  • get, make and install zfs
cd /usr/src
git clone https://github.com/zfsonlinux/zfs.git
cd zfs
./configure
make deb
dpkg -i *.deb
modprobe zfs
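Once both sets of packages are installed, a quick sanity check (not part of the original steps) confirms the modules built against the running kernel and actually loaded:

```shell
# Confirm the spl and zfs modules are known to the running kernel
modinfo spl | head -n 2
modinfo zfs | head -n 2

# Confirm both modules are currently loaded
lsmod | grep -E '^(spl|zfs) '
```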

If you do not have any ZFS pools created yet, then add these lines to /etc/modules:

spl
zfs

Reboot to make sure the modules load at boot. I found on one system with limited memory [compared to what it had been running] that the zfs module would not load.
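The two module lines can also be added from the shell; this is just a sketch of the same edit, done as root and written so that repeated runs do not duplicate entries:

```shell
# Append spl and zfs to /etc/modules unless already listed
for m in spl zfs; do
    grep -qxF "$m" /etc/modules || echo "$m" >> /etc/modules
done
```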

Creating Pools and Filesystems

Use raidz2 or mirror on main data systems.

Create a simple pool named “tank”

  • these were extra disks of varying sizes we had around. I cleared the partition tables first.
zpool create  tank   /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf
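After creating the pool it is worth checking its health and layout. With no raidz2 or mirror keyword, the five disks above are simply striped, so a single disk failure loses the whole pool:

```shell
# Show pool layout and health, then capacity
zpool status tank
zpool list tank
```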

Create filesystems, and mount points

  • for pve backups and templates:
zfs create tank/data
zfs set mountpoint=/data tank/data
zfs create tank/data/pve-storage

results in

df -T /data/pve-storage
Filesystem    Type    Size  Used Avail Use% Mounted on
tank/data/pve-storage
               zfs    2.7T     0  2.7T   0% /data/pve-storage
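The whole dataset tree and its mountpoints can be listed with zfs list; tank/data/pve-storage inherits its mountpoint from the mountpoint=/data set on tank/data:

```shell
# List all datasets in the pool with their mountpoints
zfs list -r -o name,used,avail,mountpoint tank
```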

Add more space to the pool

  • note: c4d0 and c5d0 are Solaris-style disk names; on Linux use the /dev/sdX names
zpool add tank mirror c4d0 c5d0

Destroy a pool

To start over:

zpool destroy zfs-pool

Setting Properties

  • Automatically NFS-export all home directories [I do not know how to use ZFS NFS sharing yet, or whether it works yet on Linux...].
zfs set sharenfs=rw tank/home
  • check sharenfs
zfs get sharenfs tank/home
NAME     PROPERTY  VALUE     SOURCE
tank/home  sharenfs  rw        local
  • Turn on compression for everything in the pool
zfs set compression=on tank
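Properties set at the pool root are inherited by every child dataset; a recursive get shows each dataset's value and where it came from:

```shell
# Verify compression is inherited throughout the pool
zfs get -r -o name,value,source compression tank
```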

zfs links and docs

There is a lot of well-written information on ZFS. Here are some links:

Check this first

and this has some very important information to know before implementing ZFS on a production system.

Check these well-written manual pages:

man zfs

man zpool

How to install ZFS-fuse under Proxmox

(Tested with Kernel 2.6.32)

 apt-get install build-essential libaio-dev libattr1-dev libacl1-dev libz-dev libfuse-dev libfuse2 scons libssl-dev
 wget -nd http://zfs-fuse.net/releases/0.6.0/zfs-fuse-0.6.0.tar.bz2
 tar jxvf zfs-fuse-0.6.0.tar.bz2
 cd zfs-fuse-0.6.0
 cd src
 scons
 scons install

Fire up ZFS-fuse daemon

 /usr/local/sbin/zfs-fuse

Create Zpool

 zpool create -m /var/lib/vz/images2 images2 /dev/sdb

Create ZFS

 zfs create images2/109

ZFS clone a vm 109 to vm 110

 zfs snapshot images2/109@master
 zfs clone images2/109@master images2/110
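Before pointing VM 110 at the new dataset, the snapshot and clone can be inspected; the clone initially uses almost no extra space because it shares its blocks with the snapshot it came from:

```shell
# List datasets and snapshots, and show which snapshot the clone is based on
zfs list -t all -r images2
zfs get origin images2/110
```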

Debian GNU/kFreeBSD

As of 2012-02 the Squeeze version does not have NFS, so AFAIK it can not be used for Proxmox storage. When Debian/kFreeBSD has NFS support, I'd use this for storage of ISOs and backups to start. In the meantime it works as a Samba server.

ZFS works much better using Debian GNU/kFreeBSD than with ZFS-fuse.

After installing the system to KVM or hardware:

  • add some disks. In KVM I was only able to get IDE to work.
  • the disks then look like:
ls /dev/ad*
/dev/ad10  /dev/ad10s1  /dev/ad10s2  /dev/ad11  /dev/ad12  /dev/ad13
  • if the disks have partition tables, then clear them first. Just not the operating system disk! I used fdisk to clear the tables. This may not be necessary; if it is not, then please delete this part.
fdisk /dev/ad11
o
w

then do the same for ad12 and ad13


  • install this
aptitude install zfsutils
  • make a pool
zpool create -f  zfs-pool /dev/ad11 /dev/ad12  /dev/ad13
df | grep zfs-pool
zfs-pool       zfs    1.4T   18K  1.4T   1% /zfs-pool
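As on Linux, the new pool's state can be checked after creation; the three whole disks here form a plain stripe, so treat this as scratch or test storage rather than anything needing redundancy:

```shell
# Show per-disk layout and pool health
zpool status zfs-pool
```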