Storage: ZFS
Revision as of 19:42, 25 February 2012
==zfs notes==
===Native ZFS for Linux on Proxmox 2.0===
2012-02-25: check http://zfsonlinux.org/ ; it looks like a solid project.

I had tried 3 other operating systems, and on each I was able to recover all the ZFS filesystems from previous installs. Today I set up zfsonlinux on two Proxmox 2.0 systems using the following steps.
*Install the build dependencies:
<pre>
aptitude install build-essential gawk alien fakeroot zlib1g-dev uuid uuid-dev libssl-dev parted pve-headers-$(uname -r)
</pre>
*Get, make, and install spl:
<pre>
cd /usr/src
git clone https://github.com/zfsonlinux/spl.git
cd spl
./configure
make deb
sudo dpkg -i *.deb
modprobe spl
</pre>
*Get, make, and install zfs:
<pre>
cd /usr/src
git clone https://github.com/zfsonlinux/zfs.git
cd zfs
./configure
make deb
dpkg -i *.deb
modprobe zfs
</pre>
*If you do not have any ZFS pools made yet, add the modules to /etc/modules so they load at boot:
<pre>
spl
zfs
</pre>
*Reboot to make sure the modules load at boot. On one system with limited memory (compared to what it had been running), I found that the zfs module would not load.
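Whether both modules actually loaded can be checked against the lsmod listing. A minimal sketch; `check_zfs_modules` is a hypothetical helper, not part of the zfsonlinux packages:

```shell
# Hypothetical helper: verify that the spl and zfs modules appear in an
# lsmod-style listing (pass in "$(lsmod)" or "$(cat /proc/modules)").
check_zfs_modules() {
  listing=$1
  for mod in spl zfs; do
    if ! printf '%s\n' "$listing" | grep -q "^${mod}[[:space:]]"; then
      echo "missing: ${mod}"
      return 1
    fi
  done
  echo "zfs modules loaded"
}

# Against the live system:
# check_zfs_modules "$(lsmod)"
```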
*Then import a preexisting pool:
<pre>
zpool import -f tank
</pre>
*Check the result:
<pre>
df
Filesystem           Type    Size  Used Avail Use% Mounted on
/dev/mapper/pve-root ext3     92G  1.1G   86G   2% /
tmpfs                tmpfs   2.0G     0  2.0G   0% /lib/init/rw
udev                 tmpfs   2.0G  284K  2.0G   1% /dev
tmpfs                tmpfs   2.0G  3.1M  2.0G   1% /dev/shm
/dev/mapper/pve-data ext3    256G  188M  255G   1% /var/lib/vz
/dev/sda1            ext3    495M   34M  436M   8% /boot
/dev/fuse            fuse     30M   12K   30M   1% /etc/pve
tank/bkup            zfs     3.0T  237G  2.7T   8% /bkup
tank/home            zfs     2.7T     0  2.7T   0% /export/home
tank/ht              zfs     2.9T  216G  2.7T   8% /export/ht
tank/data/home-fbc   zfs     2.7T  3.5G  2.7T   1% /home-fbc
tank                 zfs     2.7T     0  2.7T   0% /tank
tank/data            zfs     2.7T     0  2.7T   0% /tank/data
tank/data/u          zfs     2.7T  801M  2.7T   1% /u
</pre>
===Create filesystems, and mount points===
<pre>
zfs create tank/home
zfs set mountpoint=/export/home tank/home
zfs create tank/bkup
zfs set mountpoint=/bkup tank/bkup
zfs create tank/ht
zfs set mountpoint=/export/ht tank/ht
</pre>
====Create home directories for several users====
*Note: automatically mounted at /export/home/{ahrens,bonwick,billm} thanks to inheritance
<pre>
# zfs create tank/home/ahrens
# zfs create tank/home/bonwick
# zfs create tank/home/billm
</pre>
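The inheritance note above can be sketched as a tiny helper: a child dataset that has no mountpoint of its own defaults to the parent's mountpoint plus the child's name. `inherited_mountpoint` is a hypothetical illustration, not a zfs command:

```shell
# Hypothetical illustration of ZFS mountpoint inheritance: the child's
# default mountpoint is <parent mountpoint>/<child name>.
inherited_mountpoint() {
  parent_mp=$1      # e.g. /export/home (set on tank/home)
  child_dataset=$2  # e.g. tank/home/ahrens
  echo "${parent_mp}/${child_dataset##*/}"
}

inherited_mountpoint /export/home tank/home/ahrens   # /export/home/ahrens
```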
==zfs links==
There is a lot of well-written information on ZFS; here are some links:
*check this first
*and this has some very important information to know before implementing ZFS on a production system.
==How to install ZFS-fuse under Proxmox==
(Tested with Kernel 2.6.32)
<pre>
apt-get install build-essential libaio-dev libattr1-dev libacl1-dev libz-dev libfuse-dev libfuse2 scons libssl-dev
wget -nd http://zfs-fuse.net/releases/0.6.0/zfs-fuse-0.6.0.tar.bz2
tar jxvf zfs-fuse-0.6.0.tar.bz2
cd zfs-fuse-0.6.0
cd src
scons
scons install
</pre>
===Fire up ZFS-fuse daemon===
<pre>
/usr/local/sbin/zfs-fuse
</pre>
===Create Zpool===
<pre>
zpool create -m /var/lib/vz/images2 images2 /dev/sdb
</pre>
===Create ZFS===
<pre>
zfs create images2/109
</pre>
===ZFS clone a vm 109 to vm 110===
<pre>
zfs snapshot images2/109@master
zfs clone images2/109@master images2/110
</pre>
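The two-step snapshot-and-clone above generalizes to any pair of VM ids. A dry-run sketch that only prints the commands, following the `images2/<vmid>` dataset layout used on this page (remove the echo to actually run them):

```shell
# Dry-run sketch: print the snapshot+clone commands for copying one VM
# dataset to another, using the images2/<vmid> layout from above.
clone_vm_dataset() {
  src_vmid=$1
  dst_vmid=$2
  echo "zfs snapshot images2/${src_vmid}@master"
  echo "zfs clone images2/${src_vmid}@master images2/${dst_vmid}"
}

clone_vm_dataset 109 110
```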
==Debian GNU/kFreeBSD==
As of 2012-02 the squeeze version does not have NFS, so as far as I know it cannot be used for Proxmox storage. When Debian GNU/kFreeBSD gets NFS support, I would use it for storage of ISOs and backups to start. In the meantime it works as a Samba server.

ZFS works much better using Debian GNU/kFreeBSD than fuse.
After installing the system to KVM or hardware:
*Add some disks. In KVM I was only able to get IDE to work.
*The disks show up like this:
<pre>
ls /dev/ad*
/dev/ad10  /dev/ad10s1  /dev/ad10s2  /dev/ad11  /dev/ad12  /dev/ad13
</pre>
*Per http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide (if I read it right), use entire disks, not slices, for ZFS (I assume slices = partitions).
*If the disks have partition tables, clear them first (just not on the operating system disk!). I used fdisk to clear the tables; this may not be necessary.
<pre>
fdisk /dev/ad11
o
w
</pre>
Then do the same for ad12 and ad13.
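The same table-clearing steps can be listed for all the data disks at once. A dry-run sketch that only prints what to do for each disk (double-check the device names before running real fdisk commands, since clearing a table is destructive):

```shell
# Dry-run sketch: list the fdisk invocation for each data disk to clear.
# In fdisk, 'o' creates a new empty DOS partition table and 'w' writes it.
list_fdisk_steps() {
  for d in "$@"; do
    echo "fdisk /dev/$d   # then: o, w"
  done
}

list_fdisk_steps ad11 ad12 ad13
```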
*Install the ZFS utilities:
<pre>
aptitude install zfsutils
</pre>
*Make a pool:
<pre>
zpool create -f zfs-pool /dev/ad11 /dev/ad12 /dev/ad13
</pre>
<pre>
df | grep zfs-pool
zfs-pool             zfs     1.4T   18K  1.4T   1% /zfs-pool
</pre>
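Listing bare disks like this creates a striped pool with no redundancy, so the usable size is roughly the sum of the member disks. A quick sanity check against the 1.4T shown above, assuming three disks of about 480 GiB each (the per-disk size is an assumption, not taken from this page):

```shell
# Rough striped-pool capacity: sum of the member disks (no parity, no mirror).
disks=3
per_disk_gib=480                       # assumed disk size, not from this page
total_gib=$((disks * per_disk_gib))
echo "${total_gib} GiB"                # roughly the 1.4T df reports above
```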