Storage: ZFS
Revision as of 03:16, 28 February 2012
zfs notes
Native ZFS for Linux on Proxmox 2.0
- 2012-02-25. Check http://zfsonlinux.org/ and see its Documentation and Community Resources sections for more information and help.
I had tried 3 other operating systems, including Debian/kFreeBSD, and on each was able to recover all the ZFS file systems from previous installs. Even with disk name changes (BSD uses ad11, Linux sdb) no data was lost. Today I set up zfsonlinux on 2 Proxmox 2.0 systems using the following steps.
- install these packages
aptitude install build-essential gawk alien fakeroot zlib1g-dev uuid uuid-dev libssl-dev parted pve-headers-$(uname -r)
- get, build and install spl
cd /usr/src
git clone https://github.com/zfsonlinux/spl.git
cd spl
./configure
make deb
dpkg -i *.deb
modprobe spl
- get, build and install zfs
cd /usr/src
git clone https://github.com/zfsonlinux/zfs.git
cd zfs
./configure
make deb
dpkg -i *.deb
modprobe zfs
If you do not have any ZFS pools created yet, then add these modules to /etc/modules:
spl
zfs
Reboot to make sure the modules load at boot. I found that on one system with limited memory (compared to what it had been running) the zfs module would not load.
- Note: the .deb files built above can be used to install ZFS on another system.
Creating Pools and Filesystems
Use raidz2 or mirror for main data systems.
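As a sketch of those two layouts (pool name and device paths are hypothetical), the commands below use a small dry-run wrapper so they can be inspected before touching real disks:

```shell
#!/bin/sh
# Dry-run sketch: print each zpool command instead of executing it.
# Drop the leading "run" (and substitute real device names) to run for real.
run() { echo "$@"; }

# Two-way mirror: any one disk in the pair can fail.
run zpool create tank mirror /dev/sdb /dev/sdc

# raidz2: any two disks can fail; needs at least four disks.
# ashift=12 aligns to 4K sectors on modern drives.
run zpool create -o ashift=12 tank raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde
```

The wrapper only echoes; nothing here modifies any disk.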
Create a simple pool named “tank”
zpool create tank /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf
- Note: the following error happened on 2 or 3 disks the first time; these were disks from a 3ware RAID.
zpool create -f -o ashift=12 tank raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh
invalid vdev specification
the following errors must be manually repaired:
cannot stat /dev/sdg2: No such file or directory
I had to use fdisk to add partitions 1, 2, etc. to those disks. I forget whether I left the partitions in place afterwards, but adding them fixed the error.
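The underlying problem is usually a stale label or partition signature on the disk. As a sketch of the idea (demonstrated on a scratch file rather than a real disk), zeroing the first sector wipes a stale MBR/partition table:

```shell
#!/bin/sh
# Sketch: wipe a stale partition signature by zeroing the first 512 bytes.
# Demonstrated on a temp file; on a real disk the target would be e.g. /dev/sdg
# (double-check the device name before running against real hardware!).
IMG=$(mktemp)
printf 'stale-partition-table-junk' > "$IMG"

# Zero sector 0, where the MBR and partition table live:
dd if=/dev/zero of="$IMG" bs=512 count=1 conv=notrunc 2>/dev/null

# Show that the first bytes are now zero:
head -c 4 "$IMG" | od -An -tx1
rm -f "$IMG"
```

On a real disk this is destructive to the partition table, so only do it on disks you intend to hand entirely to ZFS.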
Create filesystems, and mount points
- As of 2012-02-27, KVM would not start when the disk image was placed directly on a ZFS filesystem.
qm start 3016
kvm: -drive file=/data/pve-storage/images/3016/vm-3016-disk-1.raw,if=none,id=drive-virtio0,aio=native,cache=none: could not open disk image /data/pve-storage/images/3016/vm-3016-disk-1.raw: Invalid argument
start failed: command '/usr/bin/kvm -id 3016 -chardev 'socket,id=monitor,path=/var/run/qemu-server/3016.mon,server,nowait' -mon 'chardev=monitor,mode=readline' -vnc unix:/var/run/qemu-server/3016.vnc,x509,password -pidfile /var/run/qemu-server/3016.pid -daemonize -usbdevice tablet -name fbc16-kvm -smp 'sockets=1,cores=2' -nodefaults -boot 'menu=on' -vga cirrus -k en-us -drive 'if=none,id=drive-ide2,media=cdrom,aio=native' -device 'ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=200' -drive 'file=/data/pve-storage/images/3016/vm-3016-disk-1.raw,if=none,id=drive-virtio0,aio=native,cache=none' -device 'virtio-blk-pci,drive=drive-virtio0,id=virtio0,bus=pci.0,addr=0xa,bootindex=100' -m 1024 -netdev 'type=tap,id=net0,ifname=tap3016i0,script=/var/lib/qemu-server/pve-bridge,vhost=on' -device 'virtio-net-pci,mac=BA:42:BC:AC:D8:4B,netdev=net0,bus=pci.0,addr=0x12,id=net0,bootindex=300'' failed: exit code 1
Using ext2 on a zvol worked; see http://zfsonlinux.org/example-zvol.html.
zfs create -V 100G tank/fish
sfdisk did not work; try parted instead, which will create the partition at offset 16 by default, which mkfs.ext2 should use without complaint. For example:
parted /dev/tank/fish
(parted) mkpartfs primary ext2 0 -1
(parted) quit
mkdir -p /mnt/tank/fish-part1
mount /dev/zvol/tank/fish-part1 /mnt/tank/fish-part1
Then use /mnt/tank/fish-part1 for image storage in PVE.
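To get that filesystem mounted again after a reboot, one option (an assumption on my part, not from the original notes) is an /etc/fstab entry; nofail keeps boot from hanging if the pool has not imported yet:

```
# /etc/fstab (excerpt) -- hypothetical entry for the zvol created above
/dev/zvol/tank/fish-part1  /mnt/tank/fish-part1  ext2  defaults,nofail  0  0
```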
- for pve backups and templates:
zfs create tank/data
zfs set mountpoint=/data tank/data
zfs create tank/data/pve-storage
results in
df /data/pve-storage
Filesystem             Type  Size  Used  Avail  Use%  Mounted on
tank/data/pve-storage  zfs   2.7T     0   2.7T    0%  /data/pve-storage
Add more space to the pool
zpool add tank /dev/sdh /dev/sdi
destroy
To start over:
zpool destroy zfs-pool
Setting Properties
- Automatically NFS-export all home directories (I do not know how to use ZFS NFS yet, or whether it works yet on Linux).
zfs set sharenfs=rw tank/home
- check sharenfs
zfs get sharenfs tank/home
NAME       PROPERTY  VALUE  SOURCE
tank/home  sharenfs  rw     local
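If sharenfs turns out not to work on Linux yet, a fallback sketch (my assumption, not from these notes) is to export the dataset's mountpoint through the ordinary kernel NFS server in /etc/exports; the subnet is an example:

```
# /etc/exports (excerpt) -- plain kernel-NFS fallback for the tank/home mountpoint
/tank/home  192.168.1.0/24(rw,no_subtree_check)
```

Run exportfs -ra after editing to apply the change.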
- Turn on compression for everything in the pool
zfs set compression=on tank
zfs links and docs
These links have some very important information to know before implementing ZFS on a production system.
Also check these well-written manual pages:
man zfs
man zpool
How to install ZFS-fuse under Proxmox
(Tested with Kernel 2.6.32)
apt-get install build-essential libaio-dev libattr1-dev libacl1-dev libz-dev libfuse-dev libfuse2 scons libssl-dev
wget -nd http://zfs-fuse.net/releases/0.6.0/zfs-fuse-0.6.0.tar.bz2
tar jxvf zfs-fuse-0.6.0.tar.bz2
cd zfs-fuse-0.6.0/src
scons
scons install
Fire up ZFS-fuse daemon
/usr/local/sbin/zfs-fuse
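zfs-fuse built this way has no init script, so (as an assumption, not from the original notes) one simple way to start the daemon at boot is to call it from /etc/rc.local before the final exit line:

```
# /etc/rc.local (excerpt) -- start the zfs-fuse daemon at boot
/usr/local/sbin/zfs-fuse
exit 0
```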
Create Zpool
zpool create -m /var/lib/vz/images2 images2 /dev/sdb
Create ZFS
zfs create images2/109
ZFS clone a vm 109 to vm 110
zfs snapshot images2/109@master
zfs clone images2/109@master images2/110
Debian GNU/kFreeBSD
As of 2012-02 the Squeeze version does not have NFS, so AFAIK it can not be used for Proxmox storage. When Debian/kFreeBSD gets NFS support I would use it for storage of ISOs and backups to start. In the meantime it works as a Samba server.
ZFS works much better under Debian GNU/kFreeBSD than under FUSE.
After installing the system to KVM or hardware:
- Add some disks. In KVM I was only able to get IDE to work.
- the disks then look like:
ls /dev/ad*
/dev/ad10  /dev/ad10s1  /dev/ad10s2  /dev/ad11  /dev/ad12  /dev/ad13
- Per http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide (if I read it right), use entire disks, not slices, for ZFS (I assume slices = partitions).
- If the disks have partition tables, clear them first (just not on the operating system disk!). I used fdisk to clear the tables. This may not be necessary; if it is not, please delete this part.
fdisk /dev/ad11
o
w
Then do the same for ad12 and ad13.
- install this
aptitude install zfsutils
- make a pool
zpool create -f zfs-pool /dev/ad11 /dev/ad12 /dev/ad13
df | grep zfs-pool
zfs-pool  zfs  1.4T  18K  1.4T  1%  /zfs-pool