Storage: ZFS

=Native ZFS for Linux on Proxmox 2.0=
*2012-02-25: see http://zfsonlinux.org/ and its Documentation and Community Resources sections for more information and help.
I have tried three other operating systems, including Debian/kFreeBSD, and in each case was able to recover all ZFS file systems from previous installs. Even with disk name changes (BSD uses ad11, Linux sdb) no data was lost. Today I set up ZFS on Linux on two Proxmox 2.0 systems using the following steps.
*Install the build prerequisites:
 aptitude install build-essential gawk alien fakeroot zlib1g-dev uuid uuid-dev libssl-dev parted pve-headers-$(uname -r)
*Get, build, and install SPL:
<pre>
cd /usr/src
git clone https://github.com/zfsonlinux/spl.git
cd spl
./configure
make deb
dpkg -i *.deb
modprobe spl
</pre>
*Get, build, and install ZFS:
<pre>
cd /usr/src
git clone https://github.com/zfsonlinux/zfs.git
cd zfs
./configure
make deb
dpkg -i *.deb
</pre>
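A quick sanity check that the packages and the kernel module actually landed (a sketch; the exact package names depend on what "make deb" produced on your system):
<pre>
dpkg -l | grep -E 'spl|zfs'
modinfo zfs | grep -i version
</pre>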
*Make sure the module can load:
 modprobe zfs

*Installing the debs does not set the service to start at boot, so do this manually:
 update-rc.d zfs defaults

*'''Note: the deb files built above can be used to install ZFS on another system.'''

*If there is a pre-existing ZFS pool, simply import it to recover it; this can happen if you re-install Proxmox and still have ZFS disks from a previous install. See the sketch below.
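A minimal import sketch (the pool name tank is only an example):
<pre>
# show pools that exist on the attached disks but are not imported yet
zpool import

# import one of them by name; -f may be needed if the pool was last used by another system
zpool import -f tank

# verify
zpool status tank
</pre>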
*If there is a need or interest, I could set up a repository with pre-built debs. Just send an email.

=Creating Pools and Filesystems=
'''Use raidz2 or mirror for main data systems.'''

==Create a raidz pool named "tank"==
 zpool create -f -o ashift=12 tank raidz /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh
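To verify the layout and capacity after creation:
<pre>
zpool status tank
zpool list tank
</pre>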
*Note: the following error happened on 2 or 3 disks the first time; these were disks that had previously been in a 3ware RAID.
<pre>
zpool create  -f -o ashift=12 tank  raidz2  /dev/sda  /dev/sdb  /dev/sdc  /dev/sdd  /dev/sde  /dev/sdf  /dev/sdg  /dev/sdh
invalid vdev specification
the following errors must be manually repaired:
cannot stat /dev/sdg2: No such file or directory
</pre>
I had to use fdisk to add partitions 1, 2, etc. to those disks to get past this. I forget whether I left the partitions in place afterwards, but adding them was what fixed it.
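An alternative to adding dummy partitions is to wipe the leftover metadata before running zpool create. This is destructive to anything still on the disk, so treat it only as a sketch for disks you are sure are disposable:
<pre>
# zero the start of the disk, where old partition tables and many RAID headers live
# (some controllers also keep metadata at the end of the disk)
dd if=/dev/zero of=/dev/sdg bs=1M count=10

# give the disk a fresh, empty label
parted -s /dev/sdg mklabel msdos
</pre>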
==Create filesystems and mount points==

*For PVE backups and templates:
<pre>
zfs create tank/data
zfs set mountpoint=/data tank/data
</pre>

*Mount all ZFS filesystems:
 zfs mount -a
 
*As of 2012-02-27, KVM would not start when its disk image was placed directly on a ZFS filesystem:
<pre>
qm start 3016
kvm: -drive file=/data/pve-storage/images/3016/vm-3016-disk-1.raw,if=none,id=drive-virtio0,aio=native,cache=none: could not open disk image /data/pve-storage/images/3016/vm-3016-disk-1.raw: Invalid argument
start failed: command '/usr/bin/kvm -id 3016 -chardev 'socket,id=monitor,path=/var/run/qemu-server/3016.mon,server,nowait' -mon 'chardev=monitor,mode=readline' -vnc unix:/var/run/qemu-server/3016.vnc,x509,password -pidfile /var/run/qemu-server/3016.pid -daemonize -usbdevice tablet -name fbc16-kvm -smp 'sockets=1,cores=2' -nodefaults -boot 'menu=on' -vga cirrus -k en-us -drive 'if=none,id=drive-ide2,media=cdrom,aio=native' -device 'ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=200' -drive 'file=/data/pve-storage/images/3016/vm-3016-disk-1.raw,if=none,id=drive-virtio0,aio=native,cache=none' -device 'virtio-blk-pci,drive=drive-virtio0,id=virtio0,bus=pci.0,addr=0xa,bootindex=100' -m 1024 -netdev 'type=tap,id=net0,ifname=tap3016i0,script=/var/lib/qemu-server/pve-bridge,vhost=on' -device 'virtio-net-pci,mac=BA:42:BC:AC:D8:4B,netdev=net0,bus=pci.0,addr=0x12,id=net0,bootindex=300'' failed: exit code 1
</pre>
 
For KVM, ext2 on a ZFS zvol worked; see http://zfsonlinux.org/example-zvol.html . '''As of 2012-02 that page refers to sfdisk; use parted instead''' (search the bug reports for more information).
<pre>
# create a 100 GB zvol
zfs create -V 100G tank/fish

# partition it and create an ext2 filesystem on the first partition
parted /dev/tank/fish
mklabel msdos
mkpartfs primary ext2 0 -1
quit

# mount the new partition
mkdir -p /mnt/tank/fish-part1
mount /dev/zvol/tank/fish-part1 /mnt/tank/fish-part1

# /etc/fstab entry so the partition mounts at boot
/dev/zvol/tank/fish-part1 /mnt/tank/fish-part1 ext2 noatime 0 0
</pre>
Then use /mnt/tank/fish-part1 as directory storage for images in PVE; a hypothetical storage.cfg entry is sketched below.
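A hypothetical /etc/pve/storage.cfg directory-storage entry for it (the storage name zvol-ext2 is only an example):
<pre>
dir: zvol-ext2
        path /mnt/tank/fish-part1
        content images
</pre>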
 
==Destroy a pool==
To start over:
 zpool destroy zfs-pool
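If the goal is only to detach the pool while keeping its data (for example before moving the disks to another machine), export it instead of destroying it:
 zpool export zfs-pool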
 
==Setting Properties==
*Automatically NFS-export all home directories (I do not yet know how to use ZFS NFS sharing, or whether it works on Linux yet):
 zfs set sharenfs=rw tank/home
*Check the sharenfs property:
<pre>
zfs get sharenfs tank/home
NAME    PROPERTY  VALUE    SOURCE
tank/home  sharenfs  rw        local
</pre>
*Turn on compression for everything in the pool:
 zfs set compression=on tank
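Once some data has been written, the space actually saved can be checked via the compressratio property:
 zfs get compressratio tank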
 
=zfs links and docs=
*http://zfsonlinux.org/faq.html
 
*http://hub.opensolaris.org/bin/download/Community+Group+zfs/docs/zfslast.pdf
 
This guide has some very important information to know before deploying ZFS on a production system:
*http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
 
Also check these well-written manual pages:
 man zfs
 man zpool
 
= How to install ZFS-fuse under Proxmox =
 
(Tested with Kernel 2.6.32)
 
  apt-get install build-essential libaio-dev libattr1-dev libacl1-dev libz-dev libfuse-dev libfuse2 scons libssl-dev
  wget -nd http://zfs-fuse.net/releases/0.6.0/zfs-fuse-0.6.0.tar.bz2
  tar jxvf zfs-fuse-0.6.0.tar.bz2
  cd zfs-fuse-0.6.0
  cd src
  scons
  scons install
 
Fire up the ZFS-fuse daemon:
  /usr/local/sbin/zfs-fuse
 
Create a zpool named images2, mounted at /var/lib/vz/images2:
  zpool create -m /var/lib/vz/images2 images2 /dev/sdb
 
Create a ZFS filesystem for VM 109:
  zfs create images2/109
 
Clone VM 109 to VM 110 with ZFS:
  zfs snapshot images2/109@master
  zfs clone images2/109@master images2/110
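If the clone is meant to outlive the original, it can be promoted so that it no longer depends on the images2/109@master snapshot:
  zfs promote images2/110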
 
= Debian GNU/kFreeBSD =
'''As of 2012-02 the Squeeze version does not have NFS support, so as far as I know it cannot be used for Proxmox storage.''' Once Debian/kFreeBSD gets NFS support I would use it to store ISOs and backups, to start with. In the meantime it works as a Samba server.
 
ZFS works much better on Debian GNU/kFreeBSD than with FUSE.
 
After installing the system to KVM or to hardware:
 
*Add some disks. In KVM I was only able to get IDE disks to work.

*The disks show up like this:
<pre>
ls /dev/ad*
/dev/ad10  /dev/ad10s1  /dev/ad10s2  /dev/ad11  /dev/ad12  /dev/ad13
</pre>
 
*Per http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide (if I read it right), use entire disks rather than slices for ZFS (I assume slices = partitions).
 
*If the disks have partition tables, clear them (just not on the operating system disk!). I used fdisk to clear the tables. This may not be necessary; if it is not, please delete this part.
<pre>
fdisk /dev/ad11
o    # create a new, empty DOS partition table
w    # write the table to disk and exit
</pre>
Then do the same for ad12 and ad13.
 
*Install the ZFS utilities:
 aptitude install zfsutils
 
*Make a pool:
 zpool create -f zfs-pool /dev/ad11 /dev/ad12 /dev/ad13

 df | grep zfs-pool
 zfs-pool      zfs    1.4T  18K  1.4T  1% /zfs-pool
 
*http://wiki.debian.org/Debian_GNU/kFreeBSD

=Storage pool type: zfspool=

This backend allows you to access local ZFS pools (or ZFS file systems inside such pools).

==Configuration==

The backend supports the common storage properties content, nodes, disable, and the following ZFS specific properties:

;pool: Select the ZFS pool/filesystem. All allocations are done within that pool.
;blocksize: Set ZFS blocksize parameter.
;sparse: Use ZFS thin-provisioning. A sparse volume is a volume whose reservation is not equal to the volume size.
;mountpoint: The mount point of the ZFS pool/filesystem. Changing this does not affect the mountpoint property of the dataset seen by zfs. Defaults to /<pool>.

==Configuration Example (/etc/pve/storage.cfg)==
<pre>
zfspool: vmdata
        pool tank/vmdata
        content rootdir,images
        sparse
</pre>
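The same storage can also be added from the command line instead of editing the file by hand; a sketch using pvesm (check "pvesm help add" on your version for the exact option names):
 # pvesm add zfspool vmdata --pool tank/vmdata --content rootdir,images --sparse 1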

==File naming conventions==

The backend uses the following naming scheme for VM images:

<pre>
vm-<VMID>-<NAME>      // normal VM images
base-<VMID>-<NAME>    // template VM image (read-only)
subvol-<VMID>-<NAME>  // subvolumes (ZFS filesystem for containers)
</pre>
;<VMID>: This specifies the owner VM.
;<NAME>: This can be an arbitrary name (ascii) without white space. The backend uses disk[N] as default, where [N] is replaced by an integer to make the name unique.

==Storage Features==

ZFS is probably the most advanced storage type regarding snapshot and cloning. The backend uses ZFS datasets for both VM images (format raw) and container data (format subvol). ZFS properties are inherited from the parent dataset, so you can simply set defaults on the parent dataset.

{| class="wikitable"
|+ Table 1. Storage features for backend zfs
! Content types !! Image formats !! Shared !! Snapshots !! Clones
|-
| images, rootdir || raw, subvol || no || yes || yes
|}

==Examples==

It is recommended to create an extra ZFS file system to store your VM images:

 # zfs create tank/vmdata

To enable compression on that newly allocated file system:

 # zfs set compression=on tank/vmdata
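Since properties are inherited from the parent dataset, a recursive get confirms what each guest volume actually ends up with; the SOURCE column shows local for tank/vmdata and inherited from tank/vmdata for the volumes beneath it:
 # zfs get -r compression tank/vmdata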

You can get a list of available ZFS filesystems with:

 # pvesm zfsscan