=zfs notes=

==Native ZFS for Linux on Proxmox 2.0==

*2012-02-25. Check http://zfsonlinux.org/ and see the Documentation and Community Resources there for more information and help.

I had tried 3 other operating systems, including Debian GNU/kFreeBSD, and in each case I was able to recover all the ZFS file systems from previous installs. Even with disk name changes (BSD uses ad11, Linux sdb) no data was lost. Today I set up zfsonlinux on 2 Proxmox 2.0 systems using the steps below.

*Install the build dependencies:
 aptitude install build-essential gawk alien fakeroot zlib1g-dev uuid uuid-dev libssl-dev parted pve-headers-$(uname -r)

*Get, make, and install SPL:
<pre>
cd /usr/src
git clone https://github.com/zfsonlinux/spl.git
cd spl
./configure
make deb
dpkg -i *.deb
modprobe spl
</pre>

*Get, make, and install ZFS:
<pre>
cd /usr/src
git clone https://github.com/zfsonlinux/zfs.git
cd zfs
./configure
make deb
dpkg -i *.deb
modprobe zfs
</pre>

If you do not have any ZFS pools created yet, add these lines to /etc/modules so the modules are loaded at boot:
 spl
 zfs

Reboot to make sure the modules load at boot. I found on one system with limited memory (compared to what had been running on it) that the zfs module would not load.
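
After the reboot you can confirm that both modules actually loaded:
<pre>
lsmod | grep -E 'spl|zfs'
</pre>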

*'''Note: the .deb files built above can be used to install ZFS on another system.'''
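
For example, to reuse the packages on a second node, something along these lines should work (a sketch; the hostname pve2 is an example, and the target must run the same pve kernel as the build host):
<pre>
# copy the packages built in the spl and zfs source trees to the second node
ssh root@pve2 mkdir -p /root/zfs-debs
scp /usr/src/spl/*.deb /usr/src/zfs/*.deb root@pve2:/root/zfs-debs/
# install them there and load the module
ssh root@pve2 'dpkg -i /root/zfs-debs/*.deb && modprobe zfs'
</pre>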

=Creating Pools and Filesystems=

'''Use raidz2 or mirror for main data systems.'''
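
For example, a pool made of two striped mirrors could be created like this (a sketch; the device names are examples):
<pre>
zpool create -f -o ashift=12 tank mirror /dev/sda /dev/sdb mirror /dev/sdc /dev/sdd
</pre>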

==Create a raidz pool named “tank”==
 zpool create -f -o ashift=12 tank raidz /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh
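
You can check the resulting layout with:
<pre>
zpool status tank
</pre>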

*Note: the following error happened on 2 or 3 of the disks the first time; they had been part of a 3ware RAID.
<pre>
zpool create -f -o ashift=12 tank raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh
invalid vdev specification
the following errors must be manually repaired:
cannot stat /dev/sdg2: No such file or directory
</pre>
I had to use fdisk to add partitions 1, 2, etc. to those disks to get past the error. I forget whether I left the partitions in place afterwards.
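
An alternative that may avoid the fdisk step is to wipe the leftover RAID metadata from the start and end of each affected disk before creating the pool (untested here, a sketch; triple-check the device name before running dd):
<pre>
# zero the first and last 1 MiB of the disk, where RAID labels usually live
dd if=/dev/zero of=/dev/sdg bs=512 count=2048
dd if=/dev/zero of=/dev/sdg bs=512 count=2048 seek=$(( $(blockdev --getsz /dev/sdg) - 2048 ))
</pre>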

==Create filesystems and mount points==

*As of 2012-02-27, KVM would not start when the VM image was placed directly on a ZFS filesystem:
<pre>
qm start 3016
kvm: -drive file=/data/pve-storage/images/3016/vm-3016-disk-1.raw,if=none,id=drive-virtio0,aio=native,cache=none: could not open disk image /data/pve-storage/images/3016/vm-3016-disk-1.raw: Invalid argument
start failed: command '/usr/bin/kvm -id 3016 -chardev 'socket,id=monitor,path=/var/run/qemu-server/3016.mon,server,nowait' -mon 'chardev=monitor,mode=readline' -vnc unix:/var/run/qemu-server/3016.vnc,x509,password -pidfile /var/run/qemu-server/3016.pid -daemonize -usbdevice tablet -name fbc16-kvm -smp 'sockets=1,cores=2' -nodefaults -boot 'menu=on' -vga cirrus -k en-us -drive 'if=none,id=drive-ide2,media=cdrom,aio=native' -device 'ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=200' -drive 'file=/data/pve-storage/images/3016/vm-3016-disk-1.raw,if=none,id=drive-virtio0,aio=native,cache=none' -device 'virtio-blk-pci,drive=drive-virtio0,id=virtio0,bus=pci.0,addr=0xa,bootindex=100' -m 1024 -netdev 'type=tap,id=net0,ifname=tap3016i0,script=/var/lib/qemu-server/pve-bridge,vhost=on' -device 'virtio-net-pci,mac=BA:42:BC:AC:D8:4B,netdev=net0,bus=pci.0,addr=0x12,id=net0,bootindex=300'' failed: exit code 1
</pre>

Using ext2 on a ZFS zvol worked; see http://zfsonlinux.org/example-zvol.html.
<pre>
zfs create -V 100G tank/fish

# sfdisk did not work; use parted instead, which creates the partition
# at offset 16 by default, which mkfs.ext2 should use without complaint:
parted /dev/tank/fish
(parted) mkpartfs primary ext2 0 -1
(parted) quit

mkdir -p /mnt/tank/fish-part1
mount /dev/zvol/tank/fish-part1 /mnt/tank/fish-part1
</pre>
Then use /mnt/tank/fish-part1 for image storage in PVE.
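
To make the mount persist across reboots, an /etc/fstab entry along these lines should work (a sketch; it assumes the spl and zfs modules are loaded via /etc/modules early enough for the zvol device to exist at mount time):
<pre>
/dev/zvol/tank/fish-part1  /mnt/tank/fish-part1  ext2  defaults  0  0
</pre>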

*For PVE backups and templates:
<pre>
zfs create tank/data
zfs set mountpoint=/data tank/data
zfs create tank/data/pve-storage
</pre>

This results in:
<pre>
df -hT /data/pve-storage
Filesystem            Type  Size  Used Avail Use% Mounted on
tank/data/pve-storage
                      zfs   2.7T     0  2.7T   0% /data/pve-storage
</pre>

==Add more space to the pool==
 zpool add tank /dev/sdh /dev/sdi
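
Note that this adds the disks as plain, non-redundant top-level vdevs. To keep the pool's redundancy, add a whole mirror (or raidz) vdev instead, for example:
<pre>
zpool add tank mirror /dev/sdh /dev/sdi
</pre>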

==Destroy a pool==
To start over:
 zpool destroy zfs-pool

==Setting Properties==

*Automatically NFS-export all home directories (I do not know how to use ZFS NFS sharing yet, or whether it works on Linux yet):
 zfs set sharenfs=rw tank/home

*Check the sharenfs property:
<pre>
zfs get sharenfs tank/home
NAME       PROPERTY  VALUE  SOURCE
tank/home  sharenfs  rw     local
</pre>
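
If the NFS share does work, the export should show up with (assuming the NFS server utilities are installed):
<pre>
showmount -e localhost
</pre>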

*Turn on compression for everything in the pool:
 zfs set compression=on tank
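
Later you can check how effective compression has been:
<pre>
zfs get compressratio tank
</pre>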

=zfs links and docs=

*http://zfsonlinux.org/faq.html

*http://hub.opensolaris.org/bin/download/Community+Group+zfs/docs/zfslast.pdf

This one has some very important information to know before implementing ZFS on a production system:
*http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide

Also check these well-written manual pages:
 man zfs
 man zpool

= How to install ZFS-fuse under Proxmox =

(Tested with kernel 2.6.32)

 apt-get install build-essential libaio-dev libattr1-dev libacl1-dev libz-dev libfuse-dev libfuse2 scons libssl-dev
 wget -nd http://zfs-fuse.net/releases/0.6.0/zfs-fuse-0.6.0.tar.bz2
 tar jxvf zfs-fuse-0.6.0.tar.bz2
 cd zfs-fuse-0.6.0
 cd src
 scons
 scons install

Fire up the ZFS-fuse daemon:
 /usr/local/sbin/zfs-fuse

Create a zpool (mounted at /var/lib/vz/images2):
 zpool create -m /var/lib/vz/images2 images2 /dev/sdb

Create a ZFS filesystem:
 zfs create images2/109

Clone VM 109 to VM 110 with ZFS:
 zfs snapshot images2/109@master
 zfs clone images2/109@master images2/110
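
The clone only duplicates the disk data; VM 110 still needs its own Proxmox configuration. On PVE 2.0 something like this is a starting point (a sketch; the copied config must then be edited so the name, MAC address and disk path refer to the clone):
<pre>
cp /etc/pve/qemu-server/109.conf /etc/pve/qemu-server/110.conf
</pre>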

= Debian GNU/kFreeBSD =
'''As of 2012-02 the Squeeze version does not have NFS support, so AFAIK it can not be used for Proxmox storage.''' Once Debian GNU/kFreeBSD gets NFS support, I'd use it for storing ISOs and backups to start with. In the meantime it works as a Samba server.

ZFS works much better under Debian GNU/kFreeBSD than with FUSE.

After installing the system to KVM or hardware:

*Add some disks. In KVM I was only able to get IDE to work.

*The disks look like this:
<pre>
ls /dev/ad*
/dev/ad10  /dev/ad10s1  /dev/ad10s2  /dev/ad11  /dev/ad12  /dev/ad13
</pre>

*Per http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide (if I read it right), use entire disks, not slices, for ZFS (I assume slices = partitions).

*If the disks have partition tables, clear them (just not on the operating system disk!). I used fdisk to do it; in the session below, o creates a new empty partition table and w writes it to disk. This may not be necessary; if it is not, then please delete this part.
<pre>
fdisk /dev/ad11
o
w
</pre>
Then do the same for ad12 and ad13.

*Install the ZFS utilities:
 aptitude install zfsutils

*Make a pool:
 zpool create -f zfs-pool /dev/ad11 /dev/ad12 /dev/ad13

<pre>
df -hT | grep zfs-pool
zfs-pool              zfs   1.4T   18K  1.4T   1% /zfs-pool
</pre>

*http://wiki.debian.org/Debian_GNU/kFreeBSD