Iscsi/nappit
draft
- napp-it is a web (http) interface for setting up zfs and iscsi on a few different operating systems.
- this page is about setting up napp-it on omnios to serve iscsi for pve to use as storage.
- for mission critical data drbd, ceph, sheepdog, or others should be used so that the data is always written to multiple systems: unplug the napp-it hardware and the data is offline.
- per comments on the forum, omnios is a good choice for an iscsi nas. I plan to use it for 90% of VMs and for backups.
rough notes to be formatted and added
do not use without checking line by line
links
- http://www.napp-it.org/index_en.html
- http://www.napp-it.org/manuals/
- install & initial set up: http://www.napp-it.org/doc/downloads/napp-it.pdf
- tuning http://www.napp-it.org/manuals/tuning.html
- http://omnios.omniti.com/wiki.php/GeneralAdministration
- https://www.joyent.com/blog/documentation-for-smf
- https://forum.proxmox.com/threads/iscsi-san-presented-as-nfs-using-freenas.26679/
- HardForum: http://hardforum.com/showthread.php?t=1573272
- ServetheHome: https://forums.servethehome.com/index.php?forums/solaris-nexenta-openindiana-and-napp-it.26/
kvm notes
Forgot to mention: If you use thin provisioned Zvols you should choose scsi disks and virtio-scsi controller because comstar (the iscsi daemon in Omnios) supports the scsi unmap command. This means the trim command is honored by comstar and therefore trimmed blocks will be released to the pool from the Zvol.
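To actually get that benefit, the disk has to be attached with discard enabled and the guest has to send trims. A minimal sketch, assuming a Linux guest in VM 101 on the zfs-over-iscsi storage defined later on this page (the vm id and disk name are examples, not from the original notes):
# on the pve node: scsi disk on a virtio-scsi controller, with discard enabled
qm set 101 --scsihw virtio-scsi-pci --scsi0 iscsi-sys4:vm-101-disk-1,discard=on
# inside the Linux guest: trim mounted filesystems so freed blocks return to the zvol
fstrim -av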
hardware
cpu must be compatible. see https://illumos.org/hcl/
http://ptribble.blogspot.dk/2016/03/supermicro-illumos-compatible-server.html
Important - drives must be labeled. You need a way to know where a particular drive is physically located in case it needs to be swapped out. Make and print a table [ in office or libreoffice ] mapping drive bay to serial number as the system is put together. Trying to find the drive with a particular serial number after the system is in production means downtime.
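One way to collect the serial numbers for that table, a sketch (on omnios/illumos, iostat -En prints vendor, product and serial number for every disk):
# list every disk with its serial number, for the drive/bay table
iostat -En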
operating system install
http://www.napp-it.org/doc/downloads/napp-it.pdf
there are a few install options. I used this option:
- 4. OS Installation options
- 4.1 manual barebone installation from USB (use this per default)
- Download usb-dd file from http://omnios.omniti.com/wiki.php/Installation
after install
set up the network: from the pdf follow '5.2 Setup napp-it storage appliance with manual ip settings'
set up mtu 9000: see the 'Network mtu 9000' section under 'configure omnios' below [ move that info here ? ]
access web page
http://<IP>:81/cgi-bin/admin.pl
Info: First login: admin, no password; delete napp-it.cfg to get defaults
Please do not disable Javascript. For realtime monitoring you need a HTML5 websocket capable browser like a newer Firefox or Google Chrome. Logins are valid for current day on all appliances with the same password. It is suggested to use a secure SAN or management network.
zpools
1. Raid10 or striped mirror pool (2x2 disks should be sufficient) for I/O intensive servers like database servers.
2. raidz2 pool (the remaining 6 disks) for the rest
see zil [ write-log ] section at https://www.napp-it.org/manuals/concept_en.html
when in doubt use a write-log. see notes and tests at iscsi/tests or forum search.
napp-it refers to zil as 'write-log' << TBD check this
mirror
set up a 4-drive striped mirror. a cli sketch of the two steps follows below.
1- create the pool as a 2-drive mirror
2- extend the pool with a second 2-drive mirror
to start, do not use a write-log [ zil ]
my first tests show similar speed to raidz1 + zil.
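A cli sketch of those two steps (the disk names are examples, not from this setup):
# 1- create the pool as a 2-drive mirror
zpool create tank mirror c1t0d0 c1t1d0
# 2- extend the pool with a second 2-drive mirror (striped mirror / raid10)
zpool add tank mirror c1t2d0 c1t3d0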
raidz1
5 ssd drives
1 write-log ssd
with a write-log drive vzrestore was 22 times faster than without. a cli sketch follows below.
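A cli sketch of that layout (the disk names are examples, not from this setup):
# 5-drive raidz1 pool
zpool create tank2 raidz1 c1t4d0 c1t5d0 c1t6d0 c1t7d0 c1t8d0
# add one ssd as a dedicated write-log (zil / slog)
zpool add tank2 log c1t9d0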
cli adjustments
- there probably is a way to do this from napp-it
# needed cli, at least to turn off atime:
zfs set atime=off tank
zfs set compression=lz4 tank
configure omnios
Network mtu 9000
- Note use <other command > to know which drv file to edit.
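(an assumption, not from the original notes: dladm show-phys lists each link with its underlying device, e.g. ixgbe0, which tells you which /kernel/drv/*.conf to edit)
dladm show-phys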
vi /kernel/drv/ixgbe.conf
default_mtu = 9000;
then reboot
check result
root@sys4:/root# dladm show-linkprop -p mtu ixgbe0
LINK         PROPERTY        PERM VALUE          DEFAULT        POSSIBLE
ixgbe0       mtu             rw   9000           1500           1500-15500
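To verify jumbo frames end to end, a sketch from a pve node (8972 = 9000 bytes minus 28 bytes of ip/icmp headers; the address is the storage box used on this page):
# must succeed without fragmentation if mtu 9000 works on both ends and on the switch
ping -M do -s 8972 10.2.2.41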
ssh keys at pve for iscsi
edit at all cluster nodes: /etc/ssh/sshd_config
GSSAPIAuthentication no
UseDNS no
set up ssh keys for iscsi. change 10.2.2.41 as needed.
mkdir /etc/pve/priv/zfs
ssh-keygen -f /etc/pve/priv/zfs/10.2.2.41_id_rsa
ssh-copy-id -i /etc/pve/priv/zfs/10.2.2.41_id_rsa.pub root@10.2.2.41
test it and accept as known host. do so at all pve cluster nodes.
ssh -i /etc/pve/priv/zfs/10.2.2.41_id_rsa root@10.2.2.41
before production use of the machine, set it up so that root can ssh into the system without a password (key-based login).
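A sketch of the usual sshd setting for that on the omnios box (an assumption; check the syntax for the sshd shipped with your release):
# /etc/ssh/sshd_config on the storage box: root may log in with keys, not passwords
PermitRootLogin without-password
# then restart sshd
svcadm restart ssh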
comstar
- I think we use this concept: 'Enable iSCSI via menu ZFS Filesystems' at http://www.napp-it.org/doc/downloads/napp-it.pdf
- see answered questions at: https://forum.proxmox.com/threads/omnios-napp-it-for-zfs-questions.19774/#post-101041
- volumes and logical units are created by Proxmox as needed when you create a disk for a VM.
iSCSI target service
I used the cli; could not see how to do this in the gui.
# svcadm enable -r svc:/network/iscsi/target:default
svcadm: svc:/milestone/network depends on svc:/network/physical, which has multiple instances.
check:
# svcs -x svc:/network/iscsi/target:default
svc:/network/iscsi/target:default (iscsi target)
 State: online since Tue Apr  5 17:23:49 2016
   See: itadm(1M)
   See: /var/svc/log/network-iscsi-target:default.log
napp-it gui
Create target: comstar > target create ('CREATE ISCSI Target').
Create target portal group: comstar > create target portal-group. Name: portal-group-1, use 10.2.2.41.
Connect the target portal group to the target: comstar > target portal groups > add target
- Manually create a volume
1a) disks > volumes > create volume. Name: lxc-vol, size: 750G, uncheck thin provisioned.
1b) home > Comstar > Logical Units > create volume LU
- Choose the volume created in 1a to create a LUN from.
1c) home > Comstar > Views > add view
- Choose the LUN created in 1b to add a view to.
Your LUN should now be visible from proxmox for use as a base volume.
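For reference, a rough cli equivalent of the gui steps above (a sketch: the tpg name, address and zvol path are the examples used on this page, and the LU GUID is printed by create-lu):
# target portal group bound to the storage address, plus the target itself
itadm create-tpg portal-group-1 10.2.2.41
itadm create-target -t portal-group-1
# register the zvol as a logical unit, then add a view so initiators can see it
stmfadm create-lu /dev/zvol/rdsk/tank2/lxc-vol
stmfadm add-view <lu-guid-printed-by-create-lu>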
iscsi pve
iscsi setup at pve
http://pve.proxmox.com/wiki/Storage:_ZFS_over_iSCSI
My recommendation would be to create two different storages in proxmox,
- one using zfs_over_iscsi for KVM, which provides all zfs features like (linked) clones, live snapshots, etc.
- and one using LVM with network backing as explained here: https://pve.proxmox.com/wiki/Storage_Model#LVM_Groups_with_Network_Backing for lxc.
All the above can be done from a single zfs pool.
Manually create a volume and share this volume through an iscsi target. ?????? where [ see 'napp-it gui' above and step 1 under 'iscsi for lxc' below ]
Use this target with the iscsi plugin to create a shared LUN for proxmox, on which you create an lvm storage with network backing.
Use the same zfs pool for configuration of the zfs_over_iscsi storage to use for KVMs.
The zfs_over_iscsi plugin will not overwrite the zvol used for your iscsi target for lvm storage.
In this way you have the option of running cluster-wide VMs using both kvm and lxc, which can be live migrated across the cluster either manually or through HA.
iscsi for kvm
- kvm: per Mir on the forum: if disks are running on top of ZFS you get the best performance by using cache = nocache (the default).
- using zfs_over_iscsi for KVM provides all zfs features like (linked) clones, live snapshots, etc.
- pve > storage > add zfs over iscsi
- ID: iscsi-sys4
- Portal: 10.2.2.41
- Pool: data
- Block Size: 8k [ 8k gives better performance but slightly more space is wasted ]
- Target: iqn.2010-09.org.napp-it:1459891666
- Group: leave blank
- iSCSI provider: comstar
- Thin Provision: optional, but allows you to over-provision your storage
- Write Cache: if the pool's 'sync' option is standard or always, it is safe to enable write cache for improved performance
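For reference, those settings correspond to an /etc/pve/storage.cfg entry roughly like this (compare the full storage.cfg result near the end of this page):
zfs: iscsi-sys4
        portal 10.2.2.41
        target iqn.2010-09.org.napp-it:1459891666
        pool data
        iscsiprovider comstar
        blocksize 8k
        content images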
iscsi for lxc
- https://forum.proxmox.com/threads/shared-storage-suggestion-for-a-5-node-cluster.26727/page-2#post-134706
- https://pve.proxmox.com/wiki/Storage_Model#LVM_Groups_with_Network_Backing for lxc.
- Pay attention to these two important recommendations:
- disable 'use LUNs directly'
- Enable shared use (recommended)
some of this is also in a prior section. TBD: eliminate / better organize this. All the above can be done from a single zfs pool.
- updating steps 1-3 on 4/11 for the new lxc iscsi set up.
1-Manually create a volume
napp-it: disks > volumes > create volume. Pool: tank2, name: lxc-vol, size: 750GiB, block size left at the default 64KB, uncheck thin provisioned.
1A - share this volume through an iscsi target.
1b: Choose the volume created in 1a to create a LUN from.
- comstar > Logical Units > create volume LU
- Select volume /dev/zvol/rdsk/tank2/lxc-vol
1c: Choose the LUN created in 1b to add a view to.
- Comstar > Views > add view
1F: this may be covered elsewhere. To force pve to use only the 10G IP, create a target portal group and only add your 10G interface.
- Comstar > Target Portal Groups > create
1G: Connect the target portal group from 1F to the target:
- Comstar > Target Portal Groups > add target .
From pve, check all physical connections to ensure that there is just one target.
This is the correct set up:
sys5 ~ # iscsiadm -m discovery -t st -p 10.1.10.41:3260
iscsiadm: No portals found
sys5 ~ # iscsiadm -m discovery -t st -p 10.2.2.41:3260
10.2.2.41:3260,2 iqn.2010-09.org.napp-it:1459891666
We want only the 10G network IP to accept iscsi connections.
2- pve storage > add iscsi >
- ID: sys4-lxc-vol
- Portal: 10.2.2.41
- Target: ...
- disable/uncheck 'use LUNs directly'
3- Use this target with the iscsi plugin to create a shared LUN for proxmox, on which you create an lvm storage with network backing.
- https://pve.proxmox.com/wiki/Storage_Model#LVM_Groups_with_Network_Backing
- add an LVM group on this target:
- pve > storage > add LVM
- As storage name use whatever you want but take care, this name cannot be changed later.
- iscsi-for-lxc
- For 'Base Storage', use the drop down menu to select the previously defined iSCSI target. sys4-lxc-vol (iSCSI)
- For 'Base Volume' select the correct LUN [ the 750GB one in this test ]. Picking the wrong LUN will cause bad issues.
- For 'Volume Group Name' give a unique name (this name cannot be changed later). lxc-iscsi-vg
- Enable shared use (recommended)
- Click save
- Use the same zfs pool for configuration of the zfs_over_iscsi storage to use for KVMs. I'm using a different pool as of 4/11. Will that be an issue? We'll see. TBD
- The zfs_over_iscsi plugin will not overwrite the zvol used for your iscsi target for lvm storage.
- In this way you have the option of running cluster-wide VMs using both kvm and lxc, which can be live migrated across the cluster either manually or through HA. Live migration for lxc is still in the making but will enter proxmox before you know it ;-)
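To check from a pve node that the lvm pieces are in place, a sketch (the names are the ones used in this example):
# the iscsi LUN should carry an lvm physical volume and the new volume group
pvs
vgs lxc-iscsi-vg
# pve's view of all configured storages
pvesm status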
storage.cfg result
zfs: iscsi-sys4
        target iqn.2010-09.org.napp-it:1459891666
        pool data
        portal 10.2.2.41
        iscsiprovider comstar
        blocksize 8k
        content images
        nowritecache

iscsi: sys4-lxc-vol
        portal 10.2.2.41
        target iqn.2010-09.org.napp-it:1459891666
        content none

lvm: iscsi-for-lxc
        vgname lxc-iscsi-vg
        shared
        base sys4-lxc-vol:0.0.3.scsi-3600144f0000008080000570c0bb60001
        content rootdir
nfs for backups
- create a zfs filesystem to store dump files: zfs create tank/bkup
- share it via nfs from the cli (could not figure out how to do this from the gui):
zfs set sharenfs=on tank/bkup
- enable nfs: services > nfs. make sure the service is online.
backup omnios
- TBD add more.
- https://github.com/oetiker/znapzend/blob/master/README.md