Iscsi/nappit

draft

  • napp-it is an HTTP interface for setting up ZFS and iSCSI on a few different operating systems.
  • This page is about setting up napp-it on OmniOS to serve iSCSI for PVE to use as storage.
  • For mission-critical data, DRBD, Ceph, Sheepdog, or similar should be used so that the data is always written to multiple systems. Unplug the napp-it hardware and the data is offline.
  • Per comments on the forum, OmniOS is a good choice for an iSCSI NAS. I plan to use it for 90% of VMs and for backups.

Rough notes, to be formatted and added.

Do not use without checking line by line.

links

kvm notes

Forgot to mention: if you use thin-provisioned zvols you should choose SCSI disks and the virtio-scsi controller, because comstar (the iSCSI daemon in OmniOS) supports the SCSI UNMAP command. This means the TRIM command is honored by comstar, and trimmed blocks will therefore be released from the zvol back to the pool.
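
A minimal sketch of the matching VM settings on the PVE side, assuming VM ID 100 and the zfs_over_iscsi storage "iscsi-sys4" defined further down (the disk volume name is illustrative); discard=on is what passes TRIM/UNMAP through to comstar:

# select the virtio-scsi controller for the VM
qm set 100 --scsihw virtio-scsi-pci
# re-attach the existing disk with discard enabled
qm set 100 --scsi0 iscsi-sys4:vm-100-disk-1,discard=on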

hardware

The CPU must be compatible. See https://illumos.org/hcl/

http://ptribble.blogspot.dk/2016/03/supermicro-illumos-compatible-server.html

Important: drives must be labeled. You need a way to know where a particular drive is physically located in case it needs to be swapped out. Make and print a table [in Office or LibreOffice] as the system is put together. Trying to find the drive with a particular serial number after the system is in production means downtime.

operating system install

http://www.napp-it.org/doc/downloads/napp-it.pdf

There are a few install options. I used this one:

4. OS Installation options
4.1 manual barebone installation from USB (use this per default)

after install

From the PDF, follow these steps:

  1. set up the network
  2. install napp-it
  3. access the web page

Info: First login: admin, no password; delete napp-it.cfg to get defaults

Please do not disable Javascript. For realtime monitoring you need a HTML5 websocket capable browser like a newer Firefox or Google Chrome. Logins are valid for current day on all appliances with the same password. It is suggested to use a secure SAN or management network.

zpools

1. Raid10 or striped mirror pool (2x2 disks should be sufficient) for I/O intensive servers like database servers.

2. raidz2 pool (the remaining 6 disks) for the rest


See the ZIL [write-log] section at https://www.napp-it.org/manuals/concept_en.html

When in doubt, use a write-log. See notes and tests at Iscsi/tests or search the forum.

napp-it refers to the ZIL as 'write-log' << TBD check this

mirror

Set up a 4-drive striped mirror:

1- create the pool as a 2-drive mirror

2- extend the pool with another 2-drive mirror

To start, do not use a write-log [ZIL].

My first tests show similar speed to raidz1 + a ZIL.
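
For reference only, the commands napp-it runs under the hood look roughly like this (a sketch; the pool name 'data' and the cXtYd0 disk names are placeholders):

# 1- create the pool as a 2-drive mirror
zpool create data mirror c1t0d0 c1t1d0
# 2- extend the pool with a second 2-drive mirror (striped mirror / raid10)
zpool add data mirror c1t2d0 c1t3d0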

raidz1

5 SSD drives

1 write-log SSD

With a write-log drive, vzrestore was 22 times faster than without.
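
Again only a sketch of what napp-it sets up (pool and disk names are placeholders):

# 5-disk raidz1 pool with a dedicated write-log (slog) device
zpool create data raidz1 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 log c1t5d0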

configure omnios

Network MTU 9000

Note: use <other command> to find out which drv file in /kernel/drv to edit.
vi /kernel/drv/ixgbe.conf
default_mtu = 9000;

then reboot

check result

root@sys4:/root#  dladm show-linkprop -p mtu ixgbe0
LINK         PROPERTY        PERM VALUE          DEFAULT        POSSIBLE
ixgbe0       mtu             rw   9000           1500           1500-15500 

ssh keys at pve for iscsi

edit at all cluster nodes: /etc/ssh/sshd_config

GSSAPIAuthentication no
UseDNS no

Set up SSH keys for iSCSI. Change 10.2.2.41 as needed:

mkdir /etc/pve/priv/zfs
ssh-keygen -f /etc/pve/priv/zfs/10.2.2.41_id_rsa
ssh-copy-id -i /etc/pve/priv/zfs/10.2.2.41_id_rsa.pub root@10.2.2.41

Test it and accept the host as a known host. Do so on all PVE cluster nodes:

ssh -i /etc/pve/priv/zfs/10.2.2.41_id_rsa root@10.2.2.41

Before the machine goes into production, set it up so root can use SSH to access the system with key-based authentication only ('without-password').
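
On the OmniOS side this is a one-line change in sshd's configuration (a sketch; restart the ssh service afterwards):

# /etc/ssh/sshd_config on the OmniOS box
PermitRootLogin without-password

# then restart sshd
svcadm restart svc:/network/ssh:default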

comstar

  • Mir on the forum suggested doing this; I'd take that as "just do it". From the thread "Shared storage suggestion for a 5 node cluster?":

Q: Does a different target need to be created at napp-it > comstar for the LVM, or is the same target used?

A: Not necessarily, but it will greatly improve administration.

See answered questions at: https://forum.proxmox.com/threads/omnios-napp-it-for-zfs-questions.19774/#post-101041

Volumes and logical units are created by Proxmox as needed when you create a disk for a VM.

iSCSI target service

I used the CLI; I could not see how to do this in the GUI.

# svcadm enable -r svc:/network/iscsi/target:default
svcadm: svc:/milestone/network depends on svc:/network/physical, which has multiple instances.

check:

# svcs -x svc:/network/iscsi/target:default
svc:/network/iscsi/target:default (iscsi target)
 State: online since Tue Apr  5 17:23:49 2016
   See: itadm(1M)
   See: /var/svc/log/network-iscsi-target:default.log

comstar for kvm

  1. Create a target: Comstar > Targets > create iSCSI target
  2. Create a target portal group: Comstar > Target Portal Groups > create target portal-group
  3. Connect the target portal group to the target: Comstar > Target Portal Groups > add target

Comstar can now be used for KVM.
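
The napp-it menus wrap the COMSTAR CLI, so roughly the same thing can be done by hand with itadm (a sketch; the portal-group name and the 10G IP are examples, and the target IQN is generated automatically if none is given):

# create a target portal group bound to the 10G storage IP
itadm create-tpg tpg1 10.2.2.41
# create an iSCSI target attached to that portal group
itadm create-target -t tpg1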

comstar for lxc

  • For LXC: manually create a volume
  1. Disks > Volumes > create volume. Name it lxc-vol, size xxG, uncheck 'thin provisioned'.
  2. Comstar > Logical Units > create volume LU. Choose the volume created in step 1 to create a LUN from.
  3. Comstar > Views > add view. Choose the LU created in step 2 to add a view to.
     Force LUN: I put 'auto' to auto-assign a high number; that way you get a LUN number that is hard to confuse with existing LUNs later on.

Your LUN should now be visible from Proxmox for use as a base volume.
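
For reference, the equivalent CLI steps on OmniOS look roughly like this (a sketch; the pool name 'data', the volume size and the LUN number are placeholders, and the GUID is whatever stmfadm create-lu prints for your LU):

# 1. create a non-thin 100G zvol
zfs create -V 100G data/lxc-vol
# 2. turn the zvol into a COMSTAR logical unit
stmfadm create-lu /dev/zvol/rdsk/data/lxc-vol
# 3. expose the LU to initiators; -n forces a high LUN number
stmfadm add-view -n 10 600144F0000008080000570C0BB60001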

iscsi setup at pve

http://pve.proxmox.com/wiki/Storage:_ZFS_over_iSCSI

https://forum.proxmox.com/threads/shared-storage-suggestion-for-a-5-node-cluster.26727/page-2#post-134576  :

TBD: make this clearer.

My recommendation would be to create two different storages in Proxmox:

- one using zfs_over_iscsi for KVM, which provides all ZFS features like (linked) clones, live snapshots, etc.

- and one using LVM with network backing, as explained here: https://pve.proxmox.com/wiki/Storage_Model#LVM_Groups_with_Network_Backing , for LXC.


All of the above can be done from a single ZFS pool.


Manually create a volume and share this volume through an iSCSI target. [TBD: where? See "comstar for lxc" above.]

Use this target with the iSCSI plugin to create a shared LUN for Proxmox, on which you create an LVM storage with network backing.

Use the same ZFS pool for the configuration of the zfs_over_iscsi storage to use for KVMs.

The zfs_over_iscsi plugin will not overwrite the zvol used for your iSCSI target for LVM storage.

In this way you have the option of running cluster-wide VMs using both KVM and LXC, which can be live-migrated across the cluster either manually or through HA.

  • Make sure multipath is not happening.

From PVE, check all physical connections to ensure that only one portal answers with the target. This is a correct setup:

sys5  ~ # iscsiadm -m discovery -t st -p 10.1.10.41:3260
iscsiadm: No portals found
sys5  ~ # iscsiadm -m discovery -t st -p 10.2.2.41:3260
10.2.2.41:3260,2 iqn.2010-09.org.napp-it:1459891666

We want only the 10G network IP to accept iSCSI connections.

iscsi for kvm

  • KVM: per Mir on the forum: if the disk is running on top of ZFS, you get the best performance by using cache = nocache (the default).
  • Use zfs_over_iscsi for KVM, which provides all ZFS features like (linked) clones, live snapshots, etc.
  • PVE > Storage > Add: ZFS over iSCSI
ID: iscsi-sys4
Portal: 10.2.2.41
Pool: data
Block Size: 8k [ 8k gives better performance but slightly more space is wasted ]
Target: iqn.2010-09.org.napp-it:1459891666
Group: leave blank
iSCSI provider: comstar
Thin Provision: optional, but allows you to over-provision your storage
Write Cache: if the pool's 'sync' option is standard or always, it is safe to enable write cache for improved performance
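
The same storage can also be added from the PVE command line with pvesm; a sketch only, assuming the option names match the storage.cfg keys shown further down:

pvesm add zfs iscsi-sys4 --portal 10.2.2.41 --target iqn.2010-09.org.napp-it:1459891666 \
    --pool data --blocksize 8k --iscsiprovider comstar --content images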

iscsi for lxc

Pay attention to these two important recommendations:
  1. Disable 'use LUNs directly'
  2. Enable shared use (recommended)

TBD: better organize this.

1- PVE > Storage > Add > iSCSI:

  • ID: sys4-lxc-vol
  • Portal: 10.2.2.41
  • Target: pick the target you made for LXC in comstar above.
  • Disable/uncheck 'use LUNs directly'

2- Use this target with the iSCSI plugin as a shared LUN for Proxmox, on which you create an LVM storage with network backing:

  • PVE > Storage > Add > LVM
  • Storage name: iscsi-for-lxc
  • For 'Base Storage', use the drop-down menu to select the previously defined iSCSI target: sys4-lxc-vol (iSCSI)
  • For 'Base Volume', select the correct LUN. Picking the wrong LUN will cause bad issues.
  • For 'Volume Group Name', give a unique name (this name cannot be changed later): lxc-iscsi-vg
  • Enable shared use (recommended)
  • Click save
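
Once the LVM storage exists, containers can be placed on it; a hypothetical example, assuming container ID 101 and a Debian template already downloaded to local storage:

pct create 101 local:vztmpl/debian-8.0-standard_8.0-1_amd64.tar.gz \
    --rootfs iscsi-for-lxc:8 --hostname lxc-test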

storage.cfg result example

zfs: iscsi-sys4
        target iqn.2010-09.org.napp-it:1459891666
        pool data
        portal 10.2.2.41
        iscsiprovider comstar
        blocksize 8k
        content images
        nowritecache

iscsi: sys4-lxc-vol
        portal 10.2.2.41
        target iqn.2010-09.org.napp-it:1459891666
        content none

lvm: iscsi-for-lxc
        vgname lxc-iscsi-vg
        shared
        base sys4-lxc-vol:0.0.3.scsi-3600144f0000008080000570c0bb60001
        content rootdir

nfs for backups

  • In OmniOS, under 'ZFS Filesystems', click the NFS field of the dataset that has NFS enabled and add the following after 'sharenfs='.
Note: NFS has to be turned off first; then click NFS again and put in the setting.

I used this:

rw=@10.2.2.0/24:@10.1.0.0/16,root=@10.2.2.0/24:10.1.0.0/16
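
The same setting can be applied from the OmniOS shell; a sketch, where the dataset name data/backup is a placeholder for whichever filesystem you share:

# restrict the NFS share to the storage networks, with root access from them
zfs set sharenfs='rw=@10.2.2.0/24:@10.1.0.0/16,root=@10.2.2.0/24:10.1.0.0/16' data/backup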

backup from omnios