Iscsi/nappit


draft

rough notes to be formatted and added

do not use without checking line by line

links

TBD add more links?

http://www.napp-it.org/manuals/

HardForum: http://hardforum.com/showthread.php?t=1573272

ServetheHome: https://forums.servethehome.com/index.php?forums/solaris-nexenta-openindiana-and-napp-it.26/

hardware

cpu must be compatible, see https://illumos.org/hcl/

http://ptribble.blogspot.dk/2016/03/supermicro-illumos-compatible-server.html

operating system install

http://www.napp-it.org/doc/downloads/napp-it.pdf

there are a few install options. I used this option:

4. OS Installation options
4.1 manual barebone installation from USB (use this per default)

after install

set up network. from the pdf follow 5.2 Setup napp-it storage appliance with manual ip settings

set up mtu 9000 [ move that info here  ? ]

access web page

http://<IP>:81/cgi-bin/admin.pl

Info: First login: admin, no password; delete napp-it.cfg to get defaults

Please do not disable Javascript. For realtime monitoring you need a HTML5 websocket capable browser like a newer Firefox or Google Chrome. Logins are valid for current day on all appliances with the same password. It is suggested to use a secure SAN or management network.

zpools

ten 480GB ssd drives

https://forum.proxmox.com/threads/shared-storage-suggestion-for-a-5-node-cluster.26727/

Mir: I would suggest 2 pools:

1. Raid10 or striped mirror pool (2x2 disks should be sufficient) for I/O intensive servers like database servers.

2. raidz2 pool (the remaining 6 disks) for the rest


see zil [ write-log ] section at https://www.napp-it.org/manuals/concept_en.html

when in doubt use a write-log. see notes and tests at iscsi/tests or forum search.

napp-it refers to zil as 'write-log' << TBD check this

mirror

set up 4 drive mirror

1- create pool 2 drive mirror

2- extend pool with 2 drive mirror

to start do not use a write-log [ zil ]

my 1st tests show similar speed to raidz1 + zil.
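
for reference, the cli equivalent of steps 1 and 2 above (a sketch; pool name tank as used later, disk names are placeholders, napp-it normally does this from the gui):

zpool create tank mirror c0t0d0 c0t1d0     # step 1: pool from a 2 drive mirror
zpool add tank mirror c0t2d0 c0t3d0        # step 2: extend the pool with a second 2 drive mirror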

raidz1

5 ssd drives

1 write-log ssd Intel super one << put model numbers TBD

with a write-log drive vzrestore was 22 times faster than without.
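
for reference, a write-log can also be attached from the cli (a sketch; the device name is a placeholder, napp-it can do this from the gui too):

zpool add tank log c0t5d0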

cli adjustments

  • there probably is a way to do this from napp-it
# needed cli for at least no atime:
zfs set atime=off tank
zfs set compression=lz4 tank
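
to confirm the settings took effect (a quick check; pool name tank as above):

zfs get atime,compression tank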

configure omnios

Network mtu 9000

Note: use <other command > to find out which drv file to edit.
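
one way to see which driver backs each interface, and therefore which conf file to edit (a sketch; dladm is standard on omnios, interface names will differ):

dladm show-phys     # the DEVICE column (e.g. ixgbe0) names the driver module
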
vi /kernel/drv/ixgbe.conf
default_mtu = 9000;

then reboot

check result

root@sys4:/root#  dladm show-linkprop -p mtu ixgbe0
LINK         PROPERTY        PERM VALUE          DEFAULT        POSSIBLE
ixgbe0       mtu             rw   9000           1500           1500-15500 
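
once the pve side is also set to mtu 9000, a quick end-to-end jumbo frame test from a pve node (a sketch; 8972 = 9000 minus 28 bytes of ip/icmp header, storage IP as used later):

ping -M do -s 8972 -c 3 10.2.2.41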

ssh keys at pve for iscsi

edit at all cluster nodes: /etc/ssh/sshd_config

GSSAPIAuthentication no
UseDNS no
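
after editing, restart the ssh daemon on each pve node so the change takes effect (assuming the standard Debian service name):

systemctl restart ssh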

set up ssh keys for iscsi . change 10.2.2.41 as needed

mkdir /etc/pve/priv/zfs
ssh-keygen -f /etc/pve/priv/zfs/10.2.2.41_id_rsa
ssh-copy-id -i /etc/pve/priv/zfs/10.2.2.41_id_rsa.pub root@10.2.2.41

test it and accept as known host. do so at all pve cluster nodes.

ssh -i /etc/pve/priv/zfs/10.2.2.41_id_rsa root@10.2.2.41

before production use of the machine, set sshd so that root can use ssh keys to access the system (PermitRootLogin without-password).
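
a sketch of one way to do that on the storage box (assumes the stock sshd; verify against your sshd_config before relying on it):

# in /etc/ssh/sshd_config on the omnios box:
PermitRootLogin without-password
# then restart the ssh service:
svcadm restart ssh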

comstar

see answered questions at: https://forum.proxmox.com/threads/omnios-napp-it-for-zfs-questions.19774/#post-101041
volumes and logical units are created by Proxmox as needed when you create a disk for a VM.

iSCSI target service

I used the cli; could not see how to do this from the gui.

# svcadm enable -r svc:/network/iscsi/target:default
svcadm: svc:/milestone/network depends on svc:/network/physical, which has multiple instances.

check:

# svcs -x svc:/network/iscsi/target:default
svc:/network/iscsi/target:default (iscsi target)
 State: online since Tue Apr  5 17:23:49 2016
   See: itadm(1M)
   See: /var/svc/log/network-iscsi-target:default.log

napp-it gui

Create target: comstar > CREATE ISCSI Target (comstar > target create).

Create target portal group: comstar > create target portal-group. Name portal-group-1, use 10.2.2.41.

Connect the target portal group to the target: comstar > target portal groups > add target
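
the same three steps can also be done from the omnios cli with itadm (a sketch; portal group name and IP as above, target IQN as shown elsewhere on this page):

itadm create-target                           # creates a target with an auto-generated IQN
itadm create-tpg portal-group-1 10.2.2.41     # portal group bound to the 10G IP
itadm modify-target -t portal-group-1 iqn.2010-09.org.napp-it:1459891666
itadm list-target -v                          # verify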

  • Manually create a volume

napp-it

  1. disks > volumes > create volume. Name lvmvol, size 300G, uncheck thin provisioned.

1b) home > Comstar > Logical Units > create volume LU

    • Choose the volume created in 1a to create a LUN from.

1c) home > Comstar > Views > add view

    • Choose the LUN created in 1b to add a view to.

Your LUN should now be visible from proxmox for use as a base volume.
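
to double-check from the omnios side that the LU and its view exist before moving on (a sketch; stmfadm is the comstar cli, replace <LU-GUID> with the GUID printed by the first command):

stmfadm list-lu -v                 # shows the LU GUID, size and backing zvol
stmfadm list-view -l <LU-GUID>     # shows the view(s) attached to that LU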

iscsi pve

iscsi setup at pve

http://pve.proxmox.com/wiki/Storage:_ZFS_over_iSCSI

https://forum.proxmox.com/threads/shared-storage-suggestion-for-a-5-node-cluster.26727/page-2#post-134576  :

My recommendation would be to create two different storages in proxmox,

- one using zfs_over_iscsi for KVM which provides all zfs features like (linked) clones, live snapshots, etc.

- and one using LVM with network backing as explained here: https://pve.proxmox.com/wiki/Storage_Model#LVM_Groups_with_Network_Backing for lxc.


All the above can be done from a single zfs pool.


Manually create a volume and share this volume through an iscsi target. [ where? see the napp-it gui section above and steps 1/1A in the lxc on iscsi section below ]

Using this target for the iscsi plugin to create a shared LUN for proxmox on which you create a lvm storage with network backing.

Use the same zfs pool for configuration of the zfs_over_iscsi storage to use for KVM's.

The zfs_over_iscsi plugin will not overwrite the zvol used for your iscsi target for lvm storage.

In this way you have the option of running cluster-wide VM's using both kvm and lxc, which can be live migrated across the cluster either manually or through HA.

iscsi-kvm

  • kvm: per Mir on the forum: if the disk is running on top of ZFS you get the best performance by using cache = nocache (the default).
  • using zfs_over_iscsi for KVM provides all zfs features like (linked) clones, live snapshots, etc.
  • pve > storage > add zfs over iscsi

ID: iscsi-sys4
Portal: 10.2.2.41
Pool: data
Block Size: 8k (8k gives better performance but wastes slightly more space)
Target: iqn.2010-09.org.napp-it:1459891666
Group: leave blank
iSCSI provider: comstar
Thin Provision: optional, but allows you to over provision your storage
Write Cache: if the pool option 'sync' is standard or always, it is safe to enable write cache for improved performance
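
the same storage can also be added from a pve node with pvesm (a sketch mirroring the values above; option names taken from the storage.cfg result at the end of this page):

pvesm add zfs iscsi-sys4 --portal 10.2.2.41 --target iqn.2010-09.org.napp-it:1459891666 \
    --pool data --iscsiprovider comstar --blocksize 8k --content images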

lxc on iscsi

https://forum.proxmox.com/threads/shared-storage-suggestion-for-a-5-node-cluster.26727/page-2#post-134706

https://pve.proxmox.com/wiki/Storage_Model#LVM_Groups_with_Network_Backing for lxc.

Pay attention to these two important recommendations:

  1. disable 'use LUNs directly'
  2. Enable shared use (recommended)

All the above can be done from a single zfs pool.

    • updating steps 1-3 (4/11) for the new lxc iscsi set up.

1-Manually create a volume

napp-it  disks > volumes > create volume : 
     volume:  tank2
        name: lxc-vol
        size  750GiB
        block size : left as default 64KB
        uncheck thin provisioned. 
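
a quick check on the omnios side that the zvol was created (pool and volume name as entered above):

zfs list -t volume tank2/lxc-vol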

1A - share this volume through an iscsi target.

1b: Choose the volume created in 1a to create a LUN from.

comstar > Logical Units > create volume LU
Select volume /dev/zvol/rdsk/tank2/lxc-vol

1c: Choose the LUN created in 1b to add a view to.

Comstar > Views > add view

1F: this may belong elsewhere. To force use of just the 10G IP at pve, create a target portal group and only add your 10G interface.

Comstar > Target Portal Groups > create

1G: Connect the target portal group from 1F to the target:

Comstar > Target Portal Groups > add target .


From pve check all physical connections to ensure that there is just one target. This is the correct set up:

sys5  ~ # iscsiadm -m discovery -t st -p 10.1.10.41:3260
iscsiadm: No portals found
sys5  ~ # iscsiadm -m discovery -t st -p 10.2.2.41:3260
10.2.2.41:3260,2 iqn.2010-09.org.napp-it:1459891666

We want only the 10G network IP to accept iscsi connections

2- pve storage > add iscsi >

ID sys4-lxc-vol ,
Portal 10.2.2.41
Target: ...
disable/uncheck 'use LUNs directly'
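
the cli equivalent with pvesm, if preferred (a sketch; same values, option names match the storage.cfg result below):

pvesm add iscsi sys4-lxc-vol --portal 10.2.2.41 --target iqn.2010-09.org.napp-it:1459891666 --content none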


3- Use this target with the iscsi plugin to create a shared LUN for proxmox, on which you create an lvm storage with network backing.

add an LVM group on this target. pve storage:
https://pve.proxmox.com/wiki/Storage_Model#LVM_Groups_with_Network_Backing
pve > storage > add LVM
As storage name use whatever you want but take care, this name cannot be changed later.
  • iscsi-for-lxc
For 'Base Storage', use the drop down menu to select the previously defined iSCSI target. sys4-lxc-vol (iSCSI)
For 'Base Volume' select the correct LUN [ the one with 750GB in this test ]. Picking the wrong LUN will cause serious problems.
For 'Volume Group Name' give a unique name (this name cannot be changed later). lxc-iscsi-vg
Enable shared use (recommended)
Click save
  • Use the same zfs pool for configuration of the zfs_over_iscsi storage used for KVM's. I'm using a different pool (4/11). Will that be an issue? we'll see, TBD
  • The zfs_over_iscsi plugin will not overwrite the zvol used for your iscsi target for lvm storage.
  • In this way you have the option of running cluster-wide VM's using both kvm and lxc, which can be live migrated across the cluster either manually or through HA. Live migration for lxc is still in the making but will enter proxmox before you know it ;-)
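
after saving, a quick sanity check from every cluster node (volume group name as chosen above):

pvs                    # the iscsi LUN should appear as an LVM physical volume
vgs lxc-iscsi-vg       # the volume group must be visible on all nodes
pvesm status           # the new lvm storage should be listed and active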

storage.cfg result

zfs: iscsi-sys4
        target iqn.2010-09.org.napp-it:1459891666
        pool data
        portal 10.2.2.41
        iscsiprovider comstar
        blocksize 8k
        content images
        nowritecache

iscsi: sys4-lxc-vol
        portal 10.2.2.41
        target iqn.2010-09.org.napp-it:1459891666
        content none

lvm: iscsi-for-lxc
        vgname lxc-iscsi-vg
        shared
        base sys4-lxc-vol:0.0.3.scsi-3600144f0000008080000570c0bb60001
        content rootdir