Iscsi/nappit
== draft ==

* napp-it is an http interface for setting up zfs and iscsi on a few different operating systems.
* This page is about setting up napp-it on OmniOS to serve iscsi and nfs for pve to use as storage.
* For mission-critical data, drbd, ceph, sheepdog or similar should be used so that the data is always written to multiple systems; unplug the napp-it hardware and the data is offline.
* Per comments on the forum, OmniOS is a good choice for an iscsi nas. I plan to use it for 90% of vm's and for backups.
* TBD: set up high-availability iscsi.


== operating system install ==
* see http://www.napp-it.org/doc/downloads/napp-it.pdf
* mirror rpool: https://omnios.omniti.com/wiki.php/GeneralAdministration#MirroringARootPool

There are a few install options. I used this one:
:4. OS Installation options
:4.1 manual barebone installation from USB (use this per default)
* Download the usb-dd file from http://omnios.omniti.com/wiki.php/Installation

== hardware ==
The cpu must be compatible, see https://illumos.org/hcl/

http://ptribble.blogspot.dk/2016/03/supermicro-illumos-compatible-server.html

Important: drives must be labeled. You need a way to know where a particular drive sits in case it needs to be swapped out. Make and print a table (in Office or LibreOffice) as the system is put together; trying to find a drive with a particular serial number after the system is in production means downtime.

== after install ==
Set up the network: from the pdf, follow ''5.2 Setup napp-it storage appliance with manual ip settings''.

Set up mtu 9000 (see the network mtu 9000 section below). NOTE: do this before setting up the network, it is hard to do afterwards.
 
== access web page ==
http://<IP>:81/cgi-bin/admin.pl (for example http://10.10.10.254:81/cgi-bin/admin.pl)

Info: First login: admin, no password; delete napp-it.cfg to get defaults.

Please do not disable Javascript. For realtime monitoring you need an HTML5 websocket capable browser like a newer Firefox or Google Chrome.
Logins are valid for the current day on all appliances with the same password. It is suggested to use a secure SAN or management network.
 
== zpools ==
1. Raid10 or striped mirror pool (2x2 disks should be sufficient) for I/O-intensive servers like database servers.

2. raidz2 pool (the remaining 6 disks) for the rest.

See the zil [ write-log ] section at https://www.napp-it.org/manuals/concept_en.html

When in doubt use a write-log. See notes and tests at [[iscsi/tests]] or search the forum.

napp-it refers to the zil as 'write-log'.  << TBD check this
 
=== mirror ===
Set up a 4-drive striped mirror:

1- create a pool as a 2-drive mirror

2- extend the pool with a second 2-drive mirror

To start, do not use a write-log [ zil ].

My first tests show similar speed to raidz1 + zil.
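For reference, roughly the cli equivalent of the two steps above (a sketch; the disk names c1t0d0 .. c1t3d0 are placeholders, use the ids shown by napp-it or format):
<pre>
# 1- create the pool as a 2-drive mirror
zpool create tank mirror c1t0d0 c1t1d0
# 2- extend the pool with a second 2-drive mirror (striped mirror / raid10)
zpool add tank mirror c1t2d0 c1t3d0
# verify the layout
zpool status tank
</pre>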
 
=== raidz1 ===
5 ssd drives

1 write-log ssd

With a write-log drive, vzrestore was 22 times faster than without.
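A rough cli sketch of that layout (disk names are placeholders; a write-log can also be added to an existing pool later with ''zpool add tank log <disk>''):
<pre>
# 5 ssd drives in a raidz1 vdev plus one ssd as a separate write-log (zil)
zpool create tank raidz1 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 log c1t5d0
zpool status tank
</pre>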
 
=== cli adjustments ===
* there is probably a way to do this from napp-it
<pre>
# at a minimum, disable atime:
zfs set atime=off tank
zfs set compression=lz4 tank
</pre>
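To confirm the settings (they are inherited by the child filesystems and zvols on the pool):
<pre>
zfs get atime,compression tank
</pre>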
== network mtu 9000 ==
NOTE: do this before setting up the network, it is hard to do afterwards.
* from http://www.saqr.org/index.php?option=com_content&view=article&id=100:change-default-mtu&catid=51:solaris&Itemid=72
:Note: use <other command > to know which drv file to edit, see napp-it.pdf
  vi /kernel/drv/ixgbe.conf
  default_mtu = 9000;
then reboot.

Check the result:
<pre>
root@sys4:/root#  dladm show-linkprop -p mtu ixgbe0
LINK         PROPERTY        PERM VALUE          DEFAULT        POSSIBLE
ixgbe0       mtu             rw   9000           1500           1500-15500
</pre>
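Depending on the driver and release it may also be possible to change the mtu at runtime with dladm instead of (or in addition to) editing the drv .conf file; a sketch, assuming the ixgbe0 link from above:
<pre>
# set the mtu on the link (may fail if the link is in use; the .conf change plus reboot always works)
dladm set-linkprop -p mtu=9000 ixgbe0
# check
dladm show-linkprop -p mtu ixgbe0
</pre>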
== pool and zfs ==
* see the napp-it pdf: create a pool and a zfs filesystem.


== ssh keys at pve for iscsi ==
'''An ssh key for iscsi must be set up.''' I followed https://pve.proxmox.com/wiki/Storage:_ZFS_over_iSCSI ; see the ssh exchange part of https://pve.proxmox.com/wiki/Storage:_ZFS_over_iSCSI#Platform_notes .

Edit /etc/ssh/sshd_config at all cluster nodes:
 GSSAPIAuthentication no
 UseDNS no

Set up the ssh keys for iscsi (change 10.2.2.41 as needed):
 mkdir /etc/pve/priv/zfs
 ssh-keygen -f /etc/pve/priv/zfs/10.2.2.41_id_rsa
 ssh-copy-id -i /etc/pve/priv/zfs/10.2.2.41_id_rsa.pub root@10.2.2.41

Test it and accept the storage host as a known host. Do so at all pve cluster nodes:
 ssh -i /etc/pve/priv/zfs/10.2.2.41_id_rsa root@10.2.2.41

Before production use of the machine, set it so root can use ssh to access the system without-password.

== comstar ==

=== iSCSI target service ===
home > services > Comstar. Enable Comstar and the ISCSI target service.

=== comstar for kvm ===
We want to make certain that iscsi is served on only one network. If two paths to your storage exist and this is intentional, you must install multipath on every proxmox host, otherwise chances are high that you will mess up your storage. Alternatively you can create a bond to your storage. If the bond is to provide real HA it must span two switches, and to do this it requires stackable switches.

Here '>' means the next item to click.
# '''Create target''': comstar > targets > CREATE ISCSI Target
# '''Create target portal group''': comstar > target portal groups > create target portal-group. Choose IP and name.
# '''Connect the target portal group to the target''': comstar > target portal groups > add target. Choose the portal group created above.
# '''Add the portal group member''': Comstar > Target Portal Groups > add member. Select the portal group from above. Without a portal group, ISCSI targets are published on every valid IP interface.
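For reference, roughly the same thing from the OmniOS cli with itadm (a sketch; the portal group name tpg1 is just an example, 10.2.2.41 is the storage address used on this page). The itadm list-target check just below will then show the tpg instead of 'default':
<pre>
# create a target portal group bound only to the storage network address
itadm create-tpg tpg1 10.2.2.41:3260
# create the target and attach it to that portal group
itadm create-target -t tpg1
</pre>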


Check that only the intended interface serves iscsi. From the cli run:
 itadm list-target -v
Check 'tpg-tags:' and make sure 'default' is not set, else iscsi may be served on every interface.

The iSCSI target service can also be enabled from the cli instead of the gui:
<pre>
# svcadm enable -r svc:/network/iscsi/target:default
svcadm: svc:/milestone/network depends on svc:/network/physical, which has multiple instances.
</pre>
check:
<pre>
# svcs -x svc:/network/iscsi/target:default
svc:/network/iscsi/target:default (iscsi target)
 State: online since Tue Apr  5 17:23:49 2016
   See: itadm(1M)
   See: /var/svc/log/network-iscsi-target:default.log
</pre>

comstar/iscsi can now be used for KVM.

== lxc and iscsi notes ==

NOTE: '''DRAFT'''

Note that the ZFS-over-iSCSI plugin, which directly creates ZFS datasets for each guest, only supports Qemu at the moment (2016-05).

Backups: If you have one dataset exported via ISCSI (i.e., one "storage" in Proxmox terminology) for each LXC container, you can back those datasets up individually with zfs send/receive or pve-zsync. Otherwise you can only back up individual containers using vzdump.

From the forum thread "Shared storage suggestion for a 5 node cluster?":
:Q: Does a different target need to be created at napp-it > comstar for the lvm? Or is the same target used?
:A: Not necessarily, but it will greatly improve administration.

* I think we use this concept: Enable iSCSI via menu ZFS Filesystems, see http://www.napp-it.org/doc/downloads/napp-it.pdf
:see answered questions at https://forum.proxmox.com/threads/omnios-napp-it-for-zfs-questions.19774/#post-101041
:volumes and logical units are created by Proxmox as needed when you create a disk for a VM.

For LXC: Manually create one volume per lxc container. (Had to do the same for ceph; check notes to make sure the size is not too small.)


=== 1 - disks > volumes > create volume ===
:Name: zvol-8801 [ change 8801 to the vmid? ]
:Size: xxG [ over-estimate a little to start ].
:uncheck thin provisioned.

=== 2 - Comstar > Logical Units > create volume LU ===
:select the pool.
:select type of LU: volume
:click submit
:look towards the bottom of the screen,
:choose the volume created in step 1 to create a LUN from.
:click create

=== 3 - Comstar > Views > add view ===
:logical unit GUID: choose the LUN created above to add a view to.
:to target group: All
:and to host group: All
:force lun: auto. I put auto to auto-assign a high number; that way you get a LUN number that is hard to confuse with existing LUNs later on.

Your LUN should now be visible from proxmox for usage as a base volume.
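Roughly the same three steps from the OmniOS cli (a sketch; the 32G size is an example, and the GUID in step 3 is whatever sbdadm prints for the new LU, here the one from the list view below):
<pre>
# 1- create the volume (zvol)
zfs create -V 32G tank/zvol-8801
# 2- create a logical unit from the zvol; note the GUID it prints
sbdadm create-lu /dev/zvol/rdsk/tank/zvol-8801
# 3- add a view (defaults: all host groups, all target groups, auto lun number)
stmfadm add-view 600144f07a4c2f000000575c98580001
stmfadm list-view -l 600144f07a4c2f000000575c98580001
</pre>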


napp-it list view:
<pre>
View for LU : 600144f07a4c2f000000575c98580001   /dev/zvol/rdsk/tank/zvol-8801
View Number   Host group   target group   LUN Number   LU
  0           All          All            6487         600144f07a4c2f000000575c98580001
</pre>


== iscsi setup at pve ==
* http://pve.proxmox.com/wiki/Storage:_ZFS_over_iSCSI
* https://forum.proxmox.com/threads/shared-storage-suggestion-for-a-5-node-cluster.26727/page-2#post-134576 :

:'''make this more clear:'''
My recommendation would be to create two different storages in proxmox,

- one using zfs_over_iscsi for KVM, which provides all zfs features like (linked) clones, live snapshots, etc.

- and one using LVM with network backing, as explained here: https://pve.proxmox.com/wiki/Storage_Model#LVM_Groups_with_Network_Backing for lxc.

All the above can be done from a single zfs pool.

Manually create a volume and share this volume through an iscsi target.  ??????  where

Using this target for the iscsi plugin, create a shared LUN for proxmox on which you create an lvm storage with network backing.

Use the same zfs pool for configuration of the zfs_over_iscsi storage to use for KVM's.

The zfs_over_iscsi plugin will not overwrite the zvol used for your iscsi target for lvm storage.

In this way you have the option of running cluster-wide VM's using both kvm and lxc, which can be live migrated across the cluster either manually or through HA.

Make sure multipath is not happening. From pve, check all physical connections to ensure that there is just one target. This is a correct setup:
<pre>
sys5  ~ # iscsiadm -m discovery -t st -p 10.1.10.41:3260
iscsiadm: No portals found
sys5  ~ # iscsiadm -m discovery -t st -p 10.2.2.41:3260
10.2.2.41:3260,2 iqn.2010-09.org.napp-it:1459891666
</pre>
We want only the 10G network IP to accept iscsi connections.

TBD: how to make it so there is no multipath.


=== iscsi for kvm ===
* kvm: per Mir on the forum: if the disk is running on top of ZFS you get the best performance by using cache = nocache (the default).
* Using zfs_over_iscsi for KVM provides all zfs features like (linked) clones, live snapshots, etc.

* pve > storage > add zfs over iscsi
:ID:  iscsi-sys4
:Portal: 10.2.2.41
:Pool:  data
:Block Size: 8k  [ 8k gives better performance but slightly more space is wasted ]
:Target:  iqn.2010-09.org.napp-it:1459891666  [ one way to get the target name to copy and paste is to run the following ]
 iscsiadm -m discovery -t st -p 10.2.2.41:3260
:Group:  leave blank
:iSCSI provider: comstar
:Thin Provision: optional, but allows you to over-provision your storage
:Write Cache: Enable.  [ If the pool's 'sync' option is standard or always, it is safe to enable write cache for improved performance. ]
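Once added, a quick way to exercise the plugin from the pve cli (a sketch; the storage ID and vmid are the ones used on this page, the disk name and 32G size are just examples):
<pre>
# confirm the storage is seen and active on the node
pvesm status
# let the plugin create a test zvol + logical unit on the napp-it box
pvesm alloc iscsi-sys4 8801 vm-8801-disk-1 32G
pvesm list iscsi-sys4
# remove the test volume again
pvesm free iscsi-sys4:vm-8801-disk-1
</pre>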


=== iscsi for lxc ===
'''The following has a big issue and should not be used yet.''' There are bugs, or an incorrect setup on my part. TBD: test again.
* https://forum.proxmox.com/threads/shared-storage-suggestion-for-a-5-node-cluster.26727/page-2#post-134706
* https://pve.proxmox.com/wiki/Storage_Model#LVM_Groups_with_Network_Backing for lxc.
:Pay attention to these two important recommendations:
#disable 'use LUNs directly'
#Enable shared use (recommended)

'''Some of this is also in a prior section. TBD: eliminate or better organize this.'''
All of the above can be done from a single zfs pool.
** updating steps 1-3 on 4/11 for the new lxc iscsi setup.

1 - Manually create a volume.
:napp-it: disks > volumes > create volume
:volume: tank2
:name: lxc-vol
:size: 750GiB
:block size: left at the default 64KB
:uncheck thin provisioned.

1A - Share this volume through an iscsi target.

1B - Choose the volume created in 1A to create a LUN from.
:comstar > Logical Units > create volume LU
:Select volume /dev/zvol/rdsk/tank2/lxc-vol

1C - Choose the LUN created in 1B to add a view to.
:Comstar > Views > add view

1F - This may belong elsewhere: to force use of only the 10G IP at pve, create a target portal group and only add your 10G interface.
:Comstar > Target Portal Groups > create

1G - Connect the target portal group from 1F to the target:
:Comstar > Target Portal Groups > add target


The last attempt on 4/23/2016 had the same result as on 6/11/2016, so something needs to be done beforehand to prevent the lvm/archive issue:
<pre>
#  2016-04-23 working on it still
#
rm -fv /etc/lvm/archive/turnkey
removed ‘/etc/lvm/archive/turnkey_00000-1307945933.vg’
removed ‘/etc/lvm/archive/turnkey_00001-199645953.vg’
removed ‘/etc/lvm/archive/turnkey_00002-31014016.vg’
removed ‘/etc/lvm/archive/turnkey_00003-2116062829.vg’
removed ‘/etc/lvm/archive/turnkey_00004-1380013922.vg’
removed ‘/etc/lvm/archive/turnkey_00005-993676611.vg’
removed ‘/etc/lvm/archive/turnkey_00006-1713215890.vg’
removed ‘/etc/lvm/archive/turnkey_00007-1175412519.vg’
removed ‘/etc/lvm/archive/turnkey_00008-1970149466.vg’
removed ‘/etc/lvm/archive/turnkey_00009-856308435.vg’
removed ‘/etc/lvm/archive/turnkey_00010-926217773.vg’
removed ‘/etc/lvm/archive/turnkey_00011-1017541994.vg’
removed ‘/etc/lvm/archive/turnkey_00012-1432515756.vg’
removed ‘/etc/lvm/archive/turnkey_00013-118498318.vg’
removed ‘/etc/lvm/archive/turnkey_00014-1298808030.vg’
removed ‘/etc/lvm/archive/turnkey_00015-839679495.vg’
removed ‘/etc/lvm/archive/turnkey_00016-1161424480.vg’
removed ‘/etc/lvm/archive/turnkey_00017-1942660863.vg’
removed ‘/etc/lvm/archive/turnkey_00018-184747485.vg’
removed ‘/etc/lvm/archive/turnkey_00019-1017670881.vg’
removed ‘/etc/lvm/archive/turnkey_00020-1950107248.vg’
removed ‘/etc/lvm/archive/turnkey_00021-729446236.vg’
removed ‘/etc/lvm/archive/turnkey_00022-1595834097.vg’
removed ‘/etc/lvm/archive/turnkey_00023-753888356.vg’
removed ‘/etc/lvm/archive/turnkey_00024-1984875033.vg’
</pre>


2 - pve storage > add iscsi:
:ID: sys4-lxc-vol
:Portal: 10.2.2.41
:Target: select the target (iqn.2010-09.org.napp-it:1459891666 in this setup)
:'''disable (uncheck) "use LUNs directly"'''

3 - Use this target with the iscsi plugin to create a shared LUN for proxmox, on which you create an lvm storage with network backing (see https://pve.proxmox.com/wiki/Storage_Model#LVM_Groups_with_Network_Backing ). Then add an LVM group on this target:
:pve > storage > add LVM
:As the storage name use whatever you want, but take care: this name cannot be changed later. I used iscsi-for-lxc.
:For 'Base Storage', use the drop-down menu to select the previously defined iSCSI target: sys4-lxc-vol (iSCSI).
:For 'Base Volume' select the correct LUN [ the one with 750GB in this test ]. Picking the wrong LUN will cause bad issues.
:For 'Volume Group Name' give a unique name (this name cannot be changed later): lxc-iscsi-vg
:Content: LXC only
:'''Enable shared use (recommended)'''
:Click Add
Alternatively, name these per container (e.g. iscsi storage zvol-8801 and volume group vg-8801 for vmid 8801, as in the napp-it gui steps above).

* Use the same zfs pool for the configuration of the zfs_over_iscsi storage to use for KVM's. I'm using a different pool as of 4/11; will that be an issue? We'll see. TBD
* The zfs_over_iscsi plugin will not overwrite the zvol used for your iscsi target for lvm storage.
* In this way you have the option of running cluster-wide VM's using both kvm and lxc, which can be live migrated across the cluster either manually or through HA. Live migration for lxc is still in the making but will enter proxmox before you know it ;-)

=== storage.cfg result ===
<pre>
zfs: iscsi-sys4
        target iqn.2010-09.org.napp-it:1459891666
        pool data
        portal 10.2.2.41
        iscsiprovider comstar
        blocksize 8k
        content images
        nowritecache

iscsi: sys4-lxc-vol
        portal 10.2.2.41
        target iqn.2010-09.org.napp-it:1459891666
        content none

lvm: iscsi-for-lxc
        vgname lxc-iscsi-vg
        shared
        base sys4-lxc-vol:0.0.3.scsi-3600144f0000008080000570c0bb60001
        content rootdir
</pre>
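For reference, the 'add LVM' step amounts to roughly the following on one pve node (a sketch only; the device path must match the LUN of the sys4-lxc-vol storage shown in the cfg above, and writing to the wrong device will destroy data):
<pre>
# the iscsi storage must already be logged in on this node
iscsiadm -m session
# create the physical volume and the shared volume group on the LUN
pvcreate /dev/disk/by-id/scsi-3600144f0000008080000570c0bb60001
vgcreate lxc-iscsi-vg /dev/disk/by-id/scsi-3600144f0000008080000570c0bb60001
vgs
</pre>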


== kvm notes ==
If you use thin-provisioned zvols you should choose scsi disks and the virtio-scsi controller, because comstar (the iscsi daemon in OmniOS) supports the scsi unmap command. This means the trim command is honored by comstar, and therefore trimmed blocks will be released from the zvol back to the pool.
* https://forum.proxmox.com/threads/suggestions-for-san-config.26216/page-2#post-131751
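A minimal sketch of the relevant guest settings (vmid 8801, the storage ID and the disk name are the examples used on this page; discard=on is what passes trim/unmap through to comstar):
<pre>
# /etc/pve/qemu-server/8801.conf (excerpt)
scsihw: virtio-scsi-pci
scsi0: iscsi-sys4:vm-8801-disk-1,discard=on,size=32G
</pre>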


== nfs for backups ==
* Create a zfs filesystem to store dump files: create zfs tank/bkup
* Click ZFS Filesystems and set NFS to ON for that zfs, or put in the 'rw=..' line below (I think that was needed for cli access).
* Enable nfs: services > nfs. Make sure the service is online.
* At pve, add the nfs storage.
* Permissions: see https://forum.proxmox.com/threads/nappit-omnios-how-to-set-up-for-backup-storage.26921/

* In OmniOS 'ZFS Filesystems' click the NFS field of the dataset which has NFS enabled and add the following after 'sharenfs='
:''note: nfs has to be turned off first, then click nfs again and put in the setting''

I used this:
 rw=@10.2.2.0/24:@10.1.0.0/16,root=@10.2.2.0/24:10.1.0.0/16
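The same setting can also be applied from the OmniOS cli (a sketch, using the tank/bkup dataset and the networks from above; the @ prefix marks a network in the share_nfs access list):
<pre>
# on the omnios box
zfs set sharenfs='rw=@10.2.2.0/24:@10.1.0.0/16,root=@10.2.2.0/24:@10.1.0.0/16' tank/bkup
zfs get sharenfs tank/bkup
# on a pve node, check that the export is visible
showmount -e 10.2.2.41
</pre>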


== backup from omnios ==
* TBD: add more.
* http://napp-it.org/extensions/replication_en.html
* https://github.com/oetiker/znapzend/blob/master/README.md
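A minimal zfs send/receive sketch for pulling a dataset to another box (the backup host name, target pool and snapshot names are examples; the napp-it replication extension or znapzend above automate this):
<pre>
# on the omnios box: snapshot and send the full dataset once
zfs snapshot tank/bkup@2016-06-11
zfs send tank/bkup@2016-06-11 | ssh root@backuphost zfs receive -F backuppool/bkup
# later, send only the changes since the previous snapshot
zfs snapshot tank/bkup@2016-06-12
zfs send -i @2016-06-11 tank/bkup@2016-06-12 | ssh root@backuphost zfs receive backuppool/bkup
</pre>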
 
== support ==
OmniOS has commercial support and modules. If your company will use napp-it / omnios then consider supporting the project.
 
== links ==
* https://forum.proxmox.com/threads/iscsi-san-presented-as-nfs-using-freenas.26679/
* http://www.napp-it.org/index_en.html
* http://www.napp-it.org/manuals/
* install & initial set up: http://www.napp-it.org/doc/downloads/napp-it.pdf
* tuning: http://www.napp-it.org/manuals/tuning.html
* http://omnios.omniti.com/wiki.php/GeneralAdministration
* https://www.joyent.com/blog/documentation-for-smf
* HardForum: http://hardforum.com/showthread.php?t=1573272
* ServetheHome: https://forums.servethehome.com/index.php?forums/solaris-nexenta-openindiana-and-napp-it.26/
* http://virtuallyhyper.com/2013/04/installing-and-configuring-omnios/
 
[[Category: HOWTO]]
