Iscsi/nappit

== draft ==
 
*napp-it is an HTTP interface for setting up ZFS and iSCSI on a few different operating systems.
*this page is about setting up napp-it on OmniOS to serve iSCSI and NFS for PVE to use as storage.
*for mission-critical data, DRBD, Ceph, Sheepdog, or similar should be used so that the data is always written to multiple systems; unplug the napp-it hardware and the data is offline.
*per comments on the forum, OmniOS is a good choice for an iSCSI NAS.
*TBD: set up high-availability iSCSI.


== hardware ==
cpu must be compatible; see https://illumos.org/hcl/

http://ptribble.blogspot.dk/2016/03/supermicro-illumos-compatible-server.html

Important: drives must be labeled. You need a way to know where a particular drive is in case it needs to be swapped out. Make and print a table [in Office or LibreOffice] as the system is put together; trying to find a drive with a particular serial number after the system is in production means downtime.

== operating system install ==
*see http://www.napp-it.org/doc/downloads/napp-it.pdf
*mirror rpool : https://omnios.omniti.com/wiki.php/GeneralAdministration#MirroringARootPool
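For reference, the rpool mirroring procedure on that page boils down to roughly this (a sketch; the disk names are examples, and the second disk must already carry the same slice layout as the first; see the linked page for the full steps, which this does not replace):
<pre>
# copy the partition table, attach the mirror, then make the new disk bootable
prtvtoc /dev/rdsk/c2t0d0s2 | fmthard -s - /dev/rdsk/c2t1d0s2
zpool attach -f rpool c2t0d0s0 c2t1d0s0
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c2t1d0s0
</pre>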


there are a few install options. I used this option:
:4. OS Installation options
:4.1 manual barebone installation from USB (use this per default)
* Download usb-dd file from http://omnios.omniti.com/wiki.php/Installation
== after install ==
from the pdf, follow:
*set up network
*install napp-it
== network mtu 9000 ==
NOTE: '''do this before setting up the network device with an address etc.''' It is hard to change afterwards.
*from http://www.saqr.org/index.php?option=com_content&view=article&id=100:change-default-mtu&catid=51:solaris&Itemid=72
:use <other command > to know which drv file to edit; see the napp-it pdf.
<pre>
vi /kernel/drv/ixgbe.conf
default_mtu = 9000;
</pre>
then reboot.

check the result:
<pre>
root@sys4:/root#  dladm show-linkprop -p mtu ixgbe0
LINK         PROPERTY        PERM VALUE          DEFAULT        POSSIBLE
ixgbe0       mtu             rw   9000           1500           1500-15500
</pre>
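Once pve networking is up, jumbo frames can be verified end to end from a pve node (a sketch; 10.2.2.41 is this page's example storage IP, and 8972 = 9000 minus 28 bytes of IP/ICMP headers):
<pre>
# -M do forbids fragmentation, so this only succeeds if MTU 9000 works along the whole path
ping -M do -s 8972 -c 3 10.2.2.41
</pre>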
== access web page ==
*change the IP as needed: http://10.10.10.254:81/cgi-bin/admin.pl

Info: First login: admin, no password; delete napp-it.cfg to get defaults.

Please do not disable Javascript. For realtime monitoring you need an HTML5 websocket capable browser like a newer Firefox or Google Chrome. Logins are valid for the current day on all appliances with the same password. It is suggested to use a secure SAN or management network.
== pool and zfs ==
*see the napp-it pdf; create a pool and a zfs filesystem.
*set noatime from the cli:
 zfs set atime=off tank
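To confirm the property took effect (a sketch; 'tank' is the example pool name used on this page):
<pre>
zfs get atime tank
</pre>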


== ssh keys at pve for iscsi ==
*i followed https://pve.proxmox.com/wiki/Storage:_ZFS_over_iSCSI
'''The iscsi ssh key must be set up.''' Follow the ssh exchange part of https://pve.proxmox.com/wiki/Storage:_ZFS_over_iSCSI#Platform_notes

edit /etc/ssh/sshd_config at all cluster nodes:
<pre>
GSSAPIAuthentication no
UseDNS no
</pre>

set up ssh keys for iscsi (change 10.2.2.41 as needed):
<pre>
mkdir /etc/pve/priv/zfs
ssh-keygen -f /etc/pve/priv/zfs/10.2.2.41_id_rsa
ssh-copy-id -i /etc/pve/priv/zfs/10.2.2.41_id_rsa.pub root@10.2.2.41
</pre>

test it and accept the host key. Do this at all pve cluster nodes:
<pre>
ssh -i /etc/pve/priv/zfs/10.2.2.41_id_rsa root@10.2.2.41
</pre>

before production use of the machine, set it up so root can use ssh to access the system without a password.
== comstar ==

=== iSCSI target service ===
home > services > Comstar. Enable Comstar and the ISCSI target service.

check from the cli:
<pre>
# svcs -x svc:/network/iscsi/target:default
svc:/network/iscsi/target:default (iscsi target)
State: online since Tue Apr 5 17:23:49 2016
  See: itadm(1M)
  See: /var/svc/log/network-iscsi-target:default.log
</pre>

=== comstar for kvm ===
We want to make certain that iscsi is served on only one network. If two paths exist to your storage and this is intentional, you must install multipath on every proxmox host, otherwise chances are high that you will mess up your storage. Alternatively you can create a bond to your storage. If the bond is to provide real HA it must span two switches, which requires stackable switches.

here '>' means the next item to click.
# '''Create target''': comstar > targets > CREATE ISCSI Target
# '''Create target portal group''': comstar > target portal groups > create target portal-group. Choose IP and name.
# '''Connect the target portal group to the target''': comstar > target portal groups > add target. Choose the portal group created above.
# '''Portal group to ISCSI target''': without a portal group, ISCSI targets are published on every valid IP interface. Comstar > Target Portal Groups > add member. Select the portal group from above.

check that only the intended interface serves iscsi. From the cli run this:
 itadm list-target -v
check 'tpg-tags:'. Make sure 'default' is not set, else iscsi may be served on every interface.
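For reference, a correctly restricted target looks roughly like this in the verbose output (a sketch; the IQN and portal group name are examples from this setup, and the exact field layout may differ):
<pre>
# itadm list-target -v
TARGET NAME                                  STATE    SESSIONS
iqn.2010-09.org.napp-it:1459891666           online   0
        ...
        tpg-tags:               tpg-10g = 2
</pre>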


comstar/iscsi can now be used for KVM.

== lxc and iscsi notes ==


NOTE: '''DRAFT'''


Note that the ZFS-over-iSCSI plugin, which directly creates ZFS datasets for each guest, only supports Qemu at the moment (2016-05).
*see Enable iSCSI via menu ZFS Filesystems in http://www.napp-it.org/doc/downloads/napp-it.pdf
:see answered questions at: https://forum.proxmox.com/threads/omnios-napp-it-for-zfs-questions.19774/#post-101041
:volumes and logical units are created by Proxmox as needed when you create a disk for a VM.


Backups: If you have one dataset exported via ISCSI (i.e., one "storage" in Proxmox terminology) for each LXC container, you can back those datasets up individually with zfs send/receive or pve-zsync; otherwise you can only back up individual containers using vzdump.


For LXC: manually create one volume per lxc container. (Had to do the same for ceph; check notes to make sure the size is not too small.)

=== 1-disks > volumes > create volume ===
:Name: zvol-8801 [change 8801 to the vmid?]
:size: xxG [over-estimate a little to start]
:uncheck thin provisioned.
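The GUI step above creates a plain (non-sparse) zvol; the cli equivalent would be roughly this (a sketch; the pool name 'tank', the size, and the volume name are examples from this page):
<pre>
# a non-sparse 32G zvol for container vmid 8801; add -s for thin provisioning
zfs create -V 32G tank/zvol-8801
</pre>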


=== 2-Comstar > Logical Units > create volume LU ===
:select pool.
:select type of LU: volume
:click submit
:look towards the bottom of the screen.
:Choose the volume created in step 1 to create a LUN from.
:click create


*note: for pve-zsync, one volume per lxc is needed?

=== 3-Comstar > Views > add view ===
:logical unit GUID: choose the LUN created above to add a view to.
:to target group: All
:and to host group: All
:force lun...: auto [auto-assigns a high number, so you get a LUN number that is hard to confuse with existing LUNs later on]
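Steps 2 and 3 have cli equivalents in comstar's stmfadm (a sketch; the GUID and zvol path are examples from this page, and omitting -t/-h on add-view maps to the 'All' target and host groups):
<pre>
# create a logical unit from the zvol, expose it with a view, then verify
stmfadm create-lu /dev/zvol/rdsk/tank/zvol-8801
stmfadm add-view 600144f07a4c2f000000575c98580001
stmfadm list-view -l 600144f07a4c2f000000575c98580001
</pre>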


Your LUN should now be visible from proxmox for usage as a base volume.
napp-it list view:
<pre>
View for LU : 600144f07a4c2f000000575c98580001 /dev/zvol/rdsk/tank/zvol-8801
View Number   Host group   Target group   LUN Number   LU
0             All          All            6487         600144f07a4c2f000000575c98580001
</pre>


== iscsi setup at pve ==
*http://pve.proxmox.com/wiki/Storage:_ZFS_over_iSCSI
*https://forum.proxmox.com/threads/shared-storage-suggestion-for-a-5-node-cluster.26727/page-2#post-134576 :

:'''make this more clear:'''
My recommendation would be to create two different storages in proxmox,

- one using zfs_over_iscsi for KVM, which provides all zfs features like (linked) clones, live snapshots, etc.

- and one using LVM with network backing, as explained here: https://pve.proxmox.com/wiki/Storage_Model#LVM_Groups_with_Network_Backing, for lxc.

All the above can be done from a single zfs pool.

Manually create a volume and share this volume through an iscsi target.  ??????  where

Use this target with the iscsi plugin to create a shared LUN for proxmox, on which you create an lvm storage with network backing.

Use the same zfs pool for configuration of the zfs_over_iscsi storage to use for KVM's.

The zfs_over_iscsi plugin will not overwrite the zvol used for your iscsi target for lvm storage.

In this way you have options to run cluster-wide VM's using both kvm and lxc, which can be live migrated across the cluster either manually or through HA.

*make sure multipath is not happening.

From pve, check all physical connections to ensure that there is just one target. This is a correct setup:
<pre>
sys5  ~ # iscsiadm -m discovery -t st -p 10.1.10.41:3260
iscsiadm: No portals found
sys5  ~ # iscsiadm -m discovery -t st -p 10.2.2.41:3260
10.2.2.41:3260,2 iqn.2010-09.org.napp-it:1459891666
</pre>

We want only the 10G network IP to accept iscsi connections.

TBD: how to make it so there is no multipath.

=== iscsi for kvm ===
*kvm: per Mir on the forum: if disks are running on top of ZFS, you get the best performance by using cache = nocache (the default).
*using zfs_over_iscsi for KVM, which provides all zfs features like (linked) clones, live snapshots, etc.


*pve > storage > add > zfs over iscsi
:ID: iscsi-sys4
:Portal: 10.2.2.41
:Pool: data
:Block Size: 8k [8k gives better performance but slightly more space is wasted]
:Target: iqn.2010-09.org.napp-it:1459891666 [one way to get the target name to copy and paste is to run the following]
 iscsiadm -m discovery -t st -p 10.2.2.41:3260
:Group: leave blank
:iSCSI provider: comstar
:Thin Provision: optional, but allows you to over-provision your storage
:Write Cache: enable [if the pool's 'sync' option is standard or always, it is safe to enable write cache for improved performance]


=== iscsi for lxc ===
*https://forum.proxmox.com/threads/shared-storage-suggestion-for-a-5-node-cluster.26727/page-2#post-134706
*https://pve.proxmox.com/wiki/Storage_Model#LVM_Groups_with_Network_Backing for lxc.

'''The following has a big issue and should not be used.'''

The last attempt on 4/23/2016 had the same result as on 6/11/2016, so something needs to be done beforehand to prevent the lvm/archive issue.
<pre>
#  2016-04-23 working on it still
#
rm -fv /etc/lvm/archive/turnkey
 
removed ‘/etc/lvm/archive/turnkey_00000-1307945933.vg’
removed ‘/etc/lvm/archive/turnkey_00001-199645953.vg’
removed ‘/etc/lvm/archive/turnkey_00002-31014016.vg’
removed ‘/etc/lvm/archive/turnkey_00003-2116062829.vg’
removed ‘/etc/lvm/archive/turnkey_00004-1380013922.vg’
removed ‘/etc/lvm/archive/turnkey_00005-993676611.vg’
removed ‘/etc/lvm/archive/turnkey_00006-1713215890.vg’
removed ‘/etc/lvm/archive/turnkey_00007-1175412519.vg’
removed ‘/etc/lvm/archive/turnkey_00008-1970149466.vg’
removed ‘/etc/lvm/archive/turnkey_00009-856308435.vg’
removed ‘/etc/lvm/archive/turnkey_00010-926217773.vg’
removed ‘/etc/lvm/archive/turnkey_00011-1017541994.vg’
removed ‘/etc/lvm/archive/turnkey_00012-1432515756.vg’
removed ‘/etc/lvm/archive/turnkey_00013-118498318.vg’
removed ‘/etc/lvm/archive/turnkey_00014-1298808030.vg’
removed ‘/etc/lvm/archive/turnkey_00015-839679495.vg’
removed ‘/etc/lvm/archive/turnkey_00016-1161424480.vg’
removed ‘/etc/lvm/archive/turnkey_00017-1942660863.vg’
removed ‘/etc/lvm/archive/turnkey_00018-184747485.vg’
removed ‘/etc/lvm/archive/turnkey_00019-1017670881.vg’
removed ‘/etc/lvm/archive/turnkey_00020-1950107248.vg’
removed ‘/etc/lvm/archive/turnkey_00021-729446236.vg’
removed ‘/etc/lvm/archive/turnkey_00022-1595834097.vg’
removed ‘/etc/lvm/archive/turnkey_00023-753888356.vg’
removed ‘/etc/lvm/archive/turnkey_00024-1984875033.vg’
</pre>
 
 
Pay attention to these two important recommendations:
#disable 'use LUNs directly'
#enable shared use (recommended)


1- pve > storage > add iscsi
:ID: zvol-8801
:Portal: 10.2.2.21
:Target: your choice.
:'''disable use LUNs directly'''


2- Use this target for the iscsi plugin to create a shared LUN for proxmox, on which you create an lvm storage with network backing.
:pve > storage > add LVM
:storage name: lvm-8801-
:For 'Base Storage', use the drop-down menu to select the previously defined iSCSI target: zvol-8801 (iSCSI)
:For 'Base Volume', select the correct LUN. Picking the wrong LUN will cause bad issues.
:For 'Volume Group Name', give a unique name (this name cannot be changed later): vg-8801
:Content: LXC only
:'''Enable shared use (recommended)'''
:Click Add
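A quick sanity check from each pve node that the LUN and volume group are visible (a sketch; vg-8801 is the volume group name chosen above):
<pre>
pvscan          # the iscsi LUN should show up as an LVM physical volume
vgs vg-8801     # the volume group should be listed on every node
</pre>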
=== storage.cfg result example ===
<pre>
zfs: iscsi-sys4
        target iqn.2010-09.org.napp-it:1459891666
        pool data
        portal 10.2.2.41
        iscsiprovider comstar
        blocksize 8k
        content images
        nowritecache

iscsi: sys4-lxc-vol
        portal 10.2.2.41
        target iqn.2010-09.org.napp-it:1459891666
        content none

lvm: iscsi-for-lxc
        vgname lxc-iscsi-vg
        shared
        base sys4-lxc-vol:0.0.3.scsi-3600144f0000008080000570c0bb60001
        content rootdir
</pre>
(The names in this example are from an earlier attempt; with the names used in the steps above you would see zvol-8801 and vg-8801 instead.)

== kvm notes ==
If you use thin provisioned Zvols, you should choose scsi disks and the virtio-scsi controller, because comstar (the iscsi daemon in OmniOS) supports the scsi unmap command. This means the trim command is honored by comstar, and trimmed blocks will therefore be released from the Zvol back to the pool.
*https://forum.proxmox.com/threads/suggestions-for-san-config.26216/page-2#post-131751
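To get unmap/trim all the way from the guest, the disk also needs discard enabled on the pve side. A minimal sketch (vmid 8801 and the iscsi-sys4 storage are this page's examples; it assumes the volume vm-8801-disk-1 already exists on that storage):
<pre>
# virtio-scsi controller plus a scsi disk with discard enabled
qm set 8801 --scsihw virtio-scsi-pci --scsi0 iscsi-sys4:vm-8801-disk-1,discard=on
</pre>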


== nfs for backups ==
*create a zfs to store dump files: create zfs tank/bkup
*click ZFS Filesystems. Set NFS to ON for the zfs, or put in the 'rw=..' line below. I think that was needed for cli access.
*enable nfs: services > nfs. Make sure the service is online.
*at pve, add nfs storage.
*permissions: see https://forum.proxmox.com/threads/nappit-omnios-how-to-set-up-for-backup-storage.26921/


*In OmniOS 'ZFS Filesystems', click the NFS field of the dataset which has NFS enabled and add the following after 'sharenfs='
:''note: nfs has to be turned off first, then click nfs again and put in the setting''

I used this:
 rw=@10.2.2.0/24:@10.1.0.0/16,root=@10.2.2.0/24:10.1.0.0/16
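The same setting can be applied from the OmniOS cli instead of the web gui (a sketch; tank/bkup and the subnets are the examples used above):
<pre>
# share the backup dataset over NFS, restricted to the storage and management networks
zfs set sharenfs='rw=@10.2.2.0/24:@10.1.0.0/16,root=@10.2.2.0/24:10.1.0.0/16' tank/bkup
zfs get sharenfs tank/bkup
</pre>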




== backup from omnios ==
*TBD: add more.
*http://napp-it.org/extensions/replication_en.html
*https://github.com/oetiker/znapzend/blob/master/README.md
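Until replication or znapzend is configured, a plain zfs send/receive over ssh covers the basic case (a sketch; the destination host 10.2.2.42 and pool 'backup' are hypothetical, and tank/bkup is this page's example dataset):
<pre>
# one-off copy of the backup dataset to another box
zfs snapshot tank/bkup@manual-1
zfs send tank/bkup@manual-1 | ssh root@10.2.2.42 zfs receive -F backup/bkup
</pre>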
 
== support ==
OmniOS has commercial support and modules. If your company will use napp-it / OmniOS, consider supporting the projects.
 
== links ==
*http://www.napp-it.org/index_en.html
*http://www.napp-it.org/manuals/
*install & initial set up: http://www.napp-it.org/doc/downloads/napp-it.pdf
*tuning: http://www.napp-it.org/manuals/tuning.html
*http://omnios.omniti.com/wiki.php/GeneralAdministration
*https://www.joyent.com/blog/documentation-for-smf
*https://forum.proxmox.com/threads/iscsi-san-presented-as-nfs-using-freenas.26679/
*HardForum: http://hardforum.com/showthread.php?t=1573272
*ServetheHome: https://forums.servethehome.com/index.php?forums/solaris-nexenta-openindiana-and-napp-it.26/
*http://virtuallyhyper.com/2013/04/installing-and-configuring-omnios/
 
[[Category: HOWTO]]
