== draft ==
*napp-it is an HTTP interface for setting up ZFS and iSCSI on a few different operating systems.
*this page is about setting up napp-it on OmniOS to serve iSCSI and NFS for PVE to use as storage.
*per comments on the forum, OmniOS is a good choice for an iSCSI NAS.
*TBD: set up high-availability iSCSI.


== operating system install ==
:see the install chapter of http://www.napp-it.org/doc/downloads/napp-it.pdf

== network mtu 9000 ==
NOTE: do this before setting up the network; it is hard to do afterwards.


the file to edit under /kernel/drv depends on the NIC driver (ixgbe here); see napp-it.pdf.
 
 
 
  vi /kernel/drv/ixgbe.conf
  default_mtu = 9000;

then reboot

check the result:
<pre>
root@sys4:/root#  dladm show-linkprop -p mtu ixgbe0
LINK         PROPERTY        PERM VALUE          DEFAULT        POSSIBLE
ixgbe0       mtu             rw   9000           1500           1500-15500
</pre>
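The drv file edit takes effect after the reboot. As an alternative sketch (untested here): on current illumos releases the MTU can often be set at runtime with dladm, though the interface usually has to be unplumbed first. This assumes the link is named ixgbe0:
<pre>
# runtime alternative (assumes link ixgbe0; unplumb the interface first):
ipadm delete-if ixgbe0
dladm set-linkprop -p mtu=9000 ixgbe0
ipadm create-if ixgbe0
# verify:
dladm show-linkprop -p mtu ixgbe0
</pre>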
== access web page ==
http://10.10.10.254:81/cgi-bin/admin.pl (change the IP to your appliance's)

INFO: first login: admin, no password; delete napp-it.cfg to reset to defaults.

Please do not disable JavaScript. For realtime monitoring you need an HTML5-websocket-capable browser such as a newer Firefox or Google Chrome. Logins are valid for the current day on all appliances with the same password. It is suggested to use a secure SAN or management network.
== pool and zfs ==
* see the napp-it pdf: create a pool and zfs filesystems. a CLI sketch follows below.
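A minimal CLI sketch of what the GUI does, assuming four disks in a striped mirror; the device names are placeholders (check ''format'' for yours), and the atime/compression settings are the ones used on this setup:
<pre>
# create a striped-mirror (raid10-style) pool from four disks
# (device names are placeholders):
zpool create tank mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0
# settings used on this setup:
zfs set atime=off tank
zfs set compression=lz4 tank
# a filesystem for backup dumps (used in the nfs section below):
zfs create tank/bkup
</pre>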


==ssh keys at pve for iscsi ==
'''an ssh key for iscsi must be set up.'''


follow the ssh key exchange part of https://pve.proxmox.com/wiki/Storage:_ZFS_over_iSCSI#Platform_notes. the commands used here are sketched below.
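The commands used on this setup (change 10.12.12.41 as needed; run on every pve cluster node):
<pre>
mkdir /etc/pve/priv/zfs
ssh-keygen -f /etc/pve/priv/zfs/10.12.12.41_id_rsa
ssh-copy-id -i /etc/pve/priv/zfs/10.12.12.41_id_rsa.pub root@10.12.12.41
# test it and accept the host key; do so at all pve cluster nodes:
ssh -i /etc/pve/priv/zfs/10.12.12.41_id_rsa root@10.12.12.41
</pre>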
== comstar ==
=== iSCSI target service ===
home > services > Comstar: enable Comstar and the iSCSI target service. this can also be done from the cli, see below.
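From the cli (as used in an earlier pass of these notes):
<pre>
# enable the iscsi target service (with dependencies):
svcadm enable -r svc:/network/iscsi/target:default
# check that it is online:
svcs -x svc:/network/iscsi/target:default
</pre>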
 


=== comstar for kvm ===
We want to make certain that iscsi is served on only one network. If two paths to your storage exist and this is intentional, you must install multipath on every proxmox host; otherwise chances are high that you will mess up your storage. Alternatively you can create a bond to your storage; if the bond is to provide real HA it must span two switches, which requires stackable switches.
here '>' means the next item to click.
# '''Create target''': comstar > targets > CREATE ISCSI Target
# '''Create target portal group''': comstar > target portal groups > create target portal-group. Choose IP and name.
# '''Connect the target portal group to the target''': comstar > target portal groups > add target. Choose the portal group created above.
# '''portal group to ISCSI target''': without a portal group, ISCSI targets are published on every valid IP interface. Comstar > Target Portal Groups > add member; select the portal group from above.


check that only the intended interface serves iscsi. from the cli run:
  itadm list-target -v
check 'tpg-tags:' in the output: make sure 'default' is not set, else iscsi may be served on every interface.
comstar/iscsi can now be used for KVM.
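A rough CLI equivalent of steps 1-3 above, as a sketch; the portal group name tpg-10g is a placeholder and the IP is the storage address used elsewhere on this page:
<pre>
# create a portal group bound to one IP, then a target restricted to it:
itadm create-tpg tpg-10g 10.2.2.41
itadm create-target -t tpg-10g
# verify: 'tpg-tags' should list only tpg-10g, not 'default':
itadm list-target -v
</pre>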


== lxc and iscsi notes ==


NOTE: '''DRAFT'''


Note that the ZFS-over-iSCSI plugin, which directly creates ZFS datasets for each guest, only supports Qemu at the moment (2016-05).


Backups: if you have one dataset exported via ISCSI (i.e., one "storage" in Proxmox terminology) for each LXC container, you can back those datasets up individually with zfs send/receive or pve-zsync (a sketch follows); otherwise you can only back up individual containers using vzdump.
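For example, a minimal send/receive sketch (dataset and backup host names are hypothetical):
<pre>
# snapshot one container's dataset and send it to a backup host:
zfs snapshot tank/zvol-8801@bak1
zfs send tank/zvol-8801@bak1 | ssh root@backuphost zfs receive -u bkup/zvol-8801
</pre>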


For LXC: manually create one volume per container. (had to do the same for ceph; check notes to make sure the size is not too small.)


=== 1-disks > volumes > create volume ===
:Name: zvol-8801 [change 8801 to the vmid?]
:Size: xxG [overestimate a little to start].
:uncheck thin provisioned.


=== 2-Comstar > Logical Units > create volume LU ===
:select the pool.
:select type of LU: volume
:click submit
:look towards the bottom of the screen,
:choose the volume created in step 1 to create a LUN from.
:click create


=== 3-Comstar > Views > add view ===
:logical unit GUID: choose the LUN created above to add a view to.
:to target group: All
:and to host group: All
:force lun...: auto


Your LUN should now be visible from proxmox for usage as a base volume.


napp-it list view:
<pre>
View for LU : 600144f07a4c2f000000575c98580001 /dev/zvol/rdsk/tank/zvol-8801
View Number   Host group   Target group   LUN Number   LU
 0            All          All            6487         600144f07a4c2f000000575c98580001
</pre>
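The GUI steps above roughly correspond to these COMSTAR commands (a sketch; the zvol path and GUID are taken from the example view above):
<pre>
# create a logical unit backed by the zvol:
sbdadm create-lu /dev/zvol/rdsk/tank/zvol-8801
# list LUs to find the GUID:
sbdadm list-lu
# add a view (all host/target groups, as in the GUI defaults above):
stmfadm add-view 600144f07a4c2f000000575c98580001
</pre>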


== iscsi setup at pve ==
http://pve.proxmox.com/wiki/Storage:_ZFS_over_iSCSI


https://forum.proxmox.com/threads/shared-storage-suggestion-for-a-5-node-cluster.26727/page-2#post-134576 :
   
 
: '''make this more clear:'''   
My recommendation would be to create two different storages in proxmox,
- one using zfs_over_iscsi for KVM, which provides all zfs features like (linked) clones, live snapshots, etc.
- and one using LVM with network backing, as explained here: https://pve.proxmox.com/wiki/Storage_Model#LVM_Groups_with_Network_Backing, for lxc.
 
All the above can be done from a single zfs pool.


Manually create a volume and share this volume through an iscsi target. [?????? where]
Use this target with the iscsi plugin to create a shared LUN for proxmox, on which you create an lvm storage with network backing.

Use the same zfs pool for configuration of the zfs_over_iscsi storage to use for KVMs.

The zfs_over_iscsi plugin will not overwrite the zvol used for your iscsi target for lvm storage.
In this way you have the option of running cluster-wide VMs using both kvm and lxc, which can be live-migrated across the cluster either manually or through HA.


make sure multipath is not happening.  
From pve check all physical connections to ensure that there is just one target. This is a correct setup:
<pre>
sys5  ~ # iscsiadm -m discovery -t st -p 10.1.10.41:3260
iscsiadm: No portals found
sys5  ~ # iscsiadm -m discovery -t st -p 10.2.2.41:3260
10.2.2.41:3260,2 iqn.2010-09.org.napp-it:1459891666
 
</pre>
We want only the 10G network IP to accept iscsi connections.


TBD: how to make sure there is no multipath.
=== iscsi for kvm ===
* kvm: per Mir on the forum: if disks run on top of ZFS you get the best performance by using cache = nocache (the default).
* use zfs_over_iscsi for KVM, which provides all zfs features like (linked) clones, live snapshots, etc.


* pve > storage > add zfs over iscsi
:ID: iscsi-sys4
:Portal: 10.2.2.41
:Pool: data
:Block Size: 8k [8k gives better performance but wastes slightly more space]
:Target: iqn.2010-09.org.napp-it:1459891666 [one way to get the target name to copy and paste is to run the following]
 iscsiadm -m discovery -t st -p 10.2.2.41:3260
:Group: leave blank
:iSCSI provider: comstar
:Thin Provision: optional, but allows you to over-provision your storage
:Write Cache: enable [if the pool's 'sync' option is standard or always, it is safe to enable write cache for improved performance]
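For reference, the resulting /etc/pve/storage.cfg entry from these notes (note the nowritecache line: it corresponds to write cache being disabled, so with the 'enable' choice above it would be absent):
<pre>
zfs: iscsi-sys4
        target iqn.2010-09.org.napp-it:1459891666
        pool data
        portal 10.2.2.41
        iscsiprovider comstar
        blocksize 8k
        content images
        nowritecache
</pre>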


===iscsi for lxc ===
'''the following has a big issue and should not be used.'''


the last attempt on 4/23/2016 had the same result as on 6/11/2016, so something needs to be done beforehand to prevent the lvm/archive issue.
<pre>
# 2016-04-23 working on it still
#
rm -fv /etc/lvm/archive/turnkey


removed ‘/etc/lvm/archive/turnkey_00000-1307945933.vg’
removed ‘/etc/lvm/archive/turnkey_00001-199645953.vg’
removed ‘/etc/lvm/archive/turnkey_00002-31014016.vg’
removed ‘/etc/lvm/archive/turnkey_00003-2116062829.vg’
removed ‘/etc/lvm/archive/turnkey_00004-1380013922.vg’
removed ‘/etc/lvm/archive/turnkey_00005-993676611.vg’
removed ‘/etc/lvm/archive/turnkey_00006-1713215890.vg’
removed ‘/etc/lvm/archive/turnkey_00007-1175412519.vg’
removed ‘/etc/lvm/archive/turnkey_00008-1970149466.vg’
removed ‘/etc/lvm/archive/turnkey_00009-856308435.vg’
removed ‘/etc/lvm/archive/turnkey_00010-926217773.vg’
removed ‘/etc/lvm/archive/turnkey_00011-1017541994.vg’
removed ‘/etc/lvm/archive/turnkey_00012-1432515756.vg’
removed ‘/etc/lvm/archive/turnkey_00013-118498318.vg’
removed ‘/etc/lvm/archive/turnkey_00014-1298808030.vg’
removed ‘/etc/lvm/archive/turnkey_00015-839679495.vg’
removed ‘/etc/lvm/archive/turnkey_00016-1161424480.vg’
removed ‘/etc/lvm/archive/turnkey_00017-1942660863.vg’
removed ‘/etc/lvm/archive/turnkey_00018-184747485.vg’
removed ‘/etc/lvm/archive/turnkey_00019-1017670881.vg’
removed ‘/etc/lvm/archive/turnkey_00020-1950107248.vg’
removed ‘/etc/lvm/archive/turnkey_00021-729446236.vg’
removed ‘/etc/lvm/archive/turnkey_00022-1595834097.vg’
removed ‘/etc/lvm/archive/turnkey_00023-753888356.vg’
removed ‘/etc/lvm/archive/turnkey_00024-1984875033.vg’
</pre>






:Pay attention to these two important recommendations:
#disable 'use LUNs directly'
#Enable shared use (recommended)


1- pve storage > add iscsi
:ID: zvol-8801
:Portal: 10.2.2.21
:Target: your choice.
:'''disable use LUNs directly'''


2- Using this target, the iscsi plugin provides a shared LUN for proxmox on which you create an lvm storage with network backing.
:pve > storage > add LVM
:storage name: lvm-8801-
:For 'Base Storage', use the drop-down menu to select the previously defined iSCSI target: zvol-8801 (iSCSI)
:For 'Base Volume' select the correct LUN. picking the wrong LUN will cause bad issues.
:For 'Volume Group Name' give a unique name (this name cannot be changed later): vg-8801
:Content: LXC only
:'''Enable shared use (recommended)'''
:Click Add. example storage.cfg entries are shown below.
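For reference, the matching storage.cfg entries from an earlier iteration of this setup (storage/VG names differ from the zvol-8801 example above):
<pre>
iscsi: sys4-lxc-vol
        portal 10.2.2.41
        target iqn.2010-09.org.napp-it:1459891666
        content none

lvm: iscsi-for-lxc
        vgname lxc-iscsi-vg
        shared
        base sys4-lxc-vol:0.0.3.scsi-3600144f0000008080000570c0bb60001
        content rootdir
</pre>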


== kvm notes ==
If you use thin-provisioned zvols you should choose scsi disks and the virtio-scsi controller, because comstar (the iscsi daemon in OmniOS) supports the scsi unmap command. This means the trim command is honored by comstar, and trimmed blocks will be released from the zvol back to the pool.
* https://forum.proxmox.com/threads/suggestions-for-san-config.26216/page-2#post-131751
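A hedged sketch of the corresponding VM settings (the VM id and volume name are hypothetical):
<pre>
# use the virtio-scsi controller and a scsi disk so unmap/trim reaches
# comstar; discard=on passes guest trims through:
qm set 8801 --scsihw virtio-scsi-pci
qm set 8801 --scsi0 iscsi-sys4:vm-8801-disk-1,discard=on
</pre>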


== nfs for backups ==
* create a zfs filesystem to store dump files, e.g. tank/bkup (see the pool sketch above).
* click ZFS Filesystems and set NFS to ON for that filesystem, '''or''' put in the 'rw=..' line below (I think that was needed for cli access).
* enable nfs: services > nfs; make sure the service is online.
* at pve, add the nfs storage.
* permissions: see https://forum.proxmox.com/threads/nappit-omnios-how-to-set-up-for-backup-storage.26921/


* In OmniOS 'ZFS Filesystems', click the NFS property of the dataset which has NFS enabled and add the following after 'sharenfs='
:''note: nfs has to be turned off first, then click nfs again and put in the setting''


I used this:
<pre>
rw=@10.2.2.0/24:@10.1.0.0/16,root=@10.2.2.0/24:10.1.0.0/16
</pre>
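The same setting can be applied from the cli; a sketch, assuming the dataset is tank/bkup as above:
<pre>
# set the share options directly on the dataset:
zfs set sharenfs='rw=@10.2.2.0/24:@10.1.0.0/16,root=@10.2.2.0/24:10.1.0.0/16' tank/bkup
</pre>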


==backup from omnios==
* TBD: add more.
* http://napp-it.org/extensions/replication_en.html
* https://github.com/oetiker/znapzend/blob/master/README.md


==support==
OmniOS has commercial support and modules; if your company will use napp-it/OmniOS, consider supporting the project.


== links ==
*https://forum.proxmox.com/threads/iscsi-san-presented-as-nfs-using-freenas.26679/
*http://www.napp-it.org/index_en.html
*http://www.napp-it.org/manuals/
*install & initial set up: http://www.napp-it.org/doc/downloads/napp-it.pdf
*tuning: http://www.napp-it.org/manuals/tuning.html
*http://omnios.omniti.com/wiki.php/GeneralAdministration
*HardForum: http://hardforum.com/showthread.php?t=1573272
*ServetheHome: https://forums.servethehome.com/index.php?forums/solaris-nexenta-openindiana-and-napp-it.26/
*http://virtuallyhyper.com/2013/04/installing-and-configuring-omnios/

[[Category: HOWTO]]
