Iscsi/nappit
draft
- napp-it is an HTTP (web) interface for setting up ZFS and iSCSI on a few different operating systems.
- This page is about setting up napp-it on OmniOS to serve iSCSI and NFS for PVE to use as storage.
- Per comments on the forum, OmniOS is a good choice for an iSCSI NAS.
- TBD: set up high-availability iSCSI.
operating system install
- see http://www.napp-it.org/doc/downloads/napp-it.pdf
- mirror rpool : https://omnios.omniti.com/wiki.php/GeneralAdministration#MirroringARootPool
network mtu 9000
NOTE: do this before setting up the network; it is hard to do afterwards.
Use <other command > to find out which drv file to edit; see napp-it.pdf.
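One way to identify the driver (and hence which /kernel/drv/*.conf file to edit) is to list the physical links: the device name indicates the driver, e.g. ixgbe0 is handled by ixgbe.conf. A rough example, output abridged:
<pre>
root@sys4:/root# dladm show-phys
LINK         MEDIA         STATE   SPEED  DUPLEX   DEVICE
ixgbe0       Ethernet      up      10000  full     ixgbe0
</pre>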
<pre>
vi /kernel/drv/ixgbe.conf
default_mtu = 9000;
</pre>
then reboot
check result
<pre>
root@sys4:/root# dladm show-linkprop -p mtu ixgbe0
LINK     PROPERTY   PERM   VALUE   DEFAULT   POSSIBLE
ixgbe0   mtu        rw     9000    1500      1500-15500
</pre>
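To confirm jumbo frames work end-to-end, a quick check from a PVE node (a hedged example; 8972 = 9000 minus 28 bytes of IP/ICMP headers, and 10.2.2.41 is the storage IP used later in this guide):
<pre>
# fails with "Message too long" if any hop has an MTU below 9000
ping -M do -s 8972 -c 3 10.2.2.41
</pre>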
access web page
Browse to the napp-it web interface, changing the IP to your host's: http://10.10.10.254:81/cgi-bin/admin.pl
INFO: First login: admin, no password; delete napp-it.cfg to get defaults.
Please do not disable JavaScript. For realtime monitoring you need an HTML5 websocket-capable browser such as a newer Firefox or Google Chrome. Logins are valid for the current day on all appliances with the same password. It is suggested to use a secure SAN or management network.
pool and zfs
- see the napp-it PDF; create a pool and a ZFS filesystem.
ssh keys at pve for iscsi
An SSH key for iSCSI must be set up on each PVE node.
Follow the SSH key exchange part of https://pve.proxmox.com/wiki/Storage:_ZFS_over_iSCSI#Platform_notes (a rough sketch follows below).
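Roughly, per those platform notes, the key pair must live under /etc/pve/priv/zfs/ and be named after the portal IP (here assuming the 10.2.2.41 portal used later in this guide); run on each PVE node:
<pre>
mkdir -p /etc/pve/priv/zfs
ssh-keygen -f /etc/pve/priv/zfs/10.2.2.41_id_rsa
ssh-copy-id -i /etc/pve/priv/zfs/10.2.2.41_id_rsa.pub root@10.2.2.41
# verify passwordless login once, so the host key gets accepted
ssh -i /etc/pve/priv/zfs/10.2.2.41_id_rsa root@10.2.2.41
</pre>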
comstar
iSCSI target service
home > services > Comstar. Enable Comstar and ISCSI target service
comstar for kvm
We want to make certain that iSCSI is served on only one network. If two paths to your storage exist and this is intentional, you must install multipath on every Proxmox host; otherwise it is very likely that you will corrupt your storage. Alternatively you can create a bond to your storage. If the bond is to provide real HA, it must span two switches, which requires stackable switches.
Here '>' means the next item to click.
- Create target: comstar > targets > CREATE ISCSI Target
- Create target portal group: comstar > target portal groups > create target portal-group. Choose IP and name
- Connect the target portal group to the target: comstar > target portal groups > add target. Choose portal group created above
- Add the portal group to the iSCSI target: without a portal group, iSCSI targets are published on every valid IP interface. Comstar > Target Portal Groups > add member. Select the portal group from above.
Check that only the intended interface serves iSCSI. From the CLI run this:
 itadm list-target -v
Check 'tpg-tags:'. Make sure 'default' is not set, else iSCSI may be served on every interface. COMSTAR/iSCSI can now be used for KVM.
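The output looks roughly like the following (target name and tag are examples; the important part is that 'tpg-tags' names your portal group rather than 'default'):
<pre>
TARGET NAME                              STATE    SESSIONS
iqn.2010-09.org.napp-it:1459891666       online   1
        alias:                  -
        auth:                   none (defaults)
        targetchapuser:         -
        targetchapsecret:       unset
        tpg-tags:               tpg-10g = 2
</pre>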
lxc and iscsi notes
NOTE: DRAFT
Note that the ZFS-over-iSCSI plugin, which directly creates ZFS datasets for each guest, only supports QEMU at the moment (2016-05).
Backups: if you have one dataset exported via iSCSI (i.e., one "storage" in Proxmox terminology) for each LXC container, you can back those datasets up individually with zfs send/receive or pve-zsync. Otherwise you can only back up individual containers using vzdump.
For LXC: manually create one volume per container. The same was needed for Ceph; check notes to make sure the size is not too small.
1-disks > volumes > create volume
- Name: zvol-8801 [change 8801 to the VMID?]
- Size: xxG [overestimate a little to start].
- Uncheck thin provisioned.
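For reference, what the napp-it menu does here is roughly equivalent to creating a zvol on the CLI (size and names below are only examples from this guide):
<pre>
# create a 32G (non-sparse) zvol for container 8801
zfs create -V 32G tank/zvol-8801
</pre>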
2-Comstar > Logical Units > create volume LU
- select pool.
- select type of LU: volume
- click submit
- look towards the bottom of the screen,
- Choose the volume created in step 1 to create a LUN from.
- click create
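The CLI equivalent of this step is roughly (assuming the zvol created in step 1):
<pre>
# create a COMSTAR logical unit backed by the zvol
stmfadm create-lu /dev/zvol/rdsk/tank/zvol-8801
</pre>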
3-Comstar > Views > add view
- logical unit GUID : Choose the LUN created above to add a view to.
- to target group: All
- and to host group: All
- force lun...: auto
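On the CLI this corresponds roughly to adding a view for the LU's GUID (the GUID below is the example from the listing that follows):
<pre>
# with no host group or target group specified, the view applies to all
stmfadm add-view 600144f07a4c2f000000575c98580001
</pre>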
Your LUN should now be visible from Proxmox for use as a base volume.
napp-it list view:
<pre>
View for LU : 600144f07a4c2f000000575c98580001   /dev/zvol/rdsk/tank/zvol-8801
View Number   Host group   target group   LUN Number   LU
0             All          All            6487         600144f07a4c2f000000575c98580001
</pre>
iscsi setup at pve
http://pve.proxmox.com/wiki/Storage:_ZFS_over_iSCSI
- make this more clear:
My recommendation would be to create two different storages in proxmox,
- one using zfs_over_iscsi for KVM, which provides all ZFS features like (linked) clones, live snapshots, etc.
- and one using LVM with network backing as explained here: https://pve.proxmox.com/wiki/Storage_Model#LVM_Groups_with_Network_Backing for lxc.
All the above can be done from a single zfs pool.
Manually create a volume and share this volume through an iSCSI target. [TBD: where?]
Using this target for the iscsi plugin to create a shared LUN for proxmox on which you create a lvm storage with network backing.
Use the same zfs pool for configuration of the zfs_over_iscsi storage to use for KVM's.
The zfs_over_iscsi plugin will not overwrite the zvol used for your iscsi target for lvm storage.
In this way you have the option of running cluster-wide VMs using both KVM and LXC, which can be live-migrated across the cluster either manually or through HA.
Make sure multipath is not happening.
From PVE, check all physical connections to ensure that there is just one path to the target. This is a correct setup:
<pre>
sys5 ~ # iscsiadm -m discovery -t st -p 10.1.10.41:3260
iscsiadm: No portals found
sys5 ~ # iscsiadm -m discovery -t st -p 10.2.2.41:3260
10.2.2.41:3260,2 iqn.2010-09.org.napp-it:1459891666
</pre>
We want only the 10G network IP to accept iSCSI connections.
TBD: how to make sure there is no multipath.
iscsi for kvm
- KVM: per Mir on the forum: if disks are running on top of ZFS you get the best performance by using cache = nocache (the default).
- Using zfs_over_iscsi for KVM provides all ZFS features like (linked) clones, live snapshots, etc.
- pve > storage > add zfs over iscsi
- ID: iscsi-sys4
- Portal: 10.2.2.41
- Pool: data
- Block Size: 8k [ 8k gives better performance but slightly more space is wasted ]
- Target: iqn.2010-09.org.napp-it:1459891666 [one way to get the target name for copy and paste is to run the following from a PVE node]:
 iscsiadm -m discovery -t st -p 10.2.2.41:3260
- Group: leave blank
- iSCSI provider: comstar
- Thin Provision: optional, but allows you to over-provision your storage
- Write Cache: Enable. [If the pool's 'sync' option is standard or always, it is safe to enable write cache for improved performance.]
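With the values above, the resulting entry in /etc/pve/storage.cfg should look roughly like this (a sketch, assuming thin provisioning is left enabled; option names follow the PVE ZFS-over-iSCSI documentation):
<pre>
zfs: iscsi-sys4
        portal 10.2.2.41
        target iqn.2010-09.org.napp-it:1459891666
        pool data
        blocksize 8k
        iscsiprovider comstar
        sparse 1
        content images
</pre>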
iscsi for lxc
The following has a big issue and should not be used.
The attempt on 2016-04-23 had the same result as the one on 2016-06-11, so something needs to be done beforehand to prevent the LVM archive issue.
<pre>
# 2016-04-23 working on it still
#
rm -fv /etc/lvm/archive/turnkey*
removed ‘/etc/lvm/archive/turnkey_00000-1307945933.vg’
removed ‘/etc/lvm/archive/turnkey_00001-199645953.vg’
removed ‘/etc/lvm/archive/turnkey_00002-31014016.vg’
removed ‘/etc/lvm/archive/turnkey_00003-2116062829.vg’
removed ‘/etc/lvm/archive/turnkey_00004-1380013922.vg’
removed ‘/etc/lvm/archive/turnkey_00005-993676611.vg’
removed ‘/etc/lvm/archive/turnkey_00006-1713215890.vg’
removed ‘/etc/lvm/archive/turnkey_00007-1175412519.vg’
removed ‘/etc/lvm/archive/turnkey_00008-1970149466.vg’
removed ‘/etc/lvm/archive/turnkey_00009-856308435.vg’
removed ‘/etc/lvm/archive/turnkey_00010-926217773.vg’
removed ‘/etc/lvm/archive/turnkey_00011-1017541994.vg’
removed ‘/etc/lvm/archive/turnkey_00012-1432515756.vg’
removed ‘/etc/lvm/archive/turnkey_00013-118498318.vg’
removed ‘/etc/lvm/archive/turnkey_00014-1298808030.vg’
removed ‘/etc/lvm/archive/turnkey_00015-839679495.vg’
removed ‘/etc/lvm/archive/turnkey_00016-1161424480.vg’
removed ‘/etc/lvm/archive/turnkey_00017-1942660863.vg’
removed ‘/etc/lvm/archive/turnkey_00018-184747485.vg’
removed ‘/etc/lvm/archive/turnkey_00019-1017670881.vg’
removed ‘/etc/lvm/archive/turnkey_00020-1950107248.vg’
removed ‘/etc/lvm/archive/turnkey_00021-729446236.vg’
removed ‘/etc/lvm/archive/turnkey_00022-1595834097.vg’
removed ‘/etc/lvm/archive/turnkey_00023-753888356.vg’
removed ‘/etc/lvm/archive/turnkey_00024-1984875033.vg’
</pre>
- https://forum.proxmox.com/threads/shared-storage-suggestion-for-a-5-node-cluster.26727/page-2#post-134706
- https://pve.proxmox.com/wiki/Storage_Model#LVM_Groups_with_Network_Backing for lxc.
- Pay attention to these two important recommendations:
- disable 'use LUNs directly'
- Enable shared use (recommended)
1- pve storage > add iscsi >
- ID: zvol-8801
- Portal: 10.2.2.21
- Target: your choice.
- disable use LUNs directly
2- Use this target with the iSCSI plugin to create a shared LUN for Proxmox, on which you create an LVM storage with network backing (see the storage.cfg sketch after these steps).
- pve > storage > add LVM
- storage name: lvm-8801-
- For 'Base Storage', use the drop-down menu to select the previously defined iSCSI target: zvol-8801 (iSCSI)
- For 'Base Volume' select the correct LUN. Picking the wrong LUN will cause serious issues.
- For 'Volume Group Name' give a unique name (this name cannot be changed later): vg-8801
- Content: LXC Only
- Enable shared use (recommended)
- Click Add
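After these two steps, /etc/pve/storage.cfg should contain roughly the following two entries (a sketch; the target IQN and the 'base' volume string are placeholders, since the base value is filled in by PVE when you pick the LUN in the GUI):
<pre>
iscsi: zvol-8801
        portal 10.2.2.21
        target iqn.2010-09.org.napp-it:<your target>
        content none

lvm: lvm-8801-
        base zvol-8801:<LUN id chosen in the GUI>
        vgname vg-8801
        shared 1
        content rootdir
</pre>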
kvm notes
If you use thin-provisioned zvols you should choose SCSI disks and the virtio-scsi controller, because COMSTAR (the iSCSI target framework in OmniOS) supports the SCSI UNMAP command. This means the TRIM command is honored by COMSTAR, and trimmed blocks will therefore be released from the zvol back to the pool.
- https://forum.proxmox.com/threads/suggestions-for-san-config.26216/page-2#post-131751
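As a hedged example, a guest set up this way would have something like the following in its /etc/pve/qemu-server/<vmid>.conf (VMID, disk name and size are placeholders; 'discard=on' is what passes TRIM through to the zvol):
<pre>
scsihw: virtio-scsi-pci
scsi0: iscsi-sys4:vm-100-disk-1,discard=on,size=32G
</pre>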
nfs for backups
- create a ZFS filesystem to store dump files: create zfs tank/bkup
- click ZFS Filesystems and set NFS to ON for that zfs, or put in the 'rw=..' line below; I think that was needed for CLI access.
- enable nfs services > nfs . make sure service is online.
- at pve add nfs storage
- permissions : see https://forum.proxmox.com/threads/nappit-omnios-how-to-set-up-for-backup-storage.26921/
- In OmniOS 'ZFS Filesystems', click the NFS field of the dataset which has NFS enabled and add the following after 'sharenfs='
- note that NFS has to be turned off first, then click NFS again and put in the setting
I used this:
rw=@10.2.2.0/24:@10.1.0.0/16,root=@10.2.2.0/24:10.1.0.0/16
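Equivalently, the same share options can be set from the OmniOS CLI (a sketch, assuming the tank/bkup filesystem created above):
<pre>
zfs set sharenfs='rw=@10.2.2.0/24:@10.1.0.0/16,root=@10.2.2.0/24:10.1.0.0/16' tank/bkup
</pre>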
backup from omnios
- TBD add more.
- http://napp-it.org/extensions/replication_en.html
- https://github.com/oetiker/znapzend/blob/master/README.md
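Until this section is filled in, a minimal manual approach is a snapshot plus zfs send/receive to another ZFS host (hostnames, pool names and the snapshot name below are hypothetical):
<pre>
# on the OmniOS box: snapshot the zvol and send it to a backup host
zfs snapshot tank/zvol-8801@bkup-2016-06-11
zfs send tank/zvol-8801@bkup-2016-06-11 | ssh root@backuphost zfs receive -F backuppool/zvol-8801
</pre>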
support
OmniOS has commercial support and modules. If your company will use napp-it / OmniOS, consider supporting the project.
links
- https://forum.proxmox.com/threads/iscsi-san-presented-as-nfs-using-freenas.26679/
- http://www.napp-it.org/index_en.html
- http://www.napp-it.org/manuals/
- install & initial set up: http://www.napp-it.org/doc/downloads/napp-it.pdf
- tuning http://www.napp-it.org/manuals/tuning.html
- http://omnios.omniti.com/wiki.php/GeneralAdministration
- HardForum: http://hardforum.com/showthread.php?t=1573272
- ServetheHome: https://forums.servethehome.com/index.php?forums/solaris-nexenta-openindiana-and-napp-it.26/
- http://virtuallyhyper.com/2013/04/installing-and-configuring-omnios/