Iscsi/nappit

draft

rough notes to be formatted and added

== links ==

TBD: add more links?

http://www.napp-it.org/manuals/

HardForum: http://hardforum.com/showthread.php?t=1573272

ServetheHome: https://forums.servethehome.com/index.php?forums/solaris-nexenta-openindiana-and-napp-it.26/

== hardware ==

The CPU must be compatible with illumos; see https://illumos.org/hcl/

http://ptribble.blogspot.dk/2016/03/supermicro-illumos-compatible-server.html

== operating system install ==

* Download usb-dd file from http://omnios.omniti.com/wiki.php/Installation

http://www.napp-it.org/doc/downloads/napp-it.pdf

There are a few install options; I used this one (from the napp-it PDF):

 4. OS Installation options
 4.1 manual barebone installation from USB (use this per default)
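
A hedged sketch of writing that usb-dd image to a stick from a Linux box; the image filename and /dev/sdX are placeholders, so check the device with lsblk before running dd:

 # WARNING: dd overwrites the target device - double-check /dev/sdX with lsblk first
 dd if=omnios-usb-dd.img of=/dev/sdX bs=1M conv=fsync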

== after install ==

Set up the network: follow section 5.2 "Setup napp-it storage appliance with manual ip settings" in the napp-it PDF.

Set up mtu 9000 (see the configure omnios section below). [ move that info here ? ]

== access web page ==

http://<IP>:81/cgi-bin/admin.pl

Info: First login: admin, no password; delete napp-it.cfg to get defaults

Please do not disable Javascript. For realtime monitoring you need a HTML5 websocket capable browser like a newer Firefox or Google Chrome. Logins are valid for current day on all appliances with the same password. It is suggested to use a secure SAN or management network.
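
To verify the web service is reachable before opening a browser, a quick check from any host (the IP is an example):

 # expect an HTTP response from the napp-it web server on port 81
 curl -I http://10.12.12.41:81/cgi-bin/admin.pl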

== zpools ==

Ten 480 GB SSD drives.

Mir suggests two pools (https://forum.proxmox.com/threads/shared-storage-suggestion-for-a-5-node-cluster.26727/):

# Raid10 or striped mirror pool (2x2 disks should be sufficient) for I/O-intensive servers like database servers.
# raidz2 pool (the remaining 6 disks) for the rest.
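
A hedged sketch of the matching zpool commands; the pool names and the cXtYdZ device names are placeholders (list the real disks with format first):

 # pool 1: striped mirror (raid10) from 4 disks
 zpool create fastpool mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0
 # pool 2: raidz2 from the remaining 6 disks
 zpool create tank raidz2 c1t4d0 c1t5d0 c1t6d0 c1t7d0 c1t8d0 c1t9d0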


See the ZIL ("write-log") section at https://www.napp-it.org/manuals/concept_en.html

When in doubt, use a write-log. See the notes and tests at iscsi/tests or search the forum.

napp-it refers to the ZIL as 'write-log' << TBD check this

== mirror ==

Set up a 4-drive striped mirror (sketched below):

# create the pool as a 2-drive mirror
# extend the pool with a second 2-drive mirror
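
Roughly, with placeholder pool and device names:

 # 1- create the pool as a 2-drive mirror
 zpool create tank2 mirror c1t0d0 c1t1d0
 # 2- extend it with a second 2-drive mirror (striped with the first)
 zpool add tank2 mirror c1t2d0 c1t3d0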

To start, do not use a write-log [ zil ].

My first tests show similar speed to raidz1 + ZIL.

== raidz1 ==

5 SSD drives for data.

1 write-log SSD: Intel super one << put model numbers TBD

With a write-log drive, vzrestore was 22 times faster than without.
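
A hedged sketch of creating such a pool in one step; device names are placeholders:

 # 5 data disks in raidz1 plus a dedicated write-log (ZIL) device
 zpool create tank raidz1 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 log c1t5d0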

== cli adjustments ==

* there probably is a way to do this from napp-it

 # needed CLI, at least to turn off atime:
 zfs set atime=off tank
 zfs set compression=lz4 tank
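
A quick check that both properties took effect (pool name tank as above):

 zfs get atime,compression tank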

== configure omnios ==

=== Network mtu 9000 ===

Note: use dladm show-phys to see which driver the NIC uses, and therefore which /kernel/drv/*.conf file to edit.

 vi /kernel/drv/ixgbe.conf
 default_mtu = 9000;

Then reboot.
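
Alternatively, the MTU can be set at runtime with dladm. A hedged sketch only: on some illumos releases the interface must be unplumbed first, so run this from the console, not over that NIC:

 # take the interface down, raise the mtu, bring it back up
 ifconfig ixgbe0 unplumb
 dladm set-linkprop -p mtu=9000 ixgbe0
 ifconfig ixgbe0 plumb 10.12.12.41 netmask 255.255.255.0 up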

Check the result:

 root@sys4:/root#  dladm show-linkprop -p mtu ixgbe0
 LINK         PROPERTY        PERM VALUE          DEFAULT        POSSIBLE
 ixgbe0       mtu             rw   9000           1500           1500-15500

== ssh keys at pve for iscsi ==

Edit /etc/ssh/sshd_config on all cluster nodes:

 GSSAPIAuthentication no
 UseDNS no
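
Then restart sshd on each node so the change takes effect (PVE is Debian-based):

 systemctl restart ssh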

Set up ssh keys for iscsi; change 10.12.12.41 as needed:

 mkdir /etc/pve/priv/zfs
 ssh-keygen -f /etc/pve/priv/zfs/10.12.12.41_id_rsa
 ssh-copy-id -i /etc/pve/priv/zfs/10.12.12.41_id_rsa.pub root@10.12.12.41

Test it and accept the host key. Do this on all PVE cluster nodes:

 ssh -i /etc/pve/priv/zfs/10.12.12.41_id_rsa root@10.12.12.41

Before putting the machine into production, configure sshd on the storage host so root can log in over ssh with keys only (PermitRootLogin without-password).
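
Once ssh works, the storage can be added on the PVE side as ZFS over iSCSI. A minimal /etc/pve/storage.cfg sketch, assuming the pool name tank from above; the storage name and blocksize are examples, and <target-iqn> must be replaced with the real Comstar target (itadm list-target on OmniOS):

 zfs: omnios
        blocksize 4k
        iscsiprovider comstar
        pool tank
        portal 10.12.12.41
        target <target-iqn>
        content images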