ISCSI Multipath

From Proxmox VE
Revision as of 12:05, 4 September 2012

Introduction

The main purpose of multipath connectivity is to provide redundant access to storage devices, i.e. to retain access to a storage device when one or more components in a path fail. Another advantage of multipathing is increased throughput by way of load balancing. A common example for the use of multipathing is an iSCSI SAN connected storage device, which gives you both redundancy and maximum performance.

If you use iSCSI, multipath is recommended - it works without any configuration on the switches. (If you use NFS or CIFS, use bonding instead, e.g. 802.3ad.)

The connection from the Proxmox VE host through the iSCSI SAN to a storage device is referred to as a path. When multiple paths exist to a storage device (LUN) on a storage subsystem, this is referred to as multipath connectivity. Therefore you need to make sure that you have at least two NICs dedicated to iSCSI, using separate networks (and separate switches, to be protected against switch failures).

This is a generic how-to. Please consult the storage vendor documentation for vendor specific settings.

Update your iSCSI configuration

It is important to start all required iSCSI connections at boot time. You can do that by setting 'node.startup' to 'automatic'.

The default 'node.session.timeo.replacement_timeout' is 120 seconds. We recommend using a much smaller value of 15 seconds instead.

You can set those values in '/etc/iscsi/iscsid.conf' (as defaults for new targets). If you are already connected to the iSCSI target, you also need to modify the target-specific settings in '/etc/iscsi/nodes/<TARGET>/<PORTAL>/default'.

A modified 'iscsid.conf' file contains the following lines:

node.startup = automatic
node.session.timeo.replacement_timeout = 15
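Both settings can also be applied non-interactively. The following is a minimal sketch using sed; it works on a temporary copy of the file so it is safe to try anywhere - on a real host you would point it at '/etc/iscsi/iscsid.conf' (and, for already-discovered targets, at the files under '/etc/iscsi/nodes/') instead.

```shell
# Work on a temporary copy; on a real host, target /etc/iscsi/iscsid.conf
# directly (as root). The stanza below mirrors the shipped defaults.
conf=$(mktemp)
cat > "$conf" <<'EOF'
#node.startup = automatic
node.startup = manual
node.session.timeo.replacement_timeout = 120
EOF

# Enable automatic login and lower the replacement timeout to 15 seconds.
sed -i \
    -e 's/^node\.startup = manual/node.startup = automatic/' \
    -e 's/^node\.session\.timeo\.replacement_timeout = .*/node.session.timeo.replacement_timeout = 15/' \
    "$conf"

# Show the resulting active (uncommented) settings.
grep '^node\.' "$conf"
```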

Please configure your iSCSI storage in the GUI if you have not done so already ("Datacenter/Storage: Add iSCSI target").

Install multipath tools

A default installation does not include the multipath-tools package, so you first need to install it:

# aptitude install multipath-tools

Multipath configuration

Then you need to create the multipath configuration file '/etc/multipath.conf'. You can find details about all settings in the manual page:

# man multipath.conf

We recommend using the 'wwid' (World Wide Identifier) to identify disks. You can use the 'scsi_id' command to get the 'wwid' of a specific device. For example, the following command returns the 'wwid' of device '/dev/sda':

# /lib/udev/scsi_id -g -u -d /dev/sda

We normally blacklist all devices, and only allow specific devices using 'blacklist_exceptions':

blacklist {
        wwid *
}

blacklist_exceptions {
        wwid "3600144f028f88a0000005037a95d0001"
        wwid "3600144f028f88a0000005037a95d0002"
}

We also use the 'alias' directive to give each device a readable name, but this is optional:

multipaths {
  multipath {
        wwid "3600144f028f88a0000005037a95d0001"
        alias mpath0
  }
  multipath {
        wwid "3600144f028f88a0000005037a95d0002"
        alias mpath1
  }
}

And finally you need reasonable defaults. We normally use the following multibus configuration:

defaults {
        polling_interval        2
        selector                "round-robin 0"
        path_grouping_policy    multibus
        getuid_callout          "/lib/udev/scsi_id -g -u -d /dev/%n"
        rr_min_io               100
        failback                immediate
        no_path_retry           30
}
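Putting the three fragments above together, a complete '/etc/multipath.conf' for this example would look like the following (the wwids are, of course, just the sample values from above and must be replaced with your own):

```
blacklist {
        wwid *
}

blacklist_exceptions {
        wwid "3600144f028f88a0000005037a95d0001"
        wwid "3600144f028f88a0000005037a95d0002"
}

multipaths {
  multipath {
        wwid "3600144f028f88a0000005037a95d0001"
        alias mpath0
  }
  multipath {
        wwid "3600144f028f88a0000005037a95d0002"
        alias mpath1
  }
}

defaults {
        polling_interval        2
        selector                "round-robin 0"
        path_grouping_policy    multibus
        getuid_callout          "/lib/udev/scsi_id -g -u -d /dev/%n"
        rr_min_io               100
        failback                immediate
        no_path_retry           30
}
```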

Also check your SAN vendor documentation.

To activate those settings you need to restart the multipath daemon with:

# service multipath-tools restart

Query device status

You can view the status with:

# multipath -ll
mpath0 (3600144f028f88a0000005037a95d0001) dm-3 NEXENTA,NEXENTASTOR
size=64G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=2 status=active
  |- 4:0:0:0 sdb 8:16 active ready running
  `- 3:0:0:0 sdc 8:32 active ready running
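For a quick scripted health check you can count the active paths per device. The snippet below is a minimal sketch that parses the sample 'multipath -ll' output shown above; on a real host you would pipe 'multipath -ll' directly into the same awk program. Every path of every LUN should report 'active ready running'.

```shell
# Sample 'multipath -ll' output as shown above; on a real host use:
#   multipath -ll | awk '...'
output=$(cat <<'EOF'
mpath0 (3600144f028f88a0000005037a95d0001) dm-3 NEXENTA,NEXENTASTOR
size=64G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=2 status=active
  |- 4:0:0:0 sdb 8:16 active ready running
  `- 3:0:0:0 sdc 8:32 active ready running
EOF
)

echo "$output" | awk '
/dm-[0-9]/ { dev = $1 }                 # header line: "mpath0 (<wwid>) dm-3 ..."
/active ready running/ { paths[dev]++ } # healthy path line
END { for (d in paths) print d ": " paths[d] " active paths" }'
```

For the sample above this prints 'mpath0: 2 active paths'; a device reporting fewer paths than expected indicates a failed path.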

To get more detailed information about the devices in use, run:

# multipath -v3

Vendor specific settings

Please add vendor specific recommendations here.

External links