ISCSI Multipath
Revision as of 08:49, 16 July 2013
Introduction
The main purpose of multipath connectivity is to provide redundant access to storage devices, i.e. to keep access to a storage device when one or more components in a path fail. Another advantage of multipathing is increased throughput through load balancing. A common example is an iSCSI SAN connected storage device, which gives you both redundancy and maximum performance.
If you use iSCSI, multipath is recommended; it works without any configuration on the switches. (If you use NFS or CIFS, use bonding instead, e.g. 802.3ad.)
The connection from the Proxmox VE host to the iSCSI SAN is referred to as a path. When multiple paths exist to a storage device (LUN) on a storage subsystem, this is referred to as multipath connectivity. You therefore need at least two NICs dedicated to iSCSI, each using a separate network (and separate switches, to be protected against switch failures).
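As a sketch, two dedicated iSCSI NICs on separate subnets could look like this in '/etc/network/interfaces' (the interface names and addresses are examples only; adjust them to your environment):

```
auto eth1
iface eth1 inet static
    address 192.168.10.10
    netmask 255.255.255.0

auto eth2
iface eth2 inet static
    address 192.168.20.10
    netmask 255.255.255.0
```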
This is a generic how-to. Please consult the storage vendor documentation for vendor specific settings.
Update your iSCSI configuration
It is important to start all required iSCSI connections at boot time. You can do that by setting 'node.startup' to 'automatic'.
The default 'node.session.timeo.replacement_timeout' is 120 seconds. We recommend using a much smaller value of 15 seconds instead.
You can set those values in '/etc/iscsi/iscsid.conf' (the defaults for newly discovered targets). If you are already connected to the iSCSI target, you also need to modify the target-specific settings in '/etc/iscsi/nodes/<TARGET>/<PORTAL>/default'.
A modified 'iscsid.conf' file contains the following lines:
node.startup = automatic
node.session.timeo.replacement_timeout = 15
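Settings in 'iscsid.conf' only apply to targets discovered afterwards. For an already-discovered target you can update the node records with 'iscsiadm'; the target name and portal below are placeholders for your own values:

```shell
iscsiadm -m node -T iqn.2013-07.example:storage -p 192.168.10.1 \
    -o update -n node.startup -v automatic
iscsiadm -m node -T iqn.2013-07.example:storage -p 192.168.10.1 \
    -o update -n node.session.timeo.replacement_timeout -v 15
```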
Please configure your iSCSI storage on the GUI if you have not done that already ("Datacenter/Storage: Add iSCSI target").
Install multipath tools
The default installation does not include this package, so you first need to install the multipath-tools package:
# aptitude update
# aptitude install multipath-tools
Multipath configuration
Then you need to create the multipath configuration file '/etc/multipath.conf'. You can find details about all settings on the manual page:
# man multipath.conf
We recommend using the 'wwid' (World Wide Identification) to identify disks. You can use the 'scsi_id' command to get the 'wwid' for a specific device. For example, the following command returns the 'wwid' for device '/dev/sda':
# /lib/udev/scsi_id -g -u -d /dev/sda
We normally blacklist all devices, and only allow specific devices using 'blacklist_exceptions':
blacklist {
    wwid *
}
blacklist_exceptions {
    wwid "3600144f028f88a0000005037a95d0001"
    wwid "3600144f028f88a0000005037a95d0002"
}
We also use the 'alias' directive to name the device, but this is optional:
multipaths {
    multipath {
        wwid "3600144f028f88a0000005037a95d0001"
        alias mpath0
    }
    multipath {
        wwid "3600144f028f88a0000005037a95d0002"
        alias mpath1
    }
}
And finally you need reasonable defaults. We normally use the following multibus configuration:
defaults {
    polling_interval 2
    path_selector "round-robin 0"
    path_grouping_policy multibus
    getuid_callout "/lib/udev/scsi_id -g -u -d /dev/%n"
    rr_min_io 100
    failback immediate
    no_path_retry queue
}
Note: If you run multipath on Proxmox VE 3.0 or later, you need to adapt your multipath.conf: 'selector' is now called 'path_selector'.
Also check your SAN vendor documentation.
To activate those settings you need to restart the multipath daemon with:
# service multipath-tools restart
Example multipath.conf
# cat /etc/multipath.conf
defaults {
    polling_interval 2
    path_selector "round-robin 0"
    path_grouping_policy multibus
    getuid_callout "/lib/udev/scsi_id -g -u -d /dev/%n"
    rr_min_io 100
    failback immediate
    no_path_retry queue
}
blacklist {
    wwid *
}
blacklist_exceptions {
    wwid "3600144f028f88a0000005037a95d0001"
}
multipaths {
    multipath {
        wwid "3600144f028f88a0000005037a95d0001"
        alias nexenta0
    }
}
Query device status
You can view the status with:
# multipath -ll
mpath0 (3600144f028f88a0000005037a95d0001) dm-3 NEXENTA,NEXENTASTOR
size=64G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=2 status=active
  |- 5:0:0:0 sdb 8:16 active ready running
  `- 6:0:0:0 sdc 8:32 active ready running
To get more information about the devices in use, run:
# multipath -v3
Performance test with fio
In order to check the performance, you can use fio.
Example read test:
fio --filename=/dev/mapper/mpath0 --direct=1 --rw=read --bs=1m --size=20G --numjobs=200 --runtime=60 --group_reporting --name=file1
Vendor specific settings
Please add vendor specific recommendations here.
Dell
You need to permanently load the Dell-specific module scsi_dh_rdac. To do this, edit:
nano /etc/modules
# /etc/modules: kernel modules to load at boot time.
#
# This file contains the names of kernel modules that should be loaded
# at boot time, one per line. Lines beginning with "#" are ignored.
# Parameters can be specified after the module name.

scsi_dh_rdac
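Entries in '/etc/modules' only take effect at the next boot. To load the module immediately without rebooting, you can run the following (assuming the module is available in your running kernel):

```shell
# Load the RDAC device handler now and verify it is loaded
modprobe scsi_dh_rdac
lsmod | grep scsi_dh_rdac
```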
defaults {
    polling_interval 2
    selector "round-robin 0"
    path_grouping_policy multibus
    getuid_callout "/lib/udev/scsi_id -g -u -d /dev/%n"
    rr_min_io 100
    failback immediate
    no_path_retry queue
}
blacklist {
    wwid *
}
blacklist_exceptions {
    wwid 3690b22c00008da2c000008a35098b0dc
}
devices {
    device {
        vendor "DELL"
        product "MD32xxi"
        path_grouping_policy group_by_prio
        prio rdac
        polling_interval 5
        path_checker rdac
        path_selector "round-robin 0"
        hardware_handler "1 rdac"
        failback immediate
        features "2 pg_init_retries 50"
        no_path_retry 30
        rr_min_io 100
    }
}
multipaths {
    multipath {
        wwid 3690b22c00008da2c000008a35098b0dc
        alias md3200i
    }
}
And you need to configure a suitable filter in /etc/lvm/lvm.conf to avoid error messages.
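As an example, a filter that scans only the multipath devices and the local boot disk could look like this (the device names here are assumptions; adjust them to your setup):

```
filter = [ "a|/dev/mapper/|", "a|/dev/sda.*|", "r|.*|" ]
```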
See also: