[PVE-User] feature request - automatically add all iSCSI targets which were discovered

Tomasz Chmielewski mangoo at wpkg.org
Sun Oct 25 12:17:52 CET 2009


Dietmar Maurer wrote:
>> Say, you have a SAN machine with LVM as a "backend" for iSCSI targets
>> (for guests).
>> Exporting each logical volume as a separate target lets you to:
>>
>> - add new targets as you wish - on initiators, you just "rediscover",
>>
>> - resize the targets as you wish - on initiators, you just have to
>> disconnect one target (the one which is about to be resized), and
>> connect it again,
>>
>> - you can kill unneeded targets as you wish.
> 
> But you get exactly the same features when you export one target with one lun (and use lvm on pve)?
> 
> And it is quite easy to manage it that way.

But such a scenario assumes that you only use Proxmox for virtualization, and that virtualization is all you use.


What if you also have VMware, or Xen? LVM is not cluster aware (at least not in standard setups).
Or what if you just want to use single targets on bare-metal hardware?

Then you run into a lot of problems managing all that on a SAN: you have to decide which resources are assigned to PVE
and which are assigned to other things (VMware, Xen, bare metal), and generally you won't be able to mix the two (i.e.
migrate from VMware or Xen to PVE).



It is a matter of what one is used to, yes.

I just want to say that such an approach (a separate target per guest) is widely used. Not only does it let you make changes easily;
it also makes it clear which guest uses which /dev/sd* iSCSI disk on the Proxmox machine.
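The "just rediscover" / "disconnect and reconnect" steps mentioned above come down to a couple of open-iscsi commands on the initiator (the portal IP and IQN below are examples, not from a real setup):

```shell
# Pick up newly added targets from the SAN:
iscsiadm -m discovery -t sendtargets -p 192.168.112.68
iscsiadm -m node -T iqn.2009-03.net.syneticon:san3.test -p 192.168.112.68 --login

# After resizing a target on the SAN, log out and back in
# so the initiator sees the new size:
iscsiadm -m node -T iqn.2009-03.net.syneticon:san3.test -p 192.168.112.68 --logout
iscsiadm -m node -T iqn.2009-03.net.syneticon:san3.test -p 192.168.112.68 --login
```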

For example, with different guests using different LUNs on one target, it's not obvious which guest uses which LUN:

# ls /dev/disk/by-path/*test* -l
lrwxrwxrwx 1 root root  9 2009-10-24 02:03 /dev/disk/by-path/ip-192.168.112.68:3260-iscsi-iqn.2009-03.net.syneticon:san3.test-lun-1 -> ../../sdr
lrwxrwxrwx 1 root root 10 2009-10-24 02:03 /dev/disk/by-path/ip-192.168.112.68:3260-iscsi-iqn.2009-03.net.syneticon:san3.test-lun-1-part1 -> ../../sdr1
lrwxrwxrwx 1 root root 10 2009-10-24 02:03 /dev/disk/by-path/ip-192.168.112.68:3260-iscsi-iqn.2009-03.net.syneticon:san3.test-lun-1-part2 -> ../../sdr2
lrwxrwxrwx 1 root root 10 2009-10-24 02:03 /dev/disk/by-path/ip-192.168.112.68:3260-iscsi-iqn.2009-03.net.syneticon:san3.test-lun-1-part5 -> ../../sdr5
lrwxrwxrwx 1 root root 10 2009-10-25 11:57 /dev/disk/by-path/ip-192.168.112.68:3260-iscsi-iqn.2009-03.net.syneticon:san3.test-lun-2 -> ../../sdai


With one target per guest, it is clear which guest uses which /dev/sd* iSCSI disk:

lrwxrwxrwx 1 root root  9 2009-10-24 02:03 ip-192.168.112.66:3260-iscsi-iqn.2009-03.net.syneticon:san2.spla-dc1-lun-1 -> ../../sdj
lrwxrwxrwx 1 root root 10 2009-10-24 02:03 ip-192.168.112.66:3260-iscsi-iqn.2009-03.net.syneticon:san2.spla-dc1-lun-1-part1 -> ../../sdj1
lrwxrwxrwx 1 root root  9 2009-10-24 02:03 ip-192.168.112.66:3260-iscsi-iqn.2009-03.net.syneticon:san2.spla-ts1-lun-1 -> ../../sdk
lrwxrwxrwx 1 root root 10 2009-10-24 02:03 ip-192.168.112.66:3260-iscsi-iqn.2009-03.net.syneticon:san2.spla-ts1-lun-1-part1 -> ../../sdk1
lrwxrwxrwx 1 root root  9 2009-10-24 02:03 ip-192.168.112.66:3260-iscsi-iqn.2009-04.net.syneticon:san2.opennms1-data-lun-1 -> ../../sds
lrwxrwxrwx 1 root root 10 2009-10-24 02:03 ip-192.168.112.66:3260-iscsi-iqn.2009-04.net.syneticon:san2.opennms1-data-lun-1-part1 -> ../../sds1
lrwxrwxrwx 1 root root 10 2009-10-24 02:04 ip-192.168.112.66:3260-iscsi-iqn.2009-05.net.syneticon:san2.synnet-sbs-lun-1 -> ../../sdag
lrwxrwxrwx 1 root root 11 2009-10-24 02:04 ip-192.168.112.66:3260-iscsi-iqn.2009-05.net.syneticon:san2.synnet-sbs-lun-1-part1 -> ../../sdag1
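With that naming scheme, the mapping is even scriptable. A hypothetical helper (assuming the one-target-per-guest naming shown above, where the IQN ends in "<san>.<guest>") to recover the guest name from a by-path symlink:

```shell
# guest_from_bypath: derive the guest name from a /dev/disk/by-path
# iSCSI symlink, assuming targets are named "...:<san>.<guest>".
guest_from_bypath() {
    local name=${1##*/}          # basename of the symlink
    name=${name#*-iscsi-iqn.*:}  # drop everything through the IQN's colon
    name=${name%-lun-*}          # drop the trailing "-lun-N[-partM]"
    printf '%s\n' "${name#*.}"   # drop the "<san>." prefix
}

guest_from_bypath \
  "/dev/disk/by-path/ip-192.168.112.66:3260-iscsi-iqn.2009-03.net.syneticon:san2.spla-dc1-lun-1"
# prints: spla-dc1
```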


When is it useful? Some of the uses are:
- to mount the disk on the Proxmox machine (assuming no guest is using it) to repair the guest or copy data from it,
- to dd a guest image from one SAN to another (or locally),
- to dd a guest image within one SAN machine from the Proxmox machine (doing this directly on the SAN would require removing the given LUN first,
  because the initiator and the SAN cache the data differently, which leads to corruption),

and so on.
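For instance, copying a guest image from one SAN to another from the Proxmox host might look like this (a sketch only; the guest must be stopped, and the destination target/IQN below is made up for illustration):

```shell
# Source and destination iSCSI disks, addressed by their stable by-path
# names (destination IQN is an example, not from a real setup):
SRC=/dev/disk/by-path/ip-192.168.112.66:3260-iscsi-iqn.2009-05.net.syneticon:san2.synnet-sbs-lun-1
DST=/dev/disk/by-path/ip-192.168.112.68:3260-iscsi-iqn.2009-03.net.syneticon:san3.synnet-sbs-lun-1

# Copy the whole image; conv=fsync makes dd flush before exiting.
dd if="$SRC" of="$DST" bs=1M conv=fsync
```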


To end the discussion:

I guess this is not necessarily critical for most Proxmox VE users, but it is generally required for bigger setups
(and as Proxmox VE improves, the setups based on it grow too).


-- 
Tomasz Chmielewski
http://wpkg.org


