[PVE-User] Cluster network via directly connected interfaces?

Ronny Aasen ronny+pve-user at aasen.cx
Fri Nov 23 13:55:41 CET 2018


Keep in mind that this is just a thought experiment... but I think it is more 
standardized than the spanning tree or bond0 hacks.

That being said, I am absolutely testing this when I get some available 
hardware :)


If you could do it with IPv6 it would probably be less complex:
a single loopback address and the bird routing daemon... DONE!
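Roughly what I have in mind, as a bird 2.x sketch (untested; the interface 
names and the loopback are just placeholders for the direct links):

  # /etc/bird/bird.conf - OSPFv3 over the direct links, BFD for fast failover
  protocol device { }

  protocol bfd {
      interface "eth*";                   # run BFD on the point-to-point links
  }

  protocol ospf v3 mesh {
      ipv6 { import all; export all; };
      area 0 {
          interface "lo" { stub yes; };   # announce the node's loopback address
          interface "eth1", "eth2" {      # direct links to the other two nodes
              type ptp;
              bfd yes;
          };
      };
  }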

With IPv4 it is a mess, as usual. You would also need /30 (or /31 point-to-point) 
link networks on every single link.
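One node's /etc/network/interfaces could then look roughly like this (again 
only a sketch; the addresses and interface names are made up, and /31 
point-to-point addressing needs RFC 3021 support, which Linux has):

  # the loopback carries the stable per-node address that corosync targets
  auto lo
  iface lo inet loopback
      up ip addr add 10.10.10.1/32 dev lo

  # direct link towards pve02
  auto eth1
  iface eth1 inet static
      address 10.0.12.0/31

  # direct link towards pve03
  auto eth2
  iface eth2 inet static
      address 10.0.13.0/31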

Multicast would not work unless you used some sort of multicast proxy 
or multicast routing daemon; I have no idea how that would work.
All my clusters are small enough that I just use IP-targeted corosync over unicast UDP.
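The relevant corosync.conf sections for that look roughly like this (a 
corosync 2.x sketch; the cluster name and node addresses are made up):

  totem {
      version: 2
      cluster_name: pvecluster
      transport: udpu              # unicast UDP, no multicast needed
  }

  nodelist {
      node {
          ring0_addr: 10.10.10.1
          nodeid: 1
      }
      node {
          ring0_addr: 10.10.10.2
          nodeid: 2
      }
      node {
          ring0_addr: 10.10.10.3
          nodeid: 3
      }
  }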

Ronny Aasen



On 11/23/18 12:30 PM, Stefan M. Radman wrote:
> Hi Ronny
> 
> That's the first time I've heard of a routing protocol in the corosync context.
> Doesn't that add a whole lot of complexity to the setup?
> Would it work with corosync multicast?
> 
> Stefan
> 
>> On Nov 23, 2018, at 12:00 PM, Ronny Aasen <ronny+pve-user at aasen.cx> wrote:
>>
>> Personally, if I were to experiment with something non-default, I would try to use OSPF+BFD with either bird or quagga.
>>
>> - You get quick failovers thanks to BFD.
>> - You can use equal-cost multipath (ECMP) to utilize multiple ports between servers.
>> - All links are active, so you do not have a "passive" link as you do with STP.
>> - There is no needless duplication of data, so you do not get the 50% bandwidth loss of a broadcast bond.
>> - You need to use corosync with targeted UDP towards specific loopback addresses.
>> - Traffic takes the shortest path, so it always goes towards the correct server.
>> - You can very easily expand beyond 3 nodes if you have enough ports, or move the OSPF domain onto a switch if needed. This also converts easily to a multiple-switch config to maintain HA with no SPOF.
>>
>> Happy experimentation!
>>
>> Best regards,
>> Ronny Aasen
>>
>>
>>
>>
>>
>> On 11/22/18 7:29 PM, Frank Thommen wrote:
>>> Please excuse me if this is too basic, but after reading https://pve.proxmox.com/wiki/Cluster_Manager I wondered whether the cluster/corosync network could be built with directly connected network interfaces, i.e. not like this:
>>>   +-------+
>>>   | pve01 |----------+
>>>   +-------+          |
>>>                      |
>>>   +-------+     +----------------+
>>>   | pve02 |-----| network switch |
>>>   +-------+     +----------------+
>>>                      |
>>>   +-------+          |
>>>   | pve03 |----------+
>>>   +-------+
>>> but like this:
>>>   +-------+
>>>   | pve01 |---+
>>>   +-------+   |
>>>       |       |
>>>   +-------+   |
>>>   | pve02 |   |
>>>   +-------+   |
>>>       |       |
>>>   +-------+   |
>>>   | pve03 |---+
>>>   +-------+
>>> (all connections 1 Gbit; there are currently no plans to extend beyond three nodes)
>>> I can't see any drawback in that solution.  It would remove one layer of hardware dependency and a potential SPOF (the switch).  If we don't trust the interfaces, we might be able to configure a second network with the three remaining interfaces.
>>> Is such a "direct-connection" topology feasible?  Recommended? Strictly not recommended?
>>> I am currently just planning and thinking and there is no cluster (or even a PROXMOX server) in place.
>>> Cheers
>>> frank
> 



