Full Mesh Network for Ceph Server
Introduction
This wiki page describes how to configure a three-node "meshed network" on Proxmox VE (or any other Debian-based Linux distribution), which can be used, for example, to connect Ceph servers or the nodes of a Proxmox VE cluster with the maximum possible bandwidth and without using a switch. This also works for bigger clusters; in general you need (number_of_nodes - 1) NIC ports on each node, e.g. 4 NIC ports per node for a 5-node full mesh. A big advantage of this setup is that you can achieve a fast network connection with 10, 40, or more Gbit/s of bandwidth without buying an expensive network switch that is fast enough.
There are two possible methods to achieve a full mesh:
- Routed: Each packet is sent to the addressed node only
- Broadcast: Each packet is sent to both other nodes
The routed setup is generally recommended. It uses less bandwidth and detects error states more reliably. The advantage of the broadcast method is an easier setup process.
Example
3 servers:
- Node1 with IP addresses x.x.x.50
- Node2 with IP addresses x.x.x.51
- Node3 with IP addresses x.x.x.52
3 to 4 network ports in each server:
- ens18, ens19: used for the actual full mesh, i.e. direct physical connections to the other two servers, 10.15.15.y/24
- ens20: connection to WAN (internet/router), used by vmbr0 with 192.168.2.y
- ens21 (optional): LAN (for cluster traffic, etc.), 10.14.14.y
Direct connections between servers:
- Node1/ens18 - Node2/ens19
- Node2/ens18 - Node3/ens19
- Node3/ens18 - Node1/ens19
                    +-----------+
                    |   Node1   |
                    +-----------+
                    |ens18|ens19|
                    +--+------+-+
                       |      |
        +-----+        |      |       +-----+
+-------+ens18+-----------+   +-------+ens18+-------+
| Node2 +-----+        |  |           +-----+ Node3 |
+-------+ens19+--------+  +-----------+ens19+-------+
        +-----+                       +-----+
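The interface names used here (ens18, ens19, ens20, ens21) are only those of the example; on your hardware they will likely differ. As a quick sketch, you can list the available NICs and their link state on each node before cabling and configuring:

# brief overview of all NICs and their link state (iproute2)
ip -br link show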
Recommended: Routed Setup
Corresponding to the setup example described above, the 3 nodes have to be configured as described in the following sections. Note that multicast is not possible with this method.
Node1
/etc/network/interfaces
auto lo
iface lo inet loopback

iface ens20 inet manual

auto ens21
iface ens21 inet static
        address 10.14.14.50
        netmask 255.255.255.0

# Connected to Node2 (.51)
auto ens18
iface ens18 inet static
        address 10.15.15.50
        netmask 255.255.255.0
        up ip route add 10.15.15.51/32 dev ens18
        down ip route del 10.15.15.51/32

# Connected to Node3 (.52)
auto ens19
iface ens19 inet static
        address 10.15.15.50
        netmask 255.255.255.0
        up ip route add 10.15.15.52/32 dev ens19
        down ip route del 10.15.15.52/32

auto vmbr0
iface vmbr0 inet static
        address 192.168.2.50
        netmask 255.255.240.0
        gateway 192.168.2.1
        bridge_ports ens20
        bridge_stp off
        bridge_fd 0
route
root@pve-2-50:~# ip route
default via 192.168.2.1 dev vmbr0 onlink
10.14.14.0/24 dev ens21 proto kernel scope link src 10.14.14.50
10.15.15.0/24 dev ens18 proto kernel scope link src 10.15.15.50
10.15.15.0/24 dev ens19 proto kernel scope link src 10.15.15.50
10.15.15.52 dev ens19 scope link
10.15.15.51 dev ens18 scope link
192.168.0.0/20 dev vmbr0 proto kernel scope link src 192.168.2.50
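After editing the file, the new configuration has to be activated, e.g. by rebooting the node. As a sketch of how the routed mesh could be applied and verified on Node1 (assuming ifupdown2 is installed, which provides ifreload; otherwise simply reboot):

# apply the changed /etc/network/interfaces without a reboot (ifupdown2)
ifreload -a
# both peers must be reachable over their dedicated link
ping -c 3 10.15.15.51    # Node2, routed via ens18
ping -c 3 10.15.15.52    # Node3, routed via ens19

The same check applies to Node2 and Node3 with the respective peer addresses.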
Node2
/etc/network/interfaces
auto lo
iface lo inet loopback

iface ens20 inet manual

auto ens21
iface ens21 inet static
        address 10.14.14.51
        netmask 255.255.255.0

# Connected to Node3 (.52)
auto ens18
iface ens18 inet static
        address 10.15.15.51
        netmask 255.255.255.0
        up ip route add 10.15.15.52/32 dev ens18
        down ip route del 10.15.15.52/32

# Connected to Node1 (.50)
auto ens19
iface ens19 inet static
        address 10.15.15.51
        netmask 255.255.255.0
        up ip route add 10.15.15.50/32 dev ens19
        down ip route del 10.15.15.50/32

auto vmbr0
iface vmbr0 inet static
        address 192.168.2.51
        netmask 255.255.240.0
        gateway 192.168.2.1
        bridge_ports ens20
        bridge_stp off
        bridge_fd 0
route
root@pve-2-51:/# ip route
default via 192.168.2.1 dev vmbr0 onlink
10.14.14.0/24 dev ens21 proto kernel scope link src 10.14.14.51
10.15.15.0/24 dev ens18 proto kernel scope link src 10.15.15.51
10.15.15.0/24 dev ens19 proto kernel scope link src 10.15.15.51
10.15.15.52 dev ens18 scope link
10.15.15.50 dev ens19 scope link
192.168.0.0/20 dev vmbr0 proto kernel scope link src 192.168.2.51
Node3
/etc/network/interfaces
auto lo
iface lo inet loopback

iface ens20 inet manual

auto ens21
iface ens21 inet static
        address 10.14.14.52
        netmask 255.255.255.0

# Connected to Node1 (.50)
auto ens18
iface ens18 inet static
        address 10.15.15.52
        netmask 255.255.255.0
        up ip route add 10.15.15.50/32 dev ens18
        down ip route del 10.15.15.50/32

# Connected to Node2 (.51)
auto ens19
iface ens19 inet static
        address 10.15.15.52
        netmask 255.255.255.0
        up ip route add 10.15.15.51/32 dev ens19
        down ip route del 10.15.15.51/32

auto vmbr0
iface vmbr0 inet static
        address 192.168.2.52
        netmask 255.255.240.0
        gateway 192.168.2.1
        bridge_ports ens20
        bridge_stp off
        bridge_fd 0
route
root@pve-2-52:~# ip route
default via 192.168.2.1 dev vmbr0 onlink
10.14.14.0/24 dev ens21 proto kernel scope link src 10.14.14.52
10.15.15.0/24 dev ens18 proto kernel scope link src 10.15.15.52
10.15.15.0/24 dev ens19 proto kernel scope link src 10.15.15.52
10.15.15.51 dev ens19 scope link
10.15.15.50 dev ens18 scope link
192.168.0.0/20 dev vmbr0 proto kernel scope link src 192.168.2.52
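Once all three nodes are configured, the 10.15.15.0/24 mesh can be used for Ceph. The following lines are only a sketch of how this could look in /etc/pve/ceph.conf (or /etc/ceph/ceph.conf), assuming the mesh carries the OSD replication traffic (cluster network) while the public network stays on the separate LAN; adapt the networks to your own setup:

[global]
        # Ceph client and monitor traffic (example: the separate LAN)
        public_network = 10.14.14.0/24
        # OSD replication and heartbeat traffic over the full mesh
        cluster_network = 10.15.15.0/24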
Broadcast setup
Create a "broadcast" bond with the given interfaces on every node. This can be done over the GUI or on the command-line.
GUI
In the GUI, go to the node level -> System -> Network. Then click on "Create" and select "Linux Bond". In the wizard, configure the bond without a gateway and set the mode to "broadcast".
Reboot the node to activate the new network settings.
Command-Line
Add the following lines to '/etc/network/interfaces'.
auto bond<No>
iface bond<No> inet static
        address <IP>
        netmask <Netmask>
        slaves <Nic1> <Nic2>
        bond_miimon 100
        bond_mode broadcast
# Full Mesh
Then start the bond
ifup bond<No>
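Whether the bond came up with the intended mode and slave interfaces can be checked via the kernel's bonding status file; a small sketch, with bond0 standing in for your bond name:

# show the bonding mode, MII status and the enslaved NICs
cat /proc/net/bonding/bond0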
On Node1 of the setup example described above, /etc/network/interfaces will look as follows:
iface lo inet loopback

iface ens20 inet manual

auto ens21
iface ens21 inet static
        address 10.14.14.50
        netmask 255.255.255.0

iface ens18 inet manual

iface ens19 inet manual

auto bond0
iface bond0 inet static
        address 10.15.15.50
        netmask 255.255.255.0
        slaves ens18 ens19
        bond_miimon 100
        bond_mode broadcast

auto vmbr0
iface vmbr0 inet static
        address 192.168.2.50
        netmask 255.255.240.0
        gateway 192.168.2.1
        bridge_ports ens20
        bridge_stp off
        bridge_fd 0