Full Mesh Network for Ceph Server

Introduction

This wiki page describes how to configure a three-node "Meshed Network" on Proxmox VE (or any other Debian-based Linux distribution), which can be used, for example, for connecting Ceph Servers or nodes in a Proxmox VE Cluster with the maximum possible bandwidth and without using a switch. This also works with bigger clusters; in general you need (number_of_nodes - 1) NIC ports on each node, e.g. a 5-node full mesh needs 4 NIC ports per node.

The big advantage of this setup is that you can achieve a fast network connection with 10, 40, or more GBit/s of bandwidth WITHOUT buying an expensive, fast-enough network switch. Any kind of Ethernet NIC should work, including 40 GBit or even 100 GBit models; 10 GBit Intel NICs were used to verify this article.

There are two possible methods to achieve a full mesh:

  1. each packet is sent to both of the other nodes (broadcast bond)
  2. each packet is sent only to the addressed node (controlled by routing); therefore multicast is not possible.

In general the first one is recommended, as it is easier to set up and supports multicast. If multicast is not needed on this network, method 2 can make more efficient use of the total bandwidth.

Example

3 servers:

  • Node1 with IP addresses ending in .50 (x.x.x.50)
  • Node2 with IP addresses ending in .51 (x.x.x.51)
  • Node3 with IP addresses ending in .52 (x.x.x.52)

4 Network cards in each server:

  • ens18, ens19: direct connections to the two other servers, 10.15.15.y
  • ens20: connection to the internet (router), port of vmbr0, 192.168.2.y
  • ens21: LAN (for fileserver etc.), 10.14.14.y

Direct connections between servers:

  • Node1/ens18 - Node3/ens19
  • Node2/ens18 - Node1/ens19
  • Node3/ens18 - Node2/ens19
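
Before configuring either method, it is worth making sure that the direct links are cabled as planned and actually have a carrier. This quick check is not part of the original example; it assumes the NIC names from the list above:

# temporarily bring the direct-link NICs up and check whether a link is detected
ip link set ens18 up
ip link set ens19 up
ip -br link show

The two direct-link NICs should show state UP once the cables to the neighbouring nodes are connected.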


Method 1

Create a "broadcast" bond with the given interfaces on every node. This can be done over the GUI or on the command-line.

GUI

In the GUI, go to the node level -> System -> Network. Then click on "Create" and select "Linux Bond". In the wizard, enter the desired IP configuration without a gateway, add the two direct-link NICs as slaves, and set the mode to "broadcast".

Reboot the node to activate the new network settings.

Command-Line

Add the following lines to '/etc/network/interfaces'.

auto bond<No>
iface bond<No> inet static
	address  <IP>
	netmask  <Netmask>
	slaves <Nic1> <Nic2>
	bond_miimon 100
	bond_mode broadcast
#Full Mesh

Then start the bond:

ifup bond<No>
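
Whether the bond was created via the GUI or on the command line, it is worth verifying that it really came up in broadcast mode with both slaves attached. This check is not part of the original example and assumes the bond is named bond0:

# show the bonding mode, MII status and the attached slave interfaces
cat /proc/net/bonding/bond0

The output should report the broadcast mode and contain a "Slave Interface" entry with "MII Status: up" for each of the two direct-link NICs.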

For Node1 of the setup example described above, /etc/network/interfaces will look as follows:

auto lo
iface lo inet loopback

iface ens20 inet manual

auto ens21
iface ens21 inet static
        address  10.14.14.50
        netmask  255.255.255.0


iface ens18 inet manual

iface ens19 inet manual

auto bond0
iface bond0 inet static
        address 10.15.15.50
        netmask 255.255.255.0
        slaves ens18 ens19
        bond_miimon 100
        bond_mode broadcast


auto vmbr0
iface vmbr0 inet static
        address  192.168.2.50
        netmask  255.255.240.0
        gateway  192.168.2.1
        bridge_ports ens20
        bridge_stp off
        bridge_fd 0
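
Once all three nodes have their bond up, the mesh can be given a quick functional test by pinging both peers over the 10.15.15.0/24 network. This is just a sanity check, not part of the original page; the addresses are the ones from the example, run from Node1:

# from Node1: both peers should answer over the broadcast bond
ping -c 3 10.15.15.51
ping -c 3 10.15.15.52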


Method 2

Corresponding to the setup example described above, the three nodes have to be configured as shown in the following sections. Each node uses the same 10.15.15.y address on both direct-link NICs and adds a host route (/32) to each peer via the interface that is cabled to it.

Node1

/etc/network/interfaces

auto lo
iface lo inet loopback

iface ens20 inet manual

auto ens21
iface ens21 inet static
        address  10.14.14.50
        netmask  255.255.255.0

# Connected to Node3 (.52)
auto ens18
iface ens18 inet static
        address  10.15.15.50
        netmask  255.255.255.0
        up ip route add 10.15.15.52/32 dev ens18
        down ip route del 10.15.15.52/32

# Connected to Node2 (.51)
auto ens19
iface ens19 inet static
        address  10.15.15.50
        netmask  255.255.255.0
        up ip route add 10.15.15.51/32 dev ens19
        down ip route del 10.15.15.51/32

auto vmbr0
iface vmbr0 inet static
        address  192.168.2.50
        netmask  255.255.240.0
        gateway  192.168.2.1
        bridge_ports ens20
        bridge_stp off
        bridge_fd 0

route

root@pve-2-50:~# ip route
default via 192.168.2.1 dev vmbr0 onlink 
10.14.14.0/24 dev ens21 proto kernel scope link src 10.14.14.50 
10.15.15.0/24 dev ens18 proto kernel scope link src 10.15.15.50 
10.15.15.0/24 dev ens19 proto kernel scope link src 10.15.15.50 
10.15.15.51 dev ens19 scope link 
10.15.15.52 dev ens18 scope link 
192.168.0.0/20 dev vmbr0 proto kernel scope link src 192.168.2.50 
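
To confirm that traffic to each peer really leaves through the intended NIC, you can ask the kernel which route it would pick. This is an optional check, not part of the original page; the commands below are for Node1:

# should select ens18, the link cabled to Node3
ip route get 10.15.15.52
# should select ens19, the link cabled to Node2
ip route get 10.15.15.51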

Node2

/etc/network/interfaces

auto lo
iface lo inet loopback

iface ens20 inet manual

auto ens21
iface ens21 inet static
        address  10.14.14.51
        netmask  255.255.255.0

# Connected to Node1 (.50)
auto ens18
iface ens18 inet static
        address  10.15.15.51
        netmask  255.255.255.0
        up ip route add 10.15.15.50/32 dev ens18
        down ip route del 10.15.15.50/32

# Connected to Node3 (.52)
auto ens19
iface ens19 inet static
        address  10.15.15.51
        netmask  255.255.255.0
        up ip route add 10.15.15.52/32 dev ens19
        down ip route del 10.15.15.52/32

auto vmbr0
iface vmbr0 inet static
        address  192.168.2.51
        netmask  255.255.240.0
        gateway  192.168.2.1
        bridge_ports ens20
        bridge_stp off
        bridge_fd 0

route

root@pve-2-51:/# ip route
default via 192.168.2.1 dev vmbr0 onlink 
10.14.14.0/24 dev ens21 proto kernel scope link src 10.14.14.51 
10.15.15.0/24 dev ens18 proto kernel scope link src 10.15.15.51 
10.15.15.0/24 dev ens19 proto kernel scope link src 10.15.15.51 
10.15.15.50 dev ens18 scope link 
10.15.15.52 dev ens19 scope link 
192.168.0.0/20 dev vmbr0 proto kernel scope link src 192.168.2.51 

Node3

/etc/network/interfaces

auto lo
iface lo inet loopback

iface ens20 inet manual

auto ens21
iface ens21 inet static
        address  10.14.14.52
        netmask  255.255.255.0

# Connected to Node2 (.51)
auto ens18
iface ens18 inet static
        address  10.15.15.52
        netmask  255.255.255.0
        up ip route add 10.15.15.51/32 dev ens18
        down ip route del 10.15.15.51/32

# Connected to Node1 (.50)
auto ens19
iface ens19 inet static
        address  10.15.15.52
        netmask  255.255.255.0
        up ip route add 10.15.15.50/32 dev ens19
        down ip route del 10.15.15.50/32

auto vmbr0
iface vmbr0 inet static
        address  192.168.2.52
        netmask  255.255.240.0
        gateway  192.168.2.1
        bridge_ports ens20
        bridge_stp off
        bridge_fd 0

route

root@pve-2-52:~# ip route
default via 192.168.2.1 dev vmbr0 onlink 
10.14.14.0/24 dev ens21 proto kernel scope link src 10.14.14.52 
10.15.15.0/24 dev ens18 proto kernel scope link src 10.15.15.52 
10.15.15.0/24 dev ens19 proto kernel scope link src 10.15.15.52 
10.15.15.50 dev ens19 scope link 
10.15.15.51 dev ens18 scope link 
192.168.0.0/20 dev vmbr0 proto kernel scope link src 192.168.2.52
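

Regardless of the method chosen, the usable bandwidth between two nodes can be measured with a tool such as iperf3. This is not part of the original page; iperf3 may first have to be installed (e.g. apt install iperf3), and the addresses are the mesh addresses from the example:

# on Node2: start an iperf3 server bound to its mesh address
iperf3 -s -B 10.15.15.51

# on Node1: run a 10 second throughput test against Node2
iperf3 -c 10.15.15.51 -t 10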