[PVE-User] networking adjustment | hope to get some feedback

mj lists at merit.unu.edu
Mon Jun 18 11:56:20 CEST 2018


Hi all,

After buying some new networking equipment, and having gained more 
insight over the last two years, I am planning to make some adjustments 
to our proxmox/ceph setup, and I would *greatly* appreciate some 
feedback :-)

We are running a setup of three identical proxmox/ceph servers; each 
server has:

NIC1: ceph cluster network and monitors, 10.10.89.10/11/12 (10G ethernet)
NIC2: clients and public IPs, a.b.c.10/11/12 (1G ethernet)

Since we bought new hardware, I can connect each server to our HP 
chassis, over a dual 10G bonded LACP connection.

I obviously need to keep the (NIC2) public IPs, and since the ceph 
monitor IPs are difficult to change, I'd like to keep the (NIC1) 
10.10.89.x addresses as well.

I also need to keep the (tagged and untagged) VLANs for proxmox and the 
VMs running on it.

I realise that it used to be recommended to split cluster and client 
traffic, but the consensus nowadays on the ceph mailing list seems to 
be: keep it simple and don't split, unless specifically required. With 
this in mind, I would like to consolidate our networking and run all 
traffic, including the VLANs, over this dual LACP-bonded 10G connection 
to our HP chassis.

But how to achieve this..? :-) (and here come the questions...)

My idea is to first enable (active) LACP on our ProCurve 5400 chassis 
ports, with trunk type "LACP", but I'm unsure about the "Trunk Group". 
Do I need to select a different Trunk Group (Trk1, Trk2 & Trk3) for 
each dual-cable connection to a server..?
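
From reading the ProCurve docs, my understanding is that each server 
does need its own trunk group; something like this (untested, and the 
port numbers A1/A2, A3/A4 and A5/A6 are just placeholders for wherever 
the servers are actually cabled):

# one LACP trunk group per server
trunk A1-A2 trk1 lacp
trunk A3-A4 trk2 lacp
trunk A5-A6 trk3 lacp

Is that correct?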

And will the port-configured VLANs on the LACP member ports (both 
tagged and untagged) continue to flow normally through this LACP bond..?
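
If I understand the ProCurve model correctly, VLAN membership then has 
to be configured on the trunk interfaces rather than on the individual 
member ports, along these lines (the VLAN IDs here are just examples, 
not our actual ones):

vlan 89
   tagged trk1,trk2,trk3
vlan 1
   untagged trk1,trk2,trk3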

Then, about the configuration on proxmox: would something like the 
below do the trick..?

auto bond0
iface bond0 inet manual
       # dual 10G LACP bond to the ProCurve chassis
       slaves eth0 eth1
       bond_miimon 100
       bond_mode 802.3ad
       bond_xmit_hash_policy layer2+3

auto vmbr0
iface vmbr0 inet static
       # public IP: a.b.c.10/.11/.12 per host
       address  a.b.c.x
       netmask  255.255.255.0
       gateway  a.b.c.1
       bridge_ports bond0
       bridge_stp off
       bridge_fd 0
       # ceph mon IP: 10.10.89.10/.11/.12 per host
       up ip addr add 10.10.89.x/24 dev vmbr0 || true
       down ip addr del 10.10.89.x/24 dev vmbr0 || true
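
For the guest VLANs, I assume I would additionally make the bridge 
VLAN-aware, so tagged traffic from the VMs can ride the same bond 
(untested, and I'm not sure how this interacts with the extra ceph 
address on vmbr0):

iface vmbr0 inet static
       # ... as above, plus:
       bridge_vlan_aware yes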

Any feedback on the above? As this is production, I'd like to be 
reasonably sure that it will work before trying.
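
Before moving production traffic, I would of course verify the bond on 
each node, with something like:

# check that 802.3ad negotiation succeeded and both slaves are up
cat /proc/net/bonding/bond0

# confirm both the public and the ceph address ended up on the bridge
ip addr show dev vmbr0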

Your comments will be very much appreciated!

MJ


