[PVE-User] Proxmox Kernel / Ceph Integration

Marcus Haarmann marcus.haarmann at midoco.de
Fri Jul 27 17:42:13 CEST 2018


Hi Adam, 

here is the setup: 

auto lo 
iface lo inet loopback 

iface eth0 inet manual 

iface eth1 inet manual 

iface eth2 inet manual 

iface eth3 inet manual 

iface eth4 inet manual 

iface eth5 inet manual 

auto bond0 
iface bond0 inet manual 
slaves eth0 eth1 
bond_miimon 100 
bond_mode balance-alb 
#frontside 

auto bond1 
iface bond1 inet static 
address 192.168.16.31 
netmask 255.255.255.0 
slaves eth2 eth3 
bond_miimon 100 
bond_mode balance-alb 
pre-up (ifconfig eth2 mtu 8996 && ifconfig eth3 mtu 8996) 
mtu 8996 
#corosync 

auto bond2 
iface bond2 inet static 
address 192.168.17.31 
netmask 255.255.255.0 
slaves eth4 eth5 
bond_miimon 100 
bond_mode balance-alb 
pre-up (ifconfig eth4 mtu 8996 && ifconfig eth5 mtu 8996) 
mtu 8996 
#ceph 

auto vmbr0 
iface vmbr0 inet static 
address 192.168.19.31 
netmask 255.255.255.0 
gateway 192.168.19.1 
bridge_ports bond0 
bridge_stp off 
bridge_fd 0 
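
(Side note: whether the pre-up MTU setting actually took effect can be
checked on a running node, e.g. with

ip link show eth2
ip link show bond1

both should report mtu 8996.)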



bond0/vmbr0 carries the VM traffic (frontend side)
bond1 is the ceph public network
bond2 is the ceph cluster network
corosync also runs in 192.168.16.x, i.e. on bond1 alongside the ceph public traffic
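
For reference, the matching ceph.conf section should then look roughly like
this (reconstructed from the subnets above, not copied from the actual file):

[global]
public network = 192.168.16.0/24
cluster network = 192.168.17.0/24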

Marcus Haarmann 



Von: "Adam Thompson" <athompso at athompso.net> 
An: "pve-user" <pve-user at pve.proxmox.com> 
CC: "Marcus Haarmann" <marcus.haarmann at midoco.de> 
Gesendet: Freitag, 27. Juli 2018 15:24:37 
Betreff: Re: [PVE-User] Proxmox Kernel / Ceph Integration 

On 2018-07-27 04:02, Marcus Haarmann wrote: 
> Hi experts, 
> 
> we are using a Proxmox cluster with underlying ceph storage. 
> Versions are pve 5.2-2 with kernel 4.15.18-1-pve and ceph luminous 
> 12.2.5. 
> We are running a couple of VMs and also containers there. 
> 3 virtual NICs (as bond balance-alb); ceph uses 2 bonded 10 GBit 
> interfaces (public/cluster separated) 

I have a thought, but need to know which network subnets are attached to 
which bondX interfaces. 
Also, you mention you have 3 "virtual NIC" in ALB mode. Is this a 
V-in-V situation? 
What bonding mode are you using for the two 10GE interfaces you dedicate 
to CEPH? 
(Feel free to just paste /etc/network/interfaces if that's easier than 
typing it all out - just make notes about which i/f does what.) 
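
A quick way to confirm the mode actually in effect (in case the runtime
state differs from the config) is:

grep "Bonding Mode" /proc/net/bonding/bond*
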
-Adam 


