[PVE-User] PVE 6.1 incorrect MTU

Stefan M. Radman smr at kmi.com
Sun Feb 2 23:11:09 CET 2020


Hi Ronny

The issue was the MTU mismatch between vmbr0 (MTU 1500) and its member port bond0 (MTU 9000).
None of the other bridges, whose member ports are tagged VLAN interfaces on bond0, had any issue.

My tests revealed that bond0 and its slaves eno1 and eno2 had an MTU of 9000 before vmbr0 was initialized but an MTU of 1500 after vmbr0 had been initialized (post-up).
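
A quick way to observe this (standard ifupdown/iproute2 commands, run from the console since vmbr0 carries the management IP; a simplified sketch, untested as written):

ifdown vmbr0                                     # detach the bridge
ip link set dev bond0 mtu 9000                   # restore the bond MTU by hand
ip link show dev bond0 | grep -o 'mtu [0-9]*'    # -> mtu 9000
ifup vmbr0                                       # bridge (MTU 1500) enslaves bond0 again
ip link show dev bond0 | grep -o 'mtu [0-9]*'    # -> mtu 1500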

So you were right that "setting mtu 1500 on vmbr0 may propagate to member interfaces". It even trickles down to the bond slaves of a member port.

The solution is a post-up command on the vmbr0 interface that re-applies the intended MTUs to bond0 (9000) and vmbr0 (1500).

post-up ip link set dev bond0 mtu 9000 && ip link set dev vmbr0 mtu 1500
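
Equivalently, interfaces(5) accepts multiple post-up lines, executed in the order listed:

post-up ip link set dev bond0 mtu 9000
post-up ip link set dev vmbr0 mtu 1500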

That leads to the expected result, so no need to tag the native VLAN.

root at pve61:~# ip link show | egrep ': (eno1|eno2|bond0|vmbr0):'
2: eno1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 9000 qdisc mq master bond0 state UP mode DEFAULT group default qlen 1000
3: eno2: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 9000 qdisc mq master bond0 state UP mode DEFAULT group default qlen 1000
6: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 9000 qdisc noqueue master vmbr0 state UP mode DEFAULT group default qlen 1000
7: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000

See the final configuration below. The only change is the post-up command on vmbr0.
The other vmbr interfaces from my original post are omitted here because none of them had an issue.

Thanks for pointing me in the right direction.

Cheers

Stefan

iface eno1 inet manual
    mtu 9000
#Gb1 - Trunk - Jumbo Frames

iface eno2 inet manual
    mtu 9000
#Gb2 - Trunk - Jumbo Frames

auto bond0
iface bond0 inet manual
    slaves eno1 eno2
    bond_miimon 100
    bond_mode active-backup
    mtu 9000
#HA Bundle Gb1/Gb2 - Trunk - Jumbo Frames

auto vmbr0
iface vmbr0 inet static
    address  172.21.54.204
    netmask  255.255.255.0
    gateway  172.21.54.254
    bridge_ports bond0
    bridge_stp off
    bridge_fd 0
    mtu 1500
    post-up ip link set dev bond0 mtu 9000 && ip link set dev vmbr0 mtu 1500
#PRIVATE - VLAN 682 - Native
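
To activate a change like this without a reboot, something along these lines should work (ifreload comes with the optional ifupdown2 package; with classic ifupdown, use ifdown/ifup from the console instead):

ifreload -a                                        # re-apply /etc/network/interfaces
ip link show | egrep ': (eno1|eno2|bond0|vmbr0):'  # verify the MTUs stuck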


On Jan 22, 2020, at 01:12, Stefan M. Radman <smr at kmi.com> wrote:

Hi Ronny

Thanks for the input.

> setting mtu 1500 on vmbr0 may propagate to
> member interfaces, in more recent kernels. I believe member ports need to
> have the same mtu as the bridge

Hmm. That might be what is happening with the untagged (native) bond0 member port.

Can you refer me to the place where this is documented or at least discussed?
Maybe I can find a configuration item to switch this enforcement off (and configure the MTU manually).

It seems that I'll have to do some more testing to find out at which point the MTUs change (or don't).

Thanks
Stefan

On Jan 21, 2020, at 09:17, Ronny Aasen <ronny+pve-user at aasen.cx> wrote:

I do not know... But I am interested in this case since I have a very
similar config on some of my own clusters, and am planning upgrades.
Hope you can post your solution when you find it.

I am guesstimating that setting mtu 1500 on vmbr0 may propagate to
member interfaces, in more recent kernels. I believe member ports need to
have the same mtu as the bridge, but this was probably not actively
enforced in previous kernels.

Personally, I never use a native VLAN unless there is a phone or access
point at the end of the cable.
So in my config vmbr0 is a vlan-aware bridge without any IP addresses
on it, with MTU 9000 and bond0 as its member port.

And my VLAN interfaces are vmbr0.686 instead of bond0.686, defined only
for the VLANs where Proxmox needs an IP, with MTU 1500 or 9000 depending
on needs.

My VMs are attached to vmbr0 with a tag in the VM config. This way I do
not have to edit the Proxmox network config to take a new VLAN into use.
And all mtu 1500 stanzas go on VLAN interfaces, not on bridges, which
probably does not propagate the same way.
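
For illustration, that kind of setup would look roughly like this (the
address is a placeholder; untested):

auto vmbr0
iface vmbr0 inet manual
    bridge_ports bond0
    bridge_stp off
    bridge_fd 0
    bridge_vlan_aware yes
    mtu 9000
#vlan-aware trunk bridge - no IP - Jumbo Frames

auto vmbr0.686
iface vmbr0.686 inet static
    address  172.21.86.10
    netmask  255.255.255.0
    mtu 1500
#proxmox IP on VLAN 686 - MTU set on the vlan interface, not the bridge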



In your shoes I would try to tag the native VLAN and move vmbr0 to
vmbr682 on bond0.682. This means you need to change the VMs using vmbr0,
though. So if you have lots of VMs attached to vmbr0, perhaps just tag
and make the member port bond0.682, to avoid changing VM configs. This
should make a bond0.682 VLAN interface the member port and hopefully
allow the MTU.
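
Roughly like this, reusing the addresses from your original mail
(untested, a wild guess like the rest):

auto bond0.682
iface bond0.682 inet manual
    mtu 1500
#VLAN 682 tagged on the bond

auto vmbr0
iface vmbr0 inet static
    address  172.21.54.204
    netmask  255.255.255.0
    gateway  172.21.54.254
    bridge_ports bond0.682
    bridge_stp off
    bridge_fd 0
    mtu 1500
#PRIVATE - VLAN 682 - tagged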

Disclaimer: just wild guesses; I have not tested this on 6.1.



good luck
Ronny Aasen







