[PVE-User] PVE 6.1 incorrect MTU

Stefan M. Radman smr at kmi.com
Wed Jan 22 01:12:51 CET 2020


Hi Flo

I am setting the MTU at every interface level (NIC, bond, VLAN, bridge), but not all of them end up with the configured value.
It worked up to 5.4 but no longer does under 6.1.
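A quick way to see which levels actually picked up the configured value after a reboot (interface names as in my config quoted below):

  for i in eno1 eno2 bond0 bond0.686 vmbr686; do
      printf '%s: ' "$i"; ip -o link show dev "$i" | grep -o 'mtu [0-9]*'
  done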

Stefan


> On Jan 21, 2020, at 12:39, Florian König <florian.koenig at elster.de> wrote:
>
> Hi,
>
> we also have jumbo frames enabled on our PVE 6.1 cluster.
> /etc/network/interfaces:
> ...
> iface ens7f0 inet manual
>   mtu 9000
> iface ens7f1 inet manual
>   mtu 9000
> iface ens1f0 inet manual
>   mtu 9000
> iface ens1f1 inet manual
>   mtu 9000
> auto bond1
> iface bond1 inet ...
>   bond-slaves ens1f0 ens7f1
>   bond-primary ens1f0
>   bond-miimon 100
>   bond-mode active-backup
>   post-up ip link set bond1 mtu 9000
> auto bond2
> iface bond2 inet6 ...
>   bond-slaves ens7f0 ens1f1
>   bond-primary ens7f0
>   bond-miimon 100
>   bond-mode active-backup
>   post-up ip link set bond2 mtu 9000
> ...
>
> As you can see, the MTU is set both at the interface and at the bond
> level. I don't know whether this is a hard requirement, but at least
> it works.
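> For what it's worth, a quick way to verify that both the bond and its
> slaves actually picked up the value (a sketch; adjust the interface
> names to yours):
>
>   ip link show bond1 | grep -o 'mtu [0-9]*'
>   ip link show ens1f0 | grep -o 'mtu [0-9]*'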
>
> Flo
>
> On Tuesday, 21.01.2020, 09:17 +0100, Ronny Aasen wrote:
>> On 21.01.2020 00:44, Stefan M. Radman via pve-user wrote:
>>> Subject: PVE 6.1 incorrect MTU
>>> From: "Stefan M. Radman" <smr at kmi.com>
>>> Date: 21.01.2020, 00:44
>>> To: "pve-user at pve.proxmox.com" <pve-user at pve.proxmox.com>
>>>
>>>
>>> Recently I upgraded the first node of a 5.4 cluster to 6.1.
>>> Everything went smoothly (thanks for pve5to6!) until I restarted
>>> the first node after the upgrade and started to get weird problems
>>> with the LVM/multipath/iSCSI based storage (hung PVE and LVM
>>> processes etc.).
>>>
>>> After some digging I found that the issues were due to an
>>> incorrect MTU on the storage interface (jumbo frames were getting
>>> truncated).
>>>
>>> The network configuration (see /etc/network/interfaces further
>>> below) worked very well with 5.4 but it does not work with 6.1.
>>>
>>> After booting with PVE 6.1, some of the MTUs are not as configured.
>>> Here is what "ip link" shows after boot.
>>>
>>> root at seisram04:~# ip link | fgrep mtu
>>> 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
>>> mode DEFAULT group default qlen 1000
>>> 2: eno1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq
>>> master bond0 state UP mode DEFAULT group default qlen 1000
>>> 3: eno2: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq
>>> master bond0 state UP mode DEFAULT group default qlen 1000
>>> 4: eno3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state
>>> UP mode DEFAULT group default qlen 1000
>>> 5: eno4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state
>>> UP mode DEFAULT group default qlen 1000
>>> 6: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc
>>> noqueue master vmbr0 state UP mode DEFAULT group default qlen 1000
>>> 7: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue
>>> state UP mode DEFAULT group default qlen 1000
>>> 8: vmbr533: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc
>>> noqueue state UP mode DEFAULT group default qlen 1000
>>> 9: bond0.533 at bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500
>>> qdisc noqueue master vmbr533 state UP mode DEFAULT group default
>>> qlen 1000
>>> 10: vmbr683: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc
>>> noqueue state UP mode DEFAULT group default qlen 1000
>>> 11: bond0.683 at bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500
>>> qdisc noqueue master vmbr683 state UP mode DEFAULT group default
>>> qlen 1000
>>> 12: vmbr686: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc
>>> noqueue state UP mode DEFAULT group default qlen 1000
>>> 13: bond0.686 at bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500
>>> qdisc noqueue master vmbr686 state UP mode DEFAULT group default
>>> qlen 1000
>>>
>>> The only interface that comes up with the configured jumbo-frame
>>> MTU is vmbr686, the bridge serving the storage VLAN. The underlying
>>> bond0 interface and its slaves eno1 and eno2 have the default MTU
>>> of 1500, so jumbo frames get truncated on the way.
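>>> An easy way to confirm the truncation is a don't-fragment ping
>>> across the storage VLAN (8972 bytes of payload plus 28 bytes of
>>> IP/ICMP headers equals 9000; the target address is an example):
>>>
>>> ping -M do -s 8972 192.168.86.1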
>>>
>>> Why did this configuration work on 5.4 but not on 6.1?
>>> In the interface configuration, all MTU values are explicitly
>>> configured.
>>>
>>> Why are some of them ignored in 6.1? What am I missing here?
>>>
>>> How can I make this configuration work in PVE 6.1 (and later)?
>>>
>>> Any suggestions welcome.
>>>
>>> Thank you
>>>
>>> Stefan
>>>
>>> /etc/network/interfaces:
>>>
>>> auto lo
>>> iface lo inet loopback
>>>
>>> iface eno1 inet manual
>>>     mtu 9000
>>> #Gb1 - Trunk - Jumbo Frames
>>>
>>> iface eno2 inet manual
>>>     mtu 9000
>>> #Gb2 - Trunk - Jumbo Frames
>>>
>>> auto eno3
>>> iface eno3 inet static
>>>     address 192.168.84.4
>>>     netmask 255.255.255.0
>>>     mtu 1500
>>> #Gb3 - COROSYNC1 - VLAN684
>>>
>>> auto eno4
>>> iface eno4 inet static
>>>     address 192.168.85.4
>>>     netmask 255.255.255.0
>>>     mtu 1500
>>> #Gb4 - COROSYNC2 - VLAN685
>>>
>>> auto bond0
>>> iface bond0 inet manual
>>>     slaves eno1 eno2
>>>     bond_miimon 100
>>>     bond_mode active-backup
>>>     mtu 9000
>>> #HA Bundle Gb1/Gb2 - Trunk - Jumbo Frames
>>>
>>> auto vmbr0
>>> iface vmbr0 inet static
>>>     address 172.21.54.204
>>>     netmask 255.255.255.0
>>>     gateway 172.21.54.254
>>>     bridge_ports bond0
>>>     bridge_stp off
>>>     bridge_fd 0
>>>     mtu 1500
>>> #PRIVATE - VLAN 682 - Native
>>>
>>> auto vmbr533
>>> iface vmbr533 inet manual
>>>     bridge_ports bond0.533
>>>     bridge_stp off
>>>     bridge_fd 0
>>>     mtu 1500
>>> #PUBLIC - VLAN 533
>>>
>>> auto vmbr683
>>> iface vmbr683 inet manual
>>>     bridge_ports bond0.683
>>>     bridge_stp off
>>>     bridge_fd 0
>>>     mtu 1500
>>> #VPN - VLAN 683
>>>
>>> auto vmbr686
>>> iface vmbr686 inet static
>>>     address 192.168.86.204
>>>     netmask 255.255.255.0
>>>     bridge_ports bond0.686
>>>     bridge_stp off
>>>     bridge_fd 0
>>>     mtu 9000
>>> #STORAGE - VLAN 686 - Jumbo Frames
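>>>
>>> A possible workaround (an untested sketch) might be to force the
>>> MTU with a post-up command instead of relying on the mtu keyword
>>> alone, e.g. on the bond (the bonding driver propagates the MTU to
>>> its slaves):
>>>
>>> auto bond0
>>> iface bond0 inet manual
>>>     slaves eno1 eno2
>>>     bond_miimon 100
>>>     bond_mode active-backup
>>>     post-up ip link set dev bond0 mtu 9000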
>>
>> I do not know... But I am interested in this case since I have a
>> very similar config on some of my own clusters, and am planning
>> upgrades. Hope you can post your solution when you find it.
>>
>> I am guesstimating that setting mtu 1500 on vmbr0 may propagate to
>> its member interfaces in more recent kernels. I believe member ports
>> need to have the same MTU as the bridge, but that was probably not
>> actively enforced in previous kernels.
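>> One way to check is to list the ports attached to the bridge
>> together with their MTUs:
>>
>> ip link show master vmbr0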
>>
>> Personally I never use a native VLAN unless there is a phone or
>> access point at the end of the cable. So in my config vmbr0 is a
>> VLAN-aware bridge without any IP addresses on it, with mtu 9000 and
>> bond0 as its member.
>>
>> My VLANs are vmbr0.686 instead of bond0.686, defined only for the
>> VLANs where Proxmox needs an IP, with mtu 1500 or 9000 depending on
>> needs.
>>
>> My VMs are attached to vmbr0 with a VLAN tag in the VM config, so I
>> do not have to edit the Proxmox config to take a new VLAN into use.
>> All mtu 1500 stanzas go on VLAN interfaces, not on bridges, and
>> those probably do not propagate the same way. A sketch of this
>> layout follows below.
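>>
>> For illustration, a minimal sketch of that layout, reusing the names
>> and the storage VLAN from your config (not my literal setup):
>>
>> auto vmbr0
>> iface vmbr0 inet manual
>>     bridge_ports bond0
>>     bridge_stp off
>>     bridge_fd 0
>>     bridge_vlan_aware yes
>>     mtu 9000
>>
>> auto vmbr0.686
>> iface vmbr0.686 inet static
>>     address 192.168.86.204
>>     netmask 255.255.255.0
>>     mtu 9000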
>>
>>
>>
>> In your shoes I would try to tag the native VLAN and move vmbr0 to
>> vmbr682 on bond0.682. That means you would need to change the VMs
>> using vmbr0, though. If you have lots of VMs attached to vmbr0,
>> perhaps just tag the VLAN and make the member port bond0.682 to
>> avoid changing VM configs. This should put the bond0.682 VLAN as the
>> member port and hopefully allow the MTU; see the rough sketch below.
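>>
>> Roughly like this (untested, reusing your addresses), for the
>> variant that keeps the name vmbr0:
>>
>> auto vmbr0
>> iface vmbr0 inet static
>>     address 172.21.54.204
>>     netmask 255.255.255.0
>>     gateway 172.21.54.254
>>     bridge_ports bond0.682
>>     bridge_stp off
>>     bridge_fd 0
>>     mtu 1500
>> #PRIVATE - VLAN 682 - now tagged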
>>
>> Disclaimer: just wild guesses; I have not tested this on 6.1.
>>
>>
>>
>> good luck
>> Ronny Aasen
>>
> --
> Florian König
>
> Bayerisches Landesamt für Steuern
> IuK 16
> Katharina-von-Bora-Str. 6
> 80333 München
>
> Phone: 089 9991-3630
> E-Mail:  Florian.Koenig at elster.de
> Internet: http://www.lfst.bayern.de



