[PVE-User] How to increase mtu on gre_sys interface [ Was: How I use lxc container behind GRE tunnel ]

Jean-mathieu CHANTREIN jean-mathieu.chantrein at univ-angers.fr
Wed Sep 12 09:06:51 CEST 2018


Hello.

Some time ago, I wrote the email below about using LXC containers connected by a GRE tunnel (with Open vSwitch). My analysis was wrong, and the problem also affected VMs.
In fact, I had forgotten to set the MTU (to 9000) explicitly on my OVSBridge and OVSTunnel virtual interfaces. A problem remains, however, with the default MTU of the gre_sys interface, which is not explicitly defined in my /etc/network/interfaces file (I guess it is brought up by an openvswitch daemon). I submitted my question to the openvswitch mailing list, but it went unanswered, so I forward my mails below.
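For reference, the explicit MTU settings now look roughly like this in /etc/network/interfaces (a sketch only: the port names match the configuration quoted further down, the remote_ip value is a placeholder for the peer node, and the physical OVSPort carries mtu 9000 as well):

iface vmbr1 inet manual
    ovs_type OVSBridge
    ovs_ports gre1 gre2 gre3
    mtu 9000

allow-vmbr1 gre1
iface gre1 inet manual
    ovs_type OVSTunnel
    ovs_bridge vmbr1
    ovs_tunnel_type gre
    ovs_tunnel_options options:remote_ip=<IP_of_peer_node>
    mtu 9000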

If someone has already encountered this problem, could you tell me how you solved it? Or is my approach good? I doubt it, because there are interfaces (gretap0 and gre0), apparently created by openvswitch, whose MTU seems too low, even though they are in a DOWN state:

root@pve1:~# ip addr show gretap0
17: gretap0@NONE: <BROADCAST,MULTICAST> mtu 1462 qdisc noop state DOWN group default qlen 1000
    link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
root@pve1:~# ip addr show gre0
16: gre0@NONE: <NOARP> mtu 1476 qdisc noop state DOWN group default qlen 1000
    link/gre 0.0.0.0 brd 0.0.0.0
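(As far as I can tell, gre0 and gretap0 are just the fallback devices the kernel creates when the gre/gretap modules are loaded, while the tunnel traffic itself goes through gre_sys. One way to check which ports the OVS kernel datapath is actually using is:)

ovs-dpctl show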


____________________________________________________________________________________
    De: "Jean-mathieu CHANTREIN" <jean-mathieu.chantrein at univ-angers.fr>
    À: "ovs-discuss" <ovs-discuss at openvswitch.org>
    Envoyé: Mardi 11 Septembre 2018 15:58:20
    Objet: Re: [ovs-discuss] How to increase mtu on gre_sys interface ?

    Hello.

    It works if I add a post-up instruction to the OVSBridge:

    iface vmbr1 inet manual
        ovs_type OVSBridge
        ovs_ports gre1 gre2 gre3
        mtu 9000
        post-up ( ovs-vsctl set Bridge vmbr1 rstp_enable=true && sleep 10 && ip link set gre_sys mtu 8872 )

    But I'm not sure it's the best way ...
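    An alternative I am considering (an untested sketch; the unit name is made up, and it assumes gre_sys already exists once ifupdown has added the GRE ports, which would still need checking) would be to move the MTU fix into a small systemd unit instead of relying on the sleep:

        # /etc/systemd/system/gre-sys-mtu.service  (hypothetical name)
        [Unit]
        Description=Set MTU on the OVS gre_sys device
        After=networking.service openvswitch-switch.service

        [Service]
        Type=oneshot
        # the shell wrapper just lets PATH resolve ip
        ExecStart=/bin/sh -c 'ip link set gre_sys mtu 8872'

        [Install]
        WantedBy=multi-user.target

    (enabled with systemctl enable gre-sys-mtu.service)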

    Regards.

    Jean-Mathieu


        De: "Jean-mathieu CHANTREIN" <jean-mathieu.chantrein at univ-angers.fr>
        À: "ovs-discuss" <ovs-discuss at openvswitch.org>
        Envoyé: Lundi 10 Septembre 2018 13:13:59
        Objet: [ovs-discuss] How to increase mtu on gre_sys interface ?

        Hello.

        I use OVS on a Proxmox cluster and I configured GRE tunnels as described here: http://docs.openvswitch.org/en/latest/howto/tunneling/

        I enabled jumbo frames (MTU 9000) on the physical switch and on the physical and virtual interfaces (OVSBridge and OVSTunnel gre). By default, the gre_sys MTU is set to 1472 (I suppose to leave room for the 28 bytes of overhead: GRE (8 bytes) plus the new IP header (20 bytes)).

        I can change it manually with:
        ip link set gre_sys mtu 8872   # below 9000 - 28 = 8972, so the GRE + outer IP overhead still fits

        Then two VMs on two different hosts can communicate correctly with a 1500 MTU:

        user@VM1:$ ping -M do -s 1472 $IP_VM2   # 1500 - (ICMP (8 bytes) + IPv4 (20 bytes)) = 1472
        PING IP_VM2 1472(1500) bytes of data.
        1480 bytes from IP_VM2: icmp_seq=1 ttl=64 time=0.227 ms
        ...

        I would like to know the best practice for making this change persistent in the /etc/network/interfaces file.

        Best regards.

        Jean-Mathieu


        	

____________________________________________________________________________________


----- Original Message -----
> From: "Jean-mathieu CHANTREIN" <jean-mathieu.chantrein at univ-angers.fr>
> To: "pve-user" <pve-user at pve.proxmox.com>
> Sent: Friday, May 18, 2018 14:59:07
> Subject: [PVE-User] How I use lxc container behind GRE tunnel

> Hello.
> 
> I work with a 4-node Proxmox cluster. I use GRE tunnels and OVS roughly as
> described here: http://docs.openvswitch.org/en/latest/howto/tunneling/
> 
> GRE encapsulates packets (network layer) with a 4-byte header plus another
> 20-byte IP header, so you get 24 bytes of overhead compared to a "classic"
> IP packet. In my case, it is my "classic" IP packets that are encapsulated by GRE.
> 
> Normally, the MTU is set to 1500 bytes by default, but since I use GRE, I have
> two possibilities (the arithmetic is spelled out below):
> 1) increase the MTU to 1524 or more
> 2) decrease the MTU to 1476 or less
> 
> In the first case, I have to configure my physical network to use jumbo frames,
> but I no longer have to think about the MTU of my VMs (MTU 1500 by default).
> 
> In the second case, I always have to set the MTU to 1476 or less on my VMs.
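> 
> To spell out the arithmetic behind those two numbers:
> 
>     1500 (inner packet) + 4 (GRE) + 20 (outer IPv4) = 1524  -> minimum MTU on the physical path
>     1500 (physical MTU) - 4 (GRE) - 20 (outer IPv4) = 1476  -> maximum MTU inside the guests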
> 
> I have chosen the first case and it works perfectly with VMs (ping is limited to
> 1472 bytes, i.e. 1500 - IPv4 (20 bytes) - ICMP (8 bytes) = 1472; I don't have a
> VLAN on this interface).
> 
> But when I use an LXC container, this is the output of ip link (inside the container):
> ---------------------------------------------------------------------------------------
> 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
>     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> 2: gre0@NONE: <NOARP> mtu 1476 qdisc noop state DOWN mode DEFAULT group default qlen 1000
>     link/gre 0.0.0.0 brd 0.0.0.0
> 3: gretap0@NONE: <BROADCAST,MULTICAST> mtu 1462 qdisc noop state DOWN mode DEFAULT group default qlen 1000
>     link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
> 78: eth0@if79: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
>     link/ether f2:55:a3:98:c2:31 brd ff:ff:ff:ff:ff:ff link-netnsid 0
> ---------------------------------------------------------------------------------------
> (I don't understand the MTU of 1462 on the gretap0 interface...)
> 
> The LXC container sees the GRE interfaces (I think because it runs on the same
> kernel as the host), and the default MTU of GRE is 1476, while the default MTU
> of eth0 is 1500. As a consequence, two LXC containers on two hosts linked by GRE
> can't communicate properly: I can ping one container from the other, but TCP
> traffic (ssh, for example) does not work (ping is limited to 1444 bytes, i.e.
> 1500 - IPv4 (20 bytes) - GRE (4 bytes) - 802.1Q VLAN (4 bytes) - IPv4 (20 bytes)
> - ICMP (8 bytes) = 1444 bytes).
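> 
> A quick way to see the cutoff from inside a container (a sketch; $IP_CT2 stands
> for the other container's address):
> 
> ping -M do -s 1444 $IP_CT2   # largest payload that gets a reply
> ping -M do -s 1445 $IP_CT2   # one byte more: fails, per the limit above
> 
> Larger TCP segments presumably disappear the same way, with no ICMP "fragmentation
> needed" coming back, which would explain why ssh hangs while ping looks fine.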
> 
> I have to manually decrease the MTU of the container to 1476 or less for
> applications based on TCP to work.
> 
> ip link set eth0 mtu 1476 # ephemeral method
> 
> or
> 
> add mtu=1476 to the network interface line (netX) in the LXC container's
> configuration file /etc/pve/lxc/ID.conf (persistent method), for example as
> sketched below
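> 
> A hedged example of such a line (the bridge name vmbr1 is assumed, the hwaddr is
> taken from the output above; see pct.conf(5) for the exact keys):
> 
> net0: name=eth0,bridge=vmbr1,hwaddr=f2:55:a3:98:c2:31,mtu=1476
> 
> or something like this on the CLI (note that pct set replaces the whole net0
> entry, so keep your existing options in the string):
> 
> pct set <ID> -net0 name=eth0,bridge=vmbr1,hwaddr=f2:55:a3:98:c2:31,mtu=1476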
> 
> It would be great if LXC containers could have the same behaviour as VMs.
> 
> Best regards.
> 
> Jean-Mathieu
> 
> 
> 


