Network Configuration

Network configuration can be done either via the GUI, or by manually editing the file /etc/network/interfaces, which contains the whole network configuration. The interfaces(5) manual page contains the complete format description. All Proxmox VE tools try hard to preserve direct user modifications, but using the GUI is still preferable, because it protects you from errors.

Once the network is configured, you can use the traditional Debian tools ifup and ifdown to bring interfaces up and down.
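
For example, to apply a changed bridge definition you could take the interface down and bring it up again (a minimal sketch, assuming a bridge named vmbr0 is defined in /etc/network/interfaces; run both commands in one line or from the console, because taking down the interface that carries your SSH session will interrupt it):

# reload the vmbr0 definition from /etc/network/interfaces
ifdown vmbr0 && ifup vmbr0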

Note: Proxmox VE does not write changes directly to /etc/network/interfaces. Instead, we write into a temporary file called /etc/network/interfaces.new, and commit those changes when you reboot the node.

Naming Conventions

We currently use the following naming conventions for device names:

  • Ethernet devices: en*, systemd network interface names. This naming scheme is used for new Proxmox VE installations since version 5.0.

  • Ethernet devices: eth[N], where 0 ≤ N (eth0, eth1, …). This naming scheme is used for Proxmox VE hosts which were installed before the 5.0 release. When upgrading to 5.0, the names are kept as-is.

  • Bridge names: vmbr[N], where 0 ≤ N ≤ 4094 (vmbr0 - vmbr4094)

  • Bonds: bond[N], where 0 ≤ N (bond0, bond1, …)

  • VLANs: Simply add the VLAN number to the device name, separated by a period (eno1.50, bond1.30)

This makes it easier to debug network problems, because the device name implies the device type.
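
To see which device names are present on a host, you can list all interfaces with the standard iproute2 tool (the names in the output below are only illustrative):

# brief overview of all network interfaces and their state
ip -br link show
# example output (illustrative):
# lo      UNKNOWN  00:00:00:00:00:00
# eno1    UP       aa:bb:cc:dd:ee:ff
# vmbr0   UP       aa:bb:cc:dd:ee:ff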

Systemd Network Interface Names

Systemd uses the two-character prefix en for Ethernet network devices. The following characters depend on the device driver and on which schema matches first.

  • o<index>[n<phys_port_name>|d<dev_port>] — devices on board

  • s<slot>[f<function>][n<phys_port_name>|d<dev_port>] — device by hotplug id

  • [P<domain>]p<bus>s<slot>[f<function>][n<phys_port_name>|d<dev_port>] — devices by bus id

  • x<MAC> — device by MAC address

The most common patterns are:

  • eno1 — the first on-board NIC

  • enp3s0f1 — the NIC on PCI bus 3, slot 0, using function 1.

For more information see Predictable Network Interface Names.
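
If you want to see how udev derives the predictable name for a particular NIC, you can query its net_id builtin (a sketch; eno1 is just an example device):

# print the name candidates (onboard, slot, path, MAC) computed for this device
udevadm test-builtin net_id /sys/class/net/eno1 2>/dev/null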

Choosing a network configuration

Depending on your current network organization and your resources, you can choose either a bridged, routed, or masquerading networking setup.

Proxmox VE server in a private LAN, using an external gateway to reach the internet

The Bridged model makes the most sense in this case, and this is also the default mode on new Proxmox VE installations. Each of your Guest systems will have a virtual interface attached to the Proxmox VE bridge. This is similar in effect to having the Guest network card directly connected to a new switch on your LAN, with the Proxmox VE host playing the role of the switch.

Proxmox VE server at hosting provider, with public IP ranges for Guests

For this setup, you can use either a Bridged or Routed model, depending on what your provider allows.

Proxmox VE server at hosting provider, with a single public IP address

In that case, the only way to get outgoing network access for your guest systems is to use Masquerading. For incoming network access to your guests, you will need to configure Port Forwarding.
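
As an illustration of such port forwarding, a single iptables DNAT rule could redirect an incoming port on the host's public address to a guest behind the masquerading bridge (a sketch only; the guest address 10.10.10.10, the ports, and the interface name eno1 are assumptions):

# forward incoming TCP port 8080 on the host to port 80 of a guest
iptables -t nat -A PREROUTING -i eno1 -p tcp --dport 8080 -j DNAT --to-destination 10.10.10.10:80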

For further flexibility, you can configure VLANs (IEEE 802.1q) and network bonding, also known as "link aggregation". That way it is possible to build complex and flexible virtual networks.

Default Configuration using a Bridge

[Figure: default-network-setup-bridge.svg]

Bridges are like physical network switches implemented in software. All VMs can share a single bridge, or you can create multiple bridges to separate network domains. Each host can have up to 4094 bridges.

The installation program creates a single bridge named vmbr0, which is connected to the first Ethernet card. The corresponding configuration in /etc/network/interfaces might look like this:

auto lo
iface lo inet loopback

iface eno1 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.10.2
        netmask 255.255.255.0
        gateway 192.168.10.1
        bridge_ports eno1
        bridge_stp off
        bridge_fd 0

Virtual machines behave as if they were directly connected to the physical network. The network, in turn, sees each virtual machine as having its own MAC, even though there is only one network cable connecting all of these VMs to the network.
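
For instance, a guest's virtual NIC can be attached to this default bridge on the command line (a sketch, assuming a hypothetical VM with ID 100):

# attach the VM's first network device to vmbr0 using the virtio model
qm set 100 -net0 virtio,bridge=vmbr0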

Routed Configuration

Most hosting providers do not support the above setup. For security reasons, they disable networking as soon as they detect multiple MAC addresses on a single interface.

Tip: Some providers allow you to register additional MACs on their management interface. This avoids the problem, but is clumsy to configure because you need to register a MAC for each of your VMs.

You can avoid the problem by “routing” all traffic via a single interface. This makes sure that all network packets use the same MAC address.

[Figure: default-network-setup-routed.svg]

A common scenario is that you have a public IP (assume 198.51.100.5 for this example), and an additional IP block for your VMs (203.0.113.16/29). We recommend the following setup for such situations:

auto lo
iface lo inet loopback

auto eno1
iface eno1 inet static
        address  198.51.100.5
        netmask  255.255.255.0
        gateway  198.51.100.1
        post-up echo 1 > /proc/sys/net/ipv4/ip_forward
        post-up echo 1 > /proc/sys/net/ipv4/conf/eno1/proxy_arp


auto vmbr0
iface vmbr0 inet static
        address  203.0.113.17
        netmask  255.255.255.248
        bridge_ports none
        bridge_stp off
        bridge_fd 0
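
After bringing this configuration up, you can verify that forwarding and proxy ARP are active (a quick sanity check using standard tools):

# both values should be 1
sysctl net.ipv4.ip_forward
sysctl net.ipv4.conf.eno1.proxy_arp
# the VM subnet should show up as a route via vmbr0
ip route show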

Masquerading (NAT) with iptables

Masquerading allows guests having only a private IP address to access the network by using the host IP address for outgoing traffic. Each outgoing packet is rewritten by iptables to appear as originating from the host, and responses are rewritten accordingly to be routed to the original sender.

auto lo
iface lo inet loopback

auto eno1
#real IP address
iface eno1 inet static
        address  198.51.100.5
        netmask  255.255.255.0
        gateway  198.51.100.1

auto vmbr0
#private sub network
iface vmbr0 inet static
        address  10.10.10.1
        netmask  255.255.255.0
        bridge_ports none
        bridge_stp off
        bridge_fd 0

        post-up echo 1 > /proc/sys/net/ipv4/ip_forward
        post-up   iptables -t nat -A POSTROUTING -s '10.10.10.0/24' -o eno1 -j MASQUERADE
        post-down iptables -t nat -D POSTROUTING -s '10.10.10.0/24' -o eno1 -j MASQUERADE
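
To confirm that the masquerading rule is installed once vmbr0 is up, you can list the NAT table (standard iptables invocation):

# show POSTROUTING rules together with packet counters
iptables -t nat -L POSTROUTING -n -v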

Linux Bond

Bonding (also called NIC teaming or Link Aggregation) is a technique for binding multiple NICs to a single network device. It can be used to achieve different goals, such as making the network fault-tolerant, increasing performance, or both.

High-speed hardware like Fibre Channel and the associated switching hardware can be quite expensive. By doing link aggregation, two NICs can appear as one logical interface, resulting in double speed. This is a native Linux kernel feature that is supported by most switches. If your nodes have multiple Ethernet ports, you can distribute your points of failure by running network cables to different switches, and the bonded connection will fail over to one cable or the other in case of network trouble.

Aggregated links can reduce live-migration delays and improve the speed of data replication between Proxmox VE cluster nodes.

There are 7 modes for bonding:

  • Round-robin (balance-rr): Transmit network packets in sequential order from the first available network interface (NIC) slave through the last. This mode provides load balancing and fault tolerance.

  • Active-backup (active-backup): Only one NIC slave in the bond is active. A different slave becomes active if, and only if, the active slave fails. The single logical bonded interface’s MAC address is externally visible on only one NIC (port) to avoid distortion in the network switch. This mode provides fault tolerance.

  • XOR (balance-xor): Transmit network packets based on [(source MAC address XOR’d with destination MAC address) modulo NIC slave count]. This selects the same NIC slave for each destination MAC address. This mode provides load balancing and fault tolerance.

  • Broadcast (broadcast): Transmit network packets on all slave network interfaces. This mode provides fault tolerance.

  • IEEE 802.3ad Dynamic link aggregation (802.3ad)(LACP): Creates aggregation groups that share the same speed and duplex settings. Utilizes all slave network interfaces in the active aggregator group according to the 802.3ad specification.

  • Adaptive transmit load balancing (balance-tlb): Linux bonding driver mode that does not require any special network-switch support. The outgoing network packet traffic is distributed according to the current load (computed relative to the speed) on each network interface slave. Incoming traffic is received by one currently designated slave network interface. If this receiving slave fails, another slave takes over the MAC address of the failed receiving slave.

  • Adaptive load balancing (balance-alb): Includes balance-tlb plus receive load balancing (rlb) for IPV4 traffic, and does not require any special network switch support. The receive load balancing is achieved by ARP negotiation. The bonding driver intercepts the ARP Replies sent by the local system on their way out and overwrites the source hardware address with the unique hardware address of one of the NIC slaves in the single logical bonded interface such that different network-peers use different MAC addresses for their network packet traffic.

If your switch supports the LACP (IEEE 802.3ad) protocol, then we recommend using the corresponding bonding mode (802.3ad). Otherwise you should generally use the active-backup mode.
If you intend to run your cluster network on the bonding interfaces, then you have to use active-passive (active-backup) mode on the bonding interfaces; other modes are unsupported.
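
As a sketch, a bond intended for the cluster network could therefore look like this (active-backup mode; the interface names eno1/eno2 and the address are placeholders):

iface eno1 inet manual

iface eno2 inet manual

auto bond0
iface bond0 inet static
      slaves eno1 eno2
      address  192.168.2.2
      netmask  255.255.255.0
      bond_miimon 100
      bond_mode active-backup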

The following bond configuration can be used as a distributed/shared storage network. The benefit is that you get more speed and the network will be fault-tolerant.

Example: Use bond with fixed IP address
auto lo
iface lo inet loopback

iface eno1 inet manual

iface eno2 inet manual

auto bond0
iface bond0 inet static
      slaves eno1 eno2
      address  192.168.1.2
      netmask  255.255.255.0
      bond_miimon 100
      bond_mode 802.3ad
      bond_xmit_hash_policy layer2+3

auto vmbr0
iface vmbr0 inet static
        address  10.10.10.2
        netmask  255.255.255.0
        gateway  10.10.10.1
        # eno1 and eno2 are already enslaved to bond0 above, so the guest
        # bridge needs its own uplink; a third NIC (eno3) is assumed here
        bridge_ports eno3
        bridge_stp off
        bridge_fd 0
[Figure: default-network-setup-bond.svg]
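
Once a bond is up, its mode and the state of its slaves can be inspected through the kernel's bonding driver (standard procfs interface):

# show bonding mode, LACP information and per-slave link status
cat /proc/net/bonding/bond0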

Another possibility is to use the bond directly as the bridge port. This can be used to make the guest network fault-tolerant.

Example: Use a bond as bridge port
auto lo
iface lo inet loopback

iface eno1 inet manual

iface eno2 inet manual

auto bond0
iface bond0 inet manual
      slaves eno1 eno2
      bond_miimon 100
      bond_mode 802.3ad
      bond_xmit_hash_policy layer2+3

auto vmbr0
iface vmbr0 inet static
        address  10.10.10.2
        netmask  255.255.255.0
        gateway  10.10.10.1
        bridge_ports bond0
        bridge_stp off
        bridge_fd 0

VLAN 802.1Q

A virtual LAN (VLAN) is a broadcast domain that is partitioned and isolated in the network at layer two. So it is possible to have multiple networks (4096) in a physical network, each independent of the others.

Each VLAN network is identified by a number, often called the tag. Network packets are then tagged to identify which virtual network they belong to.
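
For example, a tagged VLAN device can be created on top of a physical NIC ad hoc with iproute2 (a sketch; eno1 and VLAN ID 5 are placeholders, and such a device is not persistent unless it is also added to /etc/network/interfaces):

# create eno1.5 carrying VLAN tag 5 on top of eno1 and activate it
ip link add link eno1 name eno1.5 type vlan id 5
ip link set eno1.5 up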

VLAN for Guest Networks

Proxmox VE supports this setup out of the box. You can specify the VLAN tag when you create a VM. The VLAN tag is part of the guest network configuration. The networking layer supports different modes to implement VLANs, depending on the bridge configuration:

  • VLAN awareness on the Linux bridge: In this case, each guest’s virtual network card is assigned to a VLAN tag, which is transparently supported by the Linux bridge. Trunk mode is also possible, but that makes configuration in the guest necessary.

  • "traditional" VLAN on the Linux bridge: In contrast to the VLAN awareness method, this method is not transparent and creates a VLAN device with associated bridge for each VLAN. That is, creating a guest on VLAN 5 for example, would create two interfaces eno1.5 and vmbr0v5, which would remain until a reboot occurs.

  • Open vSwitch VLAN: This mode uses the OVS VLAN feature.

  • Guest configured VLAN: VLANs are assigned inside the guest. In this case, the setup is completely done inside the guest and can not be influenced from the outside. The benefit is that you can use more than one VLAN on a single virtual NIC.
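
As mentioned above, for the bridge based modes the VLAN tag is simply part of the guest network device definition and can be set, for example, on the command line (a sketch with a hypothetical VM ID 100):

# put the VM's first network device on vmbr0 with VLAN tag 5
qm set 100 -net0 virtio,bridge=vmbr0,tag=5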

VLAN on the Host

To allow host communication with an isolated network, it is possible to apply VLAN tags to any network device (NIC, Bond, Bridge). In general, you should configure the VLAN on the interface with the least abstraction layers between itself and the physical NIC.

For example, consider a default configuration where you want to place the host management address on a separate VLAN.

Example: Use VLAN 5 for the Proxmox VE management IP with traditional Linux bridge
auto lo
iface lo inet loopback

iface eno1 inet manual

iface eno1.5 inet manual

auto vmbr0v5
iface vmbr0v5 inet static
        address  10.10.10.2
        netmask  255.255.255.0
        gateway  10.10.10.1
        bridge_ports eno1.5
        bridge_stp off
        bridge_fd 0

auto vmbr0
iface vmbr0 inet manual
        bridge_ports eno1
        bridge_stp off
        bridge_fd 0
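
To check that the VLAN device really carries the expected tag, the detailed link view can be used (standard iproute2; eno1.5 as in the example above):

# the output should include "vlan protocol 802.1Q id 5"
ip -d link show eno1.5
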
Example: Use VLAN 5 for the Proxmox VE management IP with VLAN aware Linux bridge
auto lo
iface lo inet loopback

iface eno1 inet manual


auto vmbr0.5
iface vmbr0.5 inet static
        address  10.10.10.2
        netmask  255.255.255.0
        gateway  10.10.10.1

auto vmbr0
iface vmbr0 inet manual
        bridge_ports eno1
        bridge_stp off
        bridge_fd 0
        bridge_vlan_aware yes

The next example is the same setup but a bond is used to make this network fail-safe.

Example: Use VLAN 5 with bond0 for the Proxmox VE management IP with traditional Linux bridge
auto lo
iface lo inet loopback

iface eno1 inet manual

iface eno2 inet manual

auto bond0
iface bond0 inet manual
      slaves eno1 eno2
      bond_miimon 100
      bond_mode 802.3ad
      bond_xmit_hash_policy layer2+3

iface bond0.5 inet manual

auto vmbr0v5
iface vmbr0v5 inet static
        address  10.10.10.2
        netmask  255.255.255.0
        gateway  10.10.10.1
        bridge_ports bond0.5
        bridge_stp off
        bridge_fd 0

auto vmbr0
iface vmbr0 inet manual
        bridge_ports bond0
        bridge_stp off
        bridge_fd 0