Proxmox VE uses a bridged networking model. Each host can have up to 4094 bridges. Bridges are like physical network switches implemented in software on the Proxmox VE host. All VMs can share one bridge as if virtual network cables from each guest were all plugged into the same switch. For connecting VMs to the outside world, bridges are attached to physical network cards assigned a TCP/IP configuration. For further flexibility, VLANs (IEEE 802.1q) and network bonding/aggregation are possible. In this way it is possible to build complex, flexible virtual networks.
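On an installed host, the bridges and their attached ports can be inspected with standard iproute2 tooling. A quick sketch (device names depend on your setup):

```shell
# List bridge devices on the host (e.g. vmbr0)
ip link show type bridge

# Show which ports (physical NICs and VM tap devices) are attached to each bridge
bridge link show
```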
The network configuration is usually changed using the web interface. Changes are written to
/etc/network/interfaces.new and are activated when you reboot the host. The active configuration resides in
/etc/network/interfaces; the following examples show the contents of that file.
- 1 Default Configuration (bridged)
- 2 Routed Configuration
- 3 Masquerading (NAT) with iptables
- 4 QEMU port redirection
- 5 Configuring VLAN in a cluster
- 6 Unsupported Routing
- 7 Naming Conventions
- 8 Video Tutorials
- 9 References
Default Configuration (bridged)
The installation program creates a single bridge (vmbr0), which is connected to the first ethernet card (eth0).
auto lo
iface lo inet loopback

iface eth0 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.10.2
        netmask 255.255.255.0
        gateway 192.168.10.1
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0
Virtual machines behave as if they were directly connected to the physical network. The network, in turn, sees each virtual machine as having its own MAC, even though there is only one network cable connecting all of these VMs to the network.
Routed Configuration
Most hosting providers do not support the above setup: for security reasons, they disable networking as soon as they detect multiple MAC addresses on a single interface. See the discussion on multiple subnets on Proxmox using different gateways.
A common setup is a public IP (assume 192.168.10.2 for this example) and an additional IP block for your VMs (10.10.10.0/255.255.255.0, with 10.10.10.1 on the host). For such situations we recommend the following setup:
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
        address 192.168.10.2
        netmask 255.255.255.0
        gateway 192.168.10.1
        post-up echo 1 > /proc/sys/net/ipv4/conf/eth0/proxy_arp

auto vmbr0
iface vmbr0 inet static
        address 10.10.10.1
        netmask 255.255.255.0
        bridge_ports none
        bridge_stp off
        bridge_fd 0
Masquerading (NAT) with iptables
In some cases you may want to use private IPs behind your Proxmox host's true IP, and masquerade the traffic using NAT:
auto lo
iface lo inet loopback

# real IP address
auto eth0
iface eth0 inet static
        address 192.168.10.2
        netmask 255.255.255.0
        gateway 192.168.10.1

# private subnet
auto vmbr0
iface vmbr0 inet static
        address 10.10.10.1
        netmask 255.255.255.0
        bridge_ports none
        bridge_stp off
        bridge_fd 0
        post-up   echo 1 > /proc/sys/net/ipv4/ip_forward
        post-up   iptables -t nat -A POSTROUTING -s '10.10.10.0/24' -o eth0 -j MASQUERADE
        post-down iptables -t nat -D POSTROUTING -s '10.10.10.0/24' -o eth0 -j MASQUERADE
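Once the bridge is up, the forwarding flag and the NAT rule from the example above can be checked on the host (a sketch, to be run as root on a live node):

```shell
# Confirm IP forwarding is enabled (should print 1)
cat /proc/sys/net/ipv4/ip_forward

# List the POSTROUTING rules; the MASQUERADE rule for 10.10.10.0/24 should appear
iptables -t nat -S POSTROUTING
```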
QEMU port redirection
If your VM's network is configured in NAT mode, there is another option. According to this thread, QEMU can do port forwarding by itself (for TCP and UDP only), but both the web UI and the qm(1) command currently lack the ability to control this directly. Instead, you run:
qm set <vmid> -args "-redir tcp:5555::22"
in order to, say, redirect host port tcp/5555 to guest port tcp/22. At this point you should be able to ssh to the NAT-mode guest VM by connecting to port 5555 on the host's IP.
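For example, assuming the host's address is 192.168.10.2 as above, you would connect to the guest with:

```shell
# SSH to the NAT-mode guest via the redirected host port
ssh -p 5555 root@192.168.10.2
```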
Configuring VLAN in a cluster
For the simplest way to create a VLAN, follow the link: VLAN
The goals of this example:
- Have two separate networks on the same NIC
- Another host (a firewall) manages the routing and the rules to access these VMs (outside the scope of this document)
Suppose this scenario:
- A cluster with two nodes
- Each node has two NICs
- We want to bond the NICs
- We use two networks: one untagged (192.168.1.0/24) and one tagged with VLAN ID 53 (192.168.2.0/24); the switch must be configured with per-port VLANs accordingly.
- We want to separate these networks at layer 2
First of all, we create the bond0 (switch-assisted 802.3ad) in the Proxmox web interface; follow the video.
Please note that no matter how many NICs you add to a bond, the speed of a single connection can never be higher than the speed of an individual NIC. What bonding provides is that more connections can run in parallel, each at the full speed of a single NIC.
In the end we have an /etc/network/interfaces like this:
# network interface settings
auto lo
iface lo inet loopback

iface eth0 inet manual
iface eth1 inet manual

auto bond0
iface bond0 inet manual
        slaves eth0 eth1
        bond_miimon 100
        bond_mode 802.3ad

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.1
        netmask 255.255.255.0
        gateway 192.168.1.250
        bridge_ports bond0
        bridge_stp off
        bridge_fd 0
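After the bond is up, the kernel exposes its state under /proc; this is a quick way to confirm that 802.3ad negotiation with the switch succeeded (a sketch, to be run on the host):

```shell
# Show the bonding mode, LACP partner information and the state of each slave NIC
cat /proc/net/bonding/bond0
```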
Configure your switch appropriately. If you're using a bond of multiple links, you need to tell this to your switch and put the switch ports in a Link Aggregation Group or Trunk.
We have two methods to follow:
First, the explicit method:
auto vlan53
iface vlan53 inet manual
        vlan_raw_device bond0
Alternatively, we can use the NIC name followed by a dot and the VLAN ID directly, like bond0.53.
I prefer the first, explicit method!
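For reference, the equivalent /etc/network/interfaces stanza for the dot-notation method would look like this (a sketch, assuming the standard Debian vlan handling where the raw device is derived from the name):

```
auto bond0.53
iface bond0.53 inet manual
```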
Create the bridge manually
Now we create the second bridge manually. Here we assume 192.168.2.250 as the gateway for the 192.168.2.0/24 network.
auto vmbr1
iface vmbr1 inet static
        address 192.168.2.1
        netmask 255.255.255.0
        network 192.168.2.0
        bridge_ports vlan53
        bridge_stp off
        bridge_fd 0
        post-up   ip route add table vlan53 default via 192.168.2.250 dev vmbr1
        post-up   ip rule add from 192.168.2.0/24 table vlan53
        post-down ip route del table vlan53 default via 192.168.2.250 dev vmbr1
        post-down ip rule del from 192.168.2.0/24 table vlan53
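After bringing vmbr1 up, the policy routing entries created by the post-up hooks can be verified on the host (a sketch; the table name matches this example):

```shell
# The rule sending traffic from 192.168.2.0/24 to table vlan53 should be listed
ip rule show

# The default route installed in table vlan53 should point at 192.168.2.250
ip route show table vlan53
```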
- We must not specify a gateway here; instead, we modify the routing table manually using iproute2.
- The whole configuration must be replicated on the other cluster node; the only change is the node's IP addresses.
Create the table in iproute2
We must edit the file /etc/iproute2/rt_tables and add the following lines:
# Table for vlan53
53      vlan53
Use these commands to add them:
echo "# Table for vlan53" >> /etc/iproute2/rt_tables
echo "53 vlan53" >> /etc/iproute2/rt_tables
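Since the file is appended blindly, re-running those commands duplicates the entry. A slightly safer, re-runnable variant checks first; the sketch below works on a temporary copy for demonstration, while on a real node you would point it at /etc/iproute2/rt_tables:

```shell
# Work on a temporary copy for demonstration; on a real node use /etc/iproute2/rt_tables.
RT_TABLES="$(mktemp)"
printf '255\tlocal\n254\tmain\n253\tdefault\n0\tunspec\n' > "$RT_TABLES"

# Append the vlan53 table only if it is not already present (safe to re-run).
if ! grep -q 'vlan53' "$RT_TABLES"; then
    echo "# Table for vlan53" >> "$RT_TABLES"
    echo "53 vlan53" >> "$RT_TABLES"
fi

grep 'vlan53' "$RT_TABLES"   # prints the comment and the table entry
```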
Create the VLANs on the switch
For example, on a 52-port HP ProCurve switch we use the following instructions to create the VLANs.
- Ports 47-48 trunk (switch assisted 802.3ad) for gateway
- Ports 1-2 trunk (switch assisted 802.3ad) for the first node of cluster proxmox
- Ports 3-4 trunk (switch assisted 802.3ad) for the second node
Enter configuration mode and type:
trunk 1-2 Trk1 LACP
trunk 3-4 Trk2 LACP
trunk 47-48 Trk3 LACP
vlan 2
   name "Vlan2"
   untagged Trk1-Trk3
   ip address 192.168.1.254 255.255.255.0
   exit
vlan 53
   name "Vlan53"
   tagged Trk1-Trk3
   exit
Test the configuration
Reboot the cluster nodes one by one to test this configuration.
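After each node comes back up, a quick connectivity check from the node confirms that both networks work (a sketch, using the addresses from this example):

```shell
# Untagged network: reach the gateway on VLAN 2
ping -c 3 192.168.1.250

# Tagged network: reach the gateway on VLAN 53
ping -c 3 192.168.2.250

# The policy route for the tagged network should be in place
ip route show table vlan53
```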
Unsupported Routing
A physical NIC (e.g., eth1) cannot currently be made available exclusively to a particular KVM guest or container, i.e., without a bridge and/or bond.
Naming Conventions
- Ethernet devices: eth0 - eth99
- Allowable bridge names: vmbrn, where 0 ≤ n ≤ 4094
- Bonds: bond0 - bond9
- VLANs: Simply add the VLAN number to the ethernet device name, separated by a period. For example "eth0.50"
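These conventions are easy to check in a script. The helper below is hypothetical (not part of any Proxmox tool); it validates a proposed bridge name against the vmbr0–vmbr4094 rule:

```shell
# Hypothetical helper: accept only names of the form vmbrN with 0 <= N <= 4094.
is_valid_bridge_name() {
    name="$1"
    case "$name" in
        vmbr*) n="${name#vmbr}" ;;
        *) return 1 ;;
    esac
    # Reject empty or non-numeric suffixes
    case "$n" in
        ''|*[!0-9]*) return 1 ;;
    esac
    [ "$n" -le 4094 ]
}

is_valid_bridge_name vmbr0    && echo "vmbr0: ok"
is_valid_bridge_name vmbr4095 || echo "vmbr4095: rejected"
is_valid_bridge_name eth0     || echo "eth0: rejected"
```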