The Software Defined Network (SDN) feature allows you to create virtual networks (VNets) at the datacenter level.
SDN is currently an experimental feature in Proxmox VE. The documentation for it is also still under development. Ask on our mailing lists or in the forum for questions and feedback.
Installation
To enable the experimental Software Defined Network (SDN) integration, you need to install the libpve-network-perl and ifupdown2 packages on every node:
apt update
apt install libpve-network-perl ifupdown2
Proxmox VE version 7 and above ship with ifupdown2 preinstalled.
After this, you need to add the following line to the end of the /etc/network/interfaces configuration file, so that the SDN configuration gets included and activated.
source /etc/network/interfaces.d/*
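For example, assuming a root shell, the line can be appended like this (only if it is not already present):

grep -qx 'source /etc/network/interfaces.d/\*' /etc/network/interfaces \
    || echo 'source /etc/network/interfaces.d/*' >> /etc/network/interfaces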
Basic Overview
The Proxmox VE SDN allows for separation and fine-grained control of virtual guest networks, using flexible, software-controlled configurations.
Separation is managed through zones, where a zone is its own virtually separated network area. A VNet is a virtual network that belongs to a zone. Depending on which type or plugin the zone uses, it can behave differently and offer different features, advantages, and disadvantages. Normally, a VNet appears as a common Linux bridge with either a VLAN or VXLAN tag; however, some zones can also use layer 3 routing for control. VNets are deployed locally on each node, after being configured from the cluster-wide datacenter SDN administration interface.
Main Configuration
Configuration is done at the datacenter (cluster-wide) level and is saved in files located in the shared configuration file system: /etc/pve/sdn
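Depending on which features are configured, you will typically find a handful of plain-text section-config files there, for example (exact file names can vary between versions):

ls /etc/pve/sdn
# e.g.: zones.cfg  vnets.cfg  subnets.cfg  controllers.cfg  ipams.cfg  dns.cfg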
On the web-interface, SDN features 3 main sections:

- SDN: An overview of the SDN state
- Zones: Create and manage the virtually separated network zones
- VNets: Create virtual network bridges and manage subnets
In addition to this, the following options are offered:

- Controller: For controlling layer 3 routing in complex setups
- Subnets: Used to define IP networks on VNets
- IPAM: Enables the use of external tools for IP address management (guest IPs)
- DNS: Define a DNS server API for registering virtual guests' hostnames and IP addresses
SDN
This is the main status panel. Here you can see the deployment status of zones on different nodes.
The Apply button is used to push and reload local configuration on all cluster nodes.
Local Deployment Monitoring
After applying the configuration through the main SDN panel, the local network configuration is generated locally on each node in the file /etc/network/interfaces.d/sdn, and reloaded with ifupdown2.
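For example, to inspect the generated configuration on a node, or to reload it manually with ifupdown2:

cat /etc/network/interfaces.d/sdn
ifreload -a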
You can monitor the status of local zones and VNets through the main tree.
Zones
A zone defines a virtually separated network. Zones can be restricted to specific nodes and assigned permissions, in order to restrict users to a certain zone and its contained VNets.
Different technologies can be used for separation:

- VLAN: Virtual LANs are the classic method of subdividing a LAN
- QinQ: Stacked VLAN (formally known as IEEE 802.1ad)
- VXLAN: Layer 2 VXLAN
- Simple: Isolated bridge. A simple layer 3 routing bridge (NAT)
- EVPN (BGP EVPN): VXLAN using layer 3 border gateway protocol (BGP) routing
Common options
The following options are available for all zone types:

- nodes: The nodes which the zone and associated VNets should be deployed on
- ipam: Optional. Use an IP Address Management (IPAM) tool to manage IPs in the zone.
- dns: Optional. DNS API server.
- reversedns: Optional. Reverse DNS API server.
- dnszone: Optional. DNS domain name. Used to register hostnames, such as <hostname>.<domain>. The DNS zone must already exist on the DNS server.
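As an illustration only (these files are normally managed through the web-interface or API, and the exact format may differ between versions), a zone using some of the common options above might end up looking roughly like this in /etc/pve/sdn/zones.cfg:

simple: zone1
    nodes node1,node2
    ipam pve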
Simple Zones
This is the simplest plugin. It will create an isolated VNet bridge. This bridge is not linked to a physical interface, and VM traffic is only local to the node(s) on which it is deployed. It can also be used in NAT or routed setups.
VLAN Zones
This plugin reuses an existing local Linux or OVS bridge, and manages the VLANs on it. The benefit of using the SDN module is that you can create different zones with specific VNet VLAN tags, and restrict virtual machines to separated zones.
Specific VLAN configuration options:

- bridge: Reuse this local bridge or OVS switch, already configured on each local node.
QinQ Zones
QinQ, also known as VLAN stacking, is a technique wherein the first VLAN tag is defined for the zone (the service VLAN), and the second VLAN tag is defined for the VNets.
Your physical network switches must support stacked VLANs for this configuration!
Below are the configuration options specific to QinQ:

- bridge: A local, VLAN-aware bridge that is already configured on each local node
- service vlan: The main VLAN tag of this zone
- service vlan protocol: Allows you to choose between an 802.1q (default) or 802.1ad service VLAN type.
- mtu: Due to the double stacking of tags, you need 4 more bytes for QinQ VLANs. For example, you must reduce the MTU to 1496 if your physical interface MTU is 1500.
VXLAN Zones
The VXLAN plugin establishes a tunnel (overlay) on top of an existing network (underlay). This encapsulates layer 2 Ethernet frames within layer 4 UDP datagrams, using 4789 as the default destination port. You can, for example, create a private IPv4 VXLAN network on top of public internet network nodes.
This is a layer 2 tunnel only, so no routing between different VNets is possible.
Each VNet will have a specific VXLAN ID in the range 1 - 16777215.
Specific VXLAN configuration options:

- peers address list: A list of IP addresses of each node through which you want to communicate. Can also include external nodes.
- mtu: Because VXLAN encapsulation uses 50 bytes, the MTU needs to be 50 bytes lower than that of the outgoing physical interface.
EVPN Zones
This is the most complex of all the supported plugins.
BGP-EVPN allows you to create a routable layer 3 network. The VNets of an EVPN zone can have an anycast IP address and/or MAC address. The bridge IP is the same on each node, meaning a virtual guest can use this address as its gateway.
Routing can work across VNets from different zones through a VRF (Virtual Routing and Forwarding) interface.
The configuration options specific to EVPN are as follows:

- VRF VXLAN tag: A VXLAN-ID used for routing interconnect between VNets. It must be different from the VXLAN-IDs of the VNets.
- controller: An EVPN controller must be defined first (see the controller plugins section).
- VNet MAC address: A unique, anycast MAC address for all VNets in this zone. Will be auto-generated if not defined.
- Exit Nodes: Optional. Used if you want to define some Proxmox VE nodes as exit gateways from the EVPN network, through the real network. The configured nodes will announce a default route in the EVPN network.
- Primary Exit Node: Optional. If you use multiple exit nodes, this forces traffic through a primary exit node, instead of load-balancing on all nodes. This is required if you want to use SNAT, or if your upstream router doesn't support ECMP.
- Exit Nodes local routing: Optional. A special option if you need to reach a VM/CT service from an exit node. (By default, exit nodes only allow forwarding traffic between the real network and the EVPN network.)
- Advertise Subnets: Optional. If you have silent VMs/CTs (for example, guests with multiple IPs where the anycast gateway doesn't see traffic from some of these IPs, so those addresses won't be reachable inside the EVPN network), this option announces the full subnet in the EVPN network.
- Disable ARP-ND Suppression: Optional. Don't suppress ARP or ND packets. This is required if you use floating IPs in your guest VMs (IP and MAC addresses are moved between systems).
- Route-target import: Optional. Allows you to import a list of external EVPN route targets. Used for cross-DC or different EVPN network interconnects.
- MTU: Because VXLAN encapsulation uses 50 bytes, the MTU needs to be 50 bytes less than the maximal MTU of the outgoing physical interface.
VNets
A VNet is, in its basic form, a Linux bridge that will be deployed locally on the node and used for virtual machine communication.
The VNet configuration properties are:

- ID: An 8 character ID to name and identify a VNet
- Alias: Optional longer name, if the ID isn't enough
- Zone: The associated zone for this VNet
- Tag: The unique VLAN or VXLAN ID
- VLAN Aware: Enables adding an extra VLAN tag in the virtual machine or container's vNIC configuration, allowing the guest OS to manage the VLAN tag itself.
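For example, with VLAN Aware enabled on the VNet, a Debian guest could then manage its own VLAN tag directly on its vNIC; a sketch with an illustrative VLAN ID and address:

auto eth0.20
iface eth0.20 inet static
    address 10.0.20.100/24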
Subnets
A subnetwork (subnet) allows you to define a specific IP network (IPv4 or IPv6). For each VNet, you can define one or more subnets.
A subnet can be used to:

- Restrict the IP addresses you can define on a specific VNet
- Assign routes/gateways on a VNet in layer 3 zones
- Enable SNAT on a VNet in layer 3 zones
- Auto-assign IPs on virtual guests (VM or CT) through IPAM plugins
- Enable DNS registration through DNS plugins
If an IPAM server is associated with the subnet zone, the subnet prefix will be automatically registered in the IPAM.
The subnet properties are:

- ID: A CIDR network address, for example 10.0.0.0/8
- Gateway: The IP address of the network's default gateway. On layer 3 zones (Simple/EVPN plugins), it will be deployed on the VNet.
- SNAT: Optional. Enable SNAT for layer 3 zones (Simple/EVPN plugins), for this subnet. The subnet's source IP will be NATted to the server's outgoing interface/IP. On EVPN zones, this is only done on EVPN gateway nodes.
- Dnszoneprefix: Optional. Add a prefix to the domain registration, like <hostname>.prefix.<domain>
Controllers
Some zone types need an external controller to manage the VNet control-plane. Currently this is only required for the bgp-evpn zone plugin.
EVPN Controller
For BGP-EVPN, we need a controller to manage the control plane. The currently supported software controller is the "frr" router. You may need to install it on each node where you want to deploy EVPN zones.
apt install frr frr-pythontools
Configuration options:

- asn: A unique BGP ASN number. It's highly recommended to use a private ASN number (64512 - 65534, 4200000000 - 4294967294), as otherwise you could end up breaking global routing by mistake.
- peers: An IP list of all nodes with which you want to communicate for the EVPN (can also include external nodes or route reflector servers)
BGP Controller
The BGP controller is not used directly by a zone. You can use it to configure FRR to manage BGP peers.
For BGP-EVPN, it can be used to define a different ASN per node, in order to do EBGP.
Configuration options:

- node: The node of this BGP controller
- asn: A unique BGP ASN number. It's highly recommended to use a private ASN number in the range (64512 - 65534) or (4200000000 - 4294967294), as otherwise you could break global routing by mistake.
- peers: A list of peer IP addresses you want to communicate with using the underlying BGP network.
- ebgp: If your peer's remote-AS is different, this enables EBGP.
- loopback: Use a loopback or dummy interface as the source of the EVPN network (for multipath).
- ebgp-multihop: Increase the number of hops to reach peers, in case they are not directly connected or they use loopback.
- bgp-multipath-as-path-relax: Allow ECMP if your peers have different ASNs.
IPAMs
IPAM (IP Address Management) tools are used to manage/assign the IP addresses of guests on the network. They can be used, for example, to find free IP addresses when creating a VM/CT (not yet implemented).
An IPAM can be associated with one or more zones, to provide IP addresses for all subnets defined in those zones.
Proxmox VE IPAM Plugin
This is the default internal IPAM for your Proxmox VE cluster, if you don’t have external IPAM software.
phpIPAM Plugin
You need to create an application in phpIPAM and add an API token with admin privileges.
The phpIPAM configuration properties are:

- url: The REST-API endpoint: http://phpipam.domain.com/api/<appname>/
- token: An API access token
- section: An integer ID. Sections are a group of subnets in phpIPAM. Default installations use sectionid=1 for customers.
NetBox IPAM Plugin
NetBox is an IP address management (IPAM) and datacenter infrastructure management (DCIM) tool. See the source code repository for details: https://github.com/netbox-community/netbox
You need to create an API token in NetBox to use it: https://netbox.readthedocs.io/en/stable/api/authentication
The NetBox configuration properties are:

- url: The REST API endpoint: http://yournetbox.domain.com/api
- token: An API access token
DNS
The DNS plugin in Proxmox VE SDN is used to define a DNS API server for registration of your hostname and IP address. A DNS configuration is associated with one or more zones, to provide DNS registration for all the subnet IPs configured for a zone.
PowerDNS Plugin
You need to enable the web server and the API in your PowerDNS config:
api=yes
api-key=arandomgeneratedstring
webserver=yes
webserver-port=8081
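With these settings in place, you can check that the API is reachable before configuring the plugin, for example (host and key are placeholders matching the options below):

curl -H 'X-API-Key: arandomgeneratedstring' http://yourpowerdnserver.domain.com:8081/api/v1/servers/localhost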
The PowerDNS configuration options are:

- url: The REST API endpoint: http://yourpowerdnserver.domain.com:8081/api/v1/servers/localhost
- key: An API access key
- ttl: The default TTL for records
Examples
VLAN Setup Example
While we show plaintext configuration content here, almost everything should be configurable using the web-interface only.
Node1: /etc/network/interfaces
auto vmbr0
iface vmbr0 inet manual
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094

#management ip on vlan100
auto vmbr0.100
iface vmbr0.100 inet static
    address 192.168.0.1/24

source /etc/network/interfaces.d/*
Node2: /etc/network/interfaces
auto vmbr0
iface vmbr0 inet manual
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094

#management ip on vlan100
auto vmbr0.100
iface vmbr0.100 inet static
    address 192.168.0.2/24

source /etc/network/interfaces.d/*
Create a VLAN zone named ‘myvlanzone’:
id: myvlanzone bridge: vmbr0
Create a VNet named ‘myvnet1’ with VLAN tag 10, using the previously created ‘myvlanzone’ as its zone.
id: myvnet1
zone: myvlanzone
tag: 10
Apply the configuration through the main SDN panel, to create VNets locally on each node.
Create a Debian-based virtual machine (vm1) on node1, with a vNIC on ‘myvnet1’.
Use the following network configuration for this VM:
auto eth0
iface eth0 inet static
    address 10.0.3.100/24
Create a second virtual machine (vm2) on node2, with a vNIC on the same VNet ‘myvnet1’ as vm1.
Use the following network configuration for this VM:
auto eth0
iface eth0 inet static
    address 10.0.3.101/24
Following this, you should be able to ping between both VMs over that network.
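For example, from vm1:

ping -c 3 10.0.3.101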
QinQ Setup Example
While we show plaintext configuration content here, almost everything should be configurable using the web-interface only.
Node1: /etc/network/interfaces
auto vmbr0
iface vmbr0 inet manual
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094

#management ip on vlan100
auto vmbr0.100
iface vmbr0.100 inet static
    address 192.168.0.1/24

source /etc/network/interfaces.d/*
Node2: /etc/network/interfaces
auto vmbr0
iface vmbr0 inet manual
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094

#management ip on vlan100
auto vmbr0.100
iface vmbr0.100 inet static
    address 192.168.0.2/24

source /etc/network/interfaces.d/*
Create a QinQ zone named ‘qinqzone1’ with service VLAN 20
id: qinqzone1
bridge: vmbr0
service vlan: 20
Create another QinQ zone named ‘qinqzone2’ with service VLAN 30
id: qinqzone2
bridge: vmbr0
service vlan: 30
Create a VNet named ‘myvnet1’ with customer VLAN-ID 100 on the previously created ‘qinqzone1’ zone.
id: myvnet1
zone: qinqzone1
tag: 100
Create a second VNet named ‘myvnet2’ with customer VLAN-ID 100 on the previously created ‘qinqzone2’ zone.
id: myvnet2
zone: qinqzone2
tag: 100
Apply the configuration on the main SDN web-interface panel to create VNets locally on each node.
Create a Debian-based virtual machine (vm1) on node1, with a vNIC on ‘myvnet1’.
Use the following network configuration for this VM:
auto eth0
iface eth0 inet static
    address 10.0.3.100/24
Create a second virtual machine (vm2) on node2, with a vNIC on the same VNet ‘myvnet1’ as vm1.
Use the following network configuration for this VM:
auto eth0
iface eth0 inet static
    address 10.0.3.101/24
Create a third virtual machine (vm3) on node1, with a vNIC on the other VNet ‘myvnet2’.
Use the following network configuration for this VM:
auto eth0
iface eth0 inet static
    address 10.0.3.102/24
Create another virtual machine (vm4) on node2, with a vNIC on the same VNet ‘myvnet2’ as vm3.
Use the following network configuration for this VM:
auto eth0
iface eth0 inet static
    address 10.0.3.103/24
Then, you should be able to ping between the VMs vm1 and vm2, as well as between vm3 and vm4. However, neither vm1 nor vm2 can reach vm3 or vm4, as they are in a different zone with a different service VLAN.
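For example, from vm1:

ping -c 3 10.0.3.101   # vm2, same zone (qinqzone1): should succeed
ping -c 3 10.0.3.102   # vm3, different zone (qinqzone2): should fail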
VXLAN Setup Example
While we show plaintext configuration content here, almost everything is configurable through the web-interface.
node1: /etc/network/interfaces
auto vmbr0
iface vmbr0 inet static
    address 192.168.0.1/24
    gateway 192.168.0.254
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
    mtu 1500

source /etc/network/interfaces.d/*
node2: /etc/network/interfaces
auto vmbr0
iface vmbr0 inet static
    address 192.168.0.2/24
    gateway 192.168.0.254
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
    mtu 1500

source /etc/network/interfaces.d/*
node3: /etc/network/interfaces
auto vmbr0
iface vmbr0 inet static
    address 192.168.0.3/24
    gateway 192.168.0.254
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
    mtu 1500

source /etc/network/interfaces.d/*
Create a VXLAN zone named ‘myvxlanzone’, using a lower MTU to ensure the extra 50 bytes of the VXLAN header can fit. Add all previously configured IPs from the nodes to the peer address list.
id: myvxlanzone
peers address list: 192.168.0.1,192.168.0.2,192.168.0.3
mtu: 1450
Create a VNet named ‘myvnet1’ using the VXLAN zone ‘myvxlanzone’ created previously.
id: myvnet1
zone: myvxlanzone
tag: 100000
Apply the configuration on the main SDN web-interface panel to create VNets locally on each node.
Create a Debian-based virtual machine (vm1) on node1, with a vNIC on ‘myvnet1’.
Use the following network configuration for this VM (note the lower MTU).
auto eth0
iface eth0 inet static
    address 10.0.3.100/24
    mtu 1450
Create a second virtual machine (vm2) on node3, with a vNIC on the same VNet ‘myvnet1’ as vm1.
Use the following network configuration for this VM:
auto eth0
iface eth0 inet static
    address 10.0.3.101/24
    mtu 1450
Then, you should be able to ping between vm1 and vm2.
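To also verify that the lowered MTU works end-to-end, you can send non-fragmentable packets close to the limit from vm1 (1422 bytes of ICMP payload plus 28 bytes of ICMP/IP headers equals the 1450 byte MTU):

ping -M do -s 1422 -c 3 10.0.3.101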
EVPN Setup Example
node1: /etc/network/interfaces
auto vmbr0
iface vmbr0 inet static
    address 192.168.0.1/24
    gateway 192.168.0.254
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
    mtu 1500

source /etc/network/interfaces.d/*
node2: /etc/network/interfaces
auto vmbr0
iface vmbr0 inet static
    address 192.168.0.2/24
    gateway 192.168.0.254
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
    mtu 1500

source /etc/network/interfaces.d/*
node3: /etc/network/interfaces
auto vmbr0
iface vmbr0 inet static
    address 192.168.0.3/24
    gateway 192.168.0.254
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
    mtu 1500

source /etc/network/interfaces.d/*
Create an EVPN controller, using a private ASN number and the above node addresses as peers.
id: myevpnctl
asn: 65000
peers: 192.168.0.1,192.168.0.2,192.168.0.3
Create an EVPN zone named ‘myevpnzone’, using the previously created EVPN-controller. Define node1 and node2 as exit nodes.
id: myevpnzone
vrf vxlan tag: 10000
controller: myevpnctl
mtu: 1450
vnet mac address: 32:F4:05:FE:6C:0A
exitnodes: node1,node2
Create the first VNet named ‘myvnet1’ using the EVPN zone ‘myevpnzone’.
id: myvnet1
zone: myevpnzone
tag: 11000
Create a subnet 10.0.1.0/24 with 10.0.1.1 as gateway on myvnet1.
subnet: 10.0.1.0/24
gateway: 10.0.1.1
Create the second VNet named ‘myvnet2’ using the same EVPN zone ‘myevpnzone’, but with a different IPv4 CIDR network.
id: myvnet2
zone: myevpnzone
tag: 12000
Create a different subnet, 10.0.2.0/24 with 10.0.2.1 as gateway, on myvnet2.
subnet: 10.0.2.0/24
gateway: 10.0.2.1
Apply the configuration from the main SDN web-interface panel to create VNets locally on each node and generate the FRR config.
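After applying, you can check the BGP-EVPN control plane on a node through FRR's vtysh, for example (output depends on your setup):

vtysh -c 'show bgp summary'
vtysh -c 'show bgp l2vpn evpn'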
Create a Debian-based virtual machine (vm1) on node1, with a vNIC on ‘myvnet1’.
Use the following network configuration for this VM:
auto eth0
iface eth0 inet static
    address 10.0.1.100/24
    gateway 10.0.1.1 #this is the IP of the gateway on myvnet1
    mtu 1450
Create a second virtual machine (vm2) on node2, with a vNIC on the other VNet ‘myvnet2’.
Use the following network configuration for this VM:
auto eth0
iface eth0 inet static
    address 10.0.2.100/24
    gateway 10.0.2.1 #this is the IP of the gateway on myvnet2
    mtu 1450
Then, you should be able to ping vm2 from vm1, and vm1 from vm2.
If you ping an external IP from vm2 on the non-gateway node3, the packet will go to the configured myvnet2 gateway, then be routed to an exit node (node1 or node2), and from there leave via the default gateway configured on that node.
You need to add reverse routes for the 10.0.1.0/24 and 10.0.2.0/24 networks, pointing to node1 and node2, on your external gateway, so that the public network can reply.
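If the external gateway happens to be a Linux machine, such reverse routes could, as a sketch, be added like this (addresses from this example, pointing to node1; alternatively configure ECMP next hops towards both exit nodes):

ip route add 10.0.1.0/24 via 192.168.0.1
ip route add 10.0.2.0/24 via 192.168.0.1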
If you have configured an external BGP router, the BGP-EVPN routes (10.0.1.0/24 and 10.0.2.0/24 in this example), will be announced dynamically.
Notes
Multiple EVPN Exit Nodes
If you have multiple gateway nodes, you should disable the rp_filter (Strict Reverse Path Filter) option, because packets can arrive at one node but go out from another node.
net.ipv4.conf.default.rp_filter=0
net.ipv4.conf.all.rp_filter=0
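To make these settings persistent, you could, for example, put them into a dedicated sysctl snippet (the file name is only an example) and reload:

cat > /etc/sysctl.d/90-sdn-evpn.conf <<'EOF'
net.ipv4.conf.default.rp_filter=0
net.ipv4.conf.all.rp_filter=0
EOF
sysctl --system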
VXLAN IPSEC Encryption
If you need to add encryption on top of a VXLAN, it’s possible to do so with IPSEC, through strongswan. You’ll need to reduce the MTU by 60 bytes (IPv4) or 80 bytes (IPv6) to handle encryption.
So with the default real MTU of 1500, you need to use an MTU of 1370 (1370 + 80 (IPSEC) + 50 (VXLAN) == 1500).
apt install strongswan
Add configuration to ‘/etc/ipsec.conf’. We only need to encrypt traffic from the VXLAN UDP port 4789.
conn %default
    ike=aes256-sha1-modp1024!  # the fastest, but reasonably secure cipher on modern HW
    esp=aes256-sha1!
    leftfirewall=yes           # this is necessary when using Proxmox VE firewall rules

conn output
    rightsubnet=%dynamic[udp/4789]
    right=%any
    type=transport
    authby=psk
    auto=route

conn input
    leftsubnet=%dynamic[udp/4789]
    type=transport
    authby=psk
    auto=route
Then generate a pre-shared key with:
openssl rand -base64 128
and add the key to ‘/etc/ipsec.secrets’, so that the file contents look like:
: PSK <generatedbase64key>
You need to copy the PSK and the configuration onto the other nodes.
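For example, to copy both files to another node (hostname is a placeholder) and then restart strongswan there (the exact service name depends on the installed strongswan packages):

scp /etc/ipsec.conf /etc/ipsec.secrets root@node2:/etc/
ssh root@node2 'systemctl restart strongswan-starter'   # service name may differ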