The Software Defined Network (SDN) feature allows you to create virtual networks (VNets) at the datacenter level.
SDN is currently an experimental feature in Proxmox VE. The documentation for it is also still under development. Ask on our mailing lists or in the forum for questions and feedback.
Installation
To enable the experimental SDN integration, you need to install the libpve-network-perl and ifupdown2 packages on every node:
apt update
apt install libpve-network-perl ifupdown2
After that you need to add the following line:
source /etc/network/interfaces.d/*
at the end of the /etc/network/interfaces configuration file, so that the SDN config gets included and activated.
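For example, the end of a node's /etc/network/interfaces could then look like this (the vmbr0 definition above the source line is just a placeholder for your existing configuration):

auto vmbr0
iface vmbr0 inet manual
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0

source /etc/network/interfaces.d/*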
Basic Overview
The Proxmox VE SDN allows for separation and fine-grained control of virtual guest networks, using flexible, software-controlled configurations.
Separation is managed through zones; a zone is its own virtually separated network area. A VNet is a virtual network that belongs to a zone. Depending on the type or plugin the zone uses, it can behave differently and offer different features, advantages, or disadvantages. Normally, a VNet appears as a common Linux bridge with either a VLAN or VXLAN tag, but some zone types can also use layer 3 routing for control. VNets are deployed locally on each node, after the configuration was committed from the cluster-wide datacenter SDN administration interface.
Main configuration
The configuration is done at the datacenter (cluster-wide) level and is saved in configuration files located in the shared configuration file system: /etc/pve/sdn
On the web interface, the SDN feature has 3 main sections for configuration:
- SDN: An overview of the SDN state
- Zones: Create and manage the virtually separated network zones
- VNets: Create virtual network bridges and manage subnets
And some options:
- Controller: For complex setups, to control layer 3 routing
- Subnets: Used to define IP networks on VNets
- IPAM: Enables the use of external tools for IP address management (guest IPs)
- DNS: Define a DNS server API for registering virtual guest hostnames and IP addresses
SDN
This is the main status panel. Here you can see the deployment status of zones on the different nodes.
There is an Apply button, to push and reload the local configuration on all cluster nodes.
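The same action can also be triggered from a node's shell; a minimal sketch, assuming the apply action is exposed as a PUT request on the /cluster/sdn API path:

pvesh set /cluster/sdn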
Local Deployment Monitoring
After applying the configuration through the main SDN web-interface panel, the local network configuration is generated on each node in /etc/network/interfaces.d/sdn and reloaded with ifupdown2.
You can monitor the status of local zones and VNets through the main tree.
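For example, to inspect the generated configuration on a node and reload it manually with ifupdown2:

cat /etc/network/interfaces.d/sdn
ifreload -a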
Zones
A zone will define a virtually separated network.
It can use different technologies for separation:
- VLAN: Virtual LANs are the classic method of subdividing a LAN
- QinQ: Stacked VLAN (formally known as IEEE 802.1ad)
- VXLAN: Layer 2 VXLAN tunnels
- Simple: Isolated bridge; a simple layer 3 routing bridge (NAT)
- bgp-evpn: VXLAN using layer 3 Border Gateway Protocol (BGP) routing
You can restrict a zone to specific nodes.
It’s also possible to add permissions on a zone, to restrict users to a specific zone and only the VNets in that zone.
Common options
The following options are available for all zone types.
- nodes: Deploy and allow to use the VNets configured for this zone only on these nodes.
- ipam: Optional. Use an IPAM tool to manage IPs in this zone.
- dns: Optional. DNS API server.
- reversedns: Optional. Reverse DNS API server.
- dnszone: Optional. DNS domain name. Used to register hostnames, like <hostname>.<domain>. The DNS zone must already exist on the DNS server.
Simple Zones
This is the simplest plugin. It will create an isolated VNet bridge. This bridge is not linked to a physical interface, and VM traffic is only local to the node(s). It can also be used for NAT or routed setups.
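As a rough sketch of a manual NAT setup on top of such a zone (the subnet 10.0.0.0/24 and the outgoing interface eno1 are assumptions, not something the plugin generates):

# enable IPv4 forwarding on the node
echo 1 > /proc/sys/net/ipv4/ip_forward
# masquerade traffic from the isolated subnet out of the physical interface
iptables -t nat -A POSTROUTING -s '10.0.0.0/24' -o eno1 -j MASQUERADE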
VLAN Zones
This plugin will reuse an existing local Linux or OVS bridge and manage the VLANs on it. The benefit of using the SDN module is that you can create different zones with specific VNet VLAN tags and restrict virtual machines to separated zones.
Specific VLAN configuration options:
- bridge: Reuse this local bridge or OVS switch, already configured on each local node.
QinQ Zones
QinQ is stacked VLAN. The first VLAN tag is defined for the zone (the so-called service VLAN), and the second VLAN tag is defined for the VNets.
Your physical network switches must support stacked VLANs!
Specific QinQ configuration options:
- bridge: A local, VLAN-aware bridge that is already configured on each local node.
- service vlan: The main VLAN tag of this zone.
- service vlan protocol: Allows you to define a 802.1q (default) or 802.1ad service VLAN type.
- mtu: Due to the double stacking of tags, you need 4 more bytes for QinQ VLANs. For example, you must reduce the MTU to 1496 if your physical interface MTU is 1500.
VXLAN Zones
The VXLAN plugin will establish a tunnel (called the overlay) on top of an existing network (called the underlay). It encapsulates layer 2 Ethernet frames within layer 4 UDP datagrams, using 4789 as the default destination port. You can, for example, create a private IPv4 VXLAN network on top of public internet network nodes. This is a layer 2 tunnel only; no routing between different VNets is possible.
Each VNet will use a specific VXLAN ID from the range (1 - 16777215).
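For orientation, the generated configuration in /etc/network/interfaces.d/sdn roughly follows ifupdown2's VXLAN syntax. The following is only an illustrative sketch; the interface naming, VXLAN ID, and peer addresses are made up and not necessarily what the plugin produces:

auto vxlan_myvnet1
iface vxlan_myvnet1
        vxlan-id 100000
        vxlan-remoteip 192.168.0.2
        vxlan-remoteip 192.168.0.3
        mtu 1450

auto myvnet1
iface myvnet1
        bridge-ports vxlan_myvnet1
        bridge-stp off
        bridge-fd 0
        mtu 1450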
Specific VXLAN configuration options:
- peers address list: A list of IP addresses of all nodes through which you want to communicate. Can also be external nodes.
- mtu: Because VXLAN encapsulation uses 50 bytes, the MTU needs to be 50 bytes lower than that of the outgoing physical interface.
EVPN Zones
This is the most complex of all supported plugins.
BGP-EVPN allows you to create a routable layer 3 network. The VNet of an EVPN zone can have an anycast IP address and/or MAC address. The bridge IP is the same on each node, which means a virtual guest can use this address as its gateway.
Routing can work across VNets from different zones through a VRF (Virtual Routing and Forwarding) interface.
Specific EVPN configuration options:
- VRF VXLAN tag: A VXLAN ID used for routing interconnect between VNets. It must be different from the VXLAN IDs of the VNets.
- controller: An EVPN controller needs to be defined first (see the controller plugins section).
- VNet MAC address: A unique, anycast MAC address for all VNets in this zone. Will be auto-generated if not defined.
- Exit Nodes: This is used if you want to define some Proxmox VE nodes as exit gateways from the EVPN network through the real network. The configured nodes will announce a default route in the EVPN network.
- MTU: Because VXLAN encapsulation uses 50 bytes, the MTU needs to be 50 bytes lower than the maximal MTU of the outgoing physical interface.
VNets
A VNet is, in its basic form, just a Linux bridge that will be deployed locally on the node and used for virtual machine communication.
VNet properties are:
- ID: An 8 character ID to name and identify a VNet.
- Alias: Optional longer name, if the ID isn’t enough.
- Zone: The associated zone for this VNet.
- Tag: The unique VLAN or VXLAN ID.
- VLAN Aware: Enables adding an extra VLAN tag in the virtual machine or container vNIC configuration, or allows the guest OS to manage the VLAN tag (see the example after this list).
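For example, with a VLAN-aware VNet, a VLAN tag can be set directly on a guest's vNIC; a sketch using the qm CLI, where the VM ID 100 and the VNet name myvnet1 are placeholders:

qm set 100 --net0 virtio,bridge=myvnet1,tag=20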
Sub-Nets
A sub-network (subnet) allows you to define a specific IP network (IPv4 or IPv6). For each VNet, you can define one or more subnets.
A subnet can be used to:
- restrict the IP addresses you can define on a specific VNet
- assign routes/gateways on a VNet in layer 3 zones
- enable SNAT on a VNet in layer 3 zones
- auto assign IPs on virtual guests (VM or CT) through an IPAM plugin
- DNS registration through DNS plugins
If an IPAM server is associated with the subnet's zone, the subnet prefix will be automatically registered in the IPAM.
Subnet properties are:
- ID: A CIDR network address, for example 10.0.0.0/8.
- Gateway: The IP address of the network's default gateway. On layer 3 zones (simple/EVPN plugins), it will be deployed on the VNet.
- Snat: Optional. Enable SNAT for layer 3 zones (simple/EVPN plugins) for this subnet. The subnet's source IP will be NATted to the server's outgoing interface/IP. On EVPN zones, this is only done on the EVPN gateway nodes.
- Dnszoneprefix: Optional. Add a prefix to the domain registration, like <hostname>.prefix.<domain>.
Controllers
Some zone types need an external controller to manage the VNet control-plane. Currently this is only required for the bgp-evpn zone plugin.
EVPN Controller
For BGP-EVPN, we need a controller to manage the control plane. The currently supported software controller is the "frr" router. You may need to install it on each node where you want to deploy EVPN zones:
apt install frr frr-pythontools
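Once installed, you can check that the daemon is running and, after an EVPN zone has been applied, inspect the control plane with FRR's own shell (these are standard FRR commands, not Proxmox-specific):

systemctl status frr
vtysh -c "show bgp l2vpn evpn summary"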
Configuration options:
- asn: A unique BGP ASN number. It's highly recommended to use a private ASN number (64512 - 65534, 4200000000 - 4294967294), as otherwise you could end up breaking global routing by mistake, or being broken by it.
- peers: An IP list of all nodes through which you want to communicate for the EVPN (could also be external nodes or route reflector servers).
BGP Controller
The BGP controller is not used directly by a zone. You can use it to configure FRR to manage BGP peers.
For BGP-EVPN, it can be used to define a different ASN per node, in order to do EBGP.
Configuration options:
- node: The node of this BGP controller.
- asn: A unique BGP ASN number. It's highly recommended to use a private ASN number from the range (64512 - 65534) or (4200000000 - 4294967294), as otherwise you could end up breaking global routing by mistake, or being broken by it.
- peers: An IP list of peers you want to communicate with for the underlying BGP network.
- ebgp: If your peer's remote AS is different, this enables EBGP.
- loopback: Use a loopback or dummy interface as the source for the EVPN network (for multipath).
- ebgp-multihop: If the peers are not directly connected or they use loopback interfaces, you can increase the number of hops to reach them.
IPAMs
IPAM (IP Address Management) tools are used to manage/assign the IP addresses of guests on the network. They can, for example, be used to find free IP addresses when you create a VM or CT (not yet implemented).
An IPAM is associated with one or more zones, to provide IP addresses for all subnets defined in those zones.
Proxmox VE IPAM plugin
This is the default internal IPAM for your Proxmox VE cluster, if you don't have external IPAM software.
phpIPAM plugin
You need to create an application in phpIPAM and add an API token with admin privileges.
phpIPAM properties are:
- url: The REST-API endpoint: http://phpipam.domain.com/api/<appname>/
- token: An API access token
- section: An integer ID. Sections are a group of subnets in phpIPAM. Default installations use sectionid=1 for customers.
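To verify the token, you can query the phpIPAM REST API directly; a sketch assuming a static app token passed in the token header (URL and token are placeholders):

curl -H "token: yourapitoken" http://phpipam.domain.com/api/<appname>/sections/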
Netbox IPAM plugin
NetBox is an IP address management (IPAM) and datacenter infrastructure management (DCIM) tool. See the source code repository for details: https://github.com/netbox-community/netbox
You need to create an API token in NetBox, see https://netbox.readthedocs.io/en/stable/api/authentication
NetBox properties are:
- url: The REST API endpoint: http://yournetbox.domain.com/api
- token: An API access token
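To verify the token, you can query the NetBox REST API; URL and token below are placeholders:

curl -H "Authorization: Token yourapitoken" http://yournetbox.domain.com/api/ipam/prefixes/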
DNS
The DNS plugin in Proxmox VE SDN is used to define a DNS API server for the registration of your hostnames and IP addresses. A DNS configuration is associated with one or more zones, to provide DNS registration for all the subnet IPs configured for a zone.
PowerDNS plugin
You need to enable the web server and the API in your PowerDNS config:
api=yes
api-key=arandomgeneratedstring
webserver=yes
webserver-port=8081
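You can then test that the API responds with the configured key, for example:

curl -H 'X-API-Key: arandomgeneratedstring' http://yourpowerdnserver.domain.com:8081/api/v1/servers/localhost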
PowerDNS properties are:
- url: The REST API endpoint: http://yourpowerdnserver.domain.com:8081/api/v1/servers/localhost
- key: An API access key
- ttl: The default TTL for records
Examples
VLAN Setup Example
While we show plain configuration content here, almost everything should be configurable using the web interface only.
Node1: /etc/network/interfaces
auto vmbr0
iface vmbr0 inet manual
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094

#management ip on vlan100
auto vmbr0.100
iface vmbr0.100 inet static
        address 192.168.0.1/24

source /etc/network/interfaces.d/*
Node2: /etc/network/interfaces
auto vmbr0
iface vmbr0 inet manual
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094

#management ip on vlan100
auto vmbr0.100
iface vmbr0.100 inet static
        address 192.168.0.2/24

source /etc/network/interfaces.d/*
Create a VLAN zone named ‘myvlanzone’:
id: myvlanzone
bridge: vmbr0
Create a VNet named ‘myvnet1’ with vlan-id 10 and the previously created ‘myvlanzone’ as its zone.
id: myvnet1
zone: myvlanzone
tag: 10
Apply the configuration through the main SDN panel, to create VNets locally on each node.
Create a Debian-based Virtual Machine (vm1) on node1, with a vNIC on ‘myvnet1’.
Use the following network configuration for this VM:
auto eth0
iface eth0 inet static
        address 10.0.3.100/24
Create a second Virtual Machine (vm2) on node2, with a vNIC on the same VNet ‘myvnet1’ as vm1.
Use the following network configuration for this VM:
auto eth0
iface eth0 inet static
        address 10.0.3.101/24
Then, you should be able to ping between both VMs over that network.
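For example, from vm1:

ping 10.0.3.101

The reverse direction (pinging 10.0.3.100 from vm2) should work as well.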
QinQ Setup Example
While we show plain configuration content here, almost everything should be configurable using the web interface only.
Node1: /etc/network/interfaces
auto vmbr0
iface vmbr0 inet manual
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094

#management ip on vlan100
auto vmbr0.100
iface vmbr0.100 inet static
        address 192.168.0.1/24

source /etc/network/interfaces.d/*
Node2: /etc/network/interfaces
auto vmbr0
iface vmbr0 inet manual
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094

#management ip on vlan100
auto vmbr0.100
iface vmbr0.100 inet static
        address 192.168.0.2/24

source /etc/network/interfaces.d/*
Create a QinQ zone named ‘qinqzone1’ with service VLAN 20:
id: qinqzone1
bridge: vmbr0
service vlan: 20
Create another QinQ zone named ‘qinqzone2’ with service VLAN 30:
id: qinqzone2
bridge: vmbr0
service vlan: 30
Create a VNet named ‘myvnet1’ with customer vlan-id 100 on the previously created ‘qinqzone1’ zone.
id: myvnet1
zone: qinqzone1
tag: 100
Create a second VNet named ‘myvnet2’ with customer VLAN-id 100 on the previously created ‘qinqzone2’ zone.
id: myvnet2
zone: qinqzone2
tag: 100
Apply the configuration on the main SDN web-interface panel to create VNets locally on each node.
Create a Debian-based Virtual Machine (vm1) on node1, with a vNIC on ‘myvnet1’.
Use the following network configuration for this VM:
auto eth0
iface eth0 inet static
        address 10.0.3.100/24
Create a second Virtual Machine (vm2) on node2, with a vNIC on the same VNet ‘myvnet1’ as vm1.
Use the following network configuration for this VM:
auto eth0
iface eth0 inet static
        address 10.0.3.101/24
Create a third Virtual Machine (vm3) on node1, with a vNIC on the other VNet ‘myvnet2’.
Use the following network configuration for this VM:
auto eth0
iface eth0 inet static
        address 10.0.3.102/24
Create another Virtual Machine (vm4) on node2, with a vNIC on the same VNet ‘myvnet2’ as vm3.
Use the following network configuration for this VM:
auto eth0
iface eth0 inet static
        address 10.0.3.103/24
Then, you should be able to ping between the VMs vm1 and vm2, as well as between vm3 and vm4. However, neither of the VMs vm1 or vm2 can ping the VMs vm3 or vm4, as they are in a different zone with a different service VLAN.
VXLAN Setup Example
While we show plain configuration content here, almost everything should be configurable using the web interface only.
node1: /etc/network/interfaces
auto vmbr0
iface vmbr0 inet static
        address 192.168.0.1/24
        gateway 192.168.0.254
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        mtu 1500

source /etc/network/interfaces.d/*
node2: /etc/network/interfaces
auto vmbr0
iface vmbr0 inet static
        address 192.168.0.2/24
        gateway 192.168.0.254
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        mtu 1500

source /etc/network/interfaces.d/*
node3: /etc/network/interfaces
auto vmbr0
iface vmbr0 inet static
        address 192.168.0.3/24
        gateway 192.168.0.254
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        mtu 1500

source /etc/network/interfaces.d/*
Create a VXLAN zone named ‘myvxlanzone’. Use a lower MTU, to ensure the extra 50 bytes of the VXLAN header can fit. Add all previously configured IPs from the nodes to the peer address list.
id: myvxlanzone
peers address list: 192.168.0.1,192.168.0.2,192.168.0.3
mtu: 1450
Create a VNet named ‘myvnet1’ using the VXLAN zone ‘myvxlanzone’ created previously.
id: myvnet1
zone: myvxlanzone
tag: 100000
Apply the configuration on the main SDN web-interface panel to create VNets locally on each node.
Create a Debian-based Virtual Machine (vm1) on node1, with a vNIC on ‘myvnet1’.
Use the following network configuration for this VM; note the lower MTU here.
auto eth0
iface eth0 inet static
        address 10.0.3.100/24
        mtu 1450
Create a second Virtual Machine (vm2) on node3, with a vNIC on the same VNet ‘myvnet1’ as vm1.
Use the following network configuration for this VM:
auto eth0
iface eth0 inet static
        address 10.0.3.101/24
        mtu 1450
Then, you should be able to ping between vm1 and vm2.
EVPN Setup Example
node1: /etc/network/interfaces
auto vmbr0
iface vmbr0 inet static
        address 192.168.0.1/24
        gateway 192.168.0.254
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        mtu 1500

source /etc/network/interfaces.d/*
node2: /etc/network/interfaces
auto vmbr0
iface vmbr0 inet static
        address 192.168.0.2/24
        gateway 192.168.0.254
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        mtu 1500

source /etc/network/interfaces.d/*
node3: /etc/network/interfaces
auto vmbr0
iface vmbr0 inet static
        address 192.168.0.3/24
        gateway 192.168.0.254
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        mtu 1500

source /etc/network/interfaces.d/*
Create an EVPN controller, using a private ASN number and the above node addresses as peers:
id: myevpnctl
asn: 65000
peers: 192.168.0.1,192.168.0.2,192.168.0.3
Create an EVPN zone named ‘myevpnzone’, using the previously created EVPN controller. Define node1 and node2 as exit nodes:
id: myevpnzone
vrf vxlan tag: 10000
controller: myevpnctl
mtu: 1450
vnet mac address: 32:F4:05:FE:6C:0A
exitnodes: node1,node2
Create the first VNet named ‘myvnet1’ using the EVPN zone ‘myevpnzone’.
id: myvnet1
zone: myevpnzone
tag: 11000
Create a subnet 10.0.1.0/24 with 10.0.1.1 as gateway on ‘myvnet1’:
subnet: 10.0.1.0/24
gateway: 10.0.1.1
Create a second VNet named ‘myvnet2’ using the same EVPN zone ‘myevpnzone’, with a different IPv4 CIDR network:
id: myvnet2
zone: myevpnzone
tag: 12000
Create a different subnet, 10.0.2.0/24, with 10.0.2.1 as gateway on ‘myvnet2’:
subnet: 10.0.2.0/24
gateway: 10.0.2.1
Apply the configuration on the main SDN web-interface panel to create VNets locally on each node and generate the FRR configuration.
Create a Debian-based Virtual Machine (vm1) on node1, with a vNIC on ‘myvnet1’.
Use the following network configuration for this VM:
auto eth0
iface eth0 inet static
        address 10.0.1.100/24
        gateway 10.0.1.1   #this is the IP of myvnet1
        mtu 1450
Create a second Virtual Machine (vm2) on node2, with a vNIC on the other VNet ‘myvnet2’.
Use the following network configuration for this VM:
auto eth0
iface eth0 inet static
        address 10.0.2.100/24
        gateway 10.0.2.1   #this is the IP of myvnet2
        mtu 1450
Then, you should be able to ping vm2 from vm1, and vm1 from vm2.
If you ping an external IP from vm2 on the non-gateway node3, the packet will go to the configured ‘myvnet2’ gateway, then be routed to the exit nodes (node1 or node2), and from there leave via the default gateway configured on node1 or node2.
Of course you need to add reverse routes for the 10.0.1.0/24 and 10.0.2.0/24 networks, via node1 and node2, on your external gateway, so that the public network can reply back.
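On a plain Linux external gateway, such reverse routes could be a minimal sketch like this (using node1's address 192.168.0.1 as next hop; node2's 192.168.0.2 would work just as well, and your gateway may use a different configuration mechanism entirely):

ip route add 10.0.1.0/24 via 192.168.0.1
ip route add 10.0.2.0/24 via 192.168.0.1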
If you have configured an external BGP router, the BGP-EVPN routes (10.0.1.0/24 and 10.0.2.0/24 in this example) will be announced dynamically.