Software-Defined Network

The Software-Defined Network (SDN) feature in Proxmox VE enables the
creation of virtual zones and networks (VNets). This functionality simplifies
advanced networking configurations and multitenancy setups.

Introduction

The Proxmox VE SDN allows for separation and fine-grained control of virtual guest
networks, using flexible, software-controlled configurations.
Separation is managed through zones, virtual networks (VNets), and
subnets. A zone is its own virtually separated network area. A VNet is a
virtual network that belongs to a zone. A subnet is an IP range inside a VNet.
Depending on the type of the zone, the network behaves differently and offers
specific features, advantages, and limitations.
Use cases for SDN range from an isolated private network on each individual node
to complex overlay networks across multiple PVE clusters in different locations.
After configuring a VNet in the cluster-wide datacenter SDN administration
interface, it is available as a common Linux bridge, locally on each node, to be
assigned to VMs and containers.
Support Status
History
The Proxmox VE SDN stack has been available as an experimental feature since 2019 and
has been continuously improved and tested by many developers and users.
With its integration into the web interface in Proxmox VE 6.2, a significant
milestone towards broader integration was achieved.
During the Proxmox VE 7 release cycle, numerous improvements and features were added.
Based on user feedback, it became apparent that the fundamental design choices
and their implementation were quite sound and stable. Consequently, labeling it
as 'experimental' did not do justice to the state of the SDN stack.
For Proxmox VE 8, a decision was made to lay the groundwork for full integration of
the SDN feature by elevating the management of networks and interfaces to a core
component in the Proxmox VE access control stack.
In Proxmox VE 8.1, two major milestones were achieved: firstly, DHCP integration was
added to the IP address management (IPAM) feature, and secondly, the SDN
integration is now installed by default.
Current Status
The current support status for the various layers of our SDN installation is as
follows:
Core SDN, which includes VNet management and its integration with the Proxmox VE
  stack, is fully supported.
IPAM, including DHCP management for virtual guests, is in tech preview.
Complex routing via FRRouting and controller integration are in tech preview.
Installation

SDN Core

Since Proxmox VE 8.1 the core Software-Defined Network (SDN) packages are installed
by default.
If you upgrade from an older version, you need to install the
libpve-network-perl package on every node:

apt update
apt install libpve-network-perl

Proxmox VE version 7.0 and above have the ifupdown2 package installed by
default. If you originally installed your system with an older version, you need
to explicitly install the ifupdown2 package.
After installation, you need to ensure that the following line is present at the
end of the /etc/network/interfaces configuration file on all nodes, so that
the SDN configuration gets included and activated.

source /etc/network/interfaces.d/*
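
If you manage many nodes, the line can also be appended from the shell. A
minimal sketch (it only adds the line if it is missing; run it on every node):

grep -qxF 'source /etc/network/interfaces.d/*' /etc/network/interfaces \
  || echo 'source /etc/network/interfaces.d/*' >> /etc/network/interfaces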

DHCP IPAM

The DHCP integration into the built-in PVE IP Address Management stack
currently uses dnsmasq for giving out DHCP leases. This is currently opt-in.
To use that feature you need to install the dnsmasq package on every node:

apt update
apt install dnsmasq
# disable default instance
systemctl disable --now dnsmasq

FRRouting

The Proxmox VE SDN stack uses the FRRouting project for
advanced setups. This is currently opt-in.
To use the SDN routing integration you need to install the frr-pythontools
package on all nodes:

apt update
apt install frr-pythontools

Configuration Overview

Configuration is done at the web UI at datacenter level, separated into the
following sections:

SDN: Here you get an overview of the current active SDN state, and you can
  apply all pending changes to the whole cluster.
Zones: Create and manage the virtually separated network zones
VNets: Create virtual network bridges and manage subnets

The Options category allows adding and managing additional services to be used
in your SDN setup.

Controllers: For controlling layer 3 routing in complex setups
DHCP: Define a DHCP server for a zone that automatically allocates IPs for
  guests in the IPAM and leases them to the guests via DHCP.
IPAM: Enables the use of external tools for IP address management for
  guests
DNS: Define a DNS server integration for registering
  virtual guests' hostname and IP addresses

Technology & Configuration
The Proxmox VE Software-Defined Network implementation uses standard Linux networking
as much as possible. The reason for this is that modern Linux networking
provides almost all features needed for a full SDN implementation, avoids adding
external dependencies, and reduces the overall number of components that can
break.
The Proxmox VE SDN configurations are located in /etc/pve/sdn, which is shared with
all other cluster nodes through the Proxmox VE configuration file system.
Those configurations get translated to the respective configuration formats of
the tools that manage the underlying network stack (for example ifupdown2 or
frr).
New changes are not immediately applied, but recorded as pending first. You can
then apply a set of different changes all at once in the main SDN overview
panel on the web interface. This allows rolling out various changes as a
single, atomic operation.
The SDN tracks the rolled-out state through the .running-config and .version
files located in /etc/pve/sdn.
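
Pending changes can be applied from the web interface, or programmatically
through the cluster-wide SDN endpoint. A sketch using pvesh, assuming the same
apply endpoint that the web interface's Apply button uses:

# record changes first (zones, VNets, ...), then roll them out cluster-wide
pvesh set /cluster/sdn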

Zones

A zone defines a virtually separated network. Zones are restricted to
specific nodes and assigned permissions, in order to restrict users to a certain
zone and its contained VNets.
Different technologies can be used for separation:

Simple: Isolated Bridge. A simple layer 3 routing bridge (NAT)
VLAN: Virtual LANs are the classic method of subdividing a LAN
QinQ: Stacked VLAN (formally known as IEEE 802.1ad)
VXLAN: Layer 2 VXLAN network via a UDP tunnel
EVPN (BGP EVPN): VXLAN with BGP to establish Layer 3 routing

Common Options

The following options are available for all zone types:

Nodes
   The nodes which the zone and associated VNets should be deployed on.
IPAM
   Use an IP Address Management (IPAM) tool to manage IPs in the
   zone. Optional, defaults to pve.
DNS
   DNS API server. Optional.
ReverseDNS
   Reverse DNS API server. Optional.
DNSZone
   DNS domain name. Used to register hostnames, such as
   <hostname>.<domain>. The DNS zone must already exist on the DNS server. Optional.

Simple Zones

This is the simplest plugin. It will create an isolated VNet bridge. This
bridge is not linked to a physical interface, and VM traffic is only local on
each node.
It can be used in NAT or routed setups.
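
As an alternative to the web interface, a zone can also be created on the
command line. A sketch using pvesh (the parameter names follow the zone options
described above; treat them as an assumption and check pvesh usage on your
version):

# create a simple zone, restricted to two nodes
pvesh create /cluster/sdn/zones -type simple -zone simple1 -nodes node1,node2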

VLAN Zones

The VLAN plugin uses an existing local Linux or OVS bridge to connect to the
node's physical interface. It uses VLAN tagging defined in the VNet to isolate
the network segments. This allows connectivity of VMs between different nodes.

VLAN zone configuration options:

Bridge
   The local bridge or OVS switch, already configured on each node, that
   allows node-to-node connection.
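
For example, a VLAN zone on top of an existing bridge vmbr0 could be created
via pvesh; a sketch under the same parameter-name assumptions as above:

pvesh create /cluster/sdn/zones -type vlan -zone vlanzone -bridge vmbr0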

QinQ Zones

QinQ, also known as VLAN stacking, uses multiple layers of VLAN tags for
isolation. The QinQ zone defines the outer VLAN tag (the Service VLAN),
whereas the inner VLAN tag is defined by the VNet.

Your physical network switches must support stacked VLANs for this
configuration.

QinQ zone configuration options:

Bridge
   A local, VLAN-aware bridge that is already configured on each local
   node
Service VLAN
   The main VLAN tag of this zone
Service VLAN Protocol
   Allows you to choose between an 802.1q (default) or
   802.1ad service VLAN type.
MTU
   Due to the double stacking of tags, you need 4 more bytes for QinQ VLANs.
   For example, you must reduce the MTU to 1496 if your physical interface's MTU
   is 1500.
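
A QinQ zone created via pvesh might look like the following sketch (here tag is
assumed to carry the service VLAN, matching the zone options above):

# service VLAN 20, MTU reduced by 4 bytes for the second tag
pvesh create /cluster/sdn/zones -type qinq -zone qinqzone1 -bridge vmbr0 -tag 20 -mtu 1496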

VXLAN Zones

The VXLAN plugin establishes a tunnel (overlay) on top of an existing network
(underlay). This encapsulates layer 2 Ethernet frames within layer 4 UDP
datagrams using the default destination port 4789.
You have to configure the underlay network yourself to enable UDP connectivity
between all peers.
You can, for example, create a VXLAN overlay network on top of the public
internet, appearing to the VMs as if they share the same local layer 2 network.

VXLAN on its own does not provide any encryption. When joining
multiple sites via VXLAN, make sure to establish a secure connection between
the sites, for example by using a site-to-site VPN.

VXLAN zone configuration options:

Peers Address List
   A list of IP addresses of each node in the VXLAN zone. This
   can be external nodes reachable at this IP address.
   All nodes in the cluster need to be mentioned here.
MTU
   Because VXLAN encapsulation uses 50 bytes, the MTU needs to be 50 bytes
   lower than the outgoing physical interface.
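
A VXLAN zone spanning three nodes could be sketched via pvesh like this (peers
and mtu mirror the options above):

# MTU lowered by 50 bytes for the VXLAN encapsulation
pvesh create /cluster/sdn/zones -type vxlan -zone vxzone -peers 192.168.0.1,192.168.0.2,192.168.0.3 -mtu 1450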

EVPN Zones

The EVPN zone creates a routable Layer 3 network, capable of spanning across
multiple clusters. This is achieved by establishing a VPN and utilizing BGP as
the routing protocol.
The VNet of EVPN can have an anycast IP address and/or MAC address. The bridge
IP is the same on each node, meaning a virtual guest can use this address as
gateway.
Routing can work across VNets from different zones through a VRF (Virtual
Routing and Forwarding) interface.

EVPN zone configuration options:

VRF VXLAN ID
   A VXLAN-ID used for dedicated routing interconnect between VNets.
   It must be different than the VXLAN-ID of the VNets.
Controller
   The EVPN-controller to use for this zone. (See controller plugins
   section).
VNet MAC Address
   Anycast MAC address that gets assigned to all VNets in this
   zone. Will be auto-generated if not defined.
Exit Nodes
   Nodes that shall be configured as exit gateways from the EVPN
   network, through the real network. The configured nodes will announce a
   default route in the EVPN network. Optional.
Primary Exit Node
   If you use multiple exit nodes, force traffic through this
   primary exit node, instead of load-balancing on all nodes. Optional but
   necessary if you want to use SNAT or if your upstream router doesn't support
   ECMP.
Exit Nodes Local Routing
   This is a special option if you need to reach a VM/CT
   service from an exit node. (By default, the exit nodes only allow forwarding
   traffic between real network and EVPN network). Optional.
Advertise Subnets
   Announce the full subnet in the EVPN network.
   If you have silent VMs/CTs (for example, if you have multiple IPs and the
   anycast gateway doesn't see traffic from these IPs, the IP addresses won't be
   able to be reached inside the EVPN network). Optional.
Disable ARP ND Suppression
   Don't suppress ARP or ND (Neighbor Discovery)
   packets. This is required if you use floating IPs in your VMs (IP and MAC
   addresses are being moved between systems). Optional.
Route-target Import
   Allows you to import a list of external EVPN route
   targets. Used for cross-DC or different EVPN network interconnects. Optional.
MTU
   Because VXLAN encapsulation uses 50 bytes, the MTU needs to be 50 bytes
   less than the maximal MTU of the outgoing physical interface. Optional,
   defaults to 1450.
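
An EVPN zone could be sketched via pvesh as follows (vrf-vxlan and exitnodes
are assumed parameter names for the options above; the controller must exist
first, see the controllers section below):

pvesh create /cluster/sdn/zones -type evpn -zone evpnzone -controller evpnctl \
  -vrf-vxlan 10000 -mtu 1450 -exitnodes node1,node2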

VNets

After creating a virtual network (VNet) through the SDN GUI, a local network
interface with the same name is available on each node. To connect a guest to the
VNet, assign the interface to the guest and set the IP address accordingly.
Depending on the zone, these options have different meanings and are explained
in the respective zone section in this document.

In the current state, some options may have no effect or won't work in
certain zones.

VNet configuration options:

ID
   An up to 8 character ID to identify a VNet
Comment
   More descriptive identifier. Assigned as an alias on the interface. Optional
Zone
   The associated zone for this VNet
Tag
   The unique VLAN or VXLAN ID
VLAN Aware
   Enables the vlan-aware option on the interface, enabling configuration
   in the guest.
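
A VNet can likewise be created on the command line; a sketch with pvesh:

# a VNet with VLAN tag 10 inside the zone vlanzone
pvesh create /cluster/sdn/vnets -vnet myvnet1 -zone vlanzone -tag 10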

Subnets

A subnet defines a specific IP range, described by the CIDR network address.
Each VNet can have one or more subnets.
A subnet can be used to:

Restrict the IP addresses you can define on a specific VNet
Assign routes/gateways on a VNet in layer 3 zones
Enable SNAT on a VNet in layer 3 zones
Auto assign IPs on virtual guests (VM or CT) through IPAM plugins
DNS registration through DNS plugins

If an IPAM server is associated with the subnet zone, the subnet prefix will be
automatically registered in the IPAM.

Subnet configuration options:

ID
   A CIDR network address, for example 10.0.0.0/8
Gateway
   The IP address of the network's default gateway. On layer 3 zones
   (Simple/EVPN plugins), it will be deployed on the VNet.
SNAT
   Enable Source NAT, which allows VMs from inside a
   VNet to connect to the outside network by forwarding the packets to the node's
   outgoing interface. On EVPN zones, forwarding is done on EVPN gateway-nodes.
   Optional.
DNS Zone Prefix
   Add a prefix to the domain registration, like
   <hostname>.prefix.<domain>. Optional.
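
Subnets hang off a VNet in the API; a sketch with pvesh (the type parameter is
an assumption based on the API schema):

# add a subnet with gateway and SNAT to the VNet myvnet1
pvesh create /cluster/sdn/vnets/myvnet1/subnets -type subnet \
  -subnet 10.0.1.0/24 -gateway 10.0.1.1 -snat 1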

Controllers

Some zones implement a separated control and data plane and require an external
controller to manage the VNet's control plane.
Currently, only the EVPN zone requires an external controller.

EVPN Controller

The EVPN zone requires an external controller to manage the control plane.
The EVPN controller plugin configures the Free Range Routing (frr) router.
To enable the EVPN controller, you need to install frr on every node that shall
participate in the EVPN zone.

apt install frr frr-pythontools

EVPN controller configuration options:

ASN #
   A unique BGP ASN number. It's highly recommended to use a private ASN
   number (64512 - 65534, 4200000000 - 4294967294), as otherwise you could end up
   breaking global routing by mistake.
Peers
   An IP list of all nodes that are part of the EVPN zone. (Could also be
   external nodes or route reflector servers.)
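
An EVPN controller could also be created via pvesh; a sketch with a private ASN:

pvesh create /cluster/sdn/controllers -type evpn -controller evpnctl \
  -asn 65000 -peers 192.168.0.1,192.168.0.2,192.168.0.3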

BGP Controller

The BGP controller is not used directly by a zone.
You can use it to configure FRR to manage BGP peers.
For BGP-EVPN, it can be used to define a different ASN per node, so doing EBGP.
It can also be used to export EVPN routes to an external BGP peer.

By default, for a simple full mesh EVPN, you don't need to define a BGP
controller.

BGP controller configuration options:

Node
   The node of this BGP controller
ASN #
   A unique BGP ASN number. It's highly recommended to use a private ASN
   number in the range (64512 - 65534) or (4200000000 - 4294967294), as otherwise
   you could break global routing by mistake.
Peer
   A list of peer IP addresses you want to communicate with using the
   underlying BGP network.
EBGP
   If your peer's remote-AS is different, this enables EBGP.
Loopback Interface
   Use a loopback or dummy interface as the source of the EVPN network
   (for multipath).
bgp-multipath-as-path-relax
   Allow ECMP if your peers have different ASN.

ISIS Controller

The ISIS controller is not used directly by a zone.
You can use it to configure FRR to export EVPN routes to an ISIS domain.

ISIS controller configuration options:

Node
   The node of this ISIS controller.
Domain
   A unique ISIS domain.
Network Entity Title
   A unique ISIS network address that identifies this node.
Interfaces
   A list of physical interface(s) used by ISIS.
Loopback
   Use a loopback or dummy interface as the source of the EVPN network
   (for multipath).

IPAM

IP Address Management (IPAM) tools manage the IP addresses of clients on the
network. SDN in Proxmox VE uses IPAM for example to find free IP addresses for new
guests.
A single IPAM instance can be associated with one or more zones.

PVE IPAM Plugin

The default built-in IPAM for your Proxmox VE cluster.
You can inspect the current status of the PVE IPAM Plugin via the IPAM panel in
the SDN section of the datacenter configuration. This UI can be used to create,
update and delete IP mappings. This is particularly convenient in conjunction
with the DHCP feature.
If you are using DHCP, you can use the IPAM panel to create or edit leases for
specific VMs, which enables you to change the IPs allocated via DHCP. When
editing an IP of a VM that is using DHCP, you must make sure to force the guest
to acquire a new DHCP lease. This can usually be done by reloading the network
stack of the guest or rebooting it.

NetBox IPAM Plugin

NetBox is an open-source IP
Address Management (IPAM) and datacenter infrastructure management (DCIM) tool.
To integrate NetBox with Proxmox VE SDN, create an API token in NetBox as described
here: https://docs.netbox.dev/en/stable/integrations/rest-api/#tokens

The NetBox configuration properties are:

URL
   The NetBox REST API endpoint: http://yournetbox.domain.com/api
Token
   An API access token

phpIPAM Plugin

In phpIPAM you need to create an "application" and add
an API token with admin privileges to the application.

The phpIPAM configuration properties are:

URL
   The REST-API endpoint: http://phpipam.domain.com/api/<appname>/
Token
   An API access token
Section
   An integer ID. Sections are a group of subnets in phpIPAM. Default
   installations use sectionid=1 for customers.
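
External IPAM plugins are registered under /cluster/sdn/ipams; a sketch for
NetBox (the URL and token are placeholders):

pvesh create /cluster/sdn/ipams -type netbox -ipam netbox1 \
  -url http://yournetbox.domain.com/api -token <yourapitoken>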

DNS

The DNS plugin in Proxmox VE SDN is used to define a DNS API server for registration
of hostnames and IP addresses. Currently, the only supported backend is PowerDNS.

The PowerDNS configuration properties are:

url
   The REST API endpoint of your PowerDNS server
key
   An API access key
ttl
   The default TTL for records
DHCP
The DHCP plugin in Proxmox VE SDN can be used to automatically deploy a DHCP server
for a Zone. It provides DHCP for all Subnets in a Zone that have a DHCP range
configured. Currently the only available backend plugin for DHCP is the dnsmasq
plugin.
The DHCP plugin works by allocating an IP in the IPAM plugin configured in the
Zone when adding a new network interface to a VM/CT. You can find more
information on how to configure an IPAM in the
respective section of our documentation.
When the VM starts, a mapping for the MAC address and IP gets created in the DHCP
plugin of the zone. When the network interface is removed or the VM/CT is
destroyed, the entry in the IPAM and the DHCP server is deleted as well.
Some features (adding/editing/removing IP mappings) are currently only
available when using the PVE IPAM plugin.
Configuration
You can enable automatic DHCP for a zone in the Web UI via the Zones panel, by
enabling DHCP in the advanced options of a zone.

Currently only Simple Zones have support for automatic DHCP.

After automatic DHCP has been enabled for a Zone, DHCP Ranges need to be
configured for the subnets in a Zone. In order to do that, go to the VNets panel
and select the Subnet for which you want to configure DHCP ranges. In the edit
dialogue you can configure DHCP ranges in the respective tab. Alternatively, you
can set DHCP ranges for a Subnet via the following CLI command:
pvesh set /cluster/sdn/vnets/<vnet>/subnets/<subnet> \
 -dhcp-range start-address=10.0.1.100,end-address=10.0.1.200 \
 -dhcp-range start-address=10.0.2.100,end-address=10.0.2.200
You also need to have a gateway configured for the subnet - otherwise
automatic DHCP will not work.
The DHCP plugin will then allocate IPs in the IPAM only in the configured
ranges.
Do not forget to follow the installation steps for the
dnsmasq DHCP plugin as well.
Plugins
Dnsmasq Plugin
Currently this is the only DHCP plugin and therefore the plugin that gets used
when you enable DHCP for a zone.
Installation
For installation see the DHCP IPAM section.
Configuration
The plugin will create a new systemd service for each zone that dnsmasq gets
deployed to. The name for the service is dnsmasq@<zone>. The lifecycle of this
service is managed by the DHCP plugin.
The plugin automatically generates the following configuration files in the
folder /etc/dnsmasq.d/<zone>:

00-default.conf
   This contains the default global configuration for a dnsmasq instance.
10-<zone>-<subnet_cidr>.conf
   This file configures specific options for a subnet, such as the DNS server that
   should get configured via DHCP.
10-<zone>-<subnet_cidr>.ranges.conf
   This file configures the DHCP ranges for the dnsmasq instance.
ethers
   This file contains the MAC-address and IP mappings from the IPAM plugin. In
   order to override those mappings, please use the respective IPAM plugin rather
   than editing this file, as it will get overwritten by the dnsmasq plugin.
You must not edit any of the above files, since they are managed by the DHCP
plugin. In order to customize the dnsmasq configuration you can create
additional files (e.g. 90-custom.conf) in the configuration folder - they will
not get changed by the dnsmasq DHCP plugin.
Configuration files are read in order, so you can control the order of the
configuration directives by naming your custom configuration files appropriately.
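
For example, a custom file could hand out an extra DHCP option; a sketch (the
file name and values are examples, not generated by the plugin):

# /etc/dnsmasq.d/<zone>/90-custom.conf
# push a specific DNS server to DHCP clients
dhcp-option=option:dns-server,10.0.1.53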
DHCP leases are stored in the file /var/lib/misc/dnsmasq.<zone>.leases.
When using the PVE IPAM plugin, you can update, create and delete DHCP leases.
For more information please consult the documentation of
the PVE IPAM plugin. Changing DHCP leases is
currently not supported for the other IPAM plugins.

Examples

This section presents multiple configuration examples tailored for common SDN
use cases. It aims to offer tangible implementations, providing additional
details to enhance comprehension of the available configuration options.
Simple Zone Example
Simple zone networks create an isolated network for guests on a single host to
connect to each other.
Connections between guests are possible if all guests reside on the same host,
but they cannot be reached from other nodes.

Create a simple zone named simple.
Add a VNet named vnet1.
Create a Subnet with a gateway and the SNAT option enabled.
This creates a network bridge vnet1 on the node. Assign this bridge to the
  guests that shall join the network and configure an IP address.
The network interface configuration in two VMs may look like this, which allows
them to communicate via the 10.0.1.0/24 network.
allow-hotplug ens19
iface ens19 inet static
        address 10.0.1.14/24

allow-hotplug ens19
iface ens19 inet static
        address 10.0.1.15/24
Source NAT Example
If you want to allow outgoing connections for guests in the simple network zone,
the simple zone offers a Source NAT (SNAT) option.
Starting from the configuration above, add a
Subnet to the VNet vnet1, set a gateway IP and enable the SNAT option.
Subnet: 172.16.0.0/24
Gateway: 172.16.0.1
SNAT: checked
In the guests, configure a static IP address inside the subnet's IP range.
The node itself will join this network with the Gateway IP 172.16.0.1 and
function as the NAT gateway for guests within the subnet range.
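
To verify the gateway and NAT setup from a node, something like the following
could be used (assuming an iptables-based SNAT rule; the exact rule format may
differ on your setup):

# the VNet bridge should carry the gateway IP
ip address show dev vnet1
# look for the SNAT rule covering the subnet
iptables -t nat -S POSTROUTING | grep 172.16.0.0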

VLAN Setup Example

When VMs on different nodes need to communicate through an isolated network, the
VLAN zone allows network-level isolation using VLAN tags.

Create a VLAN zone named myvlanzone:

ID: myvlanzone
Bridge: vmbr0

Create a VNet named myvnet1 with VLAN tag 10 and the previously created
myvlanzone.

ID: myvnet1
Zone: myvlanzone
Tag: 10

Apply the configuration through the main SDN panel, to create VNets locally on
each node.
Create a Debian-based virtual machine (vm1) on node1, with a vNIC on myvnet1.
Use the following network configuration for this VM:

auto eth0
iface eth0 inet static
        address 10.0.3.100/24
Create a second virtual machine (vm2) on node2, with a vNIC on the same VNet
myvnet1 as vm1.
Use the following network configuration for this VM:

auto eth0
iface eth0 inet static
        address 10.0.3.101/24

Following this, you should be able to ping between both VMs using that network.

QinQ Setup Example

This example configures two QinQ zones and adds two VMs to each zone to
demonstrate the additional layer of VLAN tags, which allows the configuration of
more isolated VLANs.
A typical use case for this configuration is a hosting provider that provides an
isolated network to customers for VM communication but isolates the VMs from
other customers.

Create a QinQ zone named qinqzone1 with service VLAN 20:

ID: qinqzone1
Bridge: vmbr0
Service VLAN: 20

Create another QinQ zone named qinqzone2 with service VLAN 30:

ID: qinqzone2
Bridge: vmbr0
Service VLAN: 30

Create a VNet named qinqvnet1 with VLAN-ID 100 on the previously created
qinqzone1 zone.

ID: qinqvnet1
Zone: qinqzone1
Tag: 100

Create a qinqvnet2 with VLAN-ID 100 on the qinqzone2 zone.

ID: qinqvnet2
Zone: qinqzone2
Tag: 100

Apply the configuration on the main SDN web interface panel to create VNets
locally on each node.
Create four Debian-based virtual machines (vm1, vm2, vm3, vm4) and add network
interfaces to vm1 and vm2 with bridge qinqvnet1, and to vm3 and vm4 with bridge
qinqvnet2.
Inside the VMs, configure the IP addresses of the interfaces, for example via
/etc/network/interfaces:

auto eth0
iface eth0 inet static
        address 10.0.3.101/24

Configure all four VMs to have IP addresses from the 10.0.3.101 to
10.0.3.104 range.
Now you should be able to ping between the VMs vm1 and vm2, as well as
between vm3 and vm4. However, neither of the VMs vm1 or vm2 can ping the VMs
vm3 or vm4, as they are in a different zone with a different service-VLAN.

VXLAN Setup Example

The example assumes a cluster with three nodes, with the node IP addresses
192.168.0.1, 192.168.0.2 and 192.168.0.3.

Create a VXLAN zone named myvxlanzone and add all IPs from the nodes to the
peer address list. Use the default MTU of 1450 or configure accordingly.

ID: myvxlanzone
Peers Address List: 192.168.0.1,192.168.0.2,192.168.0.3

Create a VNet named vxvnet1 using the VXLAN zone myvxlanzone created
previously.

ID: vxvnet1
Zone: myvxlanzone
Tag: 100000

Apply the configuration on the main SDN web interface panel to create VNets
locally on each node.
Create a Debian-based virtual machine (vm1) on node1, with a vNIC on vxvnet1.
Use the following network configuration for this VM (note the lower MTU).

auto eth0
iface eth0 inet static
        address 10.0.3.100/24
        mtu 1450

Create a second virtual machine (vm2) on node3, with a vNIC on the same VNet
vxvnet1 as vm1.
Use the following network configuration for this VM:

auto eth0
iface eth0 inet static
        address 10.0.3.101/24
        mtu 1450

Then, you should be able to ping between vm1 and vm2.

EVPN Setup Example

The example assumes a cluster with three nodes (node1, node2, node3) with IP
addresses 192.168.0.1, 192.168.0.2 and 192.168.0.3.

Create an EVPN controller, using a private ASN number and the above node
addresses as peers.

ID: myevpnctl
ASN#: 65000
Peers: 192.168.0.1,192.168.0.2,192.168.0.3

Create an EVPN zone named myevpnzone, assign the previously created
EVPN-controller and define node1 and node2 as exit nodes.

ID: myevpnzone
VRF VXLAN Tag: 10000
Controller: myevpnctl
MTU: 1450
VNet MAC Address: 32:F4:05:FE:6C:0A
Exit Nodes: node1,node2

Create the first VNet named myvnet1 using the EVPN zone myevpnzone.

ID: myvnet1
Zone: myevpnzone
Tag: 11000

Create a subnet on myvnet1:

Subnet: 10.0.1.0/24
Gateway: 10.0.1.1

Create the second VNet named myvnet2 using the same EVPN zone myevpnzone.

ID: myvnet2
Zone: myevpnzone
Tag: 12000

Create a different subnet on myvnet2:

Subnet: 10.0.2.0/24
Gateway: 10.0.2.1

Apply the configuration from the main SDN web interface panel to create VNets
locally on each node and generate the FRR configuration.
Create a Debian-based virtual machine (vm1) on node1, with a vNIC on myvnet1.
Use the following network configuration for vm1:

auto eth0
iface eth0 inet static
        address 10.0.1.100/24
        gateway 10.0.1.1
        mtu 1450

Create a second virtual machine (vm2) on node2, with a vNIC on the other VNet
myvnet2.
Use the following network configuration for vm2:

auto eth0
iface eth0 inet static
        address 10.0.2.100/24
        gateway 10.0.2.1
        mtu 1450

Now you should be able to ping vm2 from vm1, and vm1 from vm2.
If you ping an external IP from vm2 on the non-gateway node3, the packet
will go to the configured myvnet2 gateway, then will be routed to the exit
nodes and leave from there.

If you use multiple exit nodes, you should disable the rp_filter (Strict
Reverse Path Filter) option, because packets can arrive at one node but go out
from another node.

Add the following to /etc/sysctl.conf:

net.ipv4.conf.default.rp_filter=0
net.ipv4.conf.all.rp_filter=0
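
Reload the sysctl settings and, if needed, check the EVPN control plane on a
node; vtysh ships with the frr package (the exact output format depends on your
FRR version):

# reload sysctl settings without a reboot
sysctl -p
# check BGP EVPN session status in FRR
vtysh -c "show bgp l2vpn evpn summary"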

VXLAN IPSEC Encryption

To add IPSEC encryption on top of a VXLAN, this example shows how to use
strongswan.
You'll need to reduce the MTU by an additional 60 bytes for IPv4 or 80 bytes for
IPv6 to handle encryption.
So with the default real 1500 MTU, you need to use an MTU of 1370 (1370 + 80 (IPSEC)
+ 50 (VXLAN) == 1500).

Install strongswan on the host.

apt install strongswan

Add configuration to /etc/ipsec.conf. We only need to encrypt traffic from
the VXLAN UDP port 4789.

conn %default
    ike=aes256-sha1-modp1024!
    esp=aes256-sha1!
    leftfirewall=yes

conn output
    rightsubnet=%dynamic[udp/4789]
    right=%any
    type=transport
    authby=psk
    auto=route

conn input
    leftsubnet=%dynamic[udp/4789]
    type=transport
    authby=psk
    auto=route

Generate a pre-shared key with:

openssl rand -base64 128

and add the key to /etc/ipsec.secrets, so that the file contents looks like:

: PSK <generatedbase64key>

Copy the PSK and the configuration to all nodes participating in the VXLAN network.
</pvehide>
</pvehide>
<!--PVE_IMPORT_END_MARKER-->
<!--PVE_IMPORT_END_MARKER-->

Latest revision as of 17:30, 6 March 2024

The Software-Defined Network (SDN) feature in Proxmox VE enables the creation of virtual zones and networks (VNets). This functionality simplifies advanced networking configurations and multitenancy setup.

Introduction

The Proxmox VE SDN allows for separation and fine-grained control of virtual guest networks, using flexible, software-controlled configurations.

Separation is managed through zones, virtual networks (VNets), and subnets. A zone is its own virtually separated network area. A VNet is a virtual network that belongs to a zone. A subnet is an IP range inside a VNet.

Depending on the type of the zone, the network behaves differently and offers specific features, advantages, and limitations.

Use cases for SDN range from an isolated private network on each individual node to complex overlay networks across multiple PVE clusters on different locations.

After configuring an VNet in the cluster-wide datacenter SDN administration interface, it is available as a common Linux bridge, locally on each node, to be assigned to VMs and Containers.

Support Status

History

The Proxmox VE SDN stack has been available as an experimental feature since 2019 and has been continuously improved and tested by many developers and users. With its integration into the web interface in Proxmox VE 6.2, a significant milestone towards broader integration was achieved. During the Proxmox VE 7 release cycle, numerous improvements and features were added. Based on user feedback, it became apparent that the fundamental design choices and their implementation were quite sound and stable. Consequently, labeling it as ‘experimental’ did not do justice to the state of the SDN stack. For Proxmox VE 8, a decision was made to lay the groundwork for full integration of the SDN feature by elevating the management of networks and interfaces to a core component in the Proxmox VE access control stack. In Proxmox VE 8.1, two major milestones were achieved: firstly, DHCP integration was added to the IP address management (IPAM) feature, and secondly, the SDN integration is now installed by default.

Current Status

The current support status for the various layers of our SDN installation is as follows:

  • Core SDN, which includes VNet management and its integration with the Proxmox VE stack, is fully supported.

  • IPAM, including DHCP management for virtual guests, is in tech preview.

  • Complex routing via FRRouting and controller integration are in tech preview.

Installation

SDN Core

Since Proxmox VE 8.1 the core Software-Defined Network (SDN) packages are installed by default.

If you upgrade from an older version, you need to install the libpve-network-perl package on every node:

apt update
apt install libpve-network-perl
Note Proxmox VE version 7.0 and above have the ifupdown2 package installed by default. If you originally installed your system with an older version, you need to explicitly install the ifupdown2 package.

After installation, you need to ensure that the following line is present at the end of the /etc/network/interfaces configuration file on all nodes, so that the SDN configuration gets included and activated.

source /etc/network/interfaces.d/*

DHCP IPAM

The DHCP integration into the built-in PVE IP Address Management stack currently uses dnsmasq for giving out DHCP leases. This is currently opt-in.

To use that feature you need to install the dnsmasq package on every node:

apt update
apt install dnsmasq
# disable default instance
systemctl disable --now dnsmasq

FRRouting

The Proxmox VE SDN stack uses the FRRouting project for advanced setups. This is currently opt-in.

To use the SDN routing integration you need to install the frr-pythontools package on all nodes:

apt update
apt install frr-pythontools

Configuration Overview

Configuration is done at the web UI at datacenter level, separated into the following sections:

  • SDN:: Here you get an overview of the current active SDN state, and you can apply all pending changes to the whole cluster.

  • Zones: Create and manage the virtually separated network zones

  • VNets VNets: Create virtual network bridges and manage subnets

The Options category allows adding and managing additional services to be used in your SDN setup.

  • Controllers: For controlling layer 3 routing in complex setups

  • DHCP: Define a DHCP server for a zone that automatically allocates IPs for guests in the IPAM and leases them to the guests via DHCP.

  • IPAM: Enables external for IP address management for guests

  • DNS: Define a DNS server integration for registering virtual guests' hostname and IP addresses

Technology & Configuration

The Proxmox VE Software-Defined Network implementation uses standard Linux networking as much as possible. The reason for this is that modern Linux networking provides almost all needs for a feature full SDN implementation and avoids adding external dependencies and reduces the overall amount of components that can break.

The Proxmox VE SDN configurations are located in /etc/pve/sdn, which is shared with all other cluster nodes through the Proxmox VE configuration file system. Those configurations get translated to the respective configuration formats of the tools that manage the underlying network stack (for example ifupdown2 or frr).

New changes are not immediately applied but recorded as pending first. You can then apply a set of different changes all at once in the main SDN overview panel on the web interface. This system allows to roll-out various changes as single atomic one.

The SDN tracks the rolled-out state through the .running-config and .version files located in /etc/pve/sdn.

Zones

A zone defines a virtually separated network. Zones are restricted to specific nodes and assigned permissions, in order to restrict users to a certain zone and its contained VNets.

Different technologies can be used for separation:

  • Simple: Isolated Bridge. A simple layer 3 routing bridge (NAT)

  • VLAN: Virtual LANs are the classic method of subdividing a LAN

  • QinQ: Stacked VLAN (formally known as IEEE 802.1ad)

  • VXLAN: Layer 2 VXLAN network via a UDP tunnel

  • EVPN (BGP EVPN): VXLAN with BGP to establish Layer 3 routing

Common Options

The following options are available for all zone types:

Nodes

The nodes which the zone and associated VNets should be deployed on.

IPAM

Use an IP Address Management (IPAM) tool to manage IPs in the zone. Optional, defaults to pve.

DNS

DNS API server. Optional.

ReverseDNS

Reverse DNS API server. Optional.

DNSZone

DNS domain name. Used to register hostnames, such as <hostname>.<domain>. The DNS zone must already exist on the DNS server. Optional.

Simple Zones

This is the simplest plugin. It will create an isolated VNet bridge. This bridge is not linked to a physical interface, and VM traffic is only local to the node. It can be used in NAT or routed setups.

VLAN Zones

The VLAN plugin uses an existing local Linux or OVS bridge to connect to the node’s physical interface. It uses VLAN tagging defined in the VNet to isolate the network segments. This allows connectivity of VMs between different nodes.

VLAN zone configuration options:

Bridge

The local bridge or OVS switch, already configured on each node, that allows node-to-node connectivity.
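
As a sketch, a VLAN-aware Linux bridge for this purpose could be defined in /etc/network/interfaces like this (eno1 is an assumed physical interface name):

auto vmbr0
iface vmbr0 inet manual
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094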

QinQ Zones

QinQ, also known as VLAN stacking, uses multiple layers of VLAN tags for isolation. The QinQ zone defines the outer VLAN tag (the Service VLAN), whereas the inner VLAN tag is defined by the VNet.

Note Your physical network switches must support stacked VLANs for this configuration.

QinQ zone configuration options:

Bridge

A local, VLAN-aware bridge that is already configured on each local node

Service VLAN

The main VLAN tag of this zone

Service VLAN Protocol

Allows you to choose between an 802.1q (default) or 802.1ad service VLAN type.

MTU

Due to the double stacking of tags, you need 4 more bytes for QinQ VLANs. For example, you must reduce the MTU to 1496 if your physical interface MTU is 1500.

VXLAN Zones

The VXLAN plugin establishes a tunnel (overlay) on top of an existing network (underlay). This encapsulates layer 2 Ethernet frames within layer 4 UDP datagrams using the default destination port 4789.

You have to configure the underlay network yourself to enable UDP connectivity between all peers.

You can, for example, create a VXLAN overlay network on top of public internet, appearing to the VMs as if they share the same local Layer 2 network.

Warning VXLAN on its own does not provide any encryption. When joining multiple sites via VXLAN, make sure to establish a secure connection between the sites, for example by using a site-to-site VPN.

VXLAN zone configuration options:

Peers Address List

A list of IP addresses of each node in the VXLAN zone. These can also be external nodes, as long as they are reachable at the given IP address. All nodes in the cluster need to be mentioned here.

MTU

Because VXLAN encapsulation adds 50 bytes of overhead, the MTU needs to be 50 bytes lower than that of the outgoing physical interface.

EVPN Zones

The EVPN zone creates a routable Layer 3 network, capable of spanning across multiple clusters. This is achieved by establishing a VPN and utilizing BGP as the routing protocol.

The VNet of EVPN can have an anycast IP address and/or MAC address. The bridge IP is the same on each node, meaning a virtual guest can use this address as gateway.

Routing can work across VNets from different zones through a VRF (Virtual Routing and Forwarding) interface.

EVPN zone configuration options:

VRF VXLAN ID

A VXLAN-ID used for the dedicated routing interconnect between VNets. It must be different from the VXLAN-IDs of the VNets.

Controller

The EVPN-controller to use for this zone. (See controller plugins section).

VNet MAC Address

Anycast MAC address that gets assigned to all VNets in this zone. Will be auto-generated if not defined.

Exit Nodes

Nodes that shall be configured as exit gateways from the EVPN network, through the real network. The configured nodes will announce a default route in the EVPN network. Optional.

Primary Exit Node

If you use multiple exit nodes, force traffic through this primary exit node, instead of load-balancing on all nodes. Optional but necessary if you want to use SNAT or if your upstream router doesn’t support ECMP.

Exit Nodes Local Routing

This is a special option if you need to reach a VM/CT service from an exit node. (By default, the exit nodes only allow forwarding traffic between the real network and the EVPN network.) Optional.

Advertise Subnets

Announce the full subnet in the EVPN network. Use this if you have silent VMs/CTs: for example, if a guest has multiple IPs and the anycast gateway doesn't see traffic from these IPs, those addresses won't be reachable inside the EVPN network otherwise. Optional.

Disable ARP ND Suppression

Don’t suppress ARP or ND (Neighbor Discovery) packets. This is required if you use floating IPs in your VMs (IP and MAC addresses are being moved between systems). Optional.

Route-target Import

Allows you to import a list of external EVPN route targets. Used for cross-DC or different EVPN network interconnects. Optional.

MTU

Because VXLAN encapsulation uses 50 bytes, the MTU needs to be 50 bytes less than the maximal MTU of the outgoing physical interface. Optional, defaults to 1450.

VNets

After creating a virtual network (VNet) through the SDN GUI, a local network interface with the same name is available on each node. To connect a guest to the VNet, assign the interface to the guest and set the IP address accordingly.

Depending on the zone, these options have different meanings and are explained in the respective zone section in this document.

Warning In the current state, some options may have no effect or won’t work in certain zones.

VNet configuration options:

ID

An ID of up to 8 characters to identify a VNet

Comment

A more descriptive identifier. Assigned as an alias on the interface. Optional

Zone

The associated zone for this VNet

Tag

The unique VLAN or VXLAN ID

VLAN Aware

Enables the VLAN-aware option on the interface, allowing VLAN configuration within the guest.

Subnets

A subnet defines a specific IP range, described by the CIDR network address. Each VNet can have one or more subnets.

A subnet can be used to:

  • Restrict the IP addresses you can define on a specific VNet

  • Assign routes/gateways on a VNet in layer 3 zones

  • Enable SNAT on a VNet in layer 3 zones

  • Auto assign IPs on virtual guests (VM or CT) through IPAM plugins

  • DNS registration through DNS plugins

If an IPAM server is associated with the subnet zone, the subnet prefix will be automatically registered in the IPAM.

Subnet configuration options:

ID

A CIDR network address, for example 10.0.0.0/8

Gateway

The IP address of the network’s default gateway. On layer 3 zones (Simple/EVPN plugins), it will be deployed on the VNet.

SNAT

Enable Source NAT, which allows VMs inside a VNet to connect to the outside network by forwarding the packets to the node's outgoing interface. On EVPN zones, forwarding is done on the EVPN gateway nodes. Optional.

DNS Zone Prefix

Add a prefix to the domain registration, like <hostname>.prefix.<domain>. Optional.

Controllers

Some zones implement separate control and data planes, which require an external controller to manage the VNet's control plane.

Currently, only the EVPN zone requires an external controller.

EVPN Controller

The EVPN zone requires an external controller to manage the control plane. The EVPN controller plugin configures the Free Range Routing (frr) router.

To enable the EVPN controller, you need to install frr on every node that shall participate in the EVPN zone.

apt install frr frr-pythontools

EVPN controller configuration options:

ASN #

A unique BGP ASN number. It’s highly recommended to use a private ASN number (64512 – 65534, 4200000000 – 4294967294), as otherwise you could end up breaking global routing by mistake.

Peers

An IP list of all nodes that are part of the EVPN zone. (These could also be external nodes or route reflector servers.)

BGP Controller

The BGP controller is not used directly by a zone. You can use it to configure FRR to manage BGP peers.

For BGP-EVPN, it can be used to define a different ASN per node, thus using EBGP. It can also be used to export EVPN routes to an external BGP peer.

Note By default, for a simple full mesh EVPN, you don’t need to define a BGP controller.

BGP controller configuration options:

Node

The node of this BGP controller

ASN #

A unique BGP ASN number. It’s highly recommended to use a private ASN number in the range (64512 - 65534) or (4200000000 - 4294967294), as otherwise you could break global routing by mistake.

Peer

A list of peer IP addresses you want to communicate with using the underlying BGP network.

EBGP

If your peer’s remote-AS is different, this enables EBGP.

Loopback Interface

Use a loopback or dummy interface as the source of the EVPN network (for multipath).

ebgp-multihop

Increase the number of hops to reach peers, in case they are not directly connected or they use loopback.

bgp-multipath-as-path-relax

Allow ECMP if your peers have different ASNs.

ISIS Controller

The ISIS controller is not used directly by a zone. You can use it to configure FRR to export EVPN routes to an ISIS domain.

ISIS controller configuration options:

Node

The node of this ISIS controller.

Domain

A unique ISIS domain.

Network Entity Title

A unique ISIS network address that identifies this node.

Interfaces

A list of physical interface(s) used by ISIS.

Loopback

Use a loopback or dummy interface as the source of the EVPN network (for multipath).

IPAM

IP Address Management (IPAM) tools manage the IP addresses of clients on the network. SDN in Proxmox VE uses IPAM, for example, to find free IP addresses for new guests.

A single IPAM instance can be associated with one or more zones.

PVE IPAM Plugin

The default built-in IPAM for your Proxmox VE cluster.

You can inspect the current status of the PVE IPAM Plugin via the IPAM panel in the SDN section of the datacenter configuration. This UI can be used to create, update and delete IP mappings. This is particularly convenient in conjunction with the DHCP feature.

If you are using DHCP, you can use the IPAM panel to create or edit leases for specific VMs, which enables you to change the IPs allocated via DHCP. When editing the IP of a VM that is using DHCP, you must make sure to force the guest to acquire a new DHCP lease. This can usually be done by reloading the network stack of the guest or rebooting it; see the sketch below.
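
How exactly to renew the lease depends on the guest OS. Inside a Debian-based guest using dhclient, releasing and re-requesting the lease could, for example, look like this (ens18 is an assumed guest interface name):

dhclient -r ens18
dhclient ens18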

NetBox IPAM Plugin

NetBox is an open-source IP Address Management (IPAM) and datacenter infrastructure management (DCIM) tool.

To integrate NetBox with Proxmox VE SDN, create an API token in NetBox as described here: https://docs.netbox.dev/en/stable/integrations/rest-api/#tokens

The NetBox configuration properties are:

URL

The NetBox REST API endpoint: http://yournetbox.domain.com/api

Token

An API access token
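
To verify that the token works, you could, for example, query the API root directly with curl (URL and token are placeholders):

curl -H "Authorization: Token <token>" http://yournetbox.domain.com/api/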

phpIPAM Plugin

In phpIPAM you need to create an "application" and add an API token with admin privileges to the application.

The phpIPAM configuration properties are:

URL

The REST-API endpoint: http://phpipam.domain.com/api/<appname>/

Token

An API access token

Section

An integer ID. Sections are a group of subnets in phpIPAM. Default installations use sectionid=1 for customers.

DNS

The DNS plugin in Proxmox VE SDN is used to define a DNS API server for registration of your hostname and IP address. A DNS configuration is associated with one or more zones, to provide DNS registration for all the subnet IPs configured for a zone.

PowerDNS Plugin

You need to enable the web server and the API in your PowerDNS config:

api=yes
api-key=arandomgeneratedstring
webserver=yes
webserver-port=8081

The PowerDNS configuration options are:

url

The REST API endpoint: http://yourpowerdnserver.domain.com:8081/api/v1/servers/localhost

key

An API access key

ttl

The default TTL for records
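
To verify that the API is reachable with your key, you could, for example, query the configured endpoint directly (host and key are the placeholders from above):

curl -H 'X-API-Key: arandomgeneratedstring' http://yourpowerdnserver.domain.com:8081/api/v1/servers/localhost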

DHCP

The DHCP plugin in Proxmox VE SDN can be used to automatically deploy a DHCP server for a Zone. It provides DHCP for all Subnets in a Zone that have a DHCP range configured. Currently the only available backend plugin for DHCP is the dnsmasq plugin.

The DHCP plugin works by allocating an IP in the IPAM plugin configured in the Zone when adding a new network interface to a VM/CT. You can find more information on how to configure an IPAM in the respective section of our documentation.

When the VM starts, a mapping for the MAC address and IP gets created in the DHCP plugin of the zone. When the network interface is removed or the VM/CT is destroyed, the corresponding entries in the IPAM and the DHCP server are deleted as well.

Note Some features (adding/editing/removing IP mappings) are currently only available when using the PVE IPAM plugin.

Configuration

You can enable automatic DHCP for a zone in the Web UI via the Zones panel, by enabling DHCP in the advanced options of a zone.

Note Currently only Simple Zones have support for automatic DHCP

After automatic DHCP has been enabled for a Zone, DHCP Ranges need to be configured for the subnets in a Zone. In order to do that, go to the VNets panel and select the subnet for which you want to configure DHCP ranges. In the edit dialog you can configure DHCP ranges in the respective tab. Alternatively, you can set DHCP ranges for a subnet via the following CLI command:

pvesh set /cluster/sdn/vnets/<vnet>/subnets/<subnet>
 -dhcp-range start-address=10.0.1.100,end-address=10.0.1.200
 -dhcp-range start-address=10.0.2.100,end-address=10.0.2.200

You also need to have a gateway configured for the subnet - otherwise automatic DHCP will not work.

The DHCP plugin will then allocate IPs in the IPAM only in the configured ranges.

Do not forget to follow the installation steps for the dnsmasq DHCP plugin as well.

Plugins

Dnsmasq Plugin

Currently this is the only DHCP plugin and therefore the plugin that gets used when you enable DHCP for a zone.

Installation

For installation see the DHCP IPAM section.

Configuration

The plugin will create a new systemd service for each zone that dnsmasq gets deployed to. The name for the service is dnsmasq@<zone>. The lifecycle of this service is managed by the DHCP plugin.

The plugin automatically generates the following configuration files in the folder /etc/dnsmasq.d/<zone>:

00-default.conf

This contains the default global configuration for a dnsmasq instance.

10-<zone>-<subnet_cidr>.conf

This file configures specific options for a subnet, such as the DNS server that should get configured via DHCP.

10-<zone>-<subnet_cidr>.ranges.conf

This file configures the DHCP ranges for the dnsmasq instance.

ethers

This file contains the MAC-address and IP mappings from the IPAM plugin. In order to override those mappings, please use the respective IPAM plugin rather than editing this file, as it will get overwritten by the dnsmasq plugin.

You must not edit any of the above files, since they are managed by the DHCP plugin. In order to customize the dnsmasq configuration you can create additional files (e.g. 90-custom.conf) in the configuration folder - they will not get changed by the dnsmasq DHCP plugin.

Configuration files are read in order, so you can control the order of the configuration directives by naming your custom configuration files appropriately.
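
As a sketch, a hypothetical 90-custom.conf could, for example, announce an additional DHCP option to all clients:

# /etc/dnsmasq.d/<zone>/90-custom.conf
# hand out an NTP server address to all DHCP clients (the address is an example)
dhcp-option=option:ntp-server,192.168.0.10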

DHCP leases are stored in the file /var/lib/misc/dnsmasq.<zone>.leases.

When using the PVE IPAM plugin, you can update, create and delete DHCP leases. For more information please consult the documentation of the PVE IPAM plugin. Changing DHCP leases is currently not supported for the other IPAM plugins.

Examples

This section presents multiple configuration examples tailored for common SDN use cases. It aims to offer tangible implementations, providing additional details to enhance comprehension of the available configuration options.

Simple Zone Example

Simple zone networks create an isolated network for guests on a single host to connect to each other.

Tip Connections between guests are possible if all guests reside on the same host, but they cannot be reached from other nodes.
  • Create a simple zone named simple.

  • Add a VNet named vnet1.

  • Create a Subnet with a gateway and the SNAT option enabled.

  • This creates a network bridge vnet1 on the node. Assign this bridge to the guests that shall join the network and configure an IP address.

The network interface configuration in two VMs may look like this, which allows them to communicate via the 10.0.1.0/24 network.

allow-hotplug ens19
iface ens19 inet static
        address 10.0.1.14/24
allow-hotplug ens19
iface ens19 inet static
        address 10.0.1.15/24

Source NAT Example

If you want to allow outgoing connections for guests in the simple network zone, the simple zone offers a Source NAT (SNAT) option.

Starting from the configuration above, add a Subnet to the VNet vnet1, set a gateway IP and enable the SNAT option.

Subnet: 172.16.0.0/24
Gateway: 172.16.0.1
SNAT: checked

In the guests, configure a static IP address inside the subnet's IP range.

The node itself will join this network with the Gateway IP 172.16.0.1 and function as the NAT gateway for guests within the subnet range.
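
Inside a guest, such a static configuration could, for example, look like this (the address is an arbitrary free IP of the subnet):

auto eth0
iface eth0 inet static
        address 172.16.0.100/24
        gateway 172.16.0.1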

VLAN Setup Example

When VMs on different nodes need to communicate through an isolated network, the VLAN zone allows network level isolation using VLAN tags.

Create a VLAN zone named myvlanzone:

ID: myvlanzone
Bridge: vmbr0

Create a VNet named myvnet1 with VLAN tag 10, using the previously created zone myvlanzone.

ID: myvnet1
Zone: myvlanzone
Tag: 10

Apply the configuration through the main SDN panel, to create VNets locally on each node.

Create a Debian-based virtual machine (vm1) on node1, with a vNIC on myvnet1.

Use the following network configuration for this VM:

auto eth0
iface eth0 inet static
        address 10.0.3.100/24

Create a second virtual machine (vm2) on node2, with a vNIC on the same VNet myvnet1 as vm1.

Use the following network configuration for this VM:

auto eth0
iface eth0 inet static
        address 10.0.3.101/24

Following this, you should be able to ping between both VMs using that network.

QinQ Setup Example

This example configures two QinQ zones and adds two VMs to each zone to demonstrate the additional layer of VLAN tags which allows the configuration of more isolated VLANs.

A typical use case for this configuration is a hosting provider that provides an isolated network to customers for VM communication but isolates the VMs from other customers.

Create a QinQ zone named qinqzone1 with service VLAN 20

ID: qinqzone1
Bridge: vmbr0
Service VLAN: 20

Create another QinQ zone named qinqzone2 with service VLAN 30

ID: qinqzone2
Bridge: vmbr0
Service VLAN: 30

Create a VNet named qinqvnet1 with VLAN-ID 100 on the previously created zone qinqzone1.

ID: qinqvnet1
Zone: qinqzone1
Tag: 100

Create a second VNet named qinqvnet2 with VLAN-ID 100 on the zone qinqzone2.

ID: qinqvnet2
Zone: qinqzone2
Tag: 100

Apply the configuration on the main SDN web interface panel to create VNets locally on each node.

Create four Debian-based virtual machines (vm1, vm2, vm3, vm4) and add network interfaces to vm1 and vm2 with bridge qinqvnet1, and to vm3 and vm4 with bridge qinqvnet2.

Inside the VM, configure the IP addresses of the interfaces, for example via /etc/network/interfaces:

auto eth0
iface eth0 inet static
        address 10.0.3.101/24

Configure all four VMs to have IP addresses from the 10.0.3.101 to 10.0.3.104 range.

Now you should be able to ping between the VMs vm1 and vm2, as well as between vm3 and vm4. However, neither vm1 nor vm2 can ping vm3 or vm4, as they are in different zones with different service VLANs.

VXLAN Setup Example

The example assumes a cluster with three nodes, with the node IP addresses 192.168.0.1, 192.168.0.2 and 192.168.0.3.

Create a VXLAN zone named myvxlanzone and add all IPs from the nodes to the peer address list. Use the default MTU of 1450 or configure accordingly.

ID: myvxlanzone
Peers Address List: 192.168.0.1,192.168.0.2,192.168.0.3

Create a VNet named vxvnet1 using the VXLAN zone myvxlanzone created previously.

ID: vxvnet1
Zone: myvxlanzone
Tag: 100000

Apply the configuration on the main SDN web interface panel to create VNets locally on each node.

Create a Debian-based virtual machine (vm1) on node1, with a vNIC on vxvnet1.

Use the following network configuration for this VM (note the lower MTU).

auto eth0
iface eth0 inet static
        address 10.0.3.100/24
        mtu 1450

Create a second virtual machine (vm2) on node3, with a vNIC on the same VNet vxvnet1 as vm1.

Use the following network configuration for this VM:

auto eth0
iface eth0 inet static
        address 10.0.3.101/24
        mtu 1450

Then, you should be able to ping between vm1 and vm2.

EVPN Setup Example

The example assumes a cluster with three nodes (node1, node2, node3) with IP addresses 192.168.0.1, 192.168.0.2 and 192.168.0.3.

Create an EVPN controller, using a private ASN number and the above node addresses as peers.

ID: myevpnctl
ASN#: 65000
Peers: 192.168.0.1,192.168.0.2,192.168.0.3

Create an EVPN zone named myevpnzone, assign the previously created EVPN-controller and define node1 and node2 as exit nodes.

ID: myevpnzone
VRF VXLAN Tag: 10000
Controller: myevpnctl
MTU: 1450
VNet MAC Address: 32:F4:05:FE:6C:0A
Exit Nodes: node1,node2

Create the first VNet named myvnet1 using the EVPN zone myevpnzone.

ID: myvnet1
Zone: myevpnzone
Tag: 11000

Create a subnet on myvnet1:

Subnet: 10.0.1.0/24
Gateway: 10.0.1.1

Create the second VNet named myvnet2 using the same EVPN zone myevpnzone.

ID: myvnet2
Zone: myevpnzone
Tag: 12000

Create a different subnet on myvnet2:

Subnet: 10.0.2.0/24
Gateway: 10.0.2.1

Apply the configuration from the main SDN web interface panel to create VNets locally on each node and generate the FRR configuration.

Create a Debian-based virtual machine (vm1) on node1, with a vNIC on myvnet1.

Use the following network configuration for vm1:

auto eth0
iface eth0 inet static
        address 10.0.1.100/24
        gateway 10.0.1.1
        mtu 1450

Create a second virtual machine (vm2) on node2, with a vNIC on the other VNet myvnet2.

Use the following network configuration for vm2:

auto eth0
iface eth0 inet static
        address 10.0.2.100/24
        gateway 10.0.2.1
        mtu 1450

Now you should be able to ping vm2 from vm1, and vm1 from vm2.

If you ping an external IP from vm2 on the non-gateway node3, the packet will go to the configured myvnet2 gateway, then will be routed to the exit nodes (node1 or node2) and from there it will leave those nodes over the default gateway configured on node1 or node2.

Note You need to add reverse routes for the 10.0.1.0/24 and 10.0.2.0/24 networks to node1 and node2 on your external gateway, so that the public network can reply back.
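
On a Linux-based external gateway, such reverse routes could, for example, be added like this (assuming return traffic should flow via node1 at 192.168.0.1):

ip route add 10.0.1.0/24 via 192.168.0.1
ip route add 10.0.2.0/24 via 192.168.0.1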

If you have configured an external BGP router, the BGP-EVPN routes (10.0.1.0/24 and 10.0.2.0/24 in this example) will be announced dynamically.

Notes

Multiple EVPN Exit Nodes

If you have multiple gateway nodes, you should disable the rp_filter (Strict Reverse Path Filter) option, because packets can arrive at one node but go out from another node.

Add the following to /etc/sysctl.conf:

net.ipv4.conf.default.rp_filter=0
net.ipv4.conf.all.rp_filter=0
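
The new values can then, for example, be applied without a reboot:

sysctl -p /etc/sysctl.conf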

VXLAN IPSEC Encryption

To add IPSEC encryption on top of a VXLAN, this example shows how to use strongswan.

You'll need to reduce the MTU by an additional 60 bytes for IPv4 or 80 bytes for IPv6 to handle encryption.

So, with a default real MTU of 1500, you need to use an MTU of 1370 (1370 + 80 (IPsec) + 50 (VXLAN) == 1500).

Install strongswan on the host.

apt install strongswan

Add configuration to /etc/ipsec.conf. We only need to encrypt traffic from the VXLAN UDP port 4789.

conn %default
    ike=aes256-sha1-modp1024!  # the fastest, but reasonably secure cipher on modern HW
    esp=aes256-sha1!
    leftfirewall=yes           # this is necessary when using Proxmox VE firewall rules

conn output
    rightsubnet=%dynamic[udp/4789]
    right=%any
    type=transport
    authby=psk
    auto=route

conn input
    leftsubnet=%dynamic[udp/4789]
    type=transport
    authby=psk
    auto=route

Generate a pre-shared key with:

openssl rand -base64 128

and add the key to /etc/ipsec.secrets, so that the file content looks like:

: PSK <generatedbase64key>

Copy the PSK and the configuration to all nodes participating in the VXLAN network.
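
A minimal sketch for distributing both files from one node and reloading strongswan afterwards (node2 is an assumed peer hostname; repeat for every other node):

scp /etc/ipsec.conf /etc/ipsec.secrets root@node2:/etc/
ssh root@node2 ipsec restart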