[pve-devel] rfc : /etc/pve/networks.cfg implementation

Alexandre DERUMIER aderumier at odiso.com
Thu Feb 28 09:20:09 CET 2019


>>Or just activate when needed (at VM start)? But yes, a separate config is preferable. 

Another question is how we update the config (change the multicast address, add a new unicast node, ....)
when the VMs are already running.


----- Original Message -----
From: "aderumier" <aderumier at odiso.com>
To: "dietmar" <dietmar at proxmox.com>
Cc: "pve-devel" <pve-devel at pve.proxmox.com>
Sent: Thursday, February 28, 2019 08:57:44
Subject: Re: [pve-devel] rfc : /etc/pve/networks.cfg implementation

>>Not sure if we need those extra switch settings? 

Yes, indeed, I think something like vnet[0-4096] could be better. 

>>Can't we combine 
>>switch and transportzones? i.e. 
>> 
>>vnet1: vxlanfrr 
>> name: zone4 # not really required 
>> transportzone zone4 
>> ... 
>> l3vni: id 
>> l3vnihwaddres: macaddress 
>> allowedid: 1-16millions 


It's more to avoid redoing the whole config each time. 

For example, you define 1000 vnets with unicast vxlan, each with the option: 

vxlan_remoteip proxmoxip1,proxmoxip2,proxmoxip3,.... 


Then one day you want to add a new node (it could be an external Proxmox cluster too), and you need to edit all 1000 vnets. 

Same with multicast: if you want to change the multicast address, or any other attribute. 

Also, some attributes need to be common, like a vrf (you can't have one vrf applied on the real interface and different vrfs on different vxlans). 

(VMware NSX does this too, creating logical/distributed switches on top of a transport zone.) 
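
So with the shared attributes kept at the transportzone level, adding a node is a single edit. Just a sketch (the vnet names and ids here are only illustrative): 

#transportzones 

vxlanunicast: zone3 
vxlan_remoteip proxmoxip1,proxmoxip2,proxmoxip3,....   # add the new node ip here, in one single place 
allowedid: 1-16millions 

#networks 

switch: vnet1 
transportzone zone3 
networkid: 1000 

switch: vnet2 
transportzone zone3 
networkid: 1001 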

> 3) 
> 
> After that, I think we need a new daemon to generate /etc/network/interfaces locally 
> on each node and do an ifupdown2 reload on change, .... Maybe we need to manage that in a separate config? /etc/network/interfaces.d/networks.cfg? 

>>Or just activate when needed (at VM start)? But yes, a separate config is preferable. 

Yes, I was thinking about this. 
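
For a simple vnet, the generated per-node config would just be a vxlan + bridge pair, something like this (only a sketch assuming ifupdown2 vxlan options; the id 1000 and the interface names are illustrative): 

auto vxlanvnet1 
iface vxlanvnet1 
    vxlan-id 1000 
    vxlan-svcnodeip 225.20.1.1 

auto vmbrvnet1 
iface vmbrvnet1 
    bridge-ports vxlanvnet1 
    bridge-stp off 
    bridge-fd 0 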

The only problematic case is frr with asymmetric routing, because if you have: 

host1: vxlan1 - vm1 
host2: vxlan2 - vm2 

when vm1 talks to vm2, traffic is routed correctly, but the reply from vm2 needs to go out through vxlan1 directly (so vxlan1 needs to be active on host2 too). 
(I'm not a big fan of asymmetric routing, so we could implement only symmetric routing with frr, where the l3vni does the routing.) 
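
With symmetric routing, each node would only need its local vnets plus one extra l3vni vxlan enslaved in the vrf, roughly like this (again only a sketch in ifupdown2 syntax; the vrf/l3vni names and id 4000 are illustrative): 

auto vrfzone4 
iface vrfzone4 
    vrf-table auto 

auto vxlanl3vni 
iface vxlanl3vni 
    vxlan-id 4000 
    bridge-learning off 

auto vmbrl3vni 
iface vmbrl3vni 
    bridge-ports vxlanl3vni 
    hwaddress 44:39:39:FF:40:10 
    vrf vrfzone4 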

Another thing: if one day we want to implement DHCP, I don't know whether it would be easier to have all networks always up? 





----- Original Message ----- 
From: "dietmar" <dietmar at proxmox.com> 
To: "aderumier" <aderumier at odiso.com> 
Cc: "pve-devel" <pve-devel at pve.proxmox.com> 
Sent: Thursday, February 28, 2019 06:54:11 
Subject: Re: rfc : /etc/pve/networks.cfg implementation 

> I'll work next week on /etc/pve/networks.cfg, 

great! 

> I have taken time to polish the config file, and I would like to have some feedback 
> before coding. 
> 
> 
> 1) add transportzone in /etc/network/interfaces, 
> only on physical interfaces (eth/bond), not tagged interfaces. 
> This is only a hint, not used by ifupdown. 
> 
> A transportzone can be set on only one interface. 
> 
> 
> 
> /etc/network/interfaces 
> ----------------------- 
> 
> auto eth|bond 
> transportzone zone1 

looks reasonable 


> 2) add a new /etc/pve/networks.cfg configuration, with 2 main sections 
> 
> 
> /etc/pve/networks.cfg 
> 
> a) the transportzones (with plugins), 
> where we can define whether a transport zone is a vlan, vxlan, ... with different attributes specific to the plugin. 
> 
> some examples: 
> 
> #transportzones 
> 
> 
> vlan: zone1 
> vlan-aware 1|0 (qinq) 
> allowedid: 1 - 4096 
> 
> 
> vxlanmulticast: zone2 
> vxlan-svcnodeip 225.20.1.1 
> allowedid: 1-16millions 
> 
> 
> vxlanunicast: zone3 
> vxlan_remoteip proxmoxip1,proxmoxip2,proxmoxip3,.... 
> allowedid: 1-16millions 
> 
> vxlanfrr: zone4 
> vrf: 
> l3vni: id 
> l3vnihwaddres: macaddress 
> allowedid: 1-16millions 
> 
> 
> b) the networks/bridges/switches, 
> where the attributes are common. 
> (basically, this is a bridge config with a vlan/vxlan id) 
> 
> 
> #network 
> 
> switch : mynetwork1 
> transportzone zone1 
> networkid: (vlan/vxlan-id) 
> 
> 
> switch: mynetwork2 
> transportzone zone4 
> networkid: (vlan/vxlan-id) 
> address: cidr 
> hwaddress: 44:39:39:FF:40:10 
> 

Not sure if we need those extra switch settings? Can't we combine 
switch and transportzones? i.e. 

vnet1: vxlanfrr 
name: zone4 # not really required 
transportzone zone4 
... 
l3vni: id 
l3vnihwaddres: macaddress 
allowedid: 1-16millions 


What was the reason for splitting this into zones and switches? 

> 3) 
> 
> After that, I think we need a new daemon to generate /etc/network/interfaces locally 
> on each node and do an ifupdown2 reload on change, .... Maybe we need to manage that in a separate config? /etc/network/interfaces.d/networks.cfg? 

Or just activate when needed (at VM start)? But yes, a separate config is preferable. 

> (or maybe reuse pvestatd?) 
> 
> 
> 
> 
> I'm not sure about the generated interface names, as we have a 16-character limit: 
> 
> auto vxlanmynetwork1 
> auto vmbrmynetwork1 

Yes, this is a problem ... (use vnetX instead) 

> maybe use an id per switch, to be able to do something like: 
> 
> /etc/pve/networks.cfg 
> 
> switch : vnet1 
> name mynetwork1 
> 
> /etc/network/interfaces 
> 
> auto vxlanvnet1 
> auto vmbrvnet1 
> 
> (we can't use the vxlan id in the name, as there are 16 million possible ids) 

_______________________________________________ 
pve-devel mailing list 
pve-devel at pve.proxmox.com 
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel 



