[pve-devel] [PATCH pve-docs] Update documentation for Systemd Network Interface Names

Wolfgang Link w.link at proxmox.com
Thu Jul 6 14:54:29 CEST 2017


---
 pve-network.adoc | 72 +++++++++++++++++++++++++++++++++++++++++---------------
 pvecm.adoc       | 12 +++++-----
 qm.adoc          |  2 +-
 3 files changed, 60 insertions(+), 26 deletions(-)

diff --git a/pve-network.adoc b/pve-network.adoc
index 45f6424..102bb8e 100644
--- a/pve-network.adoc
+++ b/pve-network.adoc
@@ -39,37 +39,71 @@ Naming Conventions
 
 We currently use the following naming conventions for device names:
 
-* Ethernet devices: eth[N], where 0 ≤ N (`eth0`, `eth1`, ...)
+* New Ethernet devices: en*, following the systemd network interface naming scheme.
+
+* Legacy Ethernet devices: eth[N], where 0 ≤ N (`eth0`, `eth1`, ...)
+They are still available when Proxmox VE has been upgraded from an earlier version.
 
 * Bridge names: vmbr[N], where 0 ≤ N ≤ 4094 (`vmbr0` - `vmbr4094`)
 
 * Bonds: bond[N], where 0 ≤ N (`bond0`, `bond1`, ...)
 
 * VLANs: Simply add the VLAN number to the device name,
-  separated by a period (`eth0.50`, `bond1.30`)
+  separated by a period (`eno1.50`, `bond1.30`)
 
 This makes it easier to debug networks problems, because the device
 names implies the device type.
 
+Systemd Network Interface Names
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Two-character prefixes based on the type of interface:
+
+* en — Ethernet
+
+* sl — serial line IP (slip)
+
+* wl — wlan
+
+* ww — wwan
+
+The next characters depend on the device driver and on which schema matches first.
+
+* o<index>[n<phys_port_name>|d<dev_port>] — on-board devices
+
+* s<slot>[f<function>][n<phys_port_name>|d<dev_port>] — devices by hotplug id
+
+* [P<domain>]p<bus>s<slot>[f<function>][n<phys_port_name>|d<dev_port>] — devices by bus id
+
+* x<MAC> — devices by MAC address
+
+The most common patterns are:
+
+* eno1 — the first on-board NIC
+
+* enp3s0f1 — the NIC on PCI bus 3, slot 0, function 1.
+
+For more information see link:https://github.com/systemd/systemd/blob/master/src/udev/udev-builtin-net_id.c#L20[Systemd Network Interface Names]
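The two common patterns above can be sketched as a small parser. This is illustrative only and simplified — the real udev `net_id` builtin linked above covers more schemas and edge cases:

```python
import re

# Simplified sketch of the two most common predictable-name schemas:
#   eno<index>                       - on-board device
#   enp<bus>s<slot>[f<function>]     - device by PCI bus id
ONBOARD = re.compile(r"^eno(?P<index>\d+)$")
PCI = re.compile(r"^enp(?P<bus>\d+)s(?P<slot>\d+)(?:f(?P<function>\d+))?$")

def decode(name):
    """Return the components of a predictable interface name, or None."""
    m = ONBOARD.match(name)
    if m:
        return {"schema": "onboard", "index": int(m.group("index"))}
    m = PCI.match(name)
    if m:
        return {"schema": "pci",
                "bus": int(m.group("bus")),
                "slot": int(m.group("slot")),
                "function": int(m.group("function") or 0)}
    return None

print(decode("eno1"))      # {'schema': 'onboard', 'index': 1}
print(decode("enp3s0f1"))  # {'schema': 'pci', 'bus': 3, 'slot': 0, 'function': 1}
```

Running it against the examples from the list shows the board index for `eno1` and the bus/slot/function triple for `enp3s0f1`; legacy names such as `eth0` match neither schema.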
+
 Default Configuration using a Bridge
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 The installation program creates a single bridge named `vmbr0`, which
-is connected to the first ethernet card `eth0`. The corresponding
+is connected to the first Ethernet card `eno1`. The corresponding
 configuration in `/etc/network/interfaces` looks like this:
 
 ----
 auto lo
 iface lo inet loopback
 
-iface eth0 inet manual
+iface eno1 inet manual
 
 auto vmbr0
 iface vmbr0 inet static
         address 192.168.10.2
         netmask 255.255.255.0
         gateway 192.168.10.1
-        bridge_ports eth0
+        bridge_ports eno1
         bridge_stp off
         bridge_fd 0
 ----
@@ -104,12 +138,12 @@ situations:
 auto lo
 iface lo inet loopback
 
-auto eth0
-iface eth0 inet static
+auto eno1
+iface eno1 inet static
         address  192.168.10.2
         netmask  255.255.255.0
         gateway  192.168.10.1
-        post-up echo 1 > /proc/sys/net/ipv4/conf/eth0/proxy_arp
+        post-up echo 1 > /proc/sys/net/ipv4/conf/eno1/proxy_arp
 
 
 auto vmbr0
@@ -132,9 +166,9 @@ host's true IP, and masquerade the traffic using NAT:
 auto lo
 iface lo inet loopback
 
-auto eth0
+auto eno1
 #real IP adress 
-iface eth0 inet static
+iface eno1 inet static
         address  192.168.10.2
         netmask  255.255.255.0
         gateway  192.168.10.1
@@ -149,8 +183,8 @@ iface vmbr0 inet static
         bridge_fd 0
 
         post-up echo 1 > /proc/sys/net/ipv4/ip_forward
-        post-up   iptables -t nat -A POSTROUTING -s '10.10.10.0/24' -o eth0 -j MASQUERADE
-        post-down iptables -t nat -D POSTROUTING -s '10.10.10.0/24' -o eth0 -j MASQUERADE
+        post-up   iptables -t nat -A POSTROUTING -s '10.10.10.0/24' -o eno1 -j MASQUERADE
+        post-down iptables -t nat -D POSTROUTING -s '10.10.10.0/24' -o eno1 -j MASQUERADE
 ----
 
 
@@ -230,13 +264,13 @@ network will be fault-tolerant.
 auto lo
 iface lo inet loopback
 
-iface eth1 inet manual
+iface eno1 inet manual
 
-iface eth2 inet manual
+iface eno2 inet manual
 
 auto bond0
 iface bond0 inet static
-      slaves eth1 eth2
+      slaves eno1 eno2
       address  192.168.1.2
       netmask  255.255.255.0
       bond_miimon 100
@@ -248,7 +282,7 @@ iface vmbr0 inet static
         address  10.10.10.2
         netmask  255.255.255.0
 	gateway  10.10.10.1
-        bridge_ports eth0
+        bridge_ports eno1
         bridge_stp off
         bridge_fd 0
 
@@ -263,13 +297,13 @@ This can be used to make the guest network fault-tolerant.
 auto lo
 iface lo inet loopback
 
-iface eth1 inet manual
+iface eno1 inet manual
 
-iface eth2 inet manual
+iface eno2 inet manual
 
 auto bond0
 iface bond0 inet maunal
-      slaves eth1 eth2
+      slaves eno1 eno2
       bond_miimon 100
       bond_mode 802.3ad
       bond_xmit_hash_policy layer2+3
diff --git a/pvecm.adoc b/pvecm.adoc
index 7cbca8b..4414d20 100644
--- a/pvecm.adoc
+++ b/pvecm.adoc
@@ -928,7 +928,7 @@ dedicated network for migration.
 A network configuration for such a setup might look as follows:
 
 ----
-iface eth0 inet manual
+iface eno1 inet manual
 
 # public network
 auto vmbr0
@@ -936,19 +936,19 @@ iface vmbr0 inet static
     address 192.X.Y.57
     netmask 255.255.250.0
     gateway 192.X.Y.1
-    bridge_ports eth0
+    bridge_ports eno1
     bridge_stp off
     bridge_fd 0
 
 # cluster network
-auto eth1
-iface eth1 inet static
+auto eno2
+iface eno2 inet static
     address  10.1.1.1
     netmask  255.255.255.0
 
 # fast network
-auto eth2
-iface eth2 inet static
+auto eno3
+iface eno3 inet static
     address  10.1.2.1
     netmask  255.255.255.0
 ----
diff --git a/qm.adoc b/qm.adoc
index 02a4555..e8da4e1 100644
--- a/qm.adoc
+++ b/qm.adoc
@@ -390,7 +390,7 @@ to the number of Total Cores of your guest. You also need to set in
 the VM the number of multi-purpose channels on each VirtIO NIC with the ethtool
 command:
 
-`ethtool -L eth0 combined X`
+`ethtool -L ens1 combined X`
 
 where X is the number of the number of vcpus of the VM.
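As a sketch of how X could be derived automatically inside the guest (the interface name `ens18` is an example, not taken from the patch; check `ip -o link` for the actual name):

```python
import os

# Build the ethtool command from the guest's vCPU count.
# NOTE: "ens18" is a hypothetical interface name used for illustration.
vcpus = os.cpu_count()
cmd = ["ethtool", "-L", "ens18", "combined", str(vcpus)]

# Print for inspection; pass cmd to subprocess.run() (as root) to apply it.
print(" ".join(cmd))
```

The printed command matches the form shown above, with X set to the number of vCPUs the guest sees.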
 
-- 
2.11.0
