[PVE-User] Dedicated Migration Network?

Phil Kauffman kauffman at cs.uchicago.edu
Tue Jul 12 23:10:33 CEST 2016


Is setting up a dedicated migration network configurable?
If so, how do I get it working?

 The closest thing I have found online is this hack:
https://forum.proxmox.com/threads/how-to-separate-migration-network-dirty-fix.21538/

Even after specifically setting up my cluster like so:
   pvecm create bravo \
       -bindnet0_addr 192.168.2.164 \
       -ring0_addr bravo-vmnode2-coro.example.com

   pvecm add bravo-vmnode2-coro.example.com \
       -ring0_addr bravo-vmnode1-coro.example.com

The 192.168.2.x network is a VLAN on a bonded 10G interface.
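
If those flags took effect, I would expect the nodelist in
/etc/pve/corosync.conf to carry the -coro names as ring0_addr, roughly
like this (a sketch of what I expect to find, not verified output):

   nodelist {
     node {
       nodeid: 1
       quorum_votes: 1
       ring0_addr: bravo-vmnode2-coro.example.com
     }
     node {
       nodeid: 2
       quorum_votes: 1
       ring0_addr: bravo-vmnode1-coro.example.com
     }
   }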


After creating the cluster, it seems that Proxmox ignores my request and 
does the following:

# cat /etc/pve/.members
{
"nodename": "bravo-vmnode1",
"version": 4,
"cluster": { "name": "bravo", "version": 2, "nodes": 2, "quorate": 1 },
"nodelist": {
   "bravo-vmnode1": { "id": 2, "online": 1, "ip": "10.135.164.162"},
   "bravo-vmnode2": { "id": 1, "online": 1, "ip": "10.135.164.164"}
   }
}

The 10.135 network is only 1G and is meant to be used only for access 
to the web interface. :(
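
Watching the interfaces during a test migration should confirm where
the traffic actually goes. Something along these lines, run on the
target node while migrating a test VM, ought to do it:

   iftop -i eth0        # the 1G / 10.135 interface
   iftop -i bond0.102   # the 10G / 192.168.2 VLAN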

My DNS is set up like so:
bravo-vmnode1.cs.uchicago.edu has address 10.135.164.162
bravo-vmnode1-coro.cs.uchicago.edu has address 192.168.2.162
bravo-vmnode1-priv.cs.uchicago.edu has address 192.168.1.162

Each network is dedicated to a particular task. The 192 networks are 10G.
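
My guess is that the IPs in /etc/pve/.members simply come from
resolving the plain node names rather than the -coro names I gave as
ring0_addr, which lands on the 1G addresses, e.g. (illustration based
on the DNS records above):

   # getent hosts bravo-vmnode1.cs.uchicago.edu
   10.135.164.162  bravo-vmnode1.cs.uchicago.edu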


I can easily apply a patch like the one suggested in the link above 
using my config manager... but I am hoping I just missed some documentation.
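
The only other workaround I can think of (completely untested, and it
assumes the migration simply connects to whatever IP the peer's node
name resolves to on the source node) would be to override the node
names in /etc/hosts on each node so they point at the 10G addresses,
e.g. on bravo-vmnode1:

   # /etc/hosts on bravo-vmnode1 -- untested sketch
   192.168.2.164   bravo-vmnode2.cs.uchicago.edu bravo-vmnode2

I would rather not do that either if there is a supported knob for this.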

Also attached is a sample of a Proxmox node's /etc/network/interfaces 
file.

Any insight or links to documentation would be very helpful.

Cheers,

Phil
-------------- next part --------------
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
  address 10.135.164.162
  netmask 255.255.255.0
  gateway 10.135.164.1
  dns-nameservers <redacted nameserver IP>
  dns-domain example.com

auto eth1
iface eth1 inet manual
  bond-master bond0
  post-up ip link set dev eth1 mtu 9000

auto eth3
iface eth3 inet manual
  bond-master bond0
  post-up ip link set dev eth3 mtu 9000

auto bond0
iface bond0 inet manual
  bond-slaves none
  bond-miimon 100
  bond-mode 6
  post-up ip link set dev bond0 mtu 9000

auto bond0.101
iface bond0.101 inet static
  address 192.168.1.162
  netmask 255.255.255.0
  vlan-raw-device bond0

auto bond0.102
iface bond0.102 inet static
  address 192.168.2.162
  netmask 255.255.255.0
  vlan-raw-device bond0

auto bond0.164
iface bond0.164 inet manual
  vlan-raw-device bond0

auto bond0.24
iface bond0.24 inet manual
  vlan-raw-device bond0
  
auto vmbr0
iface vmbr0 inet manual
  bridge_ports bond0
  bridge_stp off
  bridge_fd 0

auto vmbr1
iface vmbr1 inet manual
  bridge_ports none
  bridge_stp off
  bridge_fd 0

auto vmbr24
iface vmbr24 inet manual
  bridge_ports bond0.24
  bridge_stp off
  bridge_fd 0

auto vmbr164
iface vmbr164 inet manual
  bridge_ports bond0.164
  bridge_stp off
  bridge_fd 0

