Open vSwitch

Open vSwitch is an alternative to the native Linux bridges, bonds, and vlan interfaces. It is designed with virtualized environments in mind and is recommended because it simplifies deployment in such environments.

Installation

  • Install the Open vSwitch packages
apt-get install openvswitch-switch
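
Once the package is installed, it can be useful to confirm that the Open vSwitch daemon is running and look at the (initially empty) switch configuration. The exact output depends on your Open vSwitch version:

ovs-vsctl --version
ovs-vsctl show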

Configuration

Official reference here, though a bit bare: http://git.openvswitch.org/cgi-bin/gitweb.cgi?p=openvswitch;a=blob;f=debian/openvswitch-switch.README.Debian;hb=HEAD

Overview

Open vSwitch and Linux bonding, bridging, or vlan interfaces MUST NOT be mixed. For instance, do not attempt to add a vlan to an OVS bond, or add a Linux bond to an OVSBridge, or vice versa. Open vSwitch is specifically tailored to function within virtualized environments; there is no reason to mix in the native Linux functionality.

Bridges

A bridge is another term for a switch. It directs traffic to the appropriate interface based on MAC address. Open vSwitch bridges should contain raw ethernet devices, along with virtual interfaces such as OVSBonds or OVSIntPorts. These bridges can carry multiple vlans and can be broken out into 'internal ports' to be used as vlan interfaces on the host.

It is recommended that the bridge be treated as a trunk port with no untagged vlans; this means the bridge itself will never have an IP address. If you need to work with untagged traffic coming into the bridge, tag it (assign it to a vlan) on the originating interface before it enters the bridge (you can assign an IP address to the bridge directly for that untagged traffic, but it is not recommended). You can split out your tagged VLANs using virtual interfaces (OVSIntPort) if you need access to those vlans from the local host. Proxmox assigns each guest VM a tap interface associated with a vlan, so you do NOT need one bridge per vlan (as classic Linux networking requires). You should think of your OVSBridge much like a physical hardware switch.

When configuring a bridge in /etc/network/interfaces, prefix the bridge interface definition with allow-ovs $iface. For instance, a simple bridge containing a single interface would look like:

auto vmbr0
allow-ovs vmbr0
iface vmbr0 inet manual
  ovs_type OVSBridge
  ovs_ports eth0

Remember, if you want to split out vlans with IPs for use on the local host, you should use OVSIntPorts; see the sections that follow.

Any interface (physical, OVSBond, or OVSIntPort) associated with a bridge should have its definition prefixed with allow-$brname $iface, e.g. allow-vmbr0 bond0.

NOTE: All interfaces that are part of the bridge must be listed under ovs_ports, even if a port definition (e.g. an OVSIntPort) already cross-references the bridge!
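
As a minimal sketch of this cross-referencing rule (a condensed version of the examples further below), note that vlan50 must appear both in the bridge's ovs_ports line and as its own stanza pointing back at the bridge via ovs_bridge:

auto vmbr0
allow-ovs vmbr0
iface vmbr0 inet manual
  ovs_type OVSBridge
  ovs_ports eth0 vlan50

allow-vmbr0 vlan50
iface vlan50 inet manual
  ovs_type OVSIntPort
  ovs_bridge vmbr0
  ovs_options tag=50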

Bonds

Bonds are used to join multiple network interfaces together to act as a single unit. Bonds must refer to raw ethernet devices (e.g. eth0, eth1).

When configuring a bond, it is recommended to use LACP (aka 802.3ad) for link aggregation; this requires switch support on the other end. A simple bond using eth0 and eth1 that will be part of the vmbr0 bridge might look like this:

allow-vmbr0 bond0
iface bond0 inet manual
  ovs_bridge vmbr0
  ovs_type OVSBond
  ovs_bonds eth0 eth1
  ovs_options bond_mode=balance-tcp lacp=active other_config:lacp-time=fast

NOTE: The interfaces that are part of a bond do not need to have their own configuration section.
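
Once the bond is up, you can check the bond status and LACP negotiation from the host; the output is informational and will vary with your switch configuration:

ovs-appctl bond/show bond0
ovs-appctl lacp/show bond0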

VLAN Host Interfaces

In order for the host (e.g. the Proxmox host itself, not the VMs!) to utilize a vlan within the bridge, you must create OVSIntPorts. These split out a virtual interface in the specified vlan to which you can assign an IP address (or use DHCP). You need to set ovs_options tag=$VLAN to let OVS know which vlan the interface should be a part of. In the switch world, this is commonly referred to as an RVI (Routed Virtual Interface) or IRB (Integrated Routing and Bridging) interface.

IMPORTANT: Any OVSIntPorts you create MUST also show up in the actual bridge definition under ovs_ports. If they do not, they will NOT be brought up even though you specified an ovs_bridge. You also need to prefix each definition with allow-$bridge $iface.

Setting up this vlan port would look like this in /etc/network/interfaces:

allow-vmbr0 vlan50
iface vlan50 inet static
  ovs_type OVSIntPort
  ovs_bridge vmbr0
  ovs_options tag=50
  ovs_extra set interface ${IFACE} external-ids:iface-id=$(hostname -s)-${IFACE}-vif
  address 10.50.10.44
  netmask 255.255.255.0
  gateway 10.50.10.1
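
To verify that the internal port picked up its tag and address, you can query OVS and the kernel directly, for example:

ovs-vsctl get Port vlan50 tag
ip addr show vlan50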

Note on MTU

If you plan on using an MTU larger than the default of 1500, you need to mark any physical interfaces, bonds, and bridges with the larger MTU by adding an mtu setting to the definition (such as mtu 9000); otherwise it will be disallowed. However, you should NOT create definitions for physical interfaces that are part of a bond; instead, set their MTU at the bond layer with a pre-up script such as

pre-up ( ifconfig eth0 mtu 9000 && ifconfig eth1 mtu 9000 )

If you instead create entries in /etc/network/interfaces for those physical interfaces and set the MTU there, that MTU will propagate to EVERY child interface. That means you wouldn't be able to configure OVSIntPorts with an MTU of 1500.

Odd Note: Some newer Intel Gigabit NICs have a hardware limitation which means the maximum MTU they can support is 8996 (instead of 9000). If your interfaces aren't coming up and you are trying to use 9000, this is likely the reason and can be difficult to debug. Try setting all your MTUs to 8996 and see if it resolves your issues.
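
To check the MTU that actually took effect on each layer (physical interfaces, bridge, and internal ports), something like the following is handy; substitute your own interface names:

ip link show eth0 | grep mtu
ip link show eth1 | grep mtu
ip link show vmbr0 | grep mtu
ip link show vlan55 | grep mtu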

Examples

Example 1: Bridge + Internal Ports + Untagged traffic

The example below shows how to create a bridge with one physical interface, split out two vlan interfaces for the host, and tag untagged traffic coming in on eth0 into vlan 1.

This is a complete and working /etc/network/interfaces listing:

# Loopback interface
auto lo
iface lo inet loopback

# Bridge for our eth0 physical interfaces and vlan virtual interfaces (our VMs will
# also attach to this bridge)
auto vmbr0
allow-ovs vmbr0
iface vmbr0 inet manual
  ovs_type OVSBridge
  # NOTE: we MUST mention eth0, vlan1, and vlan55 even though each
  #       of them lists ovs_bridge vmbr0!  Not sure why it needs this
  #       kind of cross-referencing but it won't work without it!
  ovs_ports eth0 vlan1 vlan55
  mtu 9000

# Physical interface for traffic coming into the system.  Retag untagged
# traffic into vlan 1, but pass through other tags.
auto eth0
allow-vmbr0 eth0
iface eth0 inet manual
  ovs_bridge vmbr0
  ovs_type OVSPort
  ovs_options tag=1 vlan_mode=native-untagged
# Alternatively if you want to also restrict what vlans are allowed through
# you could use:
# ovs_options tag=1 vlan_mode=native-untagged trunks=10,20,30,40
  mtu 9000

# Virtual interface to take advantage of originally untagged traffic
allow-vmbr0 vlan1
iface vlan1 inet static
  ovs_type OVSIntPort
  ovs_bridge vmbr0
  ovs_options tag=1
  ovs_extra set interface ${IFACE} external-ids:iface-id=$(hostname -s)-${IFACE}-vif
  address 10.50.10.44
  netmask 255.255.255.0
  gateway 10.50.10.1
  mtu 1500

# Ceph cluster communication vlan (jumbo frames)
allow-vmbr0 vlan55
iface vlan55 inet static
  ovs_type OVSIntPort
  ovs_bridge vmbr0
  ovs_options tag=55
  ovs_extra set interface ${IFACE} external-ids:iface-id=$(hostname -s)-${IFACE}-vif
  address 10.55.10.44
  netmask 255.255.255.0
  mtu 9000
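
After the interfaces come up, the resulting switch layout can be inspected with ovs-vsctl show. Abridged, illustrative output for this example would look roughly like the following; exact formatting differs between Open vSwitch versions:

ovs-vsctl show
    Bridge "vmbr0"
        Port "vmbr0"
            Interface "vmbr0"
                type: internal
        Port "eth0"
            tag: 1
            Interface "eth0"
        Port "vlan1"
            tag: 1
            Interface "vlan1"
                type: internal
        Port "vlan55"
            tag: 55
            Interface "vlan55"
                type: internal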

Example 2: Bond + Bridge + Internal Ports

The example below shows a combination of all the features above. Two NICs are bonded together and added to an OVS bridge, and two vlan interfaces are split out in order to give the host access to vlans with different MTUs.

This is a complete and working /etc/network/interfaces listing:

# Loopback interface
auto lo
iface lo inet loopback

# Bond eth0 and eth1 together
allow-vmbr0 bond0
iface bond0 inet manual
  ovs_bridge vmbr0
  ovs_type OVSBond
  ovs_bonds eth0 eth1
  # Force the MTU of the physical interfaces to be jumbo-frame capable.
  # This doesn't mean that every OVSIntPort must be jumbo-capable.
  # We cannot, however set up definitions for eth0 and eth1 directly due
  # to what appear to be bugs in the initialization process.
  pre-up ( ifconfig eth0 mtu 9000 && ifconfig eth1 mtu 9000 )
  ovs_options bond_mode=balance-tcp lacp=active other_config:lacp-time=fast
  mtu 9000

# Bridge for our bond and vlan virtual interfaces (our VMs will
# also attach to this bridge)
auto vmbr0
allow-ovs vmbr0
iface vmbr0 inet manual
  ovs_type OVSBridge
  # NOTE: we MUST mention bond0, vlan50, and vlan55 even though each
  #       of them lists ovs_bridge vmbr0!  Not sure why it needs this
  #       kind of cross-referencing but it won't work without it!
  ovs_ports bond0 vlan50 vlan55
  mtu 9000

# Proxmox cluster communication vlan
allow-vmbr0 vlan50
iface vlan50 inet static
  ovs_type OVSIntPort
  ovs_bridge vmbr0
  ovs_options tag=50
  ovs_extra set interface ${IFACE} external-ids:iface-id=$(hostname -s)-${IFACE}-vif
  address 10.50.10.44
  netmask 255.255.255.0
  gateway 10.50.10.1
  mtu 1500

# Ceph cluster communication vlan (jumbo frames)
allow-vmbr0 vlan55
iface vlan55 inet static
  ovs_type OVSIntPort
  ovs_bridge vmbr0
  ovs_options tag=55
  ovs_extra set interface ${IFACE} external-ids:iface-id=$(hostname -s)-${IFACE}-vif
  address 10.55.10.44
  netmask 255.255.255.0
  mtu 9000

Example 3: Bond + Bridge + Internal Ports + Untagged traffic + No LACP

The example below shows a combination of the features above. Two NICs are bonded together and added to an OVS bridge. It imitates the default Proxmox network configuration, but uses a bond instead of a single NIC; because the bond uses balance-slb rather than LACP, it works without a managed switch that supports LACP.

This is a complete and working /etc/network/interfaces listing:

# Loopback interface
auto lo
iface lo inet loopback

# Bond eth0 and eth1 together
allow-vmbr0 bond0
iface bond0 inet manual
	ovs_bridge vmbr0
	ovs_type OVSBond
	ovs_bonds eth0 eth1
	ovs_options bond_mode=balance-slb vlan_mode=native-untagged

# Bridge for our bond and vlan virtual interfaces (our VMs will
# also attach to this bridge)
auto vmbr0
allow-ovs vmbr0
iface vmbr0 inet manual
	ovs_type OVSBridge
	ovs_ports bond0 vlan1

# Virtual interface to take advantage of originally untagged traffic
allow-vmbr0 vlan1
iface vlan1 inet static
	ovs_type OVSIntPort
	ovs_bridge vmbr0
	ovs_options vlan_mode=access
	ovs_extra set interface ${IFACE} external-ids:iface-id=$(hostname -s)-${IFACE}-vif
	address 192.168.3.5
	netmask 255.255.255.0
	gateway 192.168.3.254

Example 4: Spanning Tree (STP) - 1Gbps uplink, 10Gbps interconnect

This example shows how you can use Spanning Tree (STP) to interconnect your Proxmox nodes inexpensively while uplinking to your core switches for external traffic, all while maintaining a fully fault-tolerant interconnection scheme. This means VM<->VM traffic (or possibly Ceph<->Ceph) can operate at the speed of the directly attached network interfaces in a star or ring topology. In this example, we use 10Gbps links to interconnect our 3 nodes (direct-attach) and uplink to our core switches at 1Gbps. Spanning Tree configured with the right cost metrics will prevent loops and activate the optimal paths for traffic. We use this topology because 10Gbps switch ports are very expensive, so this is strictly a cost-saving manoeuvre. You could just as well use 40Gbps ports instead of 10Gbps ports; the key point is that the interfaces used to interconnect the nodes are faster than the interfaces used to connect to the core switches.

To better explain what we are accomplishing, look at the ASCII-art representation below:

 X     = 10Gbps port
 G     = 1Gbps port
 B     = Blocked via Spanning Tree
 R     = Spanning Tree Root
 PM1-3 = Proxmox hosts 1-3
 SW1-2 = Juniper Switches (stacked) 1-2
 * NOTE: Open vSwitch cannot do STP on bonded links, otherwise the links to the core
         switches would be bonded in this diagram :/

 |-----------------------------|
 | G           G           G   | SW1
 |-|-----------|-----------|---| R
 |-+-----------+-----------+---|
 | | G         | G         | G | SW2
 |-+-|---------+-|---------+-|-|
   | |         | |         | |
   | |         | |         | |
   | B         B B         B B
   | |         | |         | |
|--|-|--|      | |      |--|-|--|
|  O OX--------+-+--------XO O  |
|     X |      | |      | X     |
|------\|      | |      |/------|
   PM1  \      | |      /  PM3
         \     | |     B
          \    | |    /
           \|--|-|--|/
            \  O O  /
            |X     X|
            |-------|
               PM2 

This is a complete and working /etc/network/interfaces listing:

auto lo
iface lo inet loopback

allow-vmbr0 eth0
# 1Gbps link to core switch
iface eth0 inet manual
   ovs_bridge vmbr0
   ovs_type OVSPort
   ovs_options other_config:stp-path-cost=100
   mtu 8996

allow-vmbr0 eth1
# 1Gbps link to secondary core switch
iface eth1 inet manual
   ovs_bridge vmbr0
   ovs_type OVSPort
   ovs_options other_config:stp-path-cost=100
   mtu 8996

allow-vmbr0 eth2
# 10Gbps link to another proxmox/ceph node
iface eth2 inet manual
   ovs_bridge vmbr0
   ovs_type OVSPort
   ovs_options other_config:stp-path-cost=10
   mtu 8996

allow-vmbr0 eth3
# 10Gbps link to another proxmox/ceph node
iface eth3 inet manual
   ovs_bridge vmbr0
   ovs_type OVSPort
   ovs_options other_config:stp-path-cost=10
   mtu 8996 

auto vmbr0
allow-ovs vmbr0
iface vmbr0 inet manual
  ovs_type OVSBridge
  ovs_ports eth0 eth1 eth2 eth3 vlan50 vlan55

  # Lower timer settings for shorter convergence times; we're on a fast network.
  # Set the priority value high (i.e. least preferred) so this host won't be
  # elected the STP root.
  # NOTE: ovs_options and ovs_extra do *not* work for some reason to set the STP
  #       options.
  up ovs-vsctl set Bridge ${IFACE} stp_enable=true other_config:stp-priority=0xFFFF other_config:stp-forward-delay=4 other_config:stp-max-age=6 other_config:stp-hello-time=1
  mtu 8996
  # Wait for spanning-tree convergence
  post-up sleep 20

# Proxmox cluster communication vlan
allow-vmbr0 vlan50
iface vlan50 inet static
  ovs_type OVSIntPort
  ovs_bridge vmbr0
  ovs_options tag=50
  ovs_extra set interface ${IFACE} external-ids:iface-id=$(hostname -s)-${IFACE}-vif
  address 10.50.30.44
  netmask 255.255.255.0
  gateway 10.50.30.1
  mtu 1500

# Ceph cluster communication vlan (jumbo frames)
allow-vmbr0 vlan55
iface vlan55 inet static
  ovs_type OVSIntPort
  ovs_bridge vmbr0
  ovs_options tag=55
  ovs_extra set interface ${IFACE} external-ids:iface-id=$(hostname -s)-${IFACE}-vif
  address 10.55.30.44
  netmask 255.255.255.0
  mtu 8996
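
Because the STP parameters are applied via the up script rather than ovs_options, it is worth confirming them after boot. The queries below are illustrative and assume the Port status column is where ovs-vswitchd reports the STP role and state (forwarding or blocking) of each link once spanning tree has converged:

ovs-vsctl get Bridge vmbr0 stp_enable other_config
ovs-vsctl list Port eth0 | grep -E 'name|status'
ovs-vsctl list Port eth2 | grep -E 'name|status'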

On our Juniper core switches, we put in place this configuration:

set protocols rstp bridge-priority 0
set protocols rstp forward-delay 4
set protocols rstp max-age 6
set protocols rstp hello-time 1
# ProxMox 1
set protocols rstp interface ge-0/0/3 cost 70
set protocols rstp interface ge-1/0/3 cost 70
# ProxMox 2
set protocols rstp interface ge-0/0/4 cost 100
set protocols rstp interface ge-1/0/4 cost 100
# ProxMox 3
set protocols rstp interface ge-0/0/5 cost 130
set protocols rstp interface ge-1/0/5 cost 130

Multicast

Right now Open vSwitch doesn't do anything with regard to multicast. Where you would typically tell Linux to enable the multicast querier on the bridge, you should instead set up the querier on your router or switch. Please refer to the Multicast_notes wiki for more information.

Using Open vSwitch in Proxmox

Using Open vSwitch isn't that much different from using normal Linux bridges. The main difference is that instead of having one bridge per vlan, you have a single bridge containing all your vlans. When configuring the network interface for a VM, you select the bridge (probably the only bridge you have) and also enter the VLAN Tag associated with the vlan you want your VM to be a part of. Now there is zero effort when adding or removing VLANs!
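
For illustration only (the tap name and tag below are hypothetical), a VM whose network device is configured in the Proxmox GUI with bridge vmbr0 and VLAN Tag 55 simply appears on the switch as another tagged port in ovs-vsctl show:

        Port "tap100i0"
            tag: 55
            Interface "tap100i0"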