[pve-devel] [PATCH] openvswitch hybrid network model implementation

Alexandre DERUMIER aderumier at odiso.com
Fri Apr 25 08:03:14 CEST 2014


Here are the results
--------------------

network model
-------------

bridge
------
pm0.94----pm0-----pm0.peer----->vmbr0<-----veth100i0--------veth100i0p.94 (tagging vlan 94)------->fwbr100i0<-----------tap100i0 
bond0-------------------------->     <-----veth110i0--------veth110i0p.94 (tagging vlan 94)------->fwbr110i0<-----------tap110i0
                                     <-----veth200i0--------veth200i0p   (no vlan)---------------->fwbr200i0<-----------tap200i0
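
For reference, a rough manual equivalent of one VM's chain above, built with
iproute2/brctl (a sketch only, using the names from the diagram; the patch may
do it differently, and tap100i0 is created by qemu when the guest starts):

# veth pair: one end plugged into vmbr0, the peer side carries the vlan tag
ip link add veth100i0 type veth peer name veth100i0p
brctl addif vmbr0 veth100i0
# tag vlan 94 on the peer side
ip link add link veth100i0p name veth100i0p.94 type vlan id 94
# firewall bridge between the tagged veth peer and the guest tap
brctl addbr fwbr100i0
brctl addif fwbr100i0 veth100i0p.94
brctl addif fwbr100i0 tap100i0
ip link set veth100i0 up
ip link set veth100i0p up
ip link set veth100i0p.94 up
ip link set fwbr100i0 up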


                                


/etc/network/interfaces
-----------------------

auto bond0
iface bond0 inet manual
        slaves eth0 eth1
        bond_miimon 100
        bond_mode active-backup
        pre-up ifup eth0 eth1
        post-down ifdown eth0 eth1

auto vmbr0
iface vmbr0 inet manual
        bridge_ports bond0
        bridge_stp off
        bridge_fd 0
        post-up echo 0 > /sys/devices/virtual/net/vmbr0/bridge/multicast_snooping

auto pm0
iface pm0 inet manual
        VETH_BRIDGETO vmbr0

auto pm0.94
iface pm0.94 inet static
        address X.X.X.X
        netmask 255.255.255.0
        gateway X.X.X.X
        vlan-raw-device pm0
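
(VETH_BRIDGETO is the custom option added by the patch; on the host side it
roughly amounts to something like the following sketch, not the literal
implementation:)

# veth pair for the host: pm0 stays outside, pm0.peer is plugged into vmbr0
ip link add pm0 type veth peer name pm0.peer
brctl addif vmbr0 pm0.peer
ip link set pm0 up
ip link set pm0.peer up
# pm0.94 (the stanza above) then carries the host address on vlan 94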



bench results kernel 3.10
---------------------------
vm->host : 12 gbit/s
host->vm : 12 gbit/s
vm->vm   : 10 gbit/s


The bottleneck is not the veth pair but the vhost-net process, which by default uses only 1 core.
I got the same results with taps plugged directly into the bridge or openvswitch.


This could be improved with virtio-net multiqueue:
http://www.linux-kvm.org/page/Multiqueue
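
For example, with 4 queues it would look something like this (the standard
multiqueue setup from the page above, not something I have benchmarked here;
vectors = 2*queues + 2):

# host side: multiqueue tap + virtio-net
qemu-system-x86_64 ... \
    -netdev tap,id=hostnet0,vhost=on,queues=4 \
    -device virtio-net-pci,netdev=hostnet0,mq=on,vectors=10

# guest side: enable the extra queues
ethtool -L eth0 combined 4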


Anyway, it works very well.
No need for bridge-vlan tricks, simply tag on the veth.
I think a setup with qinq should work too (see the sketch below).
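
For qinq, what I have in mind is roughly this (untested sketch; the 802.1ad
outer tag needs kernel >= 3.10, and the names are only illustrative):

# outer s-tag (802.1ad) on the veth peer, inner c-tag stacked on top of it
ip link add link veth100i0p name veth100i0p.100 type vlan proto 802.1ad id 100
ip link add link veth100i0p.100 name veth100i0p.100.94 type vlan id 94
brctl addif fwbr100i0 veth100i0p.100.94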




bench results kernel 2.6.32
-------------------------------
I can't test it, because I have a multicast problem with this setup on the host side.
I don't know why, but cluster-wide multicast stopped working once the host had booted with this config.
(I could still ping the host and connect to it, but multicast/pve-cluster was broken, and on all nodes.
Snooping was disabled on all hosts and switches.)



It would be great if somebody could test with 2.6.32 (to see if it's buggy, and check performance too, e.g. with iperf as below).
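
For the performance part, something like iperf between guest and host should be
enough to compare, e.g.:

# on the host
iperf -s
# in the guest (replace <host ip>)
iperf -c <host ip> -t 30 -P 4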

(I'll send the patch.)



----- Original Message ----- 

From: "Alexandre DERUMIER" <aderumier at odiso.com> 
To: "Dietmar Maurer" <dietmar at proxmox.com> 
Cc: pve-devel at pve.proxmox.com 
Sent: Thursday 24 April 2014 12:27:17 
Subject: Re: [pve-devel] [PATCH] openvswitch hybrid network model implementation 

>>Sorry, I am busy, doing kernel debugging right now ... 

Ok, no problem, I'll send a report tomorrow 

----- Original Message ----- 

From: "Dietmar Maurer" <dietmar at proxmox.com> 
To: "Alexandre DERUMIER" <aderumier at odiso.com> 
Cc: pve-devel at pve.proxmox.com 
Sent: Thursday 24 April 2014 10:40:36 
Subject: RE: [pve-devel] [PATCH] openvswitch hybrid network model implementation 

> (I have patches for Network.pm to manage the veth-fwbridge, do you want them 
> to test?) 

Sorry, I am busy, doing kernel debugging right now ... 
_______________________________________________ 
pve-devel mailing list 
pve-devel at pve.proxmox.com 
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel 


