[pve-devel] The network performance future for VMs

Cesar Peschiera brain at click.com.py
Wed Aug 19 10:50:37 CEST 2015


Thanks Alexandre for your prompt response.

> It seems easy with the vhost-user virtual network card, and that one can't
> work with a Linux bridge, because it's userland

I forgot to say that OVS was configured on two ports for the LAN link, and
the other two ports used the Linux stack for DRBD in balance-rr, NIC-to-NIC
(OVS does not have a balance-rr option).
It was in this setup that I had problems with DRBD, so I preferred to
disable OVS completely on my servers.
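For context, the balance-rr side of that setup on the Linux stack looks roughly like this (a sketch only; the interface names eth2/eth3 and the addresses are assumptions):

```
# /etc/network/interfaces -- DRBD back-to-back link, Linux bonding (sketch)
auto bond1
iface bond1 inet static
    address 10.10.10.1
    netmask 255.255.255.0
    slaves eth2 eth3
    bond_mode balance-rr     # round-robin: stripes packets over both NICs
    bond_miimon 100          # check link state every 100 ms
```

balance-rr can deliver packets out of order, so the DRBD peers usually also want a higher net.ipv4.tcp_reordering sysctl to avoid spurious retransmits.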

> I'm not sure, but maybe dpkg on the Linux stack can only work with host
> physical interfaces and not QEMU virtual interfaces.
dpkg? I assume you mean DPDK.

Please see this link:
https://videos.cdn.redhat.com/summit2015/presentations/12752_red-hat-enterprise-virtualization-hypervisor-kvm-now-in-the-future.pdf

Page 18:
Config I - DPDK with VFIO device assignment (future)
According to the graph, it is functional with QEMU, but I think not for
Windows VMs... is that correct?

Moreover, all these questions come from wanting to improve the network speed
of a Win2k8 R2 VM.
... is there anything we can do to get better performance?
(I would like to reach a maximum of 20 Gb/s, since I have 2 ports of 10
Gb/s each, with LACP configured on the Linux stack)
Note: I know that I will need two simultaneous network connections to get 10
Gb/s on each link.
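The usual Linux-stack setup for this is an 802.3ad bond with layer3+4 hashing, so that different TCP connections can land on different links (a sketch; interface and bridge names are assumptions, and the switch needs a matching LACP group):

```
# /etc/network/interfaces -- LACP bond feeding a Linux bridge (sketch)
auto bond0
iface bond0 inet manual
    slaves eth0 eth1
    bond_mode 802.3ad                # LACP
    bond_miimon 100
    bond_xmit_hash_policy layer3+4   # hash on IP+port so flows spread over links

auto vmbr0
iface vmbr0 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    bridge_ports bond0
    bridge_stp off
    bridge_fd 0
```

Each single connection is still capped at 10 Gb/s; two parallel streams (e.g. iperf -c <peer> -P 2) are needed to approach 20 Gb/s.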


----- Original Message ----- 
From: "Alexandre DERUMIER" <aderumier at odiso.com>
To: "Cesar Peschiera" <brain at click.com.py>
Cc: "pve-devel" <pve-devel at pve.proxmox.com>
Sent: Wednesday, August 19, 2015 1:39 AM
Subject: Re: [pve-devel] The network performance future for VMs


>>So now my question is: can DPDK also be activated with the Linux stack?

I need to dig a little more into this.
Intel seems to push ovs-dpdk in every conference I have seen.
(It seems easy with the vhost-user virtual network card, and that one can't
work with a Linux bridge, because it's userland)


I'm not sure, but maybe dpkg on the Linux stack can only work with host
physical interfaces and not QEMU virtual interfaces.



----- Original Message -----
From: "Cesar Peschiera" <brain at click.com.py>
To: "aderumier" <aderumier at odiso.com>
Cc: "pve-devel" <pve-devel at pve.proxmox.com>
Sent: Tuesday, August 18, 2015 21:25:46
Subject: Re: [pve-devel] The network performance future for VMs

Oh, ok.

In the past, I had problems with DRBD 8.4.5 when OVS was enabled, so I had
to change my setup from OVS to the Linux stack; after that, I had no more
problems with DRBD.

About the problem with OVS and DRBD: I did not test it in depth (it was
during the preproduction phase), but if I remember correctly, the problem
appeared when an "OVS IntPort" was enabled, or maybe simply whenever OVS was
enabled in the setup.

I was using PVE 3.3.

So now my question is: can DPDK also be activated with the Linux stack?

----- Original Message ----- 
From: "Alexandre DERUMIER" <aderumier at odiso.com>
To: "Cesar Peschiera" <brain at click.com.py>
Cc: "pve-devel" <pve-devel at pve.proxmox.com>
Sent: Tuesday, August 18, 2015 8:57 AM
Subject: Re: [pve-devel] The network performance future for VMs


>>So, I would like to ask about the future of PVE in terms of network
>>performance.

DPDK will be implemented in Open vSwitch through vhost-user;
I'm waiting for OVS 2.4 to look at this.
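For reference, the vhost-user wiring in OVS 2.4 with DPDK is expected to look roughly like this (a sketch only, based on the upstream OVS DPDK instructions; bridge/port names and socket paths are assumptions, and OVS must be built with DPDK support):

```
# Create a DPDK (netdev) bridge and a vhost-user port on the host
ovs-vsctl add-br br0 -- set bridge br0 datapath_type=netdev
ovs-vsctl add-port br0 vhost-user-1 \
    -- set Interface vhost-user-1 type=dpdkvhostuser

# QEMU attaches to the socket OVS creates, with hugepage-backed guest RAM
qemu-system-x86_64 ... \
    -object memory-backend-file,id=mem,size=4096M,mem-path=/dev/hugepages,share=on \
    -numa node,memdev=mem -mem-prealloc \
    -chardev socket,id=char1,path=/usr/local/var/run/openvswitch/vhost-user-1 \
    -netdev type=vhost-user,id=net1,chardev=char1,vhostforce \
    -device virtio-net-pci,netdev=net1
```

Note the guest side is still a plain virtio-net device, so this should also work for Windows guests with virtio drivers; but the host side has to be OVS with DPDK, not a Linux bridge.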


----- Original Message -----
From: "Cesar Peschiera" <brain at click.com.py>
To: "pve-devel" <pve-devel at pve.proxmox.com>
Sent: Tuesday, August 18, 2015 13:00:59
Subject: [pve-devel] The network performance future for VMs

Hi developers of PVE

I would like to talk about network speed for VMs:

I see at this link (official Red Hat site):
https://videos.cdn.redhat.com/summit2015/presentations/12752_red-hat-enterprise-virtualization-hypervisor-kvm-now-in-the-future.pdf

On page 19 of this PDF, I see some interesting info:
Network Function Virtualization (NFV)
Throughput and Packets/sec "RHEL7.x + DPDK (Data Plane Development Kit)":

Millions of packets per second:
KVM = 208
Docker = 215
Bare-metal = 218
HW maximum = 225

Between KVM and bare metal, the difference is small: only 10.

I also see a list of compatible HW NICs at this link:
http://dpdk.org/doc/nics

So, I would like to ask about the future of PVE in terms of network
performance.

Best regards
Cesar

_______________________________________________
pve-devel mailing list
pve-devel at pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



