[pve-devel] applied: [PATCH http-server 1/4] websocket: improve masking performance

Alexandre DERUMIER aderumier at odiso.com
Tue Mar 10 14:57:33 CET 2020


>>[ 5] 0.00-10.00 sec 2.58 GBytes 2.22 Gbits/sec 0 sender 
>>[ 5] 0.00-10.00 sec 2.57 GBytes 2.21 Gbits/sec receiver 
>>iperf Done. 

>>this is with TLS and our regular AnyEvent API server handling the 
>>connection, with the target being a VM on the same physical host. 
>>
>>with -l 1024k (instead of the default 128k) we get about 200 Mbit/s more. 

That already seems really good!

>>multiple streams in parallel can handle more traffic (we 
>>could think about enabling parallel drive mirror instead of sequential? 

Yes, I was thinking about the same. (On my setup I also have 2 disks, 1 for the OS, 1 for data.)
It should be easy to implement parallel mirroring, along the lines of the sketch below.
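
Roughly: instead of starting one drive-mirror job and waiting for it to converge before starting the next one, start all jobs first and only then wait for all of them together. A very rough sketch of the shape (the helpers and variables here are made up for illustration, not the real PVE/QMP API):

---------8<----------

use strict;
use warnings;

# placeholder helpers - the real ones would drive the QMP drive-mirror jobs
sub start_drive_mirror { my ($vmid, $drive, $target) = @_; return { drive => $drive }; }
sub job_is_ready { my ($job) = @_; return 1; }

my ($vmid, $target) = (999111, 'target-node');
my @drives = qw(scsi0 scsi1);
my %jobs;

# start every mirror job up front instead of one after the other
$jobs{$_} = start_drive_mirror($vmid, $_, $target) for @drives;

# then wait until *all* jobs are ready before completing the switch-over
while (grep { !job_is_ready($jobs{$_}) } keys %jobs) {
    sleep(1);
}

--------->8----------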


I'll run some tests on my side to see if further optimizations can be done.
Thanks!



----- Original message -----
From: "Fabian Grünbichler" <f.gruenbichler at proxmox.com>
To: "aderumier" <aderumier at odiso.com>, "pve-devel" <pve-devel at pve.proxmox.com>
Sent: Tuesday, March 10, 2020 13:41:11
Subject: Re: [pve-devel] applied: [PATCH http-server 1/4] websocket: improve masking performance

On March 10, 2020 11:28 am, Alexandre DERUMIER wrote: 
> Hi, 
> 
> do you have a small PoC sample to create a tunnel like socat? 
> 
> I would like to bench with iperf. 
> 
> 
> I have some benchmarks, on a recent server with a 3 GHz CPU: 
> 
> direct NBD migration: 3.5 Gbit/s 
> 
> iperf through a socat tunnel: 3.5 Gbit/s (1 core at 100%) 
> 
> iperf through websocat in plaintext (a Rust websocket implementation, https://github.com/vi/websocat): 3 Gbit/s (1 core at 100%) 
> 
> 
> It would be great to reach 3 Gbit/s with the Perl implementation too :) 
> (I don't know about encryption, but with AES hardware support in the CPU, maybe the overhead is not that big) 

testing with the following script already showed me a bug in the 
websocket client part ;) with buffering in both directions you can use 
the following script (fill out the first few variables) to tunnel 
$local_port to $remote_port. 

$ iperf3 -c 127.0.0.1 -p 12345 

Connecting to host 127.0.0.1, port 12345 
[ 5] local 127.0.0.1 port 40164 connected to 127.0.0.1 port 12345 
[ ID] Interval Transfer Bitrate Retr Cwnd 
[ 5] 0.00-1.00 sec 264 MBytes 2.21 Gbits/sec 0 1.50 MBytes 
[ 5] 1.00-2.00 sec 245 MBytes 2.06 Gbits/sec 0 1.50 MBytes 
[ 5] 2.00-3.00 sec 261 MBytes 2.19 Gbits/sec 0 1.50 MBytes 
[ 5] 3.00-4.00 sec 238 MBytes 1.99 Gbits/sec 0 1.50 MBytes 
[ 5] 4.00-5.00 sec 274 MBytes 2.30 Gbits/sec 0 1.50 MBytes 
[ 5] 5.00-6.00 sec 279 MBytes 2.34 Gbits/sec 0 1.50 MBytes 
[ 5] 6.00-7.00 sec 299 MBytes 2.51 Gbits/sec 0 1.50 MBytes 
[ 5] 7.00-8.00 sec 284 MBytes 2.38 Gbits/sec 0 1.50 MBytes 
[ 5] 8.00-9.00 sec 254 MBytes 2.13 Gbits/sec 0 1.50 MBytes 
[ 5] 9.00-10.00 sec 246 MBytes 2.07 Gbits/sec 0 1.50 MBytes 
- - - - - - - - - - - - - - - - - - - - - - - - - 
[ ID] Interval Transfer Bitrate Retr 
[ 5] 0.00-10.00 sec 2.58 GBytes 2.22 Gbits/sec 0 sender 
[ 5] 0.00-10.00 sec 2.57 GBytes 2.21 Gbits/sec receiver 

iperf Done. 

this is with TLS and our regular AnyEvent API server handling the 
connection, with the target being a VM on the same physical host. 

with -l 1024k (instead of the default 128k) we get about 200 Mbit/s more. 

speeds remain roughly the same over a few minutes of sustained 
throughput. multiple streams in parallel can handle more traffic (we 
could think about enabling parallel drive mirror instead of sequential? 
but the load increases as well of course). e.g., with 8 connections on 
the same setup (only 2 vCPUs!) I get ~4.2 Gbit/s with a bit more 
fluctuation. with 4 vCPUs I get 2.77 Gbit/s with a single connection, and 
5 Gbit/s with -P 2 if we manage to hit two different pveproxy workers ;) 

raw iperf performance is about 9-11 Gbit/s, so that does not look too bad 
so far. 
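
(for completeness: the runs above are just plain iperf3 options on top of 
the command shown at the beginning, roughly 

$ iperf3 -c 127.0.0.1 -p 12345 -l 1024k   # bigger read/write buffer instead of the 128k default 
$ iperf3 -c 127.0.0.1 -p 12345 -P 8       # 8 parallel streams 

with the script below providing the tunnel on port 12345) 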

fixup and debug output in PVE::WebSocket.pm: 

---------8<---------- 

diff --git a/src/PVE/WebSocket.pm b/src/PVE/WebSocket.pm 
index 4cda43d..150c14f 100644 
--- a/src/PVE/WebSocket.pm 
+++ b/src/PVE/WebSocket.pm 
@@ -4,6 +4,7 @@ use strict; 
use warnings; 

use Errno qw(EINTR EAGAIN); 
+use IO::Select; 
use IO::Socket::SSL; 
use MIME::Base64; 
use Digest::SHA; 
@@ -220,6 +221,7 @@ sub decode { 
$self->{pong_data} = $frame_data; 
} elsif ($opcode == 0xA) { 
# pong received, continue 
+ print STDERR "pong received\n"; 
} else { 
die "received unhandled websocket opcode $opcode\n"; 
} 
@@ -252,6 +254,7 @@ sub process { 
my ($fh, $buffer_ref) = @_; 

if (length($$buffer_ref)) { 
+ $! = 0; 
my $nr = syswrite($fh, $$buffer_ref); 
if (!defined($nr)) { 
return if $! == EINTR || $! == EAGAIN; 
@@ -276,6 +279,8 @@ sub process { 
if length($output_buffer) <= $max_buffer_len; 
} elsif ($fh == $self->{socket}) { 
$drain_buffer->($self->{socket}, \$websock_buffer); 
+ $read_select->add($input_fh) 
+ if length($websock_buffer) <= $max_buffer_len; 
} 
} 

@@ -287,6 +292,7 @@ sub process { 
} elsif ($nr == 0) { 
$read_select->remove($self->{socket}); 
$write_select->remove($self->{socket}); 
+ print STDERR "websocket EOF\n"; 
$close = 1; 
next; 
} else { 
@@ -299,6 +305,7 @@ sub process { 
} 
} 
if ($req_close) { 
+ print STDERR "websocket REQ_CLOSE\n"; 
$close = 1; 
next; 
} 
@@ -311,6 +318,7 @@ sub process { 
$read_select->remove($input_fh); 
# close connection 
if (!$close) { 
+ print STDERR "input FH EOF\n"; 
$websock_buffer .= $self->encode(pack('n', 0), "\x88"); # close with status code 0 
$close = 1; 
$write_select->add($self->{socket}); 
@@ -318,6 +326,9 @@ sub process { 
} else { 
$websock_buffer .= $self->encode($buff); 
$write_select->add($self->{socket}); 
+ if (length($websock_buffer) > $max_buffer_len) { 
+ $read_select->remove($input_fh); 
+ } 
} 
} 
} 
@@ -325,9 +336,11 @@ sub process { 

if (!$close) { 
if ($!) { 
+ print STDERR "error $self->{path}\n"; 
die "error processing websocket connection - $!\n"; 
} 
# heartbeat / ping 
+ print STDERR "ping $self->{path}\n"; 
$websock_buffer .= $self->encode('1', "\x89"); 
$write_select->add($self->{socket}); 
} 
@@ -336,6 +349,7 @@ sub process { 
my $err = $@; 

eval { 
+ print STDERR "connection closed - $self->{path}\n"; 
if ($self->{socket}->connected) { 
# close connection 
$websock_buffer .= $self->encode(pack('n', 0), "\x88"); # close with status code 0 

--------->8---------- 
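
apart from the debug prints, the relevant part of the fixup above is the 
backpressure on the input fh: stop read-selecting it once the outgoing 
websocket buffer exceeds $max_buffer_len, and add it back once the buffer 
has drained. stripped of the websocket specifics, the pattern looks 
roughly like this (standalone sketch, names chosen for illustration only): 

---------8<----------

use strict;
use warnings;
use IO::Select;

# pump data from $in_fh to $out_fh while keeping the buffer bounded
sub pump {
    my ($in_fh, $out_fh, $max_buffer_len) = @_;

    my $read_select = IO::Select->new($in_fh);
    my $write_select = IO::Select->new();
    my $buffer = '';

    while (1) {
        my ($readable, $writable) = IO::Select->select($read_select, $write_select, undef, 5);

        for my $fh (@{$writable // []}) {
            my $nr = syswrite($fh, $buffer);
            next if !defined($nr);
            substr($buffer, 0, $nr, ''); # drop what was written
            $write_select->remove($fh) if !length($buffer);
            # buffer drained below the limit -> resume reading from the input
            $read_select->add($in_fh) if length($buffer) <= $max_buffer_len;
        }

        for my $fh (@{$readable // []}) {
            my $nr = sysread($fh, my $data, 64 * 1024);
            return if !$nr; # EOF (or error)
            $buffer .= $data;
            $write_select->add($out_fh);
            # buffer over the limit -> stop reading until it has drained
            $read_select->remove($in_fh) if length($buffer) > $max_buffer_len;
        }
    }
}

--------->8----------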

test script for "socat"-like behaviour in perl: 

---------8<---------- 

use strict; 
use warnings; 

use PVE::WebSocket; 
use PVE::APIClient::LWP; 

use IO::Socket::IP; 

#actually belongs in PVE::WebSocket ;) 
use IO::Select; 

my $node = 'TARGET_NODE_NAME'; 
my $host = 'TARGET_NODE_HOSTNAME_OR_IP'; 
my $vmid = 999111; 
my $local_port = 12345; 
my $remote_port = 12345; 
my $apitoken = 'PVEAPIToken=root at pam!TOKENID=TOKEN_VALUE_UUID'; 
my $fingerprint = 'AA:BB:..'; 

my $api_path = "/api2/json/nodes/$node/qemu/$vmid/mtunnelwebsocket?port=$remote_port"; 

my $conn = PVE::APIClient::LWP->new( 
    apitoken => $apitoken, 
    host => $host, 
    cached_fingerprints => { 
        $fingerprint => 1, 
    }, 
); 

my $data = { 
    version => 2, 
}; 
my $local_socket = IO::Socket::IP->new( 
    LocalHost => '127.0.0.1', 
    LocalPort => $local_port, 
    Listen => 1, 
    Type => SOCK_STREAM(), 
); 

while (my $client = $local_socket->accept()) { 
    print "accept()-ed new connection\n"; 
    my $cpid = fork(); 

    if ($cpid) { 
        # parent: keep accepting, the child handles this connection 
        $client->close(); 
    } else { 
        # child: open the websocket to the remote endpoint and let 
        # PVE::WebSocket shuffle data between it and the local client 
        $local_socket->close(); 
        my $ws = PVE::WebSocket->new($host, 8006, $api_path); 

        my $auth = "Authorization: $conn->{apitoken}"; 

        $ws->connect($auth); 

        $ws->{reader} = $client; 
        $ws->{writer} = $client; 

        $ws->process(); 
    } 
} 

1; 

--------->8---------- 
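
to try it out: fill in the variables at the top, run the script on the 
machine that should expose the local listening port (the file name does 
not matter, ws-tunnel.pl below is just an example), start an iperf3 
server on the other end of the tunnel (iperf3 -s -p <remote_port>), and 
then point iperf3 at the local end: 

$ perl ws-tunnel.pl & 
$ iperf3 -c 127.0.0.1 -p 12345 

(12345 being whatever you set $local_port to) 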



