[jetty-users] Unix socket performance numbers


Some very early, very unscientific performance numbers for the unix socket connector in head, fronted by haproxy.

This is using siege over localhost. So that is HTTP/1.0 with no keep-alive, on the same machine, hammering a hello world servlet. Insert all the usual disclaimers here about this being a very poor benchmark.
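
For reference, the hello world servlet is just the obvious tiny thing; something like the sketch below (names are illustrative, not the exact class used for these numbers):

import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class HelloServlet extends HttpServlet
{
    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
        throws ServletException, IOException
    {
        // Tiny fixed response, so the test measures connection and TLS
        // overhead rather than application work.
        response.setContentType("text/plain");
        response.getWriter().println("Hello World");
    }
}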

Connectors were (a rough sketch of the unix socket connector setup follows the list):

8080 HTTP direct
8443 HTTPS direct
8888 HTTP via haproxy/unixsocket in tcp mode with proxy protocol
8843 HTTPS via haproxy/unixsocket in tcp mode with proxy protocol
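
The Jetty side of the unixsocket connectors would look something like the sketch below. This is a rough reconstruction using the jetty-unixsocket module classes, not the exact configuration used for these numbers, and the socket path is illustrative. On the haproxy side the backends are plain tcp mode with send-proxy on the server line pointing at the socket, so haproxy does the TCP/TLS accept on 8888/8843 and Jetty still sees the real client address via the PROXY protocol header.

import org.eclipse.jetty.server.HttpConfiguration;
import org.eclipse.jetty.server.HttpConnectionFactory;
import org.eclipse.jetty.server.ProxyConnectionFactory;
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.servlet.ServletContextHandler;
import org.eclipse.jetty.unixsocket.UnixSocketConnector;

public class UnixSocketServer
{
    public static void main(String[] args) throws Exception
    {
        Server server = new Server();

        // Plain HTTP semantics on the socket; haproxy has already
        // terminated the TCP (or TLS) connection from the client.
        HttpConfiguration config = new HttpConfiguration();

        // ProxyConnectionFactory parses the PROXY protocol header that
        // haproxy prepends (send-proxy), then hands off to HTTP/1.1.
        UnixSocketConnector connector = new UnixSocketConnector(
            server,
            new ProxyConnectionFactory(),
            new HttpConnectionFactory(config));
        connector.setUnixSocket("/tmp/jetty.sock"); // illustrative path

        server.addConnector(connector);

        ServletContextHandler context = new ServletContextHandler();
        context.addServlet(HelloServlet.class, "/*");
        server.setHandler(context);

        server.start();
        server.join();
    }
}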


Here are the early results:


gregw@Tile440: ~
[2016] siege -c 100 -b http://localhost:8080/
** SIEGE 3.0.8
** Preparing 100 concurrent users for battle.
The server is now under siege...^C

Transactions:              329921 hits
Availability:              100.00 %
Elapsed time:               22.86 secs
Data transferred:          328.80 MB
Response time:                0.00 secs
Transaction rate:        14432.24 trans/sec
Throughput:               14.38 MB/sec
Concurrency:               59.76
Successful transactions:      329921
Failed transactions:               0
Longest transaction:           15.02
Shortest transaction:            0.00
 

gregw@Tile440: ~
[2017] siege -c 100 -b http://localhost:8888/
** SIEGE 3.0.8
** Preparing 100 concurrent users for battle.

Transactions:              256987 hits
Availability:              100.00 %
Elapsed time:               24.47 secs
Data transferred:          256.11 MB
Response time:                0.00 secs
Transaction rate:        10502.12 trans/sec
Throughput:               10.47 MB/sec
Concurrency:               12.53
Successful transactions:      256987
Failed transactions:               0
Longest transaction:           15.03
Shortest transaction:            0.00

gregw@Tile440: ~
[2018] siege -c 100 -b https://localhost:8443/
** SIEGE 3.0.8
** Preparing 100 concurrent users for battle.

Transactions:                1016 hits
Availability:              100.00 %
Elapsed time:               24.10 secs
Data transferred:            1.01 MB
Response time:                2.20 secs
Transaction rate:           42.16 trans/sec
Throughput:                0.04 MB/sec
Concurrency:               92.82
Successful transactions:        1016
Failed transactions:               0
Longest transaction:            3.94
Shortest transaction:            0.84

gregw@Tile440: ~
[2019] siege -c 100 -b https://localhost:8843/
** SIEGE 3.0.8
** Preparing 100 concurrent users for battle.

Transactions:                8312 hits
Availability:              100.00 %
Elapsed time:               23.74 secs
Data transferred:            8.28 MB
Response time:                0.22 secs
Transaction rate:          350.13 trans/sec
Throughput:                0.35 MB/sec
Concurrency:               75.73
Successful transactions:        8312
Failed transactions:               0
Longest transaction:            3.01
Shortest transaction:            0.01
 

So proxying HTTP over the unix socket is slower than direct (10,502 vs 14,432 trans/sec, i.e. direct is ~37% faster). To be expected!
HTTPS direct really, really hurts, especially in this test mode of 1 small request per connection, where every request pays a full TLS handshake.
HTTPS via haproxy is still slow in this mode, but almost an order of magnitude better than direct (350 vs 42 trans/sec)!

I hope to do some tests soon with a better test client.

But if you want faster SSL, then this looks like a promising direction.

cheers


