Re: [jetty-users] Unix socket performance numbers



So here are some numbers using ab with the keep-alive option:

HTTP :8080  98634.66 [#/sec]  117224.98 [Kbytes/sec]
HTTP :8888  67073.40 [#/sec]   79715.16 [Kbytes/sec]
HTTPS:8443  23622.46 [#/sec]   28074.74 [Kbytes/sec]
HTTPS:8843  52365.51 [#/sec]   62235.18 [Kbytes/sec]

So the headline here is that even with keep-alive, direct SSL sucks, but via haproxy/unix sockets it is about half the throughput of a direct connection... and more importantly (in an apples-with-apples kind of way) it is 78% of the throughput of the plain text proxied test.  So since proxies can have other benefits (e.g. load balancing), SSL is only just over a 20% cost... if you have a proxy anyway.
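
(For reference, from the summary above: 52365.51/98634.66 ≈ 53% of the direct plain text throughput, and 52365.51/67073.40 ≈ 78% of the proxied plain text throughput, hence the ~22% SSL cost once the proxy is already there.)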

Full numbers are below (still lacking rigour, and only over localhost... but I'm trying to stir up interest here so that others might benchmark with some real-world apps/load).

gregw@Tile440: ~
[2035] ab -n 500000 -c 100 -k http://localhost:8080/
This is ApacheBench, Version 2.3 <$Revision: 1604373 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking localhost (be patient)
Completed 50000 requests
Completed 100000 requests
Completed 150000 requests
Completed 200000 requests
Completed 250000 requests
Completed 300000 requests
Completed 350000 requests
Completed 400000 requests
Completed 450000 requests
Completed 500000 requests
Finished 500000 requests


Server Software:        Jetty(9.3.z-SNAPSHOT)
Server Hostname:        localhost
Server Port:            8080

Document Path:          /
Document Length:        1045 bytes

Concurrency Level:      100
Time taken for tests:   5.069 seconds
Complete requests:      500000
Failed requests:        0
Keep-Alive requests:    500000
Total transferred:      608500000 bytes
HTML transferred:       522500000 bytes
Requests per second:    98634.66 [#/sec] (mean)
Time per request:       1.014 [ms] (mean)
Time per request:       0.010 [ms] (mean, across all concurrent requests)
Transfer rate:          117224.98 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.0      0       4
Processing:     0    1   1.3      1      88
Waiting:        0    1   1.3      1      88
Total:          0    1   1.3      1      88

Percentage of the requests served within a certain time (ms)
  50%      1
  66%      1
  75%      1
  80%      1
  90%      2
  95%      2
  98%      3
  99%      5
 100%     88 (longest request)

gregw@Tile440: ~
[2036] ab -n 500000 -c 100 -k http://localhost:8888/
This is ApacheBench, Version 2.3 <$Revision: 1604373 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking localhost (be patient)
Completed 50000 requests
Completed 100000 requests
Completed 150000 requests
Completed 200000 requests
Completed 250000 requests
Completed 300000 requests
Completed 350000 requests
Completed 400000 requests
Completed 450000 requests
Completed 500000 requests
Finished 500000 requests


Server Software:        Jetty(9.3.z-SNAPSHOT)
Server Hostname:        localhost
Server Port:            8888

Document Path:          /
Document Length:        1045 bytes

Concurrency Level:      100
Time taken for tests:   7.455 seconds
Complete requests:      500000
Failed requests:        0
Keep-Alive requests:    500000
Total transferred:      608500000 bytes
HTML transferred:       522500000 bytes
Requests per second:    67073.40 [#/sec] (mean)
Time per request:       1.491 [ms] (mean)
Time per request:       0.015 [ms] (mean, across all concurrent requests)
Transfer rate:          79715.16 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.0      0       3
Processing:     0    1   0.8      1      29
Waiting:        0    1   0.8      1      29
Total:          0    1   0.8      1      29

Percentage of the requests served within a certain time (ms)
  50%      1
  66%      2
  75%      2
  80%      2
  90%      2
  95%      3
  98%      3
  99%      4
 100%     29 (longest request)

gregw@Tile440: ~
[2037] ab -n 500000 -c 100 -k https://localhost:8443/
This is ApacheBench, Version 2.3 <$Revision: 1604373 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking localhost (be patient)
Completed 50000 requests
Completed 100000 requests
Completed 150000 requests
Completed 200000 requests
Completed 250000 requests
Completed 300000 requests
Completed 350000 requests
Completed 400000 requests
Completed 450000 requests
Completed 500000 requests
Finished 500000 requests


Server Software:        Jetty(9.3.z-SNAPSHOT)
Server Hostname:        localhost
Server Port:            8443
SSL/TLS Protocol:       TLSv1.2,ECDHE-RSA-AES128-SHA256,2048,128

Document Path:          /
Document Length:        1045 bytes

Concurrency Level:      100
Time taken for tests:   21.166 seconds
Complete requests:      500000
Failed requests:        0
Keep-Alive requests:    500000
Total transferred:      608500000 bytes
HTML transferred:       522500000 bytes
Requests per second:    23622.46 [#/sec] (mean)
Time per request:       4.233 [ms] (mean)
Time per request:       0.042 [ms] (mean, across all concurrent requests)
Transfer rate:          28074.74 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0  27.5      0    2215
Processing:     0    4   6.4      3     727
Waiting:        0    4   6.4      3     727
Total:          0    4  30.8      3    2219

Percentage of the requests served within a certain time (ms)
  50%      3
  66%      4
  75%      4
  80%      5
  90%      7
  95%      8
  98%     11
  99%     13
 100%   2219 (longest request)

gregw@Tile440: ~
[2038] ab -n 500000 -c 100 -k https://localhost:8843/
This is ApacheBench, Version 2.3 <$Revision: 1604373 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking localhost (be patient)
Completed 50000 requests
Completed 100000 requests
Completed 150000 requests
Completed 200000 requests
Completed 250000 requests
Completed 300000 requests
Completed 350000 requests
Completed 400000 requests
Completed 450000 requests
Completed 500000 requests
Finished 500000 requests


Server Software:        Jetty(9.3.z-SNAPSHOT)
Server Hostname:        localhost
Server Port:            8843
SSL/TLS Protocol:       TLSv1.2,ECDHE-RSA-AES256-GCM-SHA384,2048,256

Document Path:          /
Document Length:        1045 bytes

Concurrency Level:      100
Time taken for tests:   9.548 seconds
Complete requests:      500000
Failed requests:        0
Keep-Alive requests:    500000
Total transferred:      608500000 bytes
HTML transferred:       522500000 bytes
Requests per second:    52365.51 [#/sec] (mean)
Time per request:       1.910 [ms] (mean)
Time per request:       0.019 [ms] (mean, across all concurrent requests)
Transfer rate:          62235.18 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   2.4      0     182
Processing:     0    2   0.8      2      76
Waiting:        0    2   0.8      2      76
Total:          0    2   2.8      2     196

Percentage of the requests served within a certain time (ms)
  50%      2
  66%      2
  75%      2
  80%      2
  90%      3
  95%      3
  98%      4
  99%      4
 100%    196 (longest request)




On 19 November 2015 at 14:30, Greg Wilkins <gregw@xxxxxxxxxxx> wrote:

Some very early, very unscientific performance numbers for the unix socket connector in head, with haproxy.

This is using siege over localhost.   So that is HTTP/1.0 with no keep-alive on the same machine hammering on a hello world servlet.   Insert all the usual disclaimers here about this being a very poor benchmark.

Connectors were:

8080 HTTP direct
8443 HTTPS direct
8888 HTTP haproxy/unixsocket in tcp mode with proxy protocol
8843 HTTPS haproxy/unixsocket in tcp mode with proxy protocol
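
For anyone wanting to try the same kind of setup, a rough sketch of the haproxy side might look like the following (the certificate and socket paths here are made up for illustration; this is not the exact config used for these numbers):

    # Illustrative sketch only - cert and socket paths are assumptions
    frontend http_in
        mode tcp
        bind *:8888
        default_backend jetty_unix

    frontend https_in
        mode tcp
        bind *:8843 ssl crt /etc/haproxy/server.pem
        default_backend jetty_unix

    backend jetty_unix
        mode tcp
        server jetty unix@/tmp/jetty.sock send-proxy

haproxy terminates TLS for :8843 and forwards both listeners as plain text over the unix socket, with a PROXY protocol header so that Jetty's unix socket connector (the new bit in head) still sees the real client address.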


Here are the early results:


gregw@Tile440: ~
[2016] siege -c 100 -b http://localhost:8080/
** SIEGE 3.0.8
** Preparing 100 concurrent users for battle.
The server is now under siege...^C

Transactions:              329921 hits
Availability:              100.00 %
Elapsed time:               22.86 secs
Data transferred:          328.80 MB
Response time:                0.00 secs
Transaction rate:        14432.24 trans/sec
Throughput:               14.38 MB/sec
Concurrency:               59.76
Successful transactions:      329921
Failed transactions:               0
Longest transaction:           15.02
Shortest transaction:            0.00
 

gregw@Tile440: ~
[2017] siege -c 100 -b http://localhost:8888/
** SIEGE 3.0.8
** Preparing 100 concurrent users for battle.

Transactions:              256987 hits
Availability:              100.00 %
Elapsed time:               24.47 secs
Data transferred:          256.11 MB
Response time:                0.00 secs
Transaction rate:        10502.12 trans/sec
Throughput:               10.47 MB/sec
Concurrency:               12.53
Successful transactions:      256987
Failed transactions:               0
Longest transaction:           15.03
Shortest transaction:            0.00

gregw@Tile440: ~
[2018] siege -c 100 -b https://localhost:8443/
** SIEGE 3.0.8
** Preparing 100 concurrent users for battle.

Transactions:                1016 hits
Availability:              100.00 %
Elapsed time:               24.10 secs
Data transferred:            1.01 MB
Response time:                2.20 secs
Transaction rate:           42.16 trans/sec
Throughput:                0.04 MB/sec
Concurrency:               92.82
Successful transactions:        1016
Failed transactions:               0
Longest transaction:            3.94
Shortest transaction:            0.84

gregw@Tile440: ~
[2019] siege -c 100 -b https://localhost:8843/
** SIEGE 3.0.8
** Preparing 100 concurrent users for battle.

Transactions:                8312 hits
Availability:              100.00 %
Elapsed time:               23.74 secs
Data transferred:            8.28 MB
Response time:                0.22 secs
Transaction rate:          350.13 trans/sec
Throughput:                0.35 MB/sec
Concurrency:               75.73
Successful transactions:        8312
Failed transactions:               0
Longest transaction:            3.01
Shortest transaction:            0.01
 

So for plain HTTP, direct is ~37% faster than proxied (14432 vs 10502 trans/sec).  To be expected!
HTTPS direct really, really sucks, especially in this test mode of one small request per connection.
HTTPS via haproxy is still slow in this mode, but almost an order of magnitude better than direct!
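(In numbers: 42.16 trans/sec for direct HTTPS vs 350.13 trans/sec via haproxy, roughly 8x.)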

I hope to soon do some tests with a better test client.

But if you want faster SSL, then this looks like a promising direction.

cheers



