Re: [jetty-users] HTTP/2 multiplexing

https://github.com/oneam/test-http2-jetty

I ran a test using two m4.xlarge EC2 instances. The ping time between them was sub-millisecond (~0.100 ms).

Using HTTP/1.1 I was able to get ~40000 requests/s with a sub-millisecond median latency. Anything higher would cause a latency spike.
Using HTTP/2 I was able to get ~50000 requests/s with a sub-millisecond median latency. Again, anything higher would cause a latency spike.

That's an improvement, but not as significant as I have seen for similar protocol changes.

I've done similar tests (i.e., going from a single request/response per TCP socket with multiple connections, to multiple out-of-order concurrent requests/responses on a single TCP socket) and witnessed a 5-10x improvement in throughput. I was hoping to see something similar going from HTTP/1.1 to HTTP/2.
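
For reference, the client side of my test follows roughly the pattern below. This is just a minimal sketch, assuming Jetty 9.3's high-level HttpClient over the HTTP/2 transport; the host, port, path, and request count are placeholders rather than the exact values from the repo linked above:

    import java.util.concurrent.CountDownLatch;

    import org.eclipse.jetty.client.HttpClient;
    import org.eclipse.jetty.http2.client.HTTP2Client;
    import org.eclipse.jetty.http2.client.http.HttpClientTransportOverHTTP2;

    public class Http2LoadSketch
    {
        public static void main(String[] args) throws Exception
        {
            // One HTTP/2 transport: all requests below are multiplexed
            // onto a single TCP connection to the same origin.
            HTTP2Client http2 = new HTTP2Client();
            HttpClient client = new HttpClient(new HttpClientTransportOverHTTP2(http2), null);
            client.start();

            int requests = 10_000; // placeholder batch size
            CountDownLatch latch = new CountDownLatch(requests);
            for (int i = 0; i < requests; ++i)
            {
                // send(listener) is asynchronous, so requests are queued
                // onto the connection without waiting for responses.
                client.newRequest("http://server:8080/ping")
                      .send(result -> latch.countDown());
            }
            latch.await();
            client.stop();
        }
    }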

(I know. Lies, damn lies, and benchmarks)


On Sat, Jul 25, 2015 at 6:13 AM, Simone Bordet <sbordet@xxxxxxxxxxx> wrote:
Hi,

On Thu, Jul 23, 2015 at 6:07 PM, Sam Leitch <sam@xxxxxxxxxx> wrote:
> Is there any way to ensure the HTTP/2 client/server interaction is truly
> multiplexed and asynchronous?

It is implemented that way, so you can be sure it's multiplexed.
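
If you want to observe the multiplexing at the frame level, the low-level client API lets you open many streams on a single session. A rough sketch (assuming the Jetty 9.3 low-level client API; the address, path, and stream count are illustrative):

    import java.net.InetSocketAddress;
    import java.util.concurrent.TimeUnit;

    import org.eclipse.jetty.http.HttpFields;
    import org.eclipse.jetty.http.HttpURI;
    import org.eclipse.jetty.http.HttpVersion;
    import org.eclipse.jetty.http.MetaData;
    import org.eclipse.jetty.http2.api.Session;
    import org.eclipse.jetty.http2.api.Stream;
    import org.eclipse.jetty.http2.client.HTTP2Client;
    import org.eclipse.jetty.http2.frames.HeadersFrame;
    import org.eclipse.jetty.util.FuturePromise;
    import org.eclipse.jetty.util.Promise;

    public class MultiplexDemo
    {
        public static void main(String[] args) throws Exception
        {
            HTTP2Client client = new HTTP2Client();
            client.start();

            // One TCP connection, one Session.
            FuturePromise<Session> promise = new FuturePromise<>();
            client.connect(new InetSocketAddress("server", 8080),
                    new Session.Listener.Adapter(), promise);
            Session session = promise.get(5, TimeUnit.SECONDS);

            MetaData.Request request = new MetaData.Request("GET",
                    new HttpURI("http://server:8080/ping"), HttpVersion.HTTP_2, new HttpFields());

            // Many concurrent streams on the same session; responses
            // may complete in any order.
            for (int i = 0; i < 100; ++i)
            {
                session.newStream(new HeadersFrame(request, null, true),
                        new Promise.Adapter<Stream>(), new Stream.Listener.Adapter());
            }
        }
    }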

> I'm excited for HTTP/2, not for its ability to improve performance for
> a browser, but for its ability to improve performance in a data center.
>
> I personally think a single machine doing simple REST operations in a
> low-latency environment should be able to handle hundreds of thousands of
> requests per second. Unfortunately, that has not been my experience in practice.
> I've done some prototyping and discovered that a bottleneck in HTTP/1.1 is
> the number of IP packets required. Even when using multiple connections,
> each request/response requires an IP packet to be sent. This puts an
> artificial limit on the number of concurrent requests/responses, in that you
> cannot send more than the number of packets/s that your machine can manage.
> In my testing, this has been on the order of tens of thousands of packets/s. In
> addition, reaching that level requires the CPUs to be completely saturated,
> which makes doing any useful work impossible.

Okay. We have clients that run 20k requests/s with little CPU, say 15%
or so (I don't recall the exact number right now).

> HTTP/2 uses a single TCP connection. That allows multiple requests/responses
> to be transmitted within a single TCP/IP packet, which can increase the
> request rate to hundreds of thousands or even millions per second.

And we do have this optimization in place for writes.
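
To illustrate the idea only (this is a conceptual sketch, not Jetty's actual internals): frames queued while the connection is busy can be flushed with a single gathering write, which lets the kernel pack several small requests/responses into one TCP segment:

    import java.io.IOException;
    import java.nio.ByteBuffer;
    import java.nio.channels.SocketChannel;
    import java.util.ArrayDeque;
    import java.util.Queue;

    class FrameBatcher
    {
        private final Queue<ByteBuffer> pending = new ArrayDeque<>();

        void enqueue(ByteBuffer frame)
        {
            pending.add(frame);
        }

        // Flush everything queued so far with one gathering write;
        // multiple small frames can then share a single packet.
        void flush(SocketChannel channel) throws IOException
        {
            ByteBuffer[] batch = pending.toArray(new ByteBuffer[0]);
            pending.clear();
            channel.write(batch);
        }
    }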

> Unfortunately, I did not see that kind of improvement when testing Jetty HTTP/2.
>
> Does anyone have any idea what could be throttling requests when using the
> HTTP/2 client?
> Are there any tweaks I can make to remove the throttling on HTTP/2 requests?

There is no throttling mechanism in Jetty.

You have to explain in much more detail the conditions you are testing under.
We are interested in such comparisons with HTTP/1.1, so if you detail
what you're doing we may be able to help you out.

In my experience, most of the time it's the client that is the
bottleneck in load testing, but without details I cannot comment
further.
Is your code available in a public place?

--
Simone Bordet
----
http://cometd.org
http://webtide.com
Developer advice, training, services and support
from the Jetty & CometD experts.