Re: [jetty-users] HTTP/2 multiplexing

Hi,

On Thu, Jul 23, 2015 at 6:07 PM, Sam Leitch <sam@xxxxxxxxxx> wrote:
> Is there any way to ensure the HTTP/2 client/server interaction is truly
> multiplexed and asynchronous?

It is implemented that way, so yes, the interaction is multiplexed and
asynchronous.
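
For example, with the low-level HTTP/2 client API you can open a single
session and start many streams on it without ever blocking; they are all
multiplexed on the same TCP connection. This is just an untested sketch
against the 9.3 API (host, port and path are placeholders), so adapt it
to your setup:

import java.net.InetSocketAddress;
import java.util.concurrent.TimeUnit;
import org.eclipse.jetty.http.HttpFields;
import org.eclipse.jetty.http.HttpURI;
import org.eclipse.jetty.http.HttpVersion;
import org.eclipse.jetty.http.MetaData;
import org.eclipse.jetty.http2.api.Session;
import org.eclipse.jetty.http2.api.Stream;
import org.eclipse.jetty.http2.client.HTTP2Client;
import org.eclipse.jetty.http2.frames.DataFrame;
import org.eclipse.jetty.http2.frames.HeadersFrame;
import org.eclipse.jetty.util.Callback;
import org.eclipse.jetty.util.FuturePromise;
import org.eclipse.jetty.util.Promise;

public class MultiplexedGets
{
    public static void main(String[] args) throws Exception
    {
        HTTP2Client client = new HTTP2Client();
        client.start();

        // One TCP connection, one HTTP/2 session.
        FuturePromise<Session> sessionPromise = new FuturePromise<>();
        client.connect(new InetSocketAddress("localhost", 8080),
                new Session.Listener.Adapter(), sessionPromise);
        Session session = sessionPromise.get(5, TimeUnit.SECONDS);

        // Open many streams without waiting for responses; mind the server's
        // SETTINGS_MAX_CONCURRENT_STREAMS limit when you do this for real.
        for (int i = 0; i < 100; ++i)
        {
            MetaData.Request request = new MetaData.Request("GET",
                    new HttpURI("http://localhost:8080/test"),
                    HttpVersion.HTTP_2, new HttpFields());
            HeadersFrame headers = new HeadersFrame(request, null, true);
            session.newStream(headers, new Promise.Adapter<Stream>(),
                    new Stream.Listener.Adapter()
                    {
                        @Override
                        public void onData(Stream stream, DataFrame frame, Callback callback)
                        {
                            // Consume the data and release flow control credits.
                            callback.succeeded();
                        }
                    });
        }

        Thread.sleep(2000); // crude wait for the demo; use latches in real code
        client.stop();
    }
}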

> I'm excited for HTTP/2, not for its ability to improve performance for a
> browser, but for its ability to improve performance in a data center.
>
> I personally think a single machine doing simple REST operations in a
> low-latency environment should be able to handle hundreds of thousands of
> requests. Unfortunately that has not been my experience in practice.
> I've done some prototyping and discovered that a bottleneck in HTTP/1.1 is
> the number of IP packets required. Even when using multiple connections,
> each request/response requires an IP packet to be sent. This puts an
> artificial limit on the number of concurrent requests/responses in that you
> cannot send more than the number of packets/s that your machine can manage.
> In my testing, this has been on the order of tens of thousands of packets/s.
> In addition, getting to that level requires the CPUs to be completely
> saturated, which makes doing any useful work impossible.

Okay. We have clients that run 20k requests/s with little CPU, say 15%
or so (I don't recall the exact number right now).

> HTTP/2 uses a single TCP connection. That allows multiple requests/responses
> to be transmitted within a single TCP/IP packet, which can increase the
> request rate to hundreds of thousands and even millions.

And we do have this optimization in place for writes.
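
That batching only kicks in if the client does not serialize the requests,
though: if you block on each response you get one small write per request
anyway. With the high-level HttpClient over the HTTP/2 transport that means
sending asynchronously, roughly like this (untested sketch, URL is a
placeholder):

import org.eclipse.jetty.client.HttpClient;
import org.eclipse.jetty.http2.client.HTTP2Client;
import org.eclipse.jetty.http2.client.http.HttpClientTransportOverHTTP2;

public class AsyncGets
{
    public static void main(String[] args) throws Exception
    {
        HttpClient httpClient = new HttpClient(
                new HttpClientTransportOverHTTP2(new HTTP2Client()), null);
        httpClient.start();

        // Fire the requests without waiting for each response: blocking on
        // each one forces a round-trip per request and defeats the batching.
        for (int i = 0; i < 1000; ++i)
        {
            httpClient.newRequest("http://localhost:8080/test")
                    .send(result ->
                    {
                        // Response handling happens asynchronously on Jetty's threads.
                    });
        }

        Thread.sleep(2000); // crude wait for the demo
        httpClient.stop();
    }
}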

> Unfortunately, I did not see that kind of improvement when testing Jetty HTTP/2.
>
> Does anyone have any idea what could be throttling requests when using
> HTTP/2 client?
> Are there any tweaks I can do to remove the throttles on HTTP/2 requests?

There is no throttling mechanism in Jetty.

You have to explain in much more detail the conditions you are testing
under. We are interested in such comparisons with HTTP/1.1, so if you
detail what you're doing we may be able to help you out.

In my experience, most of the time it's the client that is the
bottleneck in load testing, but without details I cannot comment
further.
Is your code available in a public place?
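
In the meantime, a couple of client-side settings are worth double
checking in this kind of test; just a sketch, the exact defaults depend
on the version you are using (same setup as the sketches above):

HTTP2Client http2Client = new HTTP2Client();
HttpClient httpClient = new HttpClient(new HttpClientTransportOverHTTP2(http2Client), null);

// Requests queued beyond this per-destination limit are failed rather than
// sent, so raise it if you enqueue tens of thousands of requests at once.
httpClient.setMaxRequestsQueuedPerDestination(100_000);

// Also remember that the number of streams in flight on one connection is
// capped by the server's SETTINGS_MAX_CONCURRENT_STREAMS, which is a server
// side setting, not a client one.
httpClient.start();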

-- 
Simone Bordet
----
http://cometd.org
http://webtide.com
Developer advice, training, services and support
from the Jetty & CometD experts.

