
Re: [jetty-users] HTTP/2 multiplexing

Hi,

On Mon, Jul 27, 2015 at 7:11 PM, Sam Leitch <sam@xxxxxxxxxx> wrote:
> It makes sense that changing the number of connections will make a huge
> difference since the HTTP/1.1 case will only allow 1 concurrent request. My
> hope was that even if I use HttpClient.setMaxConnectionsPerDestination(200),
> I would still see an improvement on the HTTP/2 side.
>
> The idea is that there is a limit on the rate of IP packets sent between
> machines. Therefore, even when using multiple connections there is a limit
> to the amount of traffic that can pass between two machines. (Note: number
> of packets, not total bandwidth)
>
> HTTP/2 uses a single TCP connection for multiple outstanding requests. Since
> TCP is a byte-pipe, it can allow multiple requests to be transmitted in a
> single IP packet. This would allow the request rate to increase while the
> packet rate remains the same, thus increasing throughput.
>
> The request rates I have seen for HTTP/1.1 fall within the packet-rate range
> of most tests I've done on Linux/MacOS systems (50k-75k packets/s). Using a
> single TCP socket with a multiple-requests-per-packet optimization, I've seen
> much higher throughput (200k-1000k requests/s). Unfortunately, my testing used
> a primitive protocol and I haven't spent the time to see if I can repeat the
> results using HTTP.

I think you're not stressing the client enough to produce the
multi-request TCP packets you're looking for.
Small requests are written as soon as they are sent, and on a fast,
non-congested network there is a high chance that each write finishes
immediately, before the next request is sent.
Multiple client threads may induce some queueing; otherwise you have
to add latency, for example via a (slow) queueing mechanism that lets
requests accumulate.
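Here is a minimal sketch of that idea in plain Java (hypothetical names, not the Jetty internals): several client threads enqueue small "requests" concurrently, and a single writer drains whatever has accumulated and treats each drain as one batched write, the analogue of several requests sharing a packet.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

public class QueueCoalescing {
    // Returns the total number of requests drained by the single writer.
    static int run(int producers, int requestsPerProducer) throws Exception {
        BlockingQueue<String> queue = new LinkedBlockingQueue<>();
        ExecutorService pool = Executors.newFixedThreadPool(producers);

        // Several client threads enqueue small "requests" concurrently,
        // inducing the queueing that makes coalescing possible.
        for (int p = 0; p < producers; p++) {
            pool.execute(() -> {
                for (int i = 0; i < requestsPerProducer; i++) {
                    queue.add("GET /item");
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);

        // A single writer drains whatever queued up; each drain is one
        // batched "write" containing potentially many requests.
        int written = 0;
        List<String> batch = new ArrayList<>();
        while (written < producers * requestsPerProducer) {
            batch.clear();
            queue.drainTo(batch);
            written += batch.size();
        }
        return written;
    }

    public static void main(String[] args) throws Exception {
        System.out.println("total requests drained: " + run(4, 1000));
    }
}
```

A single-threaded tight loop rarely triggers this, because the queue is empty again before the next request arrives.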

Sending a request triggers a TCP write, and the HTTP/2 machinery
gathers queued frames into a single write.
So either you slow down the TCP writes so that requests can queue up
(e.g. by using larger requests), or you send requests faster than they
can be written (e.g. from multiple threads; a tight loop like the one
in your code is probably not enough).
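
To illustrate the gathering write itself, here is a writev-style sketch using java.nio (a Pipe stands in for the socket; this is not the actual Jetty code): several small frames that have queued up are flushed in one gathering write, so on a real socket they can share a TCP segment.

```java
import java.nio.ByteBuffer;
import java.nio.channels.Pipe;
import java.nio.charset.StandardCharsets;

public class GatheringWrite {
    // Flush all queued frames with gathering writes; returns bytes written.
    static long flush(Pipe.SinkChannel sink, ByteBuffer[] frames) throws Exception {
        long written = 0;
        // A gathering write carries several small buffers back to back in
        // one call, so they may end up in the same TCP segment on a socket.
        while (frames[frames.length - 1].hasRemaining()) {
            written += sink.write(frames);
        }
        return written;
    }

    public static void main(String[] args) throws Exception {
        Pipe pipe = Pipe.open();
        // Three queued frames, e.g. headers for three multiplexed streams.
        ByteBuffer[] frames = {
            ByteBuffer.wrap("HEADERS stream=1".getBytes(StandardCharsets.US_ASCII)),
            ByteBuffer.wrap("HEADERS stream=3".getBytes(StandardCharsets.US_ASCII)),
            ByteBuffer.wrap("HEADERS stream=5".getBytes(StandardCharsets.US_ASCII)),
        };
        long n = flush(pipe.sink(), frames);
        System.out.println("bytes written in one gathering flush: " + n);
    }
}
```

With HTTP/1.1 on separate connections, each of those frames would have been a separate write on a separate socket, and so a separate packet.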

-- 
Simone Bordet
----
http://cometd.org
http://webtide.com
Developer advice, training, services and support
from the Jetty & CometD experts.

