
Re: [jetty-users] Preventing queuing of requests when all connections exhausted

On 1/31/15 3:16 PM, Simone Bordet wrote:

>> My idea for an interface would be to exempt unused/idle
>> MaxConnectionsPerDestination from the MaxRequestsQueuedPerDestination limit.
> Now I am confused. You said your case was for when all connections are in use.
> Why do you want to exempt idle connections from a queue limit (which,
> incidentally, is not even there)?
In my case, I do not want any requests to sit in HttpDestination.exchanges waiting for another, pending request to complete. If a request would have to wait for another request to complete before being started, then it should immediately fail, presumably with a RejectedExecutionException.

An idle connection can pick up the request right away, as can a connection that does not yet exist but can be opened promptly.

My proposal is to define HttpClient.maxRequestsQueuedPerDestination as the limit on the number of requests per destination that are permitted to wait for another request to complete.
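
To make that concrete, here is a rough sketch of the accounting I have in mind. This is illustrative Java only, not Jetty's actual HttpDestination code; the class and its fields are made up for the example.

import java.util.ArrayDeque;
import java.util.Queue;
import java.util.concurrent.RejectedExecutionException;

// Illustrative accounting only -- not Jetty's actual HttpDestination code.
class ProposedDestinationQueue<R> {
    private final int maxConnectionsPerDestination;
    private final int maxRequestsQueuedPerDestination;
    private final Queue<R> waiting = new ArrayDeque<>();
    private int active;

    ProposedDestinationQueue(int maxConnections, int maxQueued) {
        this.maxConnectionsPerDestination = maxConnections;
        this.maxRequestsQueuedPerDestination = maxQueued;
    }

    // Accepts the request if a connection slot is free, queues it if the
    // queue limit allows waiting, otherwise fails it immediately.
    synchronized boolean offer(R request) {
        if (active < maxConnectionsPerDestination) {
            // An idle connection, or one that can be opened promptly, takes it.
            active++;
            return true;
        }
        if (waiting.size() < maxRequestsQueuedPerDestination) {
            // Only requests that must wait for another request to complete
            // count against the queue limit.
            return waiting.offer(request);
        }
        // No free connection and no queue slot: fail right away.
        throw new RejectedExecutionException("max requests queued per destination exceeded");
    }

    // Called when an in-flight request completes; returns the next waiting
    // request to start on the freed connection, or null if none is waiting.
    synchronized R completed() {
        R next = waiting.poll();
        if (next == null)
            active--;
        return next;
    }
}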

Then I could set MaxRequestsQueuedPerDestination to zero. I can't think of a
use case that would require the current method of accounting.
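
For instance, with the setters HttpClient already exposes, I would expect to configure it along these lines (a sketch against the Jetty 9 client API; the limit of 64 and the URL are arbitrary, and I am assuming the rejection surfaces as a RejectedExecutionException, as described above):

import java.util.concurrent.ExecutionException;

import org.eclipse.jetty.client.HttpClient;
import org.eclipse.jetty.client.api.ContentResponse;

public class NoQueuingClientSketch {
    public static void main(String[] args) throws Exception {
        HttpClient client = new HttpClient();
        // Allow up to 64 requests in flight to any one destination...
        client.setMaxConnectionsPerDestination(64);
        // ...and, under the semantics proposed above, let none of them wait
        // behind another request: reject immediately instead.
        client.setMaxRequestsQueuedPerDestination(0);
        client.start();
        try {
            ContentResponse response = client.GET("http://example.com/");
            System.out.println(response.getStatus());
        } catch (ExecutionException x) {
            // Under load, a rejected request is expected to fail with a
            // RejectedExecutionException as the cause.
            x.getCause().printStackTrace();
        } finally {
            client.stop();
        }
    }
}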

>> I don't see how multiplexed connections affect it. Doesn't
>> MaxConnectionsPerDestination apply to virtual connections?
> There are no virtual connections, whatever you mean.
> There is one physical connection, so MaxConnectionsPerDestination is
> implicitly overridden to 1.
By "connection" I mean a unit of request concurrency. If five requests are simultaneously being communicated with a destination, then there are five connections to that destination. It does not matter if the underlying implementation uses five TCP connections or multiplexes them over one TCP connection.

I would expect that, with multiplexing protocols such as SPDY, HttpClient.maxConnectionsPerDestination would limit the number of requests simultaneously sent multiplexed over the TCP connection. One would not want to initiate an unbounded number of simultaneous requests.

But if one did, or if one wanted to apply a higher concurrency limit to multiplexed connections, then my proposal would still work--it would just mean the code would have to re-apply the MaxRequestsQueuedPerDestination limit when it finds out a destination doesn't support a multiplexed protocol.
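
As a sketch of what I mean (illustrative only, not Jetty code), the queue-limit check would be driven purely by whether a request has to wait, whatever the transport:

// Sketch only; not Jetty code. Here a "connection" is a unit of request
// concurrency: either a dedicated TCP connection, or a stream multiplexed
// over a single TCP connection for protocols such as SPDY.
final class DestinationConcurrency {
    // With or without multiplexing, the concurrency limit caps how many
    // requests may be in flight to a destination at once.
    static boolean wouldHaveToWait(int requestsInFlight, int maxConcurrentRequests) {
        return requestsInFlight >= maxConcurrentRequests;
    }

    // Under the proposal, only requests that would have to wait count against
    // maxRequestsQueuedPerDestination; if the destination turns out to allow a
    // different multiplexed concurrency limit, the same check is simply
    // re-applied with that limit instead.
    static boolean shouldReject(int requestsInFlight, int requestsWaiting,
                                int maxConcurrentRequests,
                                int maxRequestsQueuedPerDestination) {
        return wouldHaveToWait(requestsInFlight, maxConcurrentRequests)
                && requestsWaiting >= maxRequestsQueuedPerDestination;
    }
}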


