Re: [jetty-dev] Clarification on Request Timeouts

Hi Simone,

In our case, we use Jetty 9.3.x and the HttpClientTransportOverHTTP2 transport to perform HTTP/2 communication. In this case we open just 1 TCP connection (connection pooling for HTTP/2 is only available in Jetty 9.4.x). Jetty starts rejecting requests once their number exceeds maxRequestsQueuedPerDestination. For a use case where we send bulk requests in a tight loop, we must never exceed this number, so to avoid that we size the Semaphore on maxRequestsQueuedPerDestination. Obviously, if the server has high bandwidth and is processing at a high rate, we would effectively not be queuing up requests, but processing them as they are made. Just to point out, our use case executes around 10,000 HTTP/2 requests per second.
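To illustrate the rejection behavior: a sketch (not Jetty code) of a destination that runs one request at a time over a single connection and queues a bounded number more, analogous to maxRequestsQueuedPerDestination; the sizes used are illustrative, not Jetty defaults:

```java
import java.util.concurrent.*;

public class QueueLimitDemo {
    // Returns true if submitting `n` requests overflows a destination that
    // runs 1 request at a time and queues at most `maxQueued` more.
    static boolean overflows(int n, int maxQueued) throws InterruptedException {
        ThreadPoolExecutor destination = new ThreadPoolExecutor(
                1, 1, 0, TimeUnit.SECONDS, new ArrayBlockingQueue<>(maxQueued));
        CountDownLatch hold = new CountDownLatch(1); // keeps the first request "in flight"
        boolean rejected = false;
        try {
            for (int i = 0; i < n; i++) {
                destination.submit(() -> { hold.await(); return null; });
            }
        } catch (RejectedExecutionException e) {
            rejected = true; // analogous to Jetty rejecting past the queue limit
        } finally {
            hold.countDown();
            destination.shutdown();
            destination.awaitTermination(5, TimeUnit.SECONDS);
        }
        return rejected;
    }
}
```

A semaphore with permits equal to the queue bound, acquired before each submit, turns this rejection into blocking on the caller's side.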

Also, the reason we treat sync and async requests differently is that for async requests we just send and forget. We have application-specific listeners hooked to Jetty listeners that invoke the callback functionality once the request is processed. For sync requests, the caller blocks waiting for a response, and we cannot have the caller wait indefinitely. For this reason, we impose a timeout so that the response is processed and returned within the specified time.
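A minimal sketch of the two modes; the method names and the fake in-memory transport are illustrative assumptions, not Jetty API (a real client would delegate to Jetty's HttpClient):

```java
import java.util.concurrent.*;
import java.util.function.Consumer;

public class RequestModes {
    // Async: send and forget; a listener (like our Jetty-hooked listeners)
    // invokes the callback when the exchange completes.
    static void sendAsync(String request, Consumer<String> onComplete) {
        CompletableFuture.supplyAsync(() -> "response:" + request)
                .thenAccept(onComplete);
    }

    // Sync: the caller blocks, but never longer than the imposed timeout.
    static String sendSync(String request, long timeoutMillis)
            throws InterruptedException, ExecutionException, TimeoutException {
        return CompletableFuture.supplyAsync(() -> "response:" + request)
                .get(timeoutMillis, TimeUnit.MILLISECONDS);
    }
}
```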

Thanks
Neha

On Thu, Jun 1, 2017 at 12:46 PM, Simone Bordet <sbordet@xxxxxxxxxxx> wrote:
Hi,

On Thu, Jun 1, 2017 at 7:32 PM, Neha Munjal <neha.munjal3@xxxxxxxxx> wrote:
> Hi Simone,
>
> I agree. The way we control this is via a Semaphore on the application side
> (client side) that has permits equal to the maxRequestsQueuedPerDestination
> setting.

But unfortunately this does not apply enough backpressure.
Ideally, you want to queue 0 requests, and always be on the verge of
queuing 1, just to be immediately sent over the network.
Depending on the number of connections you have to the server, you
always want at least 1 request outstanding per connection.
So your semaphore should be set on maxConnectionsPerDestination, and
you acquire a permit on send, and release it on complete.
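A sketch of this scheme, using only java.util.concurrent (the fake transport and the value passed for maxConnectionsPerDestination are illustrative, not Jetty's):

```java
import java.util.concurrent.*;

public class ConnectionGate {
    final Semaphore permits;

    // Permits sized on maxConnectionsPerDestination, not on the queue bound.
    ConnectionGate(int maxConnectionsPerDestination) {
        this.permits = new Semaphore(maxConnectionsPerDestination);
    }

    CompletableFuture<String> send(String request) throws InterruptedException {
        permits.acquire(); // blocks the caller once every connection is busy
        return CompletableFuture.supplyAsync(() -> "response:" + request)
                .whenComplete((r, t) -> permits.release()); // like a Response.CompleteListener
    }

    int availablePermits() {
        return permits.availablePermits();
    }
}
```

With this sizing, at most one request is outstanding per connection, so nothing sits in the destination queue long enough to hit a queued-request timeout.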

> The send(..) API is only invoked in case this Semaphore has permits
> available, and any requests exceeding this parameter are blocked on the
> application side.
> Also, we are making use of Response.CompleteListener for async requests
> which releases the acquired permit once the request response conversation
> has completed.
>
> For sync requests we do not have any response Listener. We of course make
> use of the Semaphore to block any requests exceeding this parameter.
> But, still, the moment this semaphore gives way to a request and we call the
> send() API, it implies that the request is queued up. And if the server is
> really busy processing other requests, there is a possibility that some of
> these requests, that have a timeout imposed, may timeout.
>
> Our use case sends bulk requests in an asynchronous mode, with which I do
> not see any issues as there is no timeout associated with these requests.
> Just that there might be intermittent synchronous requests sent to the same
> client with a timeout imposed, that may timeout in case the server is really
> slow in processing the requests.

The fact that you don't impose a timeout on async requests does not
mean that they are sent over the network.
They will, like the sync ones, wait on the queue until there is a
connection available, possibly a long time.
Why do you treat these 2 kinds of requests (async and sync)
differently with respect to the timeout?

--
Simone Bordet
----
http://cometd.org
http://webtide.com
Developer advice, training, services and support
from the Jetty & CometD experts.
_______________________________________________
jetty-dev mailing list
jetty-dev@xxxxxxxxxxx
To change your delivery options, retrieve your password, or unsubscribe from this list, visit
https://dev.eclipse.org/mailman/listinfo/jetty-dev

