Re: [jetty-users] Flow control in AsyncMiddleManServlet

Hi Simone,

On Tue, Feb 19, 2019 at 5:37 PM Simone Bordet <sbordet@xxxxxxxxxxx> wrote:
Hi,

On Tue, Feb 19, 2019 at 4:48 PM Raoul Duke <rduke496@xxxxxxxxx> wrote:
> What happens in the case where there are (say) 100 threads and 100 concurrent clients all sending furiously (at LAN speed) for very large uploads?
> It looks like the thread will spin in a "while" loop while there is still more data to read. So if that is correct, couldn't all 100 threads be occupied with those
> long-lived and very fast uploads, such that concurrent client 101 is frozen out of getting its upload payload shunted?

You have 100 clients _trying_ to upload at network speed.
Let's assume we have exactly 100 threads available in the proxy to
handle them concurrently.
Each thread will read a chunk and pass it to the (possibly slow)
write towards the server. The write is non-blocking, so it may take a
long time but it won't block any thread.
If the write is synchronous, the thread will finish the write and go
back to read another chunk, and so on.
If the write is asynchronous, the thread will return to the thread
pool and will be able to handle another client.

Chances are that in your setup the 100 clients will eventually be
slowed down by TCP congestion, and therefore won't be able to upload
at network speed.
This is because the proxy is not reading fast enough, since it has to
perform slow writes.

The moment one I/O operation on the proxy goes asynchronous (i.e. a
read returns 0 bytes or a write writes fewer bytes than expected),
the thread will go back to the thread pool and potentially be
available for another client.

In the perfect case, all 100 threads will be busy reading and writing
so the 101st client will be a job queued in the thread pool waiting
for a thread to be freed.
In the real case, I expect that some I/O operation or some scheduling
imbalance (I assume you don't have 100 hardware cores on the server)
will make one thread available to serve the 101st client before all
the previous 100 are finished.
E.g. client #13 finishes first so its thread will be able to serve
client #101 while the other 99 are still running.

For HTTP/1.1, your knob is the thread pool max size: the larger it is,
the more concurrent clients you should be able to handle.
The smaller it is, the more you are queueing on the proxy and
therefore pushing back on the clients (because you won't read from them).
If it is too small, the queueing on the proxy may be so large that a
client may timeout before the proxy has the chance to read from it.
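
For reference, with embedded Jetty that knob is typically the server's
QueuedThreadPool; a minimal sketch (the sizes below are placeholders,
not recommendations):

import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.util.thread.QueuedThreadPool;

public class ProxyServerMain {
    public static void main(String[] args) throws Exception {
        // Placeholder sizes: 200 max threads, 8 min threads.
        QueuedThreadPool threadPool = new QueuedThreadPool(200, 8);
        threadPool.setName("proxy");

        Server server = new Server(threadPool);
        // ... add connectors and deploy the AsyncMiddleManServlet as usual ...
        server.start();
        server.join();
    }
}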

Alternatively, you can configure the proxy with the QoSFilter, which
limits the number of requests that can be served concurrently.
Or you can use AcceptRateLimit or ConnectionLimit to throttle things
at the TCP level.
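
A minimal sketch of wiring those up in embedded Jetty (Jetty 9.4-era
API, placeholder limits; double-check the constructor signatures
against your version):

import java.util.EnumSet;

import javax.servlet.DispatcherType;

import org.eclipse.jetty.server.ConnectionLimit;
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.servlet.FilterHolder;
import org.eclipse.jetty.servlet.ServletContextHandler;
import org.eclipse.jetty.servlets.QoSFilter;

public class ThrottledProxyMain {
    public static void main(String[] args) throws Exception {
        Server server = new Server(8080);

        ServletContextHandler context = new ServletContextHandler(server, "/");
        // ... deploy the AsyncMiddleManServlet in this context ...

        // QoSFilter: suspend requests beyond this concurrency limit and
        // resume them as slots free up (placeholder value).
        FilterHolder qos = new FilterHolder(QoSFilter.class);
        qos.setInitParameter("maxRequests", "50");
        context.addFilter(qos, "/*", EnumSet.of(DispatcherType.REQUEST));

        // ConnectionLimit: cap the number of accepted TCP connections
        // (placeholder value). AcceptRateLimit is added the same way.
        server.addBean(new ConnectionLimit(500, server));

        server.start();
        server.join();
    }
}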

Thanks for the superb write-up. It really gives a lot of context I was missing, and avenues to explore. I will do some further experiments/analysis, try to qualify my findings a bit better, and follow up again if I have more questions/observations.

One follow-up question. It occurred to me that using HTTP/2 on the upstream connection may be an option, and I wanted to ask whether that would be in any way helpful in my situation. For example: would it be reasonable to expect less contention for the (conceptually discussed) 100 upstream threads if requests could be multiplexed over existing backend connections, rather than each connection being head-of-line blocked on a particular PUT completing before the connection can be used by another PUT, as in HTTP/1.1?
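
For concreteness, what I had in mind is roughly the following (untested sketch against the Jetty 9.4 API; I realize the HttpClient constructors differ in newer versions):

import org.eclipse.jetty.client.HttpClient;
import org.eclipse.jetty.http2.client.HTTP2Client;
import org.eclipse.jetty.http2.client.http.HttpClientTransportOverHTTP2;
import org.eclipse.jetty.proxy.AsyncMiddleManServlet;

public class Http2UpstreamProxy extends AsyncMiddleManServlet {
    @Override
    protected HttpClient newHttpClient() {
        // Use an HTTP/2 transport towards the server so that many proxied
        // requests can be multiplexed over a single upstream connection.
        HTTP2Client h2Client = new HTTP2Client();
        return new HttpClient(new HttpClientTransportOverHTTP2(h2Client), null);
    }
}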

RD