[jetty-users] Sizing thread pools with many connectors; sharing with HttpClient?

I'm nearing delivery of my new awesome Jetty-based Proxy Thing 2.0 and have
deployed it to one of our test environments.

Due to braindead vendor load balancers and some internal testing needs, the
Jetty instance in this environment needs to have 6 (!) connectors.
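
For context, the wiring looks roughly like this (ports are made up; the pool
size is real and becomes relevant below):

    import org.eclipse.jetty.server.Server;
    import org.eclipse.jetty.server.ServerConnector;
    import org.eclipse.jetty.util.thread.QueuedThreadPool;

    QueuedThreadPool threadPool = new QueuedThreadPool(20);
    Server server = new Server(threadPool);
    for (int port : new int[]{8080, 8081, 8082, 8443, 8444, 8445}) {
        ServerConnector connector = new ServerConnector(server);  // default acceptors/selectors
        connector.setPort(port);
        server.addConnector(connector);
    }
    server.start();  // throws Exception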

At first, it ran fine -- but after serving (light) load for a few minutes, the
server would start to hang more and more connections indefinitely, until they
eventually hit the idle timeout.  Eventually the whole thing wedges and only
the most trivial requests finish.

After quite a bit of debugging, I realized that each connector starts up its
own ManagedSelector and ReservedThreadExecutor.  Each of these "reserves"
threads from the pool, in the sense that it blocks pool threads waiting for
work that needs to be handled immediately.

I'd started with 20 threads, figuring that would be enough for a test environment with
only a couple of requests per second.
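
To make the math concrete (assuming an 8-core box, and assuming I've read the
defaults right -- corrections welcome):

    6 connectors x 1 selector thread             =  6 threads
    6 connectors x max(1, 8 CPUs / 8) reserved   =  6 threads
    6 connectors x 1 acceptor thread (my guess)  =  6 threads
                                           total = 18 of 20 threads pinned

which leaves about two threads for all of the actual request handling.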

Thus, very few (or no) pool threads are ever available to do the actual
normal-priority work.  However, the EWYK scheduler still seems to make partial
progress (as long as tasks do not hit the queue, they keep going) -- but
anything that does get queued hangs forever.  The server ends up in a very
confusing, partially working state.

Reading through the docs, all I could find was this one-liner:
"Configure with goal of limiting memory usage maximum available. Typically this is >50 and <500"

No mention of this pretty big pitfall.  If the tunable were strictly about
performance, that might be just fine -- but the fact that there are liveness
concerns makes me think we could do better.

At the least, a documentation tip -- "each connector reserves 1 ManagedSelector
+ 1/8 * #CPU reserved threads by default" -- would be welcome, but even better
would be if the QueuedThreadPool could somehow assert that the configuration is
not writing reserved-thread checks that it can't cash.
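
Something like this hypothetical startup check is the shape of what I mean
(assertThreadBudget is not real Jetty API, and the per-connector numbers are
my own guesses from above):

    import org.eclipse.jetty.server.Server;
    import org.eclipse.jetty.util.thread.QueuedThreadPool;

    // Hypothetical: fail fast if connectors pin most of the pool.
    static void assertThreadBudget(Server server, int headroom) {
        // assuming the server was built with a QueuedThreadPool
        QueuedThreadPool pool = (QueuedThreadPool) server.getThreadPool();
        int cpus = Runtime.getRuntime().availableProcessors();
        // my reading of the defaults: ~1 selector + ~cpus/8 reserved + ~1 acceptor
        int pinnedPerConnector = 1 + Math.max(1, cpus / 8) + 1;
        int pinned = server.getConnectors().length * pinnedPerConnector;
        if (pool.getMaxThreads() - pinned < headroom)
            throw new IllegalStateException(
                "Connectors pin ~" + pinned + " of " + pool.getMaxThreads()
                + " pool threads; fewer than " + headroom + " left for requests");
    }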

WDYT?


Relatedly, my proxy reaches out to many backends.  Each backend may have its
own configuration for some high-level tunables like timeouts and maximum number
of connections, so each one gets its own HttpClient.
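
Concretely, per backend I do roughly this (the numbers are hypothetical; the
knobs are the ones I mean):

    import org.eclipse.jetty.client.HttpClient;

    HttpClient client = new HttpClient();
    client.setMaxConnectionsPerDestination(64);  // per-backend connection cap
    client.setConnectTimeout(2_000);             // ms; tuned per backend
    client.setIdleTimeout(30_000);               // ms; tuned per backend
    client.start();                              // throws Exception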

This ends up duplicating a fair number of normally "global" resources -- threads, mostly.

Is it recommended practice to share these?  I.e., create one Executor and
Scheduler shared among the HttpClient instances, or go even further and just
take them from the Server itself?
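
Something like this, I mean (assuming setExecutor/setScheduler before start()
is the right way to wire in shared instances):

    import java.util.concurrent.Executor;
    import org.eclipse.jetty.client.HttpClient;
    import org.eclipse.jetty.util.thread.QueuedThreadPool;
    import org.eclipse.jetty.util.thread.ScheduledExecutorScheduler;
    import org.eclipse.jetty.util.thread.Scheduler;

    // shared across all HttpClient instances -- or, more aggressively,
    // sharedExecutor = server.getThreadPool() to reuse the Server's pool
    Executor sharedExecutor = new QueuedThreadPool(50);
    Scheduler sharedScheduler = new ScheduledExecutorScheduler();

    HttpClient client = new HttpClient();
    client.setExecutor(sharedExecutor);
    client.setScheduler(sharedScheduler);
    client.start();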

Seems like it might have some benefits -- why incur a handoff from a server
thread to a client thread when you're just ferrying data back and forth? --
however, given my adventures with sizing just the server thread pool, I worry
I might get myself into trouble.  But if it's just a matter of sizing it
appropriately...


Thanks for any guidance!
Steven
