Re: [jetty-users] Avoid rapid threadpool growth to max when started under load

Hi,

On Thu, Oct 28, 2021 at 2:39 PM Travis Spencer <travis@xxxxxxxxx> wrote:
>
> Hi, Jetty list!
>
> We are embedding Jetty in our own product, and are currently on
> 9.4.43.v20210629. By default, the minimum number of Jetty request
> threads is set to 8 and the maximum is 500. For some requests, Nashorn
> JavaScript procedures are executed.

Just a reminder that Nashorn has been removed from OpenJDK, starting
with Java 15.
While Nashorn is maintained as a separate project, I removed it from
CometD and replaced it with GraalJS.

> We recently upgraded from Java 8 to 11 (but were using the G1GC even
> on 8). In this new Java version, Nashorn's optimistic typing is
> enabled by default. This seems to place a lock that all Jetty request
> threads (that use Nashorn) get blocked on. Often, a new Jetty node is
> spun up dynamically when the cluster is under heavy load. In such
> situations, the node gets bombarded immediately. Because the request
> threads get blocked directly as Nashorn performs this type
> optimization, the number of request threads grows very quickly to its
> max of 500. This causes so much context switching and overhead that
> the node takes longer to arrive at a steady state. For some
> deployments, this has been fixable by setting the maximum thread count
> to 100. This avoided the thrashing on startup (for systems where 100
> threads is bearable during upfront optimization of Nashorn types), but
> may cause it to be underutilized later when this initial logjam is
> resolved.
>
> For this reason, we're wondering if there is a straightforward way to
> set a lower maximum thread count during the initial couple of minutes after
> starting. Right now, we're not doing anything exotic with the Jetty
> threading or scheduling, but are aware that there's a lot possible.
> For this reason, we wanted to ask if someone can point us in the
> general direction (or if this kinda thing just isn't
> doable/recommended).

The simpler approach would be to call
QueuedThreadPool.setMaxThreads(newValue) after your initial burst.
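
For example, a minimal sketch, assuming you construct the QueuedThreadPool
yourself in embedded code and keep a reference to it (the two-minute delay
and the thread counts below are just placeholder values, not a recommendation):

    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    import org.eclipse.jetty.server.Server;
    import org.eclipse.jetty.util.thread.QueuedThreadPool;

    public class StartupThreadCap
    {
        public static void main(String[] args) throws Exception
        {
            // Start with a low cap while Nashorn's optimistic typing warms up.
            QueuedThreadPool threadPool = new QueuedThreadPool(100, 8);
            Server server = new Server(threadPool);
            // ... connectors, handlers, etc. ...
            server.start();

            // Raise the cap once the assumed warm-up window has passed.
            ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
            scheduler.schedule(() ->
            {
                threadPool.setMaxThreads(500);
                scheduler.shutdown();
            }, 2, TimeUnit.MINUTES);

            server.join();
        }
    }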

Other than that, Jetty's QoSFilter is designed to allow only N
concurrent requests, suspending the excess requests and resuming them
later, once the requests that were allowed to proceed have completed.
However, N is currently a static value, so you would need to borrow the
idea implemented in QoSFilter and implement something similar but more
dynamic.
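
As a starting point, the static setup looks roughly like this, assuming an
embedded ServletContextHandler and the jetty-servlets dependency on the
classpath (the init-param values are only examples):

    import java.util.EnumSet;
    import javax.servlet.DispatcherType;

    import org.eclipse.jetty.servlet.FilterHolder;
    import org.eclipse.jetty.servlet.ServletContextHandler;
    import org.eclipse.jetty.servlets.QoSFilter;

    public class QoSSetup
    {
        public static void configure(ServletContextHandler context)
        {
            // Allow only N requests to be processed concurrently;
            // excess requests are suspended and resumed later.
            FilterHolder qos = new FilterHolder(QoSFilter.class);
            qos.setInitParameter("maxRequests", "50");
            qos.setInitParameter("suspendMs", "30000");
            context.addFilter(qos, "/*", EnumSet.of(DispatcherType.REQUEST));
        }
    }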

Another idea is to "prime" Nashorn by making some "fake" requests to
the node before it gets exposed to traffic, so that when the traffic
hits, Nashorn is already warmed up.
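
For example, a simple warm-up loop, assuming Java 11's HttpClient and a
hypothetical endpoint that exercises the Nashorn-backed procedures:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class NashornWarmup
    {
        public static void main(String[] args) throws Exception
        {
            HttpClient client = HttpClient.newHttpClient();
            // Hypothetical endpoint that runs the Nashorn-backed procedures.
            HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8080/scripted-endpoint"))
                .GET()
                .build();

            // A modest number of sequential requests so the optimistic-typing
            // compilation happens before the node joins the cluster.
            for (int i = 0; i < 20; i++)
                client.send(request, HttpResponse.BodyHandlers.discarding());
        }
    }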

-- 
Simone Bordet
----
http://cometd.org
http://webtide.com
Developer advice, training, services and support
from the Jetty & CometD experts.

