
Re: [jetty-users] Jetty LoadTests and no available selectors

Hi,

On 12.06.2017 at 19:11, Simone Bordet wrote:
Hi,

On Fri, Jun 9, 2017 at 2:43 PM, Simon Kulessa <simon.kulessa@xxxxxxxxx> wrote:
Hi,

we are running load tests against a (Jersey / async) REST service
running in an embedded Jetty server (9.3.6.v20151106).
We are using bootique (0.21) for startup and configuration.
The server runs under FreeBSD.

After some time, no requests are handled by the server any more. We can see
that requests pass through the acceptor,
but after that nothing happens.
When monitoring the server via JMX, we can see that after peak load there
are no 'selector threads' left.
Is that supposed to be normal? Why are no new selectors spawned?
In Jetty 9.3.x there are no more "selector" threads, as we have
changed the threading implementation.

Only worker threads remain, but those are disposed of once they reach
their idle timeout
(so the general mechanism does seem to work for workers, just not for
selectors).

As far as I understand, there is only one configurable ThreadPool (qtp),
which contains all the threads for acceptors, selectors and workers.
Is there a way to configure the server to always keep a fixed number
of selectors?
I'm guessing that for your case it won't change a thing.

I suspect that you're just sending more requests to the application
than it can handle given the thread pool configuration, and that the
client is so overwhelmed that it cannot read responses.
In short, you have a bad load test. It happens very commonly.

If you have a server thread pool of 10 threads, you can have at most
10 concurrent requests on the server, and if they block on the server,
it does not matter if you have more "selector" threads or not: even if
the server is able to parse the requests, these won't be handled
because there are no threads available.
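
For illustration, a minimal sketch (hypothetical, not the poster's bootique
setup) of how the shared pool bounds concurrency in embedded Jetty 9.3.x:

    import org.eclipse.jetty.server.Server;
    import org.eclipse.jetty.util.thread.QueuedThreadPool;

    public class BoundedPoolExample {
        public static void main(String[] args) throws Exception {
            // Accepting, selecting and request handling all draw from this
            // one pool, so maxThreads=10 caps concurrent request handling.
            QueuedThreadPool pool = new QueuedThreadPool(10, 10); // max, min
            pool.setIdleTimeout(60_000); // idle threads above min are reaped
            Server server = new Server(pool);
            server.start();
            server.join();
        }
    }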

As far as I can see, these threads are not blocked; basically all of them (before they are removed due to inactivity)
show thread dumps similar to this one:

"bootique-http-53" #53 prio=5 os_prio=0 tid=0x00007f4918033800 nid=0x4a5 waiting on condition [0x00007f48eeaf1000]
   java.lang.Thread.State: TIMED_WAITING (parking)
    at sun.misc.Unsafe.park(Native Method)
    - parking to wait for <0x00000000fb879de0> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
    at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    at org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:392)
    at org.eclipse.jetty.util.thread.QueuedThreadPool.idleJobPoll(QueuedThreadPool.java:546)
    at org.eclipse.jetty.util.thread.QueuedThreadPool.access$800(QueuedThreadPool.java:47)
    at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:609)
    at java.lang.Thread.run(Thread.java:745)

So you have to review whether your server is out of threads, and why
(typically the client is not reading fast enough).
The client is most often the bottleneck in load tests.
We are using a tsung master/client setup across multiple VMs to generate the load. Even if those have a problem, a new client started from a different machine should not be affected unless something on the server is blocking. The only thing I get is a client-side timeout.

I saw that the ServerConnector constructor lets you set the number of
acceptors and selectors.
But checking via JMX, that does not seem to be related to the number of
threads.
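
For reference, that constructor in a minimal embedded sketch (the counts
here are made-up examples):

    import org.eclipse.jetty.server.Server;
    import org.eclipse.jetty.server.ServerConnector;

    Server server = new Server();
    // 1 acceptor, 2 selectors; -1 would let Jetty derive defaults from the
    // core count. In 9.3.x these run as jobs on the shared QueuedThreadPool,
    // so they do not appear as distinct "selector" threads.
    ServerConnector connector = new ServerConnector(server, 1, 2);
    connector.setPort(8080);
    server.addConnector(connector);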

Another question would be:
What is the default behaviour supposed to be in case the application is
overloaded and a request cannot be processed?
(This relates to a job queue of fixed length configured via
maxQueuedRequests.)
Is the request just silently discarded, or is a specific response sent
back to the client?
A vanilla server will try to serve requests as much as it can, and
when it cannot because it ran out of threads, it will try to apply
backpressure to the client.
Jetty has a number of features to handle this case: from the "low
resources" monitor module, to the denial of service filter
(DoSFilter), to quality of service filter (QoSFilter) - see
http://www.eclipse.org/jetty/documentation/current/advanced-extras.html.
I gave the QoSFilter a try, but it doesn't solve the problem.
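
For reference, a minimal sketch of wiring the QoSFilter into embedded Jetty
(assuming a ServletContextHandler; this is not the poster's actual
configuration):

    import java.util.EnumSet;
    import javax.servlet.DispatcherType;
    import org.eclipse.jetty.servlet.FilterHolder;
    import org.eclipse.jetty.servlet.ServletContextHandler;
    import org.eclipse.jetty.servlets.QoSFilter;

    ServletContextHandler context = new ServletContextHandler();
    FilterHolder qos = new FilterHolder(QoSFilter.class);
    // Requests beyond this limit are suspended and resumed later,
    // rather than being rejected outright.
    qos.setInitParameter("maxRequests", "50");
    context.addFilter(qos, "/*", EnumSet.of(DispatcherType.REQUEST));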

On a side note:
At least one time, all workers were blocked trying to acquire a semaphore (at places where no timeout for the acquisition is involved), but it looks like none were available any more. But I suppose that is a different topic.
There is nothing baked in by default because almost everybody will
want a different behavior (drop silently, respond with 408, with 503,
throttle, etc.).
Common behaviors are coded into "external" components such as Servlet Filters.
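
If the bounded queue is configured at the Jetty level (assuming bootique's
maxQueuedRequests maps onto the pool's job queue, which I have not
verified), a sketch would look like:

    import org.eclipse.jetty.util.BlockingArrayQueue;
    import org.eclipse.jetty.util.thread.QueuedThreadPool;

    // maxThreads=200, minThreads=8, idleTimeout=60s, job queue capped at 128.
    // Once the queue is full, QueuedThreadPool rejects further jobs with a
    // RejectedExecutionException instead of queueing them indefinitely.
    QueuedThreadPool pool = new QueuedThreadPool(200, 8, 60_000,
            new BlockingArrayQueue<>(128));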

For load testing you typically don't want to throttle the server; you
want to control the client and make sure it can handle the load it
generates.
Most of the time, you will need multiple clients before Jetty even
breaks a sweat.

--
Best regards
Simon Kulessa
Senior Developer
KOBIL Systems GmbH
Pfortenring 11
67547 Worms/Germany
Phone: +49 (0)6241 3004-0
Fax:   +49 (0)6241 3004-80

Email: simon.kulessa@xxxxxxxxx
Web: www.kobil.com

KOBIL Systems GmbH, Pfortenring 11, 67547 Worms
Registered office and registry court: Mainz  |  HRB 10856
Managing Director: Ismet Koyun  |  Registered office: Worms



