Re: [jetty-users] High CPU Load in Jetty 7.6.2 / JDK 1.7

Hi,

we need some more details, see inline.

On Fri, Apr 27, 2012 at 18:34, Devon Lazarus <Devon.Lazarus@xxxxxxxxx> wrote:
>
>> Just to confirm: are you using SSL ?
>
> We are not using SSL for communication between the client and server. We are behind a load balancer that provides SSL termination for us. We are, however, using Jetty NIO inside the application to talk to third-party APIs over SSL.
>

So a plaintext HTTP request arrives at Jetty, and you use Jetty's
HttpClient to talk to a remote server via SSL ?
Basically you're a kind of proxy ?
Or have you actually crafted your own proxying directly on top of
Jetty's NIO internals ?
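
For reference, here is the kind of usage I have in mind -- a minimal
sketch of Jetty's non-blocking HttpClient talking SSL to a remote
server (Jetty 7.6 API; the URL and response handling are placeholders):

import org.eclipse.jetty.client.ContentExchange;
import org.eclipse.jetty.client.HttpClient;

HttpClient client = new HttpClient();
// NIO connector: one selector shared by all exchanges
client.setConnectorType(HttpClient.CONNECTOR_SELECT_CHANNEL);
client.start();

ContentExchange exchange = new ContentExchange(true);
exchange.setURL("https://third-party.example.com/api"); // placeholder
client.send(exchange);
// Blocks the calling thread, not the selector thread
exchange.waitForDone();
String response = exchange.getResponseContent();

client.stop();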


>> In 7.6.3 you still have the selector and the acceptor threads that consume most CPU ?
>
> Unfortunately, yes. 99% of the (real) CPU time as viewed through VisualVM using the JMX connectivity enabled within Jetty (start.ini, jetty-jmx.xml).
>
> Is it possible that VisualVM is misreporting this? Our application actually makes a call out to a third-party API, acting as a proxy for our clients. If that thread is stuck in wait(), would that show up as CPU time for the selector/acceptor rather than as thread.wait()? That would also be a problem for us, but a different one from the one we're researching now.
>

Unlikely.

When you say 99% you mean: 99% of 1 core, or 99% across all cores ?
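
If you want to double-check what VisualVM reports, you can sample
per-thread CPU time yourself via the standard java.lang.management
API. A rough sketch (run it in-process, or adapt it to a remote JMX
connection; the 5 s interval is arbitrary):

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

ThreadMXBean mbean = ManagementFactory.getThreadMXBean();
while (true)
{
    for (ThreadInfo info : mbean.dumpAllThreads(false, false))
    {
        long cpuNanos = mbean.getThreadCpuTime(info.getThreadId());
        // A thread parked in wait() accrues no CPU time, so this
        // separates truly busy threads from blocked ones
        if (cpuNanos > 0)
            System.out.printf("%s: %.1fs CPU, state=%s%n",
                info.getThreadName(), cpuNanos / 1e9, info.getThreadState());
    }
    Thread.sleep(5000);
}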

>> Is Jetty behind an HTTP 1.0 proxy ?
>> I ask because having the acceptor do so much work seems to indicate that connections
>> are opened at a high rate, which may explain the whole thing (HTTP 1.0 requires
>> closing the connection after each request, which is a performance hit).
>
> No, it's an HTTP 1.1 proxy. However...
>
> 1) We are opening tons of connections at a very high rate. We operate a similar application written in C# .NET that processes over 100MM API calls a week. We have attempted a port to Java and are trying to simulate the same load. Comparing performance between the two, we are at about 0.5 Java to .NET -- meaning the Java implementation delivers about half the throughput of the .NET one. That isn't right, so we're trying to figure out what has happened.
>

Opening and closing connections at a high rate directly impacts the
activity of the selector; depending on what "high rate" means here,
the behavior of the selector thread may be normal.
What rates are we talking about ? 100 M in a week is about 165
requests/s, which Jetty should be able to do while snoring (I've seen
Jetty doing 45k+ requests/s).
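
If connection churn does turn out to be the issue, it is also worth
checking how the connector is configured. A sketch of the relevant
knobs (Jetty 7 API; the values are placeholders to experiment with):

import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.server.nio.SelectChannelConnector;

Server server = new Server();
SelectChannelConnector connector = new SelectChannelConnector();
connector.setPort(8080);
// More acceptor threads spread the accept() work across cores
connector.setAcceptors(2);
// A larger OS accept backlog absorbs bursts of new connections
connector.setAcceptQueueSize(1024);
server.addConnector(connector);
server.start();
server.join();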

> 2) Our "web client" is actually an embedded firmware product. Although it makes an HTTP 1.1 request, it closes the connection itself immediately after the last byte of the response has been received. Although this isn't semantically correct from an RFC point of view, the .NET IIS6 implementation does not suffer from the same CPU load issue.
>

IIUC, you have 2 sources of connection work: from the remote firmware
client to Jetty (open + request + response + close), and from Jetty to
the 3rd party API, where you use either Jetty's HttpClient or Jetty's
NIO internals directly, right ?

Which selector thread is the one high in CPU ? The client-to-Jetty
one, or the Jetty-to-3rdParty one ?

Simon
-- 
http://cometd.org
http://intalio.com
http://bordet.blogspot.com
----
Finally, no matter how good the architecture and design are,
to deliver bug-free software with optimal performance and reliability,
the implementation technique must be flawless.   Victoria Livschitz

