Re: [jetty-dev] NIO inefficiency?
Hi,
On Fri, Aug 9, 2013 at 8:36 AM, Viktor Szathmary <phraktle@xxxxxxxxx> wrote:
> Hi,
>
> While profiling a high-traffic server, I have noticed the following hotspot
> and wanted to get your analysis:
>
> 5.2% - sun.nio.ch.NativeThread.current
> 3.2% - sun.nio.ch.SocketChannelImpl.write
> 3.2% - org.eclipse.jetty.io.AbstractEndPoint.write
> 2.4% - org.eclipse.jetty.server.HttpConnection$CommitCallback.process
> 2.4% - org.eclipse.jetty.util.IteratingCallback.iterate
> 2.4% - org.eclipse.jetty.server.HttpConnection.send
> 2.4% - org.eclipse.jetty.server.HttpChannel.sendResponse
> 2.4% - org.eclipse.jetty.server.HttpChannel.write
> 1.5% - org.eclipse.jetty.server.HttpOutput.flush
> 1.3% - java.util.zip.DeflaterOutputStream.flush
> ...
> 0.9% - org.eclipse.jetty.server.HttpOutput.close
> ...
> 0.8% - org.eclipse.jetty.server.HttpConnection$ContentCallback.process
> ...
> 2.0% - sun.nio.ch.SocketChannelImpl.read
> ...
>
>
> So basically the read/write/flush operations seem to be burning a whole lot
> of time in NativeThread.current (= pthread_self on Linux), which doesn't
> seem reasonable. Do you have any thoughts on why this is the case and if
> there are alternative approaches to reduce / avoid this? (e.g. combining
> the last flush and close?)
What profiler did you use, and in what mode (sampling, instrumentation, etc.)?
Indeed this seems unreasonable behaviour, but oftentimes profilers
introduce measurement artifacts that are not reported and that show
up like the case above: a method that should not even be listed is
suddenly taking a considerable amount of time.
It won't be the first time I've seen (or chased) a profiler artifact :)
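As a side note on the last question (combining the final flush and close): the cost of an explicit flush() right before close() can be demonstrated outside Jetty. This is a minimal standalone sketch, assuming a DeflaterOutputStream opened in sync-flush mode (the class name FlushCloseDemo is made up for illustration); it is not Jetty's actual code path, but it shows why flush()-then-close() emits more data (and an extra write) than close() alone, since close() already flushes all pending compressed data via finish():

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.Deflater;
import java.util.zip.DeflaterOutputStream;
import java.util.zip.InflaterInputStream;

public class FlushCloseDemo {

    // Compress input; optionally call flush() right before close().
    static byte[] compress(byte[] input, boolean flushBeforeClose) throws IOException {
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        // syncFlush=true (Java 7+): flush() emits a zlib SYNC_FLUSH block,
        // which is what makes an explicit flush visible on the wire
        DeflaterOutputStream out =
                new DeflaterOutputStream(sink, new Deflater(), 512, true);
        out.write(input);
        if (flushBeforeClose) {
            out.flush(); // emits an extra sync-flush block: an extra write
        }
        out.close();     // finish() flushes all pending data anyway
        return sink.toByteArray();
    }

    static byte[] decompress(byte[] data) throws IOException {
        InflaterInputStream in =
                new InflaterInputStream(new ByteArrayInputStream(data));
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        byte[] buf = new byte[512];
        int n;
        while ((n = in.read(buf)) > 0) {
            sink.write(buf, 0, n);
        }
        return sink.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        byte[] payload = "hello jetty".getBytes("UTF-8");
        byte[] closeOnly = compress(payload, false);
        byte[] flushThenClose = compress(payload, true);
        // Both are valid streams, but the flushed one carries extra bytes.
        System.out.println("close() only:         " + closeOnly.length + " bytes");
        System.out.println("flush() then close(): " + flushThenClose.length + " bytes");
    }
}
```

The same reasoning applies one layer down: each forced flush of HttpOutput becomes at least one extra SocketChannel.write(), so deferring the last flush into close() saves a syscall per response.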
--
Simone Bordet
----
http://cometd.org
http://webtide.com
http://intalio.com
Developer advice, training, services and support
from the Jetty & CometD experts.
Intalio, the modern way to build business applications.