Re: [jetty-dev] Help explaining why FutureCallback.get would be showing up as a hotspot in profiling?
Hi Greg,
Thanks for the response - I'm sorry I don't have more concrete details
for you, but if you have follow-up questions, I'm happy to try to hunt
down whatever details would be useful.
> what are you using to generate your load? ideally I'd like to try to
> generate the same load myself and try to reproduce.
Load is currently coming from a (comparatively) very small number of
production requests - on the order of 1-30 per minute (the slowness
occurs seemingly irrespective of rate).
> Also some more
> info about what the request are like would be good. You say they do a
> scan of an index, so I'm guessing that is some 10s of ms of busy CPU
> time and then they write the response. How big is the response? How
> is it written? in lots of little writes or on one big write?
Responses are very large, dynamically generated JSON on the order of
1-10 MB+ (see my recent request for info on sizing output buffers).
The response is written by Solr 3.6.1 using, IIRC, lots of little
writes. Here's a call graph showing the Solr departure points:
---snip---
org.apache.solr.response.JSONWriter:writeStr:629 (method time = 0 ms, total time = 3889 ms)
  org.apache.solr.common.util.FastWriter:write:55 (method time = 0 ms, total time = 3889 ms)
    org.eclipse.jetty.server.HttpOutput:write:174 (method time = 0 ms, total time = 3889 ms)
      org.eclipse.jetty.server.HttpChannel:write:608 (method time = 0 ms, total time = 3889 ms)
        org.eclipse.jetty.server.HttpConnection:send:410 (method time = 0 ms, total time = 3889 ms)
          org.eclipse.jetty.server.HttpConnection:send:372 (method time = 0 ms, total time = 3889 ms)
            org.eclipse.jetty.server.HttpConnection:blockingWrite:419 (method time = 0 ms, total time = 3889 ms)
              org.eclipse.jetty.util.FutureCallback:get:115 (method time = 3889 ms, total time = 3889 ms)
---snip---
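In case it helps interpret the trace: essentially all the time is attributed to FutureCallback.get, which is wall-clock blocking rather than CPU - the thread is parked until the socket write completes. Here's a minimal sketch of that pattern (my own simplification using a CountDownLatch, not Jetty's actual class):

```java
import java.util.concurrent.CountDownLatch;

// Simplified stand-in for the FutureCallback pattern: the writing thread
// parks in get() until the I/O layer calls succeeded(). Time a profiler
// attributes to get() is time spent waiting for the write, not CPU time.
public class BlockingCallbackSketch {
    private final CountDownLatch done = new CountDownLatch(1);
    private volatile Throwable failure;

    void succeeded()         { done.countDown(); }
    void failed(Throwable t) { failure = t; done.countDown(); }

    void get() throws Exception {
        done.await();                       // <-- the "hotspot": blocked, 0% CPU
        if (failure != null) throw new Exception(failure);
    }

    public static void main(String[] args) throws Exception {
        BlockingCallbackSketch cb = new BlockingCallbackSketch();
        Thread io = new Thread(() -> {
            try { Thread.sleep(50); } catch (InterruptedException ignored) {}
            cb.succeeded();                 // simulated async write completion
        });
        io.start();
        cb.get();                           // blocks until succeeded() fires
        System.out.println("done: write completed");
    }
}
```

So the profile is telling us where the thread waits, not what is burning CPU - which points at the downstream write (socket/client) rather than the JSON generation itself.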
Here's the partial code trace from the call graph above:
(aside: interesting comment at the top of writeStr:
  // it might be more efficient to use a stringbuilder or write substrings
  // if writing chars to the stream is slow.
)
629: writer.write(ch);  <-- writes one char at a time???
 55: sink.write(buf,0,pos);
174: _channel.write(_aggregate, false);
608: _transport.send(content, complete);
410: send(null, content, lastContent);
     (aside: interesting TODO here: // TODO This is always blocking!
      One of the important use-cases is to be able to write large static
      content without a thread - this isn't static content, but it is large...)
372: blockingWrite(chunk,content);
417-419: FutureCallback<Void> callback = new FutureCallback<>();
         getEndPoint().write(null, callback, bytes);
         callback.get();
115: _done.await();
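For what it's worth, I don't think the one-char-at-a-time writes in writeStr are themselves the problem, since FastWriter buffers in front of the sink - a quick sketch showing how a buffer collapses per-char writes into a handful of bulk writes (CountingWriter is a made-up stand-in for the downstream sink, and I'm using java.io.BufferedWriter rather than Solr's FastWriter):

```java
import java.io.BufferedWriter;
import java.io.IOException;
import java.io.Writer;

// Demonstrates that per-char writes into a buffered Writer only reach
// the underlying sink when the buffer fills or is flushed.
public class PerCharWriteSketch {
    static class CountingWriter extends Writer {
        int calls = 0;  // how many bulk writes reached the sink
        @Override public void write(char[] cbuf, int off, int len) { calls++; }
        @Override public void flush() {}
        @Override public void close() {}
    }

    public static void main(String[] args) throws IOException {
        CountingWriter sink = new CountingWriter();
        Writer w = new BufferedWriter(sink, 8192);
        for (int i = 0; i < 10_000; i++) {
            w.write('x');               // one char at a time, like writeStr
        }
        w.flush();
        // 10,000 single-char writes collapse into 2 bulk writes:
        // one when the 8192-char buffer fills, one on the final flush
        System.out.println("sink calls: " + sink.calls);
    }
}
```

So the cost shows up not in the per-char loop but in the downstream flushes, which is consistent with all the time landing in Jetty's blocking write.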
The response is chunked because I don't believe its size is known in
advance. (I believe this is also why the request log shows just '-'
for the response size.)
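If the '-' in the log ever matters, one option (sketched below, assuming the standard Servlet API - I haven't tried this in our Solr setup) would be to fully buffer the generated JSON so the container can send a Content-Length instead of chunking, at the cost of holding the whole 1-10 MB response in memory:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStreamWriter;
import java.io.Writer;
import java.nio.charset.StandardCharsets;

// Buffer the whole response body first; once its byte length is known,
// the container can emit a Content-Length header and skip chunked
// transfer encoding (and the access log can record a real size).
public class ContentLengthSketch {
    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        Writer w = new OutputStreamWriter(buf, StandardCharsets.UTF_8);
        w.write("{\"response\":{\"docs\":[]}}");  // stand-in for generated JSON
        w.flush();
        byte[] body = buf.toByteArray();
        // In a servlet, something like:
        //   response.setContentLength(body.length);  // known length -> no chunking
        //   response.getOutputStream().write(body);
        System.out.println("Content-Length: " + body.length);
    }
}
```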
Let me know if there is any other info I can provide or settings I
should try tweaking.
Thanks again!
Aaron