[jetty-users] Performance of POST operation

Hello all, hello Greg and Simone; it's been a long time since we met at a certain red company on a certain three-digit project ;)

I'm responsible for a high-performance HTTP "stream" service. Its first incarnations used Jetty (I think the latest 7 or maybe the first 8 versions at the time), but we eventually moved to our own socket implementation, recently even a Java 7 AsyncSocket version. The reasons were a concurrency bug (which was reported and solved in record time), plus an issue caused by Jetty's synchronous writing nature (or my wrong understanding of it) combined with some bad clients that tended to keep the connection open but read no data, making the native socket write block forever with no way to interrupt it.

A few days ago I noticed the Servlet 3.1 API with its async extensions and the Jetty 9 support for it, and because I'm getting too old and tired to keep reinventing the wheel, I'm back looking at Jetty 9 :)

Unfortunately I'm getting some unexpected results, and I'd appreciate help confirming whether they are indeed expected, or whether I'm getting confused by the documentation for the multiple versions of Jetty out there and doing something completely wrong.

The code is quite simple. It's an embedded setup: a simple main() with a (Jetty) Server, a ServletHandler with my own servlet, and just this code:

    protected void doPost(final HttpServletRequest request, final HttpServletResponse response)
            throws ServletException, IOException {
        final ServletInputStream input = request.getInputStream();
        response.setStatus(HttpServletResponse.SC_OK);
        response.flushBuffer();

        // Drain the request body, counting the bytes read.
        int read, total = 0;
        final byte[] buf = new byte[8 * 1024];
        while ((read = input.read(buf)) > 0)
            total += read;

        System.out.println("doPost done total=" + total);
    }

BTW, I also have a full request.startAsync() + ServletInputStream.setReadListener() async version that works wonderfully, and the performance is exactly the same. 
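For reference, that async version is essentially the sketch below (simplified for the mail; the servlet class name and the exact point where the status is set are just illustrative):

    import java.io.IOException;

    import javax.servlet.AsyncContext;
    import javax.servlet.ReadListener;
    import javax.servlet.ServletException;
    import javax.servlet.ServletInputStream;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    public class AsyncDrainServlet extends HttpServlet {

        @Override
        protected void doPost(final HttpServletRequest request, final HttpServletResponse response)
                throws ServletException, IOException {
            final AsyncContext async = request.startAsync();
            final ServletInputStream input = request.getInputStream();
            final byte[] buf = new byte[8 * 1024];

            input.setReadListener(new ReadListener() {
                private long total;

                @Override
                public void onDataAvailable() throws IOException {
                    // isReady() == true guarantees the next read() will not block.
                    while (input.isReady()) {
                        final int read = input.read(buf);
                        if (read < 0)
                            return; // EOF, onAllDataRead() will fire next.
                        total += read;
                    }
                }

                @Override
                public void onAllDataRead() throws IOException {
                    System.out.println("async doPost done total=" + total);
                    response.setStatus(HttpServletResponse.SC_OK);
                    async.complete();
                }

                @Override
                public void onError(final Throwable failure) {
                    failure.printStackTrace();
                    async.complete();
                }
            });
        }
    }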

The POST data is a pre-generated request with HTTP chunks of about 64K, each containing a lot of JSON lines.
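In case it helps to reproduce, the load can be approximated with a plain chunked-streaming client along these lines (the URL, the JSON line and the iteration count are placeholders, not our actual generator):

    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.nio.charset.StandardCharsets;

    public class ChunkedPostClient {

        public static void main(final String[] args) throws Exception {
            final URL url = new URL("http://localhost:8080/stream");
            final HttpURLConnection connection = (HttpURLConnection) url.openConnection();
            connection.setDoOutput(true);
            connection.setRequestMethod("POST");
            // Use chunked transfer encoding with ~64K chunks instead of Content-Length.
            connection.setChunkedStreamingMode(64 * 1024);

            final byte[] line = "{\"key\":\"value\"}\n".getBytes(StandardCharsets.UTF_8);
            try (OutputStream out = connection.getOutputStream()) {
                for (int i = 0; i < 1_000_000; i++)
                    out.write(line);
            }
            System.out.println("response: " + connection.getResponseCode());
        }
    }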

With our own code, which includes the HTTP header processing and the de-chunking, we get about 600MB/sec on the same hardware (without chunking, i.e. setting Content-Length, it goes above 1GB/sec; with line splitting, about 300-400MB/sec, depending on the JSON line size).

(The absolute values are not relevant, only the relative ones.)

With this Jetty code I can barely reach 100MB/sec!

In our code we use and abuse ByteBuffers to push the data through all the layers; only at the last stage do we memcpy the data into a new ByteBuffer for further processing. So I wonder if there is any trick in Jetty to do something similar, or if this is the expected performance.
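Just to make that pattern concrete, this is its general shape (an illustrative sketch only, not our actual code; the class and method names are made up):

    import java.nio.ByteBuffer;

    // Intermediate layers only ever see views of the original buffer; the one
    // and only copy happens at the final consumer.
    public final class ByteBufferHandoff {

        // Upstream: wrap the bytes read from the socket; no copy.
        static ByteBuffer wrap(final byte[] readBuffer, final int length) {
            return ByteBuffer.wrap(readBuffer, 0, length);
        }

        // Intermediate layer (e.g. de-chunking): narrow the view; still no copy.
        static ByteBuffer dechunk(final ByteBuffer input, final int payloadOffset, final int payloadLength) {
            final ByteBuffer view = input.duplicate();
            view.position(payloadOffset).limit(payloadOffset + payloadLength);
            return view.slice();
        }

        // Last stage: the single memcpy into a buffer the consumer owns.
        static ByteBuffer copyForProcessing(final ByteBuffer payload) {
            final ByteBuffer copy = ByteBuffer.allocate(payload.remaining());
            copy.put(payload).flip();
            return copy;
        }
    }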

What am I doing wrong?

Thanks in advance,
Bruno D. Rodrigues


