Re: [jetty-users] Jetty 9 not using resources properly - N/W issue in local loopback interface


On Sat, Oct 19, 2013 at 2:05 AM, Bruno D. Rodrigues <bruno.rodrigues@xxxxxxxxx> wrote:

On 18/10/2013, at 20:51, Bruno D. Rodrigues <bruno.rodrigues@xxxxxxxxx> wrote:

>
> On 18/10/2013, at 20:26, Simone Bordet <sbordet@xxxxxxxxxxx> wrote:
>
>> Hi,
>>
>> On Fri, Oct 18, 2013 at 9:06 PM, dinesh kumar <dinesh12b@xxxxxxxxx> wrote:
>>> Hi All,
>>> I am trying to run Jetty 9 on Ubuntu 12.10 (32 bit). The JVM I am using is
>>> JDK 1.7.0_40. I have set up a REST service on my server that uses RestLib.
>>> The REST service is a POST method that just receives the data, does no
>>> processing with it, and responds with a success.
>>>
>>> I want to see what is the maximum load the Jetty 9 server will take with the
>>> given resources. I have an Intel i5 processor box with 8 GB memory. I have
>>> set up JMeter to test this REST service against localhost. I know this is
>>> not advisable, but I would like to know this number (just out of curiosity).
>>>
>>> When I run JMeter to test this POST method with 1 MB of payload data in
>>> the body, I am getting a throughput of around 20 (for 100 users).
>>>
>>> I measured the bandwidth using iperf to begin with:
>>>
>>> iperf -c 127.0.0.1 -p 8080
>>> ------------------------------------------------------------
>>> Client connecting to 127.0.0.1, TCP port 8080
>>>
>>> TCP window size:  167 KByte (default)
>>>
>>> [  3] local 127.0.0.1 port 44130 connected with 127.0.0.1 port 8080
>>>
>>> [ ID] Interval       Transfer     Bandwidth
>>>
>>> [  3]  0.0-10.0 sec   196 MBytes   165 Mbits/sec
>>>
>>> The number of 165 Mbits/sec seems ridiculously small to me, but that's one
>>> observation.
>>
>> You're not connecting iperf to Jetty, are you?
>>
>> On my 4.5-year-old laptop, iperf on localhost gives me 16.2 Gbits/s.
>>
>> --
>> Simone Bordet
>
>
> I'd have a look at whatever RestLib is doing.
>
> My own tests, using not a REST POST but a never-ending HTTP chunked POST request (so passing through the whole Jetty HTTP header and chunking stack, plus my own line/message split, plus a clone of the message for later use), measure as much throughput as my simpler raw NIO or AIO versions with zero-copy ByteBuffers - meaning Jetty is now almost as optimised as a raw socket!
>
> My own values on a MacBook Pro (4x i7) are 3 GB/sec (24 Gbit/sec) for NIO/AIO with zero processing (just reading bytes into null), 900 MB/sec (7.2 Gbit/sec) for my code and Jetty (reading bytes into null, but passing through the HTTP headers and the chunking), down to 600 MB/sec (4.8 Gbit/sec) for the whole split + clone of the bytes. This is for a single request, consuming about 125% CPU (1 and 1/4 cores).
>
> Now you mention putting a 1MB file, which is a completely different kind of test. I've also done this test before, again both with Jetty and my own code, and what I noticed is that if you start a new connection for each operation, no matter how hard Jetty (or my own code) tries to accept the connection and process the HTTP headers ASAP, the raw performance is way lower than with a continuous PUT stream.
>
> Changing my test case to PUT those small files but using HTTP keep-alive (or even pipelining, which I discovered the ab that comes with MacOS does, not on purpose but due to a bug), the raw performance comes back to the raw-stream values.
>
> It would be nice to know exactly what that test is doing. Is it opening one connection and sending multiple 8MB POSTs over it?
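
For reference, the raw "read bytes into null" NIO baseline described in the quoted message could look roughly like the sketch below. This is not the original benchmark code; it speaks plain TCP rather than HTTP (so it would be driven by a raw socket writer, not curl or ab), and the port and buffer size are arbitrary choices:

import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;

public class RawNioSink {

    public static void main(final String[] args) throws Exception {
        // Plain TCP sink: accept connections one at a time and discard whatever arrives.
        final ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(8081));
        final ByteBuffer buffer = ByteBuffer.allocateDirect(64 * 1024);
        long count = 0;
        final long start = System.currentTimeMillis();
        while (true) {
            try (SocketChannel client = server.accept()) {
                int read;
                while ((read = client.read(buffer)) != -1) {
                    count += read;
                    buffer.clear(); // drop the data, only the byte count matters
                }
            }
            // Cumulative throughput since startup, in MB/sec and Mbit/sec.
            final float speed = (float) count * 1000 / (System.currentTimeMillis() - start) / 1024 / 1024;
            System.out.println(String.format("%.1f MB/sec %.1f Mb/sec", speed, speed * 8));
        }
    }
}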

I'm crazy about performance tests, so I did a simple servlet doPost that reads the data and counts the bytes, prepared an 8MB file, and launched ab and curl at it.

The worst-case scenario - while true ; do curl -X "POST" -T /tmp/8M 'http://127.0.0.1:8080/' ; done - gives me 350 MB/sec (2800 Mbit/sec).

Using ab, even without parallelism - ab -c 1 -n 1000 -p /tmp/8M http://127.0.0.1:8080/ - I get a weird value in the ab result (1 128 127.43 kb/s sent), but in my counters I see 1051.5 MB/sec (8412.2 Mbit/sec), so I guess ab's "kb" is really "KB".

With -c 100 and -n 10000 I get the same values.

I can share the sample code if you want.

On 19/10/2013, at 19:21, dinesh kumar <dinesh12b@xxxxxxxxx> wrote:

It would be great to have the script. Please share the code.

Thanks,
Dinesh



Please note that this is the simplest version; it could be optimised even further.

import java.io.IOException;
import java.io.OutputStream;
import java.util.concurrent.atomic.AtomicLong;

import javax.servlet.ServletException;
import javax.servlet.ServletInputStream;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.eclipse.jetty.server.Handler;
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.server.handler.HandlerCollection;
import org.eclipse.jetty.servlet.ServletContextHandler;

public class J extends HttpServlet {

    private static long start;
    private static final AtomicLong count = new AtomicLong();

    public static void main(final String[] args) throws Exception {
        // Embedded Jetty on port 8080, no security and no sessions, serving this servlet on /*.
        final Server server = new Server(8080);
        final ServletContextHandler servletHandler = new ServletContextHandler( //
                ServletContextHandler.NO_SECURITY | ServletContextHandler.NO_SESSIONS);
        servletHandler.addServlet(J.class, "/*");
        final HandlerCollection logHandler = new HandlerCollection();
        logHandler.setHandlers(new Handler[] {servletHandler});
        server.setHandler(logHandler);
        server.start();
        server.join();
    }

    @Override
    protected void doPost(final HttpServletRequest request, final HttpServletResponse response) throws ServletException,
            IOException {
        // Remember when the first request arrived, so the rate below is averaged over the whole run.
        if (J.start == 0)
            J.start = System.currentTimeMillis();
        // Read and discard the request body, counting only how many bytes went through.
        final ServletInputStream is = request.getInputStream();
        final byte[] b = new byte[1024];
        int read;
        while (( read = is.read(b) ) > 0)
            J.count.addAndGet(read);
        is.close();

        // Empty success response.
        final OutputStream os = response.getOutputStream();
        os.close();
        // Cumulative throughput since the first request, in MB/sec and Mbit/sec.
        final float speed = (float) J.count.get() * 1000 / ( System.currentTimeMillis() - J.start ) / 1024 / 1024;
        System.out.println(String.format("%.1f MB/sec %.1f Mb/sec", speed, speed * 8));
    }
}
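
Not from the original messages, but as an illustration of the keep-alive point above, a minimal client along these lines could drive the servlet by POSTing the same /tmp/8M payload repeatedly over one reused connection (HttpURLConnection keeps the socket alive as long as each response is fully consumed); the class name and request count are arbitrary:

import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Paths;

public class PostLoop {

    public static void main(final String[] args) throws Exception {
        // Same payload file that was used with ab and curl above.
        final byte[] payload = Files.readAllBytes(Paths.get("/tmp/8M"));
        final URL url = new URL("http://127.0.0.1:8080/");
        for (int i = 0; i < 1000; i++) {
            final HttpURLConnection connection = (HttpURLConnection) url.openConnection();
            connection.setDoOutput(true);
            connection.setRequestMethod("POST");
            // Stream the body with a known length instead of buffering it all in memory.
            connection.setFixedLengthStreamingMode(payload.length);
            try (OutputStream os = connection.getOutputStream()) {
                os.write(payload);
            }
            // Drain and close the response so the underlying connection can be reused (keep-alive).
            try (InputStream is = connection.getInputStream()) {
                while (is.read() != -1) {
                    // ignore the (empty) response body
                }
            }
        }
    }
}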


