
Re: [jetty-users] Jetty vs ModPerl : Unexpected results on a 24 cores server

Nicolas, one thing you would definitely want to change in your test scenarios is to use one acceptor per (virtual) CPU core. Using more than that number could cause unexpected results, including degraded performance. You might want to try re-running your benchmark and see if it makes any difference.
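
For reference, with the embedded Jetty 7 API that would look roughly like the
sketch below (untested; the port is a placeholder, and the equivalent can also
be set in jetty.xml):

import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.server.nio.SelectChannelConnector;

public class AcceptorTuning {
    public static void main(String[] args) throws Exception {
        Server server = new Server();

        SelectChannelConnector connector = new SelectChannelConnector();
        connector.setPort(8080); // placeholder port
        // One acceptor per (virtual) core, as suggested above, and no more
        connector.setAcceptors(Runtime.getRuntime().availableProcessors());

        server.addConnector(connector);
        server.start();
        server.join();
    }
}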

On Tue, Feb 1, 2011 at 9:06 PM, Chad La Joie <lajoie@xxxxxxxxx> wrote:
The Jetty devs can probably give specifics about Jetty, but JVMs,
especially HotSpot, actually have a fair number of internal locks that
make scaling beyond 12-16 processors fairly inefficient.  If you have an
app with a large amount of garbage collection the extra cores can help
with that, but for the most part you'd want to run multiple VMs.

In addition, and this is where Greg or Jesse or someone would need to
chime in, it's very possible that Jetty is tuned much more for overall
throughput (i.e., tuned to handle more connections, but perhaps each
slightly slower).  Certainly the load you reported would be a data point
suggesting this is the case.  So, if this were true, you'd be able to
handle many more simultaneous connections with Jetty than with
Apache+mod_perl.

On 2/1/11 7:51 PM, Nicolas Guillaumin wrote:
> Hi all,
>
> I'm trying to benchmark the performance of a web app, comparing a Perl
> CGI implementation and a Java/Spring implementation.
> For each request the application basically reads a config file, forks a
> process, reads its stdout as XML, parses it, and returns it transformed
> into HTML to the browser.
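>
> In rough Java terms each request does something like the sketch below
> (simplified illustration only, not the real code; the tool path and the
> stylesheet name are made up, and the config-file read is omitted):
>
> import java.io.File;
> import javax.servlet.http.HttpServletResponse;
> import javax.xml.parsers.DocumentBuilderFactory;
> import javax.xml.transform.Transformer;
> import javax.xml.transform.TransformerFactory;
> import javax.xml.transform.dom.DOMSource;
> import javax.xml.transform.stream.StreamResult;
> import javax.xml.transform.stream.StreamSource;
> import org.w3c.dom.Document;
>
> void handle(HttpServletResponse response) throws Exception {
>     // Fork the external process and capture its stdout
>     Process p = new ProcessBuilder("/opt/app/backend-tool").start();
>
>     // Parse the process output as XML
>     Document xml = DocumentBuilderFactory.newInstance()
>             .newDocumentBuilder().parse(p.getInputStream());
>
>     // Transform the XML into HTML with an XSLT stylesheet and return it
>     Transformer t = TransformerFactory.newInstance()
>             .newTransformer(new StreamSource(new File("to-html.xsl")));
>     response.setContentType("text/html");
>     t.transform(new DOMSource(xml), new StreamResult(response.getWriter()));
> }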
>
> The Perl version is run under Apache 2.2 with ModPerl::PerlRun.
> The Java version is run with Jetty 7.2.2 and a 1.6 JRE, with -Xms128m
> -Xmx1024m flags.
> Test platforms are running Linux CentOS 5.5 64bit.
>
> The benchmark is run with the "apachebench" tool, for 1000 requests and
> with various concurrency levels (1, 2, 4, 8, 16, 32, 64, 128).
> Results show the mean response time in milliseconds.
>
> When benchmarking on an 8-core machine I get the expected results: Jetty
> is faster than Perl and scales better:
> Concurrency   Response time (Perl)     Response time (Java)
> 1             150.070                  33.303
> 2             144.838                  31.131
> 4             149.410                  43.468
> 8             240.889                  74.856
> 16            474.270                  144.408
> 32            986.855                  307.052
> 64            2099.110                 600.310
> 128           4205.816                 1192.65
>
> However, when benchmarking on a 24-core machine, Perl scales better
> past the 16 concurrent requests mark.
> Interestingly I also tried to use Apache and mod_proxy_balancer as a
> load balancer in front of 4 Jetty instances, and with this setup Java
> scales better:
>
> Concurrency  Response time (Perl)  Response time (Java)  Response time (Apache + 4x Jetty)
> 1            207.515               40.372               40.562
> 2            207.097               40.015               42.031
> 4            205.543               61.728               41.77
> 8            211.422               107.032              48.167
> 16           222.341               216.419              71.996
> 32           335.424               456.043              129.596
> 64           695.801               945.73               283.36
> 128          1201.633              1932.592             518.138
>
> As far as I know there are no specific locks on any resource (no database
> involved), and with the Java app the config file is read once and kept
> in memory (whereas the Perl implementation re-reads it on each request).
> Looking at CPU activity while running the Perl benchmark shows about
> 80-90% load on each of the 24 cores, versus only 10-15% load on every
> core for the Jetty benchmark.
>
> I tried raising the "maxThreads" of the pool to 256, and even the
> "Acceptor" param of the connector to 64, but it doesn't significantly
> affect performance.
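>
> (In embedded form that tuning is roughly equivalent to the sketch below;
> illustrative only, assuming a Jetty 7 Server "server" and a connector
> "connector" already created:)
>
> QueuedThreadPool pool = new QueuedThreadPool(); // org.eclipse.jetty.util.thread
> pool.setMaxThreads(256);
> server.setThreadPool(pool);
>
> connector.setAcceptors(64); // acceptor threads on the connector
>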
> The Apache configuration is as follows:
> <IfModule prefork.c>
> StartServers       8
> MinSpareServers    5
> MaxSpareServers   20
> ServerLimit      256
> MaxClients       256
> MaxRequestsPerChild  4000
> </IfModule>
>
> I would expect the Java version to continue to scale better
> independently of the number of cores, which seems to be the case when
> running 4x Jetty + a load balancer on the same box.
>
> Am I missing something in the Jetty config, or in my setup, that could
> explain this?
>
> One possible explanation could be that the OS spreads the load across
> all the cores better with Apache because there are multiple "httpd"
> processes, versus a single "java" process for Jetty.
> That could explain why running 4x Jetty is faster (4 OS processes
> instead of a single one).
>
> Any ideas?
>
> Thanks,
>
> Nicolas

--
Chad La Joie
http://itumi.biz
trusted identities, delivered
_______________________________________________
jetty-users mailing list
jetty-users@xxxxxxxxxxx
https://dev.eclipse.org/mailman/listinfo/jetty-users

