Re: [wtp-dev] Performance tests



Naci,

Thanks for the response. I want to get some performance tests created for
WTP, but am wondering how these testcases should be packaged. I have a
couple of ideas, but want to hear from other people.

Approach 1. Follow what the Platform project is doing: create a
"performance" source folder in existing test plugins, package the
performance testcases into their own test suite, and create a "performance"
ant target in test.xml. Performance tests are then run exactly like
non-performance tests, except that you need to pass the "performance"
target to the build/test scripts. I think this approach is a little messy
because test plugins that consist solely of non-performance testcases are
required to specify a dummy/empty "performance" target:

<target name="performance">
</target>

Likewise, pure performance plugins need to specify a dummy/empty
non-performance target (a missing target can result in build breaks)...
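In that scheme, every test plugin's test.xml ends up carrying both entry
points. A minimal sketch of what that might look like (the non-performance
target name "run" and the overall layout are assumptions; only the empty
"performance" target is taken from the example above):

```xml
<project name="testsuite" default="run" basedir=".">

  <!-- Non-performance tests: does real work in an ordinary test plugin,
       but must exist (even if empty) in a pure performance plugin. -->
  <target name="run">
    <!-- launch the plugin's JUnit suite here -->
  </target>

  <!-- Performance tests: does real work in a performance plugin,
       dummy/empty everywhere else. -->
  <target name="performance">
  </target>

</project>
```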

Approach 2. I suggest we make a better separation between performance tests
and non-performance tests by adding a "performance" folder to the CVS
structure and putting performance plugins in that folder:

/home/webtools
     /wst
          /components
               /server
                    /features
                    /plugins
                    /tests
                     /performance

Then create features/map files/build configuration files for these
performance plugins. Any suggestions/comments/ideas?
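Hooking such performance plugins into the build would then presumably just
mean adding releng map entries for them, in the usual
type@id=tag,cvsRoot[,password[,path]] format. A sketch of one entry (the
plugin id, tag, and repository path below are made up for illustration):

```
plugin@org.eclipse.wst.server.tests.performance=v200410310000,:pserver:anonymous@dev.eclipse.org:/home/webtools,,wst/components/server/performance/org.eclipse.wst.server.tests.performance
```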

Thanks,

Jeff



                                                                           
Naci Dai <naci.dai@eteration.com>
Sent by: wtp-dev-admin@eclipse.org
To: wtp-dev@xxxxxxxxxxx
Subject: Re: [wtp-dev] Performance tests
Date: 10/31/2004 11:48 PM
Please respond to: wtp-dev





>Hi all,
>
>I've a few questions regarding performance tests. Are performance tests
>and non-performance tests going to be run on the same build/test machine?

Currently, yes.  However, there are no performance tests committed yet.


>Performance tests are more sensitive to the environment they run in. Their
>results are only accurate if the initial environment is reasonably the
>same every time. Running performance tests right after a build or after
>non-performance tests can affect their results. Ideally, rebooting the
>system before running each test would give us more accurate numbers;
>however, this is probably not possible given time limits. Is this being
>taken into account?

No, but I would be interested in hearing more.  The build machine is an
IBM xSeries Xeon with 2 GB RAM running Red Hat Linux.  It seems to do well
with builds (i.e., utilization is still very low).  It sounds too radical
to me to reboot a machine just to get a "reset"; Linux should do a good
job managing these issues.  However, the build machine is rebooted only a
few times a month, so it is likely that there are some issues for
performance testing.  Unfortunately, we (eteration) will not be able to
dedicate a separate machine for performance tests only.
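One cheap way to gauge how much run-to-run noise the shared machine
introduces, short of rebooting, is to repeat a fixed workload and look at
the spread of the timings. A self-contained sketch (the workload and run
count are arbitrary stand-ins for a real test suite):

```java
import java.util.Arrays;

public class NoiseProbe {
    // Placeholder workload; in practice this would launch the real suite.
    static void workload() {
        long acc = 0;
        for (int i = 0; i < 5_000_000; i++) acc += i;
        if (acc == 42) System.out.println(); // keep the loop from being optimized away
    }

    // Arithmetic mean of the samples.
    static double mean(double[] xs) {
        return Arrays.stream(xs).average().orElse(0);
    }

    // Population standard deviation of the samples.
    static double stddev(double[] xs) {
        double m = mean(xs);
        return Math.sqrt(Arrays.stream(xs).map(x -> (x - m) * (x - m)).average().orElse(0));
    }

    public static void main(String[] args) {
        int runs = 5;
        double[] elapsedMs = new double[runs];
        for (int r = 0; r < runs; r++) {
            long start = System.nanoTime();
            workload();
            elapsedMs[r] = (System.nanoTime() - start) / 1e6;
        }
        // A large stddev relative to the mean suggests the environment is
        // too noisy for the timings to be trusted.
        System.out.printf("mean=%.1fms stddev=%.1fms%n",
                mean(elapsedMs), stddev(elapsedMs));
    }
}
```

If the spread stays small across builds, the shared machine may be good
enough without a dedicated box.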



>The standard by which performance tests pass or fail is different from
>non-performance tests. Are there any documents that spell out what's
>considered a performance failure (e.g., an x% or x-second regression) and
>what the next course of action is if this happens (e.g., send an email,
>open a bug)? Also, in some cases a performance regression can be
>justified, for example by a very compelling feature. Is there any process
>for justifying changes that cause performance regressions? Have these
>questions been answered or documented somewhere? If not, I can help sort
>out these issues.


Currently, there are no documents beyond those of base Eclipse.  But please
do this.  It will be very valuable.
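For example, such a document might define failure as a regression of more
than some fixed percentage against the previous build's baseline. A
minimal sketch of that criterion (the 10% threshold and all names are made
up for illustration; they are not taken from any existing Eclipse policy):

```java
public class RegressionCheck {
    // Hypothetical policy: fail if the current run is more than 10% slower
    // than the recorded baseline.
    static final double THRESHOLD_PERCENT = 10.0;

    // Percentage change of current vs. baseline (positive = slower).
    static double regressionPercent(double baselineMs, double currentMs) {
        return (currentMs - baselineMs) / baselineMs * 100.0;
    }

    static boolean isRegression(double baselineMs, double currentMs) {
        return regressionPercent(baselineMs, currentMs) > THRESHOLD_PERCENT;
    }

    public static void main(String[] args) {
        System.out.println(isRegression(1000, 1050)); // 5% slower  -> false
        System.out.println(isRegression(1000, 1200)); // 20% slower -> true
    }
}
```

The same document could then name the follow-up action (open a bug, mail
the list) and the justification process for accepted regressions.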


>Thanks,
>
>Jeff
>

Naci Dai,
Managing Director

eteration a.s.
Inonu cad. Sumer sok. Zitas D1-15
Kozyatagi, Istanbul 81090
+90 (532) 573 7783 (cell)
+90 (216) 361 5434 (phone)
+90 (216) 361 2034 (fax)
http://www.eteration.com
mailto:nacidai@xxxxxxx
mailto:naci@xxxxxxxxxxxxx



_______________________________________________
wtp-dev mailing list
wtp-dev@xxxxxxxxxxx
http://dev.eclipse.org/mailman/listinfo/wtp-dev



