[wtp-dev] Performance tests



Hi all,

I have a few questions regarding performance tests. Are performance tests and
non-performance tests going to be run on the same build/test machine?
Performance tests are more sensitive to the environment they run in. Their
results are only accurate if the initial environment is reasonably the
same every time. Running performance tests right after a build or after
non-performance tests can affect their results. Ideally, rebooting the
system before running each test would give us more accurate numbers;
however, that is probably not possible given time limits. Is this being
taken into account?
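
To make the noise concrete, here is a rough, self-contained sketch (plain
Java; doWorkUnderTest and all numbers are made up, not actual WTP test
code) of how run-to-run variance shows up when the environment isn't
stable:

    import java.util.Arrays;

    // Hedged sketch: time the same hypothetical operation several times
    // and look at the spread. A wide min/max gap means the environment
    // (a preceding build, other tests, cold caches) is adding noise.
    public class VarianceSketch {
        public static void main(String[] args) {
            final int runs = 5;
            long[] elapsed = new long[runs];
            for (int i = 0; i < runs; i++) {
                long start = System.currentTimeMillis();
                doWorkUnderTest(); // stand-in for the operation being measured
                elapsed[i] = System.currentTimeMillis() - start;
            }
            Arrays.sort(elapsed);
            System.out.println("min=" + elapsed[0] + "ms, max="
                    + elapsed[runs - 1] + "ms");
        }

        private static void doWorkUnderTest() {
            // placeholder workload; real tests would exercise WTP code
            double sink = 0;
            for (int i = 0; i < 5000000; i++) sink += Math.sqrt(i);
            if (sink < 0) System.out.println(sink); // keep work from being optimized away
        }
    }

A reboot (or at least a warm-up run on an otherwise quiet machine) is
what narrows that min/max gap.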

The criteria for deciding whether performance tests pass or fail are
different from those for non-performance tests. Are there any documents
that spell out what's considered a performance failure (e.g., an x% or
x-second regression) and what the next course of action is if one happens
(e.g., send an email, open a bug)? Also, in some cases a performance
regression can be justified, for example by a very compelling feature. Is
there any process for justifying changes that cause performance
regressions? Have these questions been answered or documented somewhere?
If not, I can help sort out these issues.
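
For the failure criterion, I'm imagining something as simple as the
following sketch (the 10% threshold and all names are hypothetical, just
to show the shape of the rule):

    // Hedged sketch of a pass/fail rule: compare the current measurement
    // against a stored baseline using a percentage threshold.
    public class RegressionCheck {
        private static final double MAX_REGRESSION = 0.10; // 10%, hypothetical

        static boolean isRegression(long baselineMillis, long currentMillis) {
            return currentMillis > baselineMillis * (1.0 + MAX_REGRESSION);
        }

        public static void main(String[] args) {
            long baseline = 1000; // ms, from a previously accepted build (made up)
            long current = 1150;  // ms, from this build (made up)
            if (isRegression(baseline, current)) {
                // the "next course of action" question: mail the list? open a bug?
                System.out.println("regression: " + current + "ms vs "
                        + baseline + "ms baseline");
            }
        }
    }

Whatever the actual rule is, having it written down (threshold, baseline
source, and follow-up action) is the part I'd like to see documented.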

Thanks,

Jeff


