Thanks for the answer Mickael.
I will report my findings. For context, I'm looking into this in
order to make it easier to author performance tests, so that they
are easier to contribute and run. I would also like to see whether
this approach could be used for the platform.
Pascal
On 25/02/2014 2:48 PM, Mickael Istria wrote:
On 02/25/2014 04:33 PM, Pascal
Rapicault wrote:
Hi,
I'm exploring how to measure the "perceived performance" of
certain actions in Eclipse, such as the time to open a file, the
time to switch between editors, etc. Given that the scenarios I
care about represent user-triggered behaviour, I was thinking that
I could use SWTBot to drive those tests.
Do you think using SWTBot for this is a good idea and feasible, or
would SWTBot introduce too much "noise" for my measurements to be
reliable?
I've thought about it and I believe SWTBot is a great fit for
that. Indeed, AFAIK, optimizing perceived performance starts with
creating stories that are close to actual usage scenarios, and
SWTBot is good at that.
In theory there is no impact on the application: SWTBot starts a
new thread to drive the test, and this thread mostly spends its
time looking up widgets, which does not affect the application's
behaviour or performance in a noticeable way.
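To make the idea concrete, here is a minimal sketch of the timing side of such a test. The harness below is generic plain Java, not part of SWTBot's API; in a real test the Runnable would wrap hypothetical SWTBot-driven actions (e.g. clicking a menu item and waiting for the editor to open), and the warm-up runs plus median help smooth out JIT and GC noise:

```java
import java.util.Arrays;

/**
 * Minimal wall-clock timing harness for UI-driven performance tests.
 * In a real SWTBot test, the Runnable would wrap the UI scenario
 * (menu clicks, editor opening, etc.) driven from SWTBot's test thread.
 */
public class PerceivedPerfHarness {

    /** Runs warm-up iterations, then returns the median of the timed runs in milliseconds. */
    public static double medianMillis(Runnable action, int warmups, int runs) {
        for (int i = 0; i < warmups; i++) {
            action.run(); // warm up caches, JIT, and lazily-initialized UI state
        }
        long[] samples = new long[runs];
        for (int i = 0; i < runs; i++) {
            long start = System.nanoTime(); // monotonic clock, suitable for elapsed-time measurement
            action.run();
            samples[i] = System.nanoTime() - start;
        }
        Arrays.sort(samples); // the median is more robust to GC pauses than the mean
        long median = (runs % 2 == 1)
                ? samples[runs / 2]
                : (samples[runs / 2 - 1] + samples[runs / 2]) / 2;
        return median / 1_000_000.0;
    }

    public static void main(String[] args) {
        // Stand-in for a UI action; a real test would drive Eclipse via SWTBot here.
        double ms = medianMillis(() -> {
            try {
                Thread.sleep(10);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }, 2, 5);
        System.out.println("median >= 10 ms: " + (ms >= 10.0));
    }
}
```

Since the action runs on SWTBot's test thread while the UI work happens on the SWT display thread, a real scenario would also need an explicit wait (e.g. for the editor to become active) inside the Runnable so the measurement covers the full perceived operation.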
Beyond that, SWTBot is just an API; it doesn't provide anything
specific to help with creating performance tests. The usual
difficulties of creating and running performance tests (a stable
environment, finding scenarios that match "perceived performance",
getting good reports and comparing them...) will still be there.
There is currently strong interest in performance tests, so feel
free to share your conclusions when you have some. The
cross-project-issues-dev ML seems to be the best candidate.
Cheers,
_______________________________________________
swtbot-dev mailing list
swtbot-dev@xxxxxxxxxxx
https://dev.eclipse.org/mailman/listinfo/swtbot-dev