
Re: [stp-dev] RE: Topic for agenda of next IRC

Just FYI, there are a number of open-source test coverage collection tools, such as
 Emma: http://emma.sourceforge.net/
 Cobertura: http://cobertura.sourceforge.net/

and others.

From experience I know that some of these tools can collect coverage figures from multiple VMs and merge the results into one overall coverage figure.
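As an illustration of how such a merge works (a sketch only - the data file names, machine names, and the location of emma.jar are assumptions, not taken from this thread), Emma's command-line merge tool can combine the coverage data files written by separate JVM runs into one session file:

```shell
# Each Emma-instrumented JVM writes its own runtime coverage data file
# (coverage.ec by default). After collecting the files from the
# participating machines, merge them into a single data file:
java -cp emma.jar emma merge \
    -input node1-coverage.ec,node2-coverage.ec \
    -output merged.es

# A combined report can then be generated from the merged data plus
# the instrumentation metadata (coverage.em):
java -cp emma.jar emma report -r html -in coverage.em,merged.es
```

Cobertura ships a similar cobertura-merge script that combines its .ser data files, so the same collect-then-merge pattern should apply there as well.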

David

Trieloff, Carl wrote:
Comments on this thread that we need to consider.

Carl.




-----Original Message-----
From: Stefan Daume [mailto:Stefan.Daume@xxxxxxxxxxxxx]
Sent: Tuesday, January 31, 2006 8:43 AM
To: Trieloff, Carl
Cc: Antony.Miguel@xxxxxxxxxxxxx; Karl Reti (E-mail); Christophe Ney
(E-mail)
Subject: Re: Topic for agenda of next IRC


Carl,

something that I think needs clarification for the further discussions is how we imagine these coverage metrics being produced.

Further to your comments I would like to be a bit more specific about concerns in the area of B2J. My concern is not really GUI testing (as you referred to). Large areas of the B2J plugin are only active when running as part of a distributed engine. Things like varying the distribution, testing over heterogeneous environments and ensuring that the distribution of threads is correct are extremely hard and time-consuming to test with an automated test. We have produced a number of automated tests. I am not sure, though, that there is a tool we can use to check code coverage while they are running (it would have to work across multiple JVMs, potentially on multiple machines).

So this (B2J) would be an example where we do have to make an exception. If we can agree on this we should be fine to proceed.

    Stefan


Trieloff, Carl wrote:

Stefan,

I agree that at times test coverage goals do not help the process, but I
think these should be the exceptions. The great thing about test
coverage is that it makes it very easy for someone to be comfortable merging
if the tests pass. We all know that this is not fail-proof, but if an
issue is uncovered the practice should be to add a test for that case.

The goal with this is to make it easier to merge and not have to manually
retest. We have found, both internally in IONA and with Celtix, that the higher
the test coverage, the faster the project can be delivered with more people, as
there are fewer unforeseen problems.

Sometimes we bring code in with low coverage for some reason, but invariably
we find that at some point it is cheaper just to stop new work and get the tests
in; otherwise you never know whether the new work has broken something.

I know GUIs are harder to test than runtime code, so tests don't cover the
visual aspects well, and tests can be bogus - but I think the statement is to
set a culture of "test your own work" and automate it. (Any suggestions for
stating the goal differently would be appreciated.)


Does that help?
Carl.


-----Original Message-----
From: Stefan Daume [mailto:Stefan.Daume@xxxxxxxxxxxxx]
Sent: Friday, January 27, 2006 11:10 AM
To: Trieloff, Carl
Cc: antony.miguel@xxxxxxxxxxxxx
Subject: Topic for agenda of next IRC


Carl,

Antony and I had a chat regarding one of the topics of last week's IRC, namely the issue of STP coding policies and guidelines in general, and test coverage in particular. We would like to put this on the agenda for next week.

After the last IRC we discussed this particular issue on the basis of our experiences in TPTP. We are concerned about enforcing a fixed test coverage rate (70%). Coverage is an interesting measure, but it should not be a static threshold. Our main concerns would be:

- that it will stifle overall STP progress; experience from other projects shows that this is particularly true if you enforce dedicated test periods that do not match up with the progress of the individual components and subprojects

- that, as I think we all know, this measure is not a reliable indicator of stability or quality; a measure incorporating the time spent bug-fixing vs. new enhancements, in relation to the severity and frequency of reported bugs, would be more helpful - but it will be difficult to capture that in a single metric, as it essentially captures the efficiency of the whole process

- that there are cases where tests cannot be automated and manual testing is required; this has a higher cost associated with it that is not captured in the coverage metric either

I think we will both be able to attend the IRC next week.

Talk to you later,

   Stefan

_______________________________________________
stp-dev mailing list
stp-dev@xxxxxxxxxxx
https://dev.eclipse.org/mailman/listinfo/stp-dev


