I think we should consider supporting two (or possibly more) distinct categories of WTP unit tests:

Category 1: This category would effectively cover all of our current unit tests:
- All tests in this category are expected to be consistent: a 100% pass rate absent product defects, or a 100% failure rate if there is an underlying defect.
- All tests will be executed automatically on every build.
- The primary JUnit metric for each build will be generated from the tests in this category.

Category 2: The tests in this category would be run automatically (ideally with every build), but the results would not contribute to the JUnit tally associated with the build. This category would include the following test types:
- New unit tests whose reliability is in doubt (or perhaps all new tests should run in this category for some trial period and "prove" their reliability before being promoted to Category 1).
- Category 1 tests that have started to fail intermittently: if no clear product defect behind the failure is found, these tests would be moved here rather than simply disabled, so that we continue to track the issue. A bug would need to be opened and either the test logic fixed (if that was the problem) or the underlying product defect fixed (if that was the problem).
- New unit tests created deliberately to test unstable portions of WTP: as adopters well know, there are a large number of currently outstanding stability bugs in WTP (deadlocks, sporadic EMF failures, etc.), yet the WTP unit tests themselves never hit these issues, since we aim for completely reliable tests. From this perspective, our current tests are providing a false measure of confidence. So it would be good to include tests that exercise complex scenarios known to be subject to various types of intermittent failure, and to make those tests realistic enough that they reproduce the same failures.

Keeping these tests running automatically in this category, and aiming for a very low frequency of intermittent failure, would be a good way to help improve product stability.
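To make the proposal concrete, the reporting rule could be sketched as a small gate in the build's tally logic. This is purely an illustrative sketch, not existing WTP code: the class, enum, and method names below are invented, and it assumes each test result is already tagged with its category.

```java
import java.util.List;

// Hypothetical sketch of the two-category tally described above.
// BuildGate, Category, and TestResult are invented names for illustration.
public class BuildGate {

    enum Category { VERIFY, EXPERIMENTAL } // Category 1 and Category 2

    record TestResult(String name, Category category, boolean passed) {}

    // The build's primary JUnit metric counts only Category 1 (VERIFY) tests;
    // EXPERIMENTAL results never fail the build.
    static boolean buildPasses(List<TestResult> results) {
        return results.stream()
                .filter(r -> r.category() == Category.VERIFY)
                .allMatch(TestResult::passed);
    }

    // Category 2 failures are counted separately so intermittent issues
    // stay visible instead of being silently disabled.
    static long experimentalFailures(List<TestResult> results) {
        return results.stream()
                .filter(r -> r.category() == Category.EXPERIMENTAL && !r.passed())
                .count();
    }

    public static void main(String[] args) {
        List<TestResult> run = List.of(
                new TestResult("testValidator", Category.VERIFY, true),
                new TestResult("testDeadlockScenario", Category.EXPERIMENTAL, false));
        System.out.println("build passes: " + buildPasses(run));
        System.out.println("experimental failures: " + experimentalFailures(run));
    }
}
```

The point of the sketch is simply that an intermittent Category 2 failure is recorded and tracked but leaves the build-verification metric green.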
Interesting idea. I agree we need the set that "verifies the build" to be very solid. For the other categories, I'd suggest they be conceptualized independently of the builds. We _might_ want to run them with the builds, but it would probably be better to run them independently, and perhaps even more frequently ... perhaps nearly "continuously", so we get multiple runs against the same code.
Hi, everyone. This bug has been tagged as "help wanted." What kind of help is needed? I'm interested in getting something done at the implementation level (e.g. changing the Ant script). Peter.
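If the help wanted is at the Ant-script level, one possible shape for the change is two separate junit invocations, only the first of which is allowed to fail the build. This is an illustrative sketch only: the target names, properties, and directory layout below are invented and do not reflect the actual WTP build scripts.

```xml
<!-- Sketch only: names and paths here are hypothetical, not the real WTP build. -->
<target name="test-category1">
  <junit haltonfailure="no" failureproperty="category1.failed">
    <classpath refid="test.classpath"/>
    <formatter type="xml"/>
    <batchtest todir="${results.dir}/category1">
      <fileset dir="${tests.dir}" includes="**/category1/**/*Test.java"/>
    </batchtest>
  </junit>
  <!-- Only Category 1 results feed the build's JUnit tally. -->
  <fail if="category1.failed" message="Category 1 (build-verification) tests failed"/>
</target>

<target name="test-category2">
  <!-- Category 2 runs and records results but never fails the build. -->
  <junit haltonfailure="no">
    <classpath refid="test.classpath"/>
    <formatter type="xml"/>
    <batchtest todir="${results.dir}/category2">
      <fileset dir="${tests.dir}" includes="**/category2/**/*Test.java"/>
    </batchtest>
  </junit>
</target>
```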
I am blindly closing old bugs as "won't fix" (I selected those opened over two years ago, mostly opened by me, with no comments made in the past couple of years). This is entirely an effort to focus on relevant bugs ... so if anyone sees any of these old bugs closed as "won't fix" that _are_ still relevant, do feel free to reopen. It would be much appreciated to know it is still important and deserves some attention beyond a blind close. Apologies in advance for closing any prematurely.