Bug 243672 - Improve the way that build-tests run
Summary: Improve the way that build-tests run
Status: ASSIGNED
Alias: None
Product: WTP Releng
Classification: WebTools
Component: releng
Version: 3.10
Hardware: PC Windows XP
Importance: P2 enhancement
Target Milestone: ---
Assignee: webtools.releng CLA
QA Contact: Carl Anderson CLA
URL:
Whiteboard:
Keywords: plan
Depends on:
Blocks:
 
Reported: 2008-08-09 15:43 EDT by David Williams CLA
Modified: 2018-06-29 15:14 EDT
1 user

See Also:


Attachments

Description David Williams CLA 2008-08-09 15:43:37 EDT
There are a number of issues, or areas to improve. This bugzilla will serve as a high-level bug to outline, discuss, and document all those areas.

One practical matter is to display the time of tests, along with other summary data (it is displayed in the details of each test, but not on the summary page where the overall successes/failures are summarized).

Longer term, we should have two types of test buckets. One that runs with every build, but is relatively fast (such as no more than 5 minutes for any one suite). 
This short one could be called "build verification tests" ... their main purpose is to confirm that things did get built, and that the build is worth downloading and doing other tests on. 

Then, another test bucket that, say, runs once per day, or once every 5 builds, or similar. This might be called "function verification tests". These could be longer and test a number of functions working together. We might also want to put "problematic", randomly failing tests in this bucket as a temporary holding place until they are fixed? 

Another _big_ issue is that tests are run by some overall "test script" that uses the file system. I've never understood why we don't have an extension point that can be discovered and run. It's worth investigating this improvement. (It would make it easier, I think, to add tests, and for projects themselves to change their characteristics, etc.) It would also make it easier to "combine" tests ... if other projects used the same extension point, it'd simply be a matter of unzipping (or installing) their tests in our test environment.
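The "discoverable tests" idea above can be sketched in plain Java, with no Eclipse dependency, just to show the shape of it: contributors register themselves and a generic runner discovers and runs whatever was contributed, instead of a master script that must be edited for every new test. All names here are invented for illustration; a real implementation would use the Eclipse extension registry.

```java
import java.util.ArrayList;
import java.util.List;

// What a contributed test would look like to the runner.
interface BuildTest {
    String name();
    boolean run();
}

// Stand-in for the extension registry: contributions are registered,
// then discovered generically, with no file-system scanning.
final class TestRegistry {
    private static final List<BuildTest> TESTS = new ArrayList<>();
    static void contribute(BuildTest t) { TESTS.add(t); }
    static List<BuildTest> contributions() { return TESTS; }
}

public class TestRunner {
    public static void main(String[] args) {
        // Two dummy contributions; in Eclipse these would come from
        // installed (possibly jarred) bundles via an extension point.
        TestRegistry.contribute(new BuildTest() {
            public String name() { return "build-verification"; }
            public boolean run() { return true; }
        });
        TestRegistry.contribute(new BuildTest() {
            public String name() { return "function-verification"; }
            public boolean run() { return true; }
        });

        int failures = 0;
        for (BuildTest t : TestRegistry.contributions()) {
            boolean ok = t.run();
            System.out.println(t.name() + ": " + (ok ? "OK" : "FAIL"));
            if (!ok) failures++;
        }
        System.out.println("failures=" + failures);
    }
}
```

Note that the runner never names any particular test: adding a test means adding one `contribute(...)` call (or, in the real case, one extension declaration), and nothing else changes.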
Comment 1 David Carver CLA 2008-08-31 00:39:09 EDT
(In reply to comment #0)
> 
> Another _big_ issue is that tests are ran by some overall "test script" that
> uses the file system. I've never understood why we don't have an extension
> point that can be discovered an ran. It's worth investigating this improvement.
> (It would make it easier, I think, to add tests, and for projects themselves to
> change their characteristics, etc.). It would also make it easier to "combine"
> tests ... if other projects used the same extension point, it'd simply be a
> matter of unzipping (or installing) their tests in our test environment.

I'm not a big fan of the extension point route.  You already have a JUnit Test Suite which can run all of the tests, so this basically acts as your extension point.  If you have multiple test suites to run and don't want to hand-code all the tests, then set up a mother Test Suite plugin that runs all the corresponding test suites.   It keeps with the JUnit way of doing things and accomplishes the same thing as a "test suite extension point."
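A minimal sketch of this "mother suite" aggregation, kept in plain Java so it is self-contained (a real WTP suite would use JUnit's `TestSuite` and `addTest(...)`; the component names below are invented placeholders):

```java
public class MotherSuite {
    // Each component contributes one entry point returning its failure
    // count; these stand in for per-component suite() methods.
    static int xmlComponentSuite()  { return 0; } // placeholder
    static int serverToolsSuite()   { return 0; } // placeholder

    public static void main(String[] args) {
        // The mother suite only aggregates; adding a component's tests
        // means adding one line here, not editing any build script.
        int[] results = { xmlComponentSuite(), serverToolsSuite() };
        int failures = 0;
        for (int r : results) failures += r;
        System.out.println("suites=" + results.length + " failures=" + failures);
    }
}
```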

As for builds, my thoughts are pretty much in line with what you have.  A full verification suite of all tests should only be necessary during a Nightly Integration build.   Typically the way I've seen this done is that a set of tests is run after every integration (i.e. when new code is checked into head).   If the build fails because of unit test failures, a message is sent to all those that checked code in recently.   It is the responsibility of those that just checked code in to make sure they fix the build.

That evening a nightly build is run that tests the integration between the various projects.  It runs all the tests against the latest code from head.   If this fails, it sends a message to everybody that checked code in that day, and those programmers are responsible for getting the build working again.

The key here is making a failing build the highest priority, over any other feature work that is being done.  It also helps to make sure that developers are checking in testable code in small stages instead of huge redesigns with no tests.  It's a different way of thinking about coding, but it helps cut down on the number of integration errors that can occur.

A Milestone build would be what we work off of Release Tags, and would be the code that is going to be released.  Ideally, this is the code that is in head at a certain point during the week.  A milestone release goes through all of the unit tests, just as a nightly build would.

Again, making unit test failures the highest priority is important.  Without that, you don't have a good build, and test failures happen for a reason ... they shouldn't be commented out, but rewritten or corrected if they aren't right.

 

Comment 2 David Williams CLA 2008-08-31 01:27:05 EDT
(In reply to comment #1)
> (In reply to comment #0)
> > 
> > I'm not a big fan of the extension point route

I think maybe I wasn't clear on the extension point. It is to remove the file-system-based scripts that currently run the tests after a build. As it is, the test plugins cannot be packaged as jars, since the current script uses the file system to find test.xml files. That's not very Eclipse-like. There's also a "master script" that must be updated whenever someone wants to add or remove a test. That doesn't seem to have the right level of modularity.  In short, the "parameters" that are now specified in the test.xml file would become parameters to an extension point. The specialized antRunner apps that currently run the post-build tests would continue to run the provided JUnit Suites. 
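To make the proposal concrete, a contribution might look something like the plugin.xml fragment below. This is purely a hypothetical sketch: the extension point id and every attribute are invented here, and no such extension point exists today. The idea is only that the parameters now kept in each plug-in's test.xml would instead be declared on an extension, so the post-build antRunner could discover contributions from installed (even jarred) bundles via the extension registry instead of scanning the file system.

```xml
<!-- Hypothetical sketch only; the point id and attributes are invented. -->
<extension point="org.eclipse.wtp.releng.buildTests">
   <testSuite
         suiteClass="org.example.wtp.tests.AllTests"
         timeoutMillis="300000"
         bucket="build-verification"/>
</extension>
```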

But, I appreciate your comments ... keep 'em coming. 




Comment 3 David Carver CLA 2008-08-31 12:51:36 EDT
(In reply to comment #2)
> 
> I think maybe I wasn't clear on extension point. It is to remove the file
> system based scripts that currently run the tests after a build. As it is, the
> test plugins can not be compressed as jars, since the current script uses the
> file system to find test.xml files. That's not very Eclipse-like. There's also
> a "master script" that must be updated whenever someone wants to add or remove
> a test. Doesn't seem to have the right level of modularity.  In short, the
> "parameters" that are now specified in the test.xml file, would become
> parameters to an extension point. The specialized antRunner apps that currently
> run the post-build tests would continue to run the provided JUnit Suites. 
> 
> But, appreciate you comments ... keep 'em coming. 
> 

I would suggest finding out how Platform and other projects handle this situation.  We can't be unique in this.  Maybe a posting on the Cross Project development list is in order?


Comment 4 David Williams CLA 2009-04-30 11:55:26 EDT
I'm (still) expecting to make some improvements soon, so setting a target. 
Comment 5 David Williams CLA 2011-09-21 12:37:47 EDT
Mass change back to default assignee and QA contact. I'm not saying I won't work on some :) ... but it won't be all ... so I think defaults would be best to start over.