Re: [equinox-dev] optimizers, tests and temp files


Sigh.  Thanks for pointing this out.  That test was supposed to be commented out, as it is used only to generate the test data that is then committed alongside the test itself.  In some cases we need stable test data to ensure that we do not break existing systems.  That is the intention here.

I agree with using "official" temp locations.  Most of the optimizer/processor tests use createTempFile for creating temporary work areas.
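As a rough sketch of that pattern (the "p2.optimizers." prefix is just the naming convention suggested later in this thread, not an actual API constant), createTempFile already places the file in the official temp location and lets you pick an identifiable name:

```java
import java.io.File;
import java.io.IOException;

public class TempFileExample {
    public static void main(String[] args) throws IOException {
        // createTempFile puts the file in java.io.tmpdir by default.
        // The prefix identifies which tests created the file.
        File work = File.createTempFile("p2.optimizers.", ".pack.gz");
        work.deleteOnExit(); // backup cleanup if the test run crashes
        System.out.println(work.getName().startsWith("p2.optimizers."));
        System.out.println(work.exists());
        // Clean up explicitly at the end of the test.
        System.out.println(work.delete());
    }
}
```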


Jeff



John Arthorne/Ottawa/IBM@IBMCA
Sent by: equinox-dev-bounces@xxxxxxxxxxx

11/13/2007 11:26 AM

Please respond to
Equinox development mailing list <equinox-dev@xxxxxxxxxxx>

To
equinox-dev@xxxxxxxxxxx
cc
Subject
Re: [equinox-dev] optimizers, tests and temp files






I agree, tests should clean up after themselves, and files should have a name that helps to identify the test that created it.


On a related note, the tests you just released fail for me because they attempt to write to "d:/packed.pack.gz". I don't have a d: drive, and this will do weird things when run on Linux/Mac/etc.  Files need to be written under System.getProperty("java.io.tmpdir") so that they are always in a location that is known to be writable. This will be more of an issue when we are running on the Eclipse project test machines, which have various OSes and file system layouts.
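In other words, instead of a hard-coded drive letter, the test would resolve the file against the platform temp directory (the "packed.pack.gz" name below is taken from the failing test; the rest is a minimal sketch):

```java
import java.io.File;

public class TmpDirExample {
    public static void main(String[] args) {
        // java.io.tmpdir is guaranteed to exist and be writable on
        // every platform the tests run on.
        File tmpDir = new File(System.getProperty("java.io.tmpdir"));
        File packed = new File(tmpDir, "packed.pack.gz");
        System.out.println(packed.getName());
        System.out.println(tmpDir.isDirectory());
    }
}
```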



Jeff McAffer/Ottawa/IBM@IBMCA
Sent by: equinox-dev-bounces@xxxxxxxxxxx

11/12/2007 08:53 PM

Please respond to
Equinox development mailing list <equinox-dev@xxxxxxxxxxx>

To
equinox-dev@xxxxxxxxxxx
cc
Subject
[equinox-dev] optimizers, tests and temp files








I just committed another refactoring of the repo optimizers along with an improved version of the JarDelta optimizer and some basic tests.  We should have well-optimized repos now! (next step is to pack the jar deltas!!)


In running the tests before committing, 11 of the Director tests appear to be failing.  I did not do anything in this area, so I assume that they were failing before this new code came along?  What is our policy on failing tests?  I've been commenting out my tests that fail until they can be fixed.  Others?


A number of tests use the TestMetadataRepository mechanism.  This is cool, but unfortunately it leaves temp files down in Documents and Settings (Windows) at a pretty high rate (~30 per full test run).  There are a few other files being left around.  We should ensure that our tests run and clean up after themselves.  In the optimizer tests I've taken to naming the files and dirs something related to the name of the test (e.g. p2.optimizers.xxx) so that people encountering these files (leftover from crashed test runs etc.) know what they are.  Do others think this is a good idea?
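A sketch of that naming-plus-cleanup convention, assuming a JUnit 3 style test (the class and method names here are hypothetical, not the actual optimizer test code):

```java
import java.io.File;
import java.io.IOException;

public class OptimizerTestSketch {
    private File workArea;

    protected void setUp() throws IOException {
        // Prefix the work area with something test-related so that
        // leftovers from crashed runs are identifiable.
        workArea = File.createTempFile("p2.optimizers.jardelta.", "");
        workArea.delete();  // swap the temp file for a directory
        workArea.mkdir();
    }

    protected void tearDown() {
        delete(workArea);   // clean up even when the test failed
    }

    // Recursively delete a file or directory tree.
    private static void delete(File file) {
        File[] children = file.listFiles();
        if (children != null)
            for (File child : children)
                delete(child);
        file.delete();
    }

    public static void main(String[] args) throws IOException {
        OptimizerTestSketch test = new OptimizerTestSketch();
        test.setUp();
        System.out.println(test.workArea.isDirectory());
        test.tearDown();
        System.out.println(test.workArea.exists());
    }
}
```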


Related to this, we are all likely struggling to set up various repos etc. temporarily for tests, and inevitably using different approaches.  Would it be worth spending a bit of time creating some test repo infrastructure, documenting it on the wiki (or wherever), and then making the tests consistent?  Most of the time I spent on this little project was in managing all the test code and updating multiple copies.  That is, until I refactored to eliminate duplicate code.  Now the tests read well and are very easy to create.  Easy-to-create tests => more tests => better code...

Thoughts?


Jeff
_______________________________________________
equinox-dev mailing list
equinox-dev@xxxxxxxxxxx
https://dev.eclipse.org/mailman/listinfo/equinox-dev
