Re: [cdt-dev] M7

Hi Marc,
Over in TCF-land, I've been working over the last few months on a framework for unit tests of the UI/view-model layer of the debugger (see http://git.eclipse.org/c/tcf/org.eclipse.tcf.git/tree/tests/plugins/org.eclipse.tcf.debug.test/src/org/eclipse/tcf/debug/test).

Instead of trying to control and read SWT's lazy-loading trees, I use virtual flexible-hierarchy viewers to simulate the Debug, Variables, and Registers views. It's still in the early stages, but I'm getting close to having a meaningful stepping performance test.
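A minimal, self-contained sketch of the idea in plain Java (the `ModelNode` and `VirtualViewer` names are hypothetical stand-ins, not the actual TCF or flexible-hierarchy APIs): the "viewer" walks the debug model directly, so a test can assert on exactly what the view would display without ever touching a lazy-loading SWT tree.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-in for a node in the debugger's view model.
class ModelNode {
    final String label;
    final List<ModelNode> children = new ArrayList<>();
    ModelNode(String label) { this.label = label; }
}

// A "virtual" viewer: it walks the model directly instead of driving
// a real SWT tree, so tests never race against widget lazy-loading.
class VirtualViewer {
    private final List<String> visibleLabels = new ArrayList<>();

    void refresh(ModelNode root) {
        visibleLabels.clear();
        collect(root, 0);
    }

    private void collect(ModelNode node, int depth) {
        visibleLabels.add("  ".repeat(depth) + node.label);
        for (ModelNode child : node.children) {
            collect(child, depth + 1);
        }
    }

    List<String> getVisibleLabels() { return visibleLabels; }
}

public class VirtualViewerDemo {
    public static void main(String[] args) {
        ModelNode launch = new ModelNode("launch");
        ModelNode thread = new ModelNode("thread [1]");
        thread.children.add(new ModelNode("main() at app.c:42"));
        launch.children.add(thread);

        VirtualViewer viewer = new VirtualViewer();
        viewer.refresh(launch);
        // A test can now assert directly on what the "view" would show.
        viewer.getVisibleLabels().forEach(System.out::println);
    }
}
```

The real framework is of course far more involved (asynchronous updates, deltas, content providers), but the principle is the same: assert on the model-driven state, not on the widgets.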


On 05/09/2012 06:46 PM, Marc Khouzam wrote:
Great timing!

So, on the Debug front, yesterday I opened
Bug 378834 - Add Debug JUnit tests to Hudson (https://bugs.eclipse.org/378834)
I was focusing on Linux, but I would also like to have those run on Windows, as it would give us much greater
confidence in our situation on Windows.

The step after that is to get some UI tests.  I believe other parts of CDT are running some automated UI tests,
and I would appreciate knowing what tools they used for that.  I've had discussions about how to best implement
UI tests, and I'm now thinking that SWTBot may not be the best solution; it may be too sensitive to the actual layout
of the UI.  I was told we can trigger the code we want to test without actually 'faking' mouse movements and such.
I still have to look into it, but that may be a better way to go.  In the end, we don't want to test SWT, so as long
as our code is exercised, that should be enough.
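The approach described above can be illustrated with a small, hypothetical sketch in plain Java (names like `StepController` are invented for illustration; this is not CDT code): the logic a UI handler would run is factored into a plain class, so JUnit can invoke it directly instead of simulating clicks.

```java
// Hypothetical example: the stepping logic is factored out of the SWT
// action/handler so a test can call it directly, with no mouse events.
class StepController {
    private int pc;                 // fake "program counter"
    private boolean suspended = true;

    StepController(int startPc) { this.pc = startPc; }

    boolean canStep() { return suspended; }

    // The same method a UI handler would invoke on a button click.
    void stepOver() {
        if (!canStep()) {
            throw new IllegalStateException("target is running");
        }
        pc += 1;                    // pretend each step advances one line
    }

    int getPc() { return pc; }
}

public class StepControllerDemo {
    public static void main(String[] args) {
        StepController controller = new StepController(10);
        controller.stepOver();      // exercises the handler's code path...
        controller.stepOver();      // ...without any SWT widgets involved
        System.out.println("pc = " + controller.getPc());  // prints "pc = 12"
    }
}
```

Tests written this way exercise our code paths while staying immune to changes in menu positions, widget layout, or timing.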

Other things we could look into include Sonar, which would automatically run FindBugs, code coverage, and other
metrics, giving us a quick status on our code at every build.

If someone can help me get Bug 378834 resolved, we'd be making a good step forward for Debug.



________________________________________
From: cdt-dev-bounces@xxxxxxxxxxx [cdt-dev-bounces@xxxxxxxxxxx] On Behalf Of Cortell John-RAT042 [RAT042@xxxxxxxxxxxxx]
Sent: May 9, 2012 7:22 PM
To: CDT General developers list
Subject: Re: [cdt-dev] M7

Big +1

Utopian situation:

· Every feature/fix has an automated test case

· Test suite execution becomes part of the build process, on Windows and at least one popular flavor of Linux

· Any failures are reported on the list and treated as P1 issues to be addressed ASAP

· Nothing is delivered without near-100% success

Any software house that is strongly committed to quality embraces these objectives. I don't see that we try to meet any of them. Part of the problem is a lack of infrastructure (test environments and SWTBot integration). Without the infrastructure, good intentions fall short. E.g., I remember when working on dsf-gdb, many hours were spent writing tests. Great, but the tests required a developer to take the initiative to manually run them on his particular machine. Not so great. Also, many features can't be tested because JUnit alone is inadequate; those features require something like SWTBot.


But this does point out how poor our test coverage is, that we need to get stricter on test failures, and that we should possibly tighten our code review process to make sure quality doesn't suffer like this in the future.