Various DSF-GDB junit tests call SyncUtil.SyncWaitForStop() to wait until the target has stopped. Problem is that the method waits indefinitely. So a test failure hangs up all the tests. It should wait for a maximum amount of time--some *reasonable* value. The value should be configurable on a per-system level and even on a per user level. To this end, I recommend we add a milliseconds timeout value to SyncUtil.SyncWaitForStop(). If -1 is specified, then some default is used. The default should come from a java system property, so that the number could be adjusted when the tests are invoked, to account for the performance of the host machine. Sounds reasonable?
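The proposal above can be sketched as follows. This is a minimal, hypothetical illustration of the "-1 means use a default from a system property" idea; the property name, default value, and method name are assumptions, not the actual SyncUtil API.

```java
// Hypothetical sketch of the proposed timeout resolution for
// SyncUtil.SyncWaitForStop(); property name and default are illustrative.
public class WaitTimeoutSketch {

    // Fallback used when neither an explicit value nor a property is given.
    static final int FALLBACK_DEFAULT_MS = 5000;

    static int resolveTimeout(int requestedMs) {
        if (requestedMs != -1) {
            return requestedMs; // caller gave an explicit timeout
        }
        // Adjustable when the tests are invoked, e.g.
        // -Ddsf.gdb.tests.timeout.default=10000 on a slow host machine
        String prop = System.getProperty("dsf.gdb.tests.timeout.default");
        return (prop != null) ? Integer.parseInt(prop) : FALLBACK_DEFAULT_MS;
    }

    public static void main(String[] args) {
        System.out.println(resolveTimeout(2500)); // explicit value wins
        System.out.println(resolveTimeout(-1));   // resolved default
    }
}
```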
(In reply to comment #0)
> Various DSF-GDB junit tests call SyncUtil.SyncWaitForStop() to wait until the
> target has stopped. Problem is that the method waits indefinitely. So a test
> failure hangs up all the tests. It should wait for a maximum amount of
> time--some *reasonable* value. The value should be configurable on a per-system
> level and even on a per user level. To this end, I recommend we add a
> milliseconds timeout value to SyncUtil.SyncWaitForStop(). If -1 is specified,
> then some default is used. The default should come from a java system property,
> so that the number could be adjusted when the tests are invoked, to account for
> the performance of the host machine. Sounds reasonable?

It sounds reasonable. However, this problem applies to other Sync calls, not just SyncWaitForStop. In fact, it also applies to certain calls in the test classes themselves; e.g., MIBreakpointsTest.waitForBreakpointEvent used to wait forever.

I think a more generic solution would be to limit each test case to a certain time. You can do that in JUnit4 using @Test(timeout=5000). It would be nice to have a way to add this to a parent class, but if this is not possible I guess an awk script can get this parameter added to all our tests. Does it make sense?
(In reply to comment #1)
> I think a more generic solution would be to limit each test case to a certain
> time. You can do that in JUnit4 using @Test(timeout=5000). It would be nice
> to have a way to add this to a parent class, but if this is not possible I
> guess an awk script can get this parameter added to all our tests. Does it
> make sense?

It's certainly more generic, but I'm not a fan of it. If a test takes roughly 45 seconds to complete but getting to the main breakpoint usually happens within 5 seconds, I really don't want to waste over a minute if the test fails outright (we'd build some slack into the @Test(timeout) value). Also, you then always have to worry about whether changes to a test (adding to it or removing from it) require an adjustment to the cumulative timeout limit.

I like tests individually setting expectations on a per-instance basis (how long should 'this' particular asynchronous operation take), and using configurable defaults. That is, you identify a collection of common wait operations and then use defaults for those that can be overridden via system properties.
(In reply to comment #2)
> (In reply to comment #1)
> > I think a more generic solution would be to limit each test case to a certain
> > time. You can do that in JUnit4 using @Test(timeout=5000). It would be nice
> > to have a way to add this to a parent class, but if this is not possible I
> > guess an awk script can get this parameter added to all our tests. Does it
> > make sense?
>
> It's certainly more generic, but I'm not a fan of it. If a test takes roughly
> 45 seconds to complete but getting to the main breakpoint usually happens
> within 5 seconds, I really don't want to waste over a minute if the test fails
> outright (we'd build some slack in the @Test(timeout) value). Also, you don't
> always have to worry about whether changes to a test (adding or removing to it)
> need an adjustment to the cumulative timeout limit.

Those are good points. Adding a @Test(timeout=<long timeout>) could simply be a fallback for nightly runs (if we ever do those), so that we don't risk hanging all the tests because of a single one.

> I like tests individually setting expectations on a per instance basis (how
> long should 'this' particular asynchronous operation take), and using
> configurable defaults. That is, you identify a collection of common wait
> operations and then use defaults for those that can be overridden via system
> properties.

Ok, that would be a good solution, but more work :-)
(In reply to comment #3)
> Ok, that would be a good solution, but more work :-)

Indeed. Automated tests are key to stability and quality, so improvements to the tests and their framework should have a high priority, IMO. I'll work on this.

I'm also going to have to make sure the breakpoint tests work 100% before I try swapping out the breakpoint mediator. I'm going to rely mostly on the tests to validate that the mediator swap hasn't regressed things. I'll do some manual testing, too, of course.
Created attachment 160257 [details]
Instrument ServiceEventWaitor

This is the first step in establishing reasonable timeout values for test operations. I've instrumented ServiceEventWaitor to report actual times and to flag poorly specified timeout values (too long, too short, or missing altogether). This instrumentation is selectively turned on via the plugin tracing mechanism.

Committed to HEAD.
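The idea behind the instrumentation can be illustrated with a toy classifier that compares the actual wait time against the specified timeout. This is not the ServiceEventWaitor code; the thresholds and messages are made up for illustration.

```java
// Toy illustration of flagging poorly specified timeouts by comparing the
// measured wait time to the timeout budget. Thresholds are assumptions.
public class InstrumentedWait {

    /** Reports how the actual wait time compared to the specified timeout. */
    static String classify(long actualMs, long timeoutMs) {
        if (timeoutMs <= 0) {
            return "no timeout specified";
        }
        if (actualMs > timeoutMs) {
            return "timeout too small";       // the wait overran its budget
        }
        if (actualMs * 5 < timeoutMs) {
            return "timeout overly generous"; // budget is over 5x the actual time
        }
        return "timeout reasonable";
    }

    public static void main(String[] args) {
        System.out.println(classify(100, 0));    // no timeout specified
        System.out.println(classify(600, 500));  // timeout too small
        System.out.println(classify(50, 1000));  // timeout overly generous
        System.out.println(classify(400, 500));  // timeout reasonable
    }
}
```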
Created attachment 160363 [details]
Part two of changes to make tests not wait indefinitely

With these changes, SyncUtil calls no longer wait indefinitely. Default timeout values are used, and the value depends on the type of operation. The values can be overridden individually via system properties. Also, a multiplier property can be specified which adjusts the timeout values across the board (to be used when running on a slow machine).

Committed to HEAD.
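A minimal sketch of the mechanism described above: per-operation defaults, an individual system-property override per operation, and a global multiplier. The property names, operation names, and default values here are assumptions, not the actual SyncUtil.DefaultTimeouts implementation.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of per-operation default timeouts with property overrides and a
// global multiplier; all names and values are illustrative.
public class DefaultTimeoutsSketch {

    static final Map<String, Integer> DEFAULTS = new HashMap<>();
    static {
        DEFAULTS.put("runToLocation", 10000);
        DEFAULTS.put("step", 5000);
    }

    static int get(String operation) {
        // Individual override, e.g. -Ddsf.gdb.tests.timeout.step=8000
        String override = System.getProperty("dsf.gdb.tests.timeout." + operation);
        Integer base = (override != null) ? Integer.valueOf(override)
                                          : DEFAULTS.get(operation);
        if (base == null) {
            // Guard against unknown operations rather than risking an NPE.
            throw new IllegalArgumentException("no default timeout for " + operation);
        }
        // Across-the-board scaling for slow machines,
        // e.g. -Ddsf.gdb.tests.timeout.multiplier=2.5
        String mult = System.getProperty("dsf.gdb.tests.timeout.multiplier");
        return (mult != null) ? (int) (base * Double.parseDouble(mult)) : base;
    }

    public static void main(String[] args) {
        System.out.println(get("step"));
        System.out.println(get("runToLocation"));
    }
}
```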
(In reply to comment #6)
> Created an attachment (id=160363) [details]
> Part two of changes to make tests not wait indefinitely
>
> With these changes, SyncUtils calls no longer wait indefinitely. Default
> timeout values are used, and the value depends on the type of operation. The
> values can be overriden individually via system properties. Also, a multiplier
> property can be specified which adjusts the timeout values across the board (to
> be used when running on a slow machine).
>
> Committed to HEAD.

After this commit, the MIExpressions tests are all failing with an NPE (I only tried on Linux with GDB 6.8):

java.lang.NullPointerException
    at org.eclipse.cdt.tests.dsf.gdb.framework.SyncUtil$DefaultTimeouts.get(SyncUtil.java:509)
    at org.eclipse.cdt.tests.dsf.gdb.framework.SyncUtil.runToLocation(SyncUtil.java:365)
    at org.eclipse.cdt.tests.dsf.gdb.tests.MIExpressionsTest.testChildNamingSameMethod(MIExpressionsTest.java:1294)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
    at java.lang.reflect.Method.invoke(Unknown Source)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
    at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
    at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:76)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
    at org.junit.runners.ParentRunner$3.run(ParentRunner.java:193)
    at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:52)
    at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:191)
    at org.junit.runners.ParentRunner.access$000(ParentRunner.java:42)
    at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:184)
    at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31)
    at org.junit.runners.ParentRunner.run(ParentRunner.java:236)
    at org.eclipse.cdt.tests.dsf.gdb.framework.BackgroundRunner.invokeSuperRunImpl(BackgroundRunner.java:37)
    at org.eclipse.cdt.tests.dsf.gdb.framework.BackgroundRunner$1.run(BackgroundRunner.java:56)
    at org.eclipse.core.internal.jobs.Worker.run(Worker.java:55)
(In reply to comment #7) Sorry. Oversight in SyncUtil.java. Fixed. Please update.
(In reply to comment #8)
> (In reply to comment #7)
> Sorry. Oversight in SyncUtil.java. Fixed. Please update.

Thanks for the quick fix, it works.
Created attachment 160781 [details]
Fix expressions tests

There was a small issue with the last patch: the method SyncUtil.step(StepType s, int numSteps) was replaced with SyncUtil.step(StepType s, int timeout), but the calls to this method were not updated. So this patch adds a new method SyncUtil.step(int numSteps, StepType s) and updates the tests to use it.

The patch also fixes a failing test in MIRunControlTest after the fix to bug 225996.

Committed to HEAD. After this commit, all JUnit tests for GDB 6.8 pass on Linux.
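The reason this slipped through the compiler is worth spelling out: the old step(StepType, int numSteps) and the new step(StepType, int timeout) have the identical signature, so old call sites kept compiling while their int argument was silently reinterpreted. Reordering the parameters gives the two meanings distinct signatures. The toy code below illustrates the pitfall; the class, enum, and return strings are made up for the demonstration.

```java
// Toy illustration of the overload pitfall: same parameter types, new
// meaning, so old call sites compile but mean something else. Names are
// illustrative, not the real SyncUtil code.
public class OverloadPitfall {
    enum StepType { STEP_OVER, STEP_INTO }

    // New-style method: the int now means a timeout in milliseconds.
    static String step(StepType type, int timeoutMs) {
        return "step " + type + " with timeout " + timeoutMs + "ms";
    }

    // Disambiguating overload in the style of the fix: the int is a count.
    static String step(int numSteps, StepType type) {
        return numSteps + "x step " + type;
    }

    public static void main(String[] args) {
        // An old call site written as step(type, numSteps) binds to the
        // timeout variant without any compile error:
        System.out.println(step(StepType.STEP_OVER, 3)); // 3 is now 3ms!
        // The reordered overload makes the intent unambiguous:
        System.out.println(step(3, StepType.STEP_OVER));
    }
}
```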
(In reply to comment #10)
> Created an attachment (id=160781) [details]
> Fix expressions tests
>
> There was a small issue with the last patch, where method
> SyncUtil.step(StepType s, int numSteps) got replaced with
> SyncUtil.step(StepType s, int timeout), but the calls to this method were not
> updated. So this patch adds a new method SyncUtil.step(int numSteps, StepType
> s) and updates the test to use it.
>
> The patch also fixed a failing test in MIRunControlTest after the fix to bug
> 225996
>
> Committed to HEAD.
> After this commit, all JUnits for GDB 6.8 pass on Linux.

Ouch. Thanks for the fix. This is similar to when you rely on a spellchecker and it doesn't catch a typo because the accidental letter still makes a proper word. :-)
(In reply to comment #11)
> Ouch. Thanks for the fix. This is similar to when you rely on a spellchecker
> and it doesn't catch a typo because the accidental letter still makes a proper
> word. :-)

Yeah, even in a code review I would not have caught that one; it was a tricky one. Luckily, it was easy to fix.