Re: [platform-debug-dev] Debug Model Poll - Serialized vs. Concurrent

Our goal is to avoid blocking the UI thread. Whether communication with 
the target is slow, or there is simply a large number of elements to 
retrieve (as in the thousands-of-variables example), the Eclipse UI should 
remain responsive. This is similar to the CVS Repositories view in 
Eclipse: the view is populated as the information becomes available, 
leaving the UI responsive. Thus, pushing the work to the background is a 
solution to avoid blocking the UI. Handling thousands of elements is a 
performance problem, but a separate issue. As noted, the tree viewer is 
problematic in that it retrieves all children of an element, not only 
the visible ones.
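To make the idea concrete, here is a minimal sketch of background population using plain java.util.concurrent (not the Eclipse Jobs API or a real deferred tree content provider — the class and method names here are hypothetical): the slow request to the debug target runs on a background thread, and the caller gets a Future it can fill the view from when the answer arrives, so the UI thread is never blocked.

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch: fetch a stack frame's variables off the UI thread.
public class DeferredChildren {

    private final ExecutorService background = Executors.newSingleThreadExecutor();

    // Hypothetical stand-in for a slow request to the debug engine.
    List<String> fetchVariables() throws InterruptedException {
        Thread.sleep(50); // simulate target latency
        return List.of("i", "count", "buffer");
    }

    // Returns immediately; the Future completes once the target answers.
    Future<List<String>> populateAsync() {
        return background.submit(this::fetchVariables);
    }

    void shutdown() {
        background.shutdown();
    }

    public static void main(String[] args) throws Exception {
        DeferredChildren view = new DeferredChildren();
        Future<List<String>> pending = view.populateAsync();
        System.out.println("UI thread free"); // prints before the fetch finishes
        System.out.println("variables: " + pending.get());
        view.shutdown();
    }
}
```

In the real platform the result would be handed back to the viewer on the UI thread (e.g. via an async callback) rather than blocked on with get(), but the division of labor is the same.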

The debug platform will explore both issues. Currently, we are 
experimenting with a concurrency story that will allow a debug 
implementation to provide the concurrency rules that apply to it (i.e. 
serialized access, concurrent reads, serialized updates, ...). The hope is 
that we can provide a backwards-compatible solution where debug models 
inherit a pessimistic concurrency rule base (serialized access) by 
default, but debuggers can supply their own rules to support 
concurrent access where possible. Feel free to look at the following 
document; keep in mind it is a work in progress.

        
http://dev.eclipse.org/viewcvs/index.cgi/%7Echeckout%7E/platform-debug-home/r3_1/docs/concurrency/Concurrency-Debug.html?rev=HEAD&content-type=text/html
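As a rough illustration of the rule-based idea (the names below are hypothetical, not the draft platform API): a pessimistic default rule serializes every access for backwards compatibility, while a debugger that tolerates it can opt into a rule allowing concurrent reads with serialized updates.

```java
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;
import java.util.function.Supplier;

// Sketch of per-model concurrency rules.
public class ConcurrencyRules {

    interface AccessRule {
        <T> T read(Supplier<T> op);
        void write(Runnable op);
    }

    // Backwards-compatible default: one lock, everything serialized.
    static class SerializedRule implements AccessRule {
        private final Object lock = new Object();
        public <T> T read(Supplier<T> op) {
            synchronized (lock) { return op.get(); }
        }
        public void write(Runnable op) {
            synchronized (lock) { op.run(); }
        }
    }

    // Opt-in rule: concurrent reads, serialized updates.
    static class ReadWriteRule implements AccessRule {
        private final ReadWriteLock lock = new ReentrantReadWriteLock();
        public <T> T read(Supplier<T> op) {
            lock.readLock().lock();
            try { return op.get(); } finally { lock.readLock().unlock(); }
        }
        public void write(Runnable op) {
            lock.writeLock().lock();
            try { op.run(); } finally { lock.writeLock().unlock(); }
        }
    }

    public static void main(String[] args) {
        AccessRule rule = new ReadWriteRule();
        int[] value = {0};
        rule.write(() -> value[0] = 42);
        System.out.println(rule.read(() -> value[0])); // prints 42
    }
}
```

A debug model that inherits SerializedRule behaves exactly as today; one that knows its engine can answer reads in parallel swaps in ReadWriteRule without any change to callers.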

Thanks,

Darin

> 
> The PDT plugin requires serialized access to the many (8+) debug 
> engines that it connects to. 
> 
> But I have a further problem: requests are not queued; they are 
> rejected if the debug engine connection is busy with an existing 
> request. This surfaced when stack frame labels were retrieved on 
> another thread: the labels would appear as <null> because the 
> request was rejected rather than queued. 
> 
> I wonder if the real underlying performance problems will be masked 
> by switching to background threads/jobs. 
> 
> If it really takes a long time to get a large number of local 
> variables, then pushing the request to a background job might make 
> the UI responsive before the list is retrieved, but as most engines 
> have indicated, the requests are still serialized and must complete 
> before the next one can be sent. 
> If the request to get the locals can't be cancelled (I know I can't 
> cancel a request once it has been sent to the debug engine without 
> terminating the session), then the locals won't be populated in a 
> timely manner, and the user could be prevented from stepping or 
> doing anything else with the debuggee until that original 
> long-running request has completed. 
> 
> Perhaps the UI could add to certain requests the maximum number of 
> entries required, e.g. when asking for locals it could request the 
> number that would fit in the view. The engine could use this instead 
> of getting them all as it does today. 
> I deal with some COBOL applications that can have 10,000 (yes, that 
> is correct) "local variables"; today we have told customers 
> to close the Variables view. 
> If the user scrolls, the next chunk of variables could be 
> requested. Assuming engines could deal with this, the UI would 
> become more responsive and possibly more consistent (a program's 
> locals would no longer slow down stepping as their number 
> increases). 
> A possible variation is asking for the list of variables but not 
> getting their values until they are displayed. I know this would 
> cause problems with showing changed variables, but I am sure many 
> would trade that feature for faster stepping. 
> 
> I have already opened a feature request to cap the number of stack 
> frames requested, to both improve stepping and reduce UI clutter. 
> This might be another candidate. 
> 
> Alan Boxall - IBM Distributed Debugger
> 
