With the new global undo/redo story there is a high potential that invalid undo objects are still sitting on the stack. Since these objects can hold on to a lot of memory, we should try to get rid of them as soon as possible. Checking the validation state on resource change events might be very expensive and would slow down operations like save and build. Alternatively, the workbench could schedule a background job on every nth resource change delta that checks whether the topmost workspace undo object is still valid.
Currently this function is provided by the undo/redo action handlers themselves. When an invalid operation is found on the top of the history, the history is flushed for that context. This is achieved using ObjectActionHandler>>setPruneHistory(true). The default is false, since there may be cases where this aggressive pruning is not wanted. Text, for example, sometimes puts invalid changes in the history knowing that they will become valid as typing continues. It is clear that for the workspace context, we want active pruning to occur. It was an oversight that the resource navigator did not set this to true. I realize that if I overlooked it, it's certainly easy for someone else to do. Should we make it a parameter in the constructor of the UndoRedoActionGroup proposed in bug 87354?
When does the action handler actually check the topmost element? If nothing changes in the undo history, then I assume the history will not do any checking. However, a workspace change could invalidate the topmost undo object without actually changing the undo history.
How does active pruning work currently? Can an undoable operation track conditions that may invalidate it, then notify the history (or its context) that it may no longer be valid? Is this the responsibility of each operation, or would it make more sense for the workspace context to track all resource changes and then re-evaluate all operations with that context? Either way is potentially quite expensive. The underlying problem here is that action enablement needs to be computed eagerly. If we had support for doing enablement on demand (when the containing menu was shown, or when triggered by key binding), this problem would go away. Kind of. We'd still need to poll relatively frequently if the action was on the toolbar. We want to allow this in the new commands support, but unfortunately this is unlikely to happen for 3.1.
At the very least, I think we should consider adding listeners and event notification to the IUndoableOperation interface. While we may special-case resource changes, it is possible for state on which an operation depends to change. If we want the UI to accurately reflect the underlying state, then the operations need a way of notifying the undo history when they need to change.
I think there is already enough support for this (in theory), although the pruning is not as active as it could be. There is an event notification on the operation history of event type OPERATION_CHANGED. Operations (or some other third party) can call IOperationHistory.operationChanged(IUndoableOperation), and this notifies all history listeners. When the operation that changed is the next undoable or redoable operation in the current context, the action handler prunes the history (if #setPruneHistory(true) has been used). The action handler actively checks the current operation, and potentially prunes the history, whenever any of the following occur:
- an operation is undone, redone, added, or removed, and that operation has the undo context set in the action handler
- an operation fails with OPERATION_NOT_OK
- an operation changes with OPERATION_CHANGED
Dirk is right that workspace changes that don't interoperate with the operation history could invalidate the topmost element. In that case it would be up to the operation to know this and use the operationChanged API. We could have the workspace context watch for workspace changes, query the state of the top operation, and notify the history. My initial thought was that this might be too expensive. The side effect in this case is that the user would try to undo, be notified that the operation was not valid, and then the history would be flushed. This doesn't seem too bad, unless memory use is very high, as Dirk suggests could happen. Note that we also have the ability to set a stricter undo limit on the workspace context, which will help with that problem. Dirk - do you believe the memory problem is already an issue? My feeling is that the API is there to support solving the problem if need be, but I'm not sure it needs to be solved already.
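The prune-on-change flow described above can be sketched with a small self-contained model. Note that Operation and History here are hypothetical stand-ins, not the real org.eclipse.core.commands.operations API: an operation reports that it changed, and the history flushes the context when the changed operation is the next undoable one and pruning is enabled.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical stand-in for IUndoableOperation: validity can change later.
class Operation {
    private boolean valid = true;
    void invalidate() { valid = false; }
    boolean canUndo() { return valid; }
}

// Hypothetical stand-in for an operation history with prune-on-change behavior.
class History {
    private final Deque<Operation> undoStack = new ArrayDeque<>();
    private boolean pruneHistory; // analogous to setPruneHistory(true)

    void setPruneHistory(boolean prune) { pruneHistory = prune; }
    void add(Operation op) { undoStack.push(op); }
    int size() { return undoStack.size(); }

    // Analogous to IOperationHistory.operationChanged(op): the operation (or a
    // third party) tells the history that the operation's state changed.
    void operationChanged(Operation op) {
        // Only react when the changed operation is the next undoable one.
        if (pruneHistory && op == undoStack.peek() && !op.canUndo()) {
            undoStack.clear(); // flush the whole context, not just the top
        }
    }
}
```

The model also reflects the point made later in this bug: when the top goes stale, the whole context is flushed rather than just removing the top operation.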
All workspace-modifying operations (which typically will use the workspace context) can get invalidated by a workspace change. Since checking whether an operation is still valid isn't a really fast operation, I am a little bit reluctant to do this inside a workspace change event, which can be fired from within the UI thread. That's why I was thinking of doing this in the background in a Job. Regarding memory: I think this already is an issue. Consider the case where someone renames an IResource. The undo object will be fairly big. After the rename it is very likely that the user saves some file, which invalidates the undo object. However, the object will stay on the stack until the client tries to undo the rename refactoring. Refactoring currently aggressively flushes the undo stack on workspace changes. However, with the new undo story we agreed on, this isn't appropriate anymore. I modified all the refactoring code to no longer depend on flushing the undo stack. However, now I am a little bit concerned about memory usage ;-).
Based on the discussions in 87675 and how complex the invalidation scenarios are there, I'm very nervous about introducing a background job to eagerly invalidate operations.
IMO this problem is unrelated to the discussion in bug 87675. What I am proposing is a Job that periodically checks whether the topmost undo/redo object is still valid. If it is not, it should be removed. This is analogous to what happens when the user tries to execute the undo/redo object and it is flagged as invalid.
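The periodic Job idea might look roughly like this, as a minimal sketch using a plain ScheduledExecutorService rather than the Eclipse Jobs API. UndoStack and its validity flag are hypothetical stand-ins for the workspace undo context:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Hypothetical stand-in for a workspace undo stack whose entries can go stale.
class UndoStack {
    final Deque<String> ops = new ArrayDeque<>();
    boolean topIsValid = true; // pretend validity flag for the topmost entry

    // One validation pass: if the topmost undo object is invalid, flush.
    void checkTop() {
        if (!ops.isEmpty() && !topIsValid) {
            ops.clear(); // flushing frees the memory held by stale undo objects
        }
    }

    // Run the check periodically off the UI thread, as proposed above.
    ScheduledExecutorService startValidationJob(long periodMillis) {
        ScheduledExecutorService exec = Executors.newSingleThreadScheduledExecutor();
        exec.scheduleWithFixedDelay(this::checkTop, periodMillis, periodMillis,
                TimeUnit.MILLISECONDS);
        return exec;
    }
}
```

The sketch flushes the whole stack rather than removing only the top entry, matching the follow-up comment that removing just the top is risky.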
Rather than removing the top operation, the entire undo context should be flushed. Removing the top is risky. The action handlers already do this, so having a Job do it is not necessarily higher risk, but until we work out all the scenarios and have it running properly, I certainly don't want to introduce another variable. Since this is not an API-affecting request, it will be revisited for M7.
Can I ask why there is a single undo stack? Why not have one for each editor? This approach seems prone to introducing new bugs. To me, it seems better to delegate the undo stacks to the individual editors, which is also a better fit for the object-oriented paradigm. Also, the undo stack is still broken in today's nightly -- it seems to want to undo things in editors other than the one you are currently in.
You've asked a very reasonable question that has a long answer. The integrated undo stack was introduced to handle cases where undoable operations cross several boundaries. The issues are discussed in many different places, but a good starting point if you want background is the proposal (http://dev.eclipse.org/viewcvs/index.cgi/%7Echeckout%7E/platform-ui-home/R3_1/undo-redo-proposal/undo-redo%20support.html) and bug #37716. If you have a repeatable test case for the "undoing on the wrong editor" that would be extremely helpful. Please annotate bug #92070 with any information you can think of. There is something funny going on and my current suspicion is the action handlers themselves (see also bug #91986 and bug #92049).
This bug will be deferred for 3.1 for several reasons:
- refactoring has already set up its own strategy for flushing the workbench/workspace undo context
- we are not implementing any more undoable actions in 3.1 for the workbench/workspace context
Basically, we will need to look at this more generally when we look at bug #87653 and start implementing a more global undo strategy for eclipse.
investigate for M5 while adding resource-based undoable operations.
moving to M6. I've done some prototyping, enough to feel that API additions are not needed in the undo framework for M5. Will pursue further in M6. One interesting idea here would be to have an object that monitors the state of a particular undo context and can prune it, etc., much as the action handlers do now. The action handlers could simply register with this object. The advantage would be that we wouldn't have a bunch of individual action handlers each checking validity for the same undo context. There might be a performance gain in having it done in one spot.
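The single-monitor idea could be sketched as follows (ContextMonitor and Handler are hypothetical names, not existing framework API): the action handlers for one undo context register with a single monitor, so the validity check runs once per context rather than once per handler.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.BooleanSupplier;

// Hypothetical monitor: owns the validity check for one undo context and
// fans the result out to every registered action handler.
class ContextMonitor {
    interface Handler { void historyPruned(); }

    private final List<Handler> handlers = new ArrayList<>();
    private final BooleanSupplier topIsValid; // validity check for the context
    int checksRun = 0;

    ContextMonitor(BooleanSupplier topIsValid) { this.topIsValid = topIsValid; }
    void register(Handler h) { handlers.add(h); }

    // Runs the (potentially expensive) check once, regardless of how many
    // handlers share the context, then notifies all of them of a prune.
    void validate() {
        checksRun++;
        if (!topIsValid.getAsBoolean()) {
            for (Handler h : handlers) h.historyPruned();
        }
    }
}
```

With two handlers sharing one context, a single validate() call performs one check and notifies both, which is exactly the performance gain suggested above.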
moving to RC1 for final decision on how much undo to release
unfortunately the RC1 cycle is only 10 days and there is not enough time to ensure proper testing and get feedback on the undo support in general. I will pick this back up in early 3.3. Removing milestone.
investigate for 3.3
Renaming this bug to make it more general. The support for marker operations released as part of bug #7691 begins to define a structure for visiting resource deltas and marking an operation invalid if something relevant to the operation changes. However, as discussed in this bug, the performance of having each operation check resource deltas to determine if they are valid does not scale well. I rather like the idea of the workspace undo context checking the topmost undoable workspace operation. This could be done periodically as Dirk originally suggested, or as part of resource listening. It only works for resource listening if we know that all operations on the workspace are purged once the top is invalid. This also helps reduce the redundancy of having each action handler try to prune the workspace context when there is an invalid operation on top. Marking this for M2. It is important to reduce the amount of duplicated work going on in the action handlers and operations with respect to validation, now that more operations are participating.
There is a previous discussion somewhere on flushing things from the history which leads to a problem where other contexts that were interlocked with the non-undoable context become unlocked, and can be incorrectly undone as a result.
Sorry, clicked on the Radio right next to the Commit button.
that discussion is in bug #133655, which is tagged for 3.3, but not necessarily M2. Waiting to see how the rest of this plays out. That issue is something that would be added in the undo framework, whereas this bug is really about how we want to actively validate and flush history from the client side.
marking for M3, we'll have an interim solution for the marker op validation in M2. Some issues that have come up while trying different approaches.
Triggering validation:
- watching for workspace changes is the most effective trigger, but you have to ensure that the workspace changes are not the result of executing another undoable operation. For example, you could add a marker to a file, and then delete the file. The workspace change generated from the deleted file shouldn't cause the marker op to be invalidated if the change is part of a new undoable operation. (We only want to validate the topmost op, because undoing it may restore the validity of ops underneath it.) So you have to be able to consider whether an operation is executing. If it is, then you can likely ignore the change because validation will occur when the history notifies listeners of the new operation or of any failure.
- triggering validation by running a job periodically has the same issue...you wouldn't want to be checking the validity of the most recent op if there is another op in progress.
See also bug #126016, which addresses the need for knowing whether an operation is executing, and which one it is.
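The executing-operation guard described above can be sketched like this (ValidationTrigger is a hypothetical name): resource changes that arrive while the history reports an operation in progress are ignored, since validation will be driven by the history's own notifications in that case.

```java
// Hypothetical guard: decides whether a resource change should trigger
// validation of the topmost operation.
class ValidationTrigger {
    private boolean operationInProgress; // set from history notifications
    int validations = 0;

    void operationStarted() { operationInProgress = true; }
    void operationFinished() { operationInProgress = false; }

    // Called for each resource change delta.
    void resourceChanged() {
        if (operationInProgress) {
            return; // change was caused by an executing operation; skip it
        }
        validations++; // would check the topmost op for validity here
    }
}
```

This matches the marker example above: deleting a file as part of a new undoable operation does not invalidate the marker op mid-execution, because the delta is ignored while the operation runs.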
Released an initial cut at this to HEAD >20061012. Here is the current strategy, which may be refined during this release as needed:
- a WorkspaceUndoMonitor watches resource change events
- every nth change (currently 10, we can tune this as we go), it checks the "topmost" workspace undo object to see if it is still valid. If it is not, it flushes the history.
- the monitor knows if any operation is undoing/redoing/executing in the operation history by listening to operation history notifications. It ignores resource changes caused by operation history events (to avoid the problem of nullifying the topmost op while a new one is executing).
- successful execution, undo, or redo will reset the workspace change count.
This code is currently exercised by the marker undo support and is being further tested in my implementation of full workspace undo (not yet released but coming soon)...
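The released strategy can be condensed into a small self-contained model (UndoMonitorModel is a hypothetical simplification of the WorkspaceUndoMonitor described above, with the change threshold of 10 taken from the comment): count resource change events, check the topmost workspace undo object every nth change, flush on invalidity, and reset the count after a successful execute/undo/redo.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Simplified model of the WorkspaceUndoMonitor strategy described above.
class UndoMonitorModel {
    static final int CHANGE_THRESHOLD = 10; // "every nth change", currently 10

    final Deque<String> history = new ArrayDeque<>();
    boolean topIsValid = true; // pretend validity of the topmost undo object
    private int changeCount = 0;

    // Called for each resource change event not caused by the history itself.
    void resourceChanged() {
        if (++changeCount < CHANGE_THRESHOLD) return;
        changeCount = 0;
        if (!history.isEmpty() && !topIsValid) {
            history.clear(); // flush the workspace undo history
        }
    }

    // Successful execution, undo, or redo resets the workspace change count.
    void operationSucceeded() { changeCount = 0; }
}
```

Running through the canned case above: with an invalid top operation, nine changes leave the history untouched, and the tenth change triggers the check and flushes it.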
verified in I20061031-0656, WinXP. Running the WorkspaceOperationTests suite, with tracing options on, you can see the WorkspaceUndoMonitor checking and flushing the history every 10 workspace operations, resetting the change count when new ops are added to the history, etc. Also, while manually testing resource navigator undo with tracing options on, you can see the monitor checking the history. Also tried it with some canned manual cases, such as deleting the file that contained a marker, performing some workspace changes, and seeing a previous op that updated the deleted marker become invalid and get flushed.