PDE caches the state and extension points of target plugins from one session to the next, which yields a huge performance gain. However, we don't cache the state/extension points of workspace plugins, so when starting a new session we always parse all the workspace plugins to build the state. This is unnoticeable for a regular workspace, as most of us keep a small number of projects in the workspace and the rest in the target. But it is a big performance hit in monster workspaces, where the user insists on importing the entire universe into the workspace.
I will shortly attach a profile of the time PDE spends (14% of total CPU time) recreating the workspace state from scratch on a monster workspace. All of that time is spent reading files.
Created attachment 20018 [details]: recreating the workspace state

Caching the workspace state would significantly improve the situation on a large workspace. The cost would be negligible on a small workspace, as we would only be dealing with a handful of manifest files (at most 2 per plugin).
Done. With this fix, here are some numbers that quantify how long PDE takes to initialize all of its plugins (workspace + target) when starting a new session on an existing workspace/target that has not changed since the last shutdown:

JRE: Sun 1.4.2_05
Big workspace (995 workspace plugins, 86 target plugins): 1.2 seconds
Full SDK workspace (86 workspace plugins, 86 target plugins): 360 ms

Notes: Messy workspaces are not and must not be cached by PDE, as it would then be very hard to get rid of such a bad state. Therefore, we do not cache the workspace state if the workspace contains one or more of the following:
1. a plugin with a syntactically invalid MANIFEST.MF/plugin.xml
2. a plugin whose bundle description is null
3. a plugin with no id/symbolic name

Blooper: when measuring numbers for the new caching story, I was initially getting relatively high numbers when reading the cached information. Reading the extensions cache was taking 1775 ms, which is alarmingly high for a 420K XML file. So why so long? When reading the XML file, I was using the Element.getElementsByTagName() parser API because I did not want to deal with (and skip) empty text nodes. That call was the bottleneck. When I switched to Element.getChildNodes() and looped through the children (skipping text nodes), the time went down from 1775 ms to 100 ms.
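For illustration, here is a minimal sketch (not the actual PDE code; the class name, the "extension" tag, and the processing stubs are placeholders) contrasting the two DOM traversal approaches on a parsed document:

import java.io.File;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.Node;
import org.w3c.dom.NodeList;

public class ExtensionCacheReader {

    // Slow variant: getElementsByTagName() searches the whole subtree,
    // which adds up when it is called for many parent elements.
    static void readSlow(Element parent) {
        NodeList extensions = parent.getElementsByTagName("extension");
        for (int i = 0; i < extensions.getLength(); i++) {
            Element extension = (Element) extensions.item(i);
            // process extension ...
        }
    }

    // Fast variant: walk the direct children once and skip text nodes.
    static void readFast(Element parent) {
        NodeList children = parent.getChildNodes();
        for (int i = 0; i < children.getLength(); i++) {
            Node child = children.item(i);
            if (child.getNodeType() != Node.ELEMENT_NODE)
                continue; // skip whitespace/text nodes
            Element element = (Element) child;
            // process element ...
        }
    }

    public static void main(String[] args) throws Exception {
        DocumentBuilder builder =
            DocumentBuilderFactory.newInstance().newDocumentBuilder();
        Document doc = builder.parse(new File(args[0]));
        readFast(doc.getDocumentElement());
    }
}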