System config:
- 2 GHz computer, 1 GB of memory
- Workspace on a local drive
- Started the tool with -vmargs -Xms1024M

Imported 6 identical Rose models, containing 5000 classes each plus views on diagrams. Each model is 26 MB on disk. Each was imported into its own project (6 projects). Total workspace size in bytes was slightly below 160 MB.

After the import of all these models was completed, help indexing took the following times:
- 33%: 1h20
- 69%: 2h26
- 100%: 3h26 (this is not acceptable)

I tested this again using the already imported models. I flushed the previously created index (to do this, close the tool and delete the folder configuration/org.eclipse.help.base).

After opening two models (26 MB + 26 MB = 52 MB), slightly above the size limit referred to in defect RATLC00531683, the indexing speed is appropriate.

Creating the index after opening the 6 models (total size slightly below 160 MB), where the opening took about half an hour to complete, had the following result: the indexing went rather quickly (about 5 minutes) to 8% and stayed there for a while... until I gave up.

What I believe is happening is this: the indexing works normally as long as some free memory is available. Importing a Rose model takes more memory, and that added memory use probably caused the indexing problem to show up earlier. When the imported models are opened from a clean startup, the indexing starts up fine, but the memory used by the application soon reaches the limit of available memory, causing indexing to slow down as well.

Conclusion: the indexing speed problem appears regardless of the way the models are loaded into the workspace, but not always at the same level. It happens at a lower limit when models are imported from Rose, since the Rose import requires additional memory. Users dealing with large artifacts in their workspace will suffer from this. I have always noted performance degradation when dealing with artifacts whose total size exceeds roughly 60 MB, depending on the amount of memory available and the command-line options used.

If this cannot be fixed, we need something to warn the user that the tool has reached the memory limit and that performance will suffer. (This could be a pop-up or a warning on the console; a minimal sketch of such a check follows.)
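For illustration only, a warning of this kind could be built on the standard java.lang.Runtime heap queries. This is a hypothetical sketch, not existing product code; the 90% threshold and the console output are assumptions:

    // Hypothetical sketch of the warning the reporter asks for: check heap
    // usage against the JVM limit and warn when it gets close.
    public class MemoryWarning {
        // Arbitrary threshold chosen for this sketch, not a product value.
        private static final double THRESHOLD = 0.90;

        public static void checkAndWarn() {
            Runtime rt = Runtime.getRuntime();
            long max = rt.maxMemory();                      // heap ceiling (-Xmx)
            long used = rt.totalMemory() - rt.freeMemory(); // heap currently in use
            if ((double) used / max > THRESHOLD) {
                // A real fix could show a pop-up instead of writing to the console.
                System.err.println("Warning: heap usage at " + (100 * used / max)
                        + "% of maximum; performance may degrade.");
            }
        }
    }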
Help is independent of the workspace. I think this is an out-of-memory issue. What probably happens is that the models eat up all or almost all of the memory, leaving not enough VM memory to perform help indexing. An out-of-memory error occurs somewhere during indexing; I recommend shutting down the workbench and restarting in such a case. Otherwise the indexing process is interrupted and immediately restarted. The out-of-memory error could be handled better, but I am not sure that could be done easily, and restarting the workbench would most likely be required anyway.
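The handling mentioned above could look roughly like the following. This is a sketch under assumptions: IndexingGuard and the Runnable standing in for the indexing step are hypothetical, since the actual indexing entry point is not named in this report:

    // Hypothetical guard around the indexing step. The Runnable stands in
    // for the real indexing code, which this report does not identify.
    public class IndexingGuard {
        public static void indexWithGuard(Runnable indexingStep) {
            try {
                indexingStep.run();
            } catch (OutOfMemoryError e) {
                // Report the failure rather than silently restarting the pass;
                // restarting the workbench is still the recommended recovery.
                System.err.println("Help indexing ran out of memory; "
                        + "please restart the workbench.");
            }
        }
    }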
Profiling memory during help indexing reveals that it uses between 3 and 18 MB of heap (usage constantly climbs toward 18 MB and then drops, probably at the point where data is flushed to disk). I see no leaks; all the memory is given back. Resolving as works for me. I suggest profiling your models, as the size of a model on disk does not necessarily correspond to the size it takes in memory.
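For reference, the kind of heap measurement described here can be approximated with the standard Runtime API. This is a generic sampling sketch, not the profiler actually used for these numbers:

    // Generic heap sampler: prints used heap once per second, so the pattern
    // described above (growth toward ~18 MB, then a drop when data is flushed
    // to disk) would show up in the output.
    public class HeapSampler {
        public static void main(String[] args) throws InterruptedException {
            Runtime rt = Runtime.getRuntime();
            while (true) {
                long usedMb = (rt.totalMemory() - rt.freeMemory()) / (1024 * 1024);
                System.out.println("Used heap: " + usedMb + " MB");
                Thread.sleep(1000);
            }
        }
    }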