build I20040304

1) Load all of the Platform UI projects plus SWT and SWT win32.
2) Opened Preferences and set the Java > Compiler > Unused Imports option to Error.
3) A dialog prompts for a rebuild due to the change; say Yes (or OK?).
4) Out of memory error during the rebuild.

!MESSAGE Internal Error
!STACK 0
java.lang.reflect.InvocationTargetException
  at org.eclipse.jface.operation.ModalContext.run(ModalContext.java:283)
  at org.eclipse.jface.dialogs.ProgressMonitorDialog.run(ProgressMonitorDialog.java:394)
  at org.eclipse.jdt.internal.ui.preferences.OptionsConfigurationBlock.doFullBuild(OptionsConfigurationBlock.java:428)
  at org.eclipse.jdt.internal.ui.preferences.OptionsConfigurationBlock.performOk(OptionsConfigurationBlock.java:417)
  at org.eclipse.jdt.internal.ui.preferences.CompilerPreferencePage.performOk(CompilerPreferencePage.java:74)
  at org.eclipse.jface.preference.PreferenceDialog.okPressed(PreferenceDialog.java:777)
  at org.eclipse.jface.preference.PreferenceDialog.buttonPressed(PreferenceDialog.java:210)
  at org.eclipse.ui.internal.dialogs.WorkbenchPreferenceDialog.buttonPressed(WorkbenchPreferenceDialog.java:75)
  at org.eclipse.jface.dialogs.Dialog$1.widgetSelected(Dialog.java:402)
  at org.eclipse.swt.widgets.TypedListener.handleEvent(TypedListener.java:89)
  at org.eclipse.swt.widgets.EventTable.sendEvent(EventTable.java:82)
  at org.eclipse.swt.widgets.Widget.sendEvent(Widget.java:833)
  at org.eclipse.swt.widgets.Display.runDeferredEvents(Display.java:2352)
  at org.eclipse.swt.widgets.Display.readAndDispatch(Display.java:2033)
  at org.eclipse.jface.window.Window.runEventLoop(Window.java:638)
  at org.eclipse.jface.window.Window.open(Window.java:618)
  at org.eclipse.ui.internal.OpenPreferencesAction.run(OpenPreferencesAction.java:72)
  at org.eclipse.jface.action.Action.runWithEvent(Action.java:881)
  at org.eclipse.jface.action.ActionContributionItem.handleWidgetSelection(ActionContributionItem.java:550)
  at org.eclipse.jface.action.ActionContributionItem.access$2(ActionContributionItem.java:502)
  at org.eclipse.jface.action.ActionContributionItem$5.handleEvent(ActionContributionItem.java:435)
  at org.eclipse.swt.widgets.EventTable.sendEvent(EventTable.java:82)
  at org.eclipse.swt.widgets.Widget.sendEvent(Widget.java:833)
  at org.eclipse.swt.widgets.Display.runDeferredEvents(Display.java:2352)
  at org.eclipse.swt.widgets.Display.readAndDispatch(Display.java:2033)
  at org.eclipse.ui.internal.Workbench.runEventLoop(Workbench.java:1509)
  at org.eclipse.ui.internal.Workbench.runUI(Workbench.java:1480)
  at org.eclipse.ui.internal.Workbench.createAndRunWorkbench(Workbench.java:257)
  at org.eclipse.ui.PlatformUI.createAndRunWorkbench(PlatformUI.java:139)
  at org.eclipse.ui.internal.ide.IDEApplication.run(IDEApplication.java:48)
  at org.eclipse.core.internal.runtime.PlatformActivator$1.run(PlatformActivator.java:260)
  at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:173)
  at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:106)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
  at java.lang.reflect.Method.invoke(Method.java:324)
  at org.eclipse.core.launcher.Main.basicRun(Main.java:305)
  at org.eclipse.core.launcher.Main.run(Main.java:745)
  at org.eclipse.core.launcher.Main.main(Main.java:713)
Caused by: java.lang.OutOfMemoryError
Moving to JDT for investigation. The stack trace shows only the one thread, but the problem seems like it must lie somewhere else. Mike, how much memory were you running with? Is this repeatable?
This is how I start Eclipse:

  eclipse -vm d:\jdk1.4.1_01\jre\bin\java -data C:\workspaces\HEAD -debug

No extra memory options. I have not had this happen again yet.
This must be a normal memory peak during the compile. We could reduce the number of files built at once so as to lower the peak, at the cost of a longer build, but the batch size would need to be tuned to the available memory.
Sounds like another place where knowing when memory was constrained would be useful.
Changing the memory requirement from -Xmx256Mb back to the defaults is a major issue for us, since compiling and searching are memory intensive. We would have to revisit all our algorithms/heuristics to take this new requirement into account, and this is not compatible with the 1.5 effort.
relax. at this point, i've just been floating the idea that we need to think about working more effectively in low memory configurations. i have *not* suggested that we go back to the defaults. honestly though, to me it seems that if you are implementing something that takes a lot of memory (as this appears to) then you need to at least think about watching for low memory conditions. remember: if the vm runs out of memory then you can not trust *anything* about the memory state afterwards. it is essentially a completely unrecoverable state. can you estimate how much memory you are likely to use for a given compile?
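A minimal sketch of what such a low-memory check could look like, using only the standard Runtime API. The class name, threshold logic, and batch-size number are invented for illustration; this is not actual Eclipse code, just one way to "watch for low memory conditions" before starting a memory-hungry operation rather than recovering after an OutOfMemoryError:

```java
// Sketch: estimate remaining heap headroom before starting a big batch of
// work, so the batch can be split or deferred instead of blowing the heap.
public class LowMemoryCheck {

    // Free heap if the VM grows all the way to its -Xmx limit:
    // maxMemory() minus the bytes currently in use.
    static long estimatedFreeHeap() {
        Runtime rt = Runtime.getRuntime();
        long used = rt.totalMemory() - rt.freeMemory();
        return rt.maxMemory() - used;
    }

    // Decide whether a batch with the given estimated footprint is safe
    // to start now. The caller supplies its own footprint estimate.
    static boolean safeToStart(long estimatedBatchBytes) {
        return estimatedFreeHeap() > estimatedBatchBytes;
    }

    public static void main(String[] args) {
        long tenMb = 10L * 1024 * 1024;
        System.out.println("estimated free heap: " + estimatedFreeHeap());
        System.out.println("safe for a 10Mb batch: " + safeToStart(tenMb));
    }
}
```

The hard part, as the comment notes, is producing a credible per-batch footprint estimate in the first place; the check itself is cheap.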
All I tried to explain (still relaxed) is that we made our algorithms behave on VMs with a decent amount of memory (-Xmx256Mb). For the past few years, we have instructed our users to add this command line arg whenever they got an OutOfMemoryError. This is why the memory requirement is a very delicate question for us. We are bounding some tasks already (compile 1000 files at once, search 500 files at once, ...). We could make these numbers smaller, but it will take real investigation to reach an acceptable compromise (compiling 500 files at once means performing 2 separate compilations, thus loading/parsing common referenced classfiles twice, opening zips twice, etc.).
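The bounding described above amounts to partitioning the file list into fixed-size batches. A rough sketch of that strategy (class and method names are invented for illustration, not JDT's actual code), which also shows why smaller batches mean more compilations over the same shared inputs:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: cap the peak working set by splitting a work list into
// fixed-size batches (e.g. 1000 files per compile, 500 per search).
public class BatchPartitioner {

    // Split items into consecutive batches of at most batchSize elements.
    // subList returns lightweight views, so no elements are copied.
    static <T> List<List<T>> partition(List<T> items, int batchSize) {
        List<List<T>> batches = new ArrayList<>();
        for (int i = 0; i < items.size(); i += batchSize) {
            batches.add(items.subList(i, Math.min(i + batchSize, items.size())));
        }
        return batches;
    }

    public static void main(String[] args) {
        List<Integer> files = new ArrayList<>();
        for (int i = 0; i < 2500; i++) files.add(i);
        // 2500 files in batches of 1000 -> 3 compilations instead of 1,
        // each reopening the same referenced classfiles and zips.
        System.out.println(partition(files, 1000).size()); // prints 3
    }
}
```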
Ran some build tests (with jdk 1.4.2) on Jeff's big eclipse workspace (based on a 2.1 drop: 134 projects, 16,240 .java files, 12,436 .class files) and watched the heap size and peak.

The heap starts at 44Mb with the Java perspective open but no editors. Why so much? Don't know... an empty workspace with the Java perspective open takes 6Mb. That's 38Mb to put 134 projects in the package explorer! With the Resource perspective open instead of the Java perspective, the heap is at 30Mb... closing it and opening the Java perspective takes us up to 44Mb... so what is in the 24Mb (30-6) needed to open the Resource perspective?

After each full build and a GC, the heap settles down to 59Mb, regardless of the peak. Need to look into the retained 15Mb... build states?

Could not complete the full build with the default 64Mb, or with 75Mb; it ran out of memory. With 80Mb, the heap grew to 75Mb and did most of the build between 67 and 75Mb; it took 4:04. With 96Mb, the heap grew to 89Mb and the build took 3:30. With 160/250Mb, the heap grew to 128/129Mb and took 3:17-3:22. So we definitely reach a point where the VM won't use more memory even though it's available.

I also varied the number of files compiled in each compile loop from 100 to 200 to 500 to our normal 1000 (only a few projects have >250 source files... jdt.core @ 813 and ui.workbench @ 697 are the biggest)... memory usage did not drop, but build times went up.

If we could reduce the 24-38Mb just to start up the workspace, we could possibly build in the default 64Mb, but for performance reasons we should continue to recommend at least 128Mb.
The Resource tree is likely taking most of the ~24Mb (30-6) growth (the delta to open the big workspace with the Resource perspective). 6Mb of the 12-14Mb (42/44-30) is from the AllTypesCache. The build states and reference info do not account for any of the other 6-8Mb, since a build hasn't happened yet... need to track this down. But the states and reference info do account for 7 of the 15Mb growth after the first full build is complete (actually it looks to be only 11Mb of growth... need to wait a minute for another GC to reclaim more)... need to find the other ~5Mb.
With today's M9 build (20040519) on jdk1.4.2, I see the heap at:
- 6Mb for the empty workspace open on the Java perspective
- 19Mb for the big workspace open on the Resource perspective
- 37Mb for the big workspace open on the Java perspective

Tried some full build tests with -Xmx128M:
- The first full clean/build started with 37Mb allocated and peaked at 126Mb. It took 3:30, and after GC the heap was at 53Mb; after the periodic workspace save, it dropped to 50Mb.
- A second full clean/build started with 50Mb allocated. It took 3:19, and after GC the heap was again at 53Mb, dropping to 50Mb after the periodic workspace save.

So based on these numbers, I'm seeing 13Mb (19-6) for the resource tree and unknowns, 18Mb (37-19) for the AllTypesCache and unknowns, and 13Mb (50-37) for build states, reference info and unknowns.
Should also add:
- 4Mb for the empty workspace open on the Resource perspective
So that would be:
- 15Mb (19-4) for the resource tree and unknowns
- 16Mb (37-6-15) for the AllTypesCache and unknowns
- 13Mb (50-37) for build states, reference info and unknowns
Breaking down the 12-13Mb growth from a full build... We end up compiling 8,469 source files into 13,690 .class files. The earlier figure in comment 8 of 16,240 source files includes all the source files used by the tests, which are not compiled during a build. We create and keep one ReferenceCollection per source file. Each ReferenceCollection has 2 arrays of interned compound char[][] and simple char[]. Adding it up, I'm seeing 3Mb for the ReferenceCollections (2) and intern sets (1). A decent chunk of the build growth (4-5Mb) is from the JavaModelCache, which is being populated during builds from ManifestConsistencyChecker.validJava(), which calls JavaProject.findType() for tons of types!
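For illustration, the interning behind those shared char[] names could be sketched like this. It is a simplified stand-in, not JDT's actual intern-set code: the point is that thousands of ReferenceCollections end up pointing at one canonical array per distinct name instead of each holding a private copy:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: an intern set for simple names stored as char[].
// The first array seen for a given name becomes the canonical copy;
// later callers get that same array back and can drop their duplicate.
public class NameInterner {

    private final Map<String, char[]> pool = new HashMap<>();

    // Return the canonical char[] for this name, registering it on first use.
    char[] intern(char[] name) {
        String key = new String(name); // value-based lookup key
        char[] canonical = pool.get(key);
        if (canonical == null) {
            pool.put(key, name);
            canonical = name;
        }
        return canonical;
    }

    public static void main(String[] args) {
        NameInterner interner = new NameInterner();
        char[] a = interner.intern("java.lang.Object".toCharArray());
        char[] b = interner.intern("java.lang.Object".toCharArray());
        // Both callers now hold the identical array, not two equal copies.
        System.out.println(a == b); // prints true
    }
}
```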
Can someone please tell me why anyone would write code like this!!!!

  IType baseTypeElement = javaProject.findType(baseType);
  if (baseTypeElement != null) {
  }
  if (baseInterface != null) {
      IJavaElement baseInterfaceElement = javaProject.findType(baseInterface);
      if (baseInterfaceElement != null) {
      }
  }

And no, I didn't remove the contents of the blocks...
Please enter a defect against PDE for the ManifestConsistencyChecker. It shouldn't programmatically cause the model to be largely populated for free.
ManifestConsistencyChecker.validJava() is populating the JavaModel to the tune of ~3Mb... see bug 63438.
Released a change that reclaims 900K from the build states by sharing some char[] arrays using String.substring().
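For context: on the 1.4-era String implementation, substring() returned a String that shared the parent's backing char[] (just a new offset and length into the same array), which is what makes this trick save memory. Modern JVMs (since 7u6) copy the characters instead, so the sketch below, with an invented class name, shares the array explicitly to show the idea:

```java
// Sketch: many "slices" over one shared char[], so deriving several
// names from one long string stores the characters only once.
public class SharedSlice {

    final char[] chars;  // shared backing array (never copied)
    final int offset;
    final int length;

    SharedSlice(char[] chars, int offset, int length) {
        this.chars = chars;
        this.offset = offset;
        this.length = length;
    }

    // Materialize the slice only when a real String is actually needed.
    public String toString() {
        return new String(chars, offset, length);
    }

    public static void main(String[] args) {
        char[] backing = "org.eclipse.jdt.core".toCharArray();
        SharedSlice pkg = new SharedSlice(backing, 0, 11);
        SharedSlice all = new SharedSlice(backing, 0, backing.length);
        System.out.println(pkg);                      // prints org.eclipse
        System.out.println(pkg.chars == all.chars);   // one shared array: true
    }
}
```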
Status: some fixes made. Confirm that we are not populating the Java model. Open other defects (if discovered). Must assess by RC3.
Tests with RC2, default memory: there is no longer an Out Of Memory on the initial test case (i.e. with 12 org.eclipse.ui.* + 2 org.eclipse.swt.* projects). It also works with 56 projects loaded in my workspace (the pde-ui project and all its prerequisites...). However, OOM still happens while performing a full build on 87 loaded projects in the workspace... Setting this bug to fixed, as this clearly demonstrates that we've improved performance on the initial test case. If the number of projects is too large, then in the end there is no solution other than applying the initial workaround and increasing the VM heap size...
I confirm that the Java model is populated *only* when Plug-in Development > Compiler > Unknown classes is activated. Although 2 unnecessary findType(...) calls were removed in the validateJava(...) method of ManifestConsistencyChecker, there is still one call remaining, despite what is said in bug 63438 comment 2... I will run some additional tests to measure the memory still consumed by this remaining call and append the results to that bug...
Note that when the PDE compiler option is activated, the OutOfMemory error happens during a full build in a workspace with 56 projects. The limit with this option is 46 projects (the jdt-ui plug-in plus all its prerequisites)... I will reopen bug 63438, as the PDE option still populates the Java model and so consumes memory unnecessarily...
Verified for 3.0RC3 I200406180800