RE: [platform-core-dev] How much memory can a Workspace take?
Hi John,
I did in fact store away the entire .metadata of my case, but there are no large .snap files; they are all just 16 bytes. The largest files in the org.eclipse.core.resources metadata are
.root/16.tree (14 MBytes)
projectFoo/.markers (2 MBytes)
I also have a CDT PDOM which is 383 MBytes large, but there
doesn't seem to be anything CDT-like in the heap dump.
I agree that we shouldn't just blindly trade space for speed. But we do need to keep an eye on transient memory usage too, even if it is garbage collected after a while. The purpose of the ElementTree seems to be storing data in a space-efficient manner; the StringPool further underscores this.
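For illustration, here is a minimal sketch of the kind of canonicalization such a pool performs (a toy example of mine, not the actual StringPool implementation):

    import java.util.HashMap;
    import java.util.Map;

    // Toy string pool: equal strings collapse to one canonical instance,
    // so repeated path segments ("src", "org", ...) share memory.
    public class SimpleStringPool {
        private final Map<String, String> pool = new HashMap<String, String>();

        public String add(String s) {
            String canonical = pool.get(s);
            if (canonical == null) {
                pool.put(s, s);
                canonical = s;
            }
            return canonical;
        }
    }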
I'd really like to see some performance data for James' patch,
but as I have mentioned on the bug I can't run the
performance tests on my machine...
Cheers,
--
Martin Oberhuber, Senior Member of Technical Staff, Wind River
Target Management Project Lead, DSDP PMC Member
Hi Martin, just catching up on this after being away for the past week. I see there has been some useful discussion in the bug report about some micro-optimizations we could make during path manipulation, which is great. Obviously we have to ensure we don't just trade off speed for space and create a new problem elsewhere, and as you commented, creating new strings can result in increased memory usage due to reduced char[] sharing.
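To make that concrete, here is a small sketch of the effect, assuming a JVM where String.substring shares the parent string's backing char[] (true of the JDKs current at the time of writing):

    public class SharingDemo {
        public static void main(String[] args) {
            String path = "/projectFoo/src/org/example/Foo.java";
            // substring() shares path's backing char[] -- no new char data:
            String segment = path.substring(1, 11);   // "projectFoo"
            // Creating a new String copies the characters: the sharing is
            // lost, though a large parent array can then be collected once
            // the original string is unreferenced:
            String copy = new String(segment);
            System.out.println(segment + " / " + copy);
        }
    }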
I just wanted to point out another angle on this original problem. I think the core problem here is not just low-level string manipulation, but rather that this particular case is an extreme worst-case scenario involving crash recovery. When recovering from a crash we apply a series of incremental tree snapshots that were recorded to disk in the previous session. If the previous session had run for a very long time or had a large number of changes, the problem could be related to applying such a large chain of deltas to the tree. You could check the size of the .metadata\.plugins\org.eclipse.core.resources\.snap file to see if it is very large in your case.
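For example, a quick way to check (just a sketch; the workspace path below is a placeholder for your own):

    import java.io.File;

    public class SnapSize {
        public static void main(String[] args) {
            // Placeholder path -- point this at your own workspace.
            File snap = new File("/path/to/workspace/.metadata/.plugins/"
                    + "org.eclipse.core.resources/.snap");
            System.out.println(snap.exists()
                    ? snap + " is " + snap.length() + " bytes"
                    : snap + " not found");
        }
    }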
The snapshot delta trees would eventually get garbage collected, but there is going to be a memory peak during/after recovery while those trees are still retained. It's possible this problem could be addressed by re-rooting the tree after restoring the snapshots but before doing the refresh (SaveManager#restoreSnapshots is where the snapshots are loaded during crash recovery). Once the tree is re-rooted, all of the old delta trees can be garbage collected before the refresh occurs (refresh is also memory intensive). Some kind of fix in this direction may allow us to completely avoid the big memory peak that resulted in the OOME in this case. I haven't had a chance to try this out, but it would be an interesting thing to explore.
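Roughly, the idea would look something like this (a hypothetical sketch only; apart from restoreSnapshots itself, the names below are invented for illustration and are not the actual code):

    // Hypothetical sketch: applySnapshot(), rerootTree() and the accessors
    // are invented for illustration, not actual Eclipse internals.
    protected void restoreSnapshots() {
        ElementTree tree = workspace.getElementTree();
        // Apply the chain of incremental deltas recorded in the last
        // session; each application layers one more delta tree on top:
        for (Snapshot snapshot : readSnapshotsFromDisk()) {
            tree = applySnapshot(tree, snapshot);
        }
        // Re-root: collapse the delta chain into a single complete tree so
        // the intermediate delta trees become unreachable and can be
        // garbage collected before the (also memory-intensive) refresh runs:
        tree = rerootTree(tree);
        workspace.setElementTree(tree);
    }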
I suggest entering a separate bug for this (OOME during crash recovery), to
keep it separate from the existing generic "tree is too big" bug.
John
"Oberhuber, Martin"
<Martin.Oberhuber@xxxxxxxxxxxxx>
Sent by: platform-core-dev-bounces@xxxxxxxxxxx
09/29/2009 06:30 AM
Please respond to: "Eclipse Platform Core component developers list." <platform-core-dev@xxxxxxxxxxx>
To: "Eclipse Platform Core component developers list." <platform-core-dev@xxxxxxxxxxx>
Subject: [platform-core-dev] How much memory can a Workspace take?
Hi fellow committers,
I got a report about an OutOfMemoryError generated during a Workspace Refresh operation. When looking at the .hprof file with the MemoryAnalyzer, it turned out that pretty much all the memory was taken by core/Workspace alone:
- 100 MB (53%) in org.eclipse.core.internal.resources.Workspace (occupied by ElementTree)
- 52 MB (27%) in org.eclipse.core.internal.dtree.DeltaDataTree (this is a separate, additional DeltaDataTree)
- 37 MB all the rest
The workspace was arguably very large, but still I am surprised
1. that the numbers are that large,
2. that in addition to the Workspace there is such a large additional DeltaDataTree,
3. that I see an OutOfMemoryError reported with 189 MB total where I used -vmargs -Xmx320m ... what happened to the extra 131 MB?
Do my numbers sound reasonable, or could there be a bug that leads the Workspace to take an excessive amount of memory? How much memory do we expect a workspace to take for, say, 50000 files? How much *additional* memory may a workspace delta take that is generated during a Refresh? Would any additional investigation using the Coretools make sense at this point?
I can make the .hprof file available on request
(it's a 200MB ZIP file).
Many thanks for any pointers!
Cheers,
--
Martin Oberhuber, Senior Member of Technical Staff, Wind River
Target Management Project Lead, DSDP PMC Member
http://www.eclipse.org/dsdp/tm
_______________________________________________
platform-core-dev mailing list
platform-core-dev@xxxxxxxxxxx
https://dev.eclipse.org/mailman/listinfo/platform-core-dev