While reading a generic CTF trace in TraceCompass, my Java VM runs out of memory, causing Eclipse to shut down. This is on Eclipse Neon.3 with the latest release of TraceCompass.
The error:

Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x000000078f900000, 1048576, 0) failed; error='Cannot allocate memory' (errno=12)
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 1048576 bytes for committing reserved memory.
# An error report file with more information is saved as:
# /home/rocky/eclipse/cupid-dev/eclipse/hs_err_pid14944.log
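One thing I plan to try is raising the JVM heap limit in eclipse.ini. My understanding (which may be wrong) is that something like the following in the -vmargs section would give Eclipse more headroom; the exact values here are just a guess on my part:

```
-vmargs
-Xms512m
-Xmx4096m
```

That said, since the failure is a native mmap error rather than an OutOfMemoryError, I am not sure whether the heap ceiling is actually the problem here.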
I am wondering whether I am pushing TraceCompass further than intended, or whether something else is going on. The trace itself is custom, generated using the Babeltrace C library, and it seems to read fine from the command line using Babeltrace.
When I import it into TraceCompass, the import appears to succeed, but after clicking around the trace event log for a few seconds, the JVM fails completely with the error above and exits.
The trace is CTF with 50 streams. The stream files are between 4MB and 13MB each, and the total size of all streams is about 350MB. I assume the 350MB of binary event data grows substantially when converted into Java objects, but even so this does not seem unreasonable.
However, I wanted to check whether it is clear right off the bat that I am pushing TraceCompass past its intended use, or whether these kinds of numbers should be reasonable for the tool.
It may be that the underlying issue is mostly due to the
large number of streams, not necessarily the size of the
data, but that is a complete guess with little foundation.
There do appear to be a lot of Java threads listed in the hs_err_pidXXX.log file, so I am also wondering whether the thread count is getting out of hand because of the number of streams.
If it would help, I am happy to provide the full CTF trace so you can try to reproduce the crash, along with the hs_err_pidXXX.log file.
Thanks,
Rocky