Bugzilla – Attachment 289371 Details for
Bug 583209
Migrate MAT Content from Eclipse Wiki
MAT Content from Eclipse Wiki
Eclipsepedia-20240429142613.xml (text/xml), 181.51 KB, created by
Krum Tsvetkov
on 2024-04-29 10:34:01 EDT
><mediawiki xmlns="http://www.mediawiki.org/xml/export-0.10/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.mediawiki.org/xml/export-0.10/ http://www.mediawiki.org/xml/export-0.10.xsd" version="0.10" xml:lang="en"> > <siteinfo> > <sitename>Eclipsepedia</sitename> > <dbname>my_wiki</dbname> > <base>https://wiki.eclipse.org/Main_Page</base> > <generator>MediaWiki 1.26.4</generator> > <case>first-letter</case> > <namespaces> > <namespace key="-2" case="first-letter">Media</namespace> > <namespace key="-1" case="first-letter">Special</namespace> > <namespace key="0" case="first-letter" /> > <namespace key="1" case="first-letter">Talk</namespace> > <namespace key="2" case="first-letter">User</namespace> > <namespace key="3" case="first-letter">User talk</namespace> > <namespace key="4" case="first-letter">Eclipsepedia</namespace> > <namespace key="5" case="first-letter">Eclipsepedia talk</namespace> > <namespace key="6" case="first-letter">File</namespace> > <namespace key="7" case="first-letter">File talk</namespace> > <namespace key="8" case="first-letter">MediaWiki</namespace> > <namespace key="9" case="first-letter">MediaWiki talk</namespace> > <namespace key="10" case="first-letter">Template</namespace> > <namespace key="11" case="first-letter">Template talk</namespace> > <namespace key="12" case="first-letter">Help</namespace> > <namespace key="13" case="first-letter">Help talk</namespace> > <namespace key="14" case="first-letter">Category</namespace> > <namespace key="15" case="first-letter">Category talk</namespace> > </namespaces> > </siteinfo> > <page> > <title>MemoryAnalyzer</title> > <ns>0</ns> > <id>12371</id> > <revision> > <id>446547</id> > <parentid>443430</parentid> > <timestamp>2022-12-28T11:12:00Z</timestamp> > <contributor> > <username>Erik.brangs.gmx.de</username> > <id>10039</id> > </contributor> > <minor/> > <comment>Change some links to HTTPS</comment> > <model>wikitext</model> > <format>text/x-wiki</format> 
> <text xml:space="preserve" bytes="10191">== About ==
>The [https://eclipse.org/mat Eclipse Memory Analyzer] tool (MAT) is a fast and feature-rich heap dump analyzer that helps you find '''memory leaks''' and analyze high '''memory consumption''' issues.
>
>With Memory Analyzer one can easily
>* find the biggest objects, as MAT provides a reasonable accumulated size (retained size)
>* explore the object graph, both inbound and outbound references
>* compute paths from the garbage collector roots to interesting objects
>* find memory waste, like redundant String objects, empty collection objects, etc.
>
>[[Category:Tools Project]][[Category:Memory Analyzer]]
>
>== Getting Started ==
>
>=== Installation ===
>
>See the [https://eclipse.org/mat/downloads.php download page] for installation instructions.
>
>=== Basic Tutorials ===
>
>Both the [https://help.eclipse.org/index.jsp?topic=%2Forg.eclipse.mat.ui.help%2Fgettingstarted%2Fbasictutorial.html Basic Tutorial] chapter in the MAT documentation and the [https://www.vogella.com/articles/EclipseMemoryAnalyser/article.html Eclipse Memory Analyzer Tutorial] by Lars Vogel are a good first read if you are just starting with MAT.
>
>=== Further Reading ===
>
>Check [[MemoryAnalyzer/Learning Material]]. There you will find a collection of presentations and web articles on Memory Analyzer, which are also a good resource for learning. The pages [https://help.eclipse.org/topic/org.eclipse.mat.ui.help/tasks/queryingheapobjects.html Querying Heap Objects (OQL)], [https://help.eclipse.org/topic/org.eclipse.mat.ui.help/reference/querymatrix.html OQL Syntax] and [[MemoryAnalyzer/OQL]] also explain some of the ways to use the Object Query Language (OQL).
>
>== Getting a Heap Dump ==
>
>==== HPROF dumps from Sun Virtual Machines ====
>
>The Memory Analyzer can work with ''HPROF binary formatted heap dumps''. Those heap dumps are written by Sun HotSpot and any VM derived from HotSpot.
Depending on your scenario, your OS platform and your JDK version, you have different options to acquire a heap dump.
>
>'''Non-interactive'''
>
>If you run your application with the VM flag '''-XX:+HeapDumpOnOutOfMemoryError''', a heap dump is written on the first OutOfMemoryError. There is no overhead involved unless an OOM actually occurs. This flag is a must for production systems, as it is often the only way to further analyze the problem.
>
>As per [https://stackoverflow.com/questions/542979/using-heapdumponoutofmemoryerror-parameter-for-heap-dump-for-jboss this article], the heap dump will be generated in the "current directory" of the JVM by default. It can be explicitly redirected with '''-XX:HeapDumpPath=''', for example ''-XX:HeapDumpPath=/disk2/dumps''. Note that the dump file can be huge, up to gigabytes, so ensure that the target file system has enough space.
>
>
>'''Interactive'''
>
>As a developer, you want to trigger a heap dump on demand. On '''Windows''', use '''JDK 6 and JConsole'''. On '''Linux and Mac OS X''', you can also use '''jmap''', which comes with JDK 5.
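An on-demand dump can also be triggered programmatically from inside the application via the HotSpotDiagnostic MBean (the same operation JConsole invokes). A minimal sketch, assuming a HotSpot-based JVM; the output location and file name here are illustrative choices, not MAT requirements:

```java
import com.sun.management.HotSpotDiagnosticMXBean;
import java.lang.management.ManagementFactory;
import java.nio.file.Files;
import java.nio.file.Path;

public class DumpHeap {
    public static void main(String[] args) throws Exception {
        // The target file must not already exist; HPROF convention is a .hprof suffix
        // (recent JDKs reject other suffixes)
        Path out = Path.of(System.getProperty("java.io.tmpdir"),
                "ondemand-" + System.nanoTime() + ".hprof");

        HotSpotDiagnosticMXBean bean =
                ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class);

        // live=true dumps only objects reachable from GC roots,
        // which keeps the file smaller
        bean.dumpHeap(out.toString(), true);

        System.out.println("Wrote " + Files.size(out) + " bytes to " + out);
    }
}
```

The resulting .hprof file can then be opened in MAT like any other HPROF dump.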
>
>Via MAT:
>* tutorial [https://community.bonitasoft.com/blog/effective-way-fight-duplicated-libs-and-version-conflicting-classes-using-memory-analyzer-tool here]
>
>Via Java VM parameters:
>
>* -XX:+HeapDumpOnOutOfMemoryError writes a heap dump on OutOfMemoryError (recommended)
>* -XX:+HeapDumpOnCtrlBreak writes a heap dump together with a thread dump on CTRL+BREAK
>* -agentlib:hprof=heap=dump,format=b combines the above two settings (old way; not recommended as the VM frequently dies after CTRL+BREAK with strange errors)
>
>Via Tools:
>
>* Sun (Linux, Solaris; not on Windows) [http://java.sun.com/j2se/1.5.0/docs/tooldocs/share/jmap.html JMap Java 5]: '''jmap -heap:format=b <pid>'''
>* Sun (Linux, Solaris; Windows see link) [http://java.sun.com/javase/6/docs/technotes/tools/share/jmap.html JMap Java 6]: '''jmap.exe -dump:format=b,file=HeapDump.hprof <pid>'''
>* Sun (Linux, Solaris) JMap with Core Dump File: '''jmap -dump:format=b,file=HeapDump.hprof /path/to/bin/java core_dump_file'''
>* Sun JConsole: Launch jconsole.exe and invoke the operation dumpHeap() on the HotSpotDiagnostic MBean
>* SAP JVMMon: Launch jvmmon.exe and use the menu for dumping the heap
>
>The heap dump will be written to the working directory.
>
>{| border="1"
>|-
>! Vendor / Release
>! VM Parameter
>!
>!
>! VM Tools
>!
>|-
>!
>! On OoM
>! On Ctrl+Break
>! Agent
>! JMap
>! JConsole
>|-
>! Sun, HP
>|
>|
>|
>|
>|
>|-
>| 1.4.2_12
>| Yes
>| Yes
>| Yes
>|
>|
>|-
>| 1.5.0_07
>| Yes
>| Yes (Since 1.5.0_15)
>| Yes
>| Yes (Only Solaris and Linux)
>|
>|-
>| 1.6.0_00
>| Yes
>|
>| Yes
>| Yes
>| Yes
>|-
>! SAP
>|
>|
>|
>|
>|
>|-
>| 1.5.0_07
>| Yes
>| Yes
>| Yes
>| Yes (Only Solaris and Linux)
>|
>|}
>
>==== System Dumps and Heap Dumps from IBM Virtual Machines ====
>
>Memory Analyzer may read memory-related information from IBM system dumps and from Portable Heap Dump (PHD) files with the [http://www.ibm.com/developerworks/java/jdk/tools/dtfj.html IBM DTFJ feature] installed.
Once installed, '''File''' &gt; '''Open Heap Dump''' should give the following options for the file types:
>
>* All known formats
>* HPROF binary heap dumps
>* IBM 1.4.2 SDFF
>* IBM Javadumps
>* IBM SDK for Java (J9) system dumps
>* IBM SDK for Java Portable Heap Dumps
>
>For a comparison of dump types, see [http://www.ibm.com/developerworks/library/j-memoryanalyzer/#table1 Debugging from dumps]. System dumps are simply operating system core dumps; therefore, they are a superset of portable heap dumps. System dumps are far superior to PHDs, particularly for more accurate GC roots and thread-based analysis, and, unlike PHDs, system dumps contain memory contents like HPROFs do. Older versions of IBM Java (e.g. &lt; 5.0SR12, &lt; 6.0SR9) require running jextract on the operating system core dump, which produces a zip file containing the core dump, XML or SDFF file, and shared libraries. The IBM DTFJ feature still supports reading these jextracted zips; however, newer versions of IBM Java do not require jextract for use in MAT, since DTFJ is able to directly read each supported operating system's core dump format. Simply ensure that the operating system core dump file ends with the '''.dmp''' suffix for visibility in the MAT Open Heap Dump selection. It is also common to zip core dumps because they are so large and compress very well. If a core dump is compressed with '''.zip''', the IBM DTFJ feature in MAT is able to decompress the ZIP file and read the core from inside (just like a jextracted zip).
The only significant downsides of system dumps compared to PHDs are that they are much larger, they usually take longer to produce, they may be useless if they are manually taken in the middle of an exclusive event that manipulates the underlying Java heap, such as a garbage collection, and they sometimes require operating system configuration ([http://www.ibm.com/support/knowledgecenter/SSYKE2_7.1.0/com.ibm.java.lnx.71.doc/diag/problem_determination/linux_setup.html Linux], [http://www.ibm.com/support/knowledgecenter/SSYKE2_7.1.0/com.ibm.java.aix.71.doc/diag/problem_determination/aix_setup_full_core.html AIX]) to ensure non-truncation.
>
>In recent versions of IBM Java (&gt; 6.0.1), by default, when an OutOfMemoryError is thrown, IBM Java [http://www.ibm.com/support/knowledgecenter/SSYKE2_8.0.0/com.ibm.java.lnx.80.doc/diag/tools/dumpagents_defaults.html produces] a system dump, PHD, javacore, and Snap file on the first occurrence for that process (although often the core dump is suppressed by the default 0 core ulimit on operating systems such as Linux). For the next three occurrences, it produces only a PHD, javacore, and Snap. If you only plan to use system dumps, and you've configured your operating system correctly as per the links above (particularly core and file ulimits), then you may disable PHD generation with -Xdump:heap:none. For versions of IBM Java older than 6.0.1, you may switch from PHDs to system dumps using -Xdump:system:events=systhrow,filter=java/lang/OutOfMemoryError,request=exclusive+prepwalk -Xdump:heap:none
>
>In addition to an OutOfMemoryError, system dumps may be produced using operating system tools (e.g.
gcore in gdb for Linux, gencore for AIX, Task Manager for Windows, SVCDUMP for z/OS, etc.), using the [http://www.ibm.com/support/knowledgecenter/SSYKE2_8.0.0/com.ibm.java.lnx.80.doc/diag/tools/diagnostics_summary.html IBM Java APIs], using the various options of [http://www.ibm.com/support/knowledgecenter/SSYKE2_8.0.0/com.ibm.java.lnx.80.doc/diag/tools/dump_agents.html -Xdump], using [https://www.ibm.com/developerworks/community/groups/service/html/communityview?communityUuid=7d3dc078-131f-404c-8b4d-68b3b9ddd07a Java Surgery], and more.
>
>Versions of IBM Java older than IBM JDK 1.4.2 SR12, 5.0 SR8a and 6.0 SR2 are known to produce inaccurate GC root information.
>
>==== What if the Heap Dump is NOT Written on OutOfMemoryError? ====
>
>Heap dumps are not written on OutOfMemoryError for the following reasons:
>
>* The application creates and throws OutOfMemoryError on its own
>* Another resource, like threads per process, is exhausted
>* The C heap is exhausted
>
>As for the C heap: a good indication that you won't get a heap dump is when the error happens in C code (eArray.cpp in the example below):
>
> # An unexpected error has been detected by SAP Java Virtual Machine:
> # java.lang.OutOfMemoryError: requested 2048000 bytes for eArray.cpp:80: GrET*. Out of swap space or heap resource limit exceeded (check with limits or ulimit)?
> # Internal Error (\\...\hotspot\src\share\vm\memory\allocation.inline.hpp, 26), pid=6000, tid=468
>
>C heap problems may arise for different reasons, e.g. out-of-swap-space situations, process limit exhaustion, or address space limitations (e.g. heavy fragmentation, or simply its depletion on machines with limited address space such as 32-bit machines). The hs_err file will give you more information on this type of error. Java heap dumps wouldn't be of any help here anyway.
>
>Also please note that a heap dump is written only on the first OutOfMemoryError.
If the application chooses to catch it and continues to run, the next OutOfMemoryError will never cause a heap dump to be written!
>
>== Extending Memory Analyzer ==
>
>Memory Analyzer is extensible, so new queries and dump formats can be added. Please see
>[[MemoryAnalyzer/Extending_Memory_Analyzer]] for details.</text>
> <sha1>qpg1cfozzc8qll3fwfm30nae1x5yw1u</sha1>
> </revision>
> </page>
> <page>
> <title>MemoryAnalyzer/Adding a new heap dump format to the Memory Analyzer</title>
> <ns>0</ns>
> <id>26696</id>
> <revision>
> <id>446975</id>
> <parentid>445340</parentid>
> <timestamp>2023-02-23T11:20:15Z</timestamp>
> <contributor>
> <username>Andrew johnson.uk.ibm.com</username>
> <id>4755</id>
> </contributor>
> <comment>Add stack frames</comment>
> <model>wikitext</model>
> <format>text/x-wiki</format>
> <text xml:space="preserve" bytes="12860">== Introduction ==
>
>To add support for a new heap dump format to the Memory Analyzer, you will need to create an Eclipse plug-in and use the provided extension points.
>
>The HPROF and DTFJ plugins can be used as a reference if you do not know how something can be done. You can also find general information in the 4th post of a thread [http://www.eclipse.org/forums/index.php/mv/msg/153571/486076/#msg_486076] in the old forum. An overview picture for the APIs can be found in the slides for the graduation review [http://archive.eclipse.org/projects/www/project-slides/Helios/MAT_Helios_Release.pdf] on page 11.
>
>[[Category:Memory Analyzer]]
>
>== Relevant Extension Points ==
>
>The relevant extension points for a new heap dump format include:
>
>*the parser extension point (for parsing the new format)
>*the trigger heap dump extension point (to enable the user to trigger a heap dump from the VM with MAT)
>
>When MAT reads a new heap dump, the parse method in the class '''SnapshotFactoryImpl''' will be called.
It handles the reading of a new heap dump (new meaning that no indexes for this heap dump exist yet). This method calls the index builder (provided by the parser extension point), a '''SnapshotImplBuilder''' and the '''GarbageCleaner'''. The GarbageCleaner is used to purge unreachable objects from the heap dump. The array returned by its clean methods can be used to remove unreachable objects from the indexes. After the parse method is done, MAT will have a '''SnapshotImpl''' for the heap dump, which contains the most important information.
>
>=== The parser extension point ===
>
>Using the parser extension point requires you to provide implementations for two interfaces:
>
>*'''IIndexBuilder'''
>*'''IObjectReader'''
>
>==== IIndexBuilder ====
>
>As the API reference says, the index builder is responsible for reading the structural information of the heap and building indexes out of it. This information is required to be able to use MAT, so the IIndexBuilder is the first thing you will need to get working.
>
>The main work to be done in the index builder consists of parsing your new heap dump format and filling MAT's data structures. Your implementation of IIndexBuilder will fill the data into an '''IPreliminaryIndex'''. Implementations of this interface provide methods to fill the respective data structures. The data structures are:
>
>*Identifiers - This data structure holds the '''long''' addresses for all objects present in the heap dump. ALL addresses must be contained and there must not be duplicates. After collecting all addresses, '''sort()''' needs to be called on the identifiers data structure. This will enable getting an '''integer''' id for each address by calling '''reverse(address)''' on the identifier. The id is necessary for the other data structures. Negative numbers are not valid ids. If a negative number is returned, a call to '''sort()''' may be missing or the address is not present in the identifiers data structure.
>*ClassesById - Maps an id to a ClassImpl containing information about this class. The comments in ClassImpl should prove sufficient to understand what's going on. If you have questions about UsedHeapSize, the [http://www.eclipse.org/forums/index.php/mv/msg/163200/517929/#msg_517929 4th] and [http://www.eclipse.org/forums/index.php/mv/msg/163200/518191/#msg_518191 5th] posts in the thread [http://www.eclipse.org/forums/index.php/mv/msg/163200/518191/] in the old forum may help you.
>*ObjectToId - Maps the id of an object to the id corresponding to the '''ClassImpl''' of the object's class.
>*gcRoots - Maps the id of a garbage collection root to information about the garbage collection root (e.g. what type of root it is). It is very important that you do not miss any roots, because the GarbageCleaner will purge unreachable objects from your dump and discard the information.
>*array2size - Maps an id of an object (not necessarily an array) to the size of that object, in bytes. This data structure must contain an entry for every array in your dump. It may contain an entry for a non-array object if that object's size differs from the instance size set in the corresponding '''ClassImpl''' (this can be the case if address-based hashing is used).
>*outbound - Maps an id of an object to its outbound references. Similarly to gcRoots, missing references may cause objects in your dump to appear as unreachable.
>*thread2objects2roots - This is used to show garbage collection roots associated with a thread. It is a hash map going from thread id to another hash map. The second hash map maps all the object ids referenced by the thread to a list of GC root information for each object, holding the reason why the object is referenced, such as a Java local variable, a JNI local, or a reference from a native stack. The thread itself is the main GC root, and these maps are used to annotate references from the thread.
Objects referenced via a thread do not need to be included in the gcRoots map unless they are also global GC roots.
>* threads index file - This is used to show where the thread locals are in the stack frames.
>This is just a text file named "{prefix}.threads".
>The threads file consists of multiple sections, as follows:
><pre>
>Thread 0x7ffe04c1890
>
> at java.lang.Object.wait(JI)V (Native Method)
> at java.lang.Object.wait()V (Object.java:167)
> at org.eclipse.osgi.framework.eventmgr.EventManager$EventThread.getNextEvent()Lorg/eclipse/osgi/framework/eventmgr/EventManager$EventThread$Queued; (EventManager.java:397)
> at org.eclipse.osgi.framework.eventmgr.EventManager$EventThread.run()V (EventManager.java:333)
>
>locals:
>objecId=0x7ffe04c1890, line=0
>objecId=0x7ffe04c1890, line=2
>objecId=0x7ffe04c1890, line=2
>objecId=0x7ffe04c1890, line=3
>
></pre>
>**"Thread" is matched to find the start of a section for a thread.
>**The thread address is optional, but if it is omitted then none of the information is stored for that thread.
>**The stack frame data is just text, but should be in the same format as a Java stack trace.
>**A blank line ends the stack trace.
>**"locals" starts the local variable information.
>**The line number in the stack trace (0-based) is matched by the decimal number following the "line=".
>**If the line number is found then the object id is matched using the "0x" prefix and the "," comma to delimit the hex address of the object on the stack frame.
>**A blank line ends the local variable section.
>
>There are some constraints on the indexes that must be met. For example, the first outbound reference logged for each object must be to the object's class. More information on these constraints can be found in the thread [http://www.eclipse.org/forums/index.php?t=msg&th=163200&start=0&S=86b5235a33dd47bfed74cb351e531fbf] in the old forum.
Take care that the references for the objects in the dump are correct, because the GarbageCleaner will remove unreachable objects. If unreachable objects should be kept, the "keep_unreachable_objects" option can be set (see HPROF or DTFJ for how this can be done).
>
>Memory Analyzer 1.2 will be able to check indices for any parser. Either start Memory Analyzer from inside Eclipse using the run configuration trace option:
>org.eclipse.mat.parser debug enabled
>or create a file .options containing
>org.eclipse.mat.parser/debug=true
>and start Memory Analyzer with the -debug option. See [[FAQ_How_do_I_use_the_platform_debug_tracing_facility%3F]].
>
>==== IObjectReader ====
>
>As the API reference says, the object reader provides detailed information about objects, e.g. the values of instance fields. To do so, random access to the heap dump is needed. Luckily, the developers of MAT provide the classes '''BufferedRandomAccessInputStream''' and '''PositionInputStream'''. They can be used like this: '''new PositionInputStream(new BufferedRandomAccessInputStream(new RandomAccessFile(fileName)))'''
>
>There are several kinds of objects that the read method can return:
>* '''InstanceImpl''' for normal objects
>* '''ClassloaderImpl''' for classloaders
>* '''ObjectArrayImpl''' for non-primitive arrays
>* '''PrimitiveArrayImpl''' for primitive arrays
>
>The '''<A> A getAddon(Class<A> addon);''' method can be used to return extra information specific to the heap dump type. It is also used to return objects without an object ID, for example discarded and unindexed objects or unreachable objects. MAT calls getAddon for the class '''ObjectReference.class'''. The parser then returns an instance of a subclass of this class. MAT then fills in the object address and calls getObject, and the parser can then return an object corresponding to the address, without needing an object ID for '''IObjectReader.read()'''.
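The MAT helper classes named above live inside the org.eclipse.mat.parser plug-in and are not available on a plain JVM classpath, but the pattern they wrap (seeking directly to a recorded file offset instead of scanning from the start) can be sketched with plain JDK classes. A rough standalone illustration; the five-byte file is a made-up stand-in for a real dump:

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.file.Files;
import java.nio.file.Path;

public class RandomAccessDemo {
    public static void main(String[] args) throws IOException {
        // Stand-in for a heap dump: values at known file offsets
        Path dump = Files.createTempFile("demo", ".bin");
        Files.write(dump, new byte[] {10, 20, 30, 40, 50});

        try (RandomAccessFile raf = new RandomAccessFile(dump.toFile(), "r")) {
            // An object reader jumps straight to the offset the index
            // recorded for an object id, rather than reading sequentially
            raf.seek(3);                // position at the 4th byte
            int value = raf.read();     // read the field value stored there
            System.out.println(value);  // 40
        }
        Files.delete(dump);
    }
}
```

In a real parser the same seek-and-read step happens inside IObjectReader.read(), with MAT's buffered wrappers added on top for performance.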
> >=== The thread resolver extension point === > >It is possible to return extra information such as native thread stacks using this extension point. > >=== The name resolver extension point === > >This perhaps could be used to return details of stack frames. > >=== The trigger heap dump extension point === > >TODO > >== Methods as classes == > >The DTFJ parser has an experimental mode where stack frames are treated as pseudo objects. This has the advantage that a Java thread refers to >its stack frames, and the stack frames refer to the locals, so it is easier to see where a local is used. > > ><pre> >Class Name | Shallow Heap | Retained Heap >---------------------------------------------------------------------------------------------------------------------------------------------------- >com.ibm.ws.util.ThreadPool$Worker @ 0x5db193e0 Default : 3 Thread | 136 | 11,740 >|- <class> class com.ibm.ws.util.ThreadPool$Worker @ 0x57e82930 | 13,245 | 13,421 >|- <Java Stack Frame> com.ibm.io.async.AsyncLibrary.getCompletionData3([JIIJ)I @ 0x925dcd0 (AsyncLibrary.java:625) | 256 | 256 >| |- <class> class com.ibm.io.async.AsyncLibrary.getCompletionData3([JIIJ)I @ 0x5404a1d0 | 0 | 0 >| |- <Java Local> long[32] @ 0x5db1f460 | 272 | 272 >| '- Total: 2 entries | | >|- <Java Stack Frame> com.ibm.io.async.ResultHandler.runEventProcessingLoop(Z)V @ 0x925dd70 (ResultHandler.java:530)| 160 | 6,608 >|- <Java Stack Frame> com.ibm.io.async.ResultHandler$2.run()V @ 0x925dd84 (ResultHandler.java:905) | 20 | 20 >'- <Java Stack Frame> com.ibm.ws.util.ThreadPool$Worker.run()V @ 0x925dda4 (ThreadPool.java:1,550) | 32 | 32 >---------------------------------------------------------------------------------------------------------------------------------------------------- ></pre> > >So the stack frame object is of a type of the method, and the class hierarchy is as follows: > > ><pre> >Class |Type of Class 
>-------------------------------------------------------------------------------------------------------------------------------------|--------------------
><native memory> |<native memory type>
>|- <method> |<method type>
>| |- sun.reflect.NativeMethodAccessorImpl.invoke0(Ljava/lang/reflect/Method;Ljava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object;|<method type>
>| |- java.lang.Thread.sleep(J)V |<method type>
>| |- org.apache.axis2.jaxws.server.EndpointController.invokeOneWay(Lorg/apache/axis2/jaxws/server/EndpointInvocationContext;)V |<method type>
>'- <native memory type> |<native memory type>
> '- <method type> |<native memory type>
>-------------------------------------------------------------------------------------------------------------------------------------|----------
></pre>
>
>So <native memory> is a new type, with no superclass (a sibling to java.lang.Object). Its class is <native memory type>, not java.lang.Class.
><method> is a new type, of class <method type></text>
> <sha1>5dk77xesrkk0qkzvqt5sqcgys08h0sg</sha1>
> </revision>
> </page>
> <page>
> <title>MemoryAnalyzer/API policy</title>
> <ns>0</ns>
> <id>24950</id>
> <revision>
> <id>339025</id>
> <parentid>278193</parentid>
> <timestamp>2013-06-05T07:11:00Z</timestamp>
> <contributor>
> <username>Andrew johnson.uk.ibm.com</username>
> <id>4755</id>
> </contributor>
> <minor/>
> <model>wikitext</model>
> <format>text/x-wiki</format>
> <text xml:space="preserve" bytes="900">= Memory Analyzer API Policy =
>This document provides the current API policy for Memory Analyzer.
>
>
>== Declared API ==
>The declared APIs in Memory Analyzer are provided as public and documented. The API compatibility between different versions of Memory Analyzer should be reflected by the version numbers, following the Eclipse [http://wiki.eclipse.org/Version_Numbering versioning policy].<br>
>
>Changes to the API - adding new APIs or deprecating APIs - should be documented (e.g.
in Bugzilla) and communicated to the community (e.g. via the newsgroups).<br>
>
>Deprecated API should be available for at least one major release.
>
>
>== Provisional and internal API ==
>Provisional APIs should be used while development is occurring. If successfully adopted, they might become declared APIs. If not, they can be removed. In either situation, the community should be notified.<br>
>
>[[Category:Memory Analyzer]]</text>
> <sha1>4e6oqzg0rw06ifq3318tqpux4skiggd</sha1>
> </revision>
> </page>
> <page>
> <title>MemoryAnalyzer/Building MAT With Tycho</title>
> <ns>0</ns>
> <id>31455</id>
> <revision>
> <id>448252</id>
> <parentid>447966</parentid>
> <timestamp>2023-12-07T07:04:55Z</timestamp>
> <contributor>
> <username>Andrew johnson.uk.ibm.com</username>
> <id>4755</id>
> </contributor>
> <comment>/* Use Java 17 for the Build */</comment>
> <model>wikitext</model>
> <format>text/x-wiki</format>
> <text xml:space="preserve" bytes="6890">== Introduction ==
>
>This page describes how Memory Analyzer can be built using Maven/Tycho. The build will
>* build all MAT bundles
>* execute all tests
>* (optional) run [https://spotbugs.github.io/ SpotBugs] static checks
>* build Eclipse features containing the MAT plugins and produce an update site (p2 repository) with them
>* produce standalone (Eclipse RCP based) products for different OS platforms
>* produce a software bill of materials listing
>* sign and upload the produced artifacts (when executed on the Eclipse Jenkins server)
>
>== Prerequisites ==
>
>=== Clone the Source Code from Git ===
>MAT sources are in a Git repository; therefore you need a Git client. Have a look at [[MemoryAnalyzer/Contributor_Reference#Get the source]].
>
>=== Use Java 17 for the Build ===
>Memory Analyzer 1.15 requires Java 17 for the build and tests, as it is based on Eclipse 4.30 2023-12 and uses Tycho 4.0.3, even though currently the highest level required to compile the Memory Analyzer plugins is 1.8.
It requires Java 17 to run, and this is checked on startup.
>Make sure the JAVA_HOME environment variable is set to point to a JDK 17 installation.
>
>Previous versions of Memory Analyzer required the build to be run with Java 1.8. For those, make sure the JAVA_HOME environment variable is set to point to a JDK 1.8 installation.
>
>=== Install and Configure Maven ===
>The Memory Analyzer build requires a Maven 3.9.* installation (3.8 won't work with Tycho 4.0.3). It is already present on the Jenkins server at Eclipse. For a local build, one can download it from [http://maven.apache.org/download.html here].
>
>If you need to set a proxy for Maven, a snippet like this can be added to the Maven settings file:
><source lang="xml">
> <proxies>
>  <proxy>
>   <active>true</active>
>   <protocol>http</protocol>
>   <port>8080</port>
>   <host>myproxy_host</host>
>   <nonProxyHosts>non_proxy_hosts</nonProxyHosts>
>  </proxy>
> </proxies>
></source>
>More information on Maven settings: http://maven.apache.org/ref/3.9.5/maven-settings/settings.html
>
>== Building MAT from Sources ==
>
>=== Execute the build ===
>* Open a console and go into the ''<mat_src>/parent'' folder (it contains the parent pom.xml)
>* To build MAT with the default profile (build-snapshot), simply execute
><pre>mvn clean install</pre>
>* This will run a fresh build of all bundles, execute the tests, and build Eclipse features, an update site (p2 repository) and standalone products
>* If you also want SpotBugs checks to be performed as part of the build, execute
><pre>mvn clean install spotbugs:spotbugs</pre>
>
>=== Where to find the results? ===
>You can find the results of the build in the corresponding .../target/ directories for each plugin, feature, etc.
Of particular interest are: >* ''<mat_src>/org.eclipse.mat.updatesite/target/site/'' - it contains a p2 repository with MAT features >* ''<mat_src>/org.eclipse.mat.product/target/products/'' - it contains all standalone RCP applications >* [https://ci.eclipse.org/mat/job/tycho-mat-nightly/lastSuccessfulBuild/artifact/ Last build artifacts: bill of materials] > >== Building MAT Standalone RCPs from an Existing MAT Update Site == > >=== Configure and execute the build === >* Open a console and go into the ''<mat_src>/parent'' folder (it contains the parent pom.xml) >* To produce only the standalone products, using an already existing MAT repository (i.e. without building the bundles again) specify that the ''build-release-rcp'' profile is used when you start maven: ><pre>mvn clean install -P build-release-rcp</pre> >* It will take the already existing MAT plugins/features from the repository specified by the ''mat-release-repo-url'' property in ''<mat_src>/parent/pom.xml''. One can overwrite this location when calling maven. For example, to build products with the older 1.5.0 release, use: ><pre>mvn clean install -P build-release-rcp -Dmat-release-repo-url=http://download.eclipse.org/mat/1.5/update-site/</pre> > >=== Where to find the results? === >You can find the standalone products under ''<mat_src>/org.eclipse.mat.product/target/products/'' > >== Further Information == >* The platforms for which RCPs are built are specified in the ''<mat_src>/parent/pom.xml'' file > >== Known Problems == >=== Wrong file permissions === >When building MAT on a Windows box, the RCPs for any other OS will not have the proper permissions (e.g. the executables won't have the x flag). Building under Linux or other non-Windows OS helps. > >== Jenkins Job at Eclipse == >The Jenkins continuous integration instance of the Memory Analyzer Project at the Eclipse is https://ci.eclipse.org/mat/. 
>
>=== Snapshot / Nightly builds ===
>The [https://ci.eclipse.org/mat/job/tycho-mat-nightly/ ''tycho-mat-nightly''] job checks regularly for changes to the Git repository and produces a snapshot build (see Building MAT from Sources above). [2020: The automatic detection of changes is now working after problems in 2018, 2019]
>
>The job is additionally configured to sign the plugins and features in the update site, and to upload all artifacts to the download server.
>One can download such nightly/snapshot builds here: http://www.eclipse.org/mat/snapshotBuilds.php
>
>Info:
>* Signing is activated by the build-server profile (i.e. with the parameter '-P build-server' added to the Maven command)
>* The macOS builds are also notarized by the [https://ci.eclipse.org/mat/job/mac-sign/ mac-sign job], which is automatically triggered after a successful snapshot build.
>
>=== Release Builds ===
>The job [https://ci.eclipse.org/mat/job/mat-standalone-packages/ ''mat-standalone-packages''] can only be triggered manually to build the MAT standalone packages/products using an already existing MAT update site. This can be used in the following scenario - MAT has contributed its bundles and features to the simultaneous Eclipse release as part of a milestone or a release candidate. After the simultaneous release is complete, we would like to have exactly these bundles packed in the standalone packages as well, potentially also including the final versions of the dependencies (part of the same simultaneous release).
>
>The job is configured to use the ''build-release-rcp'' profile when calling Maven.
>
>The job may need to be changed for each new release.
>
>After building the packages the macOS build needs to be notarized using the [https://ci.eclipse.org/mat/job/mac-sign/ ''mac-sign job''], with the parameter of the actual relative location of the dmg file on the download server.
>
>The downloads can then be tested.
> >The job [https://ci.eclipse.org/mat/job/mat-promote-release/ ''mat-promote-release''] copies the files to their final location so they can be downloaded by all the users. > >The job [https://ci.eclipse.org/mat/job/update_latest_update-site/ ''update_latest_update-site''] copies a particular release update site to the /mat/latest/update-site > >[[Category:Memory Analyzer]]</text> > <sha1>pgjgndmzb7gxd238g8q30rmuhvdd4np</sha1> > </revision> > </page> > <page> > <title>MemoryAnalyzer/Contributor Reference</title> > <ns>0</ns> > <id>13479</id> > <revision> > <id>448261</id> > <parentid>448260</parentid> > <timestamp>2023-12-11T07:39:42Z</timestamp> > <contributor> > <username>Andrew johnson.uk.ibm.com</username> > <id>4755</id> > </contributor> > <comment>/* Download Statistics */</comment> > <model>wikitext</model> > <format>text/x-wiki</format> > <text xml:space="preserve" bytes="24021">== Getting Started == > >This page is meant to help you contribute to the Memory Analyzer project. > >[[Category:Memory Analyzer]] > >== Workspace == > >=== Setup === > >Install Eclipse and configure it to develop Java 5 applications. >* You can use 3.x or 4.x as you prefer. > >Setup your development environment: >* Via the Update Manager ''Help -> Install New Software...'' install: >** Eclipse BIRT Framework >** IBM Diagnostic Tool Framework for Java (See [https://www.ibm.com/docs/en/sdk-java-technology/8?topic=interfaces-dtfj IBM Diagnostic Tool Framework for Java Version 1.12] . An Update Site is at https://public.dhe.ibm.com/ibmdl/export/pub/software/websphere/runtimes/tools/dtfj/ .) This is needed to compile and run with the DTFJ adapter which is part of Memory Analyzer and allows Memory Analyzer to read dumps from IBM virtual machines for Java. >* Configure the Code Formatter Template: >** ''Preferences -> Java -> Code Style -> Formatter -> Import...'' and import this [http://www.eclipse.org/mat/dev/mat_code_formatter.xml template]. 
>
>=== Get the source ===
>
>Since December 2014 the Memory Analyzer code is stored in a Git repository. The URLs to access it are:
>
><nowiki>ssh://<username>@git.eclipse.org:29418/mat/org.eclipse.mat</nowiki>
>
><nowiki>https://<username>@git.eclipse.org/r/mat/org.eclipse.mat</nowiki>
>
>A web browsable repository is available at
>https://git.eclipse.org/c/mat/org.eclipse.mat.git/
>
>The Eclipse Memory Analyzer project also uses Gerrit, and the Gerrit code reviews are at: https://git.eclipse.org/r/#/q/mat/org.eclipse.mat
>
>and the Gerrit version of the source repositories is available at:
> <nowiki>ssh://git.eclipse.org:29418/mat/org.eclipse.mat</nowiki>
> https://git.eclipse.org/r/mat/org.eclipse.mat.git
>
>You can clone the repository using your favorite Git client. More information: [[Git]], [[EGit/User Guide]].
>
>There are Eclipse .project files, so that the projects (plugins, features, etc...) can be easily imported into the Eclipse IDE.
>
>If you do not intend to build the MAT update site and standalone distributions, then it is enough to import only the MAT plugins.
>
>If you want to run the Maven/Tycho build for MAT locally, which produces an update site and standalone RCP products, follow the instructions on the build Wiki page: [[MemoryAnalyzer/Building_MAT_With_Tycho]]
>
>If you do not have BIRT installed then there will be compilation errors in the org.eclipse.mat.chart and org.eclipse.mat.chart.ui projects.
>
>If you do not have the IBM DTFJ feature installed then there will be compilation errors in the org.eclipse.mat.dtfj project.
>
>=== Configure API Tooling Baseline ===
>
>In order to guarantee that no API breaking changes are introduced, we recommend using the PDE API Tooling and defining the latest released version of MAT as an API Baseline.
Here is a short description of how this can be done:
>
>* Download the latest released version in order to use it as an API Baseline
>** Go to the [http://www.eclipse.org/mat/downloads.php MAT download page]
>** Download the "Archived Update Site" zip file for the latest release
>** Unzip the file somewhere locally
>* Configure the API Baseline in the IDE
>** In the IDE open '' Window -> Preferences -> Plug-in Development -> API Baselines ''
>** Press ''Add Baseline''
>** Select ''An Existing Eclipse installation Directory'' as the source for this baseline.
>** Browse and select as ''Location'' the directory in which the zip was extracted
>** Enter a name for the baseline, click ''Finish'' and confirm the rest of the dialogs
>
>Once the API Tooling is properly set up, one will see errors reported if API changes are introduced.
>
>=== Launch Configuration ===
>
>Launch the Memory Analyzer as '''stand-alone RCP''':
>* Create a new ''Eclipse Application'' configuration
>* Run a product: ''org.eclipse.mat.ui.rcp.MemoryAnalyzer''
>* Launch with: ''plug-ins selected below only''
>** Deselect ''org.eclipse.mat.tests'' and ''org.eclipse.mat.ui.rcp.tests''
>** Deselect ''Target Platform'' and click ''Add Required Plug-ins''
>** Select ''org.eclipse.pde.runtime'' (3.3) or ''org.eclipse.ui.views.log'' (3.4 or later) to include the Error Log
>** Select ''com.ibm.dtfj.api'' ''com.ibm.dtfj.j9'' ''com.ibm.dtfj.phd'' ''com.ibm.dtfj.sov'' if you have installed the IBM DTFJ feature and wish to process dumps from IBM virtual machines
>** Select ''com.ibm.java.doc.tools.dtfj'' for help for IBM DTFJ
>** Eclipse >= Neon: Select ''org.eclipse.equinox.ds'' and ''org.eclipse.equinox.event''
>or as '''feature plugged into the IDE''':
>* Create a new ''Eclipse Application'' configuration
>* Run a product: ''org.eclipse.sdk.ide''
>* Launch with: ''plug-ins selected below only''
>** Deselect ''org.eclipse.mat.tests'' and ''org.eclipse.mat.ui.rcp''
>** Select ''com.ibm.dtfj.api''
''com.ibm.dtfj.j9'' ''com.ibm.dtfj.phd'' ''com.ibm.dtfj.sov'' if you have installed the IBM DTFJ feature and wish to process dumps from IBM virtual machines
>** Select ''com.ibm.java.doc.tools.dtfj'' for help for IBM DTFJ
>** Eclipse >= Neon: Select ''org.eclipse.equinox.ds'' and ''org.eclipse.equinox.event''
>** Eclipse >= Oxygen: Select ''org.eclipse.equinox.event''
>
>=== Create a Stand-Alone RCP ===
>
>See [[MemoryAnalyzer/Building MAT With Tycho]] if you want to produce a standalone MAT.
>
>=== JUnit Tests ===
>
>The unit tests are placed in the ''org.eclipse.mat.tests'' project. Execute the tests by right-clicking on the project and choosing ''Run As... -> JUnit Plug-in Test''.
>
>The following VM arguments are required in the run configuration for the JUnit Plug-in Test:
>''-Xmx850m -ea''
>
>For the ''org.eclipse.mat.rcp.tests'' project install SWTBot - API from [https://www.eclipse.org/swtbot/].
>
>=== Build Help with DITA ===
>
>* Download [https://github.com/dita-ot/dita-ot/releases/download/3.7/dita-ot-3.7.zip DITA-OT 3.7] and unzip it somewhere on your disk, e.g. C:\dita-ot-3.7. Please stick to this DITA version, it is the one with which the help pages are currently built. Using a different version results in many unnecessary file changes to the generated files (which are also committed in the git repository). [Previously DITA 1.7.4 was used.]
>
>* In plugin '''org.eclipse.mat.ui.help''' select '''DitaBuild.xml''' and configure the runtime configuration:
>** right click ''Run As > Ant Build...''
>** Refresh > Refresh resources upon completion.
> The project containing the selected resource
>** configure the DITA directory and libraries:
>*** add property dita.dir (this overrides the version in DitaBuild.xml)
>**** Properties
>**** Add Property
>**** Variables
>**** Edit Variables
>**** New
>***** Name: dita.dir
>***** Value: the location of DITA, e.g.
C:\dita-ot-3.7 >***** OK >** Alternatively to run DITA-OT from the command line >*** Set the dita directory variable, e.g. <code>set DITA_DIR=C:\dita-ot-3.7</code> >*** Add DITA to the path, e.g. <code>set PATH=%DITA_DIR%\bin;%PATH%</code> >*** change to the org.eclipse.mat.ui.help directory and run one of the following: >**** <code>ant -f DitaBuild.xml</code> [attempts to not change HTML files which have no content changes] >**** <code>ant -f DitaBuild.xml -Djustnew=true</code> [attempts to not change HTML files which have no content changes] >**** <code>ant -f DitaBuild.xml -Djustnew=false</code> [HTML files are as they come from DITA build, some HTML files may be changed which have no content changes] >* To modify Help documentation modify xml files >** XML Buddy - might not be available anymore >*** [http://www.xmlbuddy.com Download XMLBuddy] and copy a product directory (e.g., com.objfac.xmleditor_2.0_72) to the plugins directory of your Eclipse installation. >*** Configure XMLBuddy editor as described [http://www.ditainfocenter.com/eclipsehelp/index.jsp?topic=/ditaotug_top/settingup/configuring_xmlbuddy.html here] >** or use the XML editor from Eclipse Web Tools >*** Window > Preferences > XML > XML files > Validation > Enable markup validation >*** Window > Preferences > Validator > XML Validator > Settings > Include Group > Add Rule > File extensions : dita >*** Window > Preferences > XML > XML Catalog > User supplied entries > Add XML Catalog Element > Delegate Catalog >**** Key type to match: URI >**** Matching start string: -//OASIS//DTD DITA >**** Delegate to this XML catalog file: %DITA_DIR%/plugins/org.oasis-open.dita.v1_3/catalog.xml >**** [substitute %DITA_DIR% with the actual path] >*** Note that the validation does not seem to work with Eclipse 2022-03 any more - some previous versions did work. 
>
>** or use the XML editor from Eclipse Web Tools and [https://projects.eclipse.org/projects/mylyn.docs.vex Vex]
>*** It may be easier to still use the XML Editor, as the Vex editor deliberately doesn't show tags, but Vex provides DTD files for DITA, making XML validation and content assist for DITA files possible
>* Run ant on DitaBuild.xml to build the html files.
>
>=== Build OQL Parser using JavaCC ===
>
>* Download [https://javacc.org/downloads/javacc-5.0.tar.gz JavaCC 5.0 tar.gz] or [https://javacc.org/downloads/javacc-5.0.zip JavaCC 5.0 zip] and unpack it.
>* Copy javacc.jar to the root of the '''org.eclipse.mat.parser''' project
>* In plugin '''org.eclipse.mat.parser''' select '''build_javacc.xml'''
>** right click ''Run As > Ant Build...''
>* Select package '''org.eclipse.mat.parser.internal.oql.parser'''
>** Source > Organize Imports
>** Source > Format
>** Ignore the choice conflict message and non-ASCII character message
>** Synchronize with the source repository to add the copyright header etc. back in
>
>=== Creating and editing icons ===
>
>Consider using [https://marketplace.eclipse.org/content/eclipaint EcliPaint].
>
>For Mac, consider using [https://pypi.org/project/icnsutil/ icnsutil from PyPI] to help build the icns file.
>
><source lang="bash">
>#!/bin/sh
>cp memory_analyzer_16.png icon_16x16.png
>cp memory_analyzer_32.png icon_32x32.png
>cp memory_analyzer_48.png icon_48x48.png
>cp memory_analyzer_64.png icon_64x64.png
>cp memory_analyzer_128.png icon_128x128.png
>cp memory_analyzer_256.png icon_256x256.png
>icnsutil convert icon_16x16.argb icon_16x16.png
>icnsutil convert icon_32x32.argb icon_32x32.png
>cp icon_32x32.png icon_16x16@2x.png
>cp icon_48x48.png icon_24x24@2x.png
>cp icon_64x64.png icon_32x32@2x.png
>cp icon_128x128.png icon_64x64@2x.png
>cp icon_256x256.png icon_128x128@2x.png
>icnsutil c memory_analyzer.icns icon_16x16.argb icon_16x16@2x.png icon_24x24@2x.png icon_32x32.argb icon_32x32@2x.png icon_48x48.png icon_128x128.png icon_128x128@2x.png icon_256x256.png --toc
>icnsutil i memory_analyzer.icns
>rm icon_*
></source>
>
>
>Also see how the icons look in high-contrast mode. See [https://bugs.eclipse.org/bugs/show_bug.cgi?id=342543 Bug 342543 Icon decorators not visible in high contrast mode]
>Also consider the dark theme: Window > Preferences > General > Appearance > Theme
>
>== Building MAT with Maven/Tycho ==
>
>The following page describes how Memory Analyzer (p2 repository and standalone RCP applications) can be built using Maven/Tycho: [[MemoryAnalyzer/Building MAT With Tycho]]
>
>== Testing using Docker ==
>
>It is possible to [[MemoryAnalyzer/Docker|run Memory Analyzer in a Docker container]], which might allow testing on different Linux distributions
>
>== Testing using Windows Subsystem for Linux ==
>
>It is possible to [[MemoryAnalyzer/WSL|run Memory Analyzer under WSL]], which might allow testing of a Linux distribution when running Windows.
>
>== Ideas for Contributions ==
>
>This is just a short list of ideas. If you are missing a feature and have some time to contribute, please do not hesitate to contact us.
>
>* Extensions to the tool adding application knowledge: MAT provides some extension points which can help you to plug in pieces of information that give meaning to your specific object structures. It would be nice to have in MAT application knowledge about prominent open source projects, e.g. different Eclipse components, servers like Tomcat and Glassfish, etc... Some examples:
>** using an org.eclipse.mat.api.nameResolver extension one can specify what description should be shown next to an object, similar to a toString() method. See for example [https://bugs.eclipse.org/bugs/show_bug.cgi?id=273915 bug 273915 ]
>** using an org.eclipse.mat.api.requestResolver extension one can add to the leak report information about what a thread was executing, e.g. tell which HTTP request it was processing, list the URL, parameters, etc... For example [https://bugs.eclipse.org/bugs/show_bug.cgi?id=318989 bug 318989] proposes an extension which points to the ruby script a thread is executing
>** using an org.eclipse.mat.api.query extension one can add a useful query which will be available alongside all the other queries/commands seen in the tool. See for example [https://bugs.eclipse.org/bugs/show_bug.cgi?id=256154 bug 256154]
>* Documentation
>* Unit Tests
>
>== Contributing code ==
>
>Eclipse Memory Analyzer uses [[Gerrit]] for contributions, including from committers. Contributions are pushed to Gerrit using these [[#Get_the_source|URLs]], not directly to Git. After an approval process (including the [[Development_Resources#Everyone:_IP_Cleanliness|Eclipse IP process]]) by MAT committers, the contributions are pushed by Gerrit to the main Eclipse Git repository. The code is then built by [https://ci.eclipse.org/mat/job/tycho-mat-nightly/ Jenkins]. See also [[Development_Resources/Contributing_via_Git]].
> >Example Workflow > >* Open a bug at https://bugs.eclipse.org/bugs/enter_bug.cgi?product=MAT >* Open Eclipse's Git perspective >* Right click on the repository and click Pull to get the latest changes >* Right click on the repository and click Switch To > New Branch and enter an arbitrary branch name (perhaps with the bug number) and click Finish >* Make your code changes >* Format your code to the project standards using [http://www.eclipse.org/mat/dev/mat_code_formatter.xml MAT code style format] >* Run org.eclipse.mat.tests as a Junit Plugin Test and org.eclipse.mat.rcp.tests as an SWTBot test and ensure all pass >* Switch to the Team Synchronizing perspective and review your changes >* Add the changes you want to commit to the index (e.g. right click on the project and click Add to Index) >* Right click on the projects/files and click Commit... >* Use the following commit message template: > >Bug $BUGID Commit message... > >Example: > >Bug 497127 Handle copying heapdumps greater than 2GB on certain platforms. > >* Click Commit (do not click Commit and Push) >* In the Git perspective, expand org.eclipse.mat > Remotes, then right click on origin, then click Gerrit Configuration..., set destination branch to master and click Finish. >* Right click on org.eclipse.mat and click Push to Gerrit... and click Finish >* The results dialog should show the Gerrit link at the bottom such as https://git.eclipse.org/r/76397 >* In the Git perspective, right click the repository and click Switch To > master (now you're back on master without your changes) >* Gerrit requires two code review votes and one verify vote. For trivial changes, you may give the +2/+1 yourself. Otherwise, ideally, two other committers would review and approve your change, but given the size of the team and their limited availability, you may also give your own +1. In the trivial case, add a comment such as "Simple patch; judging it doesn't require others' review so adding +2 code review". 
To request that a committer reviews your change, click Add next to Reviewers and find the name
>* Once the +2 Code Review/+1 Verified exists, a "Submit" button will show up which you can click to push the change to master.
>* The bug report will be updated automatically
>* Change the bug report to Resolved after testing the changes from a nightly build
>
>
>If you are asked to modify your Gerrit change, you will need to amend your commit and add a patch set: https://gerrit-review.googlesource.com/Documentation/intro-user.html#upload-patch-set
>
>Note in particular that the amended commit should have "\nChange-Id: ..." at the bottom which matches the Gerrit change.
>
>If you have the Gerrit commit hook installed, the steps to submit a new patch set are:
>
>* Make your changes in the original patch set branch
>* <code>git add</code> your changed files
>* <code>git commit --amend</code>
>* <code>git push origin HEAD:refs/for/master</code>
>
>== Using Mylyn for handling Bugzilla tasks ==
>
>From the ''Task Repositories'' view add your Bugzilla user ID and password:
>
>* Server: https://bugs.eclipse.org/bugs
>* Label: Eclipse.org
>* User ID: my_name@example.com
>* Password: ********
>
>From the Task List then create a Query: New > Query > Eclipse.org > Create query using form > Next
>
>* Product: MAT
>* Status: UNCONFIRMED NEW ASSIGNED REOPENED
>
>This lets you handle Bugzilla from inside Eclipse.
>
>== Writing plugins for the Memory Analyzer ==
>
>If you want to write a plugin for the Memory Analyzer, you can find information on the following pages:
>
>*[[MemoryAnalyzer/Reading Data from Heap Dumps]]
>*[[MemoryAnalyzer/Extending Memory Analyzer]]
>*[[MemoryAnalyzer/Adding a new heap dump format to the Memory Analyzer]]
>
>You should develop and test your plug-ins using the Eclipse environment, making sure that your plug-in is listed in the run configuration. Memory Analyzer uses the p2 installer (see [[Equinox p2 Getting Started]]).
To get your plug-in installed in a standalone version of MAT, you need to build an Eclipse feature or update site including your plug-in, then install the feature or update site into MAT.
>
>Help &gt; Install New Software &gt; Add &gt; Archive &gt; your exported feature
>
>You may need to deselect 'Group items by category' if your feature does not have categories.
>
>If you wish to install a plug-in without building a feature then you need to create a dropins folder under the mat directory in MAT, put your plug-in jar there, then restart MAT with
> MemoryAnalyzer -console
>and type
> start org.eclipse.equinox.p2.reconciler.dropins
>in the console so that the p2 installer will look in the dropins directory for your plug-ins.
>
>== New version development process ==
>* Document the new version at the [https://projects.eclipse.org/projects/tools.mat MAT project page]
>* Create Bugzilla entries for the target milestone [https://dev.eclipse.org/committers/bugs/bugz_manager.php Bugzilla Manager]
>* Update references to the old release in the code e.g. 1.X -> 1.Y excluding update sites
>** See pom.xml e.g. <code>&lt;version&gt;1.9.1-SNAPSHOT&lt;/version&gt;</code>
>** See manifest.mf e.g. <code>Bundle-Version: 1.9.1.qualifier</code>
>** See feature.xml, excluding updateSiteName
>** See org.eclipse.mat.ui.rcp about.mappings
>** org.eclipse.mat.product mat.product
>** org.eclipse.mat.ui.rcp.feature rootfiles/.eclipseproduct (hidden file, may need to use navigator view)
>
>* Develop features and fix bugs
>* If a plugin depends on new function in another plugin, update the dependency version in manifest.mf
>* If creating a new plugin, add it to the JavaDoc build process in extrabuild.xml, use package-info.java to mark packages as not API or API as appropriate. Consider carefully adding new APIs.
>
>* If the Java version changes then the minor version must increase; also change:
>** .classpath
>** .settings/org.eclipse.jdt.core.prefs
>** manifest.mf <code>Bundle-RequiredExecutionEnvironment: J2SE-1.5</code>
>** Update org.eclipse.ui.help extrabuild.xml for the new JavaDoc compile level, and for the link to the Java class library documentation
>** Consider keeping org.eclipse.mat.ibmdumps at a lower level as it uses classes for the exec dump provider and the attach jar which may be executed on lower level JVMs.
>* Update the copyright date in source code if updated in a new year
>* If the RCP is to be built against a newer version of Eclipse, then:
>** create a new target platform in org.eclipse.mat.targetdef
>** update org.eclipse.mat.ui.help extrabuild.xml to add a link to the Eclipse help for the platform
>** create a new /org.eclipse.mat.product/mat-<Eclipse-rel>.product file - normally use the same basename as for the target
>** create a new /org.eclipse.mat.product/mat-<Eclipse-rel>.p2.inf file - normally use the same basename as for the target
>* Check for regressions/changes in report outputs using the regression test suite
>** Check out the previous release from Git
>** Get it compiled - may need to change the target platform
>** In org.eclipse.mat.tests, run the <code>org.eclipse.mat.tests.application</code> with
><code>
>-regression
>./dumps
>"-Xmx500m -DMAT_HPROF_DUMP_NR=#1 --add-exports=java.base/jdk.internal.org.objectweb.asm=ALL-UNNAMED"
></code>. This will run the tests, and establish a baseline if one does not exist
>** Switch back to master
>** Rerun the test, which will detect any changes. Examine report.xml to understand whether the changes are expected.
>* Towards the end, change the update site references
>* Update the copyright date in feature.properties etc.
if the feature/plugin was updated in a new year
>** See feature.xml, including updateSiteName
>* Also write a New and Noteworthy document replacing the previous release, and add a link to the old document on the MAT website. This should be done in org.eclipse.mat.help in noteworthy.dita; take the generated noteworthy.html, modify it if needed and add it to the website.
>* Follow [[MemoryAnalyzer/Contributor Reference#Simultaneous release policies]]
>* After the release create a Bugzilla entry for the new version [https://dev.eclipse.org/committers/bugs/bugz_manager.php Bugzilla Manager]
>* [https://wiki.eclipse.org/Babel/FAQ#How_do_I_add_my_project_to_Babel.3F Add a new Babel definition] so the message files can be translated.
>
>== Simultaneous release policies ==
>* Create a release record in the [https://projects.eclipse.org/projects/tools.mat Eclipse Memory Analyzer project page]
>* For reference, read [https://github.com/orgs/eclipse-simrel/discussions/3 contribute to the Simultaneous Release Build]
>* [https://ci.eclipse.org/mat/job/prepare_simrel_contribution/ Build] to copy the update site build to the SimRel location.
>* Follow the SimRel process to update mat.aggrcon in the SimRel build.
>* Preserve the [https://ci.eclipse.org/mat/job/tycho-mat-nightly/ tycho-mat-nightly] build as 'Keep forever' and label it with the release build e.g. 'Photon RC2'
>* The [http://git.eclipse.org/c/simrel/org.eclipse.simrel.build.git/tree/mat.aggrcon MAT configuration file] is [https://github.com/orgs/eclipse-simrel/discussions/3 updated using Git in GitHub.com/eclipse-simrel] to match the SimRel location.
>* Tag in Git with 'R_1.X.Y' the source used to generate the final build for a release.
See [https://git.eclipse.org/c/mat/org.eclipse.mat.git/refs/ MAT Git Refs]
>* Complete the Eclipse release process, including getting a review if needed at the [https://projects.eclipse.org/projects/tools.mat Eclipse Memory Analyzer project page]
>* To release, also run the [https://ci.eclipse.org/mat/job/mat-standalone-packages/ Stand-alone packaging build], notarize the Mac x86_64 and aarch64 .dmg files and copy the results to the download site using [https://ci.eclipse.org/mat/job/mat-promote-release/ promote release].
>* Add the version name '1.XX' to the [https://dev.eclipse.org/committers/bugs/bugz_manager.php Bugzilla manager] so users can report bugs against the new version
>* Also release on the [https://marketplace.eclipse.org/content/memory-analyzer-0 Eclipse Marketplace]
>* Also consider archiving some old releases
>
>
>MAT Policies - to satisfy [[SimRel/Simultaneous_Release_Requirements/Appendix]]
>* [[MemoryAnalyzer/Ramp Down Plan|Ramp Down Plan]]
>* [[MemoryAnalyzer/Retention_policy|Retention Policy]]
>* [[MemoryAnalyzer/API_policy|API Policy]]
>* [[MemoryAnalyzer/MAT_Capabilities|Capabilities]]
>
>== Download Statistics ==
>Eclipse committers, once logged in at accounts.eclipse.org, can see download statistics at
>[https://dev.eclipse.org/committers/committertools/stats.php Eclipse Download Stats].
>These are from downloads via the Find a Mirror script for stand-alone MAT and from p2 downloads from an update site.
>Search for '/mat/' for mirror downloads and 'org.eclipse.mat.api' for p2 downloads.
> >== Maintaining MAT Website == > >See [[MemoryAnalyzer/Contributor Reference/Website]]</text> > <sha1>orkq7ynl2vbtt7ti68626iz7j2ty6wu</sha1> > </revision> > </page> > <page> > <title>MemoryAnalyzer/Contributor Reference/Website</title> > <ns>0</ns> > <id>39615</id> > <revision> > <id>446045</id> > <parentid>425557</parentid> > <timestamp>2022-10-14T09:13:37Z</timestamp> > <contributor> > <username>Krum.tsvetkov.sap.com</username> > <id>3945</id> > </contributor> > <comment>/* Get the MAT Website Git Repository */</comment> > <model>wikitext</model> > <format>text/x-wiki</format> > <text xml:space="preserve" bytes="2141">== MAT Website == > >This page contains information about the MAT web site http://www.eclipse.org/mat/ - where the source is, how to make updates, etc... > >The MAT Website resides in Git. > >== Setting up the Tools == > >=== EGit === > >To work with Git in Eclipse, you'll need to install (if not available already) the EGit tools from > ><code><nowiki>http://download.eclipse.org/egit/updates/</nowiki></code> > >A detailed EGit user guide is available here: [[EGit/User Guide]] > >=== SSH Key === >For the ssh communication, you will need to setup a key. >Follow the guide [[Git#Setting up ssh keys]] > >=== Username/E-Mail === >The following resources can be useful to properly configure your environment: >* Introduction [[Git#Committers_new_to_Git]] >* Setup your user/e-mail [[Platform-releng/Git Workflows#Configure the workspace]] > >== Get the MAT Website Git Repository == >MAT Website repository moved to GitHub: https://github.com/eclipse/mat-website > >Clone the repository https://github.com/eclipse/mat-website (see [https://docs.github.com/en/repositories/creating-and-managing-repositories/cloning-a-repository Cloning a Repository] if you need help). > >The content of the master branch is displayed on the website. 
See [https://www.eclipse.org/projects/handbook/#resources-website Project Websites] for more details such as changes taking up to 5 minutes to be copied to the live website.
>
>== Import the project ==
>There is one general project for the website content.
>Once you have cloned the repository you can import the project.
>
>See [[EGit/User Guide#Importing projects]]
>
>== Making changes ==
>Well, there is nothing specific for MAT.
>Once you have cloned the repository and imported the project in the IDE, you can make local changes and commit them locally.
>Before committing, be sure to have properly set the user/e-mail (see above).
>
>Once you think the changes are ready to be put on the website, push your changes.
>The new content will be visible on the website in a few minutes.
>
>See:
>* [[EGit/User Guide#Fetching_from_upstream]]
>* [[EGit/User Guide#Committing Changes]]
>* [[EGit/User Guide#Pushing to upstream]]
>
>[[Category:Memory Analyzer]]</text>
> <sha1>92ztkoghdmus4phpc63iisw2ndzbpbb</sha1>
> </revision>
> </page>
> <page>
> <title>MemoryAnalyzer/Docker</title>
> <ns>0</ns>
> <id>60486</id>
> <revision>
> <id>438279</id>
> <parentid>438277</parentid>
> <timestamp>2020-02-16T15:15:50Z</timestamp>
> <contributor>
> <username>Andrew johnson.uk.ibm.com</username>
> <id>4755</id>
> </contributor>
> <minor/>
> <model>wikitext</model>
> <format>text/x-wiki</format>
> <text xml:space="preserve" bytes="1558">== Docker and Eclipse Memory Analyzer ==
>
>[[Category:Memory Analyzer]]
>
>It is possible to run Eclipse Memory Analyzer in a Docker container.
>A useful Docker image is the following: [https://hub.docker.com/r/kgibm/fedorawasdebug]
>
>It is also possible to have minimal images to allow Eclipse Memory Analyzer to be tested in various Linux distributions.
>These Dockerfiles allow testing of snapshot builds.
> > <nowiki> >FROM ubuntu >#If docker build gets the wrong time , might need apt-get -o Acquire::Max-FutureTime=86400 update >RUN apt-get update && apt-get install -y default-jdk wget unzip libwebkit2gtk-4.0 firefox ># Download snapshot build, just for testing >RUN wget "http://www.eclipse.org/downloads/download.php?file=/mat/snapshots/rcp/org.eclipse.mat.ui.rcp.MemoryAnalyzer-linux.gtk.x86_64.zip&mirror_id=1" -O /tmp/org.eclipse.mat.ui.rcp.MemoryAnalyzer-linux.gtk.x86_64.zip >RUN unzip /tmp/org.eclipse.mat.ui.rcp.MemoryAnalyzer-linux.gtk.x86_64.zip -d /opt >ENV DISPLAY host.docker.internal:0.0 >CMD ["/opt/mat/MemoryAnalyzer"] ></nowiki> > > <nowiki> >FROM fedora >RUN yum install -y wget unzip java-1.8.0-openjdk.x86_64 webkitgtk4 firefox ># Download snapshot build, just for testing >RUN wget "http://www.eclipse.org/downloads/download.php?file=/mat/snapshots/rcp/org.eclipse.mat.ui.rcp.MemoryAnalyzer-linux.gtk.x86_64.zip&mirror_id=1" -O /tmp/org.eclipse.mat.ui.rcp.MemoryAnalyzer-linux.gtk.x86_64.zip >RUN unzip /tmp/org.eclipse.mat.ui.rcp.MemoryAnalyzer-linux.gtk.x86_64.zip -d /opt >ENV DISPLAY host.docker.internal:0.0 >CMD ["/opt/mat/MemoryAnalyzer"] ></nowiki></text> > <sha1>sfn7kqf885trx4t51vz2iqx2x0smra8</sha1> > </revision> > </page> > <page> > <title>MemoryAnalyzer/Extending Memory Analyzer</title> > <ns>0</ns> > <id>27732</id> > <revision> > <id>443526</id> > <parentid>441275</parentid> > <timestamp>2021-06-23T10:25:11Z</timestamp> > <contributor> > <username>Andrew johnson.uk.ibm.com</username> > <id>4755</id> > </contributor> > <comment>/* Setting up a development environment for writing extensions */</comment> > <model>wikitext</model> > <format>text/x-wiki</format> > <text xml:space="preserve" bytes="22661">== Introduction == > >The Memory Analyzer tool offers several possibilities to extend it. This page contains an overview of the different extension points and what can be achieved with them. 
>
>Within the extensions one will usually extract certain pieces of information from the objects in the heap dump. See [[MemoryAnalyzer/Reading Data from Heap Dumps]] for more details on how to read data from a heap dump.
>
>== Setting up a development environment for writing extensions ==
>
>It is not necessary to download the source of Memory Analyzer to be able to write extensions. A recent binary version is sufficient.
>
>#Have a copy of an Eclipse Java Development environment installed
>#Download a recent copy of Memory Analyzer [from October 2010 or later]
>##Download Memory Analyzer 1.12.0 [https://www.eclipse.org/mat/downloads.php Download site]
>##Update using update site (if you haven't got the latest version) [http://download.eclipse.org/mat/snapshots/update-site/ Update site]
>#Create MAT as a target platform:
>##Windows-&gt;Preferences-&gt;Plug-in Development-&gt;Target Platform
>##Add-&gt;Nothing-&gt;Next
>##Name: MAT
>##Locations-&gt;Add-&gt;Installation
>##Location: path_to_MAT/mat
>##Finish
>##Select MAT as active target platform
>#Create a new plug-in project:
>##File-&gt;New-&gt;Other-&gt;Plug-in project
>##Name: MAT Extension
>##-&gt;Next
>##Execution Environment: JavaSE-1.8 (that's all MAT currently requires)
>##No activator (unless you are doing something complicated)
>##No UI contribution
>##No API analysis
>##No template
>##-&gt;Finish
>##Dependencies
>###add org.eclipse.mat.api
>###Save (Ctrl+S)
>##Extensions
>###select org.eclipse.mat.api.nameResolver
>###-&gt;Finish
>###click on impl
>###Adjust package name and class name to suit
>###-&gt;Finish
>##Add for example
>##:@Subject("java.lang.Runtime")
>##:before the class definition
>##Organize imports (Ctrl+Shift+O)
>##Edit the code to perform the required function. For example
>##:In
>##: public String resolve(IObject object)
>##:Change
>##: return null;
>##:to
>##: return "The Java runtime of size " + object.getUsedHeapSize();
>##:Note the hover javadoc help for IObject, IClassSpecificNameResolver. Note the method list for object.
>##Save
>#To test:
>##Select Plug-in, Run As-&gt;Eclipse Application
>#To package
>##File->Export->Plug-in Development->Deployable plug-ins and fragments
>##->Next
>##select plug-in
>##Destination: Directory: path_to_MAT/mat
>##->Finish
>
>== The Name Resolver Extension ==
>
>The name resolver extension point provides a mechanism to give a readable description of an object, similar to what a toString() method will do. Some extensions which MAT provides show, for example, the content of objects representing String, the bundle symbolic name for Equinox classloaders, the name for Thread objects, etc...
>
>The extension should implement the IClassSpecificNameResolver interface which defines a single method.
><source lang="java">
>public String resolve(IObject object) throws SnapshotException;
></source>
>
>The method takes an IObject as an argument and should return a string representation.
>
>To specify the class for which the resolver should be used, one can use the @Subject annotation.
>
>The method getClassSpecificName of IObject will look for extensions which match the class of the object and execute the resolve() method to return the proper String. Thus it is relatively easy to return a description based on one or more String fields, as strings are already resolved.
>
>Here is a sample implementation that will return the name of an Eclipse Job:
>
><source lang="java">
>@Subject("org.eclipse.core.runtime.jobs.Job")
>public class JobNameResolver implements IClassSpecificNameResolver
>{
>
>    @Override
>    public String resolve(IObject object) throws SnapshotException
>    {
>        IObject name = (IObject) object.resolveValue("name");
>        if (name != null) return name.getClassSpecificName();
>        return null;
>    }
>
>}
></source>
>
>== Queries in Memory Analyzer ==
>
>=== Introduction to Queries ===
>
>Most of the functionality in Memory Analyzer which is exposed to the user of the tool is provided via queries (implementing the IQuery interface), for example "Histogram", "Retained Set", etc... Queries extract and process data from the heap dump using MAT's API, and provide the result to the user in the form of a table, a tree, free text, etc... Queries show up in the "Queries" menu of the tool, and often in the context menus on objects.
>An important feature of the queries is that they can "collaborate", i.e. the user can use (part of) the result of one query and pass it as input parameters to another query.
>Here is an example of such "cooperation" - you select "Histogram" to show a class histogram of all objects, then choose say java.util.HashMap and from the context menu call "Retained Set". You can then select a line in this retained set and pass the corresponding objects to yet another (possibly created by you) query.
>
>=== The IQuery Interface ===
>
>To implement a query one needs to implement the [http://help.eclipse.org/juno/topic/org.eclipse.mat.ui.help/doc/org/eclipse/mat/query/IQuery.html IQuery] interface.
>The implementation should provide a (default) constructor without parameters.
>
>The IQuery interface defines just one method:
>
><source lang="java">
>public IResult execute(IProgressListener listener) throws Exception;
></source>
>
>As a parameter one gets only a progress listener (IProgressListener) to report progress. All other input that a query needs is declared with annotations on the fields of the query and injected from Memory Analyzer at runtime. The fields used for argument injection should be declared public. For more details on getting input, see [[#Passing Arguments to a Query]].
>The return type of the execute method is IResult, which is just a marker interface. The different result types are described in the section [[#Query Results]].
>
>=== Scope ===
>
>Queries are stateless. Every time the user executes a query a new instance of the IQuery implementation is created. The required input is injected into the fields of the instance and the execute method is called.
>
>=== Describing the Query with Annotations ===
>
>When you write a query it will appear in the context menus, and Memory Analyzer will open an Arguments Wizard for specifying the required arguments, and this wizard will also show some help for the query and its arguments.
>The metadata - how a query will be named, under which category (sub-menu) it will appear, the help text, etc... - is provided by annotating the query.
>
>The following metadata-related annotations are available:
>* @CommandName - used for command line and query browser
>* @Name - the visible name on the menu, nn| to set order
>* @Category - the menu section (sub-menu), / to cascade, nn| to set order
>* @Help - explanation of query
>* @HelpUrl - link into the help system
>* @Icon - icon for the query (shown in the menu)
>* @Usage - example usage - defaults to command name + args
>
>The values can also be externalized. To do so, put them into an annotations.properties file in the package directory.
>
>Sample code snippet:
>
><source lang="java">
>@Category("Sample Queries")
>@Name("List Jobs Query")
>@Help("This is a sample query, which lists all jobs with a given name")
>public class SampleQuery implements IQuery {
>
>...
></source>
>
>To externalize these values, an annotations.properties file will look something like:
><pre>
>SampleQuery.category = Sample Queries
>SampleQuery.name = List Jobs Query
>SampleQuery.help = This is a sample query, which lists all jobs with a given name
></pre>
>
>=== Passing Arguments to a Query ===
>
>A nice property of queries is that they can interact with each other. In other words, parts of the result from one query (say one line in a histogram) can be passed to a different query using the context menus. Therefore, queries should just declare what kind of arguments they require and delegate to the Memory Analyzer to collect this information and inject it into the queries before executing them. Memory Analyzer does this by opening the Arguments Wizard.
>
>To declare an input parameter a query has to define a ''public'' field and annotate it with the ''@Argument'' annotation. To provide a help message specific to the concrete argument, also add the ''@Help'' annotation to the public field.
>
>The following types are currently supported as arguments:
>
>; ISnapshot : the snapshot corresponding to the currently open editor
>; IHeapObjectArgument : a good way of getting objects
>; String, Pattern, int, boolean, float, double : these get supplied via query wizard or via command line
>; IContextObject : row with one object
>; IContextObjectSet : row with multiple objects or OQL query to return those objects
>; IQueryContext : a more general way of extracting information about the snapshot which is not tied to the snapshot API
>; arrays or lists of the above : use them for multiple items
>; enums : can be used to provide a fixed choice list
>; File : for input or output files
>
>
>==== Comparison Queries ====
>
>Comparison queries are run from the Compare Basket but are invoked in a similar way. Each row of the Compare Basket is a whole result of a previous query; either a tree or a table. Queries with arguments suitable for a comparison operation are only offered in the Compare Menu and not from the editor pane. Comparison arguments are as follows:
>
>; List or array of IResultTable : for comparison queries only operating on tables
>; List or array of IResultTree : for comparison queries only operating on trees
>; List or array of IStructuredResult : for comparison queries operating on tables and trees
>; List or array of RefinedTable : for comparison queries only operating on tables, uses the filtered and sorted version of the previous result with any derived columns like retained size
>; List or array of RefinedTree : for comparison queries only operating on trees, uses the filtered and sorted version of the previous result with any derived columns like retained size
>; List or array of RefinedStructuredResult : for comparison queries operating on tables and trees, uses the filtered and sorted version of the previous result with any derived columns like retained size
>; List or array of ISnapshot : the snapshots corresponding to the tables / trees, in the same order
>
>Consider using RefinedStructuredResult for your comparison queries as the query may then be more flexible for the end user.
>
>Standard arguments available to comparison queries:
>
>; String, Pattern, int, boolean, float, double : these get supplied via query wizard or via command line
>; enums : can be used to provide a fixed choice list
>; File : for input or output files
>
>See [https://git.eclipse.org/c/mat/org.eclipse.mat.git/plain/plugins/org.eclipse.mat.api/src/org/eclipse/mat/internal/snapshot/inspections/CompareTablesQuery.java CompareTablesQuery.java] for an example.
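The table-comparison idea above can be sketched without the MAT API at all: a comparison query essentially lines rows up by a key and computes per-column deltas. Below is a minimal, self-contained illustration of that idea (the class and method names are invented for this sketch and are not part of MAT):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative only: diffs two "histograms" (class name -> object count),
// the kind of row-wise comparison a comparison query performs on tables.
public class HistogramDiff {

    public static Map<String, Long> diff(Map<String, Long> before, Map<String, Long> after) {
        Map<String, Long> delta = new LinkedHashMap<>();
        // Rows present in either table; a row missing from one side counts as zero.
        for (Map.Entry<String, Long> e : after.entrySet()) {
            delta.put(e.getKey(), e.getValue() - before.getOrDefault(e.getKey(), 0L));
        }
        for (Map.Entry<String, Long> e : before.entrySet()) {
            delta.putIfAbsent(e.getKey(), -e.getValue());
        }
        return delta;
    }

    public static void main(String[] args) {
        Map<String, Long> t1 = Map.of("java.lang.String", 100L, "byte[]", 40L);
        Map<String, Long> t2 = Map.of("java.lang.String", 150L, "int[]", 5L);
        // String grew by 50, byte[] disappeared (-40), int[] is new (+5)
        System.out.println(diff(t1, t2));
    }
}
```

A real comparison query would receive the rows via the List-of-IResultTable style arguments above and return an IResult rather than a Map.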
> >==== Qualifications on Query Arguments ==== > >The following parameters on the @Argument annotation can be used to specify some further restrictions / hints: >; isMandatory : a boolean parameter to tell MAT if it can execute the query without the argument >; flag : a String used instead of the field name to identify the argument in the command line and in the query browser >; Advice : qualifies the way data is inserted into the field >*; HEAP_OBJECT : the int or Integer is an object id, not a number >*; SECONDARY_SNAPSHOT : the snapshot is another snapshot, which should be prompted for, not the current one >*; CLASS_NAME_PATTERN : the pattern will be used to match class names >*; DIRECTORY : the file parameter is meant to be a directory >*; SAVE : the file parameter is meant to be used to save data > >=== Reading data from supplied arguments === > >See [[MemoryAnalyzer/Reading Data from Heap Dumps]] for how to extract data from supplied arguments, including ISnapshot, IObject and object IDs. > >=== Calling One Query from Another === >Supplied queries are not a Memory Analyzer API, so user written queries should not link to them directly. It is possible to call them by name, though the query names and arguments can vary from release to release. ><source lang="java"> >String query = "SELECT s, toString(s) from java.lang.String s"; >IResult ir = SnapshotQuery.lookup("oql", snapshot).setArgument("queryString", query).execute(listener); ></source> >or ><source lang="java"> >SnapshotQuery query = SnapshotQuery.parse("dominator_tree -groupby BY_CLASSLOADER", snapshot); >IResultTree t = (IResultTree)query.execute(new VoidProgressListener()); ></source> >which shows how an enum argument can be set. Setting them directly via setArgument often doesn't work as the enum type is inaccessible. > >=== Query Results === > >; TextResult : A simple result that renders its input as text. 
>
>; IStructuredResult : A way of displaying data about many objects
>*; IResultTable : A table of objects
>**; Histogram : A table of objects where each row is all the objects of one class
>**; ListResult : A way of displaying a Java List of things, where the fields from each thing are also given
>**; PropertyResult : A way of displaying details about one object based on a list of attributes
>*; IResultTree : A tree of objects
>; IResultPie : A pie chart
>; QuerySpec : A good way of displaying the results of executing another query
>
>TODO improve description, add remaining results
>
>== Reports in Memory Analyzer ==
>=== Introduction ===
>Several queries can be combined into a report which could then be run from the Run Report... menu option or in batch mode.
>=== Report definition ===
>The report definition is written in XML and can be validated using the schema held in [https://git.eclipse.org/c/mat/org.eclipse.mat.git/tree/plugins/org.eclipse.mat.report/schema/report.xsd org.eclipse.mat.report/schema/report.xsd]. The result of running a report will be an HTML page or a CSV data file. Queries can be run using the <code>query</code> element, using the <code>command</code> element to specify which command should be run. Other reports can be run using the <code>template</code> element. The <code>section</code> element can be used to combine multiple <code>query</code>, <code>template</code> and <code>section</code> elements and is displayed in a report as a collapsible part or a separate file.
>
>Two examples show how the report definition is written. [https://git.eclipse.org/c/mat/org.eclipse.mat.git/tree/plugins/org.eclipse.mat.api/META-INF/reports/overview.xml overview.xml] has a <code>section</code> and [https://git.eclipse.org/c/mat/org.eclipse.mat.git/tree/plugins/org.eclipse.mat.api/META-INF/reports/suspects.xml suspects.xml] also has a <code>template</code> element referencing another report to be run and included.
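Putting these elements together, a minimal report definition might look like the following sketch (the report name is invented, <code>histogram</code> is used as an example command, and the definition should be validated against report.xsd):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<section name="Example Report"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xmlns="http://www.eclipse.org/mat/report.xsd"
         xsi:schemaLocation="http://www.eclipse.org/mat/report.xsd platform:/resource/org.eclipse.mat.report/schema/report.xsd">
    <!-- render the tables of this section as HTML -->
    <param key="format" value="html" />
    <query name="Class Histogram">
        <command>histogram</command>
    </query>
</section>
```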
>
>The <code>param</code> element allows output to be controlled. The params can be used to control generation of tables - as HTML or CSV files, to limit or increase the number of lines displayed, and to omit, sort or filter columns. Some params just control the current section - some also control any inner sections unless overridden. A param can also be used elsewhere in the report as <code>${param_name}</code> - in the command name and other param values. Values are documented in [https://help.eclipse.org/2019-09/topic/org.eclipse.mat.ui.help/doc/org/eclipse/mat/report/Params.html <code>param.</code> keys], [https://help.eclipse.org/2019-09/topic/org.eclipse.mat.ui.help/doc/org/eclipse/mat/report/Params.Html.html <code>param.html.</code> keys] and [https://help.eclipse.org/2019-09/topic/org.eclipse.mat.ui.help/doc/org/eclipse/mat/report/Params.Rendering.html <code>param.rendering.</code> keys], more conveniently seen at [https://help.eclipse.org/2019-09/topic/org.eclipse.mat.ui.help/doc/constant-values.html#org.eclipse.mat.report.Params.FILENAME Constant field values]. Since Memory Analyzer 1.9, parameters can be passed to a report definition using <code>ParseHeapDump</code> as options thus: <code>ParseHeapDump myheapdump.hprof -myparam=myparam_value myreport.xml</code>. A param value given on the command line will override a value given in a report definition.
>
>The report definition can be tested via Run Expert System Test > Run Report.
>
>=== Report definition extension point ===
>
>If a report definition is incorporated into a plug-in then the report definition extension point should be used. Then when the new plug-in is installed into MAT the report will be available to run from the Memory Analyzer GUI.
>
>Values in the report definition can then be externalized using "%myval Default value" where myval is also defined in the plugin.properties file.
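The "%myval Default value" convention pairs a lookup key with an inline fallback. The lookup logic can be illustrated with a small self-contained sketch (this is not MAT's actual resolver, just the idea):

```java
import java.util.Properties;

// Illustrative resolver for "%key Default value" strings, as used to
// externalize report definition values against plugin.properties.
public class ExternalizedValue {

    public static String resolve(String raw, Properties bundle) {
        if (raw == null || !raw.startsWith("%")) {
            return raw; // not externalized, use as-is
        }
        int space = raw.indexOf(' ');
        String key = space < 0 ? raw.substring(1) : raw.substring(1, space);
        String fallback = space < 0 ? "" : raw.substring(space + 1);
        // translated value if present, otherwise the inline default
        return bundle.getProperty(key, fallback);
    }

    public static void main(String[] args) {
        Properties props = new Properties();
        props.setProperty("myval", "Translated value");
        System.out.println(resolve("%myval Default value", props)); // Translated value
        System.out.println(resolve("%missing Default value", new Properties())); // Default value
    }
}
```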
>
>=== Executing a Report in Unattended Mode ===
>Once you create a Report extension, you can find it next to other reports like the "Leak Suspects" in the Memory Analyzer GUI.
>Besides this, one can execute reports in unattended mode by running the "org.eclipse.mat.api.parse" application.
>
>Here is an example of how to set up a Run Configuration to execute the report "my_report" located in the plugin "my_report_plugin":
>* Run As -> Run Configurations
>* Set "Run An Application" to "org.eclipse.mat.api.parse"
>* Set the Program arguments to
><pre>${file_prompt} my_report_plugin:my_report</pre>
>
>When you run this you will get a popup to select a heap dump file. It will then be parsed and the report will be executed for it. The result, however, is not opened in the IDE; it is saved in the file system next to the dump.
>
>==== Parameters ====
>
>You can also define options on the command line.
>This can be done as
><pre>${file_prompt} "-my_parm=my special value" my_report_plugin:my_report</pre>
>
>and then access the variable inside the report XML as
><source lang="XML">
><section name="My report" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
>    xmlns="http://www.eclipse.org/mat/report.xsd"
>    xsi:schemaLocation="http://www.eclipse.org/mat/report.xsd platform:/resource/org.eclipse.mat.report/schema/report.xsd">
>    <param key="my_parm" value="default value" />
>    <query name="OQL Query command">
>        <command>my_query -myopt "${my_parm}"</command>
>    </query>
></section>
></source>
>
>== Request Resolvers ==
>
>=== Introduction ===
>
>A request resolver is a piece of code which is capable of extracting details about what a thread was doing, using the information from the thread object and its Java local objects. The information provided by a request resolver is included in the Leak Suspects report.
>
>When is this useful? There are often OutOfMemoryErrors which are not caused by a memory leak, but rather by some "greedy" operation - an attempt to load a huge file fully into memory, an attempt to scan a whole DB table and keep the results in memory, etc...
>In such cases the Leak Suspect report will often point to the thread as the suspect object, because its local objects (objects on the thread's stack) are eating too much memory. In such cases it is very helpful to know what the thread was doing and to get some insights on this activity.
>
>Examples:
>* tell that a thread was processing an HTTP request and extract the concrete request and some parameters
>* show an SQL statement that has been processed when an OOM error occurred
>* display the name of the Eclipse Job which has been processed
>* etc...
>
>=== The IRequestDetailsResolver Interface ===
>
>To implement a request resolver one needs to implement the [http://help.eclipse.org/juno/topic/org.eclipse.mat.ui.help/doc/org/eclipse/mat/snapshot/extension/IRequestDetailsResolver.html IRequestDetailsResolver] interface.
>To specify for which type of local objects this request resolver can provide information, use the @Subject annotation on the implementation class.
>
>The interface defines just one method:
>
><source lang="java">
>    void complement(ISnapshot snapshot, IThreadInfo thread, int[] javaLocals, int thisJavaLocal,
>        IProgressListener listener) throws SnapshotException;
></source>
>
>The complement method will be called by Memory Analyzer whenever it collects information for a thread and the thread has a local object (somewhere on the stack) which matches the type specified by @Subject. As parameters one will receive all necessary context information to extract the needed information:
>
>* snapshot - the whole dump
>* thread - an IThreadInfo object representing the thread being analyzed
>* javaLocals - all the local variables, as object IDs
>* thisJavaLocal - the object ID of the local object matching the @Subject
>
>Within the complement() method one should extract helpful information about the activity of the thread and add it to the IThreadInfo object using the addRequest() method. The addRequest() method takes two parameters - a String with a short description appearing on the first page of the Leak Suspects report, and an IResult with more details about the request (could be a table with all properties, etc...).
>
>Here is a code sample for a request resolver:
><source lang="java">
>/* Specify that I can extract information from ProgressManager$JobMonitor objects */
>@Subject("org.eclipse.ui.internal.progress.ProgressManager$JobMonitor")
>public class JobRequestResolver implements IRequestDetailsResolver {
>
>    @Override
>    public void complement(ISnapshot snapshot, IThreadInfo thread,
>            int[] javaLocals, int thisJavaLocal, IProgressListener listener)
>            throws SnapshotException {
>
>        IObject monitor = snapshot.getObject(thisJavaLocal); // get the IObject for the JobMonitor
>        IObject job = (IObject) monitor.resolveValue("job"); // get the value of the job field
>        String jobName = job.getClassSpecificName(); // get the symbolic representation of the job
>
>        String summary = "This thread executes the job [" + jobName + "]";
>        IResult sampleDetails = new TextResult("Job object is = [" + job.getDisplayName() + "]");
>
>        thread.addRequest(summary, sampleDetails); // add the request information
>
>        thread.addKeyword(jobName); // add the job name to the keywords
>    }
>}
></source>
>
>This sample request resolver will add to the leak suspect report a line like:
>
>This thread executes the job [Sample greedy job]
>
>if the thread was processing an Eclipse job. It will add as details the object instance of the Job implementation.
>
>== Adding a New Heap Dump Format ==
>
>Memory Analyzer can also be extended to support more heap dump formats. A detailed description of how to do this can be found here:
>[[MemoryAnalyzer/Adding a new heap dump format to the Memory Analyzer]]
>
>== Contributing back to the project ==
>
>If your extension to Memory Analyzer would be useful to other people, please consider [[MemoryAnalyzer/Contributor Reference|contributing]] it back to the project.
>
>[[Category:Memory Analyzer]]</text>
> <sha1>hxrm90ieg5hwrqjg6v2jf9plnezbdvv</sha1>
> </revision>
> </page>
> <page>
> <title>MemoryAnalyzer/FAQ</title>
> <ns>0</ns>
> <id>13805</id>
> <revision>
> <id>447283</id>
> <parentid>447282</parentid>
> <timestamp>2023-04-27T09:14:55Z</timestamp>
> <contributor>
> <username>Andrew johnson.uk.ibm.com</username>
> <id>4755</id>
> </contributor>
> <comment>/* Out of Memory Error while Running the Memory Analyzer */</comment>
> <model>wikitext</model>
> <format>text/x-wiki</format>
> <text xml:space="preserve" bytes="11807">= Frequently Asked Questions =
>[[MemoryAnalyzer]], [http://www.eclipse.org/mat Home Page], [http://www.eclipse.org/newsportal/thread.php?group=eclipse.technology.memory-analyzer Forum]
>
>[[Category:Memory Analyzer]][[Category:FAQ]]
>
>== Problems Starting the Memory Analyzer ==
>
>=== java.lang.RuntimeException: No application id has been found. ''or'' Incompatible JVM ===
>
>Memory Analyzer 1.14 and later needs a '''Java 17''' VM or later to run.
>Memory Analyzer 1.12 and 1.13 needs a '''Java 11''' VM or later to run.
>Memory Analyzer 1.8 to 1.11 needs a '''Java 1.8''' VM or later to run (of course, heap dumps from JDK 1.4.2_12 on are supported).
>
>If in doubt, provide the runtime VM on the command line:
><blockquote>MemoryAnalyzer.exe -vm <i>path/to/java17/bin</i></blockquote>
>
>Alternatively, edit the <code>MemoryAnalyzer.ini</code> to contain (on <strong>two lines</strong>), and before any <code>-vmargs</code> lines:
>
><blockquote><code>-vm<br/>
>path/to/java17/bin</code></blockquote>
>
>(This error happens because the MAT plug-in requires a JDK 1.8 via its manifest.mf file and the OSGi runtime dutifully does not activate the plug-in.)
>Memory Analyzer version 1.1 will give a better error message pop-up.
><blockquote>
>Incompatible JVM
><br/>
>Version 1.4.2 of the JVM is not suitable for this product. Version 1.5.0 or greater is required.
></blockquote>
>or
><blockquote>
>Incompatible JVM
><br/>
>Version 11.0.17 of the JVM is not suitable for this product. Version 17 or greater is required.
></blockquote>
>
>=== Out of Memory Error while Running the Memory Analyzer ===
>
>Well, analyzing big heap dumps can also require more heap space. Give it some more memory (possibly by running on a 64-bit machine):
>
><blockquote>MemoryAnalyzer.exe -vmargs -Xmx4g -XX:-UseGCOverheadLimit</blockquote>
>
>Alternatively, edit the <code>MemoryAnalyzer.ini</code> to contain:
>
><blockquote>-vmargs<br/>
>-Xmx2g<br/>
>-XX:-UseGCOverheadLimit</blockquote>
>
>The <code>-vmargs</code> lines must come last in the MemoryAnalyzer.ini file.
>
>As a rough guide, Memory Analyzer itself needs 32 to 64 bytes for each object in the analyzed heap, so -Xmx2g might allow a heap dump containing 30 to 60 million objects to be analyzed. Memory Analyzer 1.3 using -Xmx58g has successfully analyzed a heap dump containing over 948 million objects.
>
>The initial parse and generation of the dominator tree uses the most memory, so it can be useful to do the initial parse on a large machine, then copy the heap dump and index files to a more convenient machine for further analysis.
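The rule of thumb above can be turned into a quick capacity estimate with a little arithmetic; at 32 to 64 bytes of working memory per analyzed object, -Xmx2g gives roughly 33 to 67 million objects, consistent with the rough guide:

```java
// Rough capacity estimate: how many dump objects fit in a given -Xmx,
// assuming 32 to 64 bytes of MAT working memory per object.
public class HeapCapacity {

    public static long objectsAt(long heapBytes, long bytesPerObject) {
        return heapBytes / bytesPerObject;
    }

    public static void main(String[] args) {
        long xmx2g = 2L << 30; // -Xmx2g = 2 GiB
        System.out.println(objectsAt(xmx2g, 64)); // 33554432 (~33 million, pessimistic)
        System.out.println(objectsAt(xmx2g, 32)); // 67108864 (~67 million, optimistic)
    }
}
```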
>
>For more details, check out the section [http://help.eclipse.org/juno/topic/org.eclipse.platform.doc.user/tasks/running_eclipse.htm Running Eclipse] in the Help Center. It also contains more details if you are running on Mac OS X.
>
>If you are running the Memory Analyzer inside your Eclipse SDK, you need to edit the <code>eclipse.ini</code> file.
>
>=== How to run on a 64bit VM while the native SWT libraries are 32bit ===
>
>In short: if you run a 64bit VM, then all native parts also must be 64bit. But what if - like Motif on AIX - native SWT libraries are only available as a 32bit version? One can still run the command line parsing on 64bit by executing the following command:
>
><blockquote><code>/usr/java5_64/jre/bin/java -jar plugins/org.eclipse.equinox.launcher_1*.jar -consoleLog -application org.eclipse.mat.api.parse path/to/dump.dmp.zip org.eclipse.mat.api:suspects org.eclipse.mat.api:overview org.eclipse.mat.api:top_components</code></blockquote>
>
>Alternatively, the latest version of Memory Analyzer ships with the <code>ParseHeapDump.sh</code> script, which relies on having java in the path.
><pre><nowiki>
>#!/bin/sh
>#
># This script parses a heap dump.
># Adjust the path to java, version 5 or later, and the heap size as required.
># Suitable for 64-bit and 32-bit Java, but a 64-bit Java is required
># for larger heap sizes.
>#
># Usage: ParseHeapDump.sh <path/to/dump.dmp.zip> [report]*
>#
># The leak report has the id org.eclipse.mat.api:suspects
># The top component report has the id org.eclipse.mat.api:top_components
>#
>
>java -Xmx3072M -jar "`dirname "$0"`"/plugins/org.eclipse.equinox.launcher_1*.jar -consoleLog -application org.eclipse.mat.api.parse "$@"
></nowiki></pre>
>
>Using <code>plugins/org.eclipse.equinox.launcher_1*.jar</code> finds a version of the <strong>Equinox Launcher</strong> available in your installation without having to specify the exact name of the launcher file, as this version changes regularly!
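The same version-independent lookup that the shell glob performs can also be done programmatically. This sketch (not part of MAT) locates a launcher jar under a plugins directory without hard-coding its version suffix:

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;

// Finds an Equinox launcher jar under plugins/ without hard-coding the
// version, mirroring the org.eclipse.equinox.launcher_1*.jar glob.
public class FindLauncher {

    public static Path find(Path pluginsDir) throws IOException {
        try (DirectoryStream<Path> stream =
                Files.newDirectoryStream(pluginsDir, "org.eclipse.equinox.launcher_1*.jar")) {
            for (Path jar : stream) {
                return jar; // first match is enough; versions rarely coexist
            }
        }
        return null; // no launcher present
    }

    public static void main(String[] args) throws IOException {
        // Demonstration with a temporary directory standing in for plugins/
        Path plugins = Files.createTempDirectory("plugins");
        Files.createFile(plugins.resolve("org.eclipse.equinox.launcher_1.6.400.jar"));
        System.out.println(find(plugins).getFileName());
    }
}
```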
>
>The <code>org.eclipse.mat.api:suspects</code> argument creates a ZIP file containing the leak suspect report. This argument is optional.
>
>The <code>org.eclipse.mat.api:overview</code> argument creates a ZIP file containing the overview report. This argument is optional.
>
>The <code>org.eclipse.mat.api:top_components</code> argument creates a ZIP file containing the top components report. This argument is optional.
>
>With Memory Analyzer 0.8, but not Memory Analyzer 1.0 or later, the IBM DTFJ adapter has to be initialized in advance. For parsing IBM dumps with the IBM DTFJ adapter, with Memory Analyzer 0.8 you should use this command:
>
><blockquote><code>/usr/java5_64/jre/bin/java -Dosgi.bundles=org.eclipse.mat.dtfj@4:start,org.eclipse.equinox.common@2:start,org.eclipse.update.configurator@3:start,org.eclipse.core.runtime@start
>-jar plugins/org.eclipse.equinox.launcher_*.jar -consoleLog -application org.eclipse.mat.api.parse path/to/mydump.dmp.zip org.eclipse.mat.api:suspects org.eclipse.mat.api:overview org.eclipse.mat.api:top_components</code></blockquote>
>
>== Problems Getting Heap Dumps ==
>
>=== Error: Found instance segment but expected class segment ===
>
>This error indicates an inconsistent heap dump: The data in the heap dump is written in various segments. In this case, an address expected in a class segment is written into an instance segment.
>
>The problem has been reported in heap dumps generated by ''jmap'' on ''Linux'' and ''Solaris'' operating systems and ''jdk1.5.0_13'' and below. Solution: use the latest jdk/jmap version or use jconsole to write the heap dump (needs jdk6).
>
>=== Error: Invalid heap dump file. Unsupported segment type 0 at position XZY ===
>
>This almost always means the heap dump has not been written properly by the Virtual Machine. The Memory Analyzer is not able to read the heap dump.
>
>If you are able to read the dump with other tools, please file a [https://bugs.eclipse.org/bugs/enter_bug.cgi?product=MAT bug report]. Using the HPROF options with [[#Enable_Debug_Output]] may help in debugging this problem.
>
>=== Parser found N HPROF dumps in file X. Using dump index 0. See FAQ. ===
>
>This warning message is printed to the log file if the heap dump is written via the (obsolete and unstable) HPROF agent. The agent can write multiple heap dumps into one HPROF file. Memory Analyzer 1.2 and earlier has no UI support to decide which heap dump to read. By default, MAT takes the first heap dump. If you want to read an alternative dump, one has to start MAT with the system property MAT_HPROF_DUMP_NR=<index>.
>
>Memory Analyzer 1.3 provides a dialog for the user to select the appropriate dump.
>
>=== OutOfMemoryError: Requested length of new long[xxxx] exceeds limit of 2,147,483,639 ===
>
>Eclipse MAT currently only supports heap sizes with up to ~2 billion objects, as it uses Java arrays internally when processing the file (which are limited to 2^31 entries).
>
>To work around this, Eclipse MAT supports a setting to "discard" some % of objects so that only a fraction of objects are loaded.
>
>To configure it, follow the recommendation: Consider enabling object discard, see Window > Preferences > Memory Analyzer > Enable discard. Then, you should be able to open the file with Eclipse MAT.
>
>This is useful as it allows loading heaps with many objects. However, it may miss some linkages and object references during processing, so results will be less accurate.
>
>See the [https://help.eclipse.org/latest/topic/org.eclipse.mat.ui.help/tasks/configure_mat.html#task_configure_mat__discard MAT Configuration] help page for more information.
>
>== Enable Debug Output ==
>
>To show debug output of MAT:
>
>1. Create (or append to) the file ".options" in the Eclipse main directory with the lines:
><pre>
>org.eclipse.mat.parser/debug=true
>org.eclipse.mat.report/debug=true
>org.eclipse.mat.dtfj/debug=true
>org.eclipse.mat.dtfj/debug/verbose=true
>org.eclipse.mat.hprof/debug=true
>org.eclipse.mat.hprof/debug/parser=true
></pre>
>Edit this file to remove some lines if you are not interested in output from a particular plug-in.
>
>On macOS, this file should be placed in [https://www.eclipse.org/eclipse/development/readme_eclipse_4.21.php#mozTocId993987 *.app/Contents/MacOS/.options]
>
>2. Start eclipse with the -debug option. This can be done by appending -debug to the eclipse.ini file in the same directory as the .options file.
>
>3. Be sure to also enable the -consoleLog option to actually see the output.
>
>4. If you want to enable debug output for the stand-alone Memory Analyzer, create the options file in the mat directory and start Memory Analyzer using <tt>MemoryAnalyzer -debug -consoleLog</tt>
>
>See [[FAQ_How_do_I_use_the_platform_debug_tracing_facility]] for a general explanation of how the debug trace works in Eclipse.
>
>== Problems Interpreting Results ==
>
>=== MAT Does Not Show the Complete Heap ===
>
>''Symptom:'' When monitoring the memory usage interactively, the used heap size is much bigger than what MAT reports.
>
>During the index creation, the Memory Analyzer removes unreachable objects
>because the various garbage collector algorithms tend to leave some garbage
>behind (if the object is too small, moving and re-assigning addresses is too
>expensive). This should, however, be no more than '''3 to 4 percent'''.
>If you want to know what objects are removed, enable debug output as explained here:
>[[MemoryAnalyzer/FAQ#Enable_Debug_Output]]
>
>Another reason could be that the heap dump was not written properly. Especially older VMs (1.4, 1.5) can have problems if the heap dump is written via jmap.
> >Otherwise, feel free to report a [https://bugs.eclipse.org/bugs/enter_bug.cgi?product=MAT bug]. > >== How to analyse unreachable objects == > >By default, unreachable objects are removed from the heap dump while parsing and will not appear in the class histogram, dominator tree, etc. Yet it is possible to open a histogram of unreachable objects. You can open it: > >1. From the link on the Overview page > >2. From the Query Browser via '''Java Basics --> Unreachable Objects Histogram''' > >This histogram has no object graph behind it (unreachable objects are removed during the parsing of the heap dump; only class names are stored). Thus it is not possible to see e.g. a list of references for a particular unreachable object. > >There is, however, a way to keep unreachable objects while parsing. For this you need to either: >* parse the heap dump from the command line providing the argument '''-keep_unreachable_objects''', e.g. <code>ParseHeapDump.bat -keep_unreachable_objects <heap dump></code> >or >* set the preference using 'Window' > 'Preferences' > 'Memory Analyzer' > 'Keep Unreachable Objects', then parse the dump. Memory Analyzer version 1.1 and later has this preference page option to select keep_unreachable_objects. > >== Crashes on Linux == > >Depending on the type of crash, consider testing with one or more of these options in MemoryAnalyzer.ini: > >* -Dorg.eclipse.swt.browser.XULRunnerPath=/usr/lib/xulrunner-compat/ >** Normally you must first install your distribution's xulrunner-compat package >* -Dorg.eclipse.swt.browser.UseWebKitGTK=true > > >== Extending Memory Analyzer == > >=== Is it possible to extend the Memory Analyzer to analyze the memory consumption of C or C++ programs? === > >No, this is not possible.
The design of the Memory Analyzer is specific to Java heap dumps.</text> > <sha1>hjsx4rc7qi3s9qkva9tkwuzbpplc020</sha1> > </revision> > </page> > <page> > <title>MemoryAnalyzer/Indexes</title> > <ns>0</ns> > <id>61332</id> > <revision> > <id>443335</id> > <parentid>443319</parentid> > <timestamp>2021-05-28T08:26:42Z</timestamp> > <contributor> > <username>Andrew johnson.uk.ibm.com</username> > <id>4755</id> > </contributor> > <model>wikitext</model> > <format>text/x-wiki</format> > <text xml:space="preserve" bytes="2750"> >== Index == >Memory Analyzer uses several indexes to enable access to different parts of the snapshot. > >[[Category:Tools Project]][[Category:Memory Analyzer]] > >; IDENTIFIER : IntToLong - object ID to object address >; O2CLASS : object ID to class ID >; A2SIZE : array object ID (or other non-fixed size object) to encoded size (32-bits) >; INBOUND : object ID to list of object IDs >; OUTBOUND : object ID to list of object IDs >; DOMINATED : Dominated: object id to N dominated object ids >; O2RETAINED : object ID to long >; DOMINATOR : Dominator of: object id to the id of its dominator >; I2RETAINED : cache of size of class, classloader (read/write) > >=== IntIndexReader === >For an index file like O2CLASS, the file is stored as many ArrayIntCompressed pages followed by an index: >[https://help.eclipse.org/2021-03/topic/org.eclipse.mat.ui.help/doc/org/eclipse/mat/parser/index/IndexReader.IntIndexReader.html IntIndexReader] There is a special adjustment to cope with >files of more than 2^31 entries, as that can be needed for 1-to-N files. >On reading there is a SoftReference cache of those ArrayIntCompressed pages. The data is read using a SimpleBufferedRandomAccessInputStream which just has a local buffer.
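The page-and-cache scheme described above can be sketched in plain Java. This is a simplified illustration only, not MAT's actual implementation: the class name, the constructor parameters, and the `decodePage` stand-in for ArrayIntCompressed decoding are all assumptions made for the sketch.

```java
import java.lang.ref.SoftReference;
import java.util.HashMap;
import java.util.Map;

// Simplified sketch of the IntIndexReader idea: values live in fixed-size
// pages which are expensive to decode, so decoded pages are cached behind
// SoftReferences and re-decoded if the GC has reclaimed them.
public class PageCachedIntIndex {
    private final int pageSize;
    private final int[][] compressedPages; // stand-in for on-disk ArrayIntCompressed pages
    private final Map<Integer, SoftReference<int[]>> cache = new HashMap<>();

    public PageCachedIntIndex(int[][] compressedPages, int pageSize) {
        this.compressedPages = compressedPages;
        this.pageSize = pageSize;
    }

    // Stand-in for decoding an ArrayIntCompressed page read from the file.
    private int[] decodePage(int pageNo) {
        return compressedPages[pageNo].clone();
    }

    // A long index copes with index files of more than 2^31 entries.
    public int get(long index) {
        int pageNo = (int) (index / pageSize);
        int offset = (int) (index % pageSize);
        SoftReference<int[]> ref = cache.get(pageNo);
        int[] page = (ref == null) ? null : ref.get();
        if (page == null) { // first access, or the soft reference was cleared
            page = decodePage(pageNo);
            cache.put(pageNo, new SoftReference<>(page));
        }
        return page[offset];
    }
}
```

The SoftReference cache lets the JVM reclaim decoded pages under memory pressure, trading repeat decode cost for a bounded footprint.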
> >=== LongIndexReader === >[https://help.eclipse.org/2021-03/topic/org.eclipse.mat.ui.help/doc/org/eclipse/mat/parser/index/IndexReader.LongIndexReader.html LongIndexReader] is similar (without the adjustment for 2^31 entries). > >=== PositionIndexReader === >[https://help.eclipse.org/2021-03/topic/org.eclipse.mat.ui.help/doc/org/eclipse/mat/parser/index/IndexReader.PositionIndexReader.html PositionIndexReader] is similar (without the adjustment for 2^31 entries). > >=== 1 to N reader === >[https://help.eclipse.org/2021-03/topic/org.eclipse.mat.ui.help/doc/org/eclipse/mat/parser/index/IndexReader.IntIndex1NReader.html IntIndex1NReader] >It has two parts, a body and a header, and a final long of the position of the split in the file. >For an input, use the header to find the position of the start in the body and the header via (index+1) >to find the position of the next entry. >Read the data between the two from the body. > > >== Random Access File caching == > >HPROF uses random access to GZIP-compressed files to read fields and array contents: > >* org.eclipse.mat.hprof.DefaultPositionInputStream >** org.eclipse.mat.parser.io.BufferedRandomAccessInputStream >*** HashMapLongObject >**** [*N] page >***** [*N] SoftReference >****** buffer byte[512] >*** org.eclipse.mat.hprof.CompressedRandomAccessFile >**** org.eclipse.mat.hprof.SeekableStream >***** [*N] org.eclipse.mat.hprof.SeekableStream$PosStream >****** SoftReference >******* org.eclipse.mat.hprof.GZIPInputStream2 >******** io.nayuki.deflate.InflaterInputStream >********* inputBuffer byte[16384] >********* dictionary byte[32768]</text> > <sha1>ligj7ujub8ailyipqd179ogk5mh9qaa</sha1> > </revision> > </page> > <page> > <title>MemoryAnalyzer/Learning Material</title> > <ns>0</ns> > <id>55317</id> > <revision> > <id>447968</id> > <parentid>447967</parentid> > <timestamp>2023-11-10T07:57:06Z</timestamp> > <contributor> > <username>Andrew johnson.uk.ibm.com</username> > <id>4755</id> > </contributor> > <comment>/* Tutorials
*/</comment> > <model>wikitext</model> > <format>text/x-wiki</format> > <text xml:space="preserve" bytes="3356">[[Category:Memory Analyzer]] > >= Tutorials = >[http://www.vogella.com/articles/EclipseMemoryAnalyser/article.html Eclipse Memory Analyzer Tutorial] by Lars Vogel. July 6, 2016 > >[http://memoryanalyzer.blogspot.de/2008/05/automated-heap-dump-analysis-finding.html Automated Heap Dump Analysis: Finding Memory Leaks with One Click] by Krum Tsvetkov, part of the Memory Analyzer [http://memoryanalyzer.blogspot.in/ blog] > >[http://eclipsesource.com/blogs/2013/01/21/10-tips-for-using-the-eclipse-memory-analyzer/ 10 Tips for using the Eclipse Memory Analyzer] by Ian Bull. January 21, 2013 > >[https://vimeo.com/21356498 Java memory analysis using Eclipse Memory Analyzer] by Chris Grindstaff > >[http://kohlerm.blogspot.de/2008/05/analyzing-memory-consumption-of-eclipse.html/ Analyzing the Memory Consumption of Eclipse] by Markus Kohler. > >[http://community.bonitasoft.com/effective-way-fight-duplicated-libs-and-version-conflicting-classes-using-memory-analyzer-tool Detect duplicated/conflicting libs/classes] by Aurelien Pupier > >[https://blogs.sap.com/2007/11/04/analyzing-java-collections-usage-with-memory-analyzer/ Analyzing Java Collections Usage with Memory Analyzer] > >The Memory Analyzer grew up at SAP. Back then, Krum blogged about [https://blogs.sap.com/2007/07/02/finding-memory-leaks-with-sap-memory-analyzer/ Finding Memory Leaks with SAP Memory Analyzer]. The content is still relevant. > >[https://www.ibm.com/developerworks/library/j-memoryanalyzer/ Debugging from dumps - Diagnose more than memory leaks with Memory Analyzer] by Chris Bailey, Andrew Johnson, and Kevin Grigorenko. March 15, 2011 > >[https://publib.boulder.ibm.com/httpserv/cookbook/Major_Tools-Eclipse_Memory_Analyzer_Tool.html IBM WebSphere Application Server Performance Cookbook: Eclipse Memory Analyzer Tool] Kevin Grigorenko. 
> >[https://github.com/IBM/webspherelab/blob/main/WAS_Troubleshooting_Perf_Lab.md#heap-dumps WebSphere Performance and Troubleshooting Lab: Heap Dumps] > >= Presentations = > >[https://www.youtube.com/watch?v=sLoifF_YA4w Eclipse Memory Analyzer Tool Video], San Diego Java User Group, Kevin Grigorenko, June 2019 > >[https://wiki.eclipse.org/images/0/0b/EDKRK2012_MAT.pdf The MAT Tutorial] by Szymon Ptaszkiewicz, Eclipse Day, Kraków, 13 September 2013. > >[http://www.slideshare.net/AJohnson1/extending-eclipse-memory-analyzer Eclipse Summit Europe], Ludwigsburg, 4 November '10 > >[http://www.slideshare.net/AJohnson1/practical-lessons-in-memory-analysis TheServerSide Java Symposium - Europe], Prague, October '09 > >[https://www.slideshare.net/AJohnson1/ps-ts-41183041182301finv1 TS-4118 JavaOne], San Francisco, June '09 > >[http://www.slideshare.net/nayashkova/eclipse-memory-analyzer-presentation-763314 Eclipse Summit Europe] Ludwigsburg, 20 November, '08 > >[https://www.oracle.com/technetwork/systems/ts-5729-159371.pdf Automated Heap Dump Analysis for Developers Testers and Technical Support Employees], TS-5729, JavaOne, San Francisco, May '08 > >[https://www.eclipsecon.org/2008/sub/attachments/Memory_Analysis_Simplified_Automated_Heap_Dump_Analysis_for_Developers_Testers_and_Technical_Support_Employees.pdf EclipseCon], March '08 (Slides) > >[http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/00ca7f0d-8ee6-2910-5d82-fc3e8dd25300 JavaOne][https://docs.huihoo.com/javaone/2007/java-se/TS-21935.pdf TS-21935 JavaOne], San Francisco, May '07</text> > <sha1>g8ymnejcqghqul3ehofeo6seqham70x</sha1> > </revision> > </page> > <page> > <title>MemoryAnalyzer/MAT Capabilities</title> > <ns>0</ns> > <id>25034</id> > <revision> > <id>339551</id> > <parentid>339027</parentid> > <timestamp>2013-06-10T14:22:32Z</timestamp> > <contributor> > <username>Andrew johnson.uk.ibm.com</username> > <id>4755</id> > </contributor> > <minor/> > <model>wikitext</model> > 
<format>text/x-wiki</format> > <text xml:space="preserve" bytes="1235">This document provides some sample [[Eclipse/Capabilities|capability]] definitions for the Memory Analyzer (MAT). It describes: >#Where to find the existing Capabilities plug-in in SVN >#How to implement your own Capabilities for Memory Analyzer > >=== Existing Capabilities Plug-in === > >The plug-in org.eclipse.mat.ui.capabilities contains the Capabilities definitions for Memory Analyzer. > >The plug-in can be found in MAT's SVN source repository: > > * Host: dev.eclipse.org > * Repository Path: /svnroot/tools/org.eclipse.mat > * User: anonymous > * Password: &lt;empty&gt; > * Connection URL (http): http://dev.eclipse.org/svnroot/tools/org.eclipse.mat > * Path (in trunk): /plugins/org.eclipse.mat.ui.capabilities > >=== Capabilities Implementation === > >The code snippet below shows how to turn off Memory Analyzer functionality in the workbench via Capabilities: > > <nowiki> > <extension point="org.eclipse.ui.activities"> > <activity > id="org.eclipse.mat" > name="%activity.name" > description="%activity.description" /> > > <activityPatternBinding > activityId="org.eclipse.mat" > pattern="org\.eclipse\.mat\..*"/> > </extension> > </nowiki> > >[[Category:Memory Analyzer]]</text> > <sha1>32m6qx0zc2yoqz9vamc2pwe9it19zw3</sha1> > </revision> > </page> > <page> > <title>MemoryAnalyzer/OQL</title> > <ns>0</ns> > <id>59194</id> > <revision> > <id>444754</id> > <parentid>444753</parentid> > <timestamp>2022-02-01T13:39:40Z</timestamp> > <contributor> > <username>Andrew johnson.uk.ibm.com</username> > <id>4755</id> > </contributor> > <comment>/* Displaying all the fields of objects */</comment> > <model>wikitext</model> > <format>text/x-wiki</format> > <text xml:space="preserve" bytes="49453">== Object Query Language == > >Object Query Language is an SQL-like language used by Memory Analyzer for exploring a heap dump.
There is documentation in the [https://help.eclipse.org/2019-09/topic/org.eclipse.mat.ui.help/tasks/queryingheapobjects.html help] but this wiki allows newer features to be explained and discussed before the documentation is updated. > >[[Category:Memory Analyzer]] > >Simple examples: > ><syntaxhighlight lang="tsql"> >SELECT * FROM java.lang.String ></syntaxhighlight> > >Displays all String objects as a tree. > ><syntaxhighlight lang="tsql"> >SELECT s as String,s.value as "characters" FROM java.lang.String s ></syntaxhighlight> > >Displays all String objects as a table. > ><syntaxhighlight lang="tsql"> >SELECT s as String,s.value as "characters", inbounds(s),inbounds(s).@length FROM java.lang.String s ></syntaxhighlight> > > <nowiki> >String |characters |inbounds(s)| inbounds(s).@length >-------------------------------------------------------------------------------------------------------------- >java.lang.String [id=0x22e58820]|char[] [id=0x22e60f50;length=16;size=48] |[I@620f7a39| 1 >java.lang.String [id=0x22e59150]|char[] [id=0x22e62ff0;length=6;size=24] |[I@1f7b8d59| 1 >java.lang.String [id=0x22e5b560]|char[] [id=0x22e6b730;length=537;size=1088]|[I@28551755| 1 >-------------------------------------------------------------------------------------------------------------- ></nowiki> > >There are two sorts of objects encountered with OQL: IObjects, which represent Java objects in the snapshot, and regular >Java objects generated by OQL processing. > >''java.lang.String [id=0x22e58820]'' is an IInstance representing a String from the snapshot. > >''char[] [id=0x22e60f50;length=16;size=48]'' is an IPrimitiveArray representing a character array from the snapshot. > >''[I@620f7a39'' is a regular Java integer array holding several ints which are the object IDs Memory Analyzer uses >to represent IObjects in the snapshot.
> >== OQL (Memory Analyzer) versus SQL (MAT/Calcite) == > >As well as the built-in OQL, there is an extension plug-in for MAT called [https://github.com/vlsi/mat-calcite-plugin MAT Calcite] which adds SQL processing > >{| class="wikitable" >|- >!Topic >!OQL >!SQL >|- >|General syntax >|<syntaxhighlight lang="tsql">SELECT s FROM java.lang.String s</syntaxhighlight> >|<syntaxhighlight lang="sql">SELECT s.this FROM "java.lang.String" s</syntaxhighlight> >|- >|Built-in functions >|<syntaxhighlight lang="tsql">SELECT toString(s), classof(s), >s.@objectAddress, s.@usedHeapSize, s.@retainedHeapSize >FROM java.lang.String s </syntaxhighlight> >|<syntaxhighlight lang="sql">SELECT toString(s.this),getType(s.this), >getAddress(s.this),shallowSize(s.this),retainedSize(s.this) >FROM "java.lang.String" s</syntaxhighlight> >|- >|More functions >|<syntaxhighlight lang="tsql">SELECT h, h[0:-1].size(), h.table, >h.table.@length, h.modCount, h.getField("modCount") >FROM java.util.HashMap h</syntaxhighlight> >|<syntaxhighlight lang="sql">SELECT h.this,getSize(h.this),h.this['table'], >length(h.this['table']), h.this['modCount'], getField(h.this,'modCount') >FROM "java.util.HashMap" h</syntaxhighlight> >|- >|Comments >|/* multi-line comment */ >|/* multi-line comment */ >|- >|Single line comment >|// comment >| -- comment >|- >|JOIN >|Simulated by [[#JOIN operations|JOIN Operations]] >|Supported >|- >|LIMIT and OFFSET >|Simulated by [[#LIMIT and OFFSET|LIMIT and OFFSET]] >|Supported >|- >|ORDER BY >|Click on column headers to sort >|Supported >|- >|GROUP BY >|Can be simulated by [[#GROUP BY|GROUP BY]]. >Also 'Java Basics > Group by Value' >query might help. >Also, if the row is backed by an object (the from clause >returned a list of objects) then the 'Group by' menu bar >option allows 'Group by classloader' and 'Group by package'. >|Supported >|- >|COUNT >|Can be simulated by [[#COUNT|COUNT]]. >|Supported >|- >|MAX,MIN >|Not directly supported. 
Could be >simulated by clicking on a column name >and taking the top or bottom value. >|Supported >|- >|AVG,SUM >|Not supported >|Supported >|} > >MAT Calcite (SQL) provides advanced SQL features such as JOIN (INNER JOIN, LEFT JOIN, RIGHT JOIN, FULL JOIN), CROSS JOIN, GROUP BY and ORDER BY. Some can be simulated by OQL with a bit of work. > >== Enhancements in November 2019 == > >Various updates and enhancements have been made to OQL under [https://bugs.eclipse.org/bugs/show_bug.cgi?id=552879 552879: OQL enhancements for sub-selects, maps, context providers, DISTINCT]. These are available from [https://www.eclipse.org/mat/snapshotBuilds.php snapshot builds] for testing and are subject to change. Please comment on the [https://bugs.eclipse.org/bugs/show_bug.cgi?id=552879 bug], [https://www.eclipse.org/forums/eclipse.memory-analyzer forum] or [https://dev.eclipse.org/mailman/listinfo/mat-dev development mailing list]. > >Some of the aims of these changes were to permit more complex queries such as: >* [https://www.eclipse.org/forums/index.php/t/1095354/ Does OQL permit map access] >* [https://www.eclipse.org/forums/index.php/t/1078960/ OQL nested list flattening and filtering] >* [https://www.eclipse.org/forums/index.php/m/1220103 OQL: Need help with query involving object array and sub-select] Find objects with children without references to the parent >* List maps with original map, key and value columns like the collections 'Hash Entries' query. > > >=== SELECT DISTINCT === > >`DISTINCT` used to just operate on the results of a query if it returned objects rather than general select items. > ><syntaxhighlight lang="tsql">SELECT DISTINCT OBJECTS classof(s) FROM "java.lang\.S.*" s</syntaxhighlight> > ><syntaxhighlight lang="tsql">SELECT * FROM OBJECTS "java.lang\.S.*"</syntaxhighlight> > >`DISTINCT` now also operates on SELECTs with select items. It uses the whole row, considered as a list, as the item to be checked for distinctness.
It also uses the optimization that the input FROM items are also considered as being distinct (either as ints/IObjects or more general `FROM` items). > ><syntaxhighlight lang="tsql">SELECT DISTINCT classof(s) FROM "java.lang\.S.*" s</syntaxhighlight> > >=== Sub SELECT with select items === > >sub-selects were permitted where the sub-select returned an object list. > ><syntaxhighlight lang="tsql">SELECT v, v.@length FROM OBJECTS ( SELECT OBJECTS s.value FROM java.lang.String s ) v</syntaxhighlight> > >sub-selects are now also permitted which have select items. > ><syntaxhighlight lang="tsql">SELECT v,v.s,v.val FROM OBJECTS ( SELECT s,s.value as val FROM java.lang.String s ) v</syntaxhighlight> > ><pre> >Row |Object |Array >-------------------------------------------------------------------------------------------------------------------------------------------------------------- >{s=java.lang.String [id=0x26ba8a30], val=char[] [id=0x26ba8a48;length=14;size=40]} |java.lang.String [id=0x26ba8a30]|char[] [id=0x26ba8a48;length=14;size=40] >{s=java.lang.String [id=0x26ba8998], val=char[] [id=0x26ba89b0;length=56;size=128]}|java.lang.String [id=0x26ba8998]|char[] [id=0x26ba89b0;length=56;size=128] >{s=java.lang.String [id=0x26ba8160], val=char[] [id=0x26ba8178;length=56;size=128]}|java.lang.String [id=0x26ba8160]|char[] [id=0x26ba8178;length=56;size=128] >{s=java.lang.String [id=0x26b9d390], val=char[] [id=0x26b9d3a8;length=15;size=48]} |java.lang.String [id=0x26b9d390]|char[] [id=0x26b9d3a8;length=15;size=48] >{s=java.lang.String [id=0x26b9d358], val=char[] [id=0x26b9d370;length=8;size=32]} |java.lang.String [id=0x26b9d358]|char[] [id=0x26b9d370;length=8;size=32] >-------------------------------------------------------------------------------------------------------------------------------------------------------------- ></pre> > >The outer select processes the result of the sub-select row by row, with a single RowMap `Map` object representing the row. 
The key/value pairs are the sub-select items with the sub-select column names as the keys. If the keys are standard identifiers, i.e. generally alpha-numeric, then attribute processing such as `v.s` can be used rather than `v.get("s")`, which can still be used, perhaps for column names with spaces. > >The whole sub-select continues to return a `CustomTableResultSet` which is an `IResultTable` but this has been enhanced to also be a `List` of `RowMap` items. It is quite hard to operate in OQL on the whole result, as, if it is supplied to an outer select, the `Iterable` nature means it will be processed row by row. > ><syntaxhighlight lang="tsql">SELECT * FROM OBJECTS ( SELECT s, s.value AS val FROM java.lang.String s ) v</syntaxhighlight> ><pre> >[{s=java.lang.String [id=0x26ba8a30], val=char[] [id=0x26ba8a48;length=14;size=40]}, {s=java.lang.String [id=0x26ba8998], val=char[] [id=0x26ba89b0;length=56;size=128]}, {s=java.lang.String [id=0x26ba8160], val=char[] [id=0x26ba8178;length=56;size=128]}, {s=java.lang.String [id=0x26b9d390], val=char[] [id=0x26b9d3a8;length=15;size=48]}, {s=java.lang.String [id=0x26b9d358], val=char[] [id=0x26b9d370;length=8;size=32]}, {s=java.lang.String [id=0x26b9d318], val=char[] [id=0x26b9d330;length=11;size=40]}, {s=java.lang.String [id=0x26b9d2e8], val=char[] [id=0x26b9d300;length=4;size=24]}, {s=java.lang.String [id=0x26b9c758], val=char[] [id=0x26b9c770;length=21;size=56]}, {s=java.lang.String [id=0x26b9c6c8], val=char[] [id=0x26b9c6e0;length=13;size=40]}, {s=java.lang.String [id=0x26b9c690], val=char[] [id=0x26b9c6a8;length=8;size=32]}, ... ></pre> > >This shows the whole table as a list. > >==== LIMIT and OFFSET ==== > >SQL has LIMIT and OFFSET to choose only some of the items from the FROM clauses. This can be simulated in OQL.
> ><syntaxhighlight lang="tsql">SELECT eval((SELECT * FROM OBJECTS ( SELECT s, s.value AS val FROM java.lang.String s ) v))[3] FROM OBJECTS 0</syntaxhighlight> ><pre> >eval((SELECT * FROM OBJECTS ( SELECT s, s.value AS val FROM java.lang.String s ) v ))[3] >----------------------------------------------------------------------------------------- >{s=java.lang.String [id=0x26b9d390], val=char[] [id=0x26b9d3a8;length=15;size=48]} >----------------------------------------------------------------------------------------- ></pre> >Processes the whole table as a select item. > >This could be used to simulate SQL LIMIT and OFFSET clauses. > ><syntaxhighlight lang="tsql">SELECT z.s FROM OBJECTS ( eval((SELECT s FROM "java.lang.String" s ))[10:29] ) z</syntaxhighlight> > >This extracts 20 entries, skipping the first 10. Note the array slice processing, with the start and end offsets as 0-based but inclusive. > >Compare with MAT Calcite (SQL) > ><syntaxhighlight lang="sql">SELECT s.this from "java.lang.String" s limit 10 offset 20</syntaxhighlight> > >=== Context Menu for object columns === >If a column appears to hold heap objects, or lists or arrays of heap objects, then the context menu now offers a choice to process that column's item of the selected rows. > ><syntaxhighlight lang="tsql">SELECT s AS String, s.value AS "Char array", inbounds(s) AS Inbounds FROM java.lang.String s</syntaxhighlight> ><pre> >String |Char array |Inbounds >-------------------------------------------------------------------------------------- >java.lang.String [id=0x26ba8a30]|char[] [id=0x26ba8a48;length=14;size=40] |[I@6ad112de >java.lang.String [id=0x26ba8998]|char[] [id=0x26ba89b0;length=56;size=128]|[I@18a0721b >java.lang.String [id=0x26ba8160]|char[] [id=0x26ba8178;length=56;size=128]|[I@2ae2fa13 >-------------------------------------------------------------------------------------- ></pre> > >Context Menu: >;SELECT ... s >:The entire row - based on the underlying object s. 
Copy OQL: <syntaxhighlight lang="tsql">SELECT s AS String, s.value AS "Char array", inbounds(s) AS Inbounds FROM OBJECTS 20798,20796,20793 s</syntaxhighlight> >;String >: Just the String item in column 'String'. Copy OQL: <syntaxhighlight lang="tsql">SELECT s AS String FROM OBJECTS 20798,20796,20793 s</syntaxhighlight> >;Char array >: Just the char array. Copy OQL: <syntaxhighlight lang="tsql">SELECT s.value AS "Char array" FROM OBJECTS 20798,20796,20793 s</syntaxhighlight> >;Inbounds >: All the inbounds as heap objects. Copy OQL: <syntaxhighlight lang="tsql">SELECT inbounds(s) AS Inbounds FROM OBJECTS 20798,20796,20793 s</syntaxhighlight> > > >The context menu has a `Copy > OQL Query` option which returns an OQL query representing the selected rows and the appropriate column. > >The report plug-in which converts result tables to HTML now uses the context menu name to match with the table column to >put HTML links in the correct place across the columns in the table rather than always in the first column. This also applies for >other queries, so the system properties query when used in a report has in-place links for keys and values. > >'''Question - should the context menu appear for all columns, in case a column has a heap object in rows other than the first? The context menu would then appear for non-object columns holding strings or numeric values. Is it confusing to offer a context menu for those, when no queries (apart from Copy Selection) can do anything?''' > >=== Map processing === > >Map heap objects in the heap dump can now be accessed using array notation, returning Map.Entry items. Previously array access returned any Map.Entry heap objects in the heap dump for the map, which could then be used to find the key and value via the `key` and `value` fields. Not all maps have entry objects, so the new system means that `getKey()` and `getValue()` can be used to access the keys and values.
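The `getKey()`/`getValue()` pair mirrors the ordinary `java.util.Map.Entry` accessors. For reference, plain Java (not OQL; the class and method names here are illustrative only) reads entries the same way:

```java
import java.util.Map;

// Plain-Java analogue of the entry access described above: iterate the
// entries of a map and read each key and value via getKey()/getValue().
public class EntryAccessDemo {
    public static String render(Map<String, String> map) {
        StringBuilder sb = new StringBuilder();
        for (Map.Entry<String, String> e : map.entrySet()) {
            sb.append(e.getKey()).append('=').append(e.getValue()).append(';');
        }
        return sb.toString();
    }
}
```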
> >'''Question: when array access returns map entry objects, should those objects have a fake object ID of the actual map heap object, or of the Map.Entry heap object if one was available? This affects all collection extraction of maps, not just OQL. See [https://help.eclipse.org/2019-09/topic/org.eclipse.mat.ui.help/doc/org/eclipse/mat/inspections/collectionextract/IMapExtractor.EntryObject.html]''' > ><syntaxhighlight lang="tsql">SELECT h AS map, (SELECT e.getKey() AS key, e.getValue() AS value FROM OBJECTS ${h}[0:-1] e ) AS kv FROM java.util.HashMap h WHERE (h[0:-1].size() > 0)</syntaxhighlight> > ><pre> >map |kv >-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- >java.util.HashMap [id=0x22f49970]|[{key=java.lang.String [id=0x22f44658], value=java.lang.String [id=0x22f44670]}, {key=java.lang.String [id=0x22f44688], value=java.lang.String [id=0x22f446a0]}] >java.util.HashMap [id=0x22f49948]|[{key=java.lang.String [id=0x22f44628], value=java.lang.String [id=0x22f44640]}] >java.util.HashMap [id=0x22f49920]|[{key=java.lang.String [id=0x22f445c8], value=java.lang.String [id=0x22f445e0]}, {key=java.lang.String [id=0x22f445f8], value=java.lang.String [id=0x22f44610]}] >-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ></pre> > >Extracts the map and a list of key / values pairs. > >'''Question: Should MAT use the select clause or the AS column name as the key for the RowMap and subsequent access, or should it just use a simple name if given as part of the select, and a generated term such as `Expr1000` or `EXPR$0` (Apache Calcite) if more complex? 
Is an autogenerated term required in the case of duplicated select items or column names?''' > >==== Flattening ==== > >The second column is a list of key/value pairs. It would be nice to process these further, and auto-flattening is one way to achieve this. If a sub-select returns a RowMap containing values which are lists or arrays, then auto-flattening splits that RowMap into multiple RowMaps, one for each entry of the list or array. Other objects are just repeated in the RowMap. If there are multiple lists or arrays of different lengths then items beyond the end of the list or array are replaced by null. > >'''Question: This is very experimental and is intended to be a basic alternative to SQL JOIN. Is auto-flattening of sub-selects the way to go, or should it operate on FROM method calls as well, or is an alternative needed? Should there be a `flatten((SELECT ...))` function, or a `flatten()` method on the result table?''' > >'''Question: With OQL, a SELECT used as a SELECT item can return multiple rows or columns. SQL expects at most one row and one column. Should a SELECT with one column be dequalified? E.g. <syntaxhighlight lang="tsql">SELECT z, z.st.v FROM OBJECTS ( SELECT (SELECT s.value AS v FROM java.lang.String s ) AS st, t FROM java.lang.Long t ) z</syntaxhighlight> the z.st.v needs several levels of qualification. As there is just one column, should this be just 'z.st'?'''
> >The aim is to achieve a result such as ><pre> >Map |Key |Value >---------------------------------------------------------------------------------------------------------------------------------------------------------------- >java.util.HashMap [id=0xe35e96a8]|java.lang.String [id=0xe35ef478]|com.sun.management.internal.PlatformMBeanProviderImpl$4 [id=0xe35cbf68] >java.util.HashMap [id=0xe35e96a8]|java.lang.String [id=0xe35e96d8]|jdk.management.jfr.internal.FlightRecorderMXBeanProvider$SingleMBeanComponent [id=0xe35e98b0] >java.util.HashMap [id=0xe35ce190]|java.lang.String [id=0xe35c1950]|java.lang.Object [id=0xe0c145f0] >java.util.HashMap [id=0xe35ce190]|java.lang.String [id=0xe35c1900]|java.lang.Object [id=0xe0c145f0] >java.util.HashMap [id=0xe35cbde0]|java.lang.String [id=0xe35cbc70]|java.lang.Object [id=0xe0c145f0] >---------------------------------------------------------------------------------------------------------------------------------------------------------------- ></pre> > > >#<syntaxhighlight lang="tsql">SELECT z.map as Map, z.kv.key as Key, z.kv.value as Value FROM OBJECTS ( SELECT h AS map, (SELECT e.getKey() AS key, e.getValue() AS value FROM OBJECTS ${h}[0:-1] e ) AS kv FROM java.util.HashMap h WHERE (h[0:-1].size() > 0) ) z</syntaxhighlight> >#<syntaxhighlight lang="tsql">SELECT z.map AS Map, z.kv.key AS Key, z.kv.value AS Value FROM OBJECTS (eval(( SELECT h AS map, (SELECT e.getKey() AS key, e.getValue() AS value FROM OBJECTS ${h}[0:-1] e ) AS kv FROM java.util.HashMap h WHERE (h[0:-1].size() > 0))) ) z</syntaxhighlight> >#<syntaxhighlight lang="tsql">SELECT z.map AS Map, z.kv.key AS Key, z.kv.value AS Value FROM OBJECTS (flatten((SELECT h AS map, (SELECT e.getKey() AS key, e.getValue() AS value FROM OBJECTS ${h}[0:-1] e ) AS kv FROM java.util.HashMap h WHERE (h[0:-1].size() > 0))) ) z</syntaxhighlight> >#<syntaxhighlight lang="tsql">SELECT z.map AS Map, z.kv.key AS Key, z.kv.value AS Value FROM OBJECTS (eval(( SELECT h AS map, (SELECT 
e.getKey() AS key, e.getValue() AS value FROM OBJECTS ${h}[0:-1] e ) AS kv FROM java.util.HashMap h WHERE (h[0:-1].size() > 0)).flatten()) ) z</syntaxhighlight> > >'''Question: How should flattening handle lists or arrays with no items? Should the row be omitted, or should a row with no entries for the array be generated as a null for the item, or should the empty array be left unchanged? It is hard to add back a row later, but it is possible to filter rows with null or an empty list. It is hard to tell an empty list/array from one which contains one entry which is the same as the default value OQL chose (null or an empty list/array).''' > > >Here is another example of flattening: a query to see if any child of a parent does not have a back reference to the parent. > ><syntaxhighlight lang="tsql">SELECT group AS Group, thread AS Thread >FROM OBJECTS ( SELECT t AS group, t.threads[0:-1] AS thread FROM java.lang.ThreadGroup t ) >WHERE ((thread != null) and (thread.group != group))</syntaxhighlight> > >This selects all the java.lang.ThreadGroup objects and then generates rows with two columns, the group and a list of the child threads. This is then flattened to rows of the group and a single child thread, where the select then checks for a non-null child Thread and a child which does not point back to the ThreadGroup. Note here the omission of the alias name before the 'WHERE' as it is not necessary - 'group' and 'thread' in the outer select do not need to be qualified with an alias name.
># Then the select items clause only chooses objects which match the current GROUP BY value. ># Then the results are converted to an object list, which appears in the columns as an int[] array, which can then be used from the context menu. > >Another example grouping by number of inbounds: > ><syntaxhighlight lang="tsql">SELECT s.sz AS Size, >(SELECT OBJECTS m FROM INSTANCEOF java.lang.Object m WHERE (inbounds(m).@length = s.sz)) AS Objects >FROM OBJECTS ( SELECT DISTINCT inbounds(h).@length AS sz FROM INSTANCEOF java.lang.Object h ) s</syntaxhighlight> > >=== COUNT === > >SQL COUNT can be simulated in the following fashion using the @length attribute on arrays, size() on a list, or by converting an array to a list and then using size(). > ><syntaxhighlight lang="tsql">SELECT z.size AS Size, >z.maps AS Maps, >z.maps.@length AS "Count", >z.maps[0:-1].size() AS "Count (another way)" >FROM OBJECTS ( eval(( >SELECT >s.sz AS size, >(SELECT OBJECTS m FROM java.util.HashMap m WHERE (m[0:-1].size() = s.sz)) AS maps >FROM OBJECTS ( SELECT DISTINCT h[0:-1].size() AS sz FROM java.util.HashMap h ) s >)) ) z</syntaxhighlight> > ># This first obtains a list of things to group by, which here is a list of sizes. ># The sizes are then returned to the next phase as a sub-select. ># Then the select items clause only chooses objects which match the current GROUP BY value. ># Then the results are converted to an object list, which appears in the columns as an int[] array. ># The select is then wrapped by an eval() so that the outer select does not flatten it. ># The outer select then generates the result, with the size, the maps, and two ways of counting the elements in the map array, once using @length and once using size().
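For comparison only, the group-and-count result that the steps above simulate corresponds to a plain-Java streams idiom. This is an analogy, not something MAT executes; the class and method names are illustrative:

```java
import java.util.List;
import java.util.Map;
import java.util.function.Function;
import java.util.stream.Collectors;

// Plain-Java analogue of the simulated GROUP BY ... COUNT above:
// group a list of sizes and count how many items share each size.
public class GroupCountDemo {
    public static Map<Integer, Long> countBySize(List<Integer> sizes) {
        return sizes.stream()
                .collect(Collectors.groupingBy(Function.identity(), Collectors.counting()));
    }
}
```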
>
>Another example:
>
><syntaxhighlight lang="tsql">SELECT z.size AS Size,
>z.objects AS Objects,
>z.objects.@length AS "Count",
>z.objects[0:-1].size() AS "Count (another way)"
>FROM OBJECTS ( eval((
>SELECT s.sz AS size,
>(SELECT OBJECTS m FROM INSTANCEOF java.lang.Object m WHERE (inbounds(m).@length = s.sz)) AS objects
>FROM OBJECTS ( SELECT DISTINCT inbounds(h).@length AS sz FROM INSTANCEOF java.lang.Object h ) s
>)) ) z</syntaxhighlight>
>
>=== JOIN operations ===
>
>OQL does not have SQL-style JOIN operations apart from UNION. With flattening it is possible to simulate some of these operations, but the statements required are more complex.
>
>Consider a JOIN on <code>java.lang.Integer</code> and <code>java.lang.Long</code> based on the value fields of both.
>
>==== CROSS JOIN ====
>
>This operation generates every combination of the left and right tables (sets of objects), so it can generate a huge result table.
>
><syntaxhighlight lang="tsql">SELECT z.i AS Integer, z.i.value AS "Integer value", z.lv.l AS Long, z.lv.l.value as "Long value"
>FROM OBJECTS ( SELECT i, (SELECT l FROM java.lang.Long l ) AS lv FROM java.lang.Integer i ) z</syntaxhighlight>
>
>This selects all the <code>java.lang.Integer</code> heap objects, then for each Integer heap object generates a row with the object and a list of all the java.lang.Long objects, then flattens the rows. Each flattened row has one Integer heap object (accessed via 'z.i' or 'i') and one Long heap object (accessed via 'z.lv.l' or 'lv.l'). The objects in one row do not necessarily match in value.
>
>Compare to MAT Calcite (SQL):
><syntaxhighlight lang="sql">select i.this,i.this['value'] as "Integer value", l.this,l.this['value'] as "Long value"
>from "java.lang.Integer" i CROSS JOIN "java.lang.Long" l</syntaxhighlight>
>
>==== LEFT JOIN / LEFT OUTER JOIN ====
>
>This operation generates every row from the left table (set of objects) and includes in that row any corresponding row from the right table.
The result table is the same size as the left.
>
><syntaxhighlight lang="tsql">SELECT z.i AS Integer, z.i.value AS "Integer value", z.lv.l AS Long, z.lv.l.value as "Long value"
>FROM OBJECTS ( SELECT i, (SELECT l FROM java.lang.Long l WHERE (l.value = i.value)) AS lv FROM java.lang.Integer i ) z</syntaxhighlight>
>
>This selects all the <code>java.lang.Integer</code> objects, then for each Integer generates a row with the object and a list of all the java.lang.Long objects with the same value, then flattens the rows. Each flattened row has one Integer heap object (accessed via 'z.i' or 'i') and possibly one matching Long heap object or null (accessed via 'z.lv.l' or 'lv.l').
>
>Compare to MAT Calcite (SQL):
><syntaxhighlight lang="sql">select i.this,i.this['value'] as "Integer value", l.this,l.this['value'] as "Long value"
>from "java.lang.Integer" i LEFT JOIN "java.lang.Long" l on i.this['value']+0 = l.this['value']+0</syntaxhighlight>
>
>==== INNER JOIN ====
>
>This operation generates a row for each pair of matching rows from the left table (set of objects) and the right table (set of objects); rows without a match are excluded.
>
><syntaxhighlight lang="tsql">SELECT z.i AS Integer, z.i.value AS "Integer value", z.lv.l AS Long, z.lv.l.value as "Long value"
>FROM OBJECTS ( SELECT i, (SELECT l FROM java.lang.Long l WHERE (l.value = i.value)) AS lv FROM java.lang.Integer i ) z
>WHERE (z.lv != null)</syntaxhighlight>
>
>This selects all the <code>java.lang.Integer</code> objects, then for each Integer generates a row with the object and a list of all the java.lang.Long objects with the same value, then flattens the rows and excludes any row without a java.lang.Long value.
>
><syntaxhighlight lang="tsql">SELECT z.iv.i AS Integer, z.iv.i.value AS "Integer value", z.l AS Long, z.l.value as "Long value"
>FROM OBJECTS ( SELECT (SELECT i FROM java.lang.Integer i WHERE (i.value = l.value)) AS iv, l FROM java.lang.Long l ) z
>WHERE (z.iv != null)</syntaxhighlight>
>
>This selects all the <code>java.lang.Long</code> objects, then for each Long generates a row with the object and a list of all the java.lang.Integer objects with the same value, then flattens the rows and excludes any row without a java.lang.Integer value. Each flattened row has one Integer heap object (accessed via 'z.iv.i' or 'iv.i') and one Long heap object (accessed via 'z.l' or 'l') which match by the 'WHERE (i.value = l.value)' clause.
>
>Compare to MAT Calcite (SQL):
><syntaxhighlight lang="sql">select i.this,i.this['value'] as "Integer value", l.this,l.this['value'] as "Long value"
>from "java.lang.Integer" i INNER JOIN "java.lang.Long" l on i.this['value']+0 = l.this['value']+0</syntaxhighlight>
>
>==== RIGHT JOIN / RIGHT OUTER JOIN ====
>
>This operation generates every row from the right table (set of objects) and includes in that row any matching row from the left table. The result table is the same size as the right.
>
><syntaxhighlight lang="tsql">SELECT z.iv.i AS Integer, z.iv.i.value AS "Integer value", z.l AS Long, z.l.value as "Long value"
>FROM OBJECTS ( SELECT (SELECT i FROM java.lang.Integer i WHERE (i.value = l.value)) AS iv, l FROM java.lang.Long l ) z</syntaxhighlight>
>
>This selects all the <code>java.lang.Long</code> objects, then for each Long generates a row with the object and a list of all the java.lang.Integer objects with the same value, then flattens the rows. Each flattened row has one Long heap object (accessed via 'z.l' or 'l') and possibly one matching Integer heap object or null (accessed via 'z.iv.i' or 'iv.i').
>
>Compare to MAT Calcite (SQL):
><syntaxhighlight lang="sql">select i.this,i.this['value'] as "Integer value", l.this,l.this['value'] as "Long value"
>from "java.lang.Integer" i RIGHT JOIN "java.lang.Long" l on i.this['value']+0 = l.this['value']+0</syntaxhighlight>
>
>==== FULL OUTER JOIN ====
>
>This operation generates a row for every row from the left table (set of objects) and every row from the right table (set of objects); when a row from the left table has the same value as a row from the right table, the two are included in the same output row. The result table is at least as big as the bigger of the left table and the right table.
>
><syntaxhighlight lang="tsql">SELECT z.i AS Integer, z.i.value AS "Integer value", z.lv.l AS Long, z.lv.l.value as "Long value"
>FROM OBJECTS ( SELECT i, (SELECT l FROM java.lang.Long l WHERE (l.value = i.value)) AS lv FROM java.lang.Integer i ) z
>UNION (
>SELECT z.iv.i AS Integer, z.iv.i.value AS "Integer value", z.l AS Long, z.l.value as "Long value"
>FROM OBJECTS ( SELECT (SELECT i FROM java.lang.Integer i WHERE (i.value = l.value)) AS iv, l FROM java.lang.Long l ) z
>WHERE (z.iv = null)
>)</syntaxhighlight>
>
>This does a [[MemoryAnalyzer/OQL#LEFT_JOIN_.2F_LEFT_OUTER_JOIN|LEFT JOIN / LEFT OUTER JOIN]] then combines the rows with a list of all java.lang.Long objects which do not have a corresponding java.lang.Integer object. Each row contains either an Integer heap object or a Long heap object or both.
>
>Compare to MAT Calcite (SQL):
><syntaxhighlight lang="sql">select i.this,i.this['value'] as "Integer value", l.this,l.this['value'] as "Long value"
>from "java.lang.Integer" i FULL OUTER JOIN "java.lang.Long" l on i.this['value']+0 = l.this['value']+0</syntaxhighlight>
>
>=== Bug fixes ===
>
>* When a union query contained a select which returned no items, that select was omitted from the command window.
><syntaxhighlight lang="tsql">SELECT s FROM java.lang.String s UNION (SELECT s FROM java.lang.Missing s)</syntaxhighlight> this was then redisplayed as <syntaxhighlight lang="tsql">SELECT s FROM java.lang.String s</syntaxhighlight>
>* Context dependency fix. OQL processing optimizes some queries by detecting that some parts are not context dependent and will evaluate the same each time, so can just be evaluated once. This processing was not correct for sub-selects. Now a sub-select in a select item will be correctly re-evaluated if required.
>* Progress monitoring has been improved so the progress monitor bar graph better shows how much more work needs to be done to complete a query. Also, cancelling a long running OQL query works more swiftly.
>
>== Writing Queries ==
>
>Writing queries can be a bit of an art, with some trial and error required.
>For example, consider this problem:
>
>"I would like to find all the unreachable objects of a particular type."
>
>Normally, Memory Analyzer discards unreachable objects and the only things visible are some totals in the
>unreachable objects histogram. This is not useful for OQL, so we need the <code>-keep_unreachable_objects</code> option. The unreachable objects will then not be discarded, but will be
>retained by some artificially inserted <code>UNREACHABLE_OBJECT</code> garbage collection roots.
>
>Let us try some examples:
>
>The <code>snapshot</code> has some interesting methods, including <code>getGCRoots()</code>.
>
><syntaxhighlight lang="tsql">
>SELECT * FROM OBJECTS ${snapshot}.getGCRoots() r
></syntaxhighlight>
>
>As it is a no-argument method starting with 'get' we can access it as a bean attribute.
>
><syntaxhighlight lang="tsql">
>SELECT * FROM OBJECTS ${snapshot}.@GCRoots r
></syntaxhighlight>
>
><pre>[21800, 21801, 21802, 21803, 21804, 21805,</pre>
>This returns an array of integers - the MAT object IDs.
>
>Treat them as values:
><syntaxhighlight lang="tsql">
>SELECT r FROM OBJECTS ${snapshot}.@GCRoots r
></syntaxhighlight>
><pre>
> r
>--------
> 21,800
> 21,801
> 21,802
>--------
></pre>
>Treat them as objects:
><syntaxhighlight lang="tsql">
>SELECT OBJECTS r FROM OBJECTS ${snapshot}.@GCRoots r
></syntaxhighlight>
><pre>
>Class Name | Shallow Heap | Retained Heap
>-------------------------------------------------------------------------------------------------
>class java.lang.IllegalArgumentException @ 0x2b75f188 System Class| 104 | 104
>class java.lang.NumberFormatException @ 0x2b75f1e8 System Class | 104 | 104
>class java.text.CharacterIterator @ 0x2b760648 System Class | 104 | 104
>-------------------------------------------------------------------------------------------------
></pre>
>This gives all the GC roots, but we only want the unreachable roots.
>
>The snapshot getGCRootInfo(int id) method might help. Let us try it out using an object ID
>above that is a GC root.
>
><syntaxhighlight lang="tsql">
>SELECT * FROM OBJECTS ${snapshot}.getGCRootInfo(21800) t
></syntaxhighlight>
>
><pre>
>[org.eclipse.mat.parser.model.XGCRootInfo@2b963bbc]
></pre>
>so it returns an array of XGCRootInfo.
>
>Let OQL look at each element:
><syntaxhighlight lang="tsql">
>SELECT t FROM OBJECTS ${snapshot}.getGCRootInfo(21800) t
></syntaxhighlight>
><pre>
>t
>-------------------------------------------------
>org.eclipse.mat.parser.model.XGCRootInfo@2b963bbc
>-------------------------------------------------
></pre>
>Using the MAT API reference we can see that this type has a <code>getType()</code> method, so
>we can access it as a bean attribute.
><syntaxhighlight lang="tsql">
>SELECT t,t.@type FROM OBJECTS ${snapshot}.getGCRootInfo(21800) t
></syntaxhighlight>
><pre>
>t | t.@type
>-----------------------------------------------------------
>org.eclipse.mat.parser.model.XGCRootInfo@2b963bbc| 2
>-----------------------------------------------------------
></pre>
>Using the API reference we can see that <code>GCRootInfo.Type.UNREACHABLE</code> has a value of 2048,
>so we can select just the GCRootInfo objects with that value as:
><syntaxhighlight lang="tsql">
>SELECT t,t.@type FROM OBJECTS ${snapshot}.getGCRootInfo(21800) t WHERE t.@type = 2048
></syntaxhighlight>
>This does not return anything as object ID 21800 is a SYSTEM root. However, we can use it to
>select from all the GC roots, relying on this sub-SELECT clause being null if it does not find an
>UNREACHABLE root. We can simplify the select item to <code>*</code> as it is not important.
>
><syntaxhighlight lang="tsql">
>SELECT OBJECTS r FROM OBJECTS ${snapshot}.getGCRoots() r
> WHERE (SELECT * FROM OBJECTS ${snapshot}.getGCRootInfo(r) t WHERE t.@type = 2048) != null
></syntaxhighlight>
><pre>
>Class Name | Shallow Heap | Retained Heap
>-----------------------------------------------------------------------------------
>int[11] @ 0x22ef4c58 Unreachable | 56 | 56
>int[9] @ 0x22ed2ad8 Unreachable | 48 | 48
>int[7] @ 0x22ed2b28 Unreachable | 40 | 40
>int[17] @ 0x22ed3150 Unreachable | 80 | 80
>java.lang.ref.SoftReference @ 0x22f52cd8 Unreachable| 32 | 400
>-----------------------------------------------------------------------------------
></pre>
>This is looking promising - we have a list of objects, all of which are UNREACHABLE GC roots.
>We now need the retained set to find everything that is normally discarded, but is now only
>retained via these artificial roots.
><syntaxhighlight lang="tsql">
>SELECT AS RETAINED SET r FROM OBJECTS ${snapshot}.getGCRoots() r
> WHERE (SELECT * FROM OBJECTS ${snapshot}.getGCRootInfo(r) t WHERE t.@type = 2048) != null
></syntaxhighlight>
><pre>
>r
>--------------------------------------------------------------------
>
>java.lang.Class [id=0x26970198;name=java.lang.Throwable[]]
>java.lang.Class [id=0x26970370;name=java.lang.Error[]]
>java.lang.Class [id=0x26970558;name=java.lang.VirtualMachineError[]]
>--------------------------------------------------------------------
></pre>
>Part of the output is shown above.
>Alternatively, to see them as a tree rather than a table:
><syntaxhighlight lang="tsql">
>SELECT AS RETAINED SET * FROM OBJECTS ${snapshot}.getGCRoots() r
> WHERE (SELECT * FROM OBJECTS ${snapshot}.getGCRootInfo(r) t WHERE t.@type = 2048) != null
></syntaxhighlight>
><pre>
>Class Name | Shallow Heap | Retained Heap
>-------------------------------------------------------------------------------------------------
>int[33] @ 0x22e5be20 Unreachable | 144 | 144
>int[27] @ 0x22e5bed0 Unreachable | 120 | 120
>int[17] @ 0x22e5bf60 Unreachable | 80 | 80
>sun.reflect.NativeConstructorAccessorImpl @ 0x22e5c5d8 Unreachable| 24 | 184
>sun.reflect.NativeConstructorAccessorImpl @ 0x22e5c5f0 Unreachable| 24 | 184
>-------------------------------------------------------------------------------------------------
></pre>
>If we are just interested in the types then the 'Show as Histogram' button in the top right of the toolbar will show the totals by class. If we are interested in the subclasses then the 'Group by superclass' option does that.
>
>However, we might want to do it entirely by OQL - for example as part of another query or in batch mode.
>We now need to choose just the objects of the type we are interested in. There are
>several ways.
># We could list all the objects of the type we are possibly interested in, and keep those in the retained set.
># We could look at each of the retained objects and see if each one is in the list of objects we are interested in.
># We could look at each of the retained objects, find its class, then its class name and see if the class name matches the one we are interested in.
>
>First, a query for all the objects of the type we are interested in:
>
><syntaxhighlight lang="tsql">
>SELECT * FROM java.util.ArrayList o
></syntaxhighlight>
>
>This can then be combined with the retained set query:
>
><syntaxhighlight lang="tsql">
>SELECT * FROM java.util.ArrayList o WHERE o in
>(SELECT AS RETAINED SET * FROM OBJECTS ${snapshot}.getGCRoots() r
> WHERE (SELECT * FROM OBJECTS ${snapshot}.getGCRootInfo(r) t WHERE t.@type = 2048) != null)
></syntaxhighlight>
><pre>
>Class Name | Shallow Heap | Retained Heap
>---------------------------------------------------------------
>java.util.ArrayList @ 0x22efe620| 24 | 320
>java.util.ArrayList @ 0x22efe608| 24 | 320
>java.util.ArrayList @ 0x22efe5a8| 24 | 320
>java.util.ArrayList @ 0x22efcfd8| 24 | 384
>java.util.ArrayList @ 0x22efb998| 24 | 320
>java.util.ArrayList @ 0x22efb928| 24 | 320
>---------------------------------------------------------------
></pre>
>
>or
>
><syntaxhighlight lang="tsql">
>SELECT * FROM OBJECTS (SELECT AS RETAINED SET * FROM OBJECTS ${snapshot}.getGCRoots() r
> WHERE (SELECT * FROM OBJECTS ${snapshot}.getGCRootInfo(r) t WHERE t.@type = 2048) != null) u
> WHERE u in (SELECT * FROM java.util.ArrayList o)
></syntaxhighlight>
><pre>
>Class Name | Shallow Heap | Retained Heap
>---------------------------------------------------------------
>java.util.ArrayList @ 0x22efb928| 24 | 320
>java.util.ArrayList @ 0x22efb998| 24 | 320
>java.util.ArrayList @ 0x22efcfd8| 24 | 384
>java.util.ArrayList @ 0x22efe5a8| 24 | 320
>java.util.ArrayList @ 0x22efe608| 24 | 320
>java.util.ArrayList @ 0x22efe620| 24 | 320
>---------------------------------------------------------------
></pre>
>
>or
>
><syntaxhighlight lang="tsql">
>SELECT * FROM OBJECTS (SELECT AS RETAINED SET * FROM OBJECTS ${snapshot}.getGCRoots() r
> WHERE (SELECT * FROM OBJECTS
${snapshot}.getGCRootInfo(r) t WHERE t.@type = 2048) != null) u
> WHERE u.@clazz.@name = "java.util.ArrayList"
></syntaxhighlight>
><pre>
>Class Name | Shallow Heap | Retained Heap
>---------------------------------------------------------------
>java.util.ArrayList @ 0x22efb928| 24 | 320
>java.util.ArrayList @ 0x22efb998| 24 | 320
>java.util.ArrayList @ 0x22efcfd8| 24 | 384
>java.util.ArrayList @ 0x22efe5a8| 24 | 320
>java.util.ArrayList @ 0x22efe608| 24 | 320
>java.util.ArrayList @ 0x22efe620| 24 | 320
>---------------------------------------------------------------
></pre>
>
>If we wanted to know about objects which extend <code>java.util.AbstractCollection</code> then
>use:
>
><syntaxhighlight lang="tsql">
>SELECT * FROM INSTANCEOF java.util.AbstractCollection o WHERE o in
>(SELECT AS RETAINED SET * FROM OBJECTS ${snapshot}.getGCRoots() r
> WHERE (SELECT * FROM OBJECTS ${snapshot}.getGCRootInfo(r) t WHERE t.@type = 2048) != null)
></syntaxhighlight>
><pre>
>Class Name | Shallow Heap | Retained Heap
>---------------------------------------------------------------
>java.util.ArrayList @ 0x22efb928| 24 | 320
>java.util.ArrayList @ 0x22efb998| 24 | 320
>java.util.ArrayList @ 0x22efcfd8| 24 | 384
>java.util.ArrayList @ 0x22efe5a8| 24 | 320
>java.util.ArrayList @ 0x22efe608| 24 | 320
>java.util.ArrayList @ 0x22efe620| 24 | 320
>java.util.HashSet @ 0x22f2b600 | 16 | 136
>java.util.Vector @ 0x22f2b5e8 | 24 | 80
>java.util.Vector @ 0x22f2b638 | 24 | 80
>---------------------------------------------------------------
></pre>
>
>or
>
><syntaxhighlight lang="tsql">
>SELECT * FROM OBJECTS (SELECT AS RETAINED SET * FROM OBJECTS ${snapshot}.getGCRoots() r
> WHERE (SELECT * FROM OBJECTS ${snapshot}.getGCRootInfo(r) t WHERE t.@type = 2048) != null) u
> WHERE u in (SELECT * FROM INSTANCEOF java.util.AbstractCollection o)
></syntaxhighlight>
><pre>
>Class Name | Shallow Heap | Retained Heap
>---------------------------------------------------------------
>java.util.ArrayList
@ 0x22efb928| 24 | 320 >java.util.ArrayList @ 0x22efb998| 24 | 320 >java.util.ArrayList @ 0x22efcfd8| 24 | 384 >java.util.ArrayList @ 0x22efe5a8| 24 | 320 >java.util.ArrayList @ 0x22efe608| 24 | 320 >java.util.ArrayList @ 0x22efe620| 24 | 320 >java.util.Vector @ 0x22f2b5e8 | 24 | 80 >java.util.HashSet @ 0x22f2b600 | 16 | 136 >java.util.Vector @ 0x22f2b638 | 24 | 80 >--------------------------------------------------------------- ></pre> > >or > ><syntaxhighlight lang="tsql"> >SELECT * FROM OBJECTS (SELECT AS RETAINED SET * FROM OBJECTS ${snapshot}.getGCRoots() r > WHERE (SELECT * FROM OBJECTS ${snapshot}.getGCRootInfo(r) t WHERE t.@type = 2048) != null) u > WHERE u.@clazz.doesExtend("java.util.AbstractCollection") ></syntaxhighlight> ><pre> >Class Name | Shallow Heap | Retained Heap >--------------------------------------------------------------- >java.util.ArrayList @ 0x22efb928| 24 | 320 >java.util.ArrayList @ 0x22efb998| 24 | 320 >java.util.ArrayList @ 0x22efcfd8| 24 | 384 >java.util.ArrayList @ 0x22efe5a8| 24 | 320 >java.util.ArrayList @ 0x22efe608| 24 | 320 >java.util.ArrayList @ 0x22efe620| 24 | 320 >java.util.Vector @ 0x22f2b5e8 | 24 | 80 >java.util.HashSet @ 0x22f2b600 | 16 | 136 >java.util.Vector @ 0x22f2b638 | 24 | 80 >--------------------------------------------------------------- ></pre> > >== Extracting Thread information == > >With a recent level of MAT this should work: > ><syntaxhighlight lang="tsql"> >SELECT u.Thread AS Thread, u.Frame.@text AS Frame > FROM OBJECTS ( > SELECT t AS Thread, ${snapshot}.getThreadStack(t.@objectId).@stackFrames AS Frame > FROM java.lang.Thread t ) u ></syntaxhighlight> > >The inner select > ><syntaxhighlight lang="tsql"> >SELECT t AS Thread, ${snapshot}.getThreadStack(t.@objectId).@stackFrames AS Frame > FROM java.lang.Thread t ></syntaxhighlight> > >extracts each thread and an array of stack frames. The outer select then flattens that array with the same thread reference for each of its stack frames. 
> ><pre> >Thread |Frame >-------------------------------------------------------------------------------------------------------------------------------------------------- >java.lang.Thread [id=0x7b6a1e7f0]|at java.lang.Object.wait(J)V (Native Method) >java.lang.Thread [id=0x7b6a1e7f0]|at java.lang.Object.wait(JI)V (Unknown Source) >java.lang.Thread [id=0x7b6a1e7f0]|at com.squareup.okhttp.ConnectionPool.performCleanup()Z (ConnectionPool.java:305) >java.lang.Thread [id=0x7b6a1e7f0]|at com.squareup.okhttp.ConnectionPool.runCleanupUntilPoolIsEmpty()V (ConnectionPool.java:242) >java.lang.Thread [id=0x7b6a1e7f0]|at com.squareup.okhttp.ConnectionPool.access$000(Lcom/squareup/okhttp/ConnectionPool;)V (ConnectionPool.java:54) >java.lang.Thread [id=0x7b6a1e7f0]|at com.squareup.okhttp.ConnectionPool$1.run()V (ConnectionPool.java:97) >-------------------------------------------------------------------------------------------------------------------------------------------------- ></pre> > > >You can even extract each local from each frame using another select. 
> ><syntaxhighlight lang="tsql"> > SELECT v.Thread as Thread, toString(v.Thread) AS Name, v.Frame AS Frame, ${snapshot}.getObject(v.Objs) AS Local > FROM OBJECTS ( > SELECT u.Thread AS Thread, u.Frame.@text AS Frame, u.Frame.@localObjectsIds AS Objs > FROM OBJECTS ( > SELECT t AS Thread, ${snapshot}.getThreadStack(t.@objectId).@stackFrames AS Frame > FROM java.lang.Thread t > ) u > ) v > WHERE (v.Objs != null) ></syntaxhighlight> > ><pre> >v.Thread |Name |Frame |Local >------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ >java.lang.Thread [id=0x7b6a1e7f0]|OkHttp ConnectionPool|at com.squareup.okhttp.ConnectionPool.performCleanup()Z (ConnectionPool.java:305) |java.util.ArrayList [id=0x7bb402eb8] >java.lang.Thread [id=0x7b6a1e7f0]|OkHttp ConnectionPool|at com.squareup.okhttp.ConnectionPool.performCleanup()Z (ConnectionPool.java:305) |com.squareup.okhttp.ConnectionPool [id=0x6c556ccb0] >java.lang.Thread [id=0x7b6a1e7f0]|OkHttp ConnectionPool|at com.squareup.okhttp.ConnectionPool.runCleanupUntilPoolIsEmpty()V (ConnectionPool.java:242) |com.squareup.okhttp.ConnectionPool [id=0x6c556ccb0] >java.lang.Thread [id=0x7b6a1e7f0]|OkHttp ConnectionPool|at com.squareup.okhttp.ConnectionPool.access$000(Lcom/squareup/okhttp/ConnectionPool;)V (ConnectionPool.java:54)|com.squareup.okhttp.ConnectionPool [id=0x6c556ccb0] >java.lang.Thread [id=0x7b6a1e7f0]|OkHttp ConnectionPool|at com.squareup.okhttp.ConnectionPool$1.run()V (ConnectionPool.java:97) |com.squareup.okhttp.ConnectionPool$1 [id=0x6c556ccd8] >------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ ></pre> > >== Displaying all the fields of objects == > ><syntaxhighlight lang="tsql"> >SELECT t.s 
AS "Object", toHex(t.s.@objectAddress) AS "Object address", t.f.@name AS "Field name", t.f.@value AS "Field value" > FROM OBJECTS ( SELECT s, s.@fields AS f FROM "java.util.*" s > WHERE (s implements org.eclipse.mat.snapshot.model.IInstance) ) t ></syntaxhighlight> > >This extracts all the plain Java objects (not arrays) of the java.util package. >It then extracts the fields, and uses flattening to process those fields one by one. > ><pre> >Object |Object address|Field name |Field value >--------------------------------|--------------|---------------|----------- >java.util.TreeMap [id=0x3acd178]|0x3acd178 |comparator | >java.util.TreeMap [id=0x3acd178]|0x3acd178 |root |0x3c32f60 >java.util.TreeMap [id=0x3acd178]|0x3acd178 |size |2 >java.util.TreeMap [id=0x3acd178]|0x3acd178 |modCount |2 >java.util.TreeMap [id=0x3acd178]|0x3acd178 |entrySet | >java.util.TreeMap [id=0x3acd178]|0x3acd178 |navigableKeySet| >java.util.TreeMap [id=0x3acd178]|0x3acd178 |descendingMap | >java.util.TreeMap [id=0x3acd178]|0x3acd178 |keySet | >java.util.TreeMap [id=0x3acd178]|0x3acd178 |values | >java.util.TreeMap [id=0x3acd1a8]|0x3acd1a8 |comparator | >java.util.TreeMap [id=0x3acd1a8]|0x3acd1a8 |root |0x3c32f80 >--------------------------------|--------------|---------------|----------- ></pre></text> > <sha1>4t9df80fgnw9yf2obznkvmpctxqb8ql</sha1> > </revision> > </page> > <page> > <title>MemoryAnalyzer/Ramp Down Plan</title> > <ns>0</ns> > <id>24391</id> > <revision> > <id>434874</id> > <parentid>434873</parentid> > <timestamp>2019-10-23T09:12:20Z</timestamp> > <contributor> > <username>Andrew johnson.uk.ibm.com</username> > <id>4755</id> > </contributor> > <comment>/* Ramp down plan */</comment> > <model>wikitext</model> > <format>text/x-wiki</format> > <text xml:space="preserve" bytes="1342">== Ramp down plan == > >Typically the last week of a Milestone is for testing, and fixing only regressions and P1 or blocking defects. 
>For milestones, review and approval by the component lead (or a delegate) is enough.
>
>'''For M3 [''was M7 for Photon and before''], we plan to be API and feature complete''', so there will be no breaking API changes and no new feature requests will be accepted.
>
>'''For RC1 [''was RC1 and RC2 for Photon and before''] only bug fixes to Memory Analyzer''' functionality should be done. The following describes the types of bugs that would be appropriate:
>* A regression
>* A P1 or P2 bug, one that is blocking or critical
>* Minor documentation changes, including adding new and noteworthy
>
>'''For RC2 [''was RC3 - RC4 for Photon and before'']''' only fixes for bugs damaging the build or functionality of the simultaneous release are allowed
>
>* [[Galileo Simultaneous Release]]
>* [[Helios Simultaneous Release]]
>* [[Indigo Simultaneous Release]]
>* [[Juno/Simultaneous Release Plan]]
>* [[Kepler/Simultaneous Release Plan]]
>* [[Luna/Simultaneous Release Plan]]
>* [[Mars/Simultaneous Release Plan]]
>* [[Neon/Simultaneous Release Plan]]
>* [[Oxygen/Simultaneous Release Plan]]
>* [[Photon/Simultaneous Release Plan]]
>* [[Simultaneous Release]] subsequent releases on a 13-week cycle
>
>[[Category:Memory Analyzer]]</text>
> <sha1>ttfmtar85kxbqxpcx5d7eaogkagjdi7</sha1>
> </revision>
> </page>
> <page>
> <title>MemoryAnalyzer/Reading Data from Heap Dumps</title>
> <ns>0</ns>
> <id>27731</id>
> <revision>
> <id>441258</id>
> <parentid>339028</parentid>
> <timestamp>2020-10-13T06:30:15Z</timestamp>
> <contributor>
> <username>Andrew johnson.uk.ibm.com</username>
> <id>4755</id>
> </contributor>
> <comment>/* The IProgressListener interface */</comment>
> <model>wikitext</model>
> <format>text/x-wiki</format>
> <text xml:space="preserve" bytes="9456">=== Introduction ===
>
>The Memory Analyzer offers an API which one can use to open a heap dump and inspect its contents programmatically.
This API is used by the MAT tool itself to offer the different end-user features available in the tool. An overview of this API is available on this page.
>
>=== The ISnapshot interface ===
>
>The most important interface one can use to extract data from a heap dump is ISnapshot. ISnapshot represents a heap dump and offers various methods for reading objects and classes from it, getting the size of objects, etc.
>
>To obtain an instance of ISnapshot one can use static methods on the SnapshotFactory class. However, this is only needed if the API is used to implement a tool independent of Memory Analyzer. If you are writing extensions to MAT, your code will be given an instance corresponding to an already opened heap dump, either by injection or as a method parameter. See [[MemoryAnalyzer/Extending_Memory_Analyzer]].
>
>=== Opening a snapshot using SnapshotFactory ===
>
>To open an existing heap dump in one of the supported formats call the
>SnapshotFactory.openSnapshot() method. See [[MemoryAnalyzer#Getting_a_Heap_Dump]] for more details on getting heap dumps from different VMs.
>
>public static ISnapshot openSnapshot(File file, IProgressListener listener) throws SnapshotException
>
>As parameters pass the heap dump file and a progress listener (see [[#The IProgressListener interface]]).
>When you are finished with using the ISnapshot instance call the SnapshotFactory.dispose(ISnapshot) method to free the resources and unlock any used files.
>
>=== The IProgressListener interface ===
>
>The IProgressListener interface offers (as the name suggests) functionality to report the progress of different computations. Usually if you are extending the tool then MAT will pass an instance of an object implementing the interface to you. If you are opening the heap dump on your own, you may need to create the listener yourself. The tool provides some helper classes:
>To log progress to the console create a ConsoleProgressListener.
To ignore the progress output create a VoidProgressListener. You can wrap an org.eclipse.core.runtime.IProgressMonitor using an org.eclipse.mat.ui.util.ProgressMonitorWrapper. If you have several subtasks then org.eclipse.mat.util.SimpleMonitor can be used to generate several IProgressListener objects, each handling a certain proportion of the work from a supplied IProgressListener.
>
>=== The object model ===
>
>The following hierarchy of interfaces represents the object model that MAT builds for objects in the heap.
>IObject
> IClass
> IInstance
> IClassLoader
> IArray
> IObjectArray
> IPrimitiveArray
>
>This model is pretty straightforward and easy to understand. However, there is one major challenge - the memory needed to maintain such a model.
>As there are often millions of objects in a heap dump, MAT does not keep such a model throughout the lifetime of an ISnapshot. Instead it gives every object an id (starting from 0 and growing by one) and uses these ids to obtain information about objects (like class, size, referenced objects, etc.) from the ISnapshot instance. Also most of the heavy computations traversing potentially millions of objects (e.g. calculating a retained size, computing paths, etc.) are done without using the object model described above.
>The use of the classes described here is needed (and recommended) only when the full information about an object is needed - including its field names and their (possibly primitive) values.
>
>=== Single objects, objects id and address ===
>
>To get an object by its id use the method getObject(int id) of ISnapshot.
>Objects also have addresses (usually visualized as a hexadecimal number next to the object).
One can map between object ids and addresses using the following two methods of ISnapshot:
>
> public long mapIdToAddress(int objectId) throws SnapshotException;
>
> public int mapAddressToId(long objectAddress) throws SnapshotException;
>
>If you already have an instance of IObject you can call getObjectId() and getObjectAddress().
>
>=== Getting classes ===
>
>The ISnapshot interface offers the possibility to get a class by its fully qualified name or to get a collection of classes using a regex pattern.
> public Collection<IClass> getClassesByName(String name, boolean includeSubClasses) throws SnapshotException;
> public Collection<IClass> getClassesByName(Pattern namePattern, boolean includeSubClasses) throws SnapshotException;
>
>Both methods return a collection of classes, as classes with the same name but loaded by different class loaders are treated as separate classes.
>To get a collection of all classes available in the heap dump, just call the getClasses() method without any parameters.
>
>=== Get all instances of a class ===
>
>To get all instances of a certain class first obtain the class (or collection of classes) and then call the getObjectIds method on the IClass instance:
> public int[] getObjectIds() throws SnapshotException;
>
>The returned int[] contains the ids of all objects of the class.
>
>=== Inspecting referenced objects ===
>
>There are various possibilities to explore the outgoing references of an object. The most performant way is to use the getOutboundReferentIds(int) method of ISnapshot.
> public int[] getOutboundReferentIds(int objectId) throws SnapshotException;
>
>This method takes an object id and returns an array containing the ids of all referenced objects. The referenced objects also include objects referenced via artificially modelled references (TODO link to this part). This method gives a fast way to traverse the object graph.
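>
>The methods above can be combined into a short sketch. This is untested illustrative code, not part of the API documentation; it assumes an already obtained ISnapshot in a variable named snapshot, and the class name java.util.ArrayList is just an example:
>
><syntaxhighlight lang="java">
>// Sketch: walk the outbound references of every java.util.ArrayList
>// instance, using only the ISnapshot methods described above.
>Collection<IClass> classes = snapshot.getClassesByName("java.util.ArrayList", false);
>for (IClass clazz : classes) {
>    for (int objectId : clazz.getObjectIds()) {
>        // Fast traversal using object ids only
>        for (int refId : snapshot.getOutboundReferentIds(objectId)) {
>            // Materialize an IObject only when more detail is needed
>            IObject ref = snapshot.getObject(refId);
>            System.out.println(ref.getTechnicalName());
>        }
>    }
>}
></syntaxhighlight>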
>
>The IObject interface also provides several ways to explore its references:
> public List<NamedReference> getOutboundReferences();
>A NamedReference allows you to look at the name of the reference, get the id and the address of the referenced object, and also get the referenced object as an IObject.
>
>If you are looking for the value of a specific field / reference, then the most convenient way to get it is to use the resolveValue method:
>
> public Object resolveValue(String field) throws SnapshotException;
>
>It takes as an argument a dot-separated path to the field of interest. This means that one can access not only the fields of the object itself, but also fields reachable through a path of references. Here is an example:
>
> IObject myObject = snapshot.getObject(objectId);
> IObject fName = (IObject) myObject.resolveValue("department.customer.firstName");
>
>This code will go through the fields of myObject and search for a field named "department". If department itself is not a primitive, MAT will look up its field "customer", and then find the field "firstName" in the object referenced through "customer".
>
>If the field is of a primitive type, then resolveValue() will return the corresponding wrapper class, e.g. you can do:
>
> IObject hashMap = ...; // some IObject representing a HashMap
> int size = (Integer) hashMap.resolveValue("size");
>
>=== Printing objects ===
>
>The IObject interface defines several methods for getting a String representation of the object.
>
>getTechnicalName() – this method will return a string in the format <class name> @ <address>
>
>getClassSpecificName() – this method is similar to the toString() method. It may return some meaningful description of the object. For example, if you call it on an IObject representing a String, then it will return the value of the String. If you call it on an IObject representing a Thread, it will return the name of the Thread. 
This method, however, is not the toString() method of the real objects that were put in the heap dump. The heap dump only contains the objects and their values; it is not possible to call methods of the corresponding classes. The Memory Analyzer simply extracts information from the fields of the objects and models the toString() behavior. It is possible to easily extend MAT by adding new ClassSpecificNameResolvers using a defined extension point (TODO see link).
>
>getDisplayName() – this is a convenience method returning the technical name followed by the class specific name.
>
>=== Object sizes ===
>
>==== Shallow size ====
>
>To get the shallow size of a single object, use the getHeapSize method of ISnapshot:
> public long getHeapSize(int objectId) throws SnapshotException;
>
>If you have to compute the shallow size of a set of objects (e.g. the sum of the shallow sizes of each instance of a certain class), then we recommend using the getHeapSize(int[] objectIds) method of ISnapshot and passing the ids of all objects of interest as an array. This method uses some internal structures and executes the task in several threads (if more than one CPU is available), so it performs better than looping over the objects and calling getHeapSize() for each single object.
>
>==== Retained size ====
>
>To get the retained size of a single object, use the getRetainedHeapSize() method of ISnapshot:
>
> public long getRetainedHeapSize(int objectId) throws SnapshotException;
>
>To get the retained size of a set of objects, first compute the retained set using int[] getRetainedSet(int[], IProgressListener) and then call getHeapSize(int[]) on the returned array of ids.
>
>The getRetainedSet method has two other "advanced" variants. Consult the API reference inside the tool for more details. 
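The shallow and retained size calls described above can be combined as follows. This is an illustrative sketch (not from the wiki page), assuming an already-opened snapshot and an int[] of object ids, and using a VoidProgressListener to ignore progress output; the class name SizeSketch is an assumption:

```java
import org.eclipse.mat.SnapshotException;
import org.eclipse.mat.snapshot.ISnapshot;
import org.eclipse.mat.util.IProgressListener;
import org.eclipse.mat.util.VoidProgressListener;

public class SizeSketch
{
    public static void printSizes(ISnapshot snapshot, int[] objectIds) throws SnapshotException
    {
        // Bulk shallow size: faster than calling getHeapSize(int) per object
        long shallow = snapshot.getHeapSize(objectIds);

        // Retained size of the set: compute the retained set first,
        // then take the shallow size of everything in it
        IProgressListener listener = new VoidProgressListener();
        int[] retainedSet = snapshot.getRetainedSet(objectIds, listener);
        long retained = snapshot.getHeapSize(retainedSet);

        System.out.println("shallow = " + shallow + " bytes, retained = " + retained + " bytes");
    }
}
```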
>
>[[Category:Memory Analyzer]]</text>
> <sha1>18hq17n9vjtkpmk9s5keu7xeqspfnlu</sha1>
> </revision>
> </page>
> <page>
> <title>MemoryAnalyzer/Releases</title>
> <ns>0</ns>
> <id>26767</id>
> <revision>
> <id>411387</id>
> <parentid>341756</parentid>
> <timestamp>2016-11-07T15:28:23Z</timestamp>
> <contributor>
> <username>Krum.tsvetkov.sap.com</username>
> <id>3945</id>
> </contributor>
> <comment>Replaced content with "The releases information is available under the common Eclipse projects information [https://projects.eclipse.org/projects/tools.mat/governance page for MAT] Category:M..."</comment>
> <model>wikitext</model>
> <format>text/x-wiki</format>
> <text xml:space="preserve" bytes="187">The release information is available under the common Eclipse projects information [https://projects.eclipse.org/projects/tools.mat/governance page for MAT]
>
>[[Category:Memory Analyzer]]</text>
> <sha1>6tgsk0ey9i4n8gkmr5v9pn4kxc8uvsq</sha1>
> </revision>
> </page>
> <page>
> <title>MemoryAnalyzer/Retention policy</title>
> <ns>0</ns>
> <id>24952</id>
> <revision>
> <id>339026</id>
> <parentid>278192</parentid>
> <timestamp>2013-06-05T07:11:20Z</timestamp>
> <contributor>
> <username>Andrew johnson.uk.ibm.com</username>
> <id>4755</id>
> </contributor>
> <minor/>
> <model>wikitext</model>
> <format>text/x-wiki</format>
> <text xml:space="preserve" bytes="747">This document provides the current retention policy for Memory Analyzer.
>
>= Released Versions =
>For all officially released versions the following applies:
>* the two most recent releases from the latest major version will be available on http://download.eclipse.org
>* releases included in the Eclipse simultaneous release will be available on http://download.eclipse.org
>* older releases will be archived on http://archive.eclipse.org and available there for unlimited time
>
>This applies both for RCPs and update sites. 
>
>= Milestones, Previews, Development Builds =
>Milestone builds, developer builds and previews may be removed entirely from the download sites or archived, at the discretion of the Memory Analyzer team.
>
>[[Category:Memory Analyzer]]</text>
> <sha1>0e29q3ry5cswg0zn9uuo4n21y7zuexc</sha1>
> </revision>
> </page>
> <page>
> <title>MemoryAnalyzer/Shared Installation</title>
> <ns>0</ns>
> <id>61652</id>
> <revision>
> <id>445367</id>
> <parentid>445365</parentid>
> <timestamp>2022-05-03T14:43:23Z</timestamp>
> <contributor>
> <username>Andrew johnson.uk.ibm.com</username>
> <id>4755</id>
> </contributor>
> <model>wikitext</model>
> <format>text/x-wiki</format>
> <text xml:space="preserve" bytes="1365">=Eclipse Memory Analyzer and a shared installation=
>
>Sometimes it can be useful to install Eclipse Memory Analyzer in a shared directory so that many people can use it, but ideally each with their own settings.
>
>See [https://help.eclipse.org/latest/index.jsp?topic=%2Forg.eclipse.platform.doc.isv%2Freference%2Fmisc%2Fruntime-options.html Eclipse multi-user installs]
>
># Unpack Eclipse Memory Analyzer into a location which can be shared
># Optionally, start MAT and set up preferences etc. as required; this will become the default configuration
># Add the following to MemoryAnalyzer.ini. This stops a root user from running Memory Analyzer and overwriting key files in the shared directories. <br/><code>-protect<br/>root</code>
># Add to configuration/config.ini a line such as<br/><code>osgi.instance.area=@user.home/MemoryAnalyzer</code><br/>This sets the workspace location explicitly, as the default would be under the Memory Analyzer install directory and so would not be writable. It also means the user does not have to specify -data
># Make all the Memory Analyzer files and directories publicly readable, but not writable. 
># The error logs will go to a location like ~/.eclipse/org.eclipse.mat_1.13.0_87691952_linux_gtk_x86_64/configuration/1651585377771.log. The org.eclipse.mat and 1.13.0 parts come from .eclipseproduct; linux comes from the os setting, gtk from ws, and x86_64 from arch.
>
>[[Category:Memory Analyzer]]</text>
> <sha1>8qi1ar6rtxwxq4wr089iiswz4nu2ltk</sha1>
> </revision>
> </page>
> <page>
> <title>MemoryAnalyzer/UI</title>
> <ns>0</ns>
> <id>62081</id>
> <revision>
> <id>446978</id>
> <parentid>446977</parentid>
> <timestamp>2023-02-24T09:19:21Z</timestamp>
> <contributor>
> <username>Andrew johnson.uk.ibm.com</username>
> <id>4755</id>
> </contributor>
> <model>wikitext</model>
> <format>text/x-wiki</format>
> <text xml:space="preserve" bytes="651">== UI interface text ==
>
>[[Category:Memory Analyzer]]
>The Eclipse Memory Analyzer User Interface (UI)
>
>=== Views ===
>
>=== Editors ===
>
>=== UI/UX Guidelines ===
>
>[https://eclipse-platform.github.io/ui-best-practices/ Eclipse UI Guidelines]
>
>==== 5.1.1. Views & Editors ====
>
>Overview and Find Object by address are on the toolbar but not in a menu
>
>Persist the state of each view between sessions (Guideline 7.20).
>May not be possible to reopen heap dumps??
>
>==== 5.1.2. Wizards & Dialogs ====
>
>Acquire Heap dump and diagnostics start with a message
>
>==== 5.1.3. 
Workbench & Preferences ====
>
>Headline Capitalization
>
>Sentence capitalization
>needs work</text>
> <sha1>nv2k7xdowataxomqrihdwu9b3udi1dd</sha1>
> </revision>
> </page>
> <page>
> <title>MemoryAnalyzer/WSL</title>
> <ns>0</ns>
> <id>61606</id>
> <revision>
> <id>447650</id>
> <parentid>447649</parentid>
> <timestamp>2023-07-12T06:24:16Z</timestamp>
> <contributor>
> <username>Andrew johnson.uk.ibm.com</username>
> <id>4755</id>
> </contributor>
> <minor/>
> <comment>Formatting</comment>
> <model>wikitext</model>
> <format>text/x-wiki</format>
> <text xml:space="preserve" bytes="3558">== Windows Subsystem for Linux ==
>
>[[Category:Memory Analyzer]]
>
>It is possible to test Linux builds on a Windows 10 (or 11) machine using the Windows Subsystem for Linux.
>
>Install WSL2 with, for example, Ubuntu or Ubuntu 20
>
>For Windows 10: Install Cygwin and X-Server
>
>For Windows 11: X-Server is installed on WSL2 by default
>
>For Windows 10: Install the appropriate graphics driver: [https://docs.microsoft.com/en-us/windows/wsl/tutorials/gui-apps Microsoft WSL GUI apps]
>
>For Windows 11: Graphics driver is installed on WSL2 by default
>
>Install unzip:
>
><code>sudo apt install unzip</code>
>
>Install GTK:
>
><code>sudo apt-get install libswt-gtk-4-jni libswt-gtk-4-java</code>
>
>Install WebKit:
>
>[https://www.eclipse.org/swt/faq.php#browserlinux Eclipse instructions]
><code>sudo apt-get install libwebkit2gtk-4.0-37</code>
>
>Install Java 17 or later:
>
><code>sudo apt install openjdk-17-jre-headless</code>
>
>For Windows 10, start the X-server:
>Find the IP address of the WSL2 system:
><code>ip addr | grep eth0</code>
>
>From a Cygwin64 command prompt. 
xhost should be given the IP address of the WSL2 system as seen from Windows / Cygwin:
> <nowiki>startxwin -- -listen tcp
>xhost +127.0.0.1 #Add the appropriate IP address, need to check for WSL2
># xhost +172.22.46.35 # or use this for WSL2, replace the address with the address from ip addr above
># xhost +$(wsl hostname -I) # or use this from a Cygwin xterm window to automatically find the WSL2 address</nowiki>
>
>Download the Memory Analyzer zip and unpack it:
>
> <nowiki>unzip MemoryAnalyzer-1.12.0.20210602-linux.gtk.x86_64.zip</nowiki>
>
>For WSL1
> <nowiki>
>cd mat
>export DISPLAY=:0
>./MemoryAnalyzer</nowiki>
>
>or for WSL2, Windows 10
> <nowiki>
>cd mat
>export DISPLAY=$(ip route | grep default | cut -d ' ' -f 3)':0' # Finds the IP address of the Windows machine
>./MemoryAnalyzer</nowiki>
>
>or for WSL2, Windows 11
><code>./MemoryAnalyzer</code>
>
>===Problems===
>
>Problem: Failed to load swt-pi3
> <nowiki>./MemoryAnalyzer
>SWT OS.java Error: Failed to load swt-pi3, loading swt-pi4 as fallback.
>MemoryAnalyzer:
>An error has occurred. See the log file
>/home/user1/mat/configuration/1689022953567.log.
></nowiki>
>Solution:
>
>Install GTK4
>
>Problem: Failed to create a browser
> <nowiki>Failed to create a browser because of: No more handles because there is no underlying browser available.
>Please ensure that WebKit with its GTK 3.x/4.x bindings is installed.
>Consult error log for more details.
>Press F1 or the help icon for help.
></nowiki>
>
>Solution:
>
>Install WebKit
>
>===Charts===
>
>To get charts working, add <code>-Djava.awt.headless=true</code> to <code>MemoryAnalyzer.ini</code> in the vmargs section.
>
>====WebKit problems - e.g. Ubuntu 22.04====
>
>If you get errors such as the following, and blank pages for reports or help:
>
> <nowiki>(WebKitWebProcess:22883): Gdk-ERROR **: 12:42:02.853: The program 'WebKitWebProcess' received an X Window System error.
>This probably reflects a bug in the program.
>The error was 'GLXBadFBConfig'. 
> (Details: serial 148 error_code 161 request_code 148 (GLX) minor_code 21)
> (Note to programmers: normally, X errors are reported asynchronously;
> that is, you will receive the error a while after causing it.
> To debug your program, run it with the GDK_SYNCHRONIZE environment
> variable to change this behavior. You can then get a meaningful
> backtrace from your debugger if you break on the gdk_x_error() function.)</nowiki>
>
>try starting Memory Analyzer like this:
><code>WEBKIT_DISABLE_COMPOSITING_MODE=1 ./MemoryAnalyzer</code>
>
>See [https://bugs.launchpad.net/ubuntu/+source/evolution/+bug/1966418] for details.</text>
> <sha1>1lp5zlmqylz7v20bixfh3dyxjji2d10</sha1>
> </revision>
> </page>
></mediawiki>