Consider a trace that lasts 5 minutes, with a period in the middle where there is no activity (no method calls, because the user is doing something else) for 4.9 minutes. This renders the hot spots useless: there will be a single large hot spot in the gap between calls, when the user was doing nothing. But the user doesn't care about that part of the trace; they care about the parts where the program was actually executing and processing data. The solution, I believe, is a toggle option in the preferences to ignore periods of inactivity when calculating hot spots; that is, the periods between calls would not be taken into account. I would also suggest that this option be turned on by default.
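A minimal sketch of this gap-exclusion idea, assuming the trace is available as a list of call intervals with start/end timestamps; the class, record, and method names are illustrative only and not part of the actual TPTP code:

```java
import java.util.ArrayList;
import java.util.List;

final class HotSpotSketch {

    /** Start/end timestamps (in microseconds) of one call in the trace. */
    record CallInterval(long start, long end) { }

    /**
     * Returns the gaps between consecutive calls. When ignoreInactivity is
     * true, gaps longer than the threshold are dropped, so a single 4.9 minute
     * idle period no longer dominates the hot spot scale.
     */
    static List<Long> gapDurations(List<CallInterval> calls,
                                   boolean ignoreInactivity,
                                   long inactivityThresholdMicros) {
        List<Long> gaps = new ArrayList<>();
        for (int i = 1; i < calls.size(); i++) {
            long gap = calls.get(i).start() - calls.get(i - 1).end();
            if (gap <= 0) continue;
            if (ignoreInactivity && gap > inactivityThresholdMicros) {
                continue; // treated as external/waiting time, not a hot spot
            }
            gaps.add(gap);
        }
        return gaps;
    }
}
```

With the threshold applied, the hot spot colouring would be normalised against the remaining durations only, so the calls the user cares about spread across the whole colour range instead of clustering at the cold end.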
The linear function used to determine the hot spots has been replaced by a logarithmic function.
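A small sketch of why the scale matters, assuming durations are mapped to a fixed number of colour cells on the time compression bar; the cell count and method names are assumptions, not the actual implementation:

```java
final class ScaleSketch {

    /** Linear: one very long interval pushes every other interval into cell 0. */
    static int linearCell(long duration, long maxDuration, int cells) {
        return (int) ((cells - 1) * duration / (double) maxDuration);
    }

    /** Logarithmic: compresses outliers so ordinary intervals remain distinguishable. */
    static int logCell(long duration, long maxDuration, int cells) {
        if (duration <= 0) return 0;
        double ratio = Math.log1p(duration) / Math.log1p(maxDuration);
        return (int) ((cells - 1) * ratio);
    }
}
```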
Will this fix the problem though? There is still the case where we are profiling an application that involves waiting for user input - we don't want to know how long it took for the user to respond; only how long the application took to execute. The user waiting time will usually be much larger than the processing time, so the problem would still exist with a logarithmic scale, wouldn't it?
I think we need to revisit this. I don't believe changing the scale will really fix the problem. The problem is that sometimes we don't want the user waiting times (or slow activity, whatever the reason there isn't much happening) to be considered in the hot spots. Let me give an example: a user is profiling a web server with very light traffic. The time between requests is much greater than the time spent processing them. In this case the hot spots are of no use: everything between the requests will be red, and you can't tell which parts of the web server are slow. What the user really wants to see are hot spots only for when the application was processing, not while it was waiting for requests. This applies to any application that is user-driven, not just web servers; users often take a long time to respond to the UI, relative to the time spent processing the request. One solution would be an option, somewhere in the sequence diagram options or in the toolbar, to choose whether waiting times between top-level calls are considered for hot spots.
In order to ignore waiting times we need a way to detect them reliably. We can automatically ignore the time just before a found message (a message with no starting lifeline) and just after a lost message (a message with no ending lifeline). The question that remains is whether this really covers all waiting-time situations. Another way would be to offer a dialog showing a histogram of the occurrences falling into each cell of the time scale. The end-user could then choose to ignore the rightmost cells, and all the times falling in those cells would be discarded. I am not sure this is really easy to implement... If you think it could be useful, it could be another bugzilla record targeted after 3.2. Tell us what you think.
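A rough sketch of the histogram idea, assuming the durations and the number of time-scale cells are already known; the class and method names are hypothetical and `maxDuration` is assumed to be positive:

```java
import java.util.List;
import java.util.stream.Collectors;

final class HistogramSketch {

    /** Count how many durations fall into each of the time-scale cells. */
    static int[] histogram(List<Long> durations, long maxDuration, int cells) {
        int[] counts = new int[cells];
        for (long d : durations) {
            int cell = (int) Math.min(cells - 1L, (cells - 1) * d / maxDuration);
            counts[cell]++;
        }
        return counts;
    }

    /** Keep only the durations whose cell index is below the user's chosen cutoff. */
    static List<Long> excludeRightmost(List<Long> durations, long maxDuration,
                                       int cells, int firstIgnoredCell) {
        return durations.stream()
                .filter(d -> (cells - 1) * d / maxDuration < firstIgnoredCell)
                .collect(Collectors.toList());
    }
}
```

The dialog would display the counts from `histogram` so the user can see where the outliers sit, and the cutoff chosen there would drive `excludeRightmost` before the hot spots are recomputed.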
I don't know how difficult this would be, but I was thinking of an option to not consider the time slots between top-level calls. These are the time slots where nothing is happening in the sequence diagram, though not necessarily in the application (something could be happening in another class that was filtered out). Maybe calling it waiting time wasn't quite accurate on my part. What if we thought of it as: I don't want hot spots to consider times when things were happening outside this sequence diagram.
From Sebastien: A new preference "Exclude external time" has been added and is checked by default. This preference lets the user choose whether inactivity times are taken into account in the time compression bar. The preference name can probably be improved.
Verified. This is exactly what we needed.
Closing bugs that were previously verified.