
Re: [linuxtools-dev] [TMF] Advanced statistics view

Some other considerations that came to mind after writing the whole thing:

1 - In fact, that state history would NOT be storable in a partial
history (aka "partializable"). The partial history only gives us the
values at the queried times, not the whole intervals. It can give us the
start time, but not the end time, since we do not know when a state
will end (to know for sure, we would have to read the trace up to the
end for every query, which is not very interesting...)

In the implementation I'm working on these days, the return value
is still an array of intervals whose start times are correct, but whose
end times are all equal to the queried time.
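A minimal sketch of that return-value shape, in Python for brevity (TMF itself is Java, and all names here are hypothetical, not the actual TMF API):

```python
from dataclasses import dataclass

@dataclass
class Interval:
    start: int    # correct start time, known from the partial history
    end: int      # unknown in a partial history, so clamped to the query time
    value: object

def query_full_state(partial_values, t):
    """Hypothetical full query against a partial history: the backend only
    knows each attribute's value and start time at time t, so every
    returned interval's end time is the queried time itself."""
    return [Interval(start=s, end=t, value=v) for (s, v) in partial_values]

# Two attributes whose current states began at t=100 and t=250, queried at t=300
intervals = query_full_state([(100, "RUNNING"), (250, "WAITING")], 300)
# → both intervals end at 300, the queried time
```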


2 - Depending on the precision you want to show in the view, you might
want to store the "cumulative time" values in microseconds, perhaps even
milliseconds. The integer value in the state system backend would
overflow very quickly if you use nanoseconds. It would be possible to
add support for long values in the backend, but those would take three
times more space than an int (because you would use the 32-bit entry for
a "pointer", and then 64 bits in the variable-length section for the
long itself).
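To put numbers on "very quickly": a signed 32-bit int holds at most 2^31 - 1 units, so the time-to-overflow depends directly on the unit chosen. A quick back-of-the-envelope check:

```python
INT32_MAX = 2**31 - 1  # largest value a signed 32-bit int can hold

# How many seconds of cumulative time fit in an int, per time unit
seconds_until_overflow = {
    "ns": INT32_MAX / 1e9,  # ~2.15 seconds
    "us": INT32_MAX / 1e6,  # ~35.8 minutes
    "ms": INT32_MAX / 1e3,  # ~24.8 days
}
```

So nanoseconds overflow after about two seconds of CPU time, while microseconds already buy you half an hour and milliseconds several weeks.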



On 13-02-11 06:37 PM, Alexandre Montplaisir wrote:
> Hi François,
>
> This would indeed be a nice addition. It would require storing more data
> in the state system compared to what we have now (and I'm not sure if it
> would be "partializable"; probably, but we would have to check for corner
> cases).
>
>
> We did an experiment about this a couple of weeks back. It's a bit tricky
> to store this information in the state system, but after a couple of
> iterations we found that the easiest way is probably to:
>
> - When a process arrives on the CPU, insert a state change whose value
> represents the cumulative amount of CPU time for that process up to
> *that point in time*.
> - When the process leaves, insert a null value to indicate it's not on
> the CPU anymore. The value of the interval that will be created will
> still be the cumulative CPU time at the *beginning* of the interval.
> - When the process comes back, you can get its latest cumulative CPU
> time by querying "two intervals back", and adding the length of that
> interval to update the cumulative time. You insert a change with that
> value, etc.
> Rinse and repeat for every process (or resource) you want to track.
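The storage steps above can be sketched in Python for brevity (TMF is Java; the event format here, a list of (timestamp, 'in'|'out') pairs for one process, is hypothetical and only illustrates the scheme):

```python
def build_state_changes(sched_events):
    """Sketch of the storage scheme: on schedule-in, insert the cumulative
    CPU time so far; on schedule-out, insert a null value. The cumulative
    value is recovered by looking 'two intervals back', as described,
    rather than keeping a running total in RAM."""
    changes = []  # list of (time, value-or-None) state changes
    for t, kind in sched_events:
        if kind == "in":
            if len(changes) >= 2:
                # "two intervals back": the last real interval's value,
                # plus the length of that interval
                prev_start, prev_value = changes[-2]
                prev_end = changes[-1][0]
                cumulative = prev_value + (prev_end - prev_start)
            else:
                cumulative = 0  # first time this process gets the CPU
            changes.append((t, cumulative))
        else:  # "out": null value marks the process as off the CPU
            changes.append((t, None))
    return changes

# Process runs [0,10), is off [10,25), runs [25,30)
# → [(0, 0), (10, None), (25, 10), (30, None)]
```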
>
> When doing queries:
> - If you fall on a "real" interval, you know you are in a time where the
> process was active on the CPU. You will need to do the interpolation
> between the value (which represents the cumulative amount at the start
> of the interval), the length of the interval, and the current position
> in the interval itself. Note that this does not require any extra query
> to the state system, because a query now returns the whole interval.
> - If you fall on a null value, this means the process was not on the
> CPU. To get its cumulative CPU time, get the previous interval (query at
> "interval.startTime - 1"), get the value of that interval + its length.
> This will give you the cumulative time at the end of that interval,
> which stays constant for the whole null interval too.
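The two query cases above can likewise be sketched in Python (intervals are given as (start, end, value) triples with half-open [start, end) ranges, a simplifying assumption for illustration):

```python
def cumulative_cpu_time(intervals, t):
    """Sketch of the query logic: interpolate inside a real interval,
    or fall back to the previous interval's value + length when the
    process is off the CPU (null value)."""
    for start, end, value in intervals:
        if start <= t < end:
            if value is not None:
                # On-CPU: value is the cumulative time at `start`, so
                # add the elapsed portion of this interval.
                return value + (t - start)
            # Off-CPU: find the previous interval (the one ending where
            # this one starts) and add its full length to its value.
            p_start, p_end, p_value = next(
                iv for iv in intervals if iv[1] == start)
            return p_value + (p_end - p_start)
    raise ValueError("query time outside the stored range")

# Same example as before: runs [0,10), off [10,25), runs [25,30)
history = [(0, 10, 0), (10, 25, None), (25, 30, 10)]
# query at t=5  → 0 + 5  = 5   (interpolation in a real interval)
# query at t=12 → 0 + 10 = 10  (constant during the null interval)
# query at t=27 → 10 + 2 = 12
```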
>
> Hoping it makes sense ;)
> Other approaches would require storing the current total per
> process/per CPU in RAM, or doing back-assignment of states. But the
> above method doesn't need any of that.
>
> We only brainstormed about this; no implementation has been done so far. If
> you would like to experiment on adding this to the state system and the
> statistics view, we'd be happy to review it ;)
>
>
> Cheers,
>
> --
> Alexandre Montplaisir
> Eclipse Linux Tools committer
>
>
>
>
>
> On 13-02-11 12:06 PM, François Rajotte wrote:
>> Hello all,
>>
>> I have some questions about the current state of the statistics viewer
>> inside TMF.
>>
>> Recent developments have integrated the state system to the statistics
>> view in order to speed up the queries.
>> At the moment, the information stored in this state system consists
>> solely of raw event counts.
>>
>> It seems to me that more advanced statistics could be displayed using
>> the same view and basic framework.
>> For kernel traces, I feel like CPU usage per process would be a very
>> interesting statistic to show.
>>
>> The structure could look like this:
>>
>> Process #1 : 25% total
>>     -> CPU0 : 20%
>>     -> CPU1 : 5%
>>
>> Process #2 : 10% total
>>     -> CPU0 : 10%
>>     -> CPU1 : 0%
>>
>> We could also want the complementary information:
>>
>> CPU0 : 30% total
>>     -> Process #1 : 20%
>>     -> Process #2 : 10%
>>
>> CPU1 : 5% total
>>     -> Process #1 : 5%
>>     -> Process #2 : 0%
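Given cumulative CPU times like those the state system could provide, the rollup behind both trees is a simple percentage over the queried time range. A sketch with hypothetical names, using the numbers from the trees above (time span of 100 units):

```python
def usage_percentages(cpu_time, total_span):
    """Hypothetical rollup for the proposed view: cpu_time maps
    (process, cpu) -> time spent on that CPU within the queried range.
    Returns process -> (total %, per-CPU % breakdown)."""
    per_process = {}
    for (proc, cpu), t in cpu_time.items():
        per_process.setdefault(proc, {})[cpu] = 100.0 * t / total_span
    return {p: (sum(d.values()), d) for p, d in per_process.items()}

# Matches the example trees: Process #1 = 25% total (20% + 5%),
# Process #2 = 10% total
stats = usage_percentages(
    {("P1", "CPU0"): 20, ("P1", "CPU1"): 5, ("P2", "CPU0"): 10}, 100)
```

The complementary CPU-first tree is the same computation grouped by CPU instead of by process.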
>>
>> I would like to have the developers' opinion on this. Are similar
>> features planned, or have they been discussed in the past? What major
>> hurdles do you see?
>>
>> As a user, I feel like this would greatly benefit the control flow view.
>>
>> Thanks,
>> François
>>
>>


