Bug 424515 - Need a way to flush or limit amount of data cached by various agent proxies
Summary: Need a way to flush or limit amount of data cached by various agent proxies
Status: RESOLVED FIXED
Alias: None
Product: TCF
Classification: Tools
Component: Agent
Version: unspecified
Hardware: PC Windows 7
Importance: P3 normal
Target Milestone: 1.2
Assignee: Didier Brachet CLA
QA Contact: Eugene Tarassov CLA
URL:
Whiteboard:
Keywords:
Depends on:
Blocks:
 
Reported: 2013-12-20 05:18 EST by Didier Brachet CLA
Modified: 2014-02-25 05:50 EST

See Also:


Attachments
First draft of a proposed patch (3.46 KB, application/octet-stream)
2014-01-06 10:35 EST, Didier Brachet CLA
2nd version of the patch (4.56 KB, application/octet-stream)
2014-01-07 08:50 EST, Didier Brachet CLA

Description Didier Brachet CLA 2013-12-20 05:18:30 EST
Build Identifier: 

When you use a TCF agent that relies on a value-add to provide symbol or line number information, it uses a proxy to access and cache that information (in symbols_proxy.c or linenumbers_proxy.c).

Currently, the agent caches all of this information without any limit and only partially flushes the cache (only what is necessary) on certain events (run-control or memory map changes). This provides the best performance, but it can end up consuming a lot of memory, especially if the memory map is never updated.

When using the TCF agent on memory-constrained targets (e.g. VxWorks), you would like to be able to either limit the amount of cached information or trigger a flush of the various caches to free memory. One solution would be to trigger memory_map_event_mapping_changed() for the various symbol contexts when memory usage is too high, but this has side effects (the UI refreshes and asks again for symbols and call stacks).

I can think of a few different ways to do this:

1 - Update the various proxies to limit the number of entries they can store in their cache. When the maximum number of entries is reached, a new entry would replace an old one. The limit would be configurable through a macro set at build time. I suspect this could break the ACPM model if the number of entries in the cache is not big enough.

2 - Similar to 1, but also add a timestamp to each cache entry and only flush entries that are old enough. If the maximum number of cache entries has been reached but the oldest entry is not old enough (2 seconds?), a new entry would still be created.

3 - Provide an external API in each proxy to flush its cache. It would be up to the user to call those APIs and make sure they are not called too often.

4 - Similar to 3 but more generic: provide a new "cache_flush" listener; all libraries that can make use of it would register on this listener and flush their cache when the flush_cache event is triggered. It would be up to the user to trigger the event and make sure it is not triggered too often.

I would prefer 4 to 3, and I suspect that 2 is safer than 1. Overall, I think 4 is the best solution (more flexible and simpler). A rough sketch of what the option 4 listener registry could look like is below.
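To illustrate option 4, here is a minimal, hypothetical sketch of a generic cache-flush listener registry. None of the names below (CacheFlushListener, add_cache_flush_listener, trigger_cache_flush) exist in the current TCF code base; they only show the shape such an API could take, and a real implementation would presumably live in the framework next to the other listener mechanisms.

/*
 * Hypothetical sketch of option 4: a generic "cache flush" listener registry.
 * The names are illustrative only and are not part of the existing TCF agent API.
 */
#include <assert.h>

typedef void (*CacheFlushListener)(void * args);

#define MAX_FLUSH_LISTENERS 16

static struct {
    CacheFlushListener func;
    void * args;
} flush_listeners[MAX_FLUSH_LISTENERS];

static unsigned flush_listener_cnt = 0;

/* Each proxy (symbols_proxy.c, linenumbers_proxy.c, ...) registers once at start-up */
void add_cache_flush_listener(CacheFlushListener func, void * args) {
    assert(flush_listener_cnt < MAX_FLUSH_LISTENERS);
    flush_listeners[flush_listener_cnt].func = func;
    flush_listeners[flush_listener_cnt].args = args;
    flush_listener_cnt++;
}

/* Called by the user, e.g. on a low-memory notification, to flush all registered caches */
void trigger_cache_flush(void) {
    unsigned i;
    for (i = 0; i < flush_listener_cnt; i++) {
        flush_listeners[i].func(flush_listeners[i].args);
    }
}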

Thoughts?

Reproducible: Always
Comment 1 Eugene Tarassov CLA 2013-12-20 13:56:35 EST
My favorite strategy is the one used in the debugger GUI: drain the cache at a constant speed, for example by disposing one item per second. Advantages are:

1. It does not break ACPM.

2. Cache size grows when it is used heavily, providing maximum performance.

3. When not used for a while, all items are disposed and all memory freed.

4. No need for timestamps. Timestamps are expensive - it takes a lot of CPU cycles to get the current time.

5. It is easy to implement.

Drain speed can be variable: it can increase when the cache size grows beyond a certain limit. That would limit memory consumption.

It is easy to dispose of the least used items by maintaining a sorted list of items: when an item is used, it is moved to the head of the list, so the last item in the list is always the least recently used one.
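To make the idea concrete, here is a small self-contained sketch of such a drain mechanism, using only the standard C library. The structure and function names are hypothetical; a real proxy would use its own cache entry types, remove the entry from its lookup table before freeing it, and schedule the drain tick through the agent's event loop rather than an explicit timer.

/*
 * Illustrative sketch of the constant-rate drain strategy.
 * Types and names are stand-ins, not the actual proxy structures.
 */
#include <stdlib.h>

typedef struct CacheItem CacheItem;

struct CacheItem {
    CacheItem * prev;   /* towards list head (most recently used) */
    CacheItem * next;   /* towards list tail (least recently used) */
    void * data;        /* cached symbol or line-number info */
};

static CacheItem * lru_head = NULL;   /* most recently used */
static CacheItem * lru_tail = NULL;   /* least recently used */

static void lru_unlink(CacheItem * item) {
    if (item->prev != NULL) item->prev->next = item->next;
    else lru_head = item->next;
    if (item->next != NULL) item->next->prev = item->prev;
    else lru_tail = item->prev;
    item->prev = item->next = NULL;
}

/* Insert a newly created cache entry at the head of the LRU list */
void cache_item_add(CacheItem * item) {
    item->prev = NULL;
    item->next = lru_head;
    if (lru_head != NULL) lru_head->prev = item;
    lru_head = item;
    if (lru_tail == NULL) lru_tail = item;
}

/* Call on every cache hit for an item already in the list:
 * move it back to the head so the tail stays the least recently used entry */
void cache_item_used(CacheItem * item) {
    if (item == lru_head) return;
    lru_unlink(item);
    cache_item_add(item);
}

/* Periodic "drain tick" (e.g. posted once per second from the event loop):
 * dispose the least recently used item, if any. No timestamps needed. */
void cache_drain_tick(void) {
    CacheItem * item = lru_tail;
    if (item == NULL) return;
    lru_unlink(item);
    free(item->data);
    free(item);
}

Calling cache_drain_tick() once per second gives exactly the constant drain rate described above; making the interval depend on the list length would give the variable drain speed.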
Comment 2 Didier Brachet CLA 2014-01-06 10:35:18 EST
Created attachment 238693 [details]
First draft of a proposed patch
Comment 3 Didier Brachet CLA 2014-01-06 10:35:55 EST
Hello Eugene,

I have attached a first version of a possible solution based on your proposal. It is currently very basic, but I wanted to make sure I clearly understood your proposal. I don't think I have broken ACPM support in this example, but I am not 100% sure of this.

I was wondering if I should not add a trigger to start flushing only when there are at least X entries in the cache; not sure it would really help.

Thanks,
Didier
Comment 4 Eugene Tarassov CLA 2014-01-06 14:13:26 EST
(In reply to Didier Brachet from comment #3)
> Hello Eugene,

> I have added in attachment a first version of a possible
> solution based on your proposal. This is currently very basic but I wanted
> to make sure I clearly understood your proposal. I don't think I have broken
> ACPM support in this example but I am not 100% sure of this.

It looks good to me.

> I was
> wondering if I should not add a trigger to start flushing only when there
> are at least X entries in the cache; not sure it would really help.

I don't think it will make any difference. Instead, I suggest flushing a symbol only if "->pending == NULL"; flushing pending symbols might cause an infinite loop. Also, it would be good to keep a symbol if "->cache.wait_list_cnt > 0".
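For reference, the two guards could look roughly like this in the drain loop. The structures below are minimal stand-ins for the real ones in symbols_proxy.c and the cache framework, so the field layout is assumed rather than authoritative.

#include <stddef.h>

/* Minimal stand-ins for the real structures (field names assumed) */
typedef struct {
    unsigned wait_list_cnt;    /* number of clients parked on this cache entry */
} AbstractCacheStub;

typedef struct {
    AbstractCacheStub cache;
    void * pending;            /* non-NULL while a reply from the value-add is outstanding */
} SymInfoCacheStub;

/* Return non-zero only if it is safe to dispose this entry during a drain tick */
static int can_flush_symbol(SymInfoCacheStub * s) {
    if (s->pending != NULL) return 0;          /* request in flight: flushing could cause an infinite loop */
    if (s->cache.wait_list_cnt > 0) return 0;  /* clients are still waiting on this entry */
    return 1;
}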
Comment 5 Didier Brachet CLA 2014-01-07 03:46:11 EST
> > I was
> > wondering if I should not add a trigger to start flushing only when there
> > are at least X entries in the cache; not sure it would really help.
> 
> I don't think it will make any difference. Instead, I suggest flushing a
> symbol only if "->pending == NULL"; flushing pending symbols might cause
> an infinite loop.

I wanted to do this first, but it seems this is already done by the various free_xxx APIs.
> Also, it would be good to keep a symbol if
> "->cache.wait_list_cnt > 0".
OK
Comment 6 Didier Brachet CLA 2014-01-07 08:50:08 EST
Created attachment 238721 [details]
2nd version of the patch
Comment 7 Didier Brachet CLA 2014-01-07 08:50:47 EST
I have added a 2nd version of the patch
Comment 8 Eugene Tarassov CLA 2014-01-07 16:38:23 EST
(In reply to Didier Brachet from comment #7)
> I have added a 2nd version of the patch

It looks very good. Tests don't show any errors. I think it is OK to commit.
Comment 9 Didier Brachet CLA 2014-01-08 14:26:41 EST
Fix committed.