Thanks a lot! I looked at the DHAT tool of Valgrind. My understanding is that it can profile how heap blocks are used, which is good.
However, the question I am exploring requires determining whether the page access pattern of a program changes with system load. For example, consider a multiprogrammed system. If the system load is high (i.e., many programs are co-executing), then a program's last-level cache misses increase, since the last-level cache is typically shared. This should be reflected as a greater number of accesses to pages in memory.
I want to know whether such information can be obtained through Valgrind tools.
Yes, I want to estimate the number of accesses to a page. I am planning to perform this study on Linux.
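One way to get per-page access counts with Valgrind is the Lackey tool: `valgrind --tool=lackey --trace-mem=yes --log-file=trace.txt ./prog` emits one trace line per instruction fetch, load, store, or modify, with the address in hex. A small script can then bucket those addresses by page. This is only a sketch, assuming 4 KiB pages; note that tracing every access is very slow, and Lackey reports virtual addresses, not physical ones.

```python
import re
from collections import Counter

PAGE_SHIFT = 12  # assuming 4 KiB pages

# Matches lackey --trace-mem=yes lines, e.g. "I  0023c790,2" or " L 04effff8,8".
# "M" (modify) lines are counted as a single access here, though they are
# really a load plus a store.
LINE_RE = re.compile(r'^\s*([ILSM])\s+([0-9A-Fa-f]+),(\d+)')

def page_access_counts(trace_path):
    """Aggregate a Lackey memory trace into per-page access counts."""
    counts = Counter()
    with open(trace_path) as f:
        for line in f:
            m = LINE_RE.match(line)
            if m:
                addr = int(m.group(2), 16)
                counts[addr >> PAGE_SHIFT] += 1
    return counts
```

Calling `page_access_counts("trace.txt").most_common(20)` would then list the twenty most heavily accessed pages of the run.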
I started looking at the kernel's memory management policy for this purpose. I think the answer is buried in there.
I reformulated the question after some study.
I gathered that Linux implements an LRU-like page replacement policy. For this purpose it maintains two lists, a hot (active) pages list and a cold (inactive) pages list. Pages are evicted from the cold list when required; the hot list is safer, and pages are not evicted from it directly. What I do not know is: who tags pages as hot, and when?
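For what it is worth, the kernel's own names for these are the active and inactive LRU lists, and their current sizes are exported through /proc/meminfo. A quick sketch for reading them (the field names are as on recent kernels; the path is a parameter only so the parser can be exercised against a sample file):

```python
def lru_list_sizes(meminfo_path="/proc/meminfo"):
    """Return the kernel's active/inactive LRU list sizes from meminfo, in kB."""
    wanted = {"Active", "Inactive", "Active(anon)", "Inactive(anon)",
              "Active(file)", "Inactive(file)"}
    sizes = {}
    with open(meminfo_path) as f:
        for line in f:
            key, _, rest = line.partition(":")
            if key in wanted:
                sizes[key] = int(rest.split()[0])  # values are reported in kB
    return sizes
```

Sampling `lru_list_sizes()` before and after a memory-heavy workload shows pages migrating between the two lists.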
Common sense suggests that the CPU should tag a page as referenced. However, when does the CPU tag a page as referenced: on any access to that page, or only on a cache miss to that page?
One might reason that the latter approach would be better, since pages that rarely miss are presumably well cached and hence do not need to be marked hot.
However, I cannot find documentation on what happens in practice.
I got the answer to this question. It turns out that the processor (Intel x86 in my case) sets the Accessed bit of a page-table entry whenever an address belonging to that page is translated. Also, it seems that LLC misses play no part in this.
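For anyone who wants to observe this from user space on Linux: writing "1" to /proc/PID/clear_refs clears the referenced bits of the process's pages, and the "Referenced:" fields in /proc/PID/smaps then show how many kB have been touched since. A rough sketch, assuming a kernel built with CONFIG_PROC_PAGE_MONITOR (clearing the bits of another process needs appropriate permissions):

```python
def clear_referenced(pid="self"):
    """Clear the referenced (Accessed) bits for all of the process's pages."""
    with open(f"/proc/{pid}/clear_refs", "w") as f:
        f.write("1")

def referenced_kb(smaps_path="/proc/self/smaps"):
    """Sum the 'Referenced:' fields across all mappings in smaps (kB)."""
    total = 0
    with open(smaps_path) as f:
        for line in f:
            if line.startswith("Referenced:"):
                total += int(line.split()[1])  # reported in kB
    return total
```

Calling `clear_referenced()`, touching some memory, and then reading `referenced_kb()` gives a crude measure of the working set referenced in that interval.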
A related discussion is available at http://stackoverflow.com/questions/11448907/kernel-function-set-the-pg-referenced-bit-of-a-heap-page.