I'm running experiments with the SPEC2006 benchmarks on the cycle-accurate simulator MARSSx86, where everything is held constant except the core frequency, which I lower from 3 GHz to 2.5 GHz and then to 2 GHz. I noticed that at the lower frequencies, more L1 and L2 accesses are generated for some benchmarks (e.g., xalancbmk, bwaves, and gromacs), resulting in higher miss rates, even though these counts should be essentially independent of frequency.
I also noticed that at the lower frequencies there is a much higher number of re-dispatched instructions due to mis-speculation, and more load instructions filling the pipeline. The memory frequency is held constant across all of these runs.
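For context, the one quantity in my setup that does change with core frequency is the memory latency expressed in core cycles, since the absolute (nanosecond) latency is fixed. A quick back-of-envelope sketch, using a hypothetical 60 ns round-trip latency (not an actual MARSSx86 parameter):

```python
# With memory frequency fixed, absolute DRAM latency stays constant in ns,
# so a miss costs fewer *core* cycles as the core clock drops.
# The 60 ns figure is a made-up placeholder for illustration.

MEM_LATENCY_NS = 60.0  # assumed constant DRAM round-trip latency

for core_ghz in (3.0, 2.5, 2.0):
    cycles = MEM_LATENCY_NS * core_ghz  # ns * (cycles per ns)
    print(f"{core_ghz} GHz core: a memory access costs ~{cycles:.0f} core cycles")
```

So the relative cost of a miss, and presumably the window in which the core speculates past it, shifts with the core clock even though nothing architectural changed.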
It is always possible that bugs exist, but I wanted to ask whether there is a well-defined relation that connects all of these observations.
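To make concrete the kind of relation I mean: if mis-speculated (wrong-path) loads still probe the caches before being squashed, they would inflate both the access and the miss counters. A minimal sketch with entirely made-up numbers, assuming wrong-path loads miss at a higher rate than right-path ones:

```python
# Hypothetical illustration: extra wrong-path cache accesses shift the
# measured miss rate. All counts and rates below are invented.

right_path_accesses = 1_000_000
right_path_misses   = 50_000          # 5% "true" right-path miss rate
WRONG_PATH_MISS_RATE = 0.20           # assumed: wrong-path loads miss more often

for wrong_path_accesses in (0, 100_000, 300_000):
    wrong_path_misses = int(wrong_path_accesses * WRONG_PATH_MISS_RATE)
    accesses = right_path_accesses + wrong_path_accesses
    misses   = right_path_misses + wrong_path_misses
    print(f"wrong-path accesses: {wrong_path_accesses:7,d} "
          f"-> measured miss rate {misses / accesses:.2%}")
```

Whether this mechanism actually explains my numbers is exactly what I'm hoping someone can confirm or refute.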
Thank you.