There is an excellent paper in the October issue of Annals of Epidemiology by Swaminathan et al. that shows why measures of association cannot be used to quantify predictive power.
I don't know that article, or epidemiology, but I do think that measures of association fall short: they cannot capture everything that matters, and predictive power needs to be assessed in a broader sense as well.
A graphical residual analysis can tell you a great deal about model fit, and cross-validation can help avoid overfitting to a particular sample. The expected prediction error, and the estimated variance of the prediction error used to place prediction intervals around a regression, can be helpful, but a graphical residual analysis may easily tell you a great deal at a glance. It is also helpful for recognizing heteroscedasticity. To quantify that, you can use the coefficient of heteroscedasticity.
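As a rough illustration of estimating prediction error by cross-validation (using made-up data and a simple straight-line fit, not anything from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: linear trend plus noise with standard deviation 1.0
n = 100
x = rng.uniform(0, 10, n)
y = 1.0 + 2.0 * x + rng.normal(0, 1.0, n)

# 5-fold cross-validated root-mean-squared prediction error for a straight-line fit.
# Each fold is held out in turn; the model is fit on the rest and used to predict it.
k = 5
idx = rng.permutation(n)
folds = np.array_split(idx, k)
sq_errs = []
for i in range(k):
    test = folds[i]
    train = np.concatenate([folds[j] for j in range(k) if j != i])
    X_tr = np.column_stack([np.ones(train.size), x[train]])
    beta, *_ = np.linalg.lstsq(X_tr, y[train], rcond=None)
    pred = beta[0] + beta[1] * x[test]
    sq_errs.append((y[test] - pred) ** 2)
cv_rmse = np.sqrt(np.concatenate(sq_errs).mean())
print(f"cross-validated RMSE: {cv_rmse:.2f}")
```

Since the simulated noise has standard deviation 1.0, the cross-validated RMSE should land near 1; a value much larger than the in-sample residual spread would suggest overfitting.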
[I see that Excel is noted as a topic here. Assuming that is not a mistake, you might note that the following is an Excel tool for helping to estimate the coefficient of heteroscedasticity. It makes use of a graphical residual analysis and an extension of it: https://www.researchgate.net/publication/333659087_Tool_for_estimating_coefficient_of_heteroscedasticityxlsx. Examples using this tool are found in https://www.researchgate.net/publication/333642828_Estimating_the_Coefficient_of_Heteroscedasticity. (See https://www.researchgate.net/project/OLS-Regression-Should-Not-Be-a-Default-for-WLS-Regression, and updates.)]
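For readers working outside Excel, here is one rough numerical sketch of the same idea: if the residual spread grows roughly like the predicted value raised to some power gamma, one crude way to estimate gamma is the slope of log absolute residuals against log predictions. This is a simplified, hypothetical analogue of the linked tool, not a reproduction of it, and the data are simulated:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated heteroscedastic data: error standard deviation grows as x**0.7,
# so the true coefficient of heteroscedasticity here is about 0.7 (assumed).
n = 200
x = rng.uniform(1, 10, n)
y = 2.0 + 3.0 * x + rng.normal(0, 0.5 * x**0.7, n)

# Ordinary least squares fit, then residuals (what a residual plot would show)
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
pred = X @ beta
resid = y - pred

# Crude estimate of gamma: slope of log|residual| on log(prediction).
# np.polyfit returns coefficients highest degree first, so slope comes first.
mask = np.abs(resid) > 1e-12
gamma_hat, _ = np.polyfit(np.log(pred[mask]), np.log(np.abs(resid[mask])), 1)
print(f"estimated coefficient of heteroscedasticity: {gamma_hat:.2f}")
```

The estimate could then feed weights proportional to 1/prediction**(2*gamma_hat) into a weighted least squares refit; the estimate is noisy, so a graphical check of the residuals remains worthwhile.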
Regarding prediction accuracy, you - or perhaps some of your readers - might be interested in this: https://online.stat.psu.edu/stat857/node/160/.
Perhaps you might like to say something about the contents of the paper you mentioned.