I'm evaluating two different prediction models on continuous data. Model 1 (shown at the top) predicts values in about the same range as the true data. Model 2 (shown at the bottom) always predicts values very close to the mean of the true data. In both models, the correlation between the predicted values and the true data is about the same because correlation is scale-invariant.
I think most researchers would agree that Model 1 is "better" than Model 2. The covariance reflects this, whereas the correlation does not. However, I would like to report a metric that is bounded between -1 and 1, and covariance is not.
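To make the setup concrete, here is a minimal sketch with simulated data (the distributions and the shrink factor are made up for illustration). Model 2's predictions are constructed as a positive affine rescaling of Model 1's toward the mean, so their Pearson correlation with the true values is exactly the same while their covariance shrinks:

```python
import numpy as np

rng = np.random.default_rng(0)
true = rng.normal(50, 10, 1000)

# Model 1: predictions on roughly the same scale as the true data
pred1 = true + rng.normal(0, 5, 1000)

# Model 2: the same predictions shrunk toward the mean of the true data
# (a positive affine transform, so correlation is unchanged)
pred2 = 0.1 * (pred1 - pred1.mean()) + true.mean()

r1 = np.corrcoef(pred1, true)[0, 1]
r2 = np.corrcoef(pred2, true)[0, 1]
c1 = np.cov(pred1, true)[0, 1]
c2 = np.cov(pred2, true)[0, 1]

print(r1, r2)  # identical: correlation is invariant to positive affine rescaling
print(c1, c2)  # covariance of Model 2 is 10x smaller
```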
Here is what I'm proposing to do:
Cov(Pred, True) / Max(Sd(Pred)^2, Sd(True)^2)
As a reminder, standard Pearson correlation is:
Cov(Pred, True) / (Sd(Pred) * Sd(True))
I know that there is also the regression coefficient (slope), which is directional and takes one of the following forms:
Cov(Pred, True) / Sd(Pred)^2 OR Cov(Pred, True) / Sd(True)^2
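For reference, here is a sketch of the proposed metric alongside Pearson correlation, on the same simulated setup as above (data and function name are my own, for illustration). Note that since |Cov| ≤ Sd(Pred)·Sd(True) ≤ Max(Sd(Pred)^2, Sd(True)^2), the proposed metric is indeed bounded in [-1, 1], and it penalizes the mean-hugging model where Pearson correlation does not:

```python
import numpy as np

def proposed_metric(pred, true):
    """Cov(pred, true) / max(Var(pred), Var(true)).

    Bounded in [-1, 1] because |Cov| <= Sd(pred) * Sd(true)
    <= max(Var(pred), Var(true)).
    """
    cov = np.cov(pred, true)[0, 1]
    # ddof=1 to match np.cov's default (sample) normalization
    return cov / max(np.var(pred, ddof=1), np.var(true, ddof=1))

rng = np.random.default_rng(0)
true = rng.normal(50, 10, 1000)
pred1 = true + rng.normal(0, 5, 1000)                  # Model 1: realistic spread
pred2 = 0.1 * (pred1 - pred1.mean()) + true.mean()     # Model 2: hugs the mean

m1 = proposed_metric(pred1, true)
m2 = proposed_metric(pred2, true)

# Pearson correlation is the same for both models,
# but the proposed metric is much larger for Model 1.
print(m1, m2)
```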
My question is: is my proposed metric something that people actually use in research, and if so, does it have a name?