There is a sort of fetish in staring at R-squared and reporting it. I would make one caution and one comment:
The caution is that no measurement is error-free. The ability of your predictor variables to predict your outcome variable is limited by their measurement error. So if age has a better R-squared than depression, don't conclude that it's a better predictor. It could simply be that we can measure age with little error while we cannot measure depression without having a lot of error.
The comment is this: when there is a big difference between R-squared and adjusted R-squared, your model contains many predictors relative to the amount of data you have. Applied to new data, such a model will predict less well, and the problem grows as the number of predictors rises. The ratio of adjusted R-squared to R-squared gives a rough indication of the drop in fit to expect when the model is applied to new data.
Adjusted R-squared penalizes the model for including parameters that contribute little to explaining the variance. Without this penalty, R-squared would increase monotonically as you add any independent variable, however irrelevant.
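To see the monotonic-increase point concretely, here is a minimal sketch in Python (NumPy only; the data, seed, and helper name `ols_r2` are made up for illustration). Appending a pure-noise predictor can never lower R-squared, because the larger model nests the smaller one:

```python
import numpy as np

def ols_r2(y, X):
    """R^2 from an ordinary least-squares fit with an intercept column."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    ss_res = np.sum((y - X1 @ beta) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1 - ss_res / ss_tot

rng = np.random.default_rng(42)
n = 30
x = rng.normal(size=n)
y = 2 * x + rng.normal(size=n)          # only x carries real signal

X = x.reshape(-1, 1)
r2 = ols_r2(y, X)

# Append a pure-noise column: R^2 can only stay equal or rise.
X_noise = np.column_stack([X, rng.normal(size=n)])
r2_noise = ols_r2(y, X_noise)
print(r2, r2_noise)
```

The irrelevant column still "explains" a little in-sample variance by chance, which is exactly the behaviour the adjustment is meant to counteract.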
The predicted R-squared is the proportion of variance in the *testing part of the data* explained by the model.
The adjusted R2 is related to R2 as follows (Dillon and Goldstein, Multivariate Analysis, 1984, p. 222):
adjR2 = 1 - (1 - R2) * (n - 1) / (n - p)
where n is the number of measurements and p the number of parameters (variables). We see that the more variables are taken into the model, the better R2 gets, but adjR2 usually becomes smaller, because each added variable improves the model less than would be expected by chance (see also the answer from Michal Illovky). Focus on adjR2. F tests on the various models (with different numbers of variables) are also provided in Dillon and Goldstein.
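The formula is easy to apply directly. A short Python sketch (the function name is mine; note that p here counts all fitted parameters, intercept included, matching the n - p denominator above):

```python
def adjusted_r2(r2, n, p):
    """Adjusted R^2 per the formula above: p counts all fitted parameters
    (intercept included). The common textbook form that counts only the
    predictors uses n - p - 1 in the denominator -- the same quantity."""
    return 1 - (1 - r2) * (n - 1) / (n - p)

# The same sample R^2 = 0.80 shrinks more as the parameter count grows:
print(adjusted_r2(0.80, 50, 3))    # ~ 0.791
print(adjusted_r2(0.80, 50, 10))   # ~ 0.755
```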
R2 and adjusted R2 describe how well your dependent variable is explained by your independent variables in the sample at hand. They say nothing about the predictive quality of your model. Predicted R2 measures that predictive quality, i.e. the ability of the model to predict new data.
It is important to adjust R2 not only to account for the number of independent variables in the model, but also to correct a statistical bias: for inference to the population, you need to estimate rho2, the true (population) coefficient of determination. R2 is a sample statistic that is systematically higher than rho2.
Several methods exist to adjust R2. See this paper for a review:
Yin, P., & Fan, X. (2001). Estimating R2 shrinkage in multiple regression: A comparison of different analytical methods. The Journal of Experimental Education, 69(2), 203-224.
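The upward bias is easy to demonstrate by simulation. In this sketch (Python/NumPy; the sample size, predictor count, and seed are arbitrary), the true rho2 is exactly zero, yet the average sample R2 lands near p/(n - 1), far from zero:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, trials = 30, 5, 2000
r2s = []
for _ in range(trials):
    X = rng.normal(size=(n, p))
    y = rng.normal(size=n)              # independent of X: true rho^2 = 0
    X1 = np.column_stack([np.ones(n), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    ss_res = np.sum((y - X1 @ beta) ** 2)
    r2s.append(1 - ss_res / np.sum((y - y.mean()) ** 2))

mean_r2 = np.mean(r2s)
print(mean_r2)    # close to p / (n - 1) = 5/29, not 0
```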
As underlined by Ronan, measurement error and within-subject variability tend to lower the correlation between predictors and the outcome variable: this is known as regression dilution. It is possible to correct for this bias if the within-subject variability is known, for instance from a test-retest study. See the following reference:
Knuiman, M. W., Divitini, M. L., Buzas, J. S., & Fitzgerald, P. E. (1998). Adjustment for regression dilution in epidemiological regression analyses. Annals of epidemiology, 8(1), 56-63.
Predicted R-squared indicates how well your model can predict a data point that was removed from the data set before fitting: each point is left out in turn, the model is refitted on the remaining points, and the left-out point is predicted (the leave-one-out, or PRESS, approach).
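A sketch of that leave-one-out computation in Python (NumPy only; the function name is mine). Rather than refitting n times, it uses the standard hat-matrix shortcut e_i / (1 - h_ii) for the leave-one-out residuals, giving the PRESS statistic and from it the predicted R-squared:

```python
import numpy as np

def predicted_r2(y, X):
    """Predicted R^2 via the PRESS statistic.
    The leave-one-out residual for point i equals e_i / (1 - h_ii),
    where h_ii is the i-th leverage (diagonal of the hat matrix)."""
    n = len(y)
    X1 = np.column_stack([np.ones(n), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    # Leverages: h_ii = x_i (X'X)^-1 x_i'
    h = np.einsum('ij,ij->i', X1, X1 @ np.linalg.pinv(X1.T @ X1))
    press = np.sum((resid / (1 - h)) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1 - press / ss_tot

rng = np.random.default_rng(0)
n = 40
x = rng.normal(size=n)
y = 1.5 * x + rng.normal(size=n)        # made-up data for illustration
pred_r2 = predicted_r2(y, x.reshape(-1, 1))
print(pred_r2)
```

Because PRESS is never smaller than the ordinary residual sum of squares, predicted R-squared is always at most R-squared, and it can even go negative for a badly overfitted model.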