Simply put, R is the correlation between the predicted and observed values of Y. R square is the square of this coefficient and indicates the percentage of variation explained by your regression line out of the total variation. This value tends to increase as you include additional predictors in the model, so one can artificially obtain a higher R square simply by adding more Xs. Adjusted R square penalizes this effect; when comparing models of different complexity, you should therefore rely on adjusted R square. Predicted R square is another measure which addresses overfitting and reflects the model's predictive power for future observations.
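To make these quantities concrete, here is a minimal sketch in Python using statsmodels (the data and variable names are made up for illustration). It computes R as the correlation between observed and predicted values, R^2, adjusted R^2, and one common form of predicted R^2 based on the PRESS (leave-one-out) statistic:

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import OLSInfluence

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))                  # three illustrative predictors
y = X @ np.array([1.5, -2.0, 0.5]) + rng.normal(scale=2.0, size=50)

model = sm.OLS(y, sm.add_constant(X)).fit()

r2 = model.rsquared                           # R^2: explained / total variation
r2_adj = model.rsquared_adj                   # adjusted R^2: penalized for extra Xs
r = np.corrcoef(y, model.fittedvalues)[0, 1]  # R: corr(observed, predicted)

# Predicted R^2 via the PRESS statistic (leave-one-out residuals)
press = np.sum((model.resid / (1 - OLSInfluence(model).hat_matrix_diag)) ** 2)
sst = np.sum((y - y.mean()) ** 2)
r2_pred = 1 - press / sst

print(f"R={r:.4f}  R^2={r2:.4f}  adj R^2={r2_adj:.4f}  pred R^2={r2_pred:.4f}")
```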
Once you have fitted a linear model using regression analysis, you will need to determine how well the model fits the data. These coefficients (R, R^2, adjusted R^2) quantify the model quality, i.e. the proportion of the outcome's variance that can be explained by the model.
If you are using multiple linear regression, you need to look at R^2 (adj); I would also look at R^2 (pred). R^2 (adj) tells you what percentage of the total variability is accounted for by your model: if R^2 (adj) is 0.6897, then your model accounts for 68.97% of the total variability.
What is the VIF for each term in your model? Initially, that is more important than an R^2 value.
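For context, the VIF (variance inflation factor) measures how much a coefficient's variance is inflated by collinearity among the predictors; values above roughly 5 to 10 are a common warning sign. A minimal sketch with statsmodels, using made-up data with a deliberately collinear column:

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 3))
X[:, 2] = X[:, 0] + rng.normal(scale=0.1, size=50)  # nearly a copy of x1
X_const = sm.add_constant(X)

# VIF for each predictor (skip column 0, the intercept)
for i in range(1, X_const.shape[1]):
    print(f"x{i}: VIF = {variance_inflation_factor(X_const, i):.2f}")
```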
In multiple regression analysis, the adjusted R squared gives an idea of how the model generalizes. Ideally, its value should be as close as possible to that of R squared, which indicates the proportion of the variance explained by the multiple regression model. If a hierarchical regression has been conducted, then the improvement of the model can be assessed at each stage of the analysis by looking at the change in R squared and testing the significance of that change, as in the sketch below.
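As a hedged sketch of that last step (illustrative data and column names, not from the original post), statsmodels' anova_lm can test whether the R squared change between nested models is significant:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(2)
df = pd.DataFrame(rng.normal(size=(100, 3)), columns=["x1", "x2", "x3"])
df["y"] = 2 * df.x1 + 0.5 * df.x2 + rng.normal(size=100)

step1 = smf.ols("y ~ x1", data=df).fit()            # stage 1 of the hierarchy
step2 = smf.ols("y ~ x1 + x2 + x3", data=df).fit()  # stage 2 adds predictors

print(f"R^2 change: {step2.rsquared - step1.rsquared:.4f}")
print(anova_lm(step1, step2))  # F-test for the improvement at stage 2
```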
Regression assumes that the errors are independent. This assumption can be considered met if the Durbin-Watson statistic is around 2 (and roughly between 1 and 3).
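A minimal sketch of that check with statsmodels (the data here are simulated for illustration):

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson

rng = np.random.default_rng(3)
x = rng.normal(size=80)
y = 1.0 + 2.0 * x + rng.normal(size=80)

model = sm.OLS(y, sm.add_constant(x)).fit()
dw = durbin_watson(model.resid)
print(f"Durbin-Watson = {dw:.2f}")  # values near 2 suggest independent errors
```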
Due to multicollinearity, the coefficients on individual variables may be insignificant even when the regression as a whole is significant. Under this condition, it is a good idea to look at the p-value for the regression as a whole. It tells you how confident you can be that the predictors jointly have some relationship with the dependent variable, which is the important thing.
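A short illustration of this situation, assuming statsmodels and simulated data in which two predictors are nearly copies of each other: the overall F-test comes out significant even though each individual coefficient may not.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
x1 = rng.normal(size=60)
x2 = x1 + rng.normal(scale=0.05, size=60)  # nearly a copy of x1
y = 3 * x1 + rng.normal(size=60)

model = sm.OLS(y, sm.add_constant(np.column_stack([x1, x2]))).fit()
print(f"overall F-test p-value: {model.f_pvalue:.2e}")  # highly significant
print(model.pvalues[1:])  # individual p-values can both look insignificant
```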
R^2 is the proportion of sample variance explained by the predictors in the model, i.e. the ratio of the explained sum of squares to the total sum of squares in the sample. R is the multiple correlation coefficient obtained by correlating the predicted values (y-hat) with the observed values (y); squaring R gives R^2. Thus R^2 reflects the quality of prediction within the sample.
Adjusted R^2 attempts to correct the R^2 statistic so that it estimates the population value. Using the sample R^2 as a population estimate leads to upward bias in small samples, and this bias increases with the number of predictors.
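For reference, the usual correction (assuming n observations and p predictors) is:

R^2 (adj) = 1 - (1 - R^2) * (n - 1) / (n - p - 1)

which shrinks R^2 more as the sample gets smaller or the number of predictors grows, counteracting that bias.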
I find R^2 more useful as a descriptive quantity (albeit not a very useful one), and the difference between R^2 and adjusted R^2 is a helpful reminder that the model is likely to overfit. I don't find adjusted R^2 that useful, because I think the population value of R^2 is generally not a useful quantity per se.
In simple regression, R square and adjusted R square are nearly the same, since with a single predictor the adjustment is small.
But in multiple regression, you have to pay attention to adjusted R square and to the difference between R square and adjusted R square. If you add one more variable to your model but the adjusted R square does not increase, or even decreases, you have to reconsider the suitability of that variable in your model, as in the sketch below.
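A minimal sketch of that comparison (made-up data; the `noise` column stands in for a candidate variable that contributes nothing):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
df = pd.DataFrame({"x1": rng.normal(size=100), "noise": rng.normal(size=100)})
df["y"] = 2 * df.x1 + rng.normal(size=100)

base = smf.ols("y ~ x1", data=df).fit()
bigger = smf.ols("y ~ x1 + noise", data=df).fit()

# R^2 can only go up, but adjusted R^2 falls if `noise` adds nothing
print(f"base:   R^2={base.rsquared:.4f}, adj={base.rsquared_adj:.4f}")
print(f"bigger: R^2={bigger.rsquared:.4f}, adj={bigger.rsquared_adj:.4f}")
```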
The overall regression model fit can be assessed in support of the research hypotheses by, firstly, examining the adjusted R squared to see the percentage of total variance of the dependent variable explained by the regression model. Whereas R squared tells us how much variation in the dependent variable is accounted for by the regression model, the adjusted value tells us how much variance in the dependent variable would be accounted for if the model had been derived from the population from which the sample was taken. Specifically, it reflects the goodness of fit of the model to the population, taking into account the sample size and the number of predictors used. Some researchers suggest that this value should be equal to or greater than 0.19.