It depends on what model you have used. For some econometric models, such as logistic regression, R-squared does not have its usual interpretation (how changes in the predictors relate to changes in the response variable).
If you mention the model's name and the nature and type of your variables, I may be able to say more.
R² is a measure of how much of the variation in the dependent variable is explained by the model's explanatory variables. Whether the estimated relations are acceptable therefore does not depend on R². For that purpose, one has to use the variances of the estimated coefficients (e.g. a t-test). If these variances are low, one can of course accept the model, but a low R² says that the model explains only a small part of the variation and that including other (missing) variables could strongly increase its explanatory power.
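For reference, a minimal statement of the definition used above, assuming an OLS fit with an intercept:

```latex
% R^2: the share of variation in y explained by the fitted model
R^2 = 1 - \frac{\sum_{i=1}^{n} (y_i - \hat{y}_i)^2}{\sum_{i=1}^{n} (y_i - \bar{y})^2}
    = 1 - \frac{SS_{\mathrm{res}}}{SS_{\mathrm{tot}}}
```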
A low R-squared is perfectly acceptable. R-squared just measures the proportion of the total variation in the dependent variable that is explained by the included explanatory variables. Other measures of model fitness and adequacy should be the focus, such as the individual significance of the coefficients and the overall fit of the model, using t and F statistics respectively. Additionally, look at multicollinearity and heteroskedasticity.
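A minimal sketch of those checks in Python with statsmodels (the data and variable names here are simulated assumptions for illustration, not anyone's actual study):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor
from statsmodels.stats.diagnostic import het_breuschpagan

rng = np.random.default_rng(0)
n = 200
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y = 1.0 + 0.5 * x1 - 0.3 * x2 + rng.normal(scale=3.0, size=n)  # noisy: low R^2

X = sm.add_constant(pd.DataFrame({"x1": x1, "x2": x2}))
fit = sm.OLS(y, X).fit()

print(fit.summary())  # t statistics per coefficient, overall F statistic
# Variance inflation factors for multicollinearity (rule of thumb: worry if > 10)
for i, col in enumerate(X.columns):
    print(col, variance_inflation_factor(X.values, i))
# Breusch-Pagan test for heteroskedasticity (small p-value suggests it is present)
print(het_breuschpagan(fit.resid, X))
```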
In cross-section studies, R-squared is always low. You would be lucky to get something like 0.40. Many factors contribute to that, such as: the quality of respondents' contributions, lack of training of data collectors, poor wording of the survey, the ordering of survey questions, among others. However, you might be able to increase it by excluding surveys in which you feel respondents faked their answers.
One of Kennedy's ten commandments of applied econometrics says: do not worship the R-squared! So it is not a big issue, especially if you test the other basic assumptions and they are satisfied. You might also check for omitted-variable bias, to be sure your model is justified regardless of the low R-squared of a least-squares regression (a quick simulated illustration of that bias follows).
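To illustrate omitted-variable bias, here is a minimal sketch with simulated data (all numbers are assumptions for the example): omitting a regressor that is correlated with an included one biases the included coefficient.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 5_000
x1 = rng.normal(size=n)
x2 = 0.8 * x1 + rng.normal(scale=0.6, size=n)   # x2 correlated with x1
y = 1.0 + 2.0 * x1 + 1.5 * x2 + rng.normal(size=n)

full = sm.OLS(y, sm.add_constant(np.column_stack([x1, x2]))).fit()
short = sm.OLS(y, sm.add_constant(x1)).fit()     # x2 omitted

print(full.params)   # close to the true values (1.0, 2.0, 1.5)
print(short.params)  # x1 coefficient biased upward, roughly 2.0 + 1.5 * 0.8
```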
I doubt that Sherin is right in principle. The main reason for higher R²'s in time-series analyses is that these, in general, work with highly aggregated data and/or data for the same statistical unit in every period. If a cross-section study is based on survey data, this does not necessarily lead to lower R²'s, and the same is true for time series, which can also be based on surveys.
Thanks, but I did not say that the R-squared is necessarily low in cross-section studies! What I said is that R-squared values are, in general, lower in cross-section studies (surveys, that is) than in time-series studies, and that this is normal!
Originally you wrote "always low", which is even stronger than "in general".
I have often used samples for one year (= cross-section) from tax accounts. For relations between some of the data, one can, of course, get extremely high R²'s (much higher than one can get for regressions with data from National Accounts), but for some other relations they were, again of course, very low.
"Always low" still applies to many cross-section studies. My field is agricultural and applied economics. I have done many cross-section surveys throughout my career, and I don't recall any R-squared that was more than 0.50! I am saying nothing that negates what you said; I am just relating something I have found in my empirical research work. Probably my empirical research work is all wrong! Apparently you know more! Sorry for causing problems for you or for some researchers.
A low R² does not mean that an equation is wrong. It only means that the explanatory variables together explain only part of the variation in the dependent variable. Whether the effect of a variable is significant can be seen from the standard deviations of the estimated coefficients (their standard errors) or the t-values. But, in general, one would not get very significant results with a low R². I think there are no general articles or books on the interpretation of econometric estimation results, because the interpretation depends heavily on the subject of the research and on the data available/used.
R-squared can be low just because the sigma of the estimated residuals is large. One might still model the mean response well in each case, yet have a high sigma. (Please see my November 23, 2021 responses to https://www.researchgate.net/post/Can_you_have_a_single_goodness_of_fit_measure_for_multiple_regression_analysis_or_a_number_of_measures_to_be_taken_to_explain_performance.) The model coefficients might all have fairly low standard errors for your sample, even with a low R-squared. (The predictors influence each other, however, so those coefficients change, for given predictors, with different combinations of predictors.) Still, since a high sigma for the estimated residuals also inflates the standard errors of the regression coefficients, this may be noticeable. A larger sample will help with that. Sigma itself, however, does not become smaller with a larger sample size.
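Here is a minimal simulation of that point (all values assumed for illustration): with a true residual sigma of 3, R-squared stays low at any sample size, while the coefficient's standard error shrinks as n grows and the coefficient remains clearly significant.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
for n in (100, 10_000):
    x = rng.normal(size=n)
    y = 2.0 + 1.0 * x + rng.normal(scale=3.0, size=n)  # true sigma = 3
    fit = sm.OLS(y, sm.add_constant(x)).fit()
    sigma_hat = np.sqrt(fit.mse_resid)  # estimate of the residual sigma
    print(f"n={n}: R2={fit.rsquared:.3f}, se(b1)={fit.bse[1]:.3f}, "
          f"t(b1)={fit.tvalues[1]:.1f}, sigma_hat={sigma_hat:.2f}")
```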
You can look at confidence intervals for means versus prediction intervals. See the following:
https://online.stat.psu.edu/stat501/lesson/3/3.2.
https://online.stat.psu.edu/stat501/lesson/3/3.3.
The confidence intervals may look OK, but with high sigma, a graphical residual analysis may not look very good.
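A sketch of that comparison with statsmodels, on simulated data like the example above: get_prediction returns both the confidence interval for the mean response and the much wider (when sigma is high) prediction interval for a new observation.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
x = rng.normal(size=200)
y = 2.0 + 1.0 * x + rng.normal(scale=3.0, size=200)
fit = sm.OLS(y, sm.add_constant(x)).fit()

new_x = sm.add_constant(np.array([-1.0, 0.0, 1.0]))  # points to predict at
frame = fit.get_prediction(new_x).summary_frame(alpha=0.05)
# mean_ci_*: 95% confidence interval for the mean response
# obs_ci_*:  95% prediction interval for a single new observation
print(frame[["mean", "mean_ci_lower", "mean_ci_upper",
             "obs_ci_lower", "obs_ci_upper"]])
```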
Rather than considering R-squared, try looking at a "graphical residual analysis" to check model fit for your particular sample, and at "cross-validation" to study whether you may have overfit to that sample.
Note that the graphical residual analysis may also show sigma changing as you go to larger predicted-y values. Please see https://www.researchgate.net/publication/354854317_WHEN_WOULD_HETEROSCEDASTICITY_IN_REGRESSION_OCCUR. A sketch of both checks follows.
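A minimal sketch of both checks on simulated data (scikit-learn and matplotlib assumed available; none of this comes from the original question's data):

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
X = rng.normal(size=(300, 2))
y = 1.0 + X @ np.array([0.5, -0.3]) + rng.normal(scale=3.0, size=300)

model = LinearRegression().fit(X, y)

# Graphical residual analysis: residuals vs. fitted values.
# A funnel shape (spread growing with fitted y) would suggest heteroscedasticity.
fitted = model.predict(X)
plt.scatter(fitted, y - fitted, s=10)
plt.axhline(0.0, color="gray")
plt.xlabel("fitted values")
plt.ylabel("residuals")
plt.show()

# Cross-validation: out-of-sample R^2 over 5 folds.
# Scores far below the in-sample R^2 would suggest overfitting to this sample.
print("in-sample R^2:", model.score(X, y))
print("5-fold CV R^2:", cross_val_score(model, X, y, cv=5, scoring="r2"))
```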
The following is of some interest here, I think:
https://data.library.virginia.edu/is-r-squared-useless/. Cheers - Jim