It seems odd because your R² is so much smaller than in the reference studies. The first thing that comes to mind is data errors. If those can be ruled out, then another thing to think about is whether you obtained a sample from a different population than the reference studies. Also, does your sample have a restricted range (e.g., an extreme group) with very small variances, or do you have a very small sample? Restriction of range can cause correlations to be smaller than in a group that shows the full range of possible values.
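To see how strongly restriction of range can shrink a correlation, here is a minimal simulation sketch (simulated data, numpy only; the cutoff of x > 1 is an arbitrary choice to mimic an extreme group):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a predictor x and outcome y with a true correlation of about 0.6.
x = rng.normal(size=10_000)
y = 0.6 * x + 0.8 * rng.normal(size=10_000)

# Full-range correlation vs. correlation in a restricted (extreme) subgroup.
full_r = np.corrcoef(x, y)[0, 1]
restricted = x > 1.0  # keep only the top of the x range
restricted_r = np.corrcoef(x[restricted], y[restricted])[0, 1]

print(f"full-range r = {full_r:.2f}, restricted-range r = {restricted_r:.2f}")
# The restricted subgroup shows a clearly smaller correlation (and hence R²),
# even though the underlying relationship is identical in both groups.
```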
@Moira The most important thing you've not mentioned here is the regression coefficients between your predictors and the outcome variable: whether they are significant, and how strong those relationships are. If they are too weak to predict the outcome variable, and that does not agree with the literature, then there is very likely a problem with your data (as pointed out by @Christian).
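If it helps, here is a short sketch of how you could inspect those coefficients and their p-values, assuming statsmodels and hypothetical simulated predictors x1 and x2 in place of your actual variables:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)

# Hypothetical data: two predictors, one of which barely relates to the outcome.
n = 200
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y = 0.5 * x1 + 0.05 * x2 + rng.normal(size=n)

X = sm.add_constant(np.column_stack([x1, x2]))
fit = sm.OLS(y, X).fit()

# The summary reports each coefficient, its p-value, R², and adjusted R²,
# which is exactly what you would compare against the reference studies.
print(fit.summary())
```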
Adjusted R² reflects the contribution of the independent variables to the model, penalized for their number. A small adjusted R² can indicate that an important variable is missing from the model. Therefore, if R² is small, consider adding other theoretically important independent variables to the model.
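A quick way to check this idea is to compare adjusted R² before and after adding a candidate variable. A sketch under simulated-data assumptions (the true model here deliberately omits x2 at first):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)

# Outcome driven by two variables; the initial model omits the second one.
n = 300
x1, x2, noise = (rng.normal(size=n) for _ in range(3))
y = 0.4 * x1 + 0.7 * x2 + rng.normal(size=n)

def adj_r2(*cols):
    X = sm.add_constant(np.column_stack(cols))
    return sm.OLS(y, X).fit().rsquared_adj

print(f"x1 only:         {adj_r2(x1):.2f}")
print(f"x1 + x2:         {adj_r2(x1, x2):.2f}")     # rises: x2 was genuinely missing
print(f"x1 + pure noise: {adj_r2(x1, noise):.2f}")  # barely changes or drops
```

Note that adjusted R² penalizes irrelevant additions, so adding variables only helps when they genuinely belong in the model.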
Graphics can tell you a good deal. R² can be tripped up by various things, and p-values are also problematic. I suggest you first compare reasonable models on your sample using a "graphical residual analysis." If you can divide your sample randomly into two groups and do this for each half separately, that is a simple form of "cross-validation," used to avoid fitting too closely to one particular sample at the expense of the rest of the population or subpopulation to which you will apply your model. I generally suggest graphical residual analyses, and cross-validations, at least in more complex situations.
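A minimal sketch of that split-sample residual check, assuming statsmodels and matplotlib and a simulated x and y standing in for your data:

```python
import numpy as np
import statsmodels.api as sm
import matplotlib.pyplot as plt

rng = np.random.default_rng(3)

# Hypothetical sample; replace x and y with your own data.
n = 200
x = rng.uniform(0, 10, size=n)
y = 2.0 + 0.5 * x + rng.normal(scale=1.0 + 0.2 * x, size=n)

# Split randomly into two halves as a simple cross-validation check.
idx = rng.permutation(n)
halves = np.array_split(idx, 2)

fig, axes = plt.subplots(1, 2, figsize=(10, 4), sharey=True)
for ax, half in zip(axes, halves):
    X = sm.add_constant(x[half])
    fit = sm.OLS(y[half], X).fit()
    # Residuals vs. fitted values: look for curvature, funnels, or outliers.
    ax.scatter(fit.fittedvalues, fit.resid, s=12)
    ax.axhline(0, color="grey", lw=1)
    ax.set_xlabel("fitted values")
axes[0].set_ylabel("residuals")
plt.tight_layout()
plt.show()
```

If both halves show the same funnel shape or curvature, that is a feature of the data-model mismatch, not a fluke of one subsample.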
The graphical residual analysis can easily reveal a possible outlier, as well as the general fit and any heteroscedasticity, which should be modeled with regression weights. (OLS is just WLS with equal regression weights, but OLS is often not appropriate.) Remember that a possible outlier may not actually be an error, but you will see which datum or data need scrutiny.
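To make the OLS-as-special-case-of-WLS point concrete, here is a sketch assuming statsmodels, with simulated heteroscedastic data and weights chosen as 1/variance (here the error spread is assumed to grow linearly with x):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)

# Heteroscedastic data: the error spread grows with x.
n = 200
x = rng.uniform(1, 10, size=n)
y = 2.0 + 0.5 * x + rng.normal(scale=0.3 * x, size=n)
X = sm.add_constant(x)

# OLS is WLS with all weights equal to 1.
ols = sm.OLS(y, X).fit()

# WLS with weights proportional to 1/variance (variance grows like x**2 here).
wls = sm.WLS(y, X, weights=1.0 / x**2).fit()

print("OLS slope:", ols.params[1], "SE:", ols.bse[1])
print("WLS slope:", wls.params[1], "SE:", wls.bse[1])
```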
Also, you could research when R2 is not appropriate.
Anuraj, neither forward nor backward stepwise regression is likely to identify the best set of predictors, so it is best not to use stepwise regression. Your other suggestions depend upon the circumstances, the kind of study, and the goals. Robust regression might help when fitting a regression is problematic, but it is not an improvement when it is not needed; instead, it may throw away substantial useful information.
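For reference, here is a sketch of what robust regression does, assuming statsmodels' RLM with a Huber M-estimator and simulated data contaminated by a few gross outliers:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)

# Clean linear data plus a handful of gross outliers.
n = 100
x = rng.uniform(0, 10, size=n)
y = 1.0 + 0.5 * x + rng.normal(scale=0.5, size=n)
y[:5] += 15  # contaminate five points

X = sm.add_constant(x)
ols = sm.OLS(y, X).fit()
rlm = sm.RLM(y, X, M=sm.robust.norms.HuberT()).fit()

# The Huber M-estimator downweights the outliers; OLS does not.
print("OLS slope:", ols.params[1])
print("RLM slope:", rlm.params[1])
```

The downweighting is exactly the trade-off mentioned above: when there are no real outliers, those weights discard legitimate information, so robust fitting helps only when it is actually needed.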