You can rely on stepwise regression: enter the first variable (X1) and watch the diagnostic statistic (R²), then add X2 and check R² again, and so on. Select the regression model that gives a high R² and small p-values.
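A minimal sketch of this forward-selection idea, on synthetic data (the variable names and data-generating model are illustrative, not from the question): at each step, add the candidate predictor that raises R² the most.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
X = rng.normal(size=(n, 3))                       # three candidate predictors
y = 3.0 * X[:, 0] + 1.0 * X[:, 1] + rng.normal(scale=0.5, size=n)  # X3 is pure noise

def r_squared(y, cols):
    """R^2 of an OLS fit of y on the given predictor columns plus an intercept."""
    A = np.column_stack([np.ones(len(y))] + [X[:, c] for c in cols])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    return 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

# Forward selection: at each step add the predictor that raises R^2 the most.
selected, remaining = [], [0, 1, 2]
while remaining:
    best = max(remaining, key=lambda c: r_squared(y, selected + [c]))
    selected.append(best)
    remaining.remove(best)
    print(f"added X{best + 1}: R^2 = {r_squared(y, selected):.3f}")
```

Note that R² alone never decreases as you add variables, which is why the later answers recommend adjusted R² or Mallows' Cp as the stopping criterion rather than R² itself.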
The overall model may be appropriate, but take care in interpreting the coefficients. Possibly some or all of the other coefficients are not significant because their predictors are collinear with X1. If so, then the coefficient of X1 is likely affected by the inclusion of the other X's in the model. But many other things could be going on. Examine the partial regression plots for each of the coefficients. They will help you to see how each X is related to Y *after allowing for the linear effects of the other X's*.
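A partial regression (added-variable) plot for X1 is built by residualizing both Y and X1 on the other predictors; by the Frisch-Waugh-Lovell theorem, the slope through those residuals equals X1's coefficient in the full multiple regression. A small sketch on synthetic collinear data (names and numbers are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300
x2 = rng.normal(size=n)
x1 = 0.8 * x2 + rng.normal(scale=0.6, size=n)   # X1 deliberately collinear with X2
y = 2.0 * x1 - 1.0 * x2 + rng.normal(size=n)

def ols(A, b):
    coef, *_ = np.linalg.lstsq(A, b, rcond=None)
    return coef

ones = np.ones(n)
# Full multiple regression: y ~ 1 + x1 + x2
beta_full = ols(np.column_stack([ones, x1, x2]), y)

# Residualize y and x1 on the remaining predictor (plus intercept)
Z = np.column_stack([ones, x2])
ry = y - Z @ ols(Z, y)      # part of y not explained by x2
rx = x1 - Z @ ols(Z, x1)    # part of x1 not explained by x2

# Slope of the added-variable plot (regress ry on rx through the origin)
slope = (rx @ ry) / (rx @ rx)
print(f"partial slope = {slope:.3f}, full-model coefficient = {beta_full[1]:.3f}")
```

Plotting `ry` against `rx` gives the partial regression plot itself; the scatter around that slope shows what X1 contributes after allowing for X2.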
Yes, indeed, as others have already stated, you should not expect all the explanatory variables to be significant.
When fitting a regression model, it is also important to have some background knowledge (which factors are important for predicting your variable of interest, ...).
It is also important to check the validity of the model: the assumption of normality of the residuals, the assumption of homoscedasticity of the residuals, and so on.
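Those two residual checks can be done with standard tests: Shapiro-Wilk for normality and Breusch-Pagan for homoscedasticity. A sketch on synthetic data, assuming SciPy is available (the model and numbers are illustrative; the Breusch-Pagan LM statistic is computed by hand as n times the R² of an auxiliary regression of the squared residuals on the predictors):

```python
import numpy as np
from scipy import stats   # assumption: scipy is available

rng = np.random.default_rng(4)
n = 200
x = rng.normal(size=(n, 2))
y = 1.0 + 2.0 * x[:, 0] - x[:, 1] + rng.normal(size=n)

# Fit OLS and extract residuals
A = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)
resid = y - A @ beta

# Normality of residuals: Shapiro-Wilk test
sw_stat, sw_p = stats.shapiro(resid)

# Homoscedasticity: Breusch-Pagan. Regress squared residuals on the
# predictors; LM = n * R^2 of that auxiliary fit, ~ chi2 with 2 df here.
u = resid**2
g, *_ = np.linalg.lstsq(A, u, rcond=None)
ru = u - A @ g
r2_aux = 1 - ru @ ru / ((u - u.mean()) @ (u - u.mean()))
lm = n * r2_aux
bp_p = stats.chi2.sf(lm, df=2)

print(f"Shapiro-Wilk p = {sw_p:.3f}, Breusch-Pagan p = {bp_p:.3f}")
```

Small p-values would flag non-normal or heteroscedastic residuals; visual checks (Q-Q plot, residuals vs fitted) remain just as important as the formal tests.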
A plot of the predicted values against the observed values should be checked as well. R-square (for multiple linear regression) can be used to see how well the model explains the variability in your data. However, R-square needs to be used with care, since it always increases whenever you add more covariates to your model. Adjusted R-square may be a better choice.
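The point about R² versus adjusted R² can be shown directly: add a pure-noise covariate to a model and compare the two statistics (synthetic data; the adjusted R² formula is the standard 1 - (1 - R²)(n - 1)/(n - p - 1)):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100
x1 = rng.normal(size=n)
y = 1.5 * x1 + rng.normal(size=n)
noise = rng.normal(size=n)            # pure-noise covariate, unrelated to y

def fit_r2(cols):
    """Return (R^2, adjusted R^2) of an OLS fit of y on the given columns."""
    A = np.column_stack([np.ones(n)] + cols)
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    r2 = 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))
    p = A.shape[1] - 1                # number of predictors (excl. intercept)
    adj = 1 - (1 - r2) * (n - 1) / (n - p - 1)
    return r2, adj

r2_small, adj_small = fit_r2([x1])
r2_big, adj_big = fit_r2([x1, noise])
print(f"x1 only:       R^2 = {r2_small:.4f}, adj R^2 = {adj_small:.4f}")
print(f"x1 + noise:    R^2 = {r2_big:.4f}, adj R^2 = {adj_big:.4f}")
```

R² cannot go down when the noise column is added, but adjusted R² penalizes the extra parameter, which is exactly why it is the safer comparison across models of different sizes.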
I suggest choosing the best regression model with a few predictor variables (e.g. via stepwise regression). Even though the F test in the ANOVA results is significant, you should also consider statistics such as R-square, adjusted R-square, Mallows' Cp, etc. The best model involves only a few predictor variables. However, if theory requires all three predictor variables, you can keep all three, though this may decrease the accuracy of the model.
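Mallows' Cp for a candidate subset is SSE_p / s² - n + 2p, where s² is the MSE of the full model and p is the number of parameters (including the intercept); subsets with Cp near p are preferred. A sketch that scores every subset of three synthetic predictors (data and names are illustrative):

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(3)
n = 150
X = rng.normal(size=(n, 3))
y = 2.0 * X[:, 0] + X[:, 1] + rng.normal(size=n)   # X3 is irrelevant

def sse(cols):
    """Residual sum of squares of an OLS fit of y on the given columns."""
    A = np.column_stack([np.ones(n)] + [X[:, c] for c in cols])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    r = y - A @ beta
    return r @ r

full = [0, 1, 2]
s2 = sse(full) / (n - len(full) - 1)               # MSE of the full model

for k in range(1, 4):
    for cols in combinations(range(3), k):
        p = len(cols) + 1                          # parameters incl. intercept
        cp = sse(list(cols)) / s2 - n + 2 * p
        print(f"predictors {cols}: Cp = {cp:.2f} (p = {p})")
```

By construction, the full model's Cp equals its own parameter count, so it is the subsets with Cp close to p, not just the smallest Cp, that signal a well-fitting parsimonious model.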
Interesting responses, but I think they are missing the mark. The first question is what you want to do with your regression. If you plan to use it for prediction, then the predictors that are not significant may not be helping much; try a model that omits one or both of them. If you plan to use it to understand what the coefficients mean, then you should not interpret the insignificant ones (they may be zero; you don't know), and those predictors may be affecting the value of the significant coefficient. Keep them in the model if there are strong theoretical reasons for them to be there and for their influence on the other coefficient; otherwise, set them aside. And always plot the data and the partial regression plots, looking for outliers, influential points, and anything else that might account for what you are seeing. Note that choosing a model with fewer predictors should probably not be done with a stepwise algorithm; stepwise methods are often misled by anomalies in the data or by collinearities.