Kindly check your attachment; it doesn't open fully. Anyway, in your results the overall regression model is non-significant, but the interaction effect is significant. However, your R-squared value is 0.20, which is very low. Only if you get an R-squared value > 0.80 can your selection of variables for the regression equation be considered a good one.
So you should eliminate the non-significant variables one at a time (stepwise) and check the R-squared value of your regression equation at each step.
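A minimal sketch of that backward-elimination idea, using adjusted R-squared as the drop criterion instead of p-values (which avoids distribution tables); the variable names and simulated data are hypothetical:

```python
import numpy as np

def adj_r2(X, y):
    """Adjusted R^2 of an OLS fit with intercept; X is (n, k) without the intercept column."""
    n, k = X.shape
    Xd = np.column_stack([np.ones(n), X])
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    resid = y - Xd @ beta
    r2 = 1.0 - (resid @ resid) / np.sum((y - y.mean()) ** 2)
    return 1.0 - (1.0 - r2) * (n - 1) / (n - k - 1)

def backward_eliminate(X, y, names):
    """Drop predictors one at a time as long as adjusted R^2 improves."""
    keep = list(range(X.shape[1]))
    best = adj_r2(X[:, keep], y)
    improved = True
    while improved and len(keep) > 1:
        improved = False
        for j in keep:
            trial = [c for c in keep if c != j]
            score = adj_r2(X[:, trial], y)
            if score > best:
                best, keep, improved = score, trial, True
                break
    return [names[c] for c in keep], best

# Hypothetical data: 34 observations, only the first predictor matters.
rng = np.random.default_rng(42)
n = 34
X = rng.normal(size=(n, 2))
y = 2.0 * X[:, 0] + rng.normal(size=n)
kept, score = backward_eliminate(X, y, ["x1", "x2"])
```

Note that automated stepwise selection inflates Type I error, so any model it returns should be sanity-checked against theory, as others in this thread point out.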
What makes you say the model is insignificant? The p-value is 0.0697. The "old standard" of using a p-value cutoff of 0.05 is quite silly, especially on such a small data set. I would be more interested in the power of your data/model. I would also be interested in how you created your interaction model. Are your other factors continuous or categorical?
It is perfectly possible that factors are only predictive when interacting but not on their own. BMI is a good example: a person's absolute height and absolute weight are poor predictors of health risks by themselves, but combine them through a ratio and they become a powerful tool for predicting risks associated with many common ailments.
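The BMI example can be illustrated numerically. The simulated "risk" data below is purely hypothetical and constructed so that risk depends on the ratio weight/height², not on either variable alone:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Risk is driven by BMI = weight / height^2, not by height or weight alone.
height_m = rng.normal(1.70, 0.10, size=n)      # metres
bmi = rng.normal(25.0, 4.0, size=n)            # kg / m^2
weight_kg = bmi * height_m**2                  # back out weight from BMI
risk = bmi + rng.normal(0.0, 1.0, size=n)      # hypothetical risk score

def corr(a, b):
    return abs(np.corrcoef(a, b)[0, 1])

c_height = corr(height_m, risk)    # near zero
c_weight = corr(weight_kg, risk)   # moderate (weight partly tracks BMI)
c_bmi = corr(bmi, risk)            # strong
```

Height alone carries essentially no signal, weight carries some only because it partly tracks BMI, and the combined ratio recovers nearly all of it.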
As Senthilvel suggests, eliminate the poor-performing factors and see whether the interaction alone provides a useful regression. If it does, then take the time to work out why those two factors might provide useful information when interacting but not on their own.
Andrew raises an important point about the p-value, but if you decided in advance of performing the regression to test it at an alpha of 0.05, you cannot change your mind and fudge the alpha level just because you didn't reach it. Ideally, the alpha level should be set based on benefit/risk considerations rather than blindly using a fixed level. In practice most people use 0.05 because it is a standard choice for balancing benefit and risk, which reduces the risk of personally biasing your interpretation of the data (note that the bias is not eliminated, just passed on to the wider scientific community).
Also, it depends on the stage of your work. In a development/discovery phase you should not impose too strict an alpha, because of the uncertainty around the power of the experiment. In a validation phase you would already have done the discovery and should have powered your study to achieve a predefined alpha, so in that case you are constrained.
From your results, it is clear that each of the two independent variables is individually insignificant. In addition to what Senthilvel Vasudevan said, I would like you to check whether the two variables are highly correlated.
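One quick check, sketched below, is the variance inflation factor (VIF). With only two predictors it reduces to 1 / (1 - r²), where r is their correlation; values well above 5-10 are commonly read as a collinearity warning (the data here is made up):

```python
import numpy as np

def vif_two_predictors(x1, x2):
    """VIF for the two-predictor case: 1 / (1 - r^2)."""
    r = np.corrcoef(x1, x2)[0, 1]
    return 1.0 / (1.0 - r**2)

rng = np.random.default_rng(1)
a = rng.normal(size=200)
b_independent = rng.normal(size=200)
b_collinear = a + rng.normal(scale=0.1, size=200)  # nearly a copy of a

vif_low = vif_two_predictors(a, b_independent)   # close to 1: no problem
vif_high = vif_two_predictors(a, b_collinear)    # very large: collinear
```

High collinearity inflates the standard errors of the individual coefficients, which is one way two jointly useful predictors can each look insignificant.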
You have only 34 observations in your model, whereas Greene (1991) recommends at least 104 + K observations for running a regression, where K is the number of predictors. In your case you could remove insignificant predictors through stepwise regression, but you have only two variables, and excluding your predictor(s) may invalidate your model. So an increased sample size (minimum 100) would be better for you: it would reduce the chance of a Type I error and make your results more generalizable. The results of a regression also depend on the method by which you collected the data, so keep the above factors in mind to solve your problem.
I think it depends on your experiment's goal. If the goal is finding the best model, you have to eliminate non-significant variables by a stepwise, backward, or forward method. If your goal is finding which variables influence the dependent variable, you can still keep your non-significant variables, but you have to give a theoretical reason explaining why a variable is not significant. The important thing is to have theory backing up your model.
I agree with Andrew to some extent. You cannot say your model is insignificant per se with these results. Your p-value is 0.0697, so it is up to you to decide whether you can accept this model given that probability of making a mistake.
You can definitely interpret the results of this regression. It means that your two exogenous variables do not have individual effects on your endogenous variable, but they do have a joint effect. For further interpretation, the meaning of your variables would be needed, though.
Interesting that you seem to have a large intercept, from which you may be subtracting. Is the intercept an upper bound? From a subject matter point of view, is this a good model to use, or might you try something else? Just a thought.
If you haven't already done so, I suggest you try a graphical residual analysis; that might tell you whether you can be satisfied. Plot predicted y on the x-axis and the estimated residuals on the y-axis. To compare models (perhaps the current one, another using just the intercept and feF, and maybe another if you have other data), you could put them all on the same scatterplot.
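The numeric part of that check can be sketched as follows on made-up data (with matplotlib one would then scatter `fitted` against `resid` and look for a pattern):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 34
x = rng.normal(size=n)
y = 1.5 + 0.8 * x + rng.normal(size=n)   # hypothetical data

# OLS fit with intercept
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
fitted = X @ beta
resid = y - fitted

# For an OLS fit with an intercept, the residuals average to zero and are
# uncorrelated with the fitted values by construction; what you look for in
# the plot is curvature or funnel shapes, which suggest misspecification
# or non-constant variance.
mean_resid = resid.mean()
corr_fit_resid = np.corrcoef(fitted, resid)[0, 1]
```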
Yes, it can still be a good model. Interaction effects are by construction highly correlated with the variables they are created from, so it is natural for them to "steal" some of the effect from the base variables.
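One common way to reduce that overlap is mean-centering the base variables before forming the product term. A sketch with made-up data, showing how much the correlation between a variable and its interaction term drops after centering:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1000
x1 = rng.normal(5.0, 1.0, size=n)   # hypothetical predictors with nonzero means
x2 = rng.normal(5.0, 1.0, size=n)

def corr(a, b):
    return abs(np.corrcoef(a, b)[0, 1])

corr_raw = corr(x1, x1 * x2)                                   # large
corr_centered = corr(x1, (x1 - x1.mean()) * (x2 - x2.mean()))  # near zero
```

Centering changes the interpretation of the main-effect coefficients (they become effects at the mean of the other variable) but leaves the interaction coefficient itself unchanged.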