It doesn't matter because stepwise methods are not reproducible and should not be used. The attached papers give examples and explanation of the problems with the step methods. Best wishes, David Booth
I agree with David Eugene Booth that stepwise regression is no longer accepted in most fields. Instead, you should simply include all the predictors and only interpret those that are significant.
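The "include all predictors at once" approach above can be sketched as a single full-model fit. This is a minimal illustration with hypothetical data (53 observations and 5 predictors, mirroring the thread; the coefficients and noise level are made up), using plain least squares rather than any particular package:

```python
import numpy as np

# Hypothetical data: 53 participants, 5 predictors (mirroring the thread).
rng = np.random.default_rng(0)
n, p = 53, 5
X = rng.normal(size=(n, p))
beta_true = np.array([0.5, 0.0, -0.3, 0.0, 0.2])  # made-up "true" effects
y = X @ beta_true + rng.normal(scale=0.5, size=n)

# Fit the full model once (intercept + all 5 predictors), no selection.
Xd = np.column_stack([np.ones(n), X])           # design matrix with intercept
coef, *_ = np.linalg.lstsq(Xd, y, rcond=None)   # OLS estimates

# Standard errors and t-statistics for interpreting the full model.
resid = y - Xd @ coef
dof = n - Xd.shape[1]                            # residual degrees of freedom
sigma2 = resid @ resid / dof
se = np.sqrt(sigma2 * np.diag(np.linalg.inv(Xd.T @ Xd)))
t = coef / se
print(np.round(coef, 3))
print(np.round(t, 2))
```

The point is that every predictor stays in the model; interpretation happens on the full fit, not on a sequence of data-driven add/drop steps.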
I agree with the two Davids (not to be confused with The Two Ronnies--older members who like British comedy will be with me on that one). Frank Harrell's Author Checklist (first link below) points to several good resources that highlight many of the problems with stepwise regression. I've added links for three of them below. HTH.
What type or types of variables are the 5 explanatory variables? I.e., are they all quantitative variables? Or are some of them categorical? If so, how many categories do they have? (I'm trying to work out what the model degrees of freedom would be if you include all 5 variables.)
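For reference, the degrees-of-freedom arithmetic behind that question: each quantitative predictor costs 1 df, and a dummy-coded categorical predictor with k categories costs k - 1 df. A tiny sketch with a hypothetical mix (the 3-quantitative/2-categorical split is made up for illustration):

```python
# Model degrees of freedom for the predictors: each quantitative variable
# costs 1 df; a categorical variable with k categories costs k - 1 df
# (dummy coding). Hypothetical mix: 3 quantitative + 2 categorical.
quantitative = 3
category_levels = [3, 4]              # two hypothetical categorical predictors
model_df = quantitative + sum(k - 1 for k in category_levels)
print(model_df)  # → 8
```

With only 53 observations, a few multi-category predictors can eat up degrees of freedom quickly, which is why the answer matters.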
Regarding the 4 outcome variables, do you have 4 separate univariate questions (i.e., one DV at a time)? Or could your research question be addressed with one multivariate model (i.e., one model with 4 DVs that are combined into a linear composite)? (From what you wrote earlier, I suspect you have 4 separate univariate questions.)
53 participants performed a cognitive task (a repeated measures design) and answered 5 questionnaires.
Our main analysis is related to the cognitive task, but we also performed exploratory regression analyses to assess whether individual differences (e.g., depression, anxiety) predict task performance. We had 4 dependent measures of interest in the cognitive task and therefore ran 4 regression analyses. These measures are dependent on one another.
Thanks for the additional info. My first reaction is that you are asking an awful lot of 53 observations. My second thought is that there is a lot of stuff to work through, and that you ought to seek help locally. With that thought in mind, I see that your university does have a statistical consulting unit that you may have access to:
https://stat-con-en.hevra.haifa.ac.il/
If you do not have access to it, the staff there may be able to direct you to other local resources. HTH.
I would not rely on relative p-values here (though relative comparisons would be the only way I'd consider using them anywhere). Predictors are not "independent" in that they impact each other, perhaps most notably through collinearity. Looking at their individual impacts is thus like looking through very muddied water. (Note: It is not unusual for the presence of one predictor to change the sign of another predictor's coefficient, due to collinearity.)
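One common way to quantify the collinearity mentioned above is the variance inflation factor (VIF): regress each predictor on the others and compute 1 / (1 - R²). A minimal sketch with hypothetical data, where one predictor is deliberately built as a near-copy of another:

```python
import numpy as np

# Hypothetical data: x3 is nearly a copy of x1, so both get large VIFs.
rng = np.random.default_rng(1)
n = 53
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
x3 = x1 + rng.normal(scale=0.1, size=n)   # almost collinear with x1
X = np.column_stack([x1, x2, x3])

def vif(X, j):
    """VIF_j = 1 / (1 - R^2) from regressing column j on the others."""
    others = np.delete(X, j, axis=1)
    A = np.column_stack([np.ones(len(X)), others])
    fitted = A @ np.linalg.lstsq(A, X[:, j], rcond=None)[0]
    resid = X[:, j] - fitted
    r2 = 1 - resid @ resid / np.sum((X[:, j] - X[:, j].mean()) ** 2)
    return 1 / (1 - r2)

print([round(vif(X, j), 1) for j in range(3)])
```

VIFs well above 10 for the two entangled predictors show why their individual p-values (and even coefficient signs) are hard to trust in isolation.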
Explanation and prediction are different. See https://www.researchgate.net/publication/48178170_To_Explain_or_to_Predict, especially the example in the appendix. Prediction might best be done using principal components, but that pretty much destroys all interpretability. I've never used principal components, but that doesn't mean you shouldn't.
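For concreteness, a hypothetical sketch of principal components regression (PCR): regress the outcome on the first k principal-component scores of X instead of on the original predictors. The data, the choice of k = 2, and the coefficients are all made up for illustration; the trade-off is exactly the one noted above, since coefficients on components are hard to interpret:

```python
import numpy as np

# Hypothetical data: 53 observations, 5 predictors; y depends on x0 - x1.
rng = np.random.default_rng(2)
n, p, k = 53, 5, 2
X = rng.normal(size=(n, p))
y = X[:, 0] - X[:, 1] + rng.normal(scale=0.5, size=n)

Xc = X - X.mean(axis=0)               # center before extracting components
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:k].T                # first k principal-component scores

# Regress y on the component scores (plus intercept).
A = np.column_stack([np.ones(n), scores])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = A @ coef
r2 = 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
print(round(r2, 2))
```

Each fitted coefficient now belongs to a weighted mixture of all five original predictors, which is what "destroys all interpretability" while potentially stabilizing prediction.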
I often recommend graphics over single number statistics. I generally suggest "graphical residual analysis" to compare models on a given sample, and some kind of "cross-validation" to avoid such a close fit to that given sample that you do not predict well for the remainder of the population or subpopulation to which your model is being applied. (Looking at two or more samples which cover the range of possibilities would be nice - though perhaps impossible.)
Heteroscedasticity is a natural feature, as well as sometimes an artificial one. I encourage modeling the former. See https://www.researchgate.net/publication/352134279_When_Would_Heteroscedasticity_in_Regression_Occur. I am about to upload the article version.
Cheers - Jim
PS - Having the same set of predictors with different dependent variables would be "multivariate multiple regression." You might want to research that term. You might run across something helpful.
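As a small illustration of that term: with ordinary least squares, the same design matrix can be solved against a whole matrix of dependent variables at once. The data below are hypothetical (53 participants, 5 predictors, 4 DVs, echoing the thread):

```python
import numpy as np

# Sketch of multivariate multiple regression: one set of predictors,
# several dependent variables. Hypothetical data throughout.
rng = np.random.default_rng(4)
n, p, m = 53, 5, 4                    # 53 participants, 5 predictors, 4 DVs
X = rng.normal(size=(n, p))
B_true = rng.normal(size=(p, m))      # made-up coefficient matrix
Y = X @ B_true + rng.normal(scale=0.5, size=(n, m))

Xd = np.column_stack([np.ones(n), X])
B_hat, *_ = np.linalg.lstsq(Xd, Y, rcond=None)  # one coefficient column per DV
print(B_hat.shape)
```

The per-DV coefficient estimates are identical to running 4 separate regressions; where the multivariate framing pays off is in joint tests that account for the correlations among the DVs, which matters here since the 4 measures are dependent on one another.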