The "enter" method in IBM SPSS forces all of the named independent variables (IVs) into the model; this is sometimes referred to as a simultaneous solution. Step methods (e.g., stepwise, forward entry, backward elimination) are programmed approaches to identifying (potentially) useful subsets of the named IVs for a model, subject to user-specified constraints.
Forward entry starts with no IVs and, at the first stage, selects the best single predictor/IV (the one with the strongest correlation with the DV), provided it is strong enough to meet the criterion for entry (the usual default in SPSS is significance at the .05 level). At stage two, if any of the remaining, unselected IVs would add significantly to the explanatory power of the IV chosen at stage one, the "best" of these is added to the model (again, only if the computed regression coefficient is significant at the criterion for entry). This process continues until either: (a) all named IVs are incorporated into the model; or (b) no additional unselected IVs meet the criterion for entry.
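The logic above can be sketched in a few dozen lines of plain Python. This is a self-contained illustration, not SPSS's implementation: the function and variable names are hypothetical, and a fixed F-to-enter threshold of 3.84 (SPSS's default F value for entry) stands in for the exact p-value test SPSS performs.

```python
def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for j in range(c, n + 1):
                M[r][j] -= f * M[c][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def r_squared(cols, y):
    """R^2 from OLS of y on the given predictor columns plus an intercept."""
    n = len(y)
    X = [[1.0] + [c[i] for c in cols] for i in range(n)]
    p = len(X[0])
    XtX = [[sum(X[i][a] * X[i][b] for i in range(n)) for b in range(p)] for a in range(p)]
    Xty = [sum(X[i][a] * y[i] for i in range(n)) for a in range(p)]
    beta = solve(XtX, Xty)
    ybar = sum(y) / n
    ss_tot = sum((v - ybar) ** 2 for v in y)
    ss_res = sum((y[i] - sum(X[i][a] * beta[a] for a in range(p))) ** 2 for i in range(n))
    return 1.0 - ss_res / ss_tot

def forward_entry(ivs, y, f_in=3.84):
    """ivs: dict of name -> column. Keep adding the best remaining IV
    while its partial F (for the R^2 gain) reaches the entry threshold."""
    selected, r2_old = [], 0.0
    n = len(y)
    remaining = list(ivs)
    while remaining:
        # candidate whose addition yields the largest R^2
        best = max(remaining,
                   key=lambda nm: r_squared([ivs[s] for s in selected] + [ivs[nm]], y))
        r2_new = r_squared([ivs[s] for s in selected] + [ivs[best]], y)
        df_resid = n - (len(selected) + 1) - 1
        f = (r2_new - r2_old) * df_resid / max(1.0 - r2_new, 1e-12)
        if f < f_in:
            break  # no remaining IV meets the criterion for entry
        selected.append(best)
        remaining.remove(best)
        r2_old = r2_new
    return selected

# toy data: y tracks x1 closely, x2 is a noisier version of x1
ivs = {"x1": [1, 2, 3, 4, 5, 6, 7, 8],
       "x2": [2, 1, 4, 3, 6, 5, 8, 7]}
y = [1.2, 1.8, 3.1, 4.2, 4.9, 6.1, 7.2, 7.8]
print(forward_entry(ivs, y))  # x1 enters first; x2 only if it adds enough
```

Note that the "best" candidate at each stage is the one producing the largest R-squared gain given the IVs already in the model, which for the first stage reduces to the strongest zero-order correlation with the DV.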
Backward elimination starts with all IVs in the model. At the first stage, the IV having the weakest (closest to zero) regression coefficient is identified, and if it is not significant at the criterion for exclusion (the usual default in SPSS is non-significance at the .10 level), it is removed from the model. A new model is estimated using the remaining IVs, and the process repeats until: (a) no IVs remain in the model; or (b) no remaining IV meets the criterion for exclusion.
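A matching self-contained sketch of backward elimination follows (names hypothetical; a fixed F-to-remove threshold of 2.71, SPSS's default F value for removal, stands in for the exact p-value test). The weakest IV is found via the partial F for dropping each IV in turn, which for a single coefficient is equivalent to its squared t statistic.

```python
def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for j in range(c, n + 1):
                M[r][j] -= f * M[c][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def r_squared(cols, y):
    """R^2 from OLS of y on the given predictor columns plus an intercept."""
    n = len(y)
    X = [[1.0] + [c[i] for c in cols] for i in range(n)]
    p = len(X[0])
    XtX = [[sum(X[i][a] * X[i][b] for i in range(n)) for b in range(p)] for a in range(p)]
    Xty = [sum(X[i][a] * y[i] for i in range(n)) for a in range(p)]
    beta = solve(XtX, Xty)
    ybar = sum(y) / n
    ss_tot = sum((v - ybar) ** 2 for v in y)
    ss_res = sum((y[i] - sum(X[i][a] * beta[a] for a in range(p))) ** 2 for i in range(n))
    return 1.0 - ss_res / ss_tot

def backward_elimination(ivs, y, f_out=2.71):
    """Start with every IV; repeatedly drop the weakest IV
    whose partial F falls below the removal threshold."""
    selected = list(ivs)
    n = len(y)
    while selected:
        r2_full = r_squared([ivs[s] for s in selected], y)
        df_resid = n - len(selected) - 1

        def f_to_remove(name):
            # partial F = F test of the R^2 drop when this one IV is removed
            r2_red = r_squared([ivs[s] for s in selected if s != name], y)
            return (r2_full - r2_red) * df_resid / max(1.0 - r2_full, 1e-12)

        weakest = min(selected, key=f_to_remove)
        if f_to_remove(weakest) >= f_out:
            break  # every remaining IV is strong enough to stay
        selected.remove(weakest)
    return selected

# toy data: y tracks x1 closely, x2 is a noisier version of x1
ivs = {"x1": [1, 2, 3, 4, 5, 6, 7, 8],
       "x2": [2, 1, 4, 3, 6, 5, 8, 7]}
y = [1.2, 1.8, 3.1, 4.2, 4.9, 6.1, 7.2, 7.8]
print(backward_elimination(ivs, y))  # x1 should survive elimination
```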
Stepwise is a hybrid of the two. Like forward entry, it starts with no IVs in the model, and the best single predictor/IV is identified. If sufficiently strong to meet the entry criterion, it is incorporated. Then, at stage two, the remaining, unselected IVs are examined to see whether the "best of the rest" would meet the entry criterion. From there, the process alternates: The new model is examined to see whether any of the currently included IVs meet the criterion for exclusion--if so, the weakest of these is dropped from the model. Then, the unselected IVs are evaluated to see whether the "best of the rest" would meet the entry criterion. This continues until no IVs in the model meet the criterion for exclusion and no unselected IVs meet the criterion for entry.
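The alternation can be sketched by combining the two ideas in one loop. Again this is a self-contained illustration with hypothetical names; the fixed thresholds f_in=3.84 and f_out=2.71 (SPSS's default F values for entry and removal) stand in for exact p-value tests, and keeping f_in above f_out prevents an IV from cycling in and out.

```python
def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for j in range(c, n + 1):
                M[r][j] -= f * M[c][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def r_squared(cols, y):
    """R^2 from OLS of y on the given predictor columns plus an intercept."""
    n = len(y)
    X = [[1.0] + [c[i] for c in cols] for i in range(n)]
    p = len(X[0])
    XtX = [[sum(X[i][a] * X[i][b] for i in range(n)) for b in range(p)] for a in range(p)]
    Xty = [sum(X[i][a] * y[i] for i in range(n)) for a in range(p)]
    beta = solve(XtX, Xty)
    ybar = sum(y) / n
    ss_tot = sum((v - ybar) ** 2 for v in y)
    ss_res = sum((y[i] - sum(X[i][a] * beta[a] for a in range(p))) ** 2 for i in range(n))
    return 1.0 - ss_res / ss_tot

def stepwise(ivs, y, f_in=3.84, f_out=2.71):
    """Alternate entry and removal steps until the model stops changing."""
    selected = []
    n = len(y)
    for _ in range(10 * len(ivs)):  # hard cap as a safety net
        changed = False
        # entry step: add the best unselected IV if its partial F reaches f_in
        remaining = [nm for nm in ivs if nm not in selected]
        if remaining:
            r2_old = r_squared([ivs[s] for s in selected], y)
            best = max(remaining,
                       key=lambda nm: r_squared([ivs[s] for s in selected] + [ivs[nm]], y))
            r2_new = r_squared([ivs[s] for s in selected] + [ivs[best]], y)
            df = n - (len(selected) + 1) - 1
            if (r2_new - r2_old) * df / max(1.0 - r2_new, 1e-12) >= f_in:
                selected.append(best)
                changed = True
        # removal step: drop the weakest included IV if its partial F falls below f_out
        if len(selected) > 1:
            r2_full = r_squared([ivs[s] for s in selected], y)
            df = n - len(selected) - 1

            def f_rm(nm):
                r2_red = r_squared([ivs[s] for s in selected if s != nm], y)
                return (r2_full - r2_red) * df / max(1.0 - r2_full, 1e-12)

            weakest = min(selected, key=f_rm)
            if f_rm(weakest) < f_out:
                selected.remove(weakest)
                changed = True
        if not changed:
            break
    return selected

# toy data: y tracks x1 closely, x2 is a noisier version of x1
ivs = {"x1": [1, 2, 3, 4, 5, 6, 7, 8],
       "x2": [2, 1, 4, 3, 6, 5, 8, 7]}
y = [1.2, 1.8, 3.1, 4.2, 4.9, 6.1, 7.2, 7.8]
print(stepwise(ivs, y))  # x1 enters and is never dropped
```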
ALL of the step methods are opportunistic; that is, they will capitalize on the idiosyncratic nature of your sample. Moreover, none is guaranteed to identify the "best" subset of IVs of any given size (e.g., the best trio of IVs), judged by maximum R-squared or minimum standard error. There are a few other technical concerns as well (in SPSS, the internal significance decisions are inappropriate, given the actual number of comparisons being made). For these reasons, it is strongly recommended that you validate your results using other data. This is a great idea even if you're using the enter method.
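The capitalization on chance is easy to demonstrate: given enough pure-noise predictors, the "best" of them always looks correlated with the DV in the sample that selected it, while the apparent relationship evaporates in fresh data. A self-contained sketch (all names hypothetical):

```python
import random

def pearson(x, y):
    """Pearson correlation of two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return sxy / (sx * sy)

random.seed(42)
n, k = 50, 20
# a DV and 20 pure-noise IVs: in truth, nothing predicts y
y = [random.gauss(0, 1) for _ in range(n)]
noise_ivs = [[random.gauss(0, 1) for _ in range(n)] for _ in range(k)]

# "select" the IV with the strongest sample correlation, as a step method would
best = max(range(k), key=lambda j: abs(pearson(noise_ivs[j], y)))
train_r = abs(pearson(noise_ivs[best], y))

# a fresh sample of the same kind: the winning IV is just new noise
y2 = [random.gauss(0, 1) for _ in range(n)]
iv2 = [random.gauss(0, 1) for _ in range(n)]
test_r = abs(pearson(iv2, y2))

print(round(train_r, 3), round(test_r, 3))  # selected |r| vs. fresh-data |r|
```

The selected correlation is inflated precisely because it was the maximum over many tries, which is also why the internal significance tests in step methods are too generous.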