A moderator variable, commonly denoted M, is a third variable that affects the strength of the relationship between an independent and a dependent variable. In correlation, a moderator is a third variable that affects the correlation between two variables. In a causal relationship, if x is the predictor variable and y is the outcome variable, then z is a moderator variable that affects the causal relationship between x and y. Moderation is usually assessed through regression coefficients. If the moderator is found to be significant, it can amplify or weaken the effect of x on y. In ANOVA, the moderator effect is represented by the interaction effect between the factor of interest and the moderator variable.
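In regression terms (writing x for the predictor, z for the moderator, and y for the outcome, as above, with generic coefficients), moderation is usually expressed through an interaction term:

y = b0 + b1*x + b2*z + b3*(x*z) + e

A non-zero b3 means that the slope of y on x changes with the level of z, which is the amplifying or weakening effect described above.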
You can use regression analysis in SPSS to test a moderating effect without using Structural Equation Modeling. Maurice Ekpenyong describes the basic concept, which can be operationalized as a difference in the effect of the independent variable on the dependent variable across the levels of the moderating variable. In the simplest case, if your moderating variable is a dichotomy, you include a dummy variable for it as well as a variable that multiplies it by the independent variable.
The approach to test moderator effects is called moderated regression. The procedure is quite easy:
1) center or standardize the predictor variable and the moderator
2) multiply them and, hence, create an additional product variable in the data set
3) conduct a hierarchical regression in which, in the first step, the control variables, the predictor, and the moderator are included. Note the R-square of this step
4) include the product variable. If the increase in the R-square is significant, this means that the "multiplicative information" adds new information about Y that is not contained in the main effects of the predictor and moderator from the earlier step. This is evidence for moderation (see the sketch below these steps).
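As an illustration of steps 1-4, here is a minimal sketch in Python/statsmodels (the question is about SPSS, but the same logic applies there; the variable names y, x, m, the data frame df, and the simulated data are just placeholders, not anything from this thread):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# toy data, only to make the sketch runnable
rng = np.random.default_rng(1)
n = 300
x = rng.normal(size=n)
m = rng.normal(size=n)
y = 0.4 * x + 0.3 * m + 0.25 * x * m + rng.normal(size=n)
df = pd.DataFrame({"y": y, "x": x, "m": m})

# step 1: center the predictor and the moderator
df["xc"] = df["x"] - df["x"].mean()
df["mc"] = df["m"] - df["m"].mean()

# step 2: create the product variable
df["xm"] = df["xc"] * df["mc"]

# step 3: first block with the main effects (add control variables here if you have any)
step1 = smf.ols("y ~ xc + mc", data=df).fit()

# step 4: add the product variable and test the increase in R-square
step2 = smf.ols("y ~ xc + mc + xm", data=df).fit()
print("R-square step 1:", round(step1.rsquared, 3))
print("R-square step 2:", round(step2.rsquared, 3))
print(anova_lm(step1, step2))   # F test of the R-square change
print(step2.summary())          # the t test of xm gives the same conclusion
```

Because only one parameter is added in the second block, the F test of the R-square change and the t test of the product term are equivalent here; either can be reported.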
A few comments:
a) Step 1 is not important for testing the moderator but helps later (in case of a significant moderator effect) to display the moderator effect. Please note that significance alone is not necessarily a hint for the kind of moderation you have theoretically in mind; it depends on the form of the moderation/interaction.
b) The creation of a product variable adds a tremendous portion of collinearity (a high correlation between the predictor/moderator and the product variable), which is unavoidable. This increases the standard errors and lowers the power. In contrast to various beliefs, centering/standardizing does NOT reduce this collinearity problem; only an increase of the sample size can reduce its negative impact (see the small check below these comments). I include a reference below that illustrates this.
c) In addition to the collinearity issue, the product variable has a low reliability, which additionally reduces the power. Hence: it is difficult to find moderator effects. A solution can be to switch to structural equation modeling and create a latent product variable.
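Regarding b), here is a small self-contained check (again Python with made-up names, not anything from the thread) that centering merely reparameterizes the model and leaves the test of the product term untouched:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 200
x = rng.normal(5, 2, n)          # deliberately not centered
m = rng.normal(3, 1, n)
y = 1.0 + 0.4 * x + 0.3 * m + 0.2 * x * m + rng.normal(size=n)
df = pd.DataFrame({"y": y, "x": x, "m": m})
df["xc"] = df["x"] - df["x"].mean()
df["mc"] = df["m"] - df["m"].mean()

raw = smf.ols("y ~ x * m", data=df).fit()        # uncentered product
cen = smf.ols("y ~ xc * mc", data=df).fit()      # centered product
print(raw.tvalues["x:m"], cen.tvalues["xc:mc"])  # identical t values for the product term
```

Centering only changes what the lower-order (main effect) coefficients mean, namely the effect at the mean of the other variable, which is why it helps with displaying the effect (comment a) but not with power.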
HTH
Holger
Frazier, P. A., Barron, K. E., & Tix, A. P. (2004). Testing moderator and mediator effects in counseling psychology. Journal of Counseling Psychology, 51(1), 115-134.
Echambadi, R., & Hess, J. D. (2007). Mean-centering does not alleviate collinearity problems in moderated multiple regression models. Marketing Science, 26(3), 438-445.
Steinmetz, H., Davidov, E., & Schmidt, P. (2011). Three approaches to estimate latent interaction effects: Intention and perceived behavioral control in the theory of planned behavior. Methodological Innovations Online, 6(1), 95-110.
The procedure Holger Steinmetz suggests is designed to test the moderating effect of a continuous variable. If your moderating variable is categorical, you cannot "center" it and you should use a set of dummy variables instead.
FYI, the interpretation of the effect of categorical moderators is more straightforward than for continuous variables, but that does not mean that you should break a continuous moderator into categories (e.g., high versus low). If your moderator is continuous, you should use the literature to learn how to interpret that.
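A minimal sketch of this dummy-variable approach, again in Python/statsmodels rather than SPSS, with made-up names (x a continuous predictor, g a dichotomous moderator coded 0/1, y the outcome):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 200
x = rng.normal(size=n)
g = rng.integers(0, 2, n)                      # dichotomous moderator coded 0/1
y = 0.3 * x + 0.2 * g + 0.5 * x * g + rng.normal(size=n)
df = pd.DataFrame({"y": y, "x": x, "g": g})

fit = smf.ols("y ~ x * C(g)", data=df).fit()   # C(g) handles the dummy coding
print(fit.summary())
# the coefficient of x is the slope of y on x in group 0;
# the coefficient of x:C(g)[T.1] is the difference in slopes between the
# groups, i.e. the moderation effect
```

With a moderator of more than two categories, C(g) simply produces the full set of dummy variables and their products with x, which are then tested jointly.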
Thank you very much, David L Morgan, for your guidance on this case. I was trying it with regression but didn't think of dummy variables. Many thanks for your valuable contribution.
I really appreciate your valued contribution on my question. However, this procedure is new to me, even though I have been practicing hierarchical regression. I need to test both a dichotomous and a continuous variable, and I need to do it in SPSS. Hence, I believe both your and David L Morgan's approaches would work well with my study.
To test a variable as a moderator you only need to employ regression. Create an interaction variable by multiplying your IV with the moderator variable. Then run the multiple regression with the IV, the moderator, and the interaction in the model. Test the moderation effect by testing the regression coefficient of the interaction.
The results from the regression analysis and SEM would be essentially the same, because SEM is basically a variation on regression. If you are already familiar with regression, then there is no need to learn SEM in order to test moderation.
This is only the case a) when you model observed and not latent variables and b) when the model structure is a "common effect structure" (i.e., with several IVs and one DV).
In the case of a moderator analysis, incorporating latent variables (as long as the measurement model is valid) can have a tremendous advantage compared with observed-variable moderated regression, as you can form latent product variables, which reduces the immense biasing effect of the low reliability of observed product terms.
Hence, switching could be valuable if the moderated regression slightly fails to hit the significance border :) But I would recommend deciding on that only if that case actually happens.
Busemeyer, J., & Jones, L. (1983). Analysis of multiplicative combination rules when the causal variables are measured with error. Psychological Bulletin, 93, 549-562.
Jaccard, J., & Wan, C. K. (1995). Measurement error in the analysis of interaction effects between continuous predictors using multiple regression: Multiple indicator and structural equation approaches. Psychological Bulletin, 117(2), 348-357.
Moosbrugger, H., Schermelleh-Engel, K., & Klein, A. (1998). Methodological problems of estimating latent interaction effects. Psychological Research, 2(2), 95-108.
Steinmetz, H., Davidov, E., & Schmidt, P. (2011). Three approaches to estimate latent interaction effects: Intention and perceived behavioral control in the theory of planned behavior. Methodological Innovations Online, 6(1), 95-110.
If you do your research according to the structural equation modeling approach, then, depending on your moderator variable (a nominal variable or a scale variable with some indicators), you can use AMOS or SmartPLS as software (of course you can use either package for a scale variable with some indicators, but SmartPLS is much easier) and analyze the moderating role based on the statistical results.
Holger Steinmetz, Increasing the reliability of "unmeasured" variables is indeed one of the major advantages of SEM, but that is unrelated to testing for moderation, which is the issue here.
To the contrary. As I argued (did you read that?), the product term has a very low reliability and power, as its reliability is approximately the product of the reliabilities of the predictor and the moderator. That is, when both have a reliability of .70, the product has a reliability of only .49. Given that most interaction effects are small anyway, you have a very low chance of detecting the moderator effect.
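If I recall the Busemeyer and Jones (1983) result correctly, for centered variables the reliability of the product term is approximately

rel(X*M) = (rel_X * rel_M + r_XM^2) / (1 + r_XM^2),

where r_XM is the correlation between predictor and moderator; with uncorrelated variables this reduces to rel_X * rel_M, i.e. the .70 * .70 = .49 case above.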
I disagree that there are no best tests for assessing moderation (i.e., interaction effects). In regression, we can look at the change in R-square associated with the significance of the interaction term(s), and in structural equation models we can examine changes in chi-square and fit indices.
For analyzing the interaction among variables, you can use regression in SPSS as well as regression in SPSS Amos. The results are the same. The advantage of SPSS Amos is the graphical output.
The main issue of interest is whether you use manifest variables and product terms or latent variables and latent product-term variables (and not the software). Latent and manifest interaction models will show a stronger difference the smaller the reliability of the variables is (as this lowers the reliability of the product term).
David L Morgan, actually it is also possible in a SEM framework to inspect the change in R-square, but you would have to compute it manually. I guess it is pure tradition what to consider. I have no information on whether the significance of the product term, the change in R-square, or the chi-square difference test is "better" (which I would define as an optimal type I vs. type II error rate), and I would assume that all will lead to almost identical results. I would be very interested in any evidence in case I am wrong, because I have been asking myself that for decades :)
This is indeed an interesting chapter by Jeffrey Edwards, as he clearly points out the problems of measurement error. In addition, he argues that hierarchical tests (with F tests of the R-square difference) vs. one-shot tests (considering the significance of the effect of the product term) will give identical results.
I don't know, however, why you mention ANOVA. An ANOVA will give identical results when you have categorical exposure variables, as in a dummy regression with a product term. When you have (semi-)continuous variables, ANOVA does not work, as you would have to median-split the exposure variables (which one should not do).
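To illustrate the first point with a small made-up example (Python/statsmodels, names and data invented for this sketch): for two dichotomous factors, the interaction F test from the ANOVA table and the t test of the product term in a dummy regression are the same test (F = t^2):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(2)
n = 200
a = rng.integers(0, 2, n)                     # dichotomous factor coded 0/1
b = rng.integers(0, 2, n)                     # dichotomous factor coded 0/1
y = 0.5 * a + 0.3 * b + 0.4 * a * b + rng.normal(size=n)
df = pd.DataFrame({"y": y, "a": a, "b": b})

# ANOVA-style specification with categorical factors
anova_fit = smf.ols("y ~ C(a) * C(b)", data=df).fit()
print(anova_lm(anova_fit, typ=2))             # interaction row gives the F test

# dummy regression with a product term (a and b already are 0/1 dummies)
dummy_fit = smf.ols("y ~ a + b + a:b", data=df).fit()
print(dummy_fit.tvalues["a:b"] ** 2)          # equals the interaction F above
```

The two specifications span the same design matrix, so the interaction test is identical; the ANOVA framing only changes how the results are presented.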