But remember that there is an ongoing debate about which type of sums of squares (Type II vs. Type III) is appropriate - this matters in unbalanced designs.
But Thierry, in that case you just enter your covariate and determine whether the effect is still significant or not - you do not test any interaction?
Sorry, I may have gone too fast and skipped a few steps in SPSS. Imagine that you have a variable X and a covariate, "Age" for example. To test the interaction between X and Age in SPSS, use the General Linear Model module, choose "Univariate" (for this example), and declare your model via the "Model" button; a new window opens, click on "other", and you can then specify which effects you want to test (main effect, interaction). But once again, this procedure is often incomplete and leads to problems widely discussed in Yzerbyt, Muller & Judd.
The book you quoted says nothing about what to do when there are in fact interactions between treatment and covariates. I think the assumption of 'homogeneity of slopes' is too rigid for practical work, and it is an unnecessary assumption as well, because such terms can be included and interpreted in linear models. The first part is covered in the chapter; the second is missing.
There are many circumstances where such interactions occur; see for instance:
Byar, D. P. (1985). Assessing apparent treatment-covariate interactions in randomized clinical trials. Statistics in Medicine, 4(3), 255-263. Retrieved from http://www.ncbi.nlm.nih.gov/pubmed/4059716
I do agree; I just made her aware of these assumptions. If Natalie gets a significant interaction (moderation), L matrices should be computed in order to test the different levels. Check this link out;
In any stats package you test the interaction effect in regression (of which ANCOVA is a special case) by adding a product term to the model. Some packages let you do this via special commands or options, but all you need to do in general regression software is create a new predictor that is the product of the two original predictors.
e.g., main effects model: Y = b0 + b1X1 + b2X2
interaction model: Y = b0 + b1X1 + b2X2 + b3X3, where X3 = X1 x X2
The test of the interaction is the usual t test of X3 or, equivalently, the F test of the R^2 change between the two models.
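A minimal sketch of that model comparison, using simulated data and plain numpy least squares rather than any particular stats package (the coefficients and sample size are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x1 = rng.normal(size=n)          # continuous covariate
x2 = rng.integers(0, 2, size=n)  # e.g. a 0/1 treatment indicator
y = 1.0 + 0.5 * x1 + 0.8 * x2 + 0.6 * x1 * x2 + rng.normal(size=n)

def r_squared(predictors, y):
    """R^2 from an OLS fit with an intercept."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

r2_main = r_squared([x1, x2], y)          # Y = b0 + b1X1 + b2X2
r2_int = r_squared([x1, x2, x1 * x2], y)  # adds X3 = X1 x X2

# F test of the R^2 change: one extra parameter, n - 4 residual df
f_change = (r2_int - r2_main) / ((1 - r2_int) / (n - 4))
```

Comparing `f_change` against an F(1, n - 4) distribution gives the same p-value as the t test of X3 in the full model.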
Complications:
- interpretation can be tricky and it is advisable to explore any interaction effects graphically
- centering is not strictly required, but centering X1 and X2 (if continuous) prior to computing the product term often aids interpretation
- for models with other manipulated IVs it can be helpful to include additional interaction terms with the covariates to avoid bias
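On the centering point, a small sketch with simulated data (all numbers hypothetical) shows that centering the predictors before forming the product leaves the interaction coefficient unchanged; only the lower-order coefficients change, and they become simple slopes at the mean of the other predictor:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 150
x1 = rng.normal(10, 2, size=n)  # hypothetical covariate far from zero
x2 = rng.normal(5, 1, size=n)
y = 2 + 0.4 * x1 + 0.3 * x2 + 0.5 * x1 * x2 + rng.normal(size=n)

def fit(a, b, y):
    """OLS coefficients [b0, b1, b2, b3] for y = b0 + b1*a + b2*b + b3*a*b."""
    X = np.column_stack([np.ones(n), a, b, a * b])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

b_raw = fit(x1, x2, y)
b_cen = fit(x1 - x1.mean(), x2 - x2.mean(), y)

# b3 (index 3) is identical in both fits; b1 and b2 are not.
```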
The basics of the test are not difficult, but the interpretation and related modeling issues can be tricky, so I would advise finding a text with good coverage of the issues. For example:
Cohen, J., Cohen, P., West, S. G., & Aiken, L. S. (2003). Applied multiple regression/correlation analysis for the behavioral sciences. Mahwah, NJ: Erlbaum.
Baguley, T. (2012). Serious stats: A guide to advanced statistics for the behavioral sciences. Basingstoke: Palgrave.
Jaccard, J., Turrisi, R., & Wan, C. K. (1990). Interaction effects in multiple regression. Thousand Oaks, CA: Sage.
As far as I know, an ANCOVA is not reasonable (1) if the covariate is not related to the dependent variable, or (2) if the independent-variable groups differ on the covariate. To check:
(1) Use correlation.
(2) Use ANOVA (covariate as the dependent variable, the grouping variable as the factor).
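A minimal sketch of those two preliminary checks on simulated data (the group sizes, means, and the 0.3 slope are all hypothetical), again with plain numpy:

```python
import numpy as np

rng = np.random.default_rng(1)
groups = np.repeat([0, 1, 2], 40)            # hypothetical 3-group independent variable
covariate = rng.normal(50, 10, size=120)     # e.g. participants' ages
dv = 0.3 * covariate + rng.normal(size=120)  # hypothetical dependent variable

# Check (1): is the covariate related to the DV? (simple correlation)
r = np.corrcoef(covariate, dv)[0, 1]

# Check (2): do the groups differ on the covariate?
# One-way ANOVA F statistic with the covariate as the outcome:
grand = covariate.mean()
means = np.array([covariate[groups == g].mean() for g in (0, 1, 2)])
ss_between = sum(40 * (m - grand) ** 2 for m in means)
ss_within = sum(((covariate[groups == g] - means[g]) ** 2).sum() for g in (0, 1, 2))
f_stat = (ss_between / 2) / (ss_within / 117)  # df = (3 - 1, 120 - 3)
```

Compare `f_stat` against an F(2, 117) distribution; a large value would indicate the groups differ on the covariate.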