When using CFA models to examine measurement invariance, one needs to identify these models by assigning a scale to the latent constructs. Two ways of doing this seem to appear frequently in the literature: (1) fixing the factor (latent) means to zero and the factor variances to one; and (2) fixing one item's (the reference item's) intercept to zero and that item's factor loading to one.
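
For concreteness, here is a sketch of the two identification strategies in generic multi-group CFA notation (the symbols are my own shorthand and not tied to any particular software):

\[
x_{ij}^{(g)} = \tau_j^{(g)} + \lambda_j^{(g)}\,\eta_i^{(g)} + \varepsilon_{ij}^{(g)}, \qquad
\eta_i^{(g)} \sim N\!\left(\kappa^{(g)},\ \phi^{(g)}\right)
\]

Approach (1): fix \(\kappa^{(g)} = 0\) and \(\phi^{(g)} = 1\) in every group \(g\) (latent standardization).
Approach (2): fix \(\tau_r^{(g)} = 0\) and \(\lambda_r^{(g)} = 1\) for one reference indicator \(r\) per factor in every group (marker-variable method).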

Both ways have their pros and cons. Regarding the first, it might be unrealistic to assume a latent mean of zero (or a factor variance of one) in all groups. Regarding the second, by constraining the reference item's intercept and factor loading, one assumes that this item is invariant, while in fact its loading or intercept might differ across groups. (This might be partly addressed by re-running the models with different reference items.)

What is your opinion about this? What would be the best way to identify your model?

(In our case, we have 4 latent factors with 3 indicators each, and we would like to examine MI across four groups.)
