My question concerns a rather unclear point about error correlation that many researchers encounter when conducting SEM (structural equation modeling) analyses. Researchers quite often report correlating error terms to improve the overall goodness of fit of their models. Hermida (2015), for instance, provided an in-depth analysis of this issue and pointed out that in many social science studies researchers do not give an appropriate justification for correlating errors. I have read in Harrington (2008) that correlated measurement errors can result from similar or closely related wording of the statements that participants are asked to rate. Another justification applies to longitudinal studies, and there is also the option of an a priori justification based on the nature of the study variables.

In my own case, one pair of items shows a modification index above 20:

       lhs op   rhs      mi   epc sepc.lv sepc.all sepc.nox
12   item1 ~~ item2  25.788 0.471   0.471    0.476    0.476

After correlating these errors, the model fit looks very good (the model consists of five first-order latent factors and two second-order latent factors; n = 168; around 23 items in total). However, I am concerned about how to justify the error correlation. In my case the wording of the two items is very similar: "With other students in English language class I feel supported" (item 1) and "With other students in English language class I feel supported" (item 2), both rated on a 7-point Likert scale. According to Harrington (2008), this can be sufficient justification for correlating the errors.
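For concreteness, here is a minimal sketch of the re-specification step in lavaan, which is where the modification-index output above comes from. The factor name, item names, and data frame are placeholders rather than my actual model:

library(lavaan)

# Baseline CFA (placeholder names; the real model has five first-order
# and two second-order factors)
model_base <- '
  support =~ item1 + item2 + item3 + item4
'
fit_base <- cfa(model_base, data = survey_data)

# Inspect modification indices above a cut-off
modindices(fit_base, sort. = TRUE, minimum.value = 10)

# Re-specify with a correlated residual between the similarly worded items
model_corr <- '
  support =~ item1 + item2 + item3 + item4
  item1 ~~ item2
'
fit_corr <- cfa(model_corr, data = survey_data)

# Compare the nested models and check fit of the re-specified model
anova(fit_base, fit_corr)
fitMeasures(fit_corr, c("cfi", "tli", "rmsea", "srmr"))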

However, I would appreciate any comments on whether similar item wording is, on its own, a sufficient justification for correlating error terms.

Any real-life examples of item/question wording that justified correlated errors, or articles on the same topic, would also be much appreciated.
