I have been working on developing a new measurement tool. I have followed the standard scale development steps and best practices and, via exploratory factor analysis (EFA), have reduced the item set from 31 to 22 items that I now plan to validate. I am preparing to collect a new sample of data on which we will conduct confirmatory factor analysis (CFA). Since this is a three-country study, I am considering fielding all 31 items again but limiting the CFA to the 22 items that make up the scale structure (4 domains) that emerged from the EFA. Is there a methodological reason why this approach is problematic?

I could field only the 22 items, but asking the additional 9 questions poses no extra burden on the data collectors, creates only limited burden for respondents (it adds just a few minutes of administration time), and does not increase the cost of data collection. If the CFA results turn out to be poor, having two independent data sets with all 31 items may be useful for conducting further EFA, or for developing country-specific tools rather than a single multi-country global tool.

Most scale development articles and books state that it is best practice to validate a tool on a new sample, but I cannot find any information about whether it is methodologically inappropriate to collect an expanded set of items and then limit the validation analyses to the reduced set identified during the EFA. I would value any experience with, or advice on, this issue.
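Concretely, what I have in mind is something like the following minimal sketch in Python (using the semopy package for the CFA; the item names, domain labels, and item-to-domain assignments are placeholders, not my actual structure). Fielding all 31 items simply means the data set contains 9 extra columns that the CFA model never references.

```python
import pandas as pd
import semopy

# Hypothetical file of second-wave responses to all 31 fielded items
# (columns item01..item31).
data = pd.read_csv("wave2_responses.csv")

# CFA model restricted to the 22 retained items, grouped into the 4 domains
# that emerged from the EFA (placeholder item-to-domain assignments).
model_desc = """
domain1 =~ item01 + item02 + item03 + item04 + item05 + item06
domain2 =~ item07 + item08 + item09 + item10 + item11
domain3 =~ item12 + item13 + item14 + item15 + item16 + item17
domain4 =~ item18 + item19 + item20 + item21 + item22
"""

# Fit the CFA; the 9 extra columns in `data` are ignored because
# they never appear in the model description.
model = semopy.Model(model_desc)
model.fit(data)

# Fit statistics (CFI, TLI, RMSEA, etc.) for the 22-item, 4-domain structure.
print(semopy.calc_stats(model))
```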
