While developing a questionnaire to measure several personality traits in a somewhat unconventional way, I am now facing a dilemma due to the size of my item pool. The questionnaire contains 240 items, theoretically derived from 24 scales. Although 240 items isn't a "large item pool" per se, the processing time per item averages ~25 seconds. This yields an overall processing time of roughly 100 minutes (240 × 25 s), i.e. well over 1.5 hours - far too much, even for the bravest participants!

In short, this results in a presumably common dilemma: which aspects of the data from my item-analysis sample do I have to sacrifice?

  • Splitting the questionnaire into parallel tests will reduce processing time, but hinder factor analyses (a sketch of one possible split follows this list).
  • Splitting the questionnaire into within-subject parallel tests administered over time will require infeasible sample sizes due to a) drop-out and b) noise introduced by possibly low stability of the traits over time.
  • A processing time of over 30 minutes will tire participants and jeopardize data quality in general.
  • Randomizing the item order and tolerating the >1.5 hours of processing time will again require an infeasible sample size, due to lower item intercorrelations.

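To make the first option a bit more concrete, below is a minimal sketch (an illustration of one possible planned-missingness layout, not my actual design): the 240 items are shuffled into blocks, and each participant answers one pair of blocks, with the pairs chosen so that every pair of blocks is seen by some participants. That keeps all pairwise item covariances estimable (e.g. for FIML-based factor analysis) while cutting per-person processing time. The block count, form size, and function names are illustrative assumptions.

```python
# Sketch of an overlapping-forms / planned-missingness split of a 240-item pool.
# Assumptions: 6 blocks of 40 items, forms consist of 2 blocks each.
from itertools import combinations
import random

N_ITEMS = 240          # total item pool
N_BLOCKS = 6           # assumed: 6 blocks of 40 items each
SECONDS_PER_ITEM = 25  # average processing time per item

items = list(range(N_ITEMS))
random.seed(1)
random.shuffle(items)  # spread scales across blocks rather than keeping them intact

block_size = N_ITEMS // N_BLOCKS
blocks = [items[i * block_size:(i + 1) * block_size] for i in range(N_BLOCKS)]

# One "form" per pair of blocks: C(6, 2) = 15 forms of 80 items (~33 minutes each),
# so every pair of blocks co-occurs in exactly one form.
forms = [sorted(blocks[a] + blocks[b]) for a, b in combinations(range(N_BLOCKS), 2)]

def assign_form(participant_id: int) -> list[int]:
    """Rotate participants through the forms so each form gets roughly equal n."""
    return forms[participant_id % len(forms)]

minutes = len(forms[0]) * SECONDS_PER_ITEM / 60
print(f"{len(forms)} forms, {len(forms[0])} items each, ~{minutes:.0f} min per participant")
```

Of course, this only restates the trade-off in code: the sample size needed to estimate each block pair's covariances with acceptable precision is exactly the open question.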
I'm aware that this probably has to be tackled by conducting multiple studies, but that alone doesn't solve most of the problems described.

This must be a very common practical obstacle, and I am curious how other social scientists tackle it. Maybe there is even some best-practice advice?

Many thanks!
