01 December 2021

Suppose a scale has two items that seem very similar. For instance, suppose a survey asks people to rate their agreement with two items (e.g., 1 = strongly disagree, 7 = strongly agree), but the items look almost identical and do not appear to differ meaningfully.

What problems, if any, may arise from using items that may be redundant or "too similar"? And are there any heuristics or guidelines for making judgment calls about item similarity?

Beyond whatever quantitative means one might use to assess item redundancy, what should one do if the items are not flagged as redundant by those measures but still seem like they aren't meaningfully different? I would worry that any reliable difference in participant responses is not an indication that the two items are assessing non-overlapping contours of a construct, but rather reflects something idiosyncratic about the items that has nothing to do with capturing the construct in question.
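(For what it's worth, one common quantitative check is the inter-item correlation: a very high correlation between two items is often read as a sign of possible redundancy. A minimal sketch, using made-up Likert responses and an assumed rule-of-thumb cutoff of roughly .80, which is a heuristic rather than a formal criterion:)

```python
import numpy as np

# Hypothetical 7-point Likert responses from 10 participants (invented data)
item_a = np.array([6, 7, 5, 6, 7, 4, 5, 6, 7, 5])
item_b = np.array([6, 6, 5, 7, 7, 4, 5, 6, 6, 5])

# Pearson inter-item correlation between the two items
r = np.corrcoef(item_a, item_b)[0, 1]
print(f"inter-item r = {r:.2f}")

# A correlation above ~.80 is sometimes treated as a redundancy flag
# (a rule of thumb only -- it cannot tell you WHY the items covary)
if r > 0.80:
    print("items may be redundant")
```

As the question notes, though, a high (or merely moderate) correlation by itself cannot distinguish items tapping distinct facets of a construct from items that differ for construct-irrelevant, idiosyncratic reasons.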

I would absolutely be interested in any articles on this topic as well. Thanks!
