I have combed through many studies that estimate illegal resource use. These studies rely on responses from survey participants, which are prone to social desirability bias (SDB). To control for SDB, the authors employ the Randomized Response Technique (RRT). With the help of a randomising device (a coin or die), the interviewer introduces statistical noise that conceals an individual participant's response, thereby increasing anonymity and encouraging the respondent to answer honestly.
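For readers unfamiliar with the mechanics, here is a minimal sketch of the forced-response variant of the RRT. The die probabilities are illustrative, not taken from any particular study: a roll of 1 forces a "yes", a roll of 6 forces a "no", and rolls 2-5 require a truthful answer. Because the interviewer never sees the die, no single "yes" is incriminating, yet the population prevalence can still be recovered by subtracting the known forced-"yes" probability and rescaling.

```python
import random


def simulate_rrt(n, true_prevalence, seed=0):
    """Simulate n forced-response RRT interviews with a six-sided die.

    Hypothetical design: roll 1 -> forced "yes", roll 6 -> forced "no",
    rolls 2-5 -> answer the sensitive question truthfully.
    Returns the observed proportion of "yes" answers.
    """
    rng = random.Random(seed)
    yes = 0
    for _ in range(n):
        # True status of the respondent, never seen by the interviewer.
        engages = rng.random() < true_prevalence
        roll = rng.randint(1, 6)
        if roll == 1:
            answer = True        # forced "yes"
        elif roll == 6:
            answer = False       # forced "no"
        else:
            answer = engages     # truthful answer
        yes += answer
    return yes / n


def estimate_prevalence(yes_rate, p_truth=4 / 6, p_forced_yes=1 / 6):
    """Unbiased estimator: pi_hat = (lambda - p_forced_yes) / p_truth."""
    return (yes_rate - p_forced_yes) / p_truth
```

With a large sample, `estimate_prevalence(simulate_rrt(100000, 0.3))` recovers a value close to the true 30% prevalence, even though individual answers are masked.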

Although practical in theory, the same statistical noise that is meant to reduce SDB could lead participants to disobey the instructions: a forced "yes" can feel like an admission of a sensitive behaviour they did not engage in. Additionally, do participants really comprehend the statistics well enough to understand how their responses are concealed?

The only validations of the RRT I could find were weak comparative studies resting on a "more is better" assumption: more admissions of the sensitive behaviour of interest are taken to indicate a more accurate method and a reduction in SDB. However, no study has actually validated the prevalence estimates these methods produce against any directly observed data.

In fact, many studies in the social sciences have found that the RRT often produces paradoxical estimates, wastes a significant amount of data, requires considerably more resources to administer, and relies on instructions and a method that participants do not easily understand.
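On the "paradoxical estimates" point, the arithmetic of the estimator shows how this can happen. Continuing the illustrative forced-response design above (die roll 1 forces a "yes", roll 6 forces a "no", rolls 2-5 are truthful; these probabilities are my own example, not from the studies I reviewed): for a rare behaviour, sampling noise can push the observed "yes" rate below the forced-"yes" probability, and the estimator then returns a negative, hence impossible, prevalence.

```python
def estimate_prevalence(yes_rate, p_truth=4 / 6, p_forced_yes=1 / 6):
    """Forced-response RRT estimator: pi_hat = (lambda - p_forced_yes) / p_truth."""
    return (yes_rate - p_forced_yes) / p_truth


# Observed "yes" rate of 15% is below the 1/6 (~16.7%) forced-"yes" floor,
# which the estimator converts into a negative prevalence.
print(estimate_prevalence(0.15))  # -> -0.025
```

Nothing in the protocol prevents this outcome; it is a structural consequence of subtracting the noise back out, and it is one reason the "more admissions = more validity" argument feels unsatisfying to me.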

Yet the RRT is growing in popularity in the literature. My question is: with no strong evidence of its validity, why is the RRT so readily employed?
