My research team and I collected a batch of data this month (~150 workers) from MTurk. Despite having many quality checks embedded in the survey (e.g., multiple attention checks, reverse-coded items), we still find the data suspicious. We ran a similar study one year ago, and one of our measures assesses sexual assault perpetration rates. We used the same measure in our current study, and the perpetration rates are unusually high this time. Is anyone else having trouble finding quality participant responses on MTurk? Does anyone have suggestions for how we could target better participants? Are there any forums or blog posts we should be aware of that would help us better understand what is going on? Any information would help and be greatly appreciated!

Thanks in advance!
