I am analyzing survey responses in which people's written answers in the "please explain" text fields differ sharply from the Likert-scale choice they selected on the same question. For example:
Q: How would you rate the responsiveness of home office to field office needs?
Answer choices = Excellent, Fair, Neutral, Poor, Terrible (respondent selected "Fair")
The *same* respondent's text response in the comment field = "Home office has almost never responded to any of my requests as field office director. My understanding from other ODs [office directors] is that they have an equally hard time getting home office to reply to requests, let alone get the request filled."
This is a strong, broad pattern across many questions and many respondents. I'm beginning to think I either did something wrong that broke the platform (SurveyMonkey) or just did a really poor job of constructing the questions. If it isn't me, what causes people to answer this way? Is this a known phenomenon in survey research, or is it just me?
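In case it helps to see how I'm identifying these cases: below is a rough sketch of the kind of check I'm running, not my actual pipeline. The column names (`rating`, `comment`), the file name, and the tiny cue-word lists are hypothetical placeholders, not a real SurveyMonkey export schema.

```python
# Minimal sketch: flag rows where the Likert choice and the free-text comment
# appear to point in opposite directions. Column names and cue lists are
# illustrative placeholders only.
import csv

# Map each scale label to a rough numeric score (higher = more positive).
SCALE = {"excellent": 2, "fair": 1, "neutral": 0, "poor": -1, "terrible": -2}

# Tiny illustrative cue lists; a real analysis would use a proper method.
NEGATIVE_CUES = {"never responded", "hard time", "no response", "ignored"}
POSITIVE_CUES = {"quick", "helpful", "responsive", "always answers"}

def comment_polarity(text: str) -> int:
    """Crude polarity: positive cue phrases minus negative cue phrases."""
    lowered = text.lower()
    return (sum(cue in lowered for cue in POSITIVE_CUES)
            - sum(cue in lowered for cue in NEGATIVE_CUES))

def flag_mismatches(path: str):
    """Yield rows whose scale choice and comment tone disagree in sign."""
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            rating = SCALE.get(row["rating"].strip().lower())
            if rating is None or not row["comment"].strip():
                continue
            if rating * comment_polarity(row["comment"]) < 0:
                yield row

if __name__ == "__main__":
    for row in flag_mismatches("responses.csv"):
        print(row["rating"], "->", row["comment"][:80])
```

Even with a crude check like this, a large share of responses get flagged, which is what makes me suspect the pattern is real rather than an artifact of my setup.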