What do you mean by "reliability"? Accuracy of the result (and against what criteria)? Repeatability of the result? Certainly a good idea in survey design would be a trial run - give the question to a bunch of volunteers and see what you get for results. It would help to know what your open ended question is, or is the answer 42?
Thank you for your response. By "reliability" I mean the consistency and content validity of the question; accuracy of the result is certainly part of it. Normally, for many questions in a questionnaire, we check Cronbach's alpha, which requires at least two items, and that too for scaled, ordinal, or nominal data. How do we check it for a question like:
"What message does the signage convey? Explain." I have collected a few responses.
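As an aside, the Cronbach's alpha mentioned above is easy to compute directly from an item-score matrix. A minimal sketch in Python with NumPy; the item scores below are hypothetical and only for illustration:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                         # number of items
    item_vars = items.var(axis=0, ddof=1)      # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of respondents' total scores
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Hypothetical 5-point Likert responses: 4 respondents, 3 items
scores = np.array([
    [4, 5, 4],
    [3, 4, 3],
    [5, 5, 5],
    [2, 3, 2],
])
print(round(cronbach_alpha(scores), 3))
```

Note this only applies to multi-item scales, which is exactly the limitation raised above: it has no direct analogue for a single open-ended question.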
Yes, I will definitely do a trial run to collect more responses. Regards, Sangeeta.
I am a believer that a mixture of categorical and open-ended responses is useful. There is a good statistical methodology I was taught in my Master's degree: look at the distribution of the responses, and based upon that distribution you can develop the category boundaries (rather than using an arbitrary Likert 1, 3, 5, 7, 9 scale).
I'd suggest asking a question like: "This signage provides a positive message to me about [safety, or whatever subject the message is supposed to deliver]." And then: "Please tell me the message you did receive from the sign and how you feel about it."
Yes, the responses I got gave me a distribution that deviates from normal. The answers I received were things like 'stop' and 'no entry to certain areas'. I felt the answers were pretty accurate: the signage is confusing people, and they don't know what to stop or what to do next, as they wrote in their answers. I have also analysed the signage myself and reached the same conclusion the subjects did.
OK. By the way, I should have said that I like to use "Strongly Agree", "Agree", "Neutral", "Disagree", and "Strongly Disagree" for the categorical question. It sounds like you are looking at safety signs. So from an overall performance perspective, I might ask: have there been any defects, events, or injuries due to failure to follow (or misinterpretation of) the signage? I must admit our problems usually relate to the procedure being used rather than the signage, but on occasion I've seen cases where a safety barrier and its signage get violated.
The signs are COVID-related, and I am trying to assess employees' understanding of them. That is why I have given them open-ended questions. I wanted to check the reliability of my open-ended question, but I did not find any mechanism for doing so.
You said: "distribution of the responses, and based upon the distribution you can develop the category boundaries (rather than using an arbitrary Likert 1, 3, 5, 7, 9 scale)." I liked this method, thanks. Could you please refer me to a book or paper, or elaborate if possible, on how we can use the distribution to develop categories?
I need to check my Dept of Energy work computer on Monday for that paper from the Naval Postgraduate School course. It originated from an old US Army field manual from the 1950s. With an Excel spreadsheet it is fairly easy to perform.
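One common way to turn an observed distribution into category boundaries is to cut at quantiles, so each category captures an equal share of respondents. A minimal sketch in Python with NumPy; note that the quantile approach and the simulated scores below are assumptions for illustration, not necessarily the field-manual method referred to above:

```python
import numpy as np

# Hypothetical numeric scores (e.g., coded responses), simulated here
rng = np.random.default_rng(42)
scores = rng.normal(loc=60, scale=12, size=200)

# Place boundaries at the quintiles of the observed distribution,
# so each of the five categories holds ~20% of respondents
boundaries = np.quantile(scores, [0.2, 0.4, 0.6, 0.8])

# Assign each score a category label 0..4 based on those boundaries
categories = np.digitize(scores, boundaries)

print(boundaries)            # the four empirical cut points
print(np.bincount(categories))  # responses per category (~40 each)
```

Equal-frequency cuts like this are data-driven rather than arbitrary, which matches the spirit of the approach described; the trade-off is that the boundaries shift if the sample changes.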
I will say that at the laboratories I support, managers are required to do "Management by Walking Around": they record instances when they see COVID-19 issues (lack of sanitizer, lack of social distancing, lack of masks), work to correct any physical conditions, and coach employees. They also record examples of good practices. It is part of the overall performance assurance program. I would see a survey of the workers about their knowledge of, and opinions towards, COVID precautions as an integral part of overall monitoring.