Recently, we embarked on a research project assessing primary care physicians' intentions regarding the provision of sickness certification to patients. We conducted rigorous pilot tests (both quantitative and qualitative), which led to the final version of the questionnaire. We then mailed that final version to the target population (primary care physicians) in two different states, and briefed fellow representatives in both states on how to administer the questionnaire.

We are now at the validation phase of the questionnaire. At this point, a fellow researcher asked how I could be sure that the questionnaires were answered satisfactorily. I argued that the validation process itself would allow us to identify such errors. My colleague suggested that a trained person (e.g., a research assistant) should be present while the questionnaires are being answered, but I feel this is not necessary in all cases.

I am well aware that some respondents will answer the questionnaire casually, without much thought or analysis. I know this is the bane of all questionnaire-based research. I feel it can be dealt with statistically (e.g., Cronbach's alpha, factor analysis).
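To make that concrete, here is a minimal sketch of two such statistical checks: Cronbach's alpha for internal consistency, and a simple long-string analysis to flag possible "straight-liners" who pick the same response repeatedly. The column names (q1..q10), the 5-point Likert scale, and the run-length threshold of 8 are all hypothetical assumptions for illustration, not prescriptions.

```python
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a set of Likert-type items (rows = respondents)."""
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the scale total
    k = items.shape[1]
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def longest_run(row: pd.Series) -> int:
    """Length of the longest run of identical consecutive responses.
    Very long runs suggest straight-lining (careless responding)."""
    best = run = 1
    vals = row.to_numpy()
    for prev, cur in zip(vals, vals[1:]):
        run = run + 1 if cur == prev else 1
        best = max(best, run)
    return best

# Hypothetical data: 200 respondents, 10 items on a 5-point Likert scale.
rng = np.random.default_rng(0)
df = pd.DataFrame(rng.integers(1, 6, size=(200, 10)),
                  columns=[f"q{i}" for i in range(1, 11)])

print("Cronbach's alpha:", round(cronbach_alpha(df), 3))
suspect = df[df.apply(longest_run, axis=1) >= 8]  # threshold is a judgment call
print("possible straight-liners:", len(suspect))
```

A check like this will not prove any individual response was careless, but it can flag cases for review, which partially substitutes for having an assistant present at administration.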

So how do I approach this problem? My view is that I have set numerous rules to minimise bias, and that this should reasonably (and humanly) be enough. Any thoughts on this issue? Thanks.

Rabin
