Articles often quote reliability above 70%. Some argue that higher reliability cannot be achieved in social research, hence 30-40% is acceptable. Could anyone elaborate on this?
The survey method has an impact on response rate. Generally, e-mail surveys have a lower response rate than mail surveys, even when access to the Internet is not an issue. For example, in a 2004 survey of university undergrads with e-mail access, about 21% responded to an e-mail survey while 31% responded to a mail survey (Kaplowitz et al., 2004 in Public Opinion Quarterly pp. 94-101). Face-to-face surveys achieve the highest response rates, with the best I’ve seen being a whopping 92%. Some studies report that telephone surveys have a higher response rate than mail surveys, while others report the reverse. Sending reminders boosts response rates. Oddly enough, studies have shown that sending a $2 incentive boosts both response rate and representativeness.
25% – Dr. Norman Hertz when asked by the Supreme Court of Arizona
30% – R. Allen Reese, manager of the Graduate Research Institute of Hull U. in the United Kingdom
36% – H. W. Vanderleest (1996) response rate achieved after a reminder
38% – in Slovenia where surveys are uncommon
50% – Babbie (1990, 1998)
60% – Kiess & Bloomquist (1985) to avoid bias by the most happy/unhappy respondents only
60% – AAPOR study looking at minimum standards for publishability in key journals
70% – Don A. Dillman (1974, 2000)
75% – Bailey (1987) cited in Hager et al. (2003 in Nonprofit and Voluntary Sector Quarterly, pp. 252-267)
In addition, various studies described their response rate as “acceptable” at 10%, 54%, and 65%, while others on the American Psychological Association website reported caveats regarding non-responder differences for studies with 38.9%, 40% and 42% response rates.
Nirmala, You have done an incredible job summarizing the variation in acceptable response rates... I do need to spend some time with all of your references... but simplistically, in applied mental health survey research, I have always been told that 33% is the cut-off for how low one can go and still have a viable database to reach statistical significance with your observations. Is that still true, or a dated number?
Professor Nirmala, Thank you very much for your information and articles regarding response rate. I was wondering whether response rate somehow relates to the reliability of a questionnaire. How good would it be to test reliability using Cronbach's alpha in social research and obtain a coefficient of 0.38? I have used a 5-point Likert scale. Is it essential to improve the reliability in this case, and if so, what might be the possible ways to improve it? Thanks again
In general the standard for Cronbach's alpha in the social sciences, as elsewhere, is .7 and above. Below that (especially at .38) you have too much random error in the data.
With a reliability that low, I would begin by looking for problems in the data, such as negative correlations or mistakes in coding missing data.
Response rates have rather little to do with reliability, although they may have strong effects on validity if only a specific subset of the available population answers the survey.
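To make the alpha check concrete, here is a minimal sketch of Cronbach's alpha computed from raw item scores in plain Python. The item data below are made up purely for illustration, not taken from any study mentioned in this thread:

```python
# Cronbach's alpha from raw item scores (illustrative sketch, plain Python).

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)  # sample variance

def cronbach_alpha(items):
    """items: one list of scores per item (columns), same respondents in each."""
    k = len(items)
    item_vars = sum(variance(col) for col in items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent total score
    return (k / (k - 1)) * (1 - item_vars / variance(totals))

# Three hypothetical 5-point Likert items, five respondents
items = [
    [4, 5, 3, 4, 2],
    [4, 4, 3, 5, 2],
    [5, 5, 2, 4, 1],
]
print(round(cronbach_alpha(items), 2))  # -> 0.92
```

With real data you would also inspect the item-total correlations; a negative one is exactly the kind of coding problem mentioned above.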
I have also observed 0.7 as an accepted standard for Cronbach's alpha, with some sources accepting .6 as well.
I myself experienced a situation where alpha was as low as .4, and then I found that there was an issue with coding: an item was supposed to be reverse coded, which I had missed.
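The reverse-coding fix described above is a one-line transformation; on a 5-point Likert scale a reverse-worded item is recoded as 6 minus the raw score. A small sketch with made-up scores:

```python
# Recoding a reverse-worded Likert item (illustrative sketch).
# On a scale from 1 to scale_max, the recoded value is (scale_max + 1) - x.

def reverse_code(scores, scale_max=5):
    return [scale_max + 1 - x for x in scores]

item_reverse_worded = [2, 1, 4, 2, 5]     # hypothetical "disagree"-keyed item
print(reverse_code(item_reverse_worded))  # -> [4, 5, 2, 4, 1]
```

Recomputing alpha after recoding such items is a quick first check whenever a suspiciously low value turns up.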
There are two types of reliability of interest in surveys. The first is test-retest reliability, which asks whether respondents answer consistently when the questionnaire is administered twice, and is assessed when developing the questionnaire. This is usually measured by kappa or the ICC, and there are well-established cut-off points for acceptability:
Below 0.00 Poor
0.00-0.20 Slight
0.21-0.40 Fair
0.41-0.60 Moderate
0.61-0.80 Substantial
0.81-1.00 Almost perfect
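For a single categorical item administered twice, Cohen's kappa can be computed directly. Here is a minimal sketch in plain Python; the test and retest responses are invented for illustration:

```python
# Cohen's kappa for test-retest agreement on one categorical item
# (illustrative sketch with made-up ratings from two administrations).
from collections import Counter

def cohen_kappa(r1, r2):
    n = len(r1)
    observed = sum(a == b for a, b in zip(r1, r2)) / n   # raw agreement
    c1, c2 = Counter(r1), Counter(r2)
    # chance agreement from the marginal category frequencies
    expected = sum(c1[cat] * c2[cat] for cat in set(r1) | set(r2)) / (n * n)
    return (observed - expected) / (1 - expected)

test   = ["yes", "yes", "no", "no",  "yes", "no", "yes", "no"]
retest = ["yes", "yes", "no", "yes", "yes", "no", "yes", "no"]
print(round(cohen_kappa(test, retest), 2))  # -> 0.75, "substantial" by the table
```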
The other type of reliability relates to scale development, and asks whether all items in the scale can form a linear composite. As mentioned above, Cronbach's alpha is the usual test statistic, and 0.7 the usual cut-off. However, alpha is affected by the number of items in the scale, and scales with only a few items are likely to have a lower alpha even when the items hang together well.
For measuring the reliability of measuring instruments such as scales, two methods can be used: 1) test-retest and 2) the split-half method. Cronbach's alpha is the usual test statistic, with 0.7 the usual cut-off. An alpha of 0.7 indicates that roughly 70 per cent of the variance in scale scores is consistent (true-score) variance rather than random error; it does not mean that 70 per cent of respondents agreed with the items. The significance of the reliability coefficient also depends on the sample size used for testing the instrument. For further details, the book by Kerlinger, F. N., "Foundations of Behavioral Research", can be consulted.
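The split-half method mentioned above can be sketched in a few lines: correlate the odd-item and even-item half scores, then step the correlation up with the Spearman-Brown formula, since each half is only half the length of the full scale. The respondent data below are made up for illustration:

```python
# Split-half reliability with the Spearman-Brown correction
# (illustrative sketch, plain Python, made-up Likert data).

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def split_half_reliability(rows):
    """rows: one list of item scores per respondent."""
    odd = [sum(r[0::2]) for r in rows]    # half score from items 1, 3, ...
    even = [sum(r[1::2]) for r in rows]   # half score from items 2, 4, ...
    r = pearson(odd, even)
    return 2 * r / (1 + r)               # Spearman-Brown step-up

rows = [
    [4, 4, 5, 4],
    [5, 4, 5, 5],
    [3, 3, 2, 3],
    [4, 5, 4, 4],
    [2, 2, 1, 2],
]
print(round(split_half_reliability(rows), 2))  # -> 0.97
```

How the items are split matters in practice; odd/even is the common convention, but random splits are also used.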