Cronbach's alpha measures the internal consistency, or reliability, of a set of items; this is one of the considerations used to judge the suitability of a data set for statistical analysis (e.g., factor analysis). Other tests that also measure internal consistency are split-half reliability and odd-even reliability.
Prof. Bachir ACHOUR has rightly mentioned the ranges of Cronbach's alpha coefficients and the reliability they imply. Thanks to Prof. Achour for the detailed information on the topic.
One question to Prof. Achour: for values > 0.90, I think this indicates too much inter-relation among items, i.e. data redundancy, and is therefore not acceptable. Is that right?
Some papers have also offered indications of alpha having a threshold or cut-off for an acceptable, sufficient, or satisfactory level. This was normally given as ≥0.70 (five instances) or >0.70 (three instances), although one article more vaguely referred to “the acceptable values of 0.7 or 0.6” (Griethuijsen et al., 2014).
A question about the analysis of the scales: do you perform the analysis after transforming the reverse-worded (negative) questions? When I analyse before transforming the reverse questions my alpha is 0.73, but after transforming them it rises to 0.75.
You should absolutely reverse-score any reverse-worded items before assessing alpha. (Consider that alpha is a function of the average inter-item correlation, so if some correlations are negative, alpha is deflated.) The fact that the estimates of alpha you obtain are so similar whether or not you reverse-score is troubling, unless you have many items and only a few are reverse-scored, in which case they just don't matter much.
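To illustrate the point, here is a minimal sketch in Python with simulated data (the `cronbach_alpha` helper and the simulated scores are illustrative assumptions, not anyone's actual survey data):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the total score
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
trait = rng.normal(size=200)
# Three positively worded items plus one reverse-worded item (loads negatively).
pos = np.column_stack([trait + rng.normal(size=200) for _ in range(3)])
neg = (-trait + rng.normal(size=200)).reshape(-1, 1)
raw = np.hstack([pos, neg])

# On a real 1-5 Likert scale, reverse-scoring is (max + min) - score; with
# these continuous simulated scores, flipping the sign plays the same role.
fixed = np.hstack([pos, -neg])

print(cronbach_alpha(raw))    # deflated by the negative inter-item correlations
print(cronbach_alpha(fixed))  # clearly higher after reverse-scoring
```

With only one reverse-worded item out of four, the gap is already large here; with, say, one such item out of twenty, the two alphas would differ much less, which matches the point above.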
While reviewing a research paper, I came across a Cronbach's alpha of 0.95. This is quite high but was probably caused by the long list of items (18 of them under one construct) on the Likert scale, and by the fact that some questions were similar. I recommended that the list be shortened by 50%.
Indeed. Alpha is a mathematical function of the average inter-item correlation and the number of items, so it can be increased simply by lengthening the scale with items of similar correlation.
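That dependence can be made concrete with the standardized-alpha (Spearman-Brown-type) formula, alpha = k * r / (1 + (k - 1) * r), where k is the number of items and r the average inter-item correlation. A short sketch, with purely illustrative numbers:

```python
def standardized_alpha(k, mean_r):
    """Standardized alpha from the number of items (k)
    and the average inter-item correlation (mean_r)."""
    return k * mean_r / (1 + (k - 1) * mean_r)

# Hold the average correlation at a fairly modest .30 and only vary length:
for k in (5, 10, 25, 40):
    print(k, round(standardized_alpha(k, 0.30), 3))
# alpha climbs from roughly .68 at 5 items to above .94 at 40 items
```

So a .90+ alpha on a long scale need not mean the items hang together strongly; it can simply reflect scale length.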
Your high Cronbach alpha may reflect item redundancy. I suggest the following thread: https://www.researchgate.net/post/What_to_do_with_a_high_Cronbach_alpha_score
Describing a Cronbach's alpha as "acceptable" or not is not straightforward. Please see the graphic in Taber (2018): Article The Use of Cronbach’s Alpha When Developing and Reporting Re...
I'm preparing this table based on what was published in the article The Use of Cronbach’s Alpha When Developing and Reporting Research Instruments in Science Education by Keith S. Taber. You can find the text on page 6.
The accepted value of Cronbach’s alpha is 0.7; however, values above 0.6 are also accepted (Griethuijsen et al., 2015; Taber, 2018).
van Griethuijsen, R.A.L.F., van Eijck, M.W., Haste, H. et al. Global Patterns in Students’ Views of Science and Interest in Science. Res Sci Educ 45, 581–603 (2015). https://doi.org/10.1007/s11165-014-9438-6
Taber, K. S. (2018). The use of Cronbach’s alpha when developing and reporting research instruments in science education. Research in Science Education, 48(6), 1273–1296. https://doi.org/10.1007/s11165-016-9602-2
Arnida Jahya, in brief, I would say that 0.5 is highly likely to indicate an insufficient amount of "glue" among a set of items - even if only a few items are being considered.
Apart from that, please note that the preferable name for the metric at hand is "coefficient alpha", sometimes abbreviated to just "alpha". Even Lee Cronbach, after whom many people name the metric, said he thought it inappropriately named after him.
Cronbach’s alpha should give you a number from 0 to 1; the closer the coefficient is to 1.0, the greater the internal consistency of the items in the scale. George and Mallery (2003) provide the following rules of thumb: “α > .9 – Excellent, α > .8 – Good, α > .7 – Acceptable, α > .6 – Questionable, α > .5 – Poor, and α < .5 – Unacceptable”.
I hope I don't come across as disagreeable, but there are two problems people don't seem to be aware of. The minor one is that this metric is more correctly referred to as coefficient alpha, not Cronbach's alpha, as I mentioned earlier in this thread on 9 December.
The major problem is that rules of thumb for this metric can be extremely misleading. With a large-enough number of items, say 25 or more, it's quite easy to get an alpha above .90 even when there is not much relationship between many of the items. Values of alpha can be very dependent on the number of items involved.
Danilo Rogayan Jr., please see the attached article "On the Use, the Misuse, and the Very Limited Usefulness of Cronbach’s Alpha." This popular paper describes in detail the issues Robert Trevethan mentioned.
It is worth taking seriously the overwhelming amount of critical and cautionary literature recommending against using and reporting alpha.
Miky Timothy, thank you for following up on my post. I was being lazy and didn't include the citation you've provided. Here are some other articles that I think are illuminating:
Cho, E. (2016). Making reliability reliable: A systematic approach to reliability coefficients. Organizational Research Methods, 19(4), 651–682. https://doi.org/10.1177/1094428116656239
Cho, E., & Kim, S. (2015). Cronbach’s coefficient alpha: Well known but poorly understood. Organizational Research Methods, 18(2), 207–230. https://doi.org/10.1177/1094428114555994
Schmitt, N. (1996). Uses and abuses of coefficient alpha. Psychological Assessment, 8(4), 350–353. https://doi.org/10.1037/1040-3590.8.4.350
Sijtsma, K. (2009). On the use, the misuse, and the very limited usefulness of Cronbach’s alpha. Psychometrika, 74(1), 107–120. https://doi.org/10.1007/s11336-008-9101-0 [There are commentaries additional to this article in the same issue of the journal.]
I have to confess that, when I encounter some articles, there are shortcuts I use to work out whether it's worth putting much faith in the researchers' work. One such shortcut is whether the researchers can write decent sentences. Another is whether they refer to coefficient alpha as Cronbach's alpha. Another is the amount of "blind faith" they put into coefficient alpha. I won't dismiss articles that exhibit these characteristics, but my faith in those articles declines with each shortcoming.
I have a question: can we use the same participants for both the validity test and the summary test, in a situation where the number of participants is limited because of the criteria?
A generally accepted rule is that an α of 0.6-0.7 indicates an acceptable level of reliability, and 0.8 or greater a very good level. However, values higher than 0.95 are not necessarily good, since they might be an indication of redundancy.
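The redundancy problem is easy to see in a short Python sketch with simulated near-duplicate items (everything here, including the `cronbach_alpha` helper, is an illustrative assumption rather than real survey data):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) score matrix."""
    k = items.shape[1]
    return k / (k - 1) * (1 - items.var(axis=0, ddof=1).sum()
                          / items.sum(axis=1).var(ddof=1))

rng = np.random.default_rng(1)
base = rng.normal(size=(500, 1))  # one underlying item response
# A "scale" made of five near-copies of the same item (tiny independent noise):
redundant = np.hstack([base + rng.normal(scale=0.1, size=(500, 1))
                       for _ in range(5)])
print(cronbach_alpha(redundant))  # well above .95, yet only one question is really asked
```

The five items are effectively the same question asked five times, so the very high alpha reflects narrowness, not a well-covered construct.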
I would very much like to see people avoiding simplistic (but quite pervasive) rules-of-thumb when referring to coefficient alpha. Instead, I recommend greater awareness about how the number of items entered into an analysis has a considerable bearing on the alpha level and how it should be interpreted. I have provided some citations regarding that a few posts above here.
Sometimes an alpha less than .60 is possible when there is an acceptable amount of association between items (particularly if only a small number of items is involved); conversely, it's quite easy to get alphas above .90 with 20 or more items even if many of those items are not very related to each other.
The following reference should be helpful for indicating the dependence of alpha values on the number of items in a (sub)scale:
Cortina, J. M. (1993). What is coefficient alpha? An examination of theory and applications. Journal of Applied Psychology, 78(1), 98–104. https://doi.org/10.1037/0021-9010.78.1.98
Generally, acceptable values would be greater than 0.70. However, some literature (Panayides, 2013) reports that very high values of alpha (>0.90) could mean lengthy scales, parallel items, item redundancy, or narrow coverage of the construct (or construct underrepresentation).
Here's a helpful article: https://www.ijme.net/archive/2/cronbachs-alpha.pdf
Jemal Haidar, I think you mean > .7, not < .7, don't you?
Apart from that, simple rules of thumb, such as what you and many other people have suggested, can be quite misleading. For one thing, the number of items submitted to analysis for coefficient alpha can have a major impact on the outcome.
Jemal Haidar, you're welcome - and thanks for being so gracious. We all slip up on that small kind of thing from time to time.
Please forgive me if I come back and say that I think > .90 is not always best. In my experience, a coefficient alpha as high as that indicates either that a large number of items is being thrown into the pot for analysis (and many of those items might have little relationship with each other) or, conversely, that the items are so highly related that they are likely to be redundant and therefore prevent the richness and complexity of a construct from being tapped.
Cronbach's alpha ranges from 0 to 1, and higher values denote higher internal consistency. According to George and Mallery (George, D., & Mallery, P. SPSS/PC+ step by step: A simple guide and reference. Belmont, CA, United States: Wadsworth Publishing Company, 1995), these are the values to consider:
Below 0.5 shows an unacceptable level of reliability;
A value between 0.5 and 0.6 could be considered as a poor level;
If it is between 0.6 and 0.7, it is a weak level;
Between 0.7 and 0.8 would refer to an acceptable level;
In the 0.8-0.9 range it would be considered a good level; and
A value above 0.9 would be considered an excellent level.
Any value above 0.7 is acceptable. Of course, the higher the better. Just keep in mind that values above 0.9 could mean that your survey has significant redundancy.
This is an interesting discussion. There seems to be some agreement that 0.7 should be the lower bound but, as has already been mentioned by others (e.g., Musa Adekunle Ayanwale, Robert Trevethan, Japheth Mativo Nzioki), it is definitely NOT the case that "higher is better". If you are approaching 1, it just means that your measurement items are getting increasingly redundant. I have briefly discussed this in a paper of mine (page 8) and included some sources for those who want to dig deeper.
Article Taking Feyerabend to the Next Level: On Linear Thinking, Ind...
Thank you for your response Horst Treiblmaier, but I would like to differ with you in this case. The Cronbach's alpha statistic is a correlation coefficient, and one rule with correlation coefficients is that as the value increases towards one, the strength of the relationship between the variables of interest increases. Thus a +0.95 shows a very strong positive correlation. +1 would therefore mean a perfect positive correlation, meaning the items are measuring the same construct of interest and a score on one is highly likely to predict a score on another. -1 would mean a perfect negative correlation. This is why the optimum value of the alpha statistic is +1. You can correct me with facts if I'm wrong.
Japheth Mativo Nzioki, Horst Treiblmaier will probably respond to your comment, but I'll dive in first. I think you're confusing what a simple correlation coefficient signifies and what coefficient alpha (which is the more appropriate way to refer to what many people call Cronbach's alpha) signifies.
I encourage you (and others) to read at least some of the following:
Cho, E. (2016). Making reliability reliable: A systematic approach to reliability coefficients. Organizational Research Methods, 19(4), 651–682. https://doi.org/10.1177/1094428116656239
Cho, E., & Kim, S. (2015). Cronbach’s coefficient alpha: Well known but poorly understood. Organizational Research Methods, 18(2), 207–230. https://doi.org/10.1177/1094428114555994
Cortina, J. M. (1993). What is coefficient alpha? An examination of theory and applications. Journal of Applied Psychology, 78(1), 98–104. https://doi.org/10.1037/0021-9010.78.1.98
Hoekstra, R., Vugteveen, J., Warrens, M. J., & Kruyen, P. M. (2019). An empirical analysis of alleged misunderstandings of coefficient alpha. International Journal of Social Research Methodology, 22, 351–364. https://doi.org/10.1080/13645579.2018.1547523
Schmitt, N. (1996). Uses and abuses of coefficient alpha. Psychological Assessment, 8(4), 350–353. https://doi.org/10.1037/1040-3590.8.4.350
Sijtsma, K. (2009). On the use, the misuse, and the very limited usefulness of Cronbach’s alpha. Psychometrika, 74(1), 107–120. https://doi.org/10.1007/s11336-008-9101-0 [There are commentaries additional to this article in the same issue of the journal.]
Taber, K. S. (2018). The use of Cronbach’s alpha when developing and reporting research instruments in science education. Research in Science Education, 48(6), 1273–1296. https://doi.org/10.1007/s11165-016-9602-2
I hope that's helpful. In particular, I hope the above removes some of the reasons for the extent to which researchers worship coefficient alpha, which means it is often applied inappropriately and believed to possess properties that it doesn't.
I'm not sure how good Cronbach's alpha should be. I'm now working on a survey containing three scales; if I remove one item, the Cronbach's alpha of one scale goes up from .62 to .77, and all other psychometric characteristics calculated using SEM (CFA) become slightly better, but PCFI goes slightly down from .807 to .803 and PCLOSE changes marginally (from .940 to .943). So I'm unsure whether I should remove this item, and even whether to mention Cronbach's alpha in the article.
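For what it's worth, the ".62 versus .77" comparison can be computed for every item at once with an "alpha if item deleted" loop. A sketch in Python with simulated data (the helper names and the data are assumptions for illustration, not the poster's survey):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) score matrix."""
    k = items.shape[1]
    return k / (k - 1) * (1 - items.var(axis=0, ddof=1).sum()
                          / items.sum(axis=1).var(ddof=1))

def alpha_if_item_deleted(items):
    """Alpha recomputed with each item left out in turn."""
    return [cronbach_alpha(np.delete(items, j, axis=1))
            for j in range(items.shape[1])]

rng = np.random.default_rng(2)
trait = rng.normal(size=300)
good = np.column_stack([trait + rng.normal(size=300) for _ in range(4)])
bad = rng.normal(size=(300, 1))   # an item unrelated to the trait
scale = np.hstack([good, bad])

print(cronbach_alpha(scale))
print(alpha_if_item_deleted(scale))  # last entry: alpha without the weak item
```

Only deleting the weak item raises alpha; deleting any good item lowers it. That said, as others note in this thread, an alpha-driven deletion should be weighed against content coverage and the CFA evidence, not applied mechanically.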
It is posited in the literature that an instrument is regarded as reliable if the Cronbach's alpha coefficient is greater than 0.5. A reliability value of Cronbach's alpha between ±0.41 and ±0.70 qualifies as moderate reliability of the scale measured, while a value greater than ±0.70 shows high internal consistency.
I think it varies from field to field what score should be considered acceptable. For instance, we can already see that varying qualitative descriptors are being used by different researchers, as mentioned in the attached article: Article The Use of Cronbach’s Alpha When Developing and Reporting Re...
Mohammad A. Tashtoush, please forgive me if I appear to be difficult to get along with, but I believe a Cronbach's alpha (more appropriately referred to as coefficient alpha) "close to 1" would almost inevitably be UNDESIRABLE.
And, given your recommendation about "any value close to 1", I can't see any logic behind your having indicated that a value "equal to 1" would be undesirable.
I suggest that people who post to this thread take the effort to read some of the posts higher up in order to avoid (the widespread) simplistic and erroneous notions and recommendations about coefficient alpha. The following references might also be helpful:
Cho, E. (2016). Making reliability reliable: A systematic approach to reliability coefficients. Organizational Research Methods, 19(4), 651–682. https://doi.org/10.1177/1094428116656239
Cho, E., & Kim, S. (2015). Cronbach’s coefficient alpha: Well known but poorly understood. Organizational Research Methods, 18(2), 207–230. https://doi.org/10.1177/1094428114555994
Cortina, J. M. (1993). What is coefficient alpha? An examination of theory and applications. Journal of Applied Psychology, 78(1), 98–104. https://doi.org/10.1037/0021-9010.78.1.98
Hoekstra, R., Vugteveen, J., Warrens, M. J., & Kruyen, P. M. (2019). An empirical analysis of alleged misunderstandings of coefficient alpha. International Journal of Social Research Methodology, 22, 351–364. https://doi.org/10.1080/13645579.2018.1547523
Schmitt, N. (1996). Uses and abuses of coefficient alpha. Psychological Assessment, 8(4), 350–353. https://doi.org/10.1037/1040-3590.8.4.350
Sijtsma, K. (2009). On the use, the misuse, and the very limited usefulness of Cronbach’s alpha. Psychometrika, 74(1), 107–120. https://doi.org/10.1007/s11336-008-9101-0 [There are commentaries additional to this article in the same issue of the journal.]
Taber, K. S. (2018). The use of Cronbach’s alpha when developing and reporting research instruments in science education. Research in Science Education, 48(6), 1273–1296. https://doi.org/10.1007/s11165-016-9602-2