We use reliability analysis to test the internal consistency of the variables or items that make up a summated scale (Hair et al., 2006). The most widely used measure of internal consistency reliability is Cronbach's coefficient alpha (Cronbach, 1951).
Churchill (1979) suggests that an alpha of 0.60 or greater is adequate when developing a new questionnaire, but in most cases we follow Nunnally's (1978) threshold of .70.
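For reference, the coefficient these thresholds apply to is the standard formula

\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma^2_{Y_i}}{\sigma^2_X}\right)

where k is the number of items, \sigma^2_{Y_i} is the variance of item i, and \sigma^2_X is the variance of the total (summated) score.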
The procedure in SPSS is straightforward:
Click Analyze > Scale > Reliability Analysis.
Select all items of interest from the left-hand list and move them to the Items box on the right.
Click Statistics and, under Descriptives for, tick Item, Scale, and Scale if item deleted.
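If you want to check the SPSS output, the same quantity can be computed directly from its definition in R. A minimal sketch, assuming the items are the columns of a numeric data frame; the data frame and function names below are invented for illustration, and incomplete cases are dropped listwise (the SPSS default):

# Minimal sketch of Cronbach's alpha computed from its definition
cronbach_alpha <- function(items) {
  items <- na.omit(items)               # listwise deletion of incomplete cases
  k <- ncol(items)                      # number of items
  item_vars <- apply(items, 2, var)     # variance of each item
  total_var <- var(rowSums(items))      # variance of the summated scale
  (k / (k - 1)) * (1 - sum(item_vars) / total_var)
}

# Example call (data frame and column names are hypothetical):
# cronbach_alpha(survey[, c("q1", "q2", "q3", "q4")])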
Dear Waldemar and Mohammad, thanks for your responses. I have no problem with calculating Cronbach's alpha; I already tried the method described in Waldemar's second link. My problem is with installing userfriendlyscience in R.
How certain are you that ordinal alpha is useful in your analysis? It only reflects the reliability your scale would have if the ordinal response categories were replaced with continuous measures. See this paper for a discussion: https://doi.org/10.1177/0013164417727036
Thanks, Robert, for sharing this important article. I decided to use ordinal alpha to measure the internal consistency of my scale because all of its items have responses on an ordinal scale. It is really confusing.
@Deepani sorry to hear that, though you aren't alone. Numerous researchers (including some quantitative methodologists) have believed the misconceptions Zumbo et al. state as fact, and only recently have measurement theory researchers started to highlight these authors' flawed reasoning. Bill Revelle (author of R's psych package) has made this observation as well, from a simulation perspective, as has Tenko Raykov, who highlights the even worse misconception that items must be normally distributed in order to be valid (which is completely incorrect).
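On the practical side: if userfriendlyscience will not install, ordinal alpha can also be obtained with Revelle's psych package by running alpha() on a polychoric correlation matrix. A minimal sketch, assuming the ordinal items are the columns of a data frame called items (an invented name):

library(psych)

# Conventional Cronbach's alpha on the raw item scores
alpha(items)

# "Ordinal alpha": alpha computed from the polychoric correlation matrix
pc <- polychoric(items)   # estimate polychoric correlations for the ordinal items
alpha(pc$rho)             # pass the correlation matrix to alpha()

Whether ordinal alpha is the right summary at all is, of course, exactly the question Robert raises above.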