what is the problem with an alpha of 0.62? is it too small? you may have heard of a minimum value of 0.7 for alpha in the literature, right?
if you look at the formula for cronbach's alpha (easily found in textbooks or on wikipedia), you'll see that alpha depends on 2 things: 1) the average correlation between the items in your scale and 2) the number of items in the scale. the more items you have in your scale, the bigger alpha becomes (assuming the same average correlation...).
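to make that concrete, here is a small python sketch of the standardized form of the formula (a simplified version that works directly from the average inter-item correlation; the function name is just for illustration):

```python
# Standardized Cronbach's alpha from the average inter-item
# correlation (r_bar) and the number of items (k):
#     alpha = k * r_bar / (1 + (k - 1) * r_bar)
def standardized_alpha(k, r_bar):
    return k * r_bar / (1 + (k - 1) * r_bar)

# Same average correlation, more items -> larger alpha:
print(round(standardized_alpha(2, 0.35), 2))   # -> 0.52
print(round(standardized_alpha(5, 0.35), 2))   # -> 0.73
print(round(standardized_alpha(15, 0.35), 2))  # -> 0.89
```

with an average correlation of 0.35 held fixed, alpha climbs from 0.52 with 2 items to 0.89 with 15 — which is exactly why you can't judge 0.62 without knowing the number of items.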
so we need some more information: how many items does your scale contain? if your scale has 2 items, 0.62 is not small (but it's probably not the best - broadest - scale). if your scale has 15 items, 0.62 is probably small...
with two items you should not worry about an alpha of .62, but Cronbach's alpha is not the best estimator of reliability - maybe this article is useful for you:
by using two items (like "always" and "never"), the question should be answered by yes or no (dichotomous or binary). Between "never" and "always" there is an important distance. I'd be grateful to see any paper using only 2 items.
yes, exactly. you mixed up items (the questions or statements) with the answer categories/options of your rating scale, e.g. 1 to 5, or: always, sometimes, never, and so on...
i mentioned "2 items" in my first answer above, and i meant 2 statements or questions. 2 items is the minimum number of items for computing cronbach's alpha. and if your scale (which then probably won't be a very good scale) has only 2 items, an alpha of 0.62 is fairly large.
Ok ok, I had more than 2. Let me tell you, mates: when I did it with N=45, I had an acceptable alpha (more than 0.71); now that I have N=68 I get alpha=0.62. Thanks Timo for the paper.
How many scales do you have and how many items for each scale? You mentioned N=45 and N=68 in your previous answer, but this is presumably the number of respondents, not the number of items?
If you only have 68 respondents, this isn't really enough. There are various "rules of thumb" about sample size when you run a reliability analysis. Some authorities suggest a minimum of 300 people; others say at least 3 times as many respondents as there are items. By either of these standards, 68 does not seem enough.
You've already calculated alpha. If you are using one of the standard stats packages (SPSS, for example), you should be able to produce statistics for each item, not just for the scale as a whole. Look at the corrected item-total correlations, or at the estimate of scale alpha if that item is deleted, to determine whether any items are not working as they should. You can then review (and if necessary, rewrite and retrial) those items, or (assuming you are analysing a trial version of your instrument containing more items than you need in the final version) delete those items from your scale(s). I wouldn't do this until you have a decent-sized sample, though; with small samples the statistics are very sample-dependent.
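To illustrate the "alpha if item deleted" diagnostic, here is a small Python sketch using the raw (variance-based) formula for alpha; the function name and the simulated data are made up for illustration — SPSS and SAS report these statistics directly:

```python
import numpy as np

def cronbach_alpha(data):
    """Raw Cronbach's alpha for an (n_respondents x n_items) array."""
    data = np.asarray(data, dtype=float)
    k = data.shape[1]
    item_vars = data.var(axis=0, ddof=1)       # sample variance of each item
    total_var = data.sum(axis=1).var(ddof=1)   # variance of the total scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical responses: 68 respondents, 4 correlated items
rng = np.random.default_rng(0)
base = rng.normal(size=(68, 1))                      # shared "trait" signal
scores = base + rng.normal(scale=1.0, size=(68, 4))  # item = signal + noise

print("overall alpha:", round(cronbach_alpha(scores), 2))
for i in range(scores.shape[1]):
    reduced = np.delete(scores, i, axis=1)           # drop item i
    print(f"alpha if item {i} deleted:", round(cronbach_alpha(reduced), 2))
```

If deleting an item makes alpha go *up*, that item is pulling the scale down and is a candidate for review or removal.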
Dear John, thank you... 68 and 45 are the sample sizes. I use SAS 9.4 to calculate alpha; it gives me an alpha for each item, and the overall value is around 0.62. Alpha for some items appears acceptable (0.70). It's a 5-point Likert scale and I have always run it with 3 items. I'd be grateful to receive any paper where the authors mention what you say (N=300, at least 3 times the number of items). Regards
There have now been so many questions related to Likert-scored items and combining them into scales that I have created a thread here to share resources on this topic:
Note that the first few entries are mostly about Cronbach's alpha, while the later ones are mostly about exploratory factor analysis.
In the present case, it is worth noting that sample size does not enter the formula for Cronbach's alpha, which is calculated from the average correlation across the items and the number of items. Since the number of items didn't change between the first calculation (N=45) and the later one (N=68), the average inter-item correlation must have gone down.
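You can see how much it went down by inverting the standardized form of the alpha formula (a sketch, assuming the standardized alpha and the 3 items mentioned above; the function name is made up):

```python
# Invert the standardized-alpha formula to find the average inter-item
# correlation (r_bar) implied by a given alpha for k items:
#     alpha = k*r_bar / (1 + (k-1)*r_bar)  =>  r_bar = alpha / (k - alpha*(k-1))
def implied_r_bar(alpha, k):
    return alpha / (k - alpha * (k - 1))

print(round(implied_r_bar(0.71, 3), 2))  # N=45 sample -> 0.45
print(round(implied_r_bar(0.62, 3), 2))  # N=68 sample -> 0.35
```

So with 3 items, the drop from alpha = 0.71 to alpha = 0.62 corresponds to the average inter-item correlation falling from roughly 0.45 to roughly 0.35.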