Are you trying to calculate reliability for the entire survey? Does each of the scales that make up your survey have published reliability and validity?
Patrick, I am actually trying to generate it for sections of the survey, i.e. there are 4 sections and each has a set of questions. Unfortunately, the papers I picked the sets of questions from did not publish their reliability.
Lalitha, you are quite right; it is a specific answer within each question that I am looking at. So if a respondent picked a different answer, the behaviour I am looking for does not exist (at least that is how the theory goes in the papers that conducted these studies before). Hence I think that is where the problem might be! The theory supports it, but there is no published reliability whatsoever.
If your alpha is negative, there is only one way that can happen: your items include negative correlations. But alpha assumes that all inter-item correlations are positive, so you need to reverse-score some of your items.
Note that alpha is based solely on correlations, so it doesn't matter how the variables are scored.
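To make this concrete, here is a minimal sketch of alpha computed from scratch on hypothetical 1-5 Likert data (the items and scores are invented for illustration). The third item is negatively keyed as entered, which drives alpha below zero; reverse-scoring it restores a high alpha:

```python
def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(items):
    # items: one list of scores per item, all the same length (one score per respondent)
    k = len(items)
    totals = [sum(col[i] for col in items) for i in range(len(items[0]))]
    return k / (k - 1) * (1 - sum(variance(col) for col in items) / variance(totals))

item1 = [1, 2, 3, 4, 5, 2, 4]
item2 = [2, 2, 3, 5, 4, 1, 4]
item3 = [5, 4, 2, 2, 1, 4, 3]            # negatively keyed as entered

print(cronbach_alpha([item1, item2, item3]))      # negative
item3_rev = [6 - x for x in item3]                # reverse-score on a 1-5 scale
print(cronbach_alpha([item1, item2, item3_rev]))  # strongly positive
```

The reverse-scoring rule `6 - x` is specific to a 1-5 scale; in general it is `(min + max) - x`.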
Have you checked for reverse-coded items, if any? There are two possibilities with reverse-coded items. First, if they were wrongly coded during data entry, the reliability results will usually show an improvement when the item is deleted. Second, data entry might have been correct, but respondents might have ticked answers on a "continuous" basis (e.g. answering 4 on the first item and then 4 or 3 on the rest, without realizing that a particular reverse-coded item must be answered differently to stay consistent). This too can be detected by checking whether reliability improves when a particular item is deleted.
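The "alpha if item deleted" diagnostic described above can be sketched directly (same invented data as before; the third item is the mis-keyed one):

```python
def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(items):
    k = len(items)
    totals = [sum(col[i] for col in items) for i in range(len(items[0]))]
    return k / (k - 1) * (1 - sum(variance(col) for col in items) / variance(totals))

def alpha_if_item_deleted(items):
    # recompute alpha with each item left out in turn
    return [cronbach_alpha(items[:i] + items[i + 1:]) for i in range(len(items))]

items = [[1, 2, 3, 4, 5, 2, 4],
         [2, 2, 3, 5, 4, 1, 4],
         [5, 4, 2, 2, 1, 4, 3]]   # third item entered with reversed keying

print(cronbach_alpha(items))         # negative overall
print(alpha_if_item_deleted(items))  # jumps sharply when the third item is dropped
```

The item whose removal produces the large jump is the one to inspect for coding or response problems.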
Thank you all for your comments. After taking them into consideration and searching further, I found that Principal Components Analysis, or another multivariate analysis, might be a better option for assessing reliability.
Unfortunately, the main problem I am facing is having different scales within the element I am trying to measure. Most of the items have fewer than 4 response options, and correlation does not seem realistic.
I would still appreciate any further comments if you think I am not on the right path.
PCA uses the same assumptions as alpha, since both are based on the analysis of a correlation matrix. Note, however, that PCA can handle negative correlations (it just assigns some items negative loadings).
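A small library-free sketch of that point, using the same invented three-item data: build the correlation matrix and extract the first principal component by power iteration. The negatively keyed item simply receives a negative loading rather than breaking the analysis (the data and the power-iteration approach are illustrative choices, not a prescribed method):

```python
def corr(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)

def first_component(R, iters=500):
    # power iteration for the dominant eigenvalue/eigenvector of R
    v = [1.0] + [0.0] * (len(R) - 1)
    for _ in range(iters):
        w = [sum(R[i][j] * v[j] for j in range(len(v))) for i in range(len(R))]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    eigval = sum(sum(R[i][j] * v[j] for j in range(len(v))) * v[i]
                 for i in range(len(v)))
    return eigval, v

items = [[1, 2, 3, 4, 5, 2, 4],
         [2, 2, 3, 5, 4, 1, 4],
         [5, 4, 2, 2, 1, 4, 3]]   # negatively keyed item
R = [[corr(a, b) for b in items] for a in items]
eigval, v = first_component(R)
loadings = [eigval ** 0.5 * x for x in v]   # component loadings
print(loadings)   # third loading comes out negative
```

In practice a statistics package would do this in one call, but the mechanics are the same: one dominant component, with the mis-keyed item loading in the opposite direction.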
Cronbach's alpha assumes that all indicators are equally reliable; the more similar the factor loadings, the higher the estimate. It is better regarded as an item-level reliability analysis.
For assessments at the latent variable (construct) level, composite reliability is the preferred measure.
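Composite reliability can be computed directly from standardized factor loadings; a minimal sketch with hypothetical loading values (CR = (Σλ)² / ((Σλ)² + Σ(1 − λ²))):

```python
def composite_reliability(loadings):
    # loadings: standardized factor loadings, one per indicator
    s = sum(loadings)
    error = sum(1 - l * l for l in loadings)   # error variance = 1 - lambda^2
    return s * s / (s * s + error)

print(composite_reliability([0.8, 0.8, 0.8]))  # equal loadings
print(composite_reliability([0.9, 0.8, 0.5]))  # unequal loadings
```

Unlike alpha, the formula does not assume equal loadings, so it remains an unbiased construct-level estimate when the indicators differ in quality.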