Cronbach's alpha has been criticized for several reasons, and many researchers instead put their trust in composite reliability, which can be computed in PLS and AMOS. Have a look at composite reliability in AMOS.
The best reliability test is simply the one appropriate to the situation. There is no single best test in general; the right method depends on what is being measured and how.
1. Each reliability measure was developed for a specific purpose.
2. Reliability can be assessed with several different measures:
Item (coefficient) alpha reliability and split-half reliability assess the internal consistency of the items in a questionnaire – that is, whether the items tend to measure much the same thing.
Split-half reliability in SPSS refers to the correlation between scores based on the first half of the items you list for inclusion and scores based on the second half. Because each half is only half the length of the full questionnaire, this correlation is adjusted statistically (the Spearman–Brown formula) to estimate the reliability of the full-length instrument.
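As a rough, hedged illustration of the computation (a minimal Python sketch of the general idea, not of SPSS's own implementation; the function name and item split are made up for the example):

import numpy as np

def split_half_reliability(scores):
    # scores: 2-D array, rows = respondents, columns = items
    scores = np.asarray(scores, dtype=float)
    half = scores.shape[1] // 2
    # Total score on the first half of the items vs. the second half
    first = scores[:, :half].sum(axis=1)
    second = scores[:, half:].sum(axis=1)
    # Correlation between the two half-test totals
    r = np.corrcoef(first, second)[0, 1]
    # Spearman-Brown adjustment: estimates the reliability of the
    # full-length questionnaire from the two half-length forms
    return 2 * r / (1 + r)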
Coefficient alpha is, in effect, the average of all possible split-half reliabilities for the questionnaire, and so may be preferred because it does not depend on how the items happen to be ordered. Coefficient alpha can also be used as a means of shortening a questionnaire while maintaining or improving its internal reliability.
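In the same hedged spirit, a short Python sketch of coefficient alpha using the standard variance formula, alpha = k/(k-1) * (1 - sum of item variances / variance of total scores), plus an illustrative "alpha if item deleted" helper of the kind used when shortening a questionnaire (both names are invented for the example):

import numpy as np

def cronbach_alpha(scores):
    # scores: 2-D array, rows = respondents, columns = items
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                         # number of items
    item_vars = scores.var(axis=0, ddof=1)      # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def alpha_if_item_deleted(scores):
    # Recompute alpha with each item removed in turn; items whose
    # removal raises alpha are candidates for dropping
    scores = np.asarray(scores, dtype=float)
    return [cronbach_alpha(np.delete(scores, i, axis=1))
            for i in range(scores.shape[1])]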
Inter-rater reliability (assessed here by kappa) is essentially a measure of agreement between the ratings of two different raters. It is therefore particularly useful for assessing codings or ratings of open-ended data by 'experts' – in other words, for quantifying qualitative data. Kappa compares the extent of exact agreement between raters with the agreement that would be expected by chance. Note that this differs from the correlation between raters' ratings, which can be high without any exact agreement, so long as the ratings agree relatively for both raters.
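To make the agreement-beyond-chance idea concrete, a minimal pure-Python sketch of Cohen's kappa for two raters coding the same items (the category labels and data are invented for the example):

from collections import Counter

def cohens_kappa(rater_a, rater_b):
    # rater_a, rater_b: parallel lists of category codes for the same items
    n = len(rater_a)
    # Observed proportion of exact agreement
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Agreement expected by chance, from each rater's marginal frequencies
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    # Kappa: agreement beyond chance, scaled by the maximum possible
    return (p_o - p_e) / (1 - p_e)

# Two coders assigning ten open-ended answers to three categories
a = ['pos', 'neg', 'pos', 'neu', 'pos', 'neg', 'neu', 'pos', 'neg', 'pos']
b = ['pos', 'neg', 'neu', 'neu', 'pos', 'neg', 'neu', 'pos', 'pos', 'pos']
print(cohens_kappa(a, b))  # about 0.68: substantial exact agreement beyond chance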
3. For more details, see Howitt and Cramer (2008), Introduction to SPSS, pp. 249–258.