A Likert scale is an ordinal scale. Assuming you have multiple questions intended to measure the same variable, you can compute Cronbach's alpha to assess the reliability of participants' scores on those items. Although the responses themselves are ordinal categories, treating the scores this way is permitted under the assumption of tau equivalence.
If you want to be confident that your scales fit the subject matter well, use a factor-analytic approach and examine how well the individual items fit. An inter-item correlation study can then be carried out. Cronbach's alpha will only give you a single coefficient, much like a simple correlation; you also need to see performance at the item level.
Busari Yusuf Most statistical software will also provide item statistics, inter-item correlations, and item-total statistics when obtaining Cronbach's alpha.
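Those item-level statistics are also easy to compute by hand if your software doesn't report them; a rough sketch (the data below are hypothetical):

```python
import numpy as np

# Hypothetical 5-point Likert responses: 6 respondents x 4 items
items = np.array([
    [4, 5, 4, 4],
    [2, 3, 2, 3],
    [5, 5, 4, 5],
    [3, 3, 3, 2],
    [4, 4, 5, 4],
    [1, 2, 1, 2],
], dtype=float)

inter_item = np.corrcoef(items, rowvar=False)   # item x item correlation matrix
total = items.sum(axis=1)
# Corrected item-total correlation: each item vs the total of the *other* items
corrected = np.array([np.corrcoef(items[:, j], total - items[:, j])[0, 1]
                      for j in range(items.shape[1])])
print(np.round(inter_item, 2))
print(np.round(corrected, 2))
```

The "corrected" version matters: correlating an item with a total that still contains it inflates the coefficient, especially with few items.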
If these are newly developed items that have not yet been assessed for validity, I agree that FA should be conducted. However, the question asked only about reliability.
In my view, factor analysis can be related to reliability because, if reliability includes issues concerning the extent to which items consistently relate to each other, factor analysis sheds light on that. Factor analysis can also be related to issues of validity, namely the factorial validity of a scale.
Because of that, if I read the issue that Esha Bansal has raised correctly, I think she should check the composition of her multiple subscales by using factor analysis and then investigate the reliability (interrelatedness) of the items on EACH subscale separately by means of such things as interitem correlations, item-total correlations, and coefficient alphas.
Apart from that, I think it's important to distinguish between a set of response options, individual items, collections of items that comprise subscales, and a set of subscales that, together, comprise a single overall scale. Very often, distinctions between those things are blurred, and that can lead to misunderstanding and confusion.
Robert Trevethan Good points. Do you think factor analysis should be the default tool for assessing internal reliability, or are there situations where Cronbach's alpha is preferable?
After content validation, you could run Cronbach’s alpha (for internal consistency), Exploratory Factor Analysis (for the underlying factor structure), and Confirmatory Factor Analysis (for the goodness-of-fit of your factor model). As a good example, you may check out how Bostancıoğlu and Handley (2018) established the validity and reliability of their EFL-TPACK questionnaire, containing seven sub-scales.
Bostancıoğlu, A., & Handley, Z. (2018). Developing and validating a questionnaire for evaluating the EFL ‘total package’: Technological pedagogical content knowledge (TPACK) for English as a foreign language (EFL). Computer Assisted Language Learning, 31(5–6), 572–598. https://doi.org/10.1080/09588221.2017.1422524
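As a rough illustration of the EFA step, the dimensionality of a set of items can be screened from the eigenvalues of the inter-item correlation matrix (one common, if crude, rule is Kaiser's eigenvalue-greater-than-one criterion). Everything in this sketch is simulated; the loadings and sample size are made up:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 300
# Simulate responses driven by two latent factors (all numbers hypothetical)
f = rng.normal(size=(n, 2))
loadings = np.array([
    [0.80, 0.00], [0.70, 0.10], [0.75, 0.00],   # items 1-3: factor 1
    [0.00, 0.80], [0.10, 0.70], [0.00, 0.75],   # items 4-6: factor 2
])
x = f @ loadings.T + rng.normal(scale=0.5, size=(n, 6))

# Eigenvalues of the inter-item correlation matrix, largest first.
eigvals = np.linalg.eigvalsh(np.corrcoef(x, rowvar=False))[::-1]
print(np.round(eigvals, 2))
n_factors = int((eigvals > 1).sum())
print(n_factors)   # → 2 for this simulated two-factor structure
```

A proper EFA (with rotation) or CFA would follow, but even this quick screen shows why a single alpha across all six items would paper over the two-factor structure.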
Blaine Tomkins, thanks for acknowledging the points I made a few posts above here.
I blow hot and cold with regard to the usefulness of coefficient alpha. For example, I think that, if there are between (roughly) eight and 18 items, alpha might provide useful information about the "general" amount of association among the items. With fewer than, say, five items, there can be a quite acceptable amount of interitem association but a very poor alpha, and with more than, say, 20 items, there can be three or more identifiably different domains and a very high alpha - up around .95. In fact, I'm aware of cases (the short version of the Teachers' Sense of Efficacy Scale) where there are only 12 items comprising three distinct factors, but an alpha as high as .90 across the 12 items - which just goes to show how deceptive alpha can be.
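The climb in alpha with item count follows directly from the standardized-alpha (Spearman-Brown-type) formula; a quick illustration, holding the average inter-item correlation fixed at a modest .30 (the numbers are arbitrary, chosen only to show the trend):

```python
def standardized_alpha(k, r_bar):
    """Standardized alpha from item count k and average inter-item correlation."""
    return k * r_bar / (1 + (k - 1) * r_bar)

for k in (4, 8, 12, 20, 24):
    print(k, round(standardized_alpha(k, 0.30), 3))
# → 4 0.632, 8 0.774, 12 0.837, 20 0.896, 24 0.911
```

So with the same modest interrelatedness among items, alpha drifts from "unacceptable" to "excellent" purely as a function of scale length, which is exactly why a high alpha over 20-plus items says little on its own.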
In my view, too many researchers "worship" alpha and parrot it off in their articles as if it indicated something meaningful / useful - and the higher the alpha value, the more they seem to smile.
I prefer to conduct exploratory factor analysis to identify the likely "carve-up" of a particular set of data, and then to obtain an alpha value for each factor if a reasonable number of items is involved within each factor (yes, I know I'm being a bit vague here).
In one publication, I explicitly indicated we were providing alpha values merely to satisfy convention because, for the samples involved, 24 items rendered the alpha values meaningless. In the same article, I wrote the following:
Sijtsma (2009, p. 118) has argued that “the only reason to report alpha is that top journals tend to accept articles that use statistical methods that have been around for a long time such as alpha”. Cho and Kim (2015) expressed a similar viewpoint and accompanied it with a request that journal editors recommend authors use superior alternatives to, or in conjunction with, alpha.
All that aside, I like to look carefully at interitem correlations as well as conduct exploratory factor analysis in order to get a sense of reliability. (I'm a bit suspicious about confirmatory factor analysis, but that's a completely different kettle of fish . . . )
If you disagree with anything I've written above, please feel free to say so. In my view, ResearchGate is an excellent forum for discussing ideas, not a place where people try to dominate others. :-)
Beware of using Cronbach's alpha: "The assumption of uncorrelated errors (the error score of any pair of items is uncorrelated) is a hypothesis of Classical Test Theory (Lord and Novick, 1968), violation of which may imply the presence of complex multidimensional structures requiring estimation procedures which take this complexity into account."