Alan Tang Composite reliability (also known as the omega coefficient) is a measure of the internal consistency reliability of a composite scale, that is, a scale made up of multiple items or sub-scales intended to measure a single underlying construct. The construct reliability and validity output in SmartPLS 4 reports two composite reliability measures: rho_a and rho_c.
When all of the items are treated as a single scale, rho_a measures the reliability of the composite. In SmartPLS 4 it is Dijkstra and Henseler's rho_A, which is computed from the estimated indicator weights and the off-diagonal elements of the empirical indicator correlation matrix. rho_a is appropriate when all of the items are intended to measure the same underlying construct and there is no reason to suspect that they tap distinct facets of it.
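If you want to see what goes into that number, here is a minimal numpy sketch of the published rho_A formula. It assumes you already have the construct's PLS weight vector w (scaled, as in PLS, so the composite scores have unit variance) and the empirical indicator correlation matrix S; SmartPLS reports rho_a for you, so this is purely illustrative.

```python
import numpy as np

def rho_a(w, S):
    """Dijkstra-Henseler's rho_A for a single construct.

    w : PLS indicator weights, shape (k,), scaled so the composite
        has unit variance (as PLS does internally)
    S : empirical indicator correlation matrix, shape (k, k)

    Implements rho_A = (w'w)^2 * w'(S - diag S)w / w'(ww' - diag ww')w.
    """
    ww = np.outer(w, w)
    S_off = S - np.diag(np.diag(S))     # zero out the diagonal of S
    ww_off = ww - np.diag(np.diag(ww))  # zero out the diagonal of ww'
    return (w @ w) ** 2 * (w @ S_off @ w) / (w @ ww_off @ w)
```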
When the components are viewed as distinct sub-scales, rho_c measures the reliability of the composite. It is Jöreskog's composite reliability, computed from the standardized outer loadings: the squared sum of the loadings divided by the squared sum of the loadings plus the sum of the indicator error variances. rho_c is acceptable when the items are meant to capture distinct aspects of the underlying construct, or when the items are not perfectly correlated with each other.
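Unlike rho_a, rho_c is simple to verify by hand from the standardized outer loadings SmartPLS reports. A minimal Python sketch (the loadings in the example are invented):

```python
import numpy as np

def rho_c(loadings):
    """Joreskog's composite reliability (rho_c) for one construct.

    loadings : standardized outer loadings, shape (k,).
    With standardized indicators each error variance is 1 - lambda^2,
    so rho_c = (sum lam)^2 / ((sum lam)^2 + sum(1 - lam^2)).
    """
    lam = np.asarray(loadings, dtype=float)
    num = lam.sum() ** 2
    return num / (num + (1.0 - lam ** 2).sum())

# Hypothetical loadings, purely for illustration:
print(rho_c([0.82, 0.78, 0.85, 0.74]))   # ~0.875
```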
In practice the three common coefficients tend to be ordered Cronbach's alpha <= rho_a <= rho_c, so rho_c is usually the highest of the three, not the lowest, because it weights each indicator by its loading instead of treating all items as equally reliable. If the items capture distinct facets of the underlying construct, rho_c may be the better indicator of reliability.
Which composite reliability measure you should report depends on your research question and the structure of your scale. If you believe the items on your scale measure a single underlying construct and have no reason to suspect that they tap different facets of it, use rho_a. If you expect the items to measure distinct facets of the underlying construct, or if the items are not perfectly correlated with each other, use rho_c.
I hope this was helpful! Please let me know if you have any more queries.
Fatemeh Khozaei Don't you think there is a possibility of using both rho_a and rho_c together? For instance, when the study comprises a number of constructs (latent variables). I understand from your explanation that rho_a evaluates the indicators of a single construct, while rho_c measures indicators of different constructs. I hope I am not wrong!
A fundamental weakness of statistical analyses is the evaluation of probabilities, and above all the risk-acceptance level, which is an individual judgement call. A hypothesis test is at its most useful when it returns a resounding "NO"; the "yes" is never absolute and is extremely variable. The values of any statistical test are especially problematic when the results fall near a "threshold" value, which is itself a product of a general consensus on acceptable risk.
Certainly, one must be very careful about claiming causality (or reliability) when the accepted tests return marginal values. Remember that you should simply be trying to determine whether the results are more likely due to some cause than to chance. It is never a good idea to hunt and peck for a test or scenario that supports a preconception! You should choose which test to use based on the data: its type, format, and amount. Comparing two different tests implies that you did not really categorize your data carefully.
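To make the point about marginal values concrete in the reliability context, here is a small, purely illustrative simulation (all numbers are invented). It uses Cronbach's alpha only because alpha can be computed directly from raw data without fitting a factor model; the same sampling instability applies to rho_a and rho_c. The population reliability is placed near the conventional 0.70 cut-off, and a bootstrap shows the estimate landing on both sides of the threshold:

```python
import numpy as np

rng = np.random.default_rng(0)

def cronbach_alpha(X):
    """Cronbach's alpha for an (n, k) matrix of item scores."""
    k = X.shape[1]
    item_vars = X.var(axis=0, ddof=1)
    total_var = X.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)

# Simulate one sample (n=150) of 3 congeneric items whose population
# reliability sits just under the 0.70 cut-off (loadings are made up).
n, lam = 150, np.array([0.66, 0.66, 0.66])
eta = rng.standard_normal(n)                       # latent scores
X = eta[:, None] * lam + rng.standard_normal((n, 3)) * np.sqrt(1 - lam**2)

# Bootstrap the reliability estimate and see how often it clears 0.70.
boots = [cronbach_alpha(X[rng.integers(0, n, n)]) for _ in range(1000)]
print(f"mean = {np.mean(boots):.3f}, "
      f"share >= 0.70 = {np.mean(np.array(boots) >= 0.70):.0%}")
```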
After considering that, the SEM-based rho_c is often recommended as an alternative to Cronbach's alpha (rho_a, despite the similar name, is a distinct coefficient and not another name for alpha) because "Reliability coefficients based on structural equation modeling (SEM) or generalizability theory are superior alternatives in many situations".
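One reason SEM-based coefficients are considered superior is that Cronbach's alpha assumes tau-equivalence (equal loadings) and understates reliability when that assumption fails, whereas rho_c weights each indicator by its loading. A minimal sketch with invented, deliberately unequal loadings:

```python
import numpy as np

def rho_c(lam):
    """Joreskog's composite reliability from standardized loadings."""
    lam = np.asarray(lam, dtype=float)
    return lam.sum() ** 2 / (lam.sum() ** 2 + (1 - lam ** 2).sum())

def alpha_from_corr(R):
    """Population Cronbach's alpha implied by a correlation matrix R."""
    k = R.shape[0]
    return k / (k - 1) * (1.0 - k / R.sum())

# Unequal (made-up) loadings violate tau-equivalence, the assumption
# under which alpha would equal the true reliability.
lam = np.array([0.9, 0.8, 0.6, 0.4])
R = np.outer(lam, lam)
np.fill_diagonal(R, 1.0)                    # model-implied item correlations

print(f"rho_c = {rho_c(lam):.3f}")          # ~0.782, SEM-based reliability
print(f"alpha = {alpha_from_corr(R):.3f}")  # ~0.761, underestimates it here
```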
Again, it is of the utmost importance for scientific integrity that you understand why you are using a specific statistical test and what the limitations of its results are. Statistically "proving" non-randomness does not necessarily imply causation of an effect, much less constitute proof of one.