I did a pilot study with about 320 respondents, but some of the important items and constructs are scoring low. How can I improve them when I continue data collection?
There have been a number of questions here about Cronbach's alpha, and the related topic of factor analysis, so I have gathered some resources at:
One possible source for the problem could be that you have more than one factor, because alpha assumes that all the items are associated with a single factor.
Dear Tahir, of course David is right to mention factor analysis, it could very well be the case that you are measuring more than one construct with a single scale. You will get information about this using factor analysis. This should be the first thing you do.
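A quick, informal way to see whether a scale mixes more than one construct is to look at the eigenvalues of the item correlation matrix (Kaiser's eigenvalue-greater-than-one rule, which exploratory factor analysis software also reports). The sketch below uses simulated data, not Tahir's questionnaire; the six hypothetical items deliberately mix two latent factors:

```python
import numpy as np

# Quick multidimensionality check: eigenvalues of the item correlation
# matrix. Kaiser's rule counts eigenvalues > 1 as candidate factors.
rng = np.random.default_rng(1)
n = 320  # same order of magnitude as the pilot sample

f1 = rng.normal(size=(n, 1))
f2 = rng.normal(size=(n, 1))
# Hypothetical 6-item "scale" that actually mixes two constructs:
# items 1-3 load on f1, items 4-6 on f2, each with added noise.
items = np.hstack([f1 + 0.7 * rng.normal(size=(n, 3)),
                   f2 + 0.7 * rng.normal(size=(n, 3))])

corr = np.corrcoef(items, rowvar=False)
eigvals = np.linalg.eigvalsh(corr)[::-1]  # sort descending
n_factors = int((eigvals > 1).sum())
print(n_factors)  # two large eigenvalues flag two underlying factors
```

A single alpha computed over all six items would understate reliability here, because alpha assumes one factor; the eigenvalue pattern is the warning sign that the scale should be split before reliability is judged.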
A very different question is whether Cronbach's alpha itself is a meaningful measure and, relatedly, what alpha value can be regarded as appropriate. Two main features influence alpha: the intercorrelation of the items and the number of items. If you are measuring a complex construct with a small number of items, you will never get a large alpha. On the other hand, if your construct is rather simple and you use a large number of items, alpha will be high. Some researchers construct items that are nearly identical and differ only slightly in order to get high alphas. That is not a good strategy, because you will typically run into trouble with content validity.
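The dependence of alpha on the number of items is easy to demonstrate. The following sketch (simulated data, with each item loading equally on a single factor) computes alpha from the standard variance formula and shows that the same item pool scores higher simply because more items are summed:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) array."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    sum_item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - sum_item_vars / total_var)

# Simulated scale: every item = common factor + independent noise,
# so all items have the same quality (same intercorrelation).
rng = np.random.default_rng(0)
n = 320
factor = rng.normal(size=(n, 1))
items = factor + rng.normal(size=(n, 12))

a_short = cronbach_alpha(items[:, :4])  # 4-item version
a_long = cronbach_alpha(items)          # 12-item version
print(a_short, a_long)  # the longer scale yields the higher alpha
```

With identical item quality throughout, only the item count differs between the two alphas, which is exactly why a low alpha on a short scale for a complex construct is not by itself damning.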
Besides factor analysis, there are several other things you could do. Examine the actual wording of the items; you may find some that are hard to understand and actually measure reading literacy. Conduct interview or think-aloud studies to find out what your test persons are actually thinking while answering your items: are they really reasoning the way the item constructor intended (cognitive validity)? Finally, low alphas around .60 may even be acceptable if you are measuring a complex, newly defined construct and would otherwise have to test individuals for 2 or 3 hours. Cronbach's alpha > .7 or even > .8 is merely a convention or an agreement, not a mathematically derived law.
Dear Christoph and David, thank you so much for your assistance. My questionnaire has multiple parts: the Technology Readiness Index, the Technology Adoption Model, Customer Satisfaction, and Demographics. Since I am proposing a model, I now think my main aim should be to test its validity through CFA and SEM, because I am using an amalgam of previously used standardized questionnaires. So am I correct that I can ignore the alpha values?
Tahir, both CFA and SEM can be relatively complex procedures if you have not already been trained to use them effectively. So, I would recommend a simpler starting point.
First, check the reliabilities for each of the separate scales (Technology Readiness, etc.). Remember that any items asked in a "reverse" direction will generate negative correlations, so you will need to recode those items to get positive correlations.
If those alphas are adequate, then sum the items to get the scale scores, and finally use those scale scores in your regressions.
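The recoding-and-summing step can be sketched as follows. The data frame and item names (`q1`-`q3`) are hypothetical; on a 1-to-5 Likert scale, a reverse-worded item is recoded as (max + min) minus the raw score:

```python
import pandas as pd

# Hypothetical 5-point Likert responses; q3 is reverse-worded,
# i.e. a high raw score means a LOW level of the construct.
df = pd.DataFrame({
    "q1": [4, 5, 3, 4, 2],
    "q2": [5, 4, 3, 5, 2],
    "q3": [2, 1, 3, 1, 4],
})

SCALE_MAX, SCALE_MIN = 5, 1
# Recode the reverse item so all items point the same direction.
df["q3_r"] = SCALE_MAX + SCALE_MIN - df["q3"]

# Sum the (recoded) items into a scale score for use in regressions.
df["scale"] = df[["q1", "q2", "q3_r"]].sum(axis=1)
print(df["scale"].tolist())
```

Compute alpha on the recoded items (q1, q2, q3_r), not the raw ones; a negatively keyed item left unrecoded drags alpha down artificially.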
I suggest you take a look at Cho, E., & Kim, S. (2015). "Cronbach's coefficient alpha: Well known but poorly understood." Organizational Research Methods, 18(2), 207-230. I think this article could be useful for you.
Following up on one of your prior questions: if you measure the variables in a reflective manner, you should not ignore Cronbach's alpha. Check the wording of your items first, then examine your data for outliers and missing values (and if the same respondent appears more than once, leave that person out of your dataset). If you measured your variables well, an alpha of 0.70 should be achievable even with 3 items. You might also ask others to evaluate the content of each item.
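The screening steps mentioned here (missing values, repeated respondents) can be sketched with pandas. The column names and the idea of an explicit `respondent_id` column are assumptions for illustration:

```python
import pandas as pd

# Hypothetical survey data with an identifying column.
df = pd.DataFrame({
    "respondent_id": [1, 2, 2, 3, 4],
    "item1": [4, 3, 3, None, 5],
    "item2": [5, 2, 2, 4, 1],
})

# Missing values per item.
missing = df.isna().sum()
print(missing)

# Drop EVERY record of respondents who appear more than once,
# following the advice to leave such persons out of the dataset.
dup_ids = df.loc[df.duplicated("respondent_id", keep=False),
                 "respondent_id"]
clean = df[~df["respondent_id"].isin(dup_ids)]
print(clean["respondent_id"].tolist())
```

Note that `keep=False` flags all copies of a duplicated ID, so the whole person is removed rather than just the extra rows; that matches the stricter advice in this reply.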