When creating our own questionnaire, or modifying an established one, is it compulsory to do a CFA, or can we check the reliability of the questionnaire using only Cronbach's alpha?
If you want to create your own questionnaire, then you should do a regular (exploratory) factor analysis to see how many scales it splits into. If your basic idea about the trait holds, and the questions you collected or wrote are measuring what you think they do, the FA will give you the same scales, with the same questions, that you hypothesized.
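As a minimal sketch of that EFA step in Python, assuming your responses sit in a pandas DataFrame (one row per respondent, one column per item) and that the third-party factor_analyzer package is installed; the file name and factor count are illustrative, not prescriptions:

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer

df = pd.read_csv("responses.csv")  # hypothetical file of item responses

# n_factors is an initial guess to be revised after inspecting eigenvalues
fa = FactorAnalyzer(n_factors=3, rotation="varimax")
fa.fit(df)

# Loadings show which items cluster on which factor; compare these
# clusters with the scales you hypothesized when writing the items.
loadings = pd.DataFrame(fa.loadings_, index=df.columns)
print(loadings.round(2))
```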
If you want to use, modify, or adapt a questionnaire to your language, then you should use a CFA to see whether the new one works the same way and has the same scales as the original.
So even if you are just reusing a questionnaire, you should run the CFA, because if it works a different way on your sample, then you cannot use the original scales and you will need to rework something.
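As a rough illustration of what that CFA looks like, here is a sketch using the third-party semopy package in Python (lavaan in R is the more common choice); the scale and item names in the model syntax are hypothetical:

```python
import pandas as pd
import semopy

df = pd.read_csv("responses.csv")  # hypothetical item-response data

# Specify the factor structure published for the original questionnaire.
model_desc = """
ScaleA =~ item1 + item2 + item3
ScaleB =~ item4 + item5 + item6
"""

model = semopy.Model(model_desc)
model.fit(df)

# Fit indices (CFI, RMSEA, etc.) indicate whether the original
# structure holds up in the new sample.
print(semopy.calc_stats(model))
```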
Once you have the CFA results and they are satisfactory, you can proceed and check the alpha values (which is also a must, because it can shed light on possible errors).
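For that alpha step, the coefficient is simple enough to compute directly; a sketch assuming a DataFrame containing only the items of one scale:

```python
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a set of items forming one scale."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the scale total
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# e.g. alpha for the items that loaded on the first factor (names hypothetical)
# print(cronbach_alpha(df[["item1", "item2", "item3"]]))
```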
I hope this is clear and easy to follow; feel free to ask if not, or if you have any further questions!
If you are developing a new measure, it is a good idea to start with a reliability check using Cronbach's alpha, to ensure that the statements you have selected measure the same variable. You then need to identify the underlying factor structure using EFA. In the second stage you may go for confirmatory factor analysis. It is important to use different data for the CFA than you used for the EFA, to avoid confirming a factor structure whose fit looks better than it actually is. You may use data collected from a subset of the full sample for the EFA and carry out the CFA on the full sample. If you are using AMOS for the CFA, you will need to compute convergent and discriminant validity outside the software, perhaps in an Excel sheet (a sketch appears below). You can also work out the construct reliability from the loadings generated by the CFA. Since construct reliability is considered more accurate than Cronbach's alpha, you do not need to compute alpha again.
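Instead of an Excel sheet, the same quantities can be worked out in a few lines of Python from the standardised loadings that AMOS (or any SEM program) reports. These are the standard composite-reliability and AVE formulas; the loadings below are made-up numbers for one construct:

```python
import numpy as np

def construct_reliability(loadings):
    """Composite/construct reliability: CR = (sum(lam))^2 / ((sum(lam))^2 + sum(1 - lam^2))."""
    lam = np.asarray(loadings)
    error_var = 1 - lam ** 2   # residual variance of each indicator
    return lam.sum() ** 2 / (lam.sum() ** 2 + error_var.sum())

def average_variance_extracted(loadings):
    """AVE = mean of squared loadings; used for convergent/discriminant validity checks."""
    lam = np.asarray(loadings)
    return (lam ** 2).mean()

# Made-up standardised loadings, as they would be reported by the CFA
lam = [0.78, 0.81, 0.69, 0.74]
print(construct_reliability(lam))        # > 0.70 is the usual benchmark
print(average_variance_extracted(lam))   # > 0.50 suggests convergent validity
```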
Even if you are using a standardised measure in a different environment, it is a good idea to follow the process starting from the CFA, since the measure will already have an identified factor structure and you need to check whether it fits your data. The same applies to a measure translated into a different language.
You should note from the two answers above that factor analysis and reliability analysis tell you somewhat different things, although there is a fair amount of overlap. Factor analysis tells you what items go together, with no assumption that there is only one thing being measured. If you have, for example, 10 items, you could have anywhere from 1 to 10 things being measured, and FA will tell you how many underlying commonalities there are (based on eigenvalues). But when you run a reliability analysis (Cronbach's alpha), the analysis presumes there is only one thing being measured and will tell you how well or poorly all 10 items get at that one thing. If there is only one construct revealed in the FA, the reliability analysis might still tell you that one or two of the items don't fit very well. Personally, I prefer Rasch methods to get at how items perform within a construct, but that may be more technical than you need.
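To make the eigenvalue point concrete, the usual quick check is to count eigenvalues of the inter-item correlation matrix that exceed 1 (the Kaiser criterion). A sketch with numpy, assuming df holds the 10 items; the file name is hypothetical:

```python
import numpy as np
import pandas as pd

df = pd.read_csv("responses.csv")   # hypothetical 10-item data

# Eigenvalues of the correlation matrix, largest first
eigvals = np.linalg.eigvalsh(df.corr().to_numpy())[::-1]
print(eigvals.round(2))

# Kaiser criterion: one candidate factor per eigenvalue > 1.
# One dominant eigenvalue suggests a single thing is being measured;
# several suggest the 10 items tap several underlying commonalities.
print("candidate factors:", (eigvals > 1).sum())
```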
A final note, related to the title of your post - reliability is not validity. Factor analysis can get at some elements of validity (specifically, if item 1 validly measures X and item 2 goes with item 1, then item 2 probably also validly measures X), but it isn't going to get you very far in most areas of validity. If you search on "validity" and "survey design" on ResearchGate, you will find a number of papers shared on the topic.
Reliability means the consistency or repeatability of the measure. This is especially important if the measure is to be used on an on-going basis to detect change. There are several forms of reliability, including:
Test-retest reliability - whether repeating the test/questionnaire under the same conditions produces the same results (see the sketch after this list); and
Reliability within a scale - that all the questions designed to measure a particular trait are indeed measuring the same trait.
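As a simple illustration of test-retest reliability, the conventional estimate is the correlation between the two administrations. Here is a sketch with scipy's Pearson correlation (an intraclass correlation is often preferred in practice); the scores are made up:

```python
import numpy as np
from scipy.stats import pearsonr

# Made-up total scores for the same respondents at time 1 and time 2
time1 = np.array([21, 34, 28, 40, 25, 31, 37, 29])
time2 = np.array([23, 33, 27, 41, 24, 30, 35, 31])

r, p = pearsonr(time1, time2)
print(f"test-retest r = {r:.2f}")  # a high r means scores are stable over time
```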
Validity, by contrast, means that we are measuring what we want to measure. There are a number of types of validity, including:
Face Validity - whether at face value, the questions appear to be measuring the construct. This is largely a "common-sense" assessment, but also relies on knowledge of the way people respond to survey questions and common pitfalls in questionnaire design;
Content Validity - whether all important aspects of the construct are covered. Clear definitions of the construct and its components come in useful here;
Criterion Validity/Predictive Validity - whether scores on the questionnaire successfully predict a specific criterion (see the sketch after this list). For example, does the questionnaire used in selecting executives predict the success of those executives once they have been appointed; and
Concurrent Validity - whether results of a new questionnaire are consistent with results of established measures.
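As a small numerical illustration, criterion and concurrent validity both come down to correlating questionnaire scores with an external measure; the numbers below are made up:

```python
import numpy as np
from scipy.stats import pearsonr

# Made-up data: questionnaire scores at selection, and a later
# performance rating for the same executives (the criterion).
questionnaire = np.array([55, 62, 48, 70, 58, 65, 51, 60])
performance = np.array([3.1, 3.8, 2.9, 4.5, 3.4, 4.0, 2.7, 3.6])

r, p = pearsonr(questionnaire, performance)
print(f"criterion validity r = {r:.2f}")  # a substantial r supports predictive use
```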
I must add to the previous answer that test-retest reliability can, and sometimes must, be set aside. If you measure a personality trait, test-retest is appropriate and the scores should be relatively consistent over time; however, if you want to measure a state, you will most likely get lower test-retest reliability, which of course is not a problem.
In developing questionnaires, [construct validity] is the crucial factor. Of course, unless an instrument is reliable (and I think internal consistency matters a great deal), there is no room to check the validity of any measuring instrument.
Verifying questionnaire items for construct validity is highly significant, particularly when the items are self-developed and not based on questionnaires which have already been validated and used in previous studies. However, to validate a questionnaire you need to gather ‘multi-dimensional’ evidence to ensure that the intended construct is being measured in order to be confident that inferences based on the results obtained are valid. Sometimes checking reliability alone would not do!
It is a common misconception that reliability has to do with repeatability -- instead, repeatability was just one of the first means devised for measuring reliability.
Thus, the test-retest approach to reliability is a rather old approach, which, like all measures of reliability, is based on the amount of random error in a measure. It works because a high amount of random error will limit any possible correlation, and this shows up best in correlating an item with itself. So yes, test-retest does work, but it is also now quite out of date.
As Julia says, Cronbach's alpha will accomplish this when your goal is to measure a single construct (i.e., all the items are indicators of the same underlying concept).
Because there have been so many questions here on Cronbach's alpha and exploratory factor analysis, I have collected a set of resources on this topic: