In CFA you assess the factor structure via the fit indices (chi-square/df, CFI, TLI, RMSEA, etc.). You can additionally calculate Cronbach's alpha, which I would recommend. Running another EFA is not necessary in my opinion. Then again, it really depends on how widely the questionnaire, or parts of it, has been used. If it is something like an industry standard (e.g. the General Self-Efficacy Scale by Jerusalem & Schwarzer), Cronbach's alpha will do. If not, run an EFA in the background to get to know the data, but do not report it.
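As a rough sketch of the alpha calculation mentioned above (the function name and the simulated one-factor data are my own illustration, not from the thread), Cronbach's alpha can be computed directly from a respondents-by-items score matrix:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the scale totals
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Simulated data: 200 respondents, 4 items loading on one common factor
# (hypothetical setup chosen so the items are roughly tau-equivalent)
rng = np.random.default_rng(0)
factor = rng.normal(size=(200, 1))
scores = factor + 0.5 * rng.normal(size=(200, 4))

# High alpha expected here, since all items share the same factor
print(round(cronbach_alpha(scores), 3))
```

With real questionnaire data you would of course pass in the observed item scores instead of the simulated matrix.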
The short answer is: yes. You might be surprised to find that, due to population differences and other peculiarities, the questionnaire may not be very suitable for your own intended population, compared with the previous populations it has been used on.
Thus, it’s always advised to run these basic tests, including the CFA.
Hi Rokhsareh. Reliability is a property of the scores from a given data set, not a fixed attribute of the instrument. It therefore makes good sense to check (if possible) the reliability of the scores with a new data set.
It is relatively straightforward to calculate reliability from the results of a factor analysis. The following article might be useful as a starting point.
Raykov, T. (2009). Evaluation of scale reliability for unidimensional measures using latent variable modeling. Measurement and Evaluation in Counseling and Development, 42, 223–232.
Coefficient alpha rests on the assumptions of unidimensionality and tau-equivalence (i.e., that the unstandardized factor loadings of the items are equal). When these assumptions hold, the reliability calculated from the confirmatory factor analysis and coefficient alpha computed the standard way give the same result. When they are violated, alpha will differ from the reliability yielded by the confirmatory factor analysis, and coefficient alpha will be a biased estimator of reliability.
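That equivalence-under-tau-equivalence point can be checked numerically. The sketch below (function names and the example loadings are my own illustration) computes McDonald's omega, i.e. the model-based reliability from a one-factor CFA solution, alongside the alpha implied by the same model. With equal loadings the two coincide; with unequal (congeneric) loadings alpha falls below omega:

```python
import numpy as np

def omega(loadings, error_vars, factor_var=1.0):
    """Model-based reliability (McDonald's omega) for a one-factor model,
    from unstandardized loadings and error variances."""
    lam = np.asarray(loadings, float)
    theta = np.asarray(error_vars, float)
    true_var = (lam.sum() ** 2) * factor_var
    return true_var / (true_var + theta.sum())

def alpha_from_model(loadings, error_vars, factor_var=1.0):
    """Coefficient alpha implied by the same one-factor model's
    model-implied covariance matrix."""
    lam = np.asarray(loadings, float)
    theta = np.asarray(error_vars, float)
    k = lam.size
    sigma = factor_var * np.outer(lam, lam) + np.diag(theta)
    return (k / (k - 1)) * (1 - np.trace(sigma) / sigma.sum())

# Tau-equivalent case: equal loadings -> alpha equals omega
print(omega([1, 1, 1, 1], [0.5] * 4),
      alpha_from_model([1, 1, 1, 1], [0.5] * 4))

# Congeneric case: unequal loadings -> alpha underestimates omega
print(omega([1.5, 1.0, 0.7, 0.4], [0.5] * 4),
      alpha_from_model([1.5, 1.0, 0.7, 0.4], [0.5] * 4))
```

In practice you would plug in the loadings and error variances estimated by your CFA software rather than the hypothetical values shown here.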