When you want to evaluate a questionnaire to see how many factors its items relate to, you first perform an exploratory factor analysis (EFA). This analysis shows you how many main factors underlie the questionnaire.
For instance, you might find that three items of the questionnaire relate to one factor (for example, diet), four items relate to another factor (for example, smoking), and so on. Then, to confirm this structure, you run a confirmatory factor analysis (CFA).
One common software package for performing these analyses is AMOS; you can look it up online to learn more.
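If you prefer a scripting environment instead of AMOS, here is a minimal EFA sketch in Python using the factor_analyzer package (the package choice, the file name, and the three-factor solution are my own illustrative assumptions, not something from this thread):

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer

# Hypothetical questionnaire data: one row per respondent, one column per item.
df = pd.read_csv("questionnaire.csv")

# Step 1: fit without rotation and inspect the eigenvalues to get a first
# idea of how many factors to retain.
fa = FactorAnalyzer(rotation=None)
fa.fit(df)
eigenvalues, _ = fa.get_eigenvalues()
print("Eigenvalues:", eigenvalues)

# Step 2: refit with the chosen number of factors and an oblique rotation,
# then read the loadings to see which items group on which factor.
fa = FactorAnalyzer(n_factors=3, rotation="oblimin")
fa.fit(df)
print(pd.DataFrame(fa.loadings_, index=df.columns))
```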
Why test your models? You apply a CFA when you believe (--> model) that a set of manifest variables follows the common factor model. By applying a CFA, you test this set of beliefs. You apply an EFA when you believe that there is some sort of factor structure among your manifest variables (--> model!) but you have no clue how many factors there are and which manifest variables are caused by which factor. Thus an EFA model is still a model in which you make bold (= causal) assumptions, but it is much weaker than the CFA model, in which you have to be much more explicit in your beliefs.
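To make that explicitness concrete, here is a hedged sketch of what such a CFA specification could look like in Python with the semopy package (semopy is my substitution for AMOS; the two-factor structure and item names just mirror the diet/smoking example above):

```python
import pandas as pd
import semopy

df = pd.read_csv("questionnaire.csv")  # hypothetical data file

# The measurement model is written down in advance: this explicitness is
# exactly what makes the analysis confirmatory.
model_desc = """
diet    =~ item1 + item2 + item3
smoking =~ item4 + item5 + item6 + item7
"""

model = semopy.Model(model_desc)
model.fit(df)

print(model.inspect())           # factor loadings and other parameter estimates
print(semopy.calc_stats(model))  # fit indices such as CFI and RMSEA
```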
And NEVER confuse this with the PCA approach, which is a purely statistical data-reduction technique. A PCA just technically reduces many manifest variables to a smaller set of dimensions, and it works no matter where the correlations among the manifest variables come from (in E/CFA this origin is crucial).
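As a small illustration of that distinction, the sketch below (scikit-learn and simulated data are my own choices) fits both methods to the same items: PCA summarizes total variance regardless of why the items correlate, while factor analysis models only the shared variance attributed to latent factors.

```python
import numpy as np
from sklearn.decomposition import PCA, FactorAnalysis

# Simulate 6 items driven by 2 latent factors plus unique noise.
rng = np.random.default_rng(0)
n = 500
latent = rng.standard_normal((n, 2))
loadings = np.array([[0.8, 0.0], [0.7, 0.0], [0.6, 0.0],
                     [0.0, 0.8], [0.0, 0.7], [0.0, 0.6]])
X = latent @ loadings.T + 0.5 * rng.standard_normal((n, 6))

# PCA: pure data reduction on total variance.
print(PCA(n_components=2).fit(X).components_.round(2))

# Common factor model: only the shared (common) variance is modeled,
# which is what carries the causal interpretation in E/CFA.
print(FactorAnalysis(n_components=2).fit(X).components_.round(2))
```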
Classically, EFA has been used to analyze data during the process of building an instrument and CFA during the validation process. However, this division owes more to "common" practice than to methodological justification.
In theory, both an instrument under construction and one to be validated already have a defined structure (a specification matrix), where the number of factors and the items that belong to each of them are known, so a CFA is the more direct procedure for evaluating and proposing alternative structures, with much greater precision and theoretical direction than an EFA.
From my perspective, applying only an EFA instead of a CFA would be feasible only if the structure of the instrument really is poorly defined or unknown for some reason... which should already be a warning to us in itself, rather than a reason to simply use another test instead.
Now, during the evaluation phase of a CFA, one could use the results of an EFA to hypothesize alternative models... but one must be very careful with this strategy so as not to build artificial models (that only fit our data).
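One common safeguard against such artificial, sample-specific models is to split the sample: derive any alternative structure with EFA on one half and then confirm it with an explicit CFA on the other half. A hedged sketch, reusing the same (assumed) Python packages and item names as above:

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer
import semopy

df = pd.read_csv("questionnaire.csv")  # hypothetical data
explore = df.sample(frac=0.5, random_state=1)
confirm = df.drop(explore.index)

# Exploration half: let EFA suggest an alternative structure.
fa = FactorAnalyzer(n_factors=2, rotation="oblimin")
fa.fit(explore)
print(pd.DataFrame(fa.loadings_, index=explore.columns))

# Confirmation half: test the EFA-suggested structure as an explicit CFA.
alt_model = semopy.Model("""
f1 =~ item1 + item2 + item3
f2 =~ item4 + item5 + item6 + item7
""")
alt_model.fit(confirm)
print(semopy.calc_stats(alt_model))
```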
Totally agree, but I would still insist that FA models (both E and C) are *explicit* model types (i.e., the common factor model, which proposes that a set of intercorrelated manifest items correlate *because* of the joint causal influence of one or more underlying factors).
Sometimes, however, you have sets of variables that instead form an aggregate construct (item lists or index variables) where, in the extreme, each item in the list measures its own latent variable but the whole set *constitutes* (not "measures") the aggregate. And: you often have crystal-clear single-indicator measures which are much more valid than most of the dirty-dozen scales :)
What you mention makes me think of a bifactor model, but from the perspective that the "general factors" are those originally proposed in the structure, and the specific ones are those that could be specific to some items (or to one in particular, in the most extreme case).
However, when taking this approach from EFA, how can we correctly balance a theory-based approach to the structure against a results-based one? That is, how do we control the bias of proposing one or more possible factors underlying the items - in addition to the originally proposed structure - solely on the basis of the statistical results and not on a theoretically coherent basis? It seems to me an excellent idea (one I had not taken into account), but I consider that it should be used by an expert team with a solid foundation in psychometrics.
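For concreteness, here is a hedged sketch of what such a bifactor specification might look like in semopy syntax (continuing the assumed package and illustrative item names from above): a general factor loading on all items, specific factors covering the originally proposed subscales, and all factors kept orthogonal.

```python
import pandas as pd
import semopy

df = pd.read_csv("questionnaire.csv")  # hypothetical data

# General factor on every item, specific factors on the original subscales,
# with the usual bifactor orthogonality constraints (covariances fixed to 0).
bifactor = semopy.Model("""
g         =~ item1 + item2 + item3 + item4 + item5 + item6 + item7
s_diet    =~ item1 + item2 + item3
s_smoking =~ item4 + item5 + item6 + item7
g ~~ 0*s_diet
g ~~ 0*s_smoking
s_diet ~~ 0*s_smoking
""")
bifactor.fit(df)
print(semopy.calc_stats(bifactor))
```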
I second the notion that the choice between confirmatory factor analysis (CFA) and exploratory factor analysis (EFA) depends on whether the researcher has a robust theory underlying the construct(s) of interest. In other words, CFA can be more suitable when the researcher has a reliable theory explaining the association between latent and observed variables. On the other hand, EFA can be more suited when the theory is weak or there is little empirical evidence available on the construct(s). Here are some interesting reads.
Hox, J. J. (2021). Confirmatory factor analysis. In J. C. Barnes & D. R. Forde (Eds.), The encyclopedia of research methods in criminology and criminal justice (pp. 830–832). John Wiley & Sons. https://doi.org/10.1002/9781119111931.ch158
Kyriazos, T. A. (2018). Applied psychometrics: Sample size and sample power considerations in factor analysis (EFA, CFA) and SEM in general. Psychology, 9(8). https://doi.org/10.4236/psych.2018.98126
Matsunaga, M. (2010). How to factor-analyze your data right: Do’s, don’ts, and how-to’s. International Journal of Psychological Research, 3(1), 97–110. https://doi.org/10.21500/20112084.854
Nylund-Gibson, K., & Choi, A. Y. (2018). Ten frequently asked questions about latent class analysis. Translational Issues in Psychological Science, 4, 440–461. https://doi.org/10.1037/tps0000176
Schmitt, T. A., Sass, D. A., Chappelle, W., & Thompson, W. (2018). Selecting the “best” factor structure and moving measurement validation forward: An illustration. Journal of Personality Assessment, 100(4), 345–362. https://doi.org/10.1080/00223891.2018.1449116
Watkins, M. W. (2018). Exploratory factor analysis: A guide to best practice. Journal of Black Psychology, 44(3), 219–246. https://doi.org/10.1177/0095798418771807
EFA is used when it is not known how many factors underlie the items or which items are determined by which factors, while CFA is used when there is a strong theory about the structure.