I ran a CFA and the output showed zero degrees of freedom, and the probability could not be computed. How should I interpret this? References would be appreciated.
Well, you cannot test model fit with zero degrees of freedom. Therefore, it would be best to include another construct in your analysis. If, for example, you performed a CFA using only three indicator variables, adding another construct measured by three different manifest variables would solve the problem. Then your observed covariance matrix would include 21 elements (6 variances and 15 covariances), and the number of estimated parameters would be 13 (6 error variances, 6 factor loadings, and one covariance between the factors) if you fixed the latent variances to one, leaving 21 - 13 = 8 degrees of freedom.
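The arithmetic in this answer can be sketched in a few lines of Python. This is just an illustration of the counting rule, not anything from the original answer; the helper name `cfa_df` is made up for this sketch.

```python
# Sketch of the df counting rule for covariance-structure models:
# df = unique elements of the observed covariance matrix (p*(p+1)/2)
#      minus the number of freely estimated parameters.

def cfa_df(n_observed: int, n_free_params: int) -> int:
    """Model degrees of freedom = p(p+1)/2 - q."""
    unique_elements = n_observed * (n_observed + 1) // 2
    return unique_elements - n_free_params

# Two factors, three indicators each, latent variances fixed to 1:
# 6 error variances + 6 loadings + 1 factor covariance = 13 parameters.
print(cfa_df(6, 13))  # 21 - 13 = 8, so the model is testable
```

With 8 degrees of freedom, a chi-square test of model fit becomes possible, which is exactly why adding the second construct solves the original problem.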
Degrees of freedom can't be zero in statistical analysis [however, Karin's answer above is possible]. Can you provide us with the variables you examined, along with their categories and steps?
Just to clarify the calculation of degrees of freedom in confirmatory analysis:
There is a difference between regression and confirmatory factor models. For regression models, the df cannot be zero, because the sample size is part of the calculation. For CFA models, however, the "data" are the variances and covariances of the variables. For example, if you have three variables measuring a single latent variable, then the empirical data are three variances and three covariances. The parameters to be estimated are three error variances and three factor loadings (assuming that the variance of the factor was fixed to one). In this case, the degrees of freedom are zero: 6 elements in the covariance matrix - 6 parameters to be estimated = 0.
Why can the number of degrees of freedom in regression not be zero? Consider a situation with 10 observations and 10 coefficients to be estimated. This should result in a situation with 0 df.
Hi Florian, I was not quite clear here. Of course, you could have a model in which the error df is 0: df_e = N - p - 1, with p being the number of predictors. But then you wouldn't analyze a regression model with a sample size almost equal to the number of variables. What I meant was that when using the covariance matrix as input data, N is not used for calculating the df.
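Florian's 10-observations/10-coefficients example follows directly from the formula in this reply. A small sketch (not from the thread) applying df_e = N - p - 1:

```python
def regression_error_df(n_obs: int, n_predictors: int) -> int:
    """Error df for OLS with an intercept: df_e = N - p - 1."""
    return n_obs - n_predictors - 1

# 10 observations, 9 predictors plus an intercept = 10 coefficients:
print(regression_error_df(10, 9))  # 0: the model fits the data exactly
```

So the error df can indeed hit zero in regression, but only in the degenerate case where the number of coefficients equals the sample size; the point of the reply is that CFA df are computed from the covariance matrix, without N.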
Theo, thanks for this clarification. I just had a look at your nice paper dealing with parameter estimates at the boundary. This seems to be a problem often overlooked in empirical research. I remember the advice to fix an error variance to zero when the estimated value is slightly negative (e.g., -.20); this was considered state of the art in empirical research. However, I did not feel comfortable with this advice, because fixing a parameter to a certain value implies that the population value is known, and, in the case of a small negative value, that it is exactly zero. What I found was that quite often such a model was misspecified or overparameterized. Thanks for sharing this article.