Have you tried fixing the intercept of one indicator to zero (probably the reference indicator whose loading is fixed to one) and telling the program to estimate the means freely?
They are fixed to zero because the CFA is computed on z-scores, and SPSS saves the z-scored combined measure to the file. Your easiest solution is to compute your own combined measure based on the information from the CFA.
If all you care about is items going together, with no weighting, just use the mean operator in your computation. If you want the items to be weighted, you will need to get the factor score matrix from your CFA to make your computations.
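For the unweighted route, a minimal R sketch (the data frame dat and the item names are placeholders):

```r
# Unweighted composite: the plain mean of the raw items, no factor weighting.
dat$composite <- rowMeans(dat[, c("item1", "item2", "item3", "item4")], na.rm = TRUE)

# A weighted composite additionally needs the factor score (weight) matrix
# from your CFA output, with each item multiplied by its weight before summing.
```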
@Manuel: Unfortunately, I can't find specifications or theoretical implications for your advice in the link you posted; could you guide me further?
@Julia: I've computed the factor score matrix, but the latent means (for the whole group) stay 0...
These restrictions are necessary to get an at least identified model. Kenny (see the link below) suggests that fixing the intercept of the first indicator to zero and estimating the latent factor means freely is a common procedure. However, other alternatives exist, such as the effects coding approach (the intercepts are modeled so that their sum is zero, and the factor means are estimated freely; see Brown, p. 226).
Taken together, I think you have to decide on one of these alternatives to estimate the means; otherwise they are fixed to zero, or the model is not identified without additional constraints.
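If it helps, a rough lavaan sketch of the first alternative (the indicator names x1-x3 and the data frame dat are placeholders):

```r
library(lavaan)

model <- '
  # loading of the first indicator is fixed to 1 by default (reference indicator)
  f =~ x1 + x2 + x3

  # fix the intercept of the reference indicator to zero ...
  x1 ~ 0*1

  # ... and estimate the latent factor mean freely (NA* frees the parameter)
  f ~ NA*1
'

fit <- cfa(model, data = dat, meanstructure = TRUE)
summary(fit)

# Effects coding alternative (Brown, p. 226): label the intercepts and
# constrain them to sum to zero instead, e.g.
#   x1 ~ i1*1
#   x2 ~ i2*1
#   x3 ~ i3*1
#   i1 == 0 - i2 - i3
# with the latent mean freed as above.
```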
Does this make sense?
Brown, Timothy A. Confirmatory factor analysis for applied research. Guilford Publications, 2015.
Consider partitioning the database by identifying nominal (categorical) variables, so that when the information is crossed, these variables can be treated as covariates. Cross the data using such variables.
Daniel, given some of your responses, I am wondering if you are conflating two different things: (1) using CFA to determine whether different items "go together" to measure a single underlying construct, and (2) differences between different measures (which is repeated measures; a paired t-test if you are only comparing two). The fact that a combined measure can be created from CFA is not useful if you want to address the second question.
In the olden days, CFA just gave us a factor score matrix and we had to make our own composites. The usual way was to z-score each item, multiply it by its factor score coefficient, and then sum them. Now that the program does this for you, you don't have the option to put the composite together in a way that lets you compare the means. So you need to have your analysis program do it "by hand."
In the example I have attached, there are two CFA analyses from a set of items that are all coded 1 = never to 5 = always, one measuring physical problems, one measuring emotional problems. If you let the program make the measures, you are comparing 0 to 0. But if you just average the items together using the original coding, you can compare them. And if you want the weighting, you can construct your own weights, weight each item by that new weight, average the weighted items together, and compare them. But note that if you multiply the factor score weights by the original coding for two different composites, you will have two composites that are correlated r = 1.0. You need to transform your weights relative to the item coding to maintain the original composites' correlations.
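To make this concrete, a small R sketch of both routes, with hypothetical items phys1-phys3 and emo1-emo3 (coded 1-5) in a data frame dat and made-up weight vectors standing in for your factor score matrix:

```r
phys_items <- c("phys1", "phys2", "phys3")
emo_items  <- c("emo1",  "emo2",  "emo3")

# (1) Unweighted composites in the original 1-5 metric: their means are comparable.
dat$phys_mean <- rowMeans(dat[, phys_items], na.rm = TRUE)
dat$emo_mean  <- rowMeans(dat[, emo_items],  na.rm = TRUE)
t.test(dat$phys_mean, dat$emo_mean, paired = TRUE)  # repeated-measures comparison

# (2) "By hand" weighted composites: z-score each item, multiply it by its
#     factor score coefficient, and sum. The weights below are placeholders.
w_phys <- c(0.40, 0.35, 0.25)
w_emo  <- c(0.45, 0.30, 0.25)
dat$phys_w <- as.numeric(scale(dat[, phys_items]) %*% w_phys)
dat$emo_w  <- as.numeric(scale(dat[, emo_items])  %*% w_emo)
# These z-score-based composites are again centred near zero, so for mean
# comparisons the weights have to be rescaled to the original item coding,
# as noted above.
```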
I don't think that Daniel is mixing things up. If I understand him correctly, he wants to compare the latent means of different constructs within an SEM framework, and thereby get estimates of these latent means corrected for measurement error. How such a test (a t-test for dependent samples in the case of repeated measures) can be done using SEM is presented here: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3794455/
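Roughly, such a model could look like this in lavaan (a sketch only; the factor and indicator names are placeholders):

```r
library(lavaan)

# Two constructs measured on the same persons; their latent means are
# estimated free of measurement error.
model <- '
  phys =~ phys1 + phys2 + phys3
  emo  =~ emo1  + emo2  + emo3

  # identification: fix the intercepts of the reference indicators to zero ...
  phys1 ~ 0*1
  emo1  ~ 0*1

  # ... and free the latent means (NA* overrides the default fixing to zero)
  phys ~ NA*1
  emo  ~ NA*1
'

fit <- cfa(model, data = dat, meanstructure = TRUE)
parameterEstimates(fit)  # inspect the two latent means
```

Equality of the two latent means can then be tested, for example, by labelling them and using lavTestWald(), or by a chi-square difference test against a model in which they are constrained to be equal (the SEM analogue of the dependent t-test described in the linked article).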
The example Julia attached is the result of a principal component analysis, which is not a factor analysis in the sense of CTT, since no differentiation between measurement and true score variance is made (therefore the scores are not corrected for measurement error). In addition, it is an exploratory procedure, since no assumptions regarding factor assignment are made. If Daniel wants to build a composite that represents most of the variance in the items, then PCA is okay; however, if he wants to work with latent scores corrected for measurement error, it is not, and CFA is the way to go.
Manuel, Daniel said this: "I'd like to compare the means of some factors in my CFA," which I took to mean that he does, in fact, want to compare the means between different factors that came from his CFA. Regardless of whether he used CFA or PCA, the problem is the same. That problem is what I was addressing in my example. If you have a method that will create latent measures from CFA that are NOT z-scores, I'm eager to know how.
For comparing latent means across groups in a CFA / SEM model you have to include the mean structure. One way to do this in lavaan is to constrain the loadings and intercepts of the indicators across groups with the group.equal argument. Before doing this you should evaluate measurement equivalence across groups (e.g., with the measurementInvariance() function of the semTools package). The latent mean in the first group is fixed to 0, while the mean in the second group is estimated freely and can be interpreted in the original metric of the scale. The standardized coefficient of the mean estimate in group 2 is actually a d statistic. You can check that by calculating d by hand: use the unstandardized estimates for the means and the factor variance... it's magic. :-)
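For concreteness, a sketch of that workflow (the data frame dat, grouping variable "group", and items x1-x4 are placeholders; measurementInvariance() has been deprecated in newer semTools releases in favour of measEq.syntax(), so treat that call as illustrative):

```r
library(lavaan)
library(semTools)

model <- ' f =~ x1 + x2 + x3 + x4 '

# 1. Evaluate measurement equivalence across groups
#    (configural -> metric -> scalar sequence of models).
measurementInvariance(model = model, data = dat, group = "group")

# 2. Scalar-invariant model: loadings and intercepts constrained equal across
#    groups; lavaan then fixes the latent mean to 0 in the first group and
#    estimates it freely in the second group.
fit <- cfa(model, data = dat, group = "group",
           group.equal = c("loadings", "intercepts"))

summary(fit, standardized = TRUE)
# The unstandardized latent mean in group 2 is the group difference in the
# scale's original metric; its standardized estimate can be read as a d-type
# effect size, as described above.
```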