1) The size of coefficients and r-square on the one hand and model fit on the other are two different pairs of shoes. Fit concerns the question of whether the algorithm can reproduce the covariance matrix of the model variables given the constraints (i.e., the fixed parameters) the model contains. If the variables have very low correlations to begin with, there are fewer violations of these constraints, hence you will get a decent fit even though the model is not very helpful.
2) The parameter estimates, however, tell you what you get from your model, and the factor loadings in particular tell you whether your measurement model is valid. With factor loadings that low, I personally would have doubts that I had measured anything reasonable. Such loadings typically occur when you include conceptually unrelated indicators. Because the correlations among these variables will tend to be low for exactly that reason, you will not detect the problem through model misfit (as I said, there are no violations of any constraints).
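To make point 1 concrete, here is a minimal numerical sketch (pure NumPy, with hypothetical numbers): if all inter-item correlations hover around .05, a one-factor model with standardized loadings near .22 reproduces the correlation matrix essentially perfectly. The residuals are near zero, so fit indices look fine, yet the loadings show the indicators share almost no common variance.

```
# Sketch (hypothetical numbers): good fit despite poor loadings
import numpy as np

k = 6        # number of indicators
r = 0.05     # assumed low correlation between every pair of items

# Observed correlation matrix: 1s on the diagonal, r everywhere else
observed = np.full((k, k), r)
np.fill_diagonal(observed, 1.0)

# For a one-factor model, the implied correlation of items i and j is
# lambda_i * lambda_j. With equal correlations, lambda = sqrt(r) fits exactly.
lam = np.full(k, np.sqrt(r))        # standardized loadings ~ 0.22
implied = np.outer(lam, lam)
np.fill_diagonal(implied, 1.0)      # unique variances absorb the rest

residuals = observed - implied
print("standardized loadings:", np.round(lam, 2))
print("largest absolute residual:", np.abs(residuals).max())
# -> loadings around .22 (clearly inadequate), residuals essentially zero:
#    the model "fits", but the items barely measure anything in common.
```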
I would recommend going back to the drawing board and selecting a focused set of indicators for which you can make a valid case that they are consistent with the implications of the common factor model (i.e., all items measure the same underlying phenomenon). Then test again.
And by the way: this thread will probably be flooded with statements such as "Hair et al. recommend that factor loadings above .30 are acceptable". Don't trust them. These recommendations stem from scholars who come from a different school of thought (factor analysis as mere data reduction).