You don't mention what type of factoring you're doing, nor the purpose of the factoring. These could be important considerations in mustering a rationale for a given loading threshold as somehow being sacrosanct.
Suggested loading guidelines are just that: guidelines, and not commandments. There are numerous examples in the literature wherein loading thresholds of .30, .35, .40, .50, .70 (and others, including any correlation significantly different from zero) have been applied.
For your reference, Raymond Cattell wrote of an example in which variables having loadings as low as .15 were retained as being salient for the factor.
You might wish to have a look at his text, The scientific use of factor analysis in behavioral and life sciences.
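To make the arbitrariness of cut-offs concrete, here is a minimal sketch (in Python; the loading values and item count are invented for illustration, not taken from any real data set) showing how the same items survive or fall depending on which threshold one happens to adopt:

```python
import numpy as np

# Hypothetical one-factor loading vector for five items;
# the values are made up purely for illustration.
loadings = np.array([0.82, 0.61, 0.48, 0.37, 0.15])

def salient_items(loadings, cutoff):
    """Return the 1-based indices of items whose absolute loading
    meets or exceeds the chosen cut-off."""
    return [i for i, l in enumerate(loadings, start=1) if abs(l) >= cutoff]

# The same data judged under three cut-offs commonly seen in the literature:
for cutoff in (0.30, 0.40, 0.50):
    print(f"cut-off {cutoff:.2f}: retain items {salient_items(loadings, cutoff)}")
# cut-off 0.30: retain items [1, 2, 3, 4]
# cut-off 0.40: retain items [1, 2, 3]
# cut-off 0.50: retain items [1, 2]
```

Note that under Cattell's example, even item 5 (loading .15) could count as salient, while a .50 rule would discard more than half the scale; nothing in the numbers themselves tells you which rule is "right".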
Since you mentioned AMOS, I take it that this is about confirmatory factor analysis or SEM. Two points to consider before dropping items:
[1] To maintain content validity
- if you remove items, you can get great results from a statistical point of view (extracted variance, reliability), but do the remaining items still represent the intended construct?
[2] To preserve comparability with previous studies
- if each survey removes some items from the scale, how can the results of one study be compared with the next?
I have meanwhile given up fighting against this :) The point of my repeatedly stated suggestion is that a factor loading has to make theoretical sense. A factor loading is the connection between your *supposed* latent variable and its reflection (the indicator). If your indicator is a crystal-clear formulation, then a factor loading of .5 would raise some doubt on my side as to whether the factor is really what I want to measure.
If you read study 1 in this paper:
Rosman, T., Kerwer, M., Steinmetz, H., Chasiotis, A., Wedderhoff, O., Betsch, C., & Bosnjak, M. (2021). Will COVID‐19‐related economic worries superimpose health worries, reducing nonpharmaceutical intervention acceptance in Germany? A prospective pre‐registered study. International Journal of Psychology. https://doi.org/10.1002/ijop.12753
We had factor loadings of over .8, and we still concluded that the factor was invalid in the first run (and reimagined its meaning). BTW, the paper presents a practical way in which we tried to deal with this. We may still be wrong, but being aware of potential problems in a model and reacting to them with some suggestions is better than explaining away obvious problems. This is what I learned from Popper :)