I understand your question. The first problem that comes to mind is that most researchers use IBM SPSS. Unfortunately, the software's default extraction method is set to PCA, so unless you change it, SPSS hands you PCA results.
When I started working with factor analysis I mostly used PCA, and for what I was doing that was wrong. At first glance it seems to provide "better" results, but that is a false impression. Common factor analysis procedures explicitly model measurement error, while PCA does not; that difference is what gives PCA loadings their inflated appearance when you compare results. In that sense, common factor analysis is the less forgiving method. It is also important to look at the different assumptions of the extraction methods, since violations can drastically change the behavior of the results. I am speaking here of the use of PCA in psychology, where measurement error really matters. Hair et al. (2006), if I am not mistaken, come from management and business, and they have specific uses for PCA.
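The point about inflated PCA loadings can be seen in a small simulation. This is a sketch under assumed conditions: one latent factor measured by six noisy indicators (the loading of 0.6 and error scale of 0.8 are made-up illustration values), comparing PCA component loadings against loadings from a common factor model (scikit-learn's FactorAnalysis).

```python
# Sketch: simulate one latent factor measured by six indicators with
# substantial measurement error, then compare PCA "loadings" with
# common factor analysis loadings. All numbers here are illustrative.
import numpy as np
from sklearn.decomposition import PCA, FactorAnalysis

rng = np.random.default_rng(42)
n, p = 1000, 6
latent = rng.normal(size=(n, 1))
true_loadings = np.full((1, p), 0.6)          # common (shared) variance
errors = rng.normal(scale=0.8, size=(n, p))   # unique/measurement error
X = latent @ true_loadings + errors

# Standardize so both methods work on the correlation structure
Xs = (X - X.mean(axis=0)) / X.std(axis=0)

# PCA loadings: first eigenvector scaled by sqrt of its eigenvalue
pca = PCA(n_components=1).fit(Xs)
pca_loadings = pca.components_[0] * np.sqrt(pca.explained_variance_[0])

# Common factor analysis: models unique variances explicitly
fa = FactorAnalysis(n_components=1).fit(Xs)
fa_loadings = fa.components_[0]

print("PCA loadings:", np.round(np.abs(pca_loadings), 2))
print("FA  loadings:", np.round(np.abs(fa_loadings), 2))
```

Because the principal component must also absorb the unique (error) variance that the factor model sets aside, every PCA loading comes out larger than the corresponding factor loading.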
As for Varimax, there is no problem in using it as such. The issue currently seen in publications is that, just like PCA, Varimax has often been used for the wrong purposes. Its characteristic of pulling factor loadings toward the extremes (high, or near zero) made it attractive to many social researchers. What was neglected is that Varimax is an orthogonal rotation: orthogonality implies that the factors are uncorrelated and that each item will be explained mostly by a single factor. The problem is that in the social sciences it is unlikely, though not impossible, to find truly uncorrelated factors.
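Both properties mentioned above can be made concrete. Below is a minimal NumPy sketch of the classic varimax algorithm (the small loading matrix is an invented example, not real data): the rotation matrix it produces is orthogonal by construction, which is exactly why the rotated factors must remain uncorrelated.

```python
# Sketch of the standard SVD-based varimax algorithm.
import numpy as np

def varimax(L, gamma=1.0, max_iter=100, tol=1e-6):
    """Varimax-rotate a loading matrix L (items x factors).
    Returns the rotated loadings and the orthogonal rotation matrix R."""
    p, k = L.shape
    R = np.eye(k)
    crit_old = 0.0
    for _ in range(max_iter):
        Lr = L @ R
        # SVD of the gradient of the varimax criterion
        u, s, vt = np.linalg.svd(
            L.T @ (Lr**3 - (gamma / p) * Lr @ np.diag((Lr**2).sum(axis=0)))
        )
        R = u @ vt          # product of orthogonal matrices: R stays orthogonal
        crit_new = s.sum()
        if crit_new - crit_old < tol:
            break
        crit_old = crit_new
    return L @ R, R

# Invented two-factor loading matrix for illustration
L = np.array([[0.6,  0.5],
              [0.7,  0.4],
              [0.5, -0.5],
              [0.6, -0.6]])
Lr, R = varimax(L)
print(np.round(R.T @ R, 6))   # identity matrix: the rotation is orthogonal
```

Because R satisfies R'R = I, the rotated factors are forced to be uncorrelated; an oblique rotation (e.g. promax or oblimin) relaxes exactly this constraint.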
So the point is not whether something should or should not be used, but rather what the assumptions and purposes of the different methods are: what kind of information they use, and what sacrifices they impose in representing what we are investigating.
We are not looking for a method only (the method is standard); the theoretical foundations should also be clear. I wrote a paper on this issue arguing that we need to look at it from its conceptual basis.
There have been some great answers here. I'll add my tuppence worth.
Ken Bollen (with Richard Lennox) wrote a paper in 1991 (Conventional Wisdom on Measurement) in which the distinction between PCA and EFA (or between "causal" and "effect" indicator models) was made very clearly. It is probably the best and most influential paper I have ever read: a cracking read that makes psychometrics engaging, interesting and challenging.
Preacher & MacCallum also published "Repairing Tom Swift's Electric Factor Analysis Machine" in 2003. Best title of an academic paper ever, and a very clear dismissal of "Little Jiffy" (the horror triumvirate of eigenvalues > 1, PCA, and Varimax rotation).
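One leg of that triumvirate, the eigenvalues-greater-than-1 (Kaiser) retention rule, is easy to discredit with a few lines of code. This sketch (sample size and variable count are arbitrary choices) generates pure noise with no factor structure at all and counts how many "factors" the rule would still retain; parallel analysis, which Preacher & MacCallum favor, compares against exactly these noise eigenvalues instead.

```python
# Illustration: the Kaiser (eigenvalue > 1) rule retains several
# "factors" even from completely factorless random data.
import numpy as np

rng = np.random.default_rng(0)
n, p = 100, 20
noise = rng.normal(size=(n, p))              # independent variables, no factors
corr = np.corrcoef(noise, rowvar=False)
eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]

n_retained = int((eigvals > 1).sum())
print(f"Kaiser rule retains {n_retained} 'factors' from pure noise")
```

Because sampling error alone pushes the largest eigenvalues of a noise correlation matrix above 1, the rule systematically over-extracts, and the effect worsens as the number of variables grows relative to the sample size.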
Referenced article: Conventional Wisdom on Measurement: A Structural Equation Perspective