Note that the rules of thumb for PCA/factor analysis are based on correlational/covariance analysis, in which you normally require 10 subjects per variable. Factor analysis works on matrix computations/transformations involving the variance-covariance matrix and its associated eigenvectors, which is why the aforementioned rule of thumb applies. However, it is crucial to know that this rule of thumb is NOT always correct or appropriate, particularly when you have a set of items that fail to correlate significantly regardless of the sample size (for conceptual or technical reasons).
You need to consider the communality values, the number of factors, and the number of variables/items constituting each factor/component, as well as the cumulative variance accounted for by the factors. A sample of fewer than 100 people (preferably over 50) should be perfectly fine provided that you have high communality values, a small number of factors (3 or fewer), and a balanced number of items with discriminant loadings on each factor. Moreover, smaller samples are generally better suited to confirmatory rather than exploratory FA.
Also, note that a similar question has been raised before; see https://www.researchgate.net/post/What_is_the_optimum_sample_size_for_factor_analysis
Also see Article Exploratory Factor Analysis With Small Sample Sizes
Hello, I recommend reading these papers, which take a perspective similar to the one posted by Jimmy Y. Zhong:
de Winter, J. C. F., Dodou, D., & Wieringa, P. A. (2009). Exploratory factor analysis with small sample sizes. Multivariate Behavioral Research, 44, 147–181. https://doi.org/10.1080/00273170902794206
Mundfrom, D. J., Shaw, D. G., & Ke, T. L. (2005). Minimum sample size recommendations for conducting factor analyses. International Journal of Testing, 5, 159–168. https://doi.org/10.1207/s15327574ijt0502_4
Pearson, R. H., & Mundfrom, D. J. (2010). Recommended sample size for conducting exploratory factor analysis on dichotomous data. Journal of Modern Applied Statistical Methods, 9(2), 359–368. Retrieved from http://digitalcommons.wayne.edu/jmasm/vol9/iss2/5
The articles presented by Andres Alberto Burga are a must-read. Kudos to Andres Alberto Burga!
Also, I would like to mention that the ratio of variables to factors is important (Mundfrom et al., 2005), as it gives a researcher a basic idea of the lower limit on the number of items to construct for each factor/component, rather than mindlessly writing many items per factor under the misconception that more items are always better. Judging from past experience, this is absolutely NOT the case, and it is crucial to remember.
I suggest a rule of thumb of 10:1, i.e., 10 cases per item. You can also set the margin of error (e) between 1% and 4% for determining the minimum sample size in a survey study. Attached is a sample size table prepared in Excel format; the minimum sample size is denoted as "n". Just enter your desired margin of error (e) and p = 0.5.
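For readers without the attached spreadsheet, a table of this kind is typically built from Cochran's formula for estimating a proportion; the sketch below assumes that formula with a 95% confidence level (z = 1.96), which matches the p = 0.5 convention mentioned above but may differ from the exact table attached.

```python
import math

def cochran_sample_size(e, p=0.5, z=1.96):
    """Minimum sample size via Cochran's formula for a proportion.

    e -- margin of error (e.g. 0.05 for 5%)
    p -- expected proportion; 0.5 maximizes p*(1-p), giving the most
         conservative (largest) sample size
    z -- z-score for the confidence level (1.96 for 95%)
    """
    return math.ceil(z ** 2 * p * (1 - p) / e ** 2)

print(cochran_sample_size(0.05))  # 385 at a 5% margin of error
print(cochran_sample_size(0.03))  # 1068 at a 3% margin of error
```

Note that Cochran's formula addresses survey precision for a proportion, not factor recovery per se, so it complements rather than replaces the communality-based considerations discussed in this thread.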
As some researchers have highlighted in this thread, there are several suggestions for the minimum sample size in the factor-analytic literature, but following a rule of thumb can be misleading in some cases. A large dataset does not guarantee accurate factor solutions. I suggest that EFA/PCA researchers carefully check the communalities: when the communality values are high (larger than .60), even a relatively small sample size can be enough. Further, the cumulative percentage of variance explained by the extracted factors/components and the number of items loading on each factor/component should also be taken into consideration. In short, instead of blindly following certain cut-off levels or rules of thumb, researchers should first get to know their dataset by performing detailed data screening and checking.
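To make the communality check above concrete, here is a minimal numpy sketch using a hypothetical rotated loading matrix (the numbers are illustrative, not from any real dataset): each item's communality is the sum of its squared loadings across the extracted factors, and items below the .60 benchmark are flagged.

```python
import numpy as np

# Hypothetical rotated loading matrix: 6 items x 2 factors
loadings = np.array([
    [0.82, 0.10],
    [0.78, 0.05],
    [0.75, 0.12],
    [0.08, 0.80],
    [0.11, 0.77],
    [0.06, 0.72],
])

# Communality of each item = sum of its squared loadings
communalities = (loadings ** 2).sum(axis=1)
print(communalities.round(2))

# Flag items falling below the .60 benchmark mentioned above
low = np.where(communalities < 0.60)[0]
print("Items with low communality:", low)
```

If several items fall below the benchmark, that is a signal that a small sample may not support a stable factor solution, regardless of what a cases-per-item rule of thumb suggests.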
Here are some useful papers that you can refer to when conducting EFA/PCA:
Chapter Exploratory Factor Analysis and Principal Components Analysis
Article Methodological Synthesis in Quantitative L2 Research: A Revi...