It depends on why you are doing the factor analysis, how many factors you are allowing, how much variance there is, etc., so more details are needed. Two good introductory books on factor analysis are:
more technical: http://www.amazon.com/Latent-Variable-Models-Factor-Analysis/dp/0470971924/ref=sr_1_1?s=books&ie=UTF8&qid=1447649003&sr=1-1&keywords=bartholomew+knott
Please refer to the book "Multivariate Analysis" by Hair et al. (2012). The acceptable variance explained in factor analysis for a construct to be valid is 60%.
This is tricky! It depends on the scale that you're using. If it is an already validated scale, you may consider the whole scale. However, you can adapt the scale by using "very good" arguments for your phenomenon (I mean theoretical explanations that allow you to drop some items, for example).
As others have said, it all depends on many things. For example, you may have a quite low overall percentage of explained variance, say about 30%, but it may consist of 20% from one "good" factor and 10% from several "bad" factors. That one good factor could still be practically and/or scientifically quite useful.
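To make the point above concrete, here is a minimal numpy sketch on made-up data: one strong latent factor plus noise gives a modest total explained variance, but the first component's share dominates the rest. The data, loadings, and sample size are all hypothetical, chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 200 respondents, 8 items, one strong latent factor
# buried in noise, so total explained variance is modest but unevenly split.
loadings = rng.normal(size=8)
factor = rng.normal(size=(200, 1))
X = factor @ loadings[None, :] + 2.0 * rng.normal(size=(200, 8))

# Eigendecomposition of the correlation matrix: each eigenvalue's share
# of the trace is that component's proportion of explained variance.
R = np.corrcoef(X, rowvar=False)
eigvals = np.sort(np.linalg.eigvalsh(R))[::-1]
shares = eigvals / eigvals.sum()

print(np.round(shares[:3], 3))  # one "good" component, several weak ones
```

Whether that one dominant component is scientifically meaningful is then a substantive question, not something the percentage alone can settle.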
• The coefficient of determination is a measure of the amount of variance in the dependent variable explained by the independent variable(s). A value of one (1) means perfect explanation and is not encountered in reality due to ever-present error. A value of .91 means that 91% of the variance in the dependent variable is explained by the independent variables.
• The amount of variation explained by the regression model should be more than the variation explained by the average. Thus, R2 should be greater than zero.
• R2 is impacted by two facets of the data:
o the number of independent variables relative to the sample size. For this reason, analysts should use the adjusted coefficient of determination, which adjusts for the inflation in R2 caused by overfitting the data.
o the number of independent variables included in the analysis. As you increase the number of independent variables in the model, R2 increases automatically, because the sum of squared errors from the regression begins to approach the sum of squared errors about the average.
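The two facets above can be sketched with a small numpy example on simulated data (the sample size, coefficients, and noise predictors are all assumptions for illustration): adding pure-noise predictors can only push plain R2 up, while adjusted R2 penalises the extra predictors.

```python
import numpy as np

rng = np.random.default_rng(1)

n = 50
x_true = rng.normal(size=n)
y = 3.0 * x_true + rng.normal(size=n)  # one genuinely useful predictor

def r_squared(X, y):
    """Plain R^2 from an OLS fit with an intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    ss_res = resid @ resid
    ss_tot = ((y - y.mean()) ** 2).sum()
    return 1.0 - ss_res / ss_tot

def adjusted_r_squared(r2, n, k):
    """Adjust R^2 for the number of predictors k relative to sample size n."""
    return 1.0 - (1.0 - r2) * (n - 1) / (n - k - 1)

X = x_true[:, None]
for extra in (0, 5, 15):
    noise = rng.normal(size=(n, extra))          # pure-noise predictors
    Xk = np.column_stack([X, noise]) if extra else X
    k = Xk.shape[1]
    r2 = r_squared(Xk, y)
    print(k, round(r2, 3), round(adjusted_r_squared(r2, n, k), 3))
```

Plain R2 never decreases as columns are added, which is exactly why the adjusted version is the one to report when comparing models with different numbers of predictors.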
It should not be less than 60%. If the variance explained is 35%, that suggests the data are not very useful, and you may need to revisit the measures, and even the data collection process. If the variance explained is less than 60%, there is a good chance that more factors will show up in the model than expected.
I would like to know why this "60% threshold" is needed for NGS data. If you use a negative binomial distribution analysis (e.g., DESeq) and your treatment is not expected to induce changes in a large proportion of transcripts, why does this matter? As far as I can tell, PCAs are exploratory, and these analyses will find the differential expression. Any info would be appreciated, thanks!