Most of the literature I've read suggests a cut-off point of 0.4; however, there are no hard rules, and it all depends on the instrument you are using. With a large enough sample, even factor loadings of 0.2 would be statistically significant, but are those items worth including? It comes down to what you want: if you want a consistent scale that you can use in SEM, go for fewer items with larger factor loadings (e.g. 0.6 or larger); if you want a scale that addresses many facets of a measured trait, you might include more items, even if their loadings are as low as 0.3. Also, in exploratory factor analysis most researchers tend to retain items with higher loadings (at least 0.5) in the final scale, whereas in confirmatory factor analysis you usually just check that all items load onto their intended factors and that there are no substantial cross-loadings.
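To make the trade-off concrete, here is a minimal sketch in Python showing how the retained item set shrinks as the cut-off rises. The loading values are made up for illustration; they are not from any real instrument.

```python
import numpy as np

# Hypothetical standardized loadings for 6 items on one factor
# (illustrative values only, not from any real instrument).
loadings = np.array([0.72, 0.65, 0.48, 0.41, 0.33, 0.21])
items = [f"item{i + 1}" for i in range(len(loadings))]

for cutoff in (0.3, 0.4, 0.5):
    kept = [item for item, l in zip(items, loadings) if abs(l) >= cutoff]
    print(f"cut-off {cutoff}: keep {kept}")
```

A cut-off of 0.3 keeps all six items, 0.4 keeps four, and 0.5 keeps only two, which is exactly the breadth-versus-consistency choice described above.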
I quite agree with Mykolas. The cut-off point suggested in most of the literature is 0.4. The higher the cut-off, the fewer items load onto a factor, and it is better to have fewer items that contribute meaningfully to a factor than many items that may be redundant. I feel the cut-off should not be less than 0.4.
Mykolas is giving you good advice. There are no hard-and-fast rules, and the strategy to use largely depends on your goals.
For example, if you're trying to develop a questionnaire that other people will use simply via sum scores, then you want loadings that are large and consistent, since equality of loadings is exactly what sum-score measurement approaches implicitly assume.
If the factor will be used as part of a measurement model in SEM, there is no real drawback to low loadings. If the loadings are generally low, standard errors for paths involving the latent variable will be larger, but appropriately so, since they reflect the weakness of your measurement model.
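The sum-score point can be shown with a small simulation; this is a sketch with hypothetical loadings, where equally weighted sum scores recover the latent variable slightly worse than scores weighted by the (unequal) loadings:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate 5 standardized items driven by one latent factor with
# unequal loadings (all values hypothetical).
n = 500
loadings = np.array([0.8, 0.7, 0.6, 0.4, 0.3])
latent = rng.normal(size=n)
items = latent[:, None] * loadings + rng.normal(size=(n, 5)) * np.sqrt(1 - loadings ** 2)

sum_score = items.sum(axis=1)  # implicitly weights every item equally
weighted = items @ loadings    # weights each item by its loading

print(np.corrcoef(sum_score, latent)[0, 1])  # latent recovery by sum score
print(np.corrcoef(weighted, latent)[0, 1])   # latent recovery by weighted score
```

When loadings really are equal, the two scores coincide up to scaling, which is why large, consistent loadings matter for instruments scored by simple sums.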
A cut-off value of 0.40 for AVE is consistent with Fornell and Larcker (1981), who state that if AVE is less than 0.5 but composite reliability is higher than 0.6, the convergent validity of the construct is still adequate.
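For reference, both quantities follow directly from the standardized loadings. A minimal sketch, assuming standardized loadings and uncorrelated errors (the usual Fornell and Larcker setup):

```python
import numpy as np

def ave(loadings):
    """Average variance extracted: AVE = sum(lambda_i^2) / n,
    assuming standardized loadings."""
    l = np.asarray(loadings)
    return np.sum(l ** 2) / l.size

def composite_reliability(loadings):
    """Composite reliability: CR = (sum lambda)^2 /
    ((sum lambda)^2 + sum(1 - lambda^2)), assuming standardized
    loadings and uncorrelated errors."""
    l = np.asarray(loadings)
    num = np.sum(l) ** 2
    return num / (num + np.sum(1 - l ** 2))

# Made-up loadings producing AVE below 0.5 but CR above 0.6,
# the borderline case described above.
l = [0.65, 0.60, 0.55, 0.50]
print(round(ave(l), 3))                    # ~0.334
print(round(composite_reliability(l), 3))  # ~0.665
```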
Tabachnick and Fidell (2007) follow Comrey and Lee (1992) in suggesting more stringent cut-offs: 0.32 (poor), 0.45 (fair), 0.55 (good), 0.63 (very good), and 0.71 (excellent).
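Those benchmarks translate directly into a lookup for quick screening; a small sketch using exactly the thresholds quoted above:

```python
def comrey_lee_label(loading):
    """Label a standardized loading using the Comrey and Lee (1992)
    benchmarks as quoted by Tabachnick and Fidell (2007)."""
    l = abs(loading)
    if l >= 0.71:
        return "excellent"
    if l >= 0.63:
        return "very good"
    if l >= 0.55:
        return "good"
    if l >= 0.45:
        return "fair"
    if l >= 0.32:
        return "poor"
    return "below conventional cut-offs"

print(comrey_lee_label(0.58))  # -> "good"
```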