Several factors determine whether a sample size < 100 is sufficient. One is the number of questions in your questionnaire: in general, the fewer the items, the easier it is to establish validity and reliability. Another is the structure of the responses: binary responses generally require a smaller sample than a multi-point Likert scale because they show less variability. A third is whether the questionnaire is still in the development phase or is being formally tested for its psychometric properties; surveys are often first tried out in pilot testing on a relatively small sample.
Ultimately, you'll perform the reliability and validity testing and see if the survey meets the statistical criteria.
All that being said, it's in your best interest to ensure adequate sample size when developing survey questionnaires.
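To make that concrete, here is a minimal Python sketch of the kind of reliability check you would run on pilot data. The dataset is simulated and the 0.70 cut-off is only a common rule of thumb, not a hard requirement.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # number of items
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the scale total
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical pilot data: 30 respondents, 4 Likert items scored 1-5
rng = np.random.default_rng(0)
base = rng.integers(1, 6, size=(30, 1))
pilot = np.clip(base + rng.integers(-1, 2, size=(30, 4)), 1, 5)

alpha = cronbach_alpha(pilot)
print(f"Cronbach's alpha: {alpha:.2f}")  # >= 0.70 is a common rule of thumb
```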
I really like the way Ariel explained it. I would simply like to add a few points, as follows:
Meeting the statistical criteria is important (as Ariel says). For an exploratory study, a smaller sample is not a problem; it only needs to be tested through a pilot run with 30 or more respondents.
(a) Reliability: keeping 3-5 items/questions under each dimension is helpful (more items increase variability, as Ariel notes).
(b) Validity: construct validity, i.e. convergent (AVE) and discriminant (LVC).
(c) Predictive power (R²)
(d) Effect size (f²), as Silburn also comments.
(e) Predictive relevance (Q²)
If all of the above statistical criteria are fulfilled with a smaller sample size (< 100), the survey is quite acceptable. To me, SmartPLS is the best option for examining a survey with a small sample, as Asif described (a rough sketch of how a few of these criteria are computed follows below).
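For illustration only, here is a small Python sketch of how a few of these criteria (AVE, composite reliability, and f²) can be computed from the kind of output a PLS tool reports; all loadings and R² values below are made up.

```python
import numpy as np

# Hypothetical standardized loadings for one reflective construct,
# e.g. as reported by a PLS tool such as SmartPLS (values are made up).
loadings = np.array([0.78, 0.81, 0.74, 0.69])

# Convergent validity: Average Variance Extracted (AVE >= 0.50 is a common threshold)
ave = np.mean(loadings ** 2)

# Composite reliability (CR >= 0.70 is a common threshold)
errors = 1 - loadings ** 2
cr = loadings.sum() ** 2 / (loadings.sum() ** 2 + errors.sum())

# Effect size f^2 from the R^2 of a structural model with and without one predictor
r2_included, r2_excluded = 0.45, 0.38          # made-up values
f2 = (r2_included - r2_excluded) / (1 - r2_included)

print(f"AVE = {ave:.2f}, CR = {cr:.2f}, f2 = {f2:.2f}")
```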
I agree with a lot of what has been said so far. For a sample size < 100, sample size, power, validity, and reliability (provided you have suitable indicators) are not as critical when you are testing around demographics, running regression (predictive validity), or producing descriptives. However, if you are undertaking factor analysis or using AMOS for SEM, sample size requirements become considerably more demanding.
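To illustrate why the planned analysis matters, here is a rough Python sketch of a power calculation for the overall F test in multiple regression, using Cohen's f² and the noncentral F distribution. The effect size, number of predictors, and n are made-up examples; factor analysis or SEM would generally demand more than a calculation like this suggests.

```python
from scipy.stats import f as f_dist, ncf

def regression_power(n: int, n_predictors: int, f2: float, alpha: float = 0.05) -> float:
    """Approximate power of the overall F test in multiple regression (Cohen's f^2)."""
    df1 = n_predictors
    df2 = n - n_predictors - 1
    nc = f2 * (df1 + df2 + 1)                 # noncentrality parameter
    crit = f_dist.ppf(1 - alpha, df1, df2)    # critical F value
    return ncf.sf(crit, df1, df2, nc)         # P(F > crit) under the alternative

# Example (made-up numbers): medium effect f^2 = 0.15, 3 predictors, n = 80
print(f"power at n=80: {regression_power(80, 3, 0.15):.2f}")
```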
I am not certain of your application, but your question may be one where the finite population correction (fpc) factor matters. It can appreciably lower your sample size needs if your only concern is sampling error, which approaches zero as you approach a census. (But sometimes nonsampling error, such as measurement error, is more important.)
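As a generic illustration of that point (not specific to any particular application), the standard fpc formula shows how a required sample size shrinks when the population itself is small; the starting n and population size below are made up.

```python
import math

def fpc_adjusted_n(n0: float, population: int) -> float:
    """Adjust an infinite-population sample size n0 for a finite population of size N."""
    return n0 / (1 + (n0 - 1) / population)

# Example: n0 from a standard proportion-based calculation, N = 400 (made-up)
z, p, e = 1.96, 0.5, 0.05
n0 = (z ** 2) * p * (1 - p) / e ** 2          # ~384 for an "infinite" population
print(f"n0 = {n0:.0f}, fpc-adjusted n = {math.ceil(fpc_adjusted_n(n0, 400))}")
```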
In the area of estimation for continuous data in establishment sample surveys of finite populations, in which I worked for many years, the data were very skewed, with the totals for any given variable/question being dominated by a few establishments. Estimation for the remainder means that you generally need to consider the fpc (or the equivalent for model-based estimation), or a censused/"certainty" stratum, but it depends upon your purpose.
A simple random sample is seldom used because you can usually do better by using stratified random sampling, but your population may be too small for this.
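As a hedged illustration of that last point, a Neyman-style allocation samples more heavily from strata that are large or highly variable, which is why stratification usually beats simple random sampling when a few units dominate the totals; the stratum sizes and standard deviations below are invented.

```python
import numpy as np

# Hypothetical strata: sizes and estimated standard deviations (made-up values)
N_h = np.array([50, 150, 300])     # stratum population sizes
S_h = np.array([40.0, 12.0, 5.0])  # estimated within-stratum standard deviations
n_total = 80                       # overall sample size available

# Neyman allocation: n_h proportional to N_h * S_h
weights = N_h * S_h
n_h = np.round(n_total * weights / weights.sum()).astype(int)
print(dict(zip(["stratum_A", "stratum_B", "stratum_C"], n_h)))
```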