It sounds like you're using a non-probability sampling scheme. If that's the case, then all one can say is that more is probably better than less. If not, a better description of your sampling plan is needed. In that case, consult Scheaffer et al., Elementary Survey Sampling, downloadable from the Z-Library. If needed, please post another question. Best wishes, David Booth
The answer depends on what specific analyses you intend to perform for this pre-test appraisal of an instrument.
If your intention is to calibrate the items via item response theory, then you should aim for at least 200 cases for a one-parameter IRT model, and more (500-1500) for two- or three-parameter models.
If your intention is to factor the measure using exploratory factor analysis, then you should aim for 10-20 cases per item (with a lower bound of 100 cases). The reason is that the observed correlations are treated as if they were population parameters (they won't be, of course); as well, you're looking for a structure to explain/account for the k * (k - 1)/2 unique correlations/covariances among the set of k items.
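To see how quickly that correlation count grows, here is a small sketch (the item counts are just illustrative) that computes the number of unique correlations for k items and the 10-20 cases-per-item target with a floor of 100:

```python
# Unique correlations among k items: k * (k - 1) / 2
def unique_correlations(k):
    return k * (k - 1) // 2

# EFA rule of thumb: 10-20 cases per item, lower bound of 100 cases
def efa_sample_range(k, low=10, high=20, floor=100):
    return max(low * k, floor), max(high * k, floor)

for k in (10, 20, 40):
    print(k, unique_correlations(k), efa_sample_range(k))
# 10 items ->  45 correlations, N in (100, 200)
# 20 items -> 190 correlations, N in (200, 400)
# 40 items -> 780 correlations, N in (400, 800)
```

Note that doubling the item count roughly quadruples the number of correlations to be explained, which is why per-item (rather than flat) sample-size guidance is common.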
If you're concerned about basic item analysis and internal consistency reliability estimates for scores, then you should aim for: (a) more cases than items; and (b) a lower bound of 50-100 cases.
If you have something else in mind, then do follow Professor Booth's suggestion, and post further explanation. I'm confident that the Rgate crowd can then offer more focused recommendations.
The glib advice is 10-20 cases per parameter to be estimated in the CFA. For example, each variable-factor loading is an estimated parameter. You can, of course, be more specific by evaluating simulations that mirror the proposed factor structure.
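As a rough sketch of that parameter-counting heuristic, assume a simple congeneric CFA with k items, each loading on exactly one of f correlated factors, uncorrelated errors, and marker-variable identification (one loading per factor fixed to 1). The free parameters are then the remaining loadings, the residual variances, and the factor variances/covariances:

```python
# Free parameters in a simple congeneric CFA with marker-variable scaling:
#   loadings: k - f (one per factor fixed to 1 for identification)
#   residual variances: k
#   factor variances/covariances: f * (f + 1) / 2
def cfa_free_parameters(k, f):
    loadings = k - f
    residual_variances = k
    factor_var_cov = f * (f + 1) // 2
    return loadings + residual_variances + factor_var_cov

# Apply the 10-20 cases-per-parameter rule of thumb
def cfa_sample_range(k, f, low=10, high=20):
    p = cfa_free_parameters(k, f)
    return p, low * p, high * p

# e.g., 12 items on 3 correlated factors
print(cfa_sample_range(12, 3))  # (27, 270, 540)
```

The exact parameter count depends on the identification scheme and any correlated residuals or cross-loadings you allow, so treat this as a starting point rather than a formula.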
The concern associated with a convenience sample is that it may not be representative of your intended population.