We know cross-sectional analyses limit the ability to draw causal inferences. What is the role of sampling limitations in the generalizability of research studies?
I am not quite sure what you mean by 'sampling limitations', but there seem to be two different possibilities: either it concerns how much random error can influence results, which is a question for a statistician, or it concerns how representative the sample is of the population. The latter question is often phrased as the problem of non-response to surveys, or response rate. This has been posed as a potential threat to validity for decades, but recent research has shown that it is not as bad as often believed.
In a recent paper of ours we basically find that the effect of non-response is hardly detectable, even at a response rate of 6%. This is in agreement with other research which you can find referenced in our paper.
af Wåhlberg, A. E., & Poom, L. (2015). An empirical test of non-response bias in internet surveys. Basic and Applied Social Psychology, 37, 336-347.
Your sample needs to generalize to the population to which your research is to apply. Randomized sampling designs are often used for that. Alternatively, you may have auxiliary (regressor) data for your population that forms the basis of a regression relationship, from which data not sampled may be estimated (this is also often a good way to estimate for nonresponses in your sample); sometimes randomization and regression are used together. Regression relationships are sometimes used to obtain a "balanced sample," which achieves a kind of representativeness that randomization does not always achieve.
As for nonresponse, there are two basic kinds: ignorable nonresponse and nonignorable nonresponse. Ignorable nonresponse is not actually truly 'ignorable.' It just means that the missing data follow the same pattern - are generated by the same mechanisms - as the data you do have in a population, subpopulation, category, or stratum. (This is always, to some degree, unknowable.) You might then sometimes replace the missing data with a mean value, but that artificially reduces variance estimates, and your data are often missing for a reason that makes this a bad choice. Nonresponse problems in surveys have generated a great deal of international attention, in workshops and in the literature, for many years and across many applications. The ill effects of nonignorable nonresponse may be reduced by stratification (basing it on response propensity groups may help) and by regression; these techniques can also be used to avoid underestimating uncertainty.
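The point about mean imputation shrinking variance estimates can be demonstrated directly. This toy example (all numbers invented) deletes 30% of a complete data set at random and fills the gaps with the observed mean:

```python
import random
import statistics

random.seed(1)

# Hypothetical complete data, then delete about 30% at random
# (an "ignorable" missingness mechanism for this illustration)
full = [random.gauss(100, 15) for _ in range(500)]
observed = [v for v in full if random.random() > 0.30]

# Mean imputation: fill every missing value with the observed mean
imputed = observed + [statistics.mean(observed)] * (len(full) - len(observed))

# The imputed data set understates the spread of the real data
print(round(statistics.stdev(full), 2), round(statistics.stdev(imputed), 2))
```

The imputed standard deviation comes out noticeably smaller than that of the complete data, because every imputed value sits exactly at the mean and contributes nothing to the spread.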
Nonignorable nonresponse has been studied by many statisticians because it can be extremely serious and invalidate your results. The potential for bias is often substantial, to say the least.
Even if you do not have serious nonresponse issues, and your sample can be used to infer to the population of interest because it was drawn with a randomized design, uses regression to relate to something known that is correlated with your data, or both, you can still have substantial uncertainty. Even if that kind of 'representativeness' from randomization, regression, or both is achieved, with little or no suspicious nonresponse, you still need an adequate sample size for a reasonably low variance. What counts as reasonable uncertainty - say, a low enough root mean square error, the square root of the sum of the variance and the squared bias - is defined by what is reasonable for your study. Some study results may need to be much more reliable than others.
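The variance/bias decomposition just mentioned is simple arithmetic. With invented numbers (a standard error of 2.0 and a suspected bias of 1.5):

```python
import math

# Toy numbers, not from any study: standard error 2.0, suspected bias 1.5
variance = 2.0 ** 2
bias = 1.5

# Mean square error = variance + bias^2; the root MSE is on the original scale
mse = variance + bias ** 2
rmse = math.sqrt(mse)
print(mse, rmse)  # 6.25 2.5
```

Note that a bias smaller than the standard error still inflates the RMSE, which is why bias concerns should be reported even when they cannot be eliminated.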
Sample size requirements generally assume you have eliminated bias and need a sample based on an acceptable standard error for each parameter of interest. Of course you are not likely to eliminate all bias, but you do your best and should note remaining concerns in your report. (An estimated standard error is also incidentally impacted by bias, and can, in a way, cover for it fairly well at times.) To estimate sample sizes - that is, to approximate the standard errors you might later obtain - you need an idea of the population (or subpopulation or stratum) standard deviation(s). These must be estimated, or at least guessed to a reasonable extent, to see what sample size goes with a given population standard deviation to obtain a reasonable standard error for a parameter or statistic of interest, such as an estimated mean. This is shown in texts such as Cochran, W. G. (1977), Sampling Techniques, 3rd ed., John Wiley & Sons, where suggestions are given for obtaining such a guess, such as a pilot study; or you may have information from other work. I suggest trying a few possibilities to get an idea of a nearly "best" and "worst" case.
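A Cochran-style sample size calculation for estimating a mean can be sketched in a few lines. The numbers below (margin of error, population size, candidate standard deviations) are invented, and the "best/worst case" idea is shown by trying two guesses for the population standard deviation:

```python
import math

def sample_size(sigma, e, N, z=1.96):
    """Sample size for estimating a mean to margin of error e at ~95% confidence.

    n0 = (z * sigma / e)^2, then the finite population correction
    n = n0 / (1 + n0 / N), rounded up.
    """
    n0 = (z * sigma / e) ** 2
    return math.ceil(n0 / (1 + n0 / N))

# Try a "best" and a "worst" guess for the population standard deviation
for sigma in (10, 20):
    print(sigma, sample_size(sigma=sigma, e=2.0, N=5000))
```

Doubling the guessed standard deviation roughly quadruples the required sample, which is why a pilot-study estimate of the spread matters so much.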
...
It seems logical that these same considerations would affect qualitative or subjective/interview results as well, though in a less quantifiable and perhaps more intricately interconnected way.
...
So, in summary, sampling 'limitations' would generally mean what population the sample might represent, and how well it might do this job.
Research studies are usually carried out on samples of subjects rather than whole populations. The most challenging aspect of fieldwork is drawing a random sample from the target population to which the results of the study are to be generalized. The key to a good sample is that it must be typical of the population from which it is drawn; when the information from a sample differs from that in the population in a systematic way, we say that error has occurred. In actual practice the task is difficult enough that several types of error arise: sampling error, non-sampling error, response error, processing error, etc. The most important of these is sampling error, which is statistically defined as the error caused by observing a sample instead of the whole population.

The underlying principle that must be followed if we are to have any hope of making inferences from a sample to a population is that the sample be representative of that population. A key way of achieving this is through "randomization". There are several types of random samples, some of which are: simple random sampling, stratified random sampling, and double-stage random sampling. The most important is the simple random sample, which is a sample selected in such a way that every possible sample of the same size is equally likely to be chosen. To reduce sampling error, the simple random sampling technique and a large sample size should be employed.
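The closing claim - that simple random sampling with a larger sample size reduces sampling error - can be checked by simulation. This sketch uses an invented population and repeatedly draws simple random samples of different sizes, measuring how much the sample mean scatters around the true mean:

```python
import random
import statistics

random.seed(2)

# Hypothetical population; we want to estimate its mean from samples
population = [random.gauss(50, 12) for _ in range(10_000)]
true_mean = statistics.mean(population)

def sampling_error_sd(n, reps=500):
    """Spread of the sample mean over repeated simple random samples of size n."""
    means = [statistics.mean(random.sample(population, n)) for _ in range(reps)]
    return statistics.stdev(means)

# Larger simple random samples give smaller sampling error (roughly sigma / sqrt(n))
for n in (25, 100, 400):
    print(n, round(sampling_error_sd(n), 2))
```

The simulated spread falls by about half each time the sample size is quadrupled, matching the familiar sigma/sqrt(n) behaviour of the standard error of a mean.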