Assessing the effectiveness of sampling techniques in survey research is essential to ensure that the resulting data accurately represent the target population and that the study's conclusions are valid. Probability sampling methods, such as simple random sampling and stratified random sampling, are considered the gold standard because they give each population member a known, nonzero chance of selection, which guards against selection bias and supports generalisability (Cochran, 1977).
In particular, stratified random sampling improves precision by partitioning the population into more homogeneous subgroups and sampling within each stratum, thereby reducing sampling error when population characteristics vary across groups (Kalton, 1983). Non-probability methods such as convenience or snowball sampling, while usually simpler and less expensive, are vulnerable to selection bias and poor representativeness (Etikan, Musa, & Alkassim, 2016). They can be appropriate for exploratory studies and hard-to-reach populations, but their implications for the credibility and reliability of survey results must be assessed carefully.
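The precision gain from stratification can be seen in a small simulation. This is a minimal sketch, not from the original text: the three strata, their sizes, and their value distributions are all invented for illustration, and proportional allocation is assumed.

```python
import random
import statistics

random.seed(42)

# Hypothetical population of 10,000 units in three strata whose means
# differ sharply; sizes and distributions are invented for illustration.
strata = {
    "small":  [random.gauss(10, 2) for _ in range(6000)],
    "medium": [random.gauss(50, 5) for _ in range(3000)],
    "large":  [random.gauss(200, 20) for _ in range(1000)],
}
population = [y for values in strata.values() for y in values]
N = len(population)

# Simple random sample of n = 500
n = 500
srs_mean = statistics.mean(random.sample(population, n))

# Proportionally allocated stratified sample of the same total size:
# the estimate is a weighted combination of per-stratum sample means.
strat_mean = 0.0
for values in strata.values():
    n_h = round(n * len(values) / N)          # proportional allocation
    sample_h = random.sample(values, n_h)
    strat_mean += (len(values) / N) * statistics.mean(sample_h)

true_mean = statistics.mean(population)
print(f"true mean:       {true_mean:.2f}")
print(f"SRS estimate:    {srs_mean:.2f}")
print(f"stratified est.: {strat_mean:.2f}")
```

Because the between-stratum variation is removed from the sampling error, the stratified estimate is typically much closer to the true mean than the SRS estimate at the same sample size.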
Survey methodologists often address selection effects in non-probability samples by adjusting the sample design with weights that compensate for under-coverage or nonresponse (Bethlehem, 2010). In complex surveys, these adjustments can be as simple as applying corrective weights to respondents from population segments that are under- or over-represented. The choice of sampling method should also reflect practical considerations such as feasibility, cost, and accessibility, weighing methodological compromises against benefits. For example, resource and data constraints often motivate designs such as multistage or cluster sampling, which trade some precision for efficiency in large-scale surveys (Kish, 1965). Evaluating these techniques, typically via response rates, sampling error, and bias, underscores the importance of sound sampling for producing credible, evidence-based survey results.
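A corrective weight of the simplest kind is the ratio of a group's known population share to its share among respondents. The sketch below assumes invented numbers: two groups "A" and "B" with known population shares, and a respondent set in which "B" is under-represented.

```python
# Known population shares (e.g. from a census); invented for illustration.
pop_share = {"A": 0.70, "B": 0.30}

# (group, y) pairs: 80 respondents from A but only 20 from B,
# so group B is under-represented relative to its population share.
respondents = [("A", 10.0)] * 80 + [("B", 50.0)] * 20

n = len(respondents)
resp_share = {g: sum(1 for grp, _ in respondents if grp == g) / n
              for g in pop_share}

# Corrective weight for each group: population share / respondent share.
weights = {g: pop_share[g] / resp_share[g] for g in pop_share}

unweighted = sum(y for _, y in respondents) / n
weighted = sum(weights[g] * y for g, y in respondents) / n

print(f"unweighted mean: {unweighted:.1f}")   # pulled toward group A
print(f"weighted mean:   {weighted:.1f}")     # 0.7*10 + 0.3*50 = 22.0
```

The unweighted mean is 18.0, biased toward the over-represented group; the weighted mean recovers the population-share-correct value of 22.0.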
References:
Bethlehem, J. (2010). Selection bias in web surveys. International Statistical Review, 78(2), 161–188.
Cochran, W. G. (1977). Sampling Techniques (3rd ed.). Wiley.
Etikan, I., Musa, S. A., & Alkassim, R. S. (2016). Comparison of convenience sampling and purposive sampling. American Journal of Theoretical and Applied Statistics, 5(1), 1–4.
Kalton, G. (1983). Introduction to Survey Sampling. Sage Publications.
Kish, L. (1965). Survey Sampling. Wiley.
Probability sampling has been widely used since the 1940s, but its biggest problem today is nonresponse, which undermines the basis for inference from a probability sample. Modeling is often used to deal with nonresponse, which strengthens the argument for a prediction-based (regression-based, not forecasting) approach.
Here is one example of improving on nonprobability sampling:
Elliott, M.R., and Valliant, R. (2017). Inference for Nonprobability Samples. Statist. Sci. 32(2): 249-264, May 2017. https://doi.org/10.1214/16-STS598 (Open Access from Project Euclid)
In work I did for the US Energy Information Administration (EIA), which is still in use there, for establishment surveys of multiple attributes, I developed near-cutoff sampling with prediction, usually ratio estimation, and the results were verified in multiple ways. First, results compared well to a previous stratified sample with a certainty stratum. Also, the sum of 12 monthly predicted totals was later compared to annual census data. Further, some collected data were held out, and I checked how well the predicted values compared to the actually collected values. Variance estimation and small bias were also verified with real data. So, if you can verify some cases and/or run test situations, that would help in evaluation. An advantage I had was that the predictor data for the (monthly) samples were generally the same data from a previous (annual) census, and a later census could then be used for testing/evaluation.
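The ratio-estimation step described above can be sketched in a few lines. This is not EIA code: the establishment sizes, the near-cutoff sample, and the observed monthly values are all invented, and a single auxiliary variable x from a prior census is assumed.

```python
# Auxiliary (census) values x known for all N establishments;
# numbers are invented for illustration.
x_population = [120, 80, 200, 60, 150, 90, 300, 40, 110, 70]
X_total = sum(x_population)

# Near-cutoff sample: only the largest establishments are observed,
# each contributing an (x, y) pair of census value and monthly value.
sampled = [(200, 210), (300, 330), (150, 160), (120, 126)]

# Ratio estimator of the population total:
#   y_total_hat = (sum of sampled y / sum of sampled x) * X_total
r = sum(y for _, y in sampled) / sum(x for x, _ in sampled)
y_total_hat = r * X_total

print(f"estimated ratio r = {r:.3f}")
print(f"predicted total   = {y_total_hat:.1f}")
```

The estimator leans on the sampled ratio of current to census values holding approximately for the unsampled small establishments, which is why a later census is so useful for checking it.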
However, in many sampling situations there may be no good way to verify results. This can be a serious problem with probability sampling that suffers from nonresponse. Likewise, in nonprobability sampling, as in Elliott and Valliant (2017) above, obtaining good-quality covariate data, and knowing which covariates are best, may be very problematic.
Therefore the best sampling technique may vary greatly by application. Within design-based, model-based, or model-assisted design-based approaches, there are various choices, but the availability of good-quality data is also important.
Cheers.
PS - This might also be of interest:
Brick, J. M. (2014), "Explorations in Non-Probability Sampling Using the Web," Proceedings of Statistics Canada Symposium 2014.