In Applied Survey Sampling (Blair and Blair, Sage Publications, 2014, p. 175), the authors note the common use of "credibility intervals" by researchers using nonprobability samples and Bayesian modeling. They note that the American Association for Public Opinion Research (AAPOR) cautioned that the public should not rely on these credibility intervals "in the same way" as a margin of sampling error. Attached is the 2012 AAPOR statement, which cautions heavily against such nonprobability opinion polls because the underlying Bayesian model assumptions will be of varying quality. However, the statement also notes that "...even the best design [probability sample] cannot compensate for serious nonparticipation by the public." Thus, for much of the data in a probability sample, we must use models or some other method to account for missing data, and when the nonresponse rate is high, do we really still have a valid probability sample?
The emphasis, then, should be on total survey error: there is sampling error, and there is nonsampling error. When nonresponse is substantial, reliance on a model can make results better overall.
If that is the case, then why do many survey statisticians insist on probability samples for highly skewed establishment surveys with continuous data, when excellent regressor data are available? Often sampling the largest few establishments will provide very high 'coverage' for any given important variable of interest; that is, most of the estimated total would already be observed. The remainder might be treated as if it were missing data from a census. But these missing data, if generally close to the origin in a scatterplot of y versus a regressor x, should contribute little prediction error, given the heteroscedastic nature of such data. Because measurement error is often relatively high for small establishments, long experience with energy data has shown that one can often predict small y-values more accurately than one could observe them. Further, this is done using the econometric concept of a "variance of a prediction error," with no Bayesian model assumptions introduced.
It is important not to lump nonprobability sampling that has good regressor data together with other nonprobability sampling. For official statistics, an agency will often conduct a periodic census and collect more frequent samples of the same variables of interest (the same attributes), or a subset of them. Often the best regressor data for those samples come from such a census.
Finally, many years of use in publishing official energy statistics have shown this methodology - far less radical than the use of "credibility intervals" for polling - to have performed very well for establishment surveys. It does not seem reasonable to argue with such massive, long-term success. The second link attached is a short paper on the use of this methodology; the third is a study of the variance and bias; and the fourth shows a simple example, using real data, of how effective a quasi-cutoff sample with a model-based classical ratio estimator can be.
Since the 1940s, and for good reason in many cases, probability sampling has been the "gold standard" for most sample surveys. However, models are used heavily in other areas of statistics, and even in survey statistics, "model-assisted design-based" methods have unquestionably, and often greatly, improved probability-sample results. Strictly model-based sampling and estimation does have a niche in establishment surveys, though it has met with resistance. One should not dismiss it without trying it. It seems especially odd to consider Bayesian "credibility intervals" acceptable for election polls but not quasi-cutoff sampling with model-based estimation, as described in the second link below.
Your comments?
http://www.aapor.org/AAPORKentico/AAPOR_Main/media/MainSiteFiles/DetailedAAPORstatementoncredibilityintervals.pdf
Article Efficacy of Quasi-Cutoff Sampling and Model-Based Estimation...
Article Using Prediction-Oriented Software for Survey Estimation - P...
Conference Paper Projected Variance for the Model-based Classical Ratio Estim...