You ask "Is quantitative research requiring probability sample design?"
To estimate for a population from a sample, inference (often estimates of means or totals, and their standard errors) is generally based on randomized selection. Bias and nonsampling errors of all kinds are very important, but the estimators that have been developed for variance, and therefore for standard errors, address sampling error only; they are the standard used to judge sample-size needs, and frankly, variance due to sampling is usually overemphasized when considering the accuracy of an estimated total, mean, or price (or any other ratio of two totals). The mathematics used to derive these sampling-variance estimators requires randomized sample selection and does not directly consider nonsampling error such as measurement error. When a finite population correction (fpc) factor is used for a finite population, less and less nonsampling error is even indirectly reflected as the sample size grows, until we reach a census, where there is no sampling error at all, and if nonsampling error is ignored, we often treat the results (falsely) as completely correct.
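For concreteness, here is a minimal sketch (in Python, with made-up numbers) of the usual design-based estimate of a mean under simple random sampling. The fpc shows how the estimated sampling error shrinks toward zero as the sample approaches a census, while nonsampling error appears nowhere in the calculation.

    import math
    import random

    def srs_mean_and_se(sample, N):
        """Design-based estimate of the population mean under simple random
        sampling without replacement, with the finite population correction (fpc).
        Nonsampling error (e.g., measurement error) is not reflected here."""
        n = len(sample)
        ybar = sum(sample) / n
        s2 = sum((y - ybar) ** 2 for y in sample) / (n - 1)   # sample variance
        fpc = 1.0 - n / N                                      # -> 0 as n -> N (census)
        se = math.sqrt(fpc * s2 / n)                           # estimated standard error
        return ybar, se

    # Hypothetical example: population of N = 1000, sample of n = 50
    random.seed(1)
    population = [random.gauss(100, 15) for _ in range(1000)]
    sample = random.sample(population, 50)
    print(srs_mean_and_se(sample, N=1000))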
The part your question addresses is the randomized selection: simple random sampling, or more likely stratified random sampling, probability-proportional-to-size (PPS) sampling, cluster sampling, or something more complicated such as multistage sampling; all of these require randomized selection.
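To illustrate the randomized-selection step itself, here is a sketch of stratified simple random sampling; the frame, strata, and allocation below are hypothetical.

    import random

    def stratified_srs(frame, stratum_of, n_by_stratum, seed=42):
        """Select a stratified simple random sample (without replacement).
        'frame' lists the population units, 'stratum_of' maps a unit to its
        stratum, and 'n_by_stratum' gives the allocation per stratum."""
        rng = random.Random(seed)
        sample = []
        for stratum, n_h in n_by_stratum.items():
            units_h = [u for u in frame if stratum_of(u) == stratum]
            sample.extend(rng.sample(units_h, n_h))   # randomized selection within each stratum
        return sample

    # Hypothetical frame: 300 small and 100 large establishments
    frame = [f"est{i}" for i in range(400)]
    stratum_of = lambda u: "large" if int(u[3:]) >= 300 else "small"
    print(stratified_srs(frame, stratum_of, {"small": 20, "large": 10})[:5])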
That is the way most statistical inference from sample surveys has been done since around the 1940s.
However, another school of thought is to base inference on regression models when you have regressor data for the entire population. Even when that is done, randomized sample selection is often used in an attempt to avoid leaving parts of the population unrepresented. Interestingly enough, a random selection can still yield an unfortunately drawn sample that one would not want to think of as 'representative,' and model-assisted design-based methods take advantage of regressor data (here called auxiliary data) to adjust for such an unfortunate sample. These methods, which combine design-based and model-based approaches, are generally the most accurate of all.
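As one simple illustration of how auxiliary data can adjust an estimate, here is a sketch of the ratio estimator of a population total under simple random sampling, a basic special case of model-assisted estimation; the numbers are hypothetical.

    def ratio_estimate_total(y_sample, x_sample, X_population_total):
        """Ratio estimator of the population total of y under SRS, using the
        known population total of an auxiliary variable x.  A simple special
        case of model-assisted estimation: the auxiliary data help correct for
        an 'unfortunate' sample in which x is over- or under-represented."""
        b = sum(y_sample) / sum(x_sample)       # estimated ratio of y to x from the sample
        return b * X_population_total           # scale up by the known total of x

    # Hypothetical: x = last period's value (known for all N units), y = current value
    y_sample = [12.0, 7.5, 30.2, 4.1]
    x_sample = [11.0, 8.0, 28.0, 4.0]
    print(ratio_estimate_total(y_sample, x_sample, X_population_total=5000.0))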
Regression modeling (conditionality) is seldom used alone, but with careful data groupings (strata of sorts) can be very useful for small, highly skewed establishment surveys. There are a number of papers on that on my ResearchGate page.
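Along the same lines, here is a sketch of a purely model-based (prediction) estimate of a total under a simple ratio model; the data groupings, variance estimation, and model checking that matter in practice are omitted, and the numbers are hypothetical.

    def model_based_total(y_sample, x_sample, x_nonsample):
        """Model-based (prediction) estimator of a total under a ratio model
        y ~ b*x: observed y's for the sampled units plus predicted y's for the
        nonsampled units, using regressor data x known for every unit.
        (A sketch only; in practice this is applied within carefully chosen
        groupings, and variance estimation depends on the model's error structure.)"""
        b = sum(y_sample) / sum(x_sample)             # slope through the origin
        return sum(y_sample) + b * sum(x_nonsample)   # observed + predicted

    # Hypothetical small establishment survey: 3 sampled units, 4 not sampled
    print(model_based_total([50.0, 9.0, 4.0], [48.0, 10.0, 4.5], [30.0, 6.0, 2.0, 1.0]))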
But generally speaking, for inference from quantitative surveys, one uses probability sampling (i.e., random-selection-based, design-based sampling), or, if you have auxiliary (really regressor) data, model-assisted design-based sampling and estimation.
Here are a few of many texts you might find helpful:
Cochran, W.G. (1977), Sampling Techniques, 3rd ed., John Wiley & Sons.
Blair, E. and Blair, J. (2015), Applied Survey Sampling, Sage Publications.
Lohr, S.L. (2010), Sampling: Design and Analysis, 2nd ed., Brooks/Cole.
Särndal, C.-E., Swensson, B. and Wretman, J. (1992), Model Assisted Survey Sampling, Springer-Verlag.
Brewer, K.R.W. (2002), Combined Survey Sampling Inference: Weighing Basu's Elephants, Arnold, London, and Oxford University Press.
Cheers - Jim
PS - I am assuming continuous data, but much of the above carries over to other types of data. - Also, if you are only interested in "representativeness," randomization can help, and so can grouping/categorizing/stratifying the population, as appropriate.