Perhaps something in this paper by Elliott and Valliant may be helpful for you:
https://www.researchgate.net/publication/316867475_Inference_for_Nonprobability_Samples. There you will see this link:
DOI: 10.1214/16-STS598
That means the paper is found under Project Euclid at the following location:
http://dx.doi.org/10.1214/16-STS598
In that paper you will find at least a reference or two to using probability and nonprobability sampling together. You might investigate "pseudo-inclusion probabilities."
Selecting proper sampling procedures depends on the adopted research design and the chosen approaches to data collection and analysis. Here is a helpful textbook:
Daniel, J. (2012). Sampling essentials: Practical guidelines for making sampling choices. Sage Publications. https://methods.sagepub.com/book/sampling-essentials
Selecting sampling units from the population is a crucial task, and the choice depends on the nature of your research objective. If your research design is qualitative, you can choose nonprobability sampling methods; if it is quantitative, it is generally better to apply a probability sampling method. If your research includes both qualitative and quantitative variables, you can apply both probability and nonprobability sampling methods.
The sampling technique should be selected based on the research design and the objectives of the research. A mixed sampling technique can be adopted, depending on the research question to be answered and the research goal to be met.
The paper by Elliott and Valliant in my response above notes two ways to make the nonprobability part here useful for inference: either by using covariates to establish pseudo-inclusion probabilities, or by using prediction (regression modeling). Thus auxiliary data are needed. There is more by Richard Valliant on this.
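To make the pseudo-inclusion idea concrete, here is a minimal sketch in Python (not taken from the paper itself): a logistic "propensity" model is fit to distinguish a nonprobability sample from a design-weighted reference probability sample, and the fitted probabilities are treated as pseudo-inclusion probabilities. All data, sample sizes, and weights below are simulated assumptions, and real applications use more careful adjustments than this.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

# Illustrative (simulated) data: a nonprobability sample and a reference
# probability sample, each with one auxiliary covariate x observed in both.
x_nonprob = rng.normal(1.0, 1.0, size=500)   # volunteers skew toward higher x
x_ref = rng.normal(0.0, 1.0, size=300)       # reference probability sample
d_ref = np.full(300, 100.0)                  # known design weights of reference sample

# Stack the two samples and fit a logistic model for membership in the
# nonprobability sample, weighting reference units by their design weights
# so that they stand in for the population.
X = np.concatenate([x_nonprob, x_ref]).reshape(-1, 1)
z = np.concatenate([np.ones_like(x_nonprob), np.zeros_like(x_ref)])
w = np.concatenate([np.ones_like(x_nonprob), d_ref])

propensity_model = LogisticRegression()
propensity_model.fit(X, z, sample_weight=w)

# Treat the fitted probabilities as pseudo-inclusion probabilities for the
# nonprobability units; their inverses act like survey weights.
pseudo_pi = propensity_model.predict_proba(x_nonprob.reshape(-1, 1))[:, 1]
pseudo_weights = 1.0 / pseudo_pi

# Outcome y is observed only in the nonprobability sample (simulated here).
y_nonprob = 2.0 + 3.0 * x_nonprob + rng.normal(0, 1, 500)
est_mean = np.sum(pseudo_weights * y_nonprob) / np.sum(pseudo_weights)
print("Pseudo-weighted mean of y:", est_mean)
```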
But the two different methods of inference, probability sampling (or an approximation to it) and a model-based method, rely on two different approaches. The probability approach lets each member of the sample represent itself and other members not collected. The inverse of the sampling probability, the weight in each case, tells us how many members of the population are being represented by a member of the sample. Thus the sample must in some way represent the population.

In the model-based approach (a term I'd attribute to Richard Royall, I think), the model is relied upon to "predict" (i.e., estimate for a random variable) the values for the members of the population not in the sample. The relationship modeled between the sampled y-values and the corresponding independent variable(s) is expected to hold for the y-values not collected. In more complex cases I think this is more problematic; in simple cases, notably when a ratio model is appropriate, it may be more reliable.

One can use "balanced sampling" to make the sample more "representative" when using a model. The model adds a kind of representativeness in that the auxiliary/regressor/predictor data are available for the entire population, or for each subpopulation or stratum, and this can help one know the population better, especially if a ratio model is appropriate. A cutoff or quasi-cutoff (multiple-attribute) sample may introduce a quite small amount of bias while removing a very large amount of variance. See the discussion on balanced versus cutoff sampling in https://www.researchgate.net/publication/261947825_Projected_Variance_for_the_Model-based_Classical_Ratio_Estimator_Estimating_Sample_Size_Requirements.
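As a rough illustration of the prediction (model-based) approach under a ratio model, here is a short sketch assuming the auxiliary x is known for every unit in the population; the data are simulated, and variance estimation and the bias issues noted above are ignored.

```python
import numpy as np

rng = np.random.default_rng(7)

# Illustrative population: auxiliary x known for all N units, y observed
# only for the sampled units.
N = 10_000
x = rng.gamma(shape=2.0, scale=50.0, size=N)            # skewed auxiliary variable
y = 1.8 * x + rng.normal(0, 5.0 * np.sqrt(x), size=N)   # roughly ratio-model behavior

sample_idx = rng.choice(N, size=200, replace=False)
in_sample = np.zeros(N, dtype=bool)
in_sample[sample_idx] = True

# Ratio model: estimate beta from the sample, then "predict" y for the
# nonsampled units from their known x-values.
beta_hat = y[in_sample].sum() / x[in_sample].sum()
total_hat = y[in_sample].sum() + beta_hat * x[~in_sample].sum()

print("Estimated population total:", total_hat)
print("Actual population total:   ", y.sum())  # known here only because the data are simulated
```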
Also see "Application of Efficient Sampling with Prediction for Skewed Data," JSM 2022:
Auxiliary data may also be incorporated into the survey weights, producing "calibrated" weights. Thus there has been work done to mix the use of auxiliary data, and even modeling, with probability sampling (a small sketch follows the references below). See
Särndal, C.-E., Swensson, B., and Wretman, J. (1992), Model Assisted Survey Sampling, Springer-Verlag, and
Brewer, K.R.W. (2002), Combined Survey Sampling Inference: Weighing Basu's Elephants, Arnold: London and Oxford University Press.
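As a small, simplified sketch of the calibration idea mentioned above (simple ratio calibration to a single assumed known auxiliary total, not the full GREG/raking machinery treated in those books):

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative setting: a probability sample with base (design) weights, an
# auxiliary variable x measured on the sample, and a known population total of x.
n = 400
base_weights = np.full(n, 25.0)           # e.g., equal-probability design, N = 10,000
x_sample = rng.gamma(2.0, 50.0, size=n)   # auxiliary variable observed on the sample
X_pop_total = 1_000_000.0                 # assumed known population total of x

# Simple ratio calibration: scale the weights so the weighted sample total
# of x exactly matches the known population total.
g = X_pop_total / np.sum(base_weights * x_sample)
calibrated_weights = base_weights * g

y_sample = 3.0 + 2.0 * x_sample + rng.normal(0, 10.0, size=n)  # simulated outcome
print("Base-weighted estimate of total y: ", np.sum(base_weights * y_sample))
print("Calibrated estimate of total y:    ", np.sum(calibrated_weights * y_sample))
print("Check: calibrated total of x =     ", np.sum(calibrated_weights * x_sample))
```

The last line simply verifies that the calibrated weights reproduce the assumed auxiliary benchmark exactly, which is the defining property of calibration weighting.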