No. All of inferential statistics depends on probability models. That is why probability samples are so important: you can't really trust an inference made without a probability sample.
The Literary Digest's prediction of the US presidential election in the 1930s is a lasting reminder of what can happen with large non-probability samples.
Size is not a quality criterion for a sample in itself; a large sample is only good if accompanied by other techniques that help ensure the sample is representative.
Large sample sizes can produce statistically significant results purely by virtue of their size, while small sample sizes often lead to invalid conclusions. That's why it's best to do a power analysis before the main experiment to determine an appropriate sample size, which may turn out to be 30, 90, or even 1000! I hope this helps :-)
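To illustrate, here is a minimal sketch of such a power calculation in Python using statsmodels; the effect size, alpha, and power targets are assumptions for the example, not values from this thread:

```python
# Sketch: a priori power analysis for a two-sample t-test.
# Effect size, alpha, and power values below are illustrative assumptions.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(
    effect_size=0.5,   # assumed "medium" standardized difference (Cohen's d)
    alpha=0.05,        # two-sided significance level
    power=0.80,        # desired probability of detecting the effect
)
print(f"Required sample size per group: {n_per_group:.0f}")  # roughly 64
```

With a smaller assumed effect size the required n grows quickly, which is the point of doing the calculation before, not after, the experiment.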
In general, for what most people on ResearchGate would likely encounter, I agree with David, but in areas where one has good auxiliary (regressor) data on the population, there are exceptions, often with highly skewed data. See https://www.researchgate.net/publication/303496276_When_and_How_to_Use_Cutoff_Sampling_with_Prediction.
I implemented regression-based methodology with proven results; it has been used that way for thousands of tables of official energy statistics since about 1990.
An interesting quick look at probability-of-selection-based methods, prediction-based methods, and a mixture of the two, is given in Ken Brewer's Waksberg Award article: Brewer, K.R.W. (2014), “Three controversies in the history of survey sampling,” Survey Methodology (December 2013/January 2014), Vol. 39, No. 2, pp. 249-262, Statistics Canada, Catalogue No. 12-001-X.
Chambers, R. and Clark, R. (2012), An Introduction to Model-Based Survey Sampling with Applications, Oxford Statistical Science Series.
Valliant, R., Dorfman, A.H., and Royall, R.M. (2000), Finite Population Sampling and Inference: A Prediction Approach, Wiley Series in Probability and Statistics.
So, in general, you will need randomization, but there are exceptions when modeling with regressor data. I doubt that this is the case for you.
At any rate, stratification is hugely important, regardless.
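For anyone new to this, here is a minimal sketch of stratified random sampling with proportional allocation in Python using pandas; the population frame, stratum column, and allocation rule are assumptions for the example:

```python
# Sketch: proportional-allocation stratified random sampling from a
# population frame. The DataFrame and column names are hypothetical.
import pandas as pd

def stratified_sample(frame: pd.DataFrame, stratum_col: str, n_total: int,
                      seed: int = 0) -> pd.DataFrame:
    """Draw a simple random sample within each stratum, with sample sizes
    proportional to the stratum sizes in the population frame."""
    shares = frame[stratum_col].value_counts(normalize=True)
    parts = []
    for stratum, share in shares.items():
        n_h = max(1, round(n_total * share))        # at least one unit per stratum
        members = frame[frame[stratum_col] == stratum]
        parts.append(members.sample(n=min(n_h, len(members)), random_state=seed))
    return pd.concat(parts)
```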
In general, for applications I've seen on ResearchGate, even a very large sample from a relatively small population should still be randomized, with stratified random sampling often being best. But if the sample size n is close to the population size N, then you might be able to treat this as a census with nonresponse. In that case, to reduce bias from your lack of randomization (without a model either, I presume), you could look into "response propensity" groups - basically a kind of poststratification to turn "nonignorable nonresponse" into somewhat more "ignorable nonresponse." (Ignorable nonresponse does not mean you ignore it; it just means you can treat the data more as if they came from a random sample when they really didn't.)
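A minimal sketch of the response-propensity-group idea, assuming you have at least one auxiliary grouping variable known for the whole frame and a flag for who responded (all names below are hypothetical):

```python
# Sketch: crude response-propensity adjustment. Units are grouped by an
# auxiliary variable known for the entire frame; respondents in each group
# are weighted up by the inverse of that group's observed response rate.
import pandas as pd

def propensity_group_weights(frame: pd.DataFrame, group_col: str,
                             responded_col: str) -> pd.Series:
    """Return a weight per unit: 1 / (group response rate) for respondents,
    0 for nonrespondents. responded_col must be boolean, and every group
    needs at least one respondent to avoid division by zero."""
    response_rate = frame.groupby(group_col)[responded_col].mean()
    weights = 1.0 / frame[group_col].map(response_rate)
    return weights.where(frame[responded_col], 0.0)
```

This is only the simplest grouped version; it treats nonresponse as ignorable within groups, which is exactly the assumption being made above.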
So, there may be some help from a larger sample size, under the conditions above, but if you do not have a randomized sample design (and no model either), then you generally will still have a problem with biased results for which you cannot even estimate the uncertainty in a 'quantitative' study. You would just know that your results may not be very good, with little idea of how bad they are ... until possibly later, from other evidence, when it is too late.
Random sampling in the social sciences is seldom possible in Australia for ethical and economic reasons. Captive populations, such as school students, must still provide evidence of informed consent. In other samples, non-response and refusal rates can be high. Some commercial polling organisations employ panels composed so as to contain the relevant proportions of population characteristics, and these organisations have often shown that relatively high levels of representativeness can be achieved by such methods. So many researchers have little choice but to use non-random samples.
The realistic issue for most researchers is how to work with non-random samples. Having a large enough sample to build in a variety of multivariate internal validity checks seems to me to be one way of doing this. When I was active in research, I often used stepwise regression techniques to estimate upper and lower limits of the possible covariance of each variable. In conjunction, various forms of factor analysis were also a go-to method in those days.
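For readers unfamiliar with the mechanics, here is a minimal forward-stepwise sketch in Python with statsmodels; the data frame, outcome name, candidate variables, and p-value threshold are assumptions, and stepwise selection has well-known pitfalls, so this is shown only as the kind of internal-consistency check described above:

```python
# Sketch: forward stepwise OLS selection by p-value. Purely illustrative;
# the DataFrame df, outcome column, and candidate columns are hypothetical.
import statsmodels.api as sm

def forward_stepwise(df, outcome, candidates, p_enter=0.05):
    """Add, one at a time, the candidate regressor with the smallest p-value,
    stopping when no remaining candidate falls below p_enter."""
    selected = []
    remaining = list(candidates)
    while remaining:
        pvals = {}
        for var in remaining:
            X = sm.add_constant(df[selected + [var]])
            pvals[var] = sm.OLS(df[outcome], X).fit().pvalues[var]
        best = min(pvals, key=pvals.get)
        if pvals[best] >= p_enter:
            break
        selected.append(best)
        remaining.remove(best)
    return selected
```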
Then again, much depends on the purpose of the study (yes, there is a real world out there where results of research may be used). For instance, double-blind medical studies may require more stringent standards of inference if the issue is whether or not a new drug should be marketed whereas a study designed to ascertain whether or not a social problem is widespread may require a different approach. A layered study, where the first stage is required to provide very general guidance for the design of a later stage, rather than any immediate scientific generalisations, may not require inferences of the same kind.
I do not think it is very helpful to consider this issue of random sampling in isolation from other aspects of the theoretical, social and political horizon of a study, or from the position of the study in the history of cognate studies in the field and related disciplines. An area where few prior studies have been done requires a different research strategy from one that is well ploughed. For instance, it is probably unwise to generate hypotheses prior to knowing quite a bit about the relevant characteristics of a target population. Descriptive and exploratory techniques not only winnow down the potentially infinite universe of purely speculative hypotheses, but also inform future communication with that population.
If the phenomena of interest are expected to be confined to a small corner of the (natural) population, samples of the wider population are not so relevant. The very definition of the population is itself problematic.
Sampling is a set of actual communicative actions and interactions between people, at least in the social sciences. The very presence of 'a study' has already changed the reality of what is being studied. I am suspicious of discussions of these issues that take all that for granted, that fail to look at the relationships between statistical inference and methods of so-called measurement, and between validity and reliability. The history of the interactions between, on the one hand, research design, patterns of measurement error, respondents' assumptions about the demand characteristics of participating in a study, and other kinds of shaping that occur in any study, and, on the other, the decision procedures employed, such as the ubiquitous null-hypothesis significance test, has been fraught with problems of a systemic kind. Why do we expect social science research to leap suddenly to highly relevant conclusions after a single study when, say, medical research, for all the journalists' beloved 'breakthroughs' and 'game changers', garners its progress only slowly and incrementally?
Social researchers should, perhaps, ponder why it is that they so often seem to need to break entirely new ground and invent entirely new concepts, and so seldom build systematically on the work of previous researchers. Why is there so little 'scientific progress'? Why have the social sciences not yet produced a Newton or an Einstein? Perhaps it is because the phenomena they study are labile and historically and culturally specific. Perhaps social science method should more deeply reflect this.
Of course, you can use internal variation in any sample to check the apparent closeness of sample statistics to any known population characteristics, especially those characteristics that cognate studies have shown to be salient. You can also randomly select a portion of respondents for interview or for collection of qualitative data, in order to check on possible sources of 'bias' within your wider, non-random sample, and also to gauge the relative salience of particular parts of the information you are gathering, the nature of the frameworks of meaning that respondents have employed in answering, and so on.
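A minimal sketch of the first kind of check, assuming you know the population distribution of one categorical characteristic; the categories and numbers are invented for the example:

```python
# Sketch: chi-square goodness-of-fit comparison of sample composition
# against known population shares for one categorical variable.
# The counts and shares below are illustrative assumptions.
from scipy.stats import chisquare

sample_counts = [120, 260, 220]          # e.g. counts per age band in the sample
population_shares = [0.30, 0.45, 0.25]   # known shares in the target population

n = sum(sample_counts)
expected = [share * n for share in population_shares]
stat, p_value = chisquare(f_obs=sample_counts, f_exp=expected)
print(f"chi-square = {stat:.1f}, p = {p_value:.3f}")  # a small p flags imbalance
```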
What you can't do is proceed to analyse your data as if they came pure and unsullied from a world other than the one peopled by humans, or from perfect random samples when these are not available to you, or as if sampling, however random and complete, eliminated error, or even the greatest source of error in your particular study. Only a subset of statisticians can do that - those who write statistical textbooks!
The sample size can be suitable for generalization if the sample represents the population to which you intend to generalize. The context of the data is significant as well.
You need regressor data for a model-based approach. To introduce yourself to this, you could read the following paper by Richard Royall:
"The model based (prediction) approach to finite population sampling theory," Institute of Mathematical Statistics Lecture Notes - Monograph Series, Volume 17, 1992, 225-240, Richard M. Royall,
You might also want to look through its reference list for anything you can legally obtain.
PS - The problem with trying to rely on a large sample size, based on its size alone, is that you may have covered all of, say, one or two strata, and none of another. If you have regressor/auxiliary data on the entire population, then such gaps would be more apparent and you could make a more informed decision as to what to do.
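A minimal sketch of that kind of coverage check, assuming a population frame with a stratum label and a boolean flag for which units ended up in your sample (all names hypothetical):

```python
# Sketch: compare sample coverage against the population frame, stratum by
# stratum, so that empty or badly under-covered strata become visible.
import pandas as pd

def stratum_coverage(frame: pd.DataFrame, stratum_col: str,
                     in_sample_col: str) -> pd.DataFrame:
    """Return population count, sample count, and sampling fraction per stratum,
    sorted so the worst-covered strata appear first."""
    report = frame.groupby(stratum_col).agg(
        population=(in_sample_col, "size"),   # all units in the stratum
        in_sample=(in_sample_col, "sum"),     # sampled units (True flags)
    )
    report["fraction"] = report["in_sample"] / report["population"]
    return report.sort_values("fraction")
```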