Many research studies in the social sciences are based on convenience sampling yet use parametric tests for inference, whereas parametric techniques are prescribed only for probability sampling. Can someone explain how this is justified?
The researcher can choose to perform a non-parametric test if the data do not meet all the necessary assumptions for a parametric test, and this can happen even with a random sampling approach. Besides, a parametric test has no assumption of random sampling, but rather of a normal distribution. (But yes, the scales have to be interval or ratio.)
As for the sampling technique, it is practically impossible to achieve a truly random sample, so some compromise will always be needed. And I agree that much depends on the research question and the purpose of the study. For example, in psychology a theory-generating or theory-testing study does not have to rely on random sampling. A rough sketch of the "check assumptions, then choose the test" workflow is given below.
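To make that workflow concrete, here is a minimal sketch in Python using SciPy. The two groups are hypothetical data invented for illustration, not from any study mentioned in this thread: each group is checked for normality with a Shapiro-Wilk test, and the script then runs an independent-samples t-test if normality is not rejected, or a Mann-Whitney U test otherwise.

```python
# Minimal sketch: choose a parametric or non-parametric two-sample test
# based on a normality check. Hypothetical data for illustration only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
group_a = rng.normal(loc=50, scale=10, size=30)    # roughly normal scores
group_b = rng.exponential(scale=10, size=30) + 40  # skewed scores

# Shapiro-Wilk tests the null hypothesis that the data come from a normal distribution
p_a = stats.shapiro(group_a).pvalue
p_b = stats.shapiro(group_b).pvalue

if p_a > 0.05 and p_b > 0.05:
    # Normality not rejected in either group: use the parametric t-test
    result = stats.ttest_ind(group_a, group_b)
    print("Independent-samples t-test:", result)
else:
    # Normality rejected: fall back to the non-parametric Mann-Whitney U test
    result = stats.mannwhitneyu(group_a, group_b)
    print("Mann-Whitney U test:", result)
```

Note that in either branch nothing depends on how the sample was drawn; only the distributional assumption and the scale of measurement drive the choice of test.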
Parametric tests assume the metric nature of the data (interval or ratio scale questions) and normality of the distribution; this has nothing to do with the sampling type. Researchers tend to use non-probability sampling because it is very difficult to pin down the size of the population, or a sampling frame may not be available.
Hmm... I do not have a single example from the whole (bio-medical) literature I have read in which a truly random sample is analyzed. At best, the samples are "convenience samples". Often, the population is so undefined and hypothetical that a random selection of the elements to be sampled is not even possible (e.g. cell culture experiments, where the population is typically thought to be all similar cultures of cells of the same type; or animal experiments, where the population is thought to be all animals of the same species living under similar conditions).
A random sample requires that every individual/specimen in the population has an equal probability of entering the sample. But in fact, for the major part of the "population" of cell culture plates or animals, this probability is zero.
For me the issue is not samples and populations but the inherent uncertainty in observed data; indeed, in the post below I have argued the need for inference even when there is a full census:
" The observed count should be conceived as an outcome of a stochastic process which could produce different results under the same circumstances. It is this underlying ‘process’ that is of interest and the actual observed values give only an imprecise estimate of this. The aim of the analysis therefore is not the descriptive statistic – the observed relative rate – but rather the parameter of the underlying rate in relation to the underlying uncertainty "
I think the key connection is between random samples and generalizability to a known population. Any time you can meet that standard, your data will be more valuable.
Aside from that, I agree with the statements above that random sampling is not necessary for parametric statistics.
Hello all, I have gone through a YouTube video (link added below). It is clear that we select the type of statistics only on the basis of the normality of the data, and definitely not on the type of sampling. It is true that probability sampling supposedly yields normal data, however this is not the sole qualification for using parametric statistics.