My short answer would be a question in return: Why don't you check?
Here's a slightly longer one: I don't know about your field, but in mine it's basically impossible to tell what the data-generating mechanism looks like in, say, nine out of ten cases. The rest are simulation studies, and even there you could argue the data are only pseudorandom ;-)
So if things are similar in your branch of research, you might want to check the assumptions of whatever statistical model you're trying to fit one by one (e.g., with Tukey's techniques for exploratory data analysis, or any other way that helps you make sense of your data) and see where they hold and where they break down; a rough sketch of such checks follows.
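A minimal sketch of that kind of Tukey-style look at the data, assuming your sample is a 1-D numeric array `x` (the variable name and the stand-in data are mine, not from your question):

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

rng = np.random.default_rng(0)
x = rng.lognormal(mean=0.0, sigma=0.8, size=200)  # stand-in data; replace with yours

fig, axes = plt.subplots(1, 3, figsize=(12, 3))
axes[0].hist(x, bins="auto")                      # shape: skew, multimodality, outliers
axes[0].set_title("Histogram")
axes[1].boxplot(x, vert=False)                    # five-number summary, Tukey's fences
axes[1].set_title("Box plot")
stats.probplot(x, dist="norm", plot=axes[2])      # departures from normality
axes[2].set_title("Normal Q-Q")
plt.tight_layout()
plt.show()
```

If the histogram and Q-Q plot look like the lognormal above, a normality-based test is already on shaky ground before you even get to the sampling question.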
For such hypothesis "tests" you want the relevant features of the sampling distribution to be unbiased relative to the population: for example, the mean of all possible sample means should equal the population mean. Random sampling is what usually guarantees that. If, say, your nonprobability sample only ever picked from the largest members of the population, that property breaks down, and your nonprobability sample could be substantially misleading; the simulation below illustrates the point.
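A small sketch of the bias argument (population, sample sizes, and the "largest members only" rule are all invented for illustration): draw repeated simple random samples and repeated "largest members only" samples, then compare the mean of the sample means to the population mean.

```python
import numpy as np

rng = np.random.default_rng(42)
population = rng.gamma(shape=2.0, scale=10.0, size=100_000)
n, reps = 50, 5_000

# (a) simple random samples
srs_means = [rng.choice(population, size=n, replace=False).mean() for _ in range(reps)]

# (b) nonprobability samples drawn only from the largest members
largest = np.sort(population)[-10_000:]
biased_means = [rng.choice(largest, size=n, replace=False).mean() for _ in range(reps)]

print(f"population mean:             {population.mean():.2f}")
print(f"mean of SRS sample means:    {np.mean(srs_means):.2f}")    # ~ population mean
print(f"mean of biased sample means: {np.mean(biased_means):.2f}")  # far above it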
You also need to consider effect size for your "tests," and with it sample size: together they determine how much power you have to detect anything in the first place (see the power sketch below).
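A rough sketch of that trade-off, assuming a two-sample t-test and a true standardized effect of d = 0.5 (both numbers are illustrative): simulate how often the test detects the effect at alpha = 0.05 for different sample sizes.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
d, alpha, reps = 0.5, 0.05, 2_000

for n in (10, 30, 64, 100):                      # sample size per group
    rejections = 0
    for _ in range(reps):
        a = rng.normal(0.0, 1.0, n)
        b = rng.normal(d, 1.0, n)                # second group shifted by the effect size
        if stats.ttest_ind(a, b).pvalue < alpha:
            rejections += 1
    print(f"n = {n:3d} per group -> estimated power ~ {rejections / reps:.2f}")
```

With d = 0.5 you need roughly 64 observations per group to reach the conventional 80% power; smaller effects push that number up quickly.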
Data from non-probability samples may not fit the assumptions of parametric tests such as the t-test: the sample is often biased and unrepresentative of the population, which violates the random-sampling assumption, and the data may be far from normal, another important assumption of many parametric procedures. So consider both the nature of the data and the sampling method when choosing a statistical test.
If the data are non-normal or the sample is non-random, non-parametric tests such as the Mann-Whitney U test or the Wilcoxon signed-rank test may be more appropriate. They make far weaker distributional assumptions and are therefore more robust to such violations, but when the parametric assumptions actually hold they can have somewhat lower power, i.e. they are less likely to detect a true effect. So weigh that trade-off between power and assumptions when selecting a test.
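A minimal usage sketch with SciPy, assuming two independent groups `a` and `b` and a paired pair `before`/`after` (all names and data are hypothetical):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
a = rng.lognormal(0.0, 0.7, 40)          # skewed, clearly non-normal data
b = rng.lognormal(0.3, 0.7, 40)

print(stats.ttest_ind(a, b))             # parametric: leans on (approximate) normality
print(stats.mannwhitneyu(a, b))          # rank-based alternative for two independent groups

before = rng.lognormal(0.0, 0.5, 30)
after = before * rng.lognormal(0.1, 0.2, 30)
print(stats.wilcoxon(before, after))     # rank-based alternative for paired data
```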
It still matters if, say, your data come from only one segment of the population: nonparametric tests don't 'fix' selection bias. A rank test discards some distributional detail, but the ordering of the observations still carries the bias, as the last sketch below shows.
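A small sketch of that point (the setup is invented for illustration): two populations with identical distributions, but group B is sampled only from its largest members. The Mann-Whitney test still "finds" a difference almost every time, because the bias lives in the ranks too.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
pop_a = rng.normal(0.0, 1.0, 50_000)
pop_b = rng.normal(0.0, 1.0, 50_000)          # same distribution as pop_a

n, reps, alpha = 40, 1_000, 0.05
top_b = np.sort(pop_b)[-5_000:]               # only one segment of population B gets sampled
false_positives = 0
for _ in range(reps):
    a = rng.choice(pop_a, size=n, replace=False)
    b = rng.choice(top_b, size=n, replace=False)
    if stats.mannwhitneyu(a, b).pvalue < alpha:
        false_positives += 1

print(f"rejection rate with biased sampling: {false_positives / reps:.2f}")  # ~1.0, not the nominal 0.05
```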