Dear Colleagues,
I have a question about the relationship between sample size and the likelihood of obtaining significant results in structural equation modeling (SEM). If it is generally true that larger samples increase statistical power and thus the chance of detecting significant effects, what can researchers do to ensure that their hypothesis-testing results remain reliable and meaningful?
Suppose my SEM study has a sample size of 5,000. Does this mean that the p values for the hypotheses I test are very likely to be significant simply because of the large sample? Are there effective measures we can take to address this issue?
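To make my concern concrete, here is a toy simulation (plain Python with NumPy/SciPy, not an SEM and not my actual data; the effect size and seed are arbitrary) showing how a practically negligible effect tends to come out "significant" once n is around 5,000:

```python
# Toy illustration: with n = 5000, even a trivially small standardized
# effect is usually flagged as "significant" at p < .05.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 5000
true_beta = 0.05                              # practically negligible effect

x = rng.standard_normal(n)
y = true_beta * x + rng.standard_normal(n)    # outcome with almost no signal

r, p = stats.pearsonr(x, y)
print(f"r = {r:.3f}, r^2 = {r**2:.4f}, p = {p:.4g}")
# On most seeds: r is around 0.05, r^2 is well below 0.01, yet p < .05.
```

The point, as I understand it, is that the p value mostly reflects the sample size here, while the effect itself explains almost none of the variance.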
I am considering the following three steps but am unsure whether they are effective in practice (a second toy sketch follows the list):
1) reporting precise effect sizes (e.g., standardized path coefficients and R² values) alongside the p values,
2) lowering the significance threshold from p < 0.05 to p < 0.001,
3) testing the robustness of the structural model across different subgroups.
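For what it is worth, here is the kind of check I have in mind, sketched on a single path with ordinary regression rather than a full SEM (the data, grouping variable, and effect size are made up for illustration; in practice I would do this within the SEM itself, e.g., via a multi-group model):

```python
# Rough sketch of the three steps on a single path, using ordinary regression
# as a stand-in for a structural path (hypothetical data, not my actual model).
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n = 5000
group = rng.integers(0, 2, size=n)            # hypothetical subgroup indicator
x = rng.standard_normal(n)
y = 0.05 * x + rng.standard_normal(n)         # same weak effect in both groups

# Step 1: report the effect size alongside p (here: standardized slope and R^2).
full = stats.linregress(x, y)
print(f"full sample: beta = {full.slope:.3f}, "
      f"R^2 = {full.rvalue**2:.4f}, p = {full.pvalue:.4g}")

# Step 2: judge the result against a stricter threshold as well as .05.
print("significant at .05:", full.pvalue < 0.05,
      "| at .001:", full.pvalue < 0.001)

# Step 3: refit within subgroups and check whether the estimate is stable.
for g in (0, 1):
    sub = stats.linregress(x[group == g], y[group == g])
    print(f"group {g}: beta = {sub.slope:.3f}, p = {sub.pvalue:.4g}")
```

My worry is that steps 2 and 3 may only partially help: a stricter alpha can still be cleared by a tiny effect when n is large enough, and subgroup estimates mainly speak to stability rather than practical importance.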
Do you have any thoughts or recommendations? Please also feel free to point me to any literature you find useful!
Thank you!
Best,
Leon