In my hypothesis-driven SEM, I have 28 variables and 58 edges in total. Given this number of tests, would you suggest reporting adjusted p-values (with FDR, for example) or raw p-values?
When dealing with a large number of statistical tests, as in your hypothesis-driven structural equation model (SEM) with 28 variables and 58 edges, it's important to consider the issue of multiple testing and how to control for the increased risk of false positives. Reporting adjusted p-values, such as those corrected using the False Discovery Rate (FDR), is generally a recommended approach in such situations.
Here are some reasons why reporting adjusted p-values, specifically using FDR correction, is beneficial:
Control for Multiple Testing: With a large number of tests, the probability of obtaining statistically significant results purely by chance increases. FDR correction helps control the overall rate of false positives while still allowing for some discoveries.
Enhanced Reliability: Reporting adjusted p-values helps ensure that the results you present are more reliable and have a lower likelihood of being spurious.
Transparency: Reporting adjusted p-values demonstrates your awareness of the multiple testing issue and your commitment to providing more accurate and trustworthy results.
Comparison Across Studies: Reporting adjusted p-values makes it easier to compare your findings with other studies that might have used similar correction methods.
Publication Standards: Many journals and research communities encourage or require the use of multiple testing corrections in reporting results, especially when dealing with a large number of tests.
Here's how you could proceed:
Conduct your SEM analysis and obtain the raw p-values for your hypotheses.
Apply the FDR correction (e.g., Benjamini-Hochberg procedure) to your raw p-values to obtain adjusted p-values.
Report both the raw and adjusted p-values in your results section.
Indicate which p-values are adjusted and which are not.
By reporting both raw and adjusted p-values, you provide readers with a complete picture of your findings while acknowledging the potential influence of multiple testing.
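As a concrete illustration of steps 2 and 3, here is a minimal Python sketch. The raw p-values are hypothetical stand-ins for the p-values of your 58 edges, and the Benjamini-Hochberg adjustment is implemented by hand so each step of the procedure is visible; in practice you would typically use a library routine (e.g. `multipletests` in statsmodels or `p.adjust` in R).

```python
# Benjamini-Hochberg FDR adjustment, implemented step by step.
# The raw p-values below are hypothetical, standing in for the
# p-values obtained for each edge in the SEM.

def benjamini_hochberg(pvals):
    """Return BH-adjusted p-values in the original order."""
    m = len(pvals)
    # Sort p-values ascending, remembering original positions.
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted = [0.0] * m
    prev = 1.0
    # Walk from the largest p-value down, scaling each by m/rank
    # and enforcing monotonicity of the adjusted values.
    for rank in range(m, 0, -1):
        i = order[rank - 1]
        val = min(prev, pvals[i] * m / rank)
        adjusted[i] = val
        prev = val
    return adjusted

raw = [0.001, 0.008, 0.020, 0.035, 0.041, 0.300]
adj = benjamini_hochberg(raw)
for r, a in zip(raw, adj):
    # Report both, as suggested above.
    print(f"raw = {r:.3f}  adjusted = {a:.4f}")
```

Note how several edges that look significant on raw p-values (e.g. 0.041) end up with adjusted p-values close to, or just under, the 0.05 threshold, which is exactly the kind of nuance reporting both values conveys.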
Remember that while FDR is a widely used correction method, there are other methods as well, such as the Bonferroni correction. The choice of correction method might depend on the specific context of your research and the level of stringency you require.
Ultimately, the goal is to strike a balance between controlling for false positives and not missing potentially important findings. Consulting with a statistician or a domain expert can help ensure that you choose an appropriate approach for adjusting p-values in your SEM analysis.
When conducting hypothesis-driven structural equation modeling (SEM) with a substantial number of variables and edges, it's important to address the issue of multiple comparisons and keep the rate of false positive findings under control. Reporting adjusted p-values, such as those obtained from a False Discovery Rate (FDR) correction, can be a prudent approach in this situation.
Here's why adjusted p-values are recommended:
Multiple Comparisons: When you're testing a large number of hypotheses (in your case, 58 edges), the probability of observing at least one significant result by chance increases. If you use a standard significance level (e.g., α = 0.05) for each test, the overall probability of a Type I error (false positive) across all tests becomes much higher. Adjusting p-values helps control this overall error rate.
False Discovery Rate (FDR): FDR is a method for controlling the proportion of false positives among all statistically significant results. It's particularly useful when you have many tests and want to strike a balance between controlling the overall error rate and identifying potentially important relationships.
By using FDR-adjusted p-values, you are mitigating the risk of making incorrect conclusions due to the increased chance of observing significant results purely by chance when conducting numerous tests.
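To make the multiple-comparisons point concrete: under the simplifying (and for SEM edges admittedly unrealistic) assumption that the 58 tests were independent, each run at α = 0.05, the probability of at least one false positive would be 1 − (1 − 0.05)^58, which is close to 0.95:

```python
# Familywise false-positive probability under the simplifying
# assumption of 58 independent tests, each at alpha = 0.05.
alpha, m = 0.05, 58
fwer = 1 - (1 - alpha) ** m
print(f"P(at least one false positive) ~ {fwer:.3f}")
```

In other words, with this many tests and no correction, observing at least one spurious "significant" edge is almost guaranteed, which is what motivates adjusting the p-values.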
Here's how you might approach this:
Conduct Your Hypothesis-Driven SEM: Perform your SEM analysis as planned, obtaining the raw p-values associated with each edge.
Adjust P-Values: Use an appropriate method, such as the Benjamini-Hochberg procedure, to adjust the raw p-values to control the FDR. This will give you a set of adjusted p-values.
Report Adjusted P-Values: When presenting your results, report the adjusted p-values rather than the raw p-values. This provides a more accurate picture of the significance of each relationship while considering the multiple comparisons issue.
Remember that while adjusted p-values are important for controlling the false discovery rate, they are not a definitive solution to the multiple comparisons problem. Careful consideration of the theoretical framework, prior knowledge, and replication in independent samples is also crucial to drawing meaningful conclusions from your SEM analysis.
Lastly, if you're unsure about the specific procedures to follow or the choice of adjustments, consulting with a statistician or an expert in your field can provide valuable guidance tailored to your research context.
It depends on what error rate you want to use for inference. There is nothing wrong with performing inference based on per-comparison error rates using raw p-values. There is also nothing wrong with performing inference based on a family-wise error rate. A rejection under family-wise error control is generally seen as stronger evidence against the null, but this approach to testing can also lead to lower power.
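The trade-off between these error rates can be illustrated with a small numeric sketch (the p-values below are hypothetical). Bonferroni controls the family-wise error rate and is the most conservative; the Benjamini-Hochberg step-up rule controls the FDR and sits in between; testing each raw p-value at α = 0.05 rejects the most, at the cost of a high per-family false-positive risk.

```python
# Hypothetical raw p-values, compared under three inference rules.
pvals = [0.001, 0.004, 0.012, 0.020, 0.030, 0.045, 0.200, 0.600]
alpha, m = 0.05, len(pvals)

# Per-comparison: test each p-value at alpha.
raw_hits = sum(p <= alpha for p in pvals)

# Family-wise (Bonferroni): test each p-value at alpha / m.
bonf_hits = sum(p <= alpha / m for p in pvals)

# FDR (Benjamini-Hochberg step-up): reject the k smallest p-values,
# where k is the largest rank r with p_(r) <= (r / m) * alpha.
ranked = sorted(pvals)
bh_hits = max((r for r in range(1, m + 1) if ranked[r - 1] <= r * alpha / m),
              default=0)

print(f"raw: {raw_hits}  BH: {bh_hits}  Bonferroni: {bonf_hits}")
```

Here the three rules reject 6, 5, and 2 hypotheses respectively, showing how stricter error-rate control trades away power.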