As we know, the test of a hypothesis depends on alpha (0.05, 0.01, or 0.10) and the value of the test statistic. However, sometimes we get a p-value of 0.049 or 0.051.
A cut-off is a cut-off. If you don't like 0.05, use another value. Or don't use a cut-off at all (take p at face value and interpret it in context*). It only matters because reviewers might object. It also depends on whether this is a central, isolated result or part of a larger set of analyses.
I think what is more critical than whether p is a bit above or below 0.05 is whether the experimental design is good, whether an appropriate model is used, and whether a sensible hypothesis is tested. It's also often more relevant to see whether the value of the statistic being tested is in a range that makes sense (which might also be addressed by choosing a more sensible test hypothesis than the typical "the value equals zero" hypothesis).
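To make the "a bit above or below 0.05" point concrete, here is a small sketch (not from the thread; the specific z-values are chosen for illustration) showing that two nearly identical test statistics can land on opposite sides of the cut-off, even though they carry essentially the same evidence:

```python
import math

def p_two_sided(z):
    """Two-sided p-value for a standard-normal test statistic z."""
    phi = 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0)))  # standard normal CDF
    return 2.0 * (1.0 - phi)

# Two test statistics differing by only 0.02 straddle the 0.05 cut-off:
print(round(p_two_sided(1.95), 3))  # ~0.051 -> "not significant"
print(round(p_two_sided(1.97), 3))  # ~0.049 -> "significant"
```

The underlying data for z = 1.95 and z = 1.97 would be practically indistinguishable, which is exactly why a dichotomous reading of the threshold adds little over reporting p itself.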
---
*that's not easy: it's quite a bit of work and requires a thorough understanding. This is obviously out of reach for many researchers, and usually not needed (in my experience).
Ahmad Alshallawi, I agree with the response of Dr. Jochen Wilhelm on what really matters, rather than pegging the relevance of the study on whether or not its findings clear a particular arbitrary p-value threshold.
Assuming that the alpha and beta used, the hypotheses, the experimental design, and the study execution are all sound, and you are faced with this concern: for technical purposes, consider stating the cut-off p-value to more decimal places than usual. For instance, instead of stating that the p-value cut-off is
Busari Yusuf Technically, one should never accept the null hypothesis. At the most, the data are insufficient to reject the null.
The first thing I would do is consider my study's power. If my study is underpowered, I'd be less confident in the low p-value, since it's more likely to be a product of error. If my study has adequate power, it suggests that the research hypothesis is worth investigating further. Perhaps there are features of my design, measures, etc., that could be improved to better differentiate the populations (i.e., increase the effect size).
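The power check described above can be sketched with a quick Monte-Carlo simulation (a stdlib-only illustration, not part of the thread; the effect sizes, sample size, and two-sample z-test are assumed parameters for the example):

```python
import random
import statistics

def estimate_power(delta, sigma=1.0, n=30, alpha=0.05, sims=2000, seed=1):
    """Monte-Carlo power of a two-sided, two-sample z-test with known
    sigma, for a true mean difference `delta` (hypothetical setup)."""
    rng = random.Random(seed)
    z_crit = statistics.NormalDist().inv_cdf(1 - alpha / 2)
    se = sigma * (2.0 / n) ** 0.5  # standard error of the mean difference
    rejections = 0
    for _ in range(sims):
        a = [rng.gauss(0.0, sigma) for _ in range(n)]
        b = [rng.gauss(delta, sigma) for _ in range(n)]
        z = (statistics.fmean(b) - statistics.fmean(a)) / se
        rejections += abs(z) > z_crit
    return rejections / sims

print(estimate_power(0.8))  # large effect: high power
print(estimate_power(0.2))  # small effect: low power, a p near 0.05 is fragile
```

With n = 30 per group, a large standardized effect gives power well above 0.8, while a small one leaves the study badly underpowered, which is exactly the situation in which a borderline p-value deserves the least confidence.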
"There's nothing special about the .05 significance level" - an ex professor of mine