Recently there seems to be increasing discussion about the use (and misuse) of the p-value and statistical significance. This discussion is not new, of course.
One of the most widespread recommendations is to report not only the p-value obtained but also confidence intervals and measures of effect size. While this recommendation improves the presentation of results, researchers generally interpret these effect size measures against arbitrary and unjustified benchmarks, citing Cohen's traditional work.
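As a concrete illustration of this recommendation, here is a minimal Python sketch of reporting all three quantities together for a two-sample comparison. This is not taken from any of the cited papers; the data, group sizes, and significance level are hypothetical.

```python
import numpy as np
from scipy import stats

# Hypothetical data: two independent groups
rng = np.random.default_rng(42)
group_a = rng.normal(loc=0.0, scale=1.0, size=50)
group_b = rng.normal(loc=0.5, scale=1.0, size=50)

# p-value from an independent-samples t-test (equal variances assumed)
t_stat, p_value = stats.ttest_ind(group_a, group_b)

# Cohen's d: mean difference divided by the pooled standard deviation
n_a, n_b = len(group_a), len(group_b)
pooled_sd = np.sqrt(((n_a - 1) * np.var(group_a, ddof=1) +
                     (n_b - 1) * np.var(group_b, ddof=1)) / (n_a + n_b - 2))
diff = np.mean(group_b) - np.mean(group_a)
cohens_d = diff / pooled_sd

# 95% confidence interval for the difference in means
se_diff = pooled_sd * np.sqrt(1 / n_a + 1 / n_b)
t_crit = stats.t.ppf(0.975, df=n_a + n_b - 2)
ci_low, ci_high = diff - t_crit * se_diff, diff + t_crit * se_diff

print(f"p = {p_value:.4f}, d = {cohens_d:.2f}, "
      f"95% CI for the difference [{ci_low:.2f}, {ci_high:.2f}]")
```

Reporting the interval and the effect size alongside the p-value is the easy part; the point of the discussion below is that interpreting that value of d is where arbitrary benchmarks tend to creep in.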
One author who has worked extensively on this subject is Daniel Lakens. In his article "Calculating and reporting effect sizes to facilitate cumulative science…", discussing Cohen's d, he notes that:
"these values are arbitrary and should not be interpreted rigidly... The only reason to use these benchmarks is because findings are extremely novel, and cannot be compared to related findings in the literature..."
Along the same lines, Funder and Ozer, in their paper "Evaluating Effect Size in Psychological Research: Sense and Nonsense", propose a better-justified way of interpreting effect sizes.
Finally, in agreement with Daniel Lakens's group, we should move from an obsession with finding rigid and arbitrary cut-off points to "...transparently report and justify all choices (...) when designing a study...".