Effect size is used as an indicator of practical significance. When the effect size is 1 or greater but the p-value is above 0.05, what can we infer?
"Statistical significance" is of "no practical significance" if the size of estimated effect is not practically relevant/useful. Also, an effect that is not statistically significant is not necessarily practically useless - the experiment probably didn't have enough power to be able to declare the effect as statistically significant.
Why don't you use confidence intervals instead? Statistical significance and p-values are usually not what the reader cares about; they would likely prefer a direct measure of the primary effect. CIs also provide a natural estimate of the sampling error.
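To illustrate, here is a minimal sketch of reporting a mean difference with a 95% CI instead of just a p-value (the data are simulated and purely hypothetical, using a Welch-type standard error):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical outcomes for two groups (assumed, for illustration only)
treatment = rng.normal(1.0, 2.0, size=30)
control = rng.normal(0.0, 2.0, size=30)

# Point estimate of the primary effect: the mean difference
diff = treatment.mean() - control.mean()

# Welch standard error and degrees of freedom (unequal variances)
v1 = treatment.var(ddof=1) / len(treatment)
v2 = control.var(ddof=1) / len(control)
se = np.sqrt(v1 + v2)
df = (v1 + v2) ** 2 / (v1**2 / (len(treatment) - 1) + v2**2 / (len(control) - 1))

lo, hi = stats.t.interval(0.95, df, loc=diff, scale=se)
print(f"mean difference = {diff:.2f}, 95% CI = ({lo:.2f}, {hi:.2f})")
```

The interval conveys both the size of the effect and the uncertainty around it in the same units as the outcome, which is exactly the information a p-value hides.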
I would guess you have too much variability in your samples, or too small a sample size. Imagine the treatment is effective in half the patients, with an effect size of 2 in those patients. On average, the effect size is 1, but the variability is high, so you can still have p > 0.05...
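A quick simulation of that scenario (all numbers assumed: half of treated patients respond with a 2 SD shift, small arms of n = 8) shows how often a Welch t-test fails to reach p < 0.05 despite an average effect of 1 SD:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 8        # small sample size per arm (assumed)
reps = 2000  # number of simulated trials

nonsig = 0
for _ in range(reps):
    control = rng.normal(0.0, 1.0, size=n)
    # Half the treated patients respond with a shift of 2 SD, half don't,
    # so the average treatment effect is 1 SD but variability is inflated
    responds = rng.random(n) < 0.5
    treated = rng.normal(0.0, 1.0, size=n) + 2.0 * responds
    _, p = stats.ttest_ind(treated, control, equal_var=False)
    nonsig += p > 0.05

print(f"Fraction of trials with p > 0.05: {nonsig / reps:.2f}")
```

A substantial fraction of such trials come out non-significant, which is exactly the low-power situation described above: a real, practically large effect in a subgroup, diluted by heterogeneity and a small sample.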