In general, effect size (ES) estimates (and confidence intervals for those ES values) are always a good idea, since they are independent of sample size (whereas statistical significance is a function of ES and sample size). ES values have been defined for both parametric and non-parametric statistical methods.
The specific ES metric that is best for a given application may vary, based on: (a) your analytic method; (b) the variable(s) and comparisons involved; and (c) your target audience (for example, in biomedical studies, odds ratios or risk ratios may be seen far more often than in other disciplines).
Here's some relevant information you may find helpful:
Article Using Effect Size—or Why the P Value Is Not Enough
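To illustrate the point about effect sizes being independent of sample size, here is a minimal sketch of one common ES metric, Cohen's d with a pooled standard deviation, which also handles unequal group sizes. The means and SDs below are purely hypothetical; only the group sizes N1 = 39 and N2 = 63 come from the question in this thread.

```python
import math

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Cohen's d using the pooled standard deviation.

    The (n - 1) weights mean unequal group sizes are handled
    naturally; the larger group simply contributes more to the
    pooled variance estimate.
    """
    pooled_sd = math.sqrt(
        ((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2) / (n1 + n2 - 2)
    )
    return (mean1 - mean2) / pooled_sd

# Hypothetical example: group means 105 vs. 100, both SDs 15,
# with the unequal sizes from the question (39 vs. 63).
d = cohens_d(105, 15, 39, 100, 15, 63)
print(round(d, 3))  # d = 5 / 15 ≈ 0.333, a "small-to-medium" effect
```

Note that d depends only on the means and SDs here, not on how large the groups are; the n's enter only as weights in pooling the two variances. Sample size instead affects the *precision* of d (its confidence interval) and the p value.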
I have another question: what size difference between groups is acceptable?
For example, one study has two groups of students receiving different treatments, with N1 = 39 and N2 = 63. Is this difference acceptable? And in general, what kind of difference between group sizes is acceptable?