Hello Mee-Jee Kim. What is the dependent variable? If it is something that is quite familiar to your readers, then raw mean differences and differences in differences (for interaction terms) will be meaningful. And in that case, I don't think that standardized measures of effect size add very much (if any) useful information. I do realize, however, that sometimes editors or reviewers may insist on them. ;-)
You might find this article by Thom Baguley helpful.
Baguley, T. (2009). Standardized or simple effect size: What should be reported? British Journal of Psychology, 100(3), 603-617.
Thanks Bruce Weaver - I'd also add that there are at least three versions of eta-squared:
classical - SS effect divided by SS total
partial - SS effect divided by (SS effect plus its error SS; equivalently, SS total minus the SS for the other effects, which depends on the design)
generalized - SS effect divided by (SS total adjusted to equate everything to the same design).
Partial eta squared is useless as an effect size measure per se (e.g., it can sum to more than 100% over an ANOVA model, so it can't easily be interpreted as a variance-explained measure even if that's what you want). It is sometimes useful for power/sample size estimation (assuming the design hasn't changed, etc.).
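To make the distinction between the classical and partial denominators concrete, here is a minimal Python sketch (not from either answer; the data, factor names and effect sizes are simulated purely for illustration) that pulls the sums of squares from a statsmodels ANOVA table for a balanced two-way between-subjects design:

```python
# A minimal sketch (simulated data, hypothetical variable names) of classical
# vs. partial eta-squared for a balanced two-way between-subjects ANOVA.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(1)
n = 30  # observations per cell
df = pd.DataFrame({
    "a": np.repeat(["a1", "a2"], 2 * n),
    "b": np.tile(np.repeat(["b1", "b2"], n), 2),
})
df["score"] = (
    rng.normal(0, 1, len(df))
    + np.where(df["a"] == "a2", 0.8, 0.0)   # simulated main effect of A
    + np.where(df["b"] == "b2", 0.5, 0.0)   # simulated main effect of B
)

model = smf.ols("score ~ C(a) * C(b)", data=df).fit()
table = anova_lm(model, typ=2)              # has a 'sum_sq' column and a Residual row

ss_error = table.loc["Residual", "sum_sq"]
ss_total = table["sum_sq"].sum()            # effects + residual (balanced design)
effects = table.drop(index="Residual")

eta_sq = effects["sum_sq"] / ss_total                                # classical
partial_eta_sq = effects["sum_sq"] / (effects["sum_sq"] + ss_error)  # partial

print(pd.DataFrame({"eta_sq": eta_sq, "partial_eta_sq": partial_eta_sq}))
```

The partial values use only each effect's SS and the residual SS in the denominator, which is exactly why they can sum to more than 100% across a model, whereas the classical values share a single SS total denominator.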
Classical eta squared is very simple to explain, but not comparable between designs. So if using eta-squared I'd look at generalized eta-squared. This is fiddly to calculate, but there's a simplified formula in my book (Serious Stats) and a more detailed explanation in:
Olejnik, S., & Algina, J. (2003). Generalized Eta and Omega Squared Statistics: Measures of Effect Size for Some Common Research Designs. Psychological Methods, 8(4), 434–447. https://doi.org/10.1037/1082-989X.8.4.434
I'm not a huge fan of variance explained measures but I use generalized eta-squared if I do use any in ANOVA.
Note that the design (independent, repeated or mixed measures) and the nature of the factors (measured or manipulated) influence the calculation of generalized eta-squared.
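As one illustration (my own sketch, with simulated data; it is not the general Olejnik & Algina formula, just the simplest repeated-measures case with a single manipulated within-subject factor), generalized eta-squared keeps the subject variance in its denominator and so comes out smaller than partial eta-squared:

```python
# A minimal sketch (assumptions mine) of generalized vs. partial eta-squared
# for a one-way repeated-measures design with one manipulated within-subject
# factor. Here generalized eta-squared retains subject variance in the
# denominator, so it equals classical eta-squared, while partial drops it.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
n_subj, conditions = 20, ["c1", "c2", "c3"]
long = pd.DataFrame({
    "subject": np.repeat(np.arange(n_subj), len(conditions)),
    "condition": np.tile(conditions, n_subj),
})
subj_effect = rng.normal(0, 1.5, n_subj)          # stable individual differences
cond_effect = {"c1": 0.0, "c2": 0.4, "c3": 0.8}   # manipulated condition effect
long["score"] = (subj_effect[long["subject"]]
                 + long["condition"].map(cond_effect)
                 + rng.normal(0, 1, len(long)))

grand = long["score"].mean()
ss_total = ((long["score"] - grand) ** 2).sum()
ss_subject = len(conditions) * ((long.groupby("subject")["score"].mean() - grand) ** 2).sum()
ss_condition = n_subj * ((long.groupby("condition")["score"].mean() - grand) ** 2).sum()
ss_error = ss_total - ss_subject - ss_condition   # condition-by-subject residual

partial = ss_condition / (ss_condition + ss_error)
generalized = ss_condition / (ss_condition + ss_subject + ss_error)

print(f"partial eta^2     = {partial:.3f}")
print(f"generalized eta^2 = {generalized:.3f}")
```

With substantial between-subject variability (as simulated here), partial eta-squared is noticeably larger than the generalized value, which is one reason the two should not be compared across designs.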
In general, larger effect sizes indicate greater practical significance of the observed differences or relationships. However, what counts as a "large" effect size is context-dependent and varies across fields.
It's important to note that while p-values in ANOVA indicate whether there are significant differences, effect size measures provide additional information about the magnitude of those differences. Researchers are encouraged to report both p-values and effect sizes to provide a more comprehensive understanding of their findings.
When reporting effect sizes, it's a good practice to consider the context of your study and the field-specific conventions for interpreting effect sizes. Additionally, confidence intervals for effect size estimates can provide a range of plausible values, adding to the interpretation of the results.
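As one illustration of an interval for an effect size (a percentile bootstrap; analytic intervals based on the noncentral F distribution are another common choice, and neither is prescribed in this thread), here is a rough sketch for classical eta-squared in a one-way between-subjects design with simulated data:

```python
# A minimal sketch (one possible approach, simulated data) of a percentile-
# bootstrap confidence interval for classical eta-squared in a one-way
# between-subjects design.
import numpy as np

rng = np.random.default_rng(3)
groups = [rng.normal(mu, 1.0, 25) for mu in (0.0, 0.3, 0.7)]  # simulated groups

def eta_squared(groups):
    """Classical eta-squared: SS_between / SS_total."""
    all_y = np.concatenate(groups)
    grand = all_y.mean()
    ss_total = ((all_y - grand) ** 2).sum()
    ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
    return ss_between / ss_total

boot = []
for _ in range(5000):
    resampled = [rng.choice(g, size=len(g), replace=True) for g in groups]
    boot.append(eta_squared(resampled))

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"eta^2 = {eta_squared(groups):.3f}, 95% bootstrap CI [{lo:.3f}, {hi:.3f}]")
```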