Effect size refers to the strength of the association between variables (the independent variable(s) and the dependent variable). Why is it important to know the effect size? Because it helps to calculate the needed sample size. Many effect-size measures (such as r or eta-squared) range from 0 to 1, although others, such as Cohen's d, can exceed 1. The larger the effect size, the smaller the sample needed to detect it.
Effect size is estimated in the context of inferential statistics. Researchers can refer to previous publications to estimate the ES. In addition, according to Cohen (1988), in each statistical test one can classify the ES as small, medium, or large. The benchmark values differ by test: for the t-test, small, medium, and large correspond to .20, .50, and .80, and the values differ for ANOVA, correlation tests, etc.
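To make the link between effect size and sample size concrete, here is a minimal sketch of an a-priori sample-size calculation in Python (my choice of tool, not something from this thread), using statsmodels' power module for an independent-samples t-test and Cohen's benchmark values for d:

```python
# A minimal sketch of an a-priori sample-size calculation for an
# independent-samples t-test, using Cohen's (1988) benchmarks for d.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

for label, d in [("small", 0.20), ("medium", 0.50), ("large", 0.80)]:
    # Solve for the per-group sample size needed to detect effect size d
    # at alpha = .05 with 80% power (two-sided test).
    n = analysis.solve_power(effect_size=d, alpha=0.05, power=0.80,
                             alternative="two-sided")
    print(f"{label} effect (d = {d}): ~{n:.0f} participants per group")
```

Running this gives roughly 394, 64, and 26 participants per group for small, medium, and large effects respectively, which is the "larger effect, smaller needed sample" point in action.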
When we only report statistical significance (whether the difference between groups is so large it would occur by chance less than 5% of the time), whether or not a difference attains statistical significance very much depends on how large the sample was. If you are working with a very large sample (for example a national survey), even a trivial difference may come up as statistically significant. The effect size is a way of summarising how 'big' the difference is that doesn't depend on the sample size. Some journals are now explicitly asking for effect sizes to be reported.
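A quick simulation makes this vivid. This is just an illustrative sketch in Python with made-up numbers (a true group difference of 0.05 standard deviations), not anything from the original posts:

```python
# Illustration: the same trivial difference (0.05 SD) tested at
# increasing sample sizes. Significance depends heavily on n.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
for n in (50, 500, 50_000):
    a = rng.normal(0.00, 1, n)  # "control" group
    b = rng.normal(0.05, 1, n)  # "treatment" group, shifted by 0.05 SD
    t, p = stats.ttest_ind(a, b)
    print(f"n = {n:>6} per group: p = {p:.4f}")
```

With 50 or 500 per group, a difference this small will usually look like noise; with 50,000 per group, the p-value is all but guaranteed to be tiny, even though the effect itself (d = 0.05) is trivially small. Reporting the effect size alongside the p-value makes that distinction visible.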
If you want to calculate effect sizes and are looking for a really easy-to-understand book, I'd recommend Andy Field's Discovering Statistics (there are versions for the SPSS, SAS, and R statistical programmes).
Effect size is how strong the association is between variables. In contrast with p-values, which tell us how likely an association at least this strong would be if it were just a fluke, an effect size gives us a sense of how meaningful the association is. In psychology, the most common measure of effect size when comparing groups is Cohen's d, which is approximately the number of standard deviations between the means of the groups. The most common measure for a correlation is r squared, literally the correlation squared, which is the proportion of variance in one variable accounted for by the other. For studies of very rare events, such as epidemiological studies, odds ratios are also fairly common.

I wouldn't say social scientists report effect sizes less than other scientists do, but it might depend on the particular kind of research. There is also quite a bit of debate about how meaningful particular ways of framing effect sizes are. What does an effect size mean in a more or less controlled experiment? How do we interpret a small effect size in an observational study when the effect touches a very large number of people? For rare events, a small effect size in the overall population may not seem very meaningful, but the odds ratio for a particular individual's outcome might be quite meaningful. Hope some of this is helpful, Sabri. ~ Kevin
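If it helps to see two of the measures Kevin mentions in code, here is a small Python sketch (the data are invented for illustration; Cohen's d is computed here with the standard pooled-SD formula):

```python
# Sketch of two common effect sizes: Cohen's d for a two-group
# comparison, and r-squared for a correlation. Data are made up.
import numpy as np

def cohens_d(x, y):
    """Difference in means divided by the pooled standard deviation."""
    nx, ny = len(x), len(y)
    pooled_var = ((nx - 1) * np.var(x, ddof=1) +
                  (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2)
    return (np.mean(x) - np.mean(y)) / np.sqrt(pooled_var)

rng = np.random.default_rng(1)
treatment = rng.normal(10.5, 2, 100)
control = rng.normal(10.0, 2, 100)
print(f"Cohen's d: {cohens_d(treatment, control):.2f}")

x = rng.normal(size=200)
y = 0.4 * x + rng.normal(size=200)
r = np.corrcoef(x, y)[0, 1]
print(f"r = {r:.2f}, r^2 = {r**2:.2f} "
      "(share of variance in y accounted for by x)")
```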
I think we need to be way more creative when thinking about effect sizes. Why don't we abandon the Cohen conventions and think of more user-friendly and intuitive ways to describe the magnitude of effects? For instance, for many years the area of Wales (a country, and part of the UK) has been used as a unit of area. See...
Indeed, many other things have been used to describe, or place in context, estimates of magnitude: the height of the Empire State Building (or the Eiffel Tower, or whatever), the amount of water in an Olympic-size swimming pool, etc.
Maybe we could agree on a universally accepted effect size for psychology that everyone understands and can relate to. Maybe 'nails on a blackboard' or 'a paper cut on the tip of your finger' could be used for negative effects, and 'cat memes' for positive effects.