In my experiment I have two groups: a clinical group (n = 15) and a control group (n = 15).
I used SPSS to test the significance and magnitude of the differences between these two groups.
Dependent variable 1 (executive function) was normally distributed, so group differences on this variable were analysed using an independent-samples t-test. I calculated the effect size for the group difference in executive function using Cohen's d.
(calculator: https://www.psychometrica.de/effect_size.html)
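For reference, Cohen's d here is the mean difference divided by the pooled standard deviation. A minimal Python sketch of that calculation (I'm assuming this pooled-SD version is what SPSS and the calculator use):

```python
import numpy as np

def cohens_d(group1, group2):
    """Cohen's d for two independent groups (pooled-SD denominator)."""
    n1, n2 = len(group1), len(group2)
    v1, v2 = np.var(group1, ddof=1), np.var(group2, ddof=1)  # sample variances
    pooled_sd = np.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (np.mean(group1) - np.mean(group2)) / pooled_sd
```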
Dependent variable 2 (social cognition) was not normally distributed, so group differences on this variable were analysed using a Mann-Whitney U test. I calculated the effect size for the group difference in social cognition using eta squared.
(calculator: https://www.psychometrica.de/effect_size.html)
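For reference, the eta squared here is the z-based estimate (eta squared ≈ z²/N), where z comes from the normal approximation to U; I'm assuming that is what the calculator computes. A minimal Python sketch (the helper name is mine; ties are ignored):

```python
import numpy as np
from scipy.stats import mannwhitneyu

def mwu_eta_squared(group1, group2):
    """Mann-Whitney U with a z-based eta squared estimate (z**2 / N)."""
    n1, n2 = len(group1), len(group2)
    u, p = mannwhitneyu(group1, group2, alternative="two-sided")
    # Normal approximation to U (no tie correction)
    z = (u - n1 * n2 / 2) / np.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    return u, p, z**2 / (n1 + n2)
```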
As I understand it, the eta squared effect size for a Mann-Whitney U test is interpreted as small from 0.01, medium from 0.06, and large from 0.14; for Cohen's d the small, medium, and large thresholds are 0.2, 0.5, and 0.8. (Reference: http://imaging.mrc-cbu.cam.ac.uk/statswiki/FAQ/effectSize)
I have a puzzling result:
For the executive function test (normally distributed): the independent t-test for the difference between groups was not significant, t = -1.81 (df = 28), p = 0.081, with a medium effect size (Cohen's d = 0.66).
For the social cognition test (not normally distributed): a Mann-Whitney U test was run to compare the groups. The result was significant, U = 75, p = 0.017, and eta squared was calculated as 0.08, also a medium effect size. (Note that, using the Psychometrica calculator above, this eta squared value converts to an equivalent Cohen's d of 0.6.)
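As a sanity check, both figures can be reproduced in a few lines, assuming the standard r-to-d conversion d = 2r/√(1 − r²) is what the calculator applies:

```python
import numpy as np
from scipy import stats

r = np.sqrt(0.08)                # eta squared -> r
d = 2 * r / np.sqrt(1 - r**2)    # r -> d; gives ~0.59, i.e. the calculator's ~0.6
p = 2 * stats.t.sf(1.81, df=28)  # two-tailed p for |t| = 1.81, df = 28; ~0.081
print(d, p)
```

This reproduces the converted d of roughly 0.6 and confirms the t-test p-value sits just above .05.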
So my question is: is there any reason why, in the same sample, when testing for differences between groups, one test would report a significant difference with a medium effect size while the other reports a medium effect size that is not significant?
Thanks in advance for any suggestions or explanations,
Iona