The effect size used for a t-test can depend on the specific situation and research question.
One common effect size used in t-tests is Cohen's d, calculated by dividing the difference between the means of the two groups by the pooled standard deviation. Cohen's d provides a standardized measure of the difference between the means of two groups and can be interpreted using general guidelines such as small (d=0.2), medium (d=0.5), and large (d=0.8) effect sizes.
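As a minimal sketch of the formula above (the function name and sample data are my own, for illustration), Cohen's d can be computed directly with NumPy:

```python
import numpy as np

def cohens_d(group1, group2):
    """Mean difference divided by the pooled standard deviation."""
    g1, g2 = np.asarray(group1, float), np.asarray(group2, float)
    n1, n2 = len(g1), len(g2)
    # Pooled SD from the unbiased (ddof=1) variance of each group
    pooled_sd = np.sqrt(((n1 - 1) * g1.var(ddof=1) +
                         (n2 - 1) * g2.var(ddof=1)) / (n1 + n2 - 2))
    return (g1.mean() - g2.mean()) / pooled_sd

# Hypothetical example data: two small groups
a = [10, 12, 11, 13, 12]
b = [8, 9, 10, 9, 8]
print(cohens_d(a, b))  # well above 0.8, i.e. a "large" effect by the guidelines
```

Note the sign of d depends on which group is passed first; it is common to report the absolute value alongside the direction of the difference.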
Another effect size that can be used for t-tests is the correlation coefficient, r. This effect size is appropriate when examining the relationship between two continuous variables. The strength of the relationship between the two variables can be interpreted using general guidelines such as small (r=0.1), medium (r=0.3), and large (r=0.5) effect sizes.
It's important to note that many other effect sizes can be used in t-tests, and the choice of effect size should be based on the research question and the specific variables being examined.
Ma'Mon Abu Hammad, looking at the correlation between two continuous variables doesn't make sense as an effect size statistic for a t-test. What is sometimes done is to look at the correlation between the continuous variable and the dichotomous grouping variable (treated as numeric, e.g. 0 and 1). This is analogous to looking at the r-squared (or eta-squared) from an ANOVA.
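To illustrate the point-biserial idea described here (the data below are made up for the example), one can correlate the continuous outcome with a 0/1 group indicator and square the result:

```python
import numpy as np

# Hypothetical continuous outcome and a dichotomous group indicator coded 0/1
y = np.array([10.0, 12.0, 11.0, 13.0, 12.0, 8.0, 9.0, 10.0, 9.0, 8.0])
group = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0, 0])

# Point-biserial correlation is just Pearson's r with the 0/1 indicator
r = np.corrcoef(group, y)[0, 1]

# r-squared: proportion of variance in y associated with group membership,
# analogous to eta-squared from a one-way ANOVA
print(r, r**2)
```

For a two-group comparison, this r-squared matches the eta-squared that a one-way ANOVA on the same two groups would report.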
But, don't worry; the original poster isn't interested in the answer. It's just a post so their colleague can post an answer.