I assume you are speaking of the coefficients in linear regression. When your response variable is metric and can readily be interpreted in terms of impact, the beta coefficients are effect sizes in themselves. Imagine you are comparing two management styles (A and B) on employee satisfaction across a number of companies:
If your response variable is the staff retention rate, you can say right away: "Companies with management style A lose beta% fewer employees." Every expert in the field should know whether that is a practically relevant effect or a negligible one.
If your response variable is employee satisfaction measured on a rating scale, there is no such direct interpretation. In that case, effect sizes are usually rescaled against the overall dispersion found in the population. In this example, that can be achieved simply by z-transforming the response variable.
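As a minimal sketch of that rescaling, using simulated data (the variable names, sample size, and effect magnitude below are all made up for illustration): z-transforming the response turns the raw slope, measured in rating-scale points, into a slope measured in standard deviations of the response.

```python
import numpy as np

# Hypothetical data: satisfaction ratings and a dummy-coded
# management style (0 = style B, 1 = style A).
rng = np.random.default_rng(0)
style = rng.integers(0, 2, size=200)
satisfaction = 5.0 + 0.8 * style + rng.normal(0, 1.5, size=200)

# z-transform the response so the slope is expressed in SD units
z_satisfaction = (satisfaction - satisfaction.mean()) / satisfaction.std()

# simple OLS slope: cov(x, y) / var(x)
beta_raw = np.cov(style, satisfaction)[0, 1] / np.var(style, ddof=1)
beta_std = np.cov(style, z_satisfaction)[0, 1] / np.var(style, ddof=1)

# beta_std is simply beta_raw divided by the SD of the response,
# i.e. the effect rescaled against the overall dispersion.
print(beta_raw, beta_std)
```

The standardized slope no longer depends on the arbitrary units of the rating scale, which is exactly what makes it usable as an effect size.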
Cohen's f-squared reflects the explanatory power of the overall regression model: R-squared (the explained variance) divided by (1 - R-squared) (the unexplained variance). In a single-predictor model, beta (the standardized regression coefficient) = r(x,y) = R, so beta is related to f-squared as:
f-squared = beta-squared / (1 - beta-squared),
beta-squared = f-squared / (1 + f-squared), and
beta = square root of [f-squared / (1 + f-squared)].
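These algebraic round-trips can be checked numerically. The sketch below uses simulated data (the sample size and slope are arbitrary choices for illustration) and verifies that, with one predictor, beta recovered from f-squared equals |r|:

```python
import numpy as np

# Hypothetical single-predictor data
rng = np.random.default_rng(1)
x = rng.normal(size=500)
y = 0.5 * x + rng.normal(size=500)

r = np.corrcoef(x, y)[0, 1]   # with one predictor, beta = r(x,y) = R
r2 = r ** 2                   # R-squared: explained variance
f2 = r2 / (1 - r2)            # Cohen's f-squared

# invert: beta = sqrt(f-squared / (1 + f-squared))
beta_from_f2 = np.sqrt(f2 / (1 + f2))
print(abs(r), beta_from_f2)   # the two values coincide
```

Note the square root only returns the magnitude; the sign of beta has to come from r itself.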
R-squared, f-squared, and beta can be, and have been, used as effect size indicators. A common question is whether they are (sufficiently) different from zero to be considered noteworthy.
I got the point in the case of a single predictor. What happens in the case of multiple predictors? I am still confused about the real difference between f-squared and the beta value.