04 October 2019

Hello everybody,

I have a dataset with 8 predictors (dimensions of brand image) and 4 different DVs (Differentiation, Relevance, Esteem and Knowledge). I regress each DV on the 8 predictors, one DV at a time.

In order for those linear regressions to meet their assumptions, I had to square-root transform the DV Differentiation (sqrt(Yd)) and power-transform Knowledge with exponent 2.5 (Yk^2.5), although I am debating whether a simple quadratic transform (Yk^2) would do.
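For context, this is roughly how I fit the models in R (a minimal sketch; the data frame name brand and the predictor names x1 to x8 are placeholders, not my real variable names):

# Hypothetical data frame 'brand' with predictors x1..x8 and the raw DVs
# Differentiation, Relevance, Esteem, Knowledge

# Transformed DVs so the residuals meet the regression assumptions
fit_diff <- lm(sqrt(Differentiation) ~ x1 + x2 + x3 + x4 + x5 + x6 + x7 + x8,
               data = brand)
fit_know <- lm(I(Knowledge^2.5) ~ x1 + x2 + x3 + x4 + x5 + x6 + x7 + x8,
               data = brand)

# Untransformed DVs
fit_rel <- lm(Relevance ~ x1 + x2 + x3 + x4 + x5 + x6 + x7 + x8, data = brand)
fit_est <- lm(Esteem    ~ x1 + x2 + x3 + x4 + x5 + x6 + x7 + x8, data = brand)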

Now I want to compare the effect of the 8 predictors across the different DVs, along the lines of "Classic image influences Relevance the most strongly, followed by Knowledge, ...".

The standardized beta coefficients (coefficients from z-scored variables) shown here were calculated from the transformed models for sqrt(Yd) and Yk^2.5 and from the untransformed models for Relevance and Esteem.
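Roughly, I computed them like this (again just a sketch with the placeholder brand data frame): z-score the predictors and the (transformed) DV and refit, so the coefficients come out standardized.

# Standardize (z-score) the predictors and the transformed DV, then refit;
# the resulting coefficients are the standardized betas
brand_z <- as.data.frame(scale(brand[, c("x1","x2","x3","x4","x5","x6","x7","x8")]))
brand_z$sqrt_diff <- as.numeric(scale(sqrt(brand$Differentiation)))

fit_diff_std <- lm(sqrt_diff ~ ., data = brand_z)
coef(fit_diff_std)   # standardized betas, but on the sqrt(Differentiation) scale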

I have read that I can back-transform sqrt(Yd) and Yk^2.5, but that I must never back-transform the betas. How do I make the standardized betas comparable? Even if I square sqrt(Yd) back to Yd, how does that influence my interpretation here?
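As I understand it, that back-transforming applies to the predictions rather than to the coefficients, along these lines (assuming the fits from the sketch above):

# Back-transforming works for fitted values / predictions, not for the betas
pred_sqrt <- predict(fit_diff)       # predictions on the sqrt(Yd) scale
pred_diff <- pred_sqrt^2             # back on the original Differentiation scale

pred_pow  <- predict(fit_know)       # predictions on the Yk^2.5 scale
pred_know <- pred_pow^(1/2.5)        # back on the original Knowledge scale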

Honestly, I am confused. Can I compare them anyway because they are standardized scores?

Thank you all in advance!

PS: I am using R
