The article cited above (Statistical methods for comparing regression coefficients….) is not the appropriate one for this question.
We can compare two regression coefficients from two different regressions by using the standardized regression coefficients, called beta coefficients; conveniently, SPSS reports these beta coefficients alongside the usual results. To get the beta coefficients, first change both the DV and the IV into standardized variables: subtract the variable's mean from each value and divide the difference by the variable's standard deviation. Such a standardized variable is also known as a Z-variate. Next, instead of running the usual (unstandardized) regression, run the regression on these standardized variables. Or, much more easily, run the usual regression in SPSS; SPSS will report the beta coefficients alongside the usual regression coefficients.
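A minimal sketch of this in R (with hypothetical data; `scale()` performs exactly the standardization described above):

```r
# hypothetical data: y is the DV, x is the IV
set.seed(1)
x <- rnorm(100, mean = 50, sd = 10)
y <- 2 + 0.3 * x + rnorm(100, sd = 5)

# regress the standardized DV on the standardized IV;
# the slope is the beta coefficient, in SD units
fit_std <- lm(scale(y) ~ scale(x))
coef(fit_std)[2]
```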
The standardized regression (beta) coefficients from different regressions can be compared because beta coefficients are expressed in units of standard deviations (SDs). The interpretation is as follows: if the standardized IV changes by one standard deviation, the standardized DV changes, on average, by beta standard deviations. Remember, the usual regression model measures the relationship in the original units of the DV and IV; here it is measured in SD units, which is why coefficients from different regressions become comparable.
Now, in case the regression coefficients are already given and we are not able to run the standardized regression, it is possible to convert the usual regression coefficients into beta coefficients by using the relationship between them: beta coefficient = usual regression coefficient × sample standard deviation of the IV (X) / sample standard deviation of the DV (Y). This relationship also holds in multiple regression. For more details, see Gujarati's Basic Econometrics, 4th edition, Chapter 6.
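A quick numerical check of this conversion in R, continuing the hypothetical example above:

```r
# unstandardized regression, then convert the slope to a beta coefficient
fit  <- lm(y ~ x)
b    <- coef(fit)[2]
beta <- b * sd(x) / sd(y)  # beta = b * SD(X) / SD(Y)
beta                       # equals coef(fit_std)[2] from the standardized fit
```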
I would consider a multivariate model in which more than one response variable is modeled simultaneously; this allows comparison of the estimated regression coefficients AND their standard errors. A single overall model is needed for this; see my previous postings.
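As a minimal sketch (not the full framework referred to above), R's built-in multivariate linear model fits both responses in one overall model; the names `y1`, `y2`, and `x` are hypothetical:

```r
# hypothetical data: two responses measured on the same units
set.seed(2)
x  <- rnorm(200)
y1 <- 1 + 0.5 * x + rnorm(200)
y2 <- 2 + 0.4 * x + rnorm(200, sd = 2)

# one overall model for both responses
fit <- lm(cbind(y1, y2) ~ x)
coef(fit)  # an intercept and slope for each response

# covariance matrix of ALL estimates, across both equations,
# so the two slopes can be compared with a Wald-type z statistic
V  <- vcov(fit)
b  <- coef(fit)["x", ]
se <- sqrt(V["y1:x", "y1:x"] + V["y2:x", "y2:x"] - 2 * V["y1:x", "y2:x"])
(b["y1"] - b["y2"]) / se
```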
The detail will depend on the metric in which each response is measured. If the two responses are continuous but on very dissimilar scales, I would consider turning them into rankits so that they both have a mean of zero and an SD of 1; both slope coefficients then represent the change in Y in SD units for a unit change in X. There is no need to use beta weights, as the predictors are the same for both models, and you are then not subject to the criticism of such standardized coefficients (for the disadvantages, see https://en.wikipedia.org/wiki/Standardized_coefficient).
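A sketch of the rankit transformation in R, using one common approximation to the expected normal order statistics (assumed here; other offsets, such as (r - 3/8)/(n + 1/4), are also in use):

```r
# rankits: normal scores with mean ~0 and SD ~1, whatever the original scale
rankit <- function(v) qnorm((rank(v) - 0.5) / length(v))

y1_r <- rankit(y1)
y2_r <- rankit(y2)
c(mean(y1_r), sd(y1_r))  # approximately 0 and 1
```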
The references I give in the previous posting show that it is now possible to handle situations where the responses are discrete, or a combination of discrete and continuous.
In simple cases it is possible to stack the two outcomes and the predictors and include a dummy and interactions in a standard regression model, but this pools the unexplained variance across the outcomes, and there might be quite different variation associated with each outcome. Not modeling this correctly could lead to problems with the estimated standard errors. The multivariate model has a separate variance for each outcome and a covariance between the two (or more) outcomes.
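If one does stack, the single-variance problem can at least be relaxed; a sketch using `nlme::gls` with a separate residual variance per outcome (assuming a hypothetical `stacked` data frame with columns `y`, `x`, `g`, and `outcome`; the covariance between outcomes from the same unit still requires the full multivariate model):

```r
library(nlme)

# g is the 0/1 outcome dummy; the x:g term compares the two slopes
fit <- gls(y ~ x * g, data = stacked,
           weights = varIdent(form = ~ 1 | outcome))
summary(fit)
```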
***** Subsequently edited in relation to SEs and beta weights, and in relation to the simple stacking approach with a single variance for the random term.
I will address your last question: beta comparison.
In classic textbooks this is referred to as an interaction test. If you are not familiar with it, below is a brief glance at the method.
Suppose you have two samples, each with Y and X measurements, namely Y_1 and X_1 for the first and Y_2 and X_2 for the other.
The question is to compare b_1 and b_2 in the relations Y_k = ... + b_k*X_k + ..., k = 1, 2.
Create a single sample by concatenating the two samples, adding a new (dummy) variable, say G, with value 0 (zero) for the first sample and 1 (one) for the other; then compute the product X*G.
Add this product to your model (fitted on the whole data): Y = ... + b*X + d*(X*G) + ...
Comparing the two original coefficients then amounts to comparing d to 0 (zero), since the slope is b when G = 0 and b + d when G = 1, so d estimates b_2 - b_1.
You can perform this with Excel's regression function; more sophisticated software such as R is obviously better suited (a brief R sketch follows).
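A minimal R sketch of these steps, assuming two hypothetical data frames `s1` and `s2`, each with columns `y` and `x`:

```r
# concatenate the two samples and add the dummy G (0 = first, 1 = second)
stacked <- rbind(transform(s1, G = 0),
                 transform(s2, G = 1))

# y ~ x * G expands to x + G + x:G; the x:G coefficient is d
fit <- lm(y ~ x * G, data = stacked)
summary(fit)  # the t test on the x:G row tests d = 0, i.e., b_1 = b_2
```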