Beta can be larger than 1, and sometimes much more, as in the Rockport Fitness Walking Test (Kline GM, Porcari JP, Hintermeister R, et al. 1987. Estimation of VO2max from a one-mile track walk, gender, age, and body weight. Medicine and Science in Sports and Exercise 19:253-259).
Of course, in multiple regression analysis you can have coefficients larger than 1. This happens when you run the regression on variables with different units of measurement, e.g. your DV is in dollars and your IV is in billions of dollars, as in the relationship between advertising costs (IV) and sales (DV).
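A minimal sketch of that units effect, assuming made-up advertising and sales figures (the variable names and synthetic data are illustrative, not from any answer above):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
ad_spend_billions = rng.uniform(0.5, 5.0, size=100)                     # IV in billions of dollars
sales_dollars = 2e9 * ad_spend_billions + rng.normal(0, 1e8, size=100)  # DV in dollars

X = sm.add_constant(ad_spend_billions)
fit = sm.OLS(sales_dollars, X).fit()
print(fit.params)  # slope is on the order of 2e9 -- far above 1, purely because of the units
```

Rescaling the predictor into the same units as the outcome would shrink the slope without changing the fit at all.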
In multiple regression, beta can be more than one, especially if the scaling of the variables is different. I have seen real data behave like that before.
I advise you to read your regression result in everyday language. If you are predicting blood pressure and you have a coefficient of 1.5 for BMI, then you say
"Blood pressure goes up 1.5 mm of mercury for every one-unit increase in BMI"
and that makes sense. If it sounds like nonsense when you try it, then you may have something wrong with your data (for example, some genius used 999 as a code for missing values, which is an idiotic thing to do at the best of times).
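If you suspect that kind of sentinel coding, a quick recode before fitting is cheap. Here is a small sketch, assuming a pandas DataFrame with hypothetical 'bmi' and 'sbp' columns:

```python
import numpy as np
import pandas as pd

# Hypothetical data: 'bmi' uses 999 as a missing-value code
df = pd.DataFrame({"bmi": [22.4, 31.0, 999, 27.5], "sbp": [118, 135, 122, 141]})

# Recode the sentinel as a proper missing value before fitting anything
df["bmi"] = df["bmi"].replace(999, np.nan)
print(df["bmi"].describe())  # the summary is no longer distorted by the 999s
```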
It sounds to me like a lot of people on this post are mistaking Betas (the standardized regression coefficients) for Bs (the unstandardized coefficients). Bs are in the original units of the variables, which is what most people here are referring to; because of this, Bs can be very small or very large depending on the units of your predictor and criterion. Betas, however, are in standard deviation units (much like Pearson's r) and are interpreted like this: for a 1 SD change in your predictor, there is a Beta SD change in your criterion, holding the other variables constant (because it is a partial regression coefficient). If you are making predictions from a regression model, you typically use the Bs, not the Betas, because to make predictions from Betas your predictor variables would need to be converted to z scores first (and the predicted criterion value would be in z units too). I recommend Cohen, Cohen, West, & Aiken for a good review of this material.

Nonetheless, Betas (the standardized coefficients) can be greater than 1.0; however, this is likely to happen only when there is high multicollinearity among your predictors (i.e., the predictors are too highly correlated). See Karl G. Jöreskog (1999), "How Large Can a Standardized Coefficient Be?" Thus, if you get Betas > 1 you should generally try to reduce your multicollinearity (check Tolerance values to make sure they are > 0.10).
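To make the B-versus-Beta distinction and the Tolerance check concrete, here is a hedged sketch using statsmodels on synthetic, deliberately collinear data (the variable names and numbers are assumptions for illustration, not anyone's actual analysis):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(1)
n = 200
x1 = rng.normal(size=n)
x2 = 0.9 * x1 + 0.1 * rng.normal(size=n)          # deliberately collinear with x1
y = 2.0 * x1 + 1.0 * x2 + rng.normal(size=n)
df = pd.DataFrame({"y": y, "x1": x1, "x2": x2})

# Unstandardized Bs: slopes in the original units of the variables
print(sm.OLS(df["y"], sm.add_constant(df[["x1", "x2"]])).fit().params)

# Standardized Betas: z-score everything first, so the slopes are in SD units
z = (df - df.mean()) / df.std()
print(sm.OLS(z["y"], sm.add_constant(z[["x1", "x2"]])).fit().params)

# Tolerance = 1 / VIF; values below 0.10 flag serious multicollinearity
X = sm.add_constant(df[["x1", "x2"]])
for i, name in enumerate(["x1", "x2"], start=1):
    print(name, "tolerance =", 1.0 / variance_inflation_factor(X.values, i))
```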
I tend to agree with Dr. Rhudy. Multicollinearity might be the reason for such situations. One has to be careful about constructs, data acquisition and biases.
I have 18 regressions (2 main variables, 4 control variables and time dummies) in my paper. In 3 of the regressions I am getting only one control variable coefficient greater than one. Is this fine?
Multicollinearity is likely the problem; I observed it in my own analysis and had to remove some of the data. For instance, some of the variables were derived from an original variable using a conversion factor or formula.
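A quick sketch of that situation (the column names and conversion factor are hypothetical): a variable derived from another by a fixed conversion factor is perfectly collinear with it, so one of the two has to be dropped.

```python
import numpy as np

weight_kg = np.array([60.0, 72.5, 81.0, 95.2])
weight_lb = weight_kg * 2.20462                   # derived via a fixed conversion factor

print(np.corrcoef(weight_kg, weight_lb)[0, 1])    # exactly 1: perfectly collinear
X = np.column_stack([np.ones(4), weight_kg, weight_lb])
print(np.linalg.matrix_rank(X))                   # rank 2, not 3: drop one of the two columns
```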
Hi Kristina, 4 years after I asked my question, here is my understanding now:
Theoretically, the beta is not bounded by 1. If it is larger than that, it means that a one standard deviation change in the independent variable results in more than one standard deviation of change in the dependent variable. Given that this is highly unlikely, I would check very carefully for model misspecification, e.g. multicollinearity.
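For illustration, here is a hedged sketch (synthetic data, names mine) of how a standardized beta can exceed 1 when two predictors are highly correlated and pull in opposite directions:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 5000
x1 = rng.normal(size=n)
x2 = 0.95 * x1 + np.sqrt(1 - 0.95**2) * rng.normal(size=n)  # corr(x1, x2) ~ 0.95
y = 2.0 * x1 - 1.5 * x2 + 0.5 * rng.normal(size=n)          # opposite-signed effects

def zscore(v):
    return (v - v.mean()) / v.std()

X = sm.add_constant(np.column_stack([zscore(x1), zscore(x2)]))
print(sm.OLS(zscore(y), X).fit().params)  # both standardized slopes land well beyond +/-1
```

Nothing is mathematically wrong with such a fit, but it is exactly the pattern that warrants the multicollinearity check described above.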
The Beta weights can exceed a total value of 1.0 due to collinearity. I saw this in my data (Hunt et al. Role of central circulatory factors in the fat-free mass - maximal aerobic capacity relation across age. AJP - Heart, 1998). The important thing is that it made perfect sense given our data, and differentiated between the impact of central circulatory factors versus skeletal muscle mass. So the data may still be reasonable and interpretable.
The standardized Beta weights should not, in general, be greater than 1; the unstandardized weights (Bs) can be greater than 1, and this is reflected in the way we report the statistics in APA style (standardized Betas without a leading zero, unstandardized Bs with a leading zero).