If a beta coefficient is negative, the interpretation is that there is a negative relationship between the dependent variable and the corresponding independent variable, holding the other independent variables constant. If you are referring to the constant term, a negative value means that when all independent variables are zero, the predicted value of the dependent variable equals that negative value.
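A minimal sketch of both interpretations (made-up data; a Python/numpy workflow is assumed), fitting a line whose slope and intercept are both negative:

```python
import numpy as np

# Hypothetical data: y falls as x rises, and the fitted line
# crosses the y-axis below zero.
rng = np.random.default_rng(0)
x = rng.uniform(2, 10, size=100)
y = -1 - 0.8 * x + rng.normal(scale=0.5, size=100)

slope, intercept = np.polyfit(x, y, 1)
print(f"slope ~ {slope:.2f} (negative relationship with y)")
print(f"intercept ~ {intercept:.2f} (predicted y when x = 0)")
```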
I assume that your result was unexpected. That is, I assume that you had expected a positive correlation, so the negative sign was a surprise.
If this is part of a multiple regression, then various interactions between your 'independent' variables could be substantial. In particular, there could be large collinearity, which also inflates variance.
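If you want to check for this, the variance inflation factor (VIF) is a standard collinearity diagnostic. A minimal sketch (assuming a Python workflow with statsmodels, and made-up data):

```python
import numpy as np
import pandas as pd
from statsmodels.stats.outliers_influence import variance_inflation_factor
from statsmodels.tools.tools import add_constant

# Hypothetical predictors: x2 is nearly a copy of x1, so both
# should show very large VIFs.
rng = np.random.default_rng(0)
x1 = rng.normal(size=100)
x2 = x1 + rng.normal(scale=0.1, size=100)
X = add_constant(pd.DataFrame({"x1": x1, "x2": x2}))

for i, name in enumerate(X.columns):
    if name != "const":
        print(name, variance_inflation_factor(X.values, i))
```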
In any case, when you say your results are "significant," I assume you are referring to a p-value. P-values represent incomplete information and can be misleading: they depend on sample size, so ignoring effect size is a problem. The standard error of a coefficient is also sample size dependent, which is what drives this, and comparing it directly to the estimated coefficient may provide far better insight. (But it sounds like you are saying the estimated standard error is relatively small compared to the estimated coefficient, which would indicate a 'large' enough sample size.) However, remember that if you have more than one such regressor variable, interactions between them can make even that comparison somewhat unreliable.
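As a rough sketch (assuming a Python/statsmodels workflow, with made-up data), the coefficient-to-standard-error comparison is the t statistic that standard OLS output already reports:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
X = rng.normal(size=(105, 2))                      # two made-up predictors
y = 1.0 + 0.5 * X[:, 0] - 0.3 * X[:, 1] + rng.normal(size=105)

model = sm.OLS(y, sm.add_constant(X)).fit()
print(model.params / model.bse)   # coefficient / standard error
print(model.tvalues)              # identical: this ratio IS the t value
```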
If you do not think your sample size is too small, you have one linear regressor, and results are not as expected, then perhaps you need to rethink the subject matter and look for any mistake you could have made.
If you think your sample size is adequate, you have multiple linear regression, and one regressor has this apparent problem, then you may have substantial collinearity and need to revisit your variable selection.
Perhaps you need to consider nonlinear regression.
Perhaps you need to research the terms "model selection" and "model validation."
I agree with all the comments made above. Let me add a point here. The constant term is a garbage-in-garbage-out term that captures the mean effect of variables not included in the model. Therefore, one should not pay much attention to its significance. Statistically, one can undertake a qualified interpretation if it is possible for the RHS variables to assume a value of zero. However, we cannot say, because you did not tell us what your RHS variables are. In my line of research, the constant term is paid almost zero attention, given structural stability and cointegration.
Thank you all for answering me. Actually, I am working on the causes of turnover intention; TI is the dependent variable and there are eight IVs. The sample size is 105. The beta value of the constant is negative but has a significant p-value. The other IVs also have significant p-values, except one. The estimated standard error is also small compared to the estimated coefficient. Is it a problem to have a negative constant or not? I heard somewhere that it denotes a mistake.
By "beta weight of constant", do you refer to the intercept term? I'm asking because the regression equation has beta weights for the intercept (which is a constant), and for terms involving independent variables (and their interactions and polynomials, if included in the model).
A statistically significant intercept means that the intercept is not zero. A positive intercept is greater than zero, and a negative intercept is less than zero. If the data are standardized (converted to z-scores), then the intercept will be exactly zero.
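A minimal numeric check of that last point (made-up data; numpy assumed):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(loc=5, scale=2, size=200)
y = 3 - 1.5 * x + rng.normal(size=200)

# Convert both variables to z-scores, then refit.
zx = (x - x.mean()) / x.std()
zy = (y - y.mean()) / y.std()

slope, intercept = np.polyfit(zx, zy, 1)
print(intercept)  # ~0, up to floating-point error
```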
Assuming the estimation was done correctly, the interpretation by Ette Etuk implies that when the independent variable x is zero, the dependent variable y is negative. However, the question you may need to ask is: in real life, is it plausible for the dependent variable y under consideration to take a negative measured value? Cross-check your data: is y negative in any of the sample observations? If not, a failure of the estimation method used for such data might be considered.
How can it possibly be that a constant (intercept) value is GREATER than the maximum value of the dependent variable? That seems counterintuitive to me if "the constant is the value of y (DV) when all values of x (IVs) are equal to 0" (a.k.a. your reference categories). This has happened to me in OLS regression and it does not make sense.
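One likely explanation is extrapolation: if x = 0 lies well outside the observed range of your predictors and the slope is negative, the fitted line evaluated at x = 0 can sit above every observed y. A minimal sketch with made-up data:

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.uniform(10, 20, size=100)       # x never comes near 0
y = 50 - 2 * x + rng.normal(size=100)   # observed y roughly 10..30

slope, intercept = np.polyfit(x, y, 1)
print(f"intercept ~ {intercept:.1f}, max observed y = {y.max():.1f}")
# The intercept (~50) exceeds every observed y because x = 0 is far
# outside the data: the intercept here is an extrapolation, not a
# value the model ever has to match.
```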
Thanks for the useful explanation. However, can we still reach the same conclusion (i.e., that a negative beta value indicates a negative relationship between the dependent variable and the corresponding independent variable, holding the other independent variables constant) if all the other independent variables are NOT statistically significant (p > .05)?