In simple linear regression the equation of the model is
y = b0 + b1 * x + error.
b0 and b1 are the regression coefficients: b0 is called the intercept, and b1 is called the slope (the coefficient of the x variable).
Significance tests compare the above model with the following restricted models:
(i) y = 0 + b1 * x + error (intercept fixed at 0)
(ii) y = b0 + 0 * x + error (slope fixed at 0)
The significance test of the intercept thus compares the intercept to 0, i.e. it tests whether the regression line goes through the origin (x = 0, y = 0).
The test of the slope compares the slope to 0, i.e. it tests whether the regression line is horizontal. If the line is horizontal, then x has no (linear) influence on y.
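As an illustration, here is a minimal R sketch with made-up data (the data frame d and its values are purely illustrative, not from the question) showing that each test can be framed as a comparison between the full model and one of the restricted models:

```r
# Illustrative data, not from the question: 20 paired observations
set.seed(1)
d <- data.frame(x = 1:20)
d$y <- 3 + 0.5 * d$x + rnorm(20)

full      <- lm(y ~ x, data = d)      # y = b0 + b1*x + error
no_interc <- lm(y ~ x - 1, data = d)  # restriction b0 = 0 (line through the origin)
no_slope  <- lm(y ~ 1, data = d)      # restriction b1 = 0 (horizontal line)

anova(no_interc, full)  # tests the intercept against 0
anova(no_slope,  full)  # tests the slope against 0
```

For a single coefficient these F-tests give the same p-values as the t-tests reported by summary(full), since F = t^2 in that case.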
You can enter your data into a statistical package (like R, SPSS, JMP, etc.), run the regression, and among the results you will find the b coefficients and the corresponding p-values.
If you divide the estimate by its standard error you get a "t-value" that is known to be t-distributed if the expected value of the estimate is zero. The degrees of freedom for the t-distribution are N - K, with N the number of observations and K the number of coefficients in the model.
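A minimal R sketch of both steps, again with purely illustrative data: the regression output already contains the coefficients and p-values, and the slope's t-value and p-value can be reproduced "by hand" from the estimate, its standard error, and the N - K degrees of freedom:

```r
# Illustrative data, not from the question
set.seed(1)
d <- data.frame(x = 1:20)
d$y <- 3 + 0.5 * d$x + rnorm(20)

fit <- lm(y ~ x, data = d)
coef(summary(fit))   # columns: Estimate, Std. Error, t value, Pr(>|t|)

# Reproducing the slope's t-value and p-value "by hand"
est   <- coef(summary(fit))["x", "Estimate"]
se    <- coef(summary(fit))["x", "Std. Error"]
t_val <- est / se                       # estimate divided by its standard error
df    <- nrow(d) - length(coef(fit))    # N - K
2 * pt(-abs(t_val), df)                 # two-sided p-value, matches Pr(>|t|) above
```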
Take care to control the family-wise error rate (the chance of getting at least one false positive) if you are testing several coefficients (Tukey, Holm, Hochberg, Bonferroni, ...).
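In R, such adjustments can be applied with p.adjust; a brief sketch with hypothetical raw p-values is below (Tukey's method is specific to pairwise mean comparisons and is handled separately, e.g. via TukeyHSD after aov):

```r
# Hypothetical raw p-values for several tested coefficients
p_raw <- c(0.012, 0.049, 0.180, 0.003)

p.adjust(p_raw, method = "holm")        # Holm step-down procedure
p.adjust(p_raw, method = "hochberg")    # Hochberg step-up procedure
p.adjust(p_raw, method = "bonferroni")  # Bonferroni correction
```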
Since you are asking for such tests, I suppose that you are not using statistical software (it would do such tests almost automatically). In Excel, the easiest way to get the estimates together with their standard errors is to use the function LINEST. The t-values and the p-values have to be calculated "by hand"; TDIST can be used to get the p-value for a given t-value.
This is just a note about the practicalities of use and interpretation. Others have covered the mathematical part well (I'm an SPSS user myself). When there are two or more variables/factors/predictors in a regression analysis, one first needs to be aware of how the dependent variable looks on each one by itself. If the cases are not reasonably distributed across the levels of each v/f/p, regression won't work well. Also, regression analysis looks first for the greatest correlation with the dependent variable, then takes that out and looks at what variability is left. But if the next v/f/p has almost as high a relationship with the dependent variable, that relationship will disappear, and it is tempting to conclude that the first one is "most important and nothing else is", when both are almost equally important. Multivariate analysis of variance avoids that problem. If the fancier stat packages aren't available, I would just look at the individual v/f/p separately to make sure they aren't just interchangeable. If they are, the relationship between those two must then be explored.
To answer your question in an easy way (I am a layman in statistics but use it a lot in my research): the significance of a regression coefficient is just a number the software can provide you. It tells you whether the fitted coefficient is meaningful or not. If the p-value is below your chosen significance level (commonly 0.05), the coefficient is considered significant.
From my understanding, the significance of regression coefficients is assessed via both the p-value and the critical ratio (C.R.). To understand what the p-value measures, I will discuss the C.R. first.
Basically, the C.R. is an operationalised z-statistic (you can calculate it by dividing the unstandardised coefficient by its standard error) that tests the null hypothesis that the coefficient equals zero. The common way to test this z-statistic (the C.R.) and reject that null hypothesis is the same as in many probability tests, i.e. an absolute value greater than 1.96 based on an alpha level of 0.05. Associated with the z-statistic (C.R.) is the p-value, which indicates the probability of obtaining a C.R. at least as extreme as the observed one by chance alone (i.e. if the null hypothesis were true).
To evaluate whether the p-value is significant, the common approach is to compare it against a traditional alpha threshold such as 0.05 or 0.001. If the p-value is less than the chosen threshold, then the coefficient is considered significant.
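A small R sketch of this, with hypothetical numbers for the unstandardised coefficient and its standard error: the C.R. is the coefficient divided by its standard error, and the two-sided p-value follows from the standard normal distribution:

```r
# Hypothetical values for a single coefficient
b  <- 0.42   # unstandardised coefficient
se <- 0.15   # its standard error

cr <- b / se               # critical ratio (z-statistic)
p  <- 2 * pnorm(-abs(cr))  # two-sided p-value

cr  # 2.8    -> larger than 1.96, so the null hypothesis is rejected at alpha = 0.05
p   # ~0.005 -> below 0.05, i.e. significant
```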
Barbara wrote "... how the dependent variable looks ...", "... correlation with the dependent variable ..." and "... a relationship with the dependent variable ...", so the dependent variable was in the singular. She discussed the impact of several different predictors on this single dependent variable when each is considered alone (one by one), and noted that this may lead to the (unjustified) conclusion that one predictor is important whereas the other is not. Then she wrote that "Multivariate analysis of variance avoids that problem" - where I thought she meant that all the predictors are used together in one big model to explain the values of the one dependent variable (which should correctly be termed a "multivariable" analysis, not "multivariate", which would indicate that several dependent variables are fitted).
The mathematics behind the significance of a regression coefficient has been very well explained by all the former RGaters.
Let me add a brief comment regarding the intuition behind it.
When you run a regression Y = b0 + b1*X1 + b2*X2 + ..., you get an estimate of b1, which is more or less the estimate of the (linear) effect of X1 on Y (holding X2, etc., constant).
Being an estimate, you cannot be sure that your estimate of b1 is the true value of the effect of X1 on Y. So you need something more to know how accurate your estimate is. Together with the estimate, you get a p-value, say 0.34, 0.08 or 0.02. More or less, what these numbers tell you is that, in the first case, you could have gotten an estimate as large as b1 even if there were no causal effect of X1 upon Y (even if the true b1 were zero) with a probability of 0.34. In the second case, this probability drops to 8%, and in the third case, to 2%. So the lower the p-value, the more confident you can be that you have actually found an effect.
This has an implication for your second question, "how can we determine that the coefficient is significant?". You would need to decide on the highest p-value you would "tolerate". For instance, if you choose 5%, then all coefficients with p-values lower than 0.05 would be statistically significant.
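A short R sketch of that decision rule, using simulated data (the variable names and numbers are illustrative only): fit a regression with two predictors and flag which coefficients fall below the chosen 5% threshold:

```r
# Simulated example data (variable names are illustrative only)
set.seed(42)
n  <- 100
x1 <- rnorm(n)
x2 <- rnorm(n)
y  <- 1 + 0.8 * x1 + 0.05 * x2 + rnorm(n)

fit <- lm(y ~ x1 + x2)
res <- coef(summary(fit))

alpha <- 0.05  # the highest p-value you are willing to "tolerate"
data.frame(estimate    = res[, "Estimate"],
           p_value     = res[, "Pr(>|t|)"],
           significant = res[, "Pr(>|t|)"] < alpha)
```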