Basically, we are interested in degrees of freedom in designed experiments because they tell us which member of a class of distributions (e.g. which t or F distribution) to use for our various inferences. Best wishes, David
In a regression model, the degrees of freedom for the model refer to the number of independently estimated terms: p - 1, or p if the general (intercept) term b0 is included. In an experimental design, by contrast, the degrees of freedom depend on the number of factor levels and their interactions, along with the error d.f., and these are very important in experimental design.
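To make that split concrete, here is a minimal sketch, assuming statsmodels is available (the data are simulated just for illustration): with an intercept b0 and p predictors, the model uses p df and the residual keeps n - p - 1.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n, p = 30, 2                       # 30 observations, 2 predictors (hypothetical)
X = rng.normal(size=(n, p))
y = 1.0 + X @ np.array([0.5, -0.3]) + rng.normal(size=n)

X_with_const = sm.add_constant(X)  # adds the intercept column b0
fit = sm.OLS(y, X_with_const).fit()

print(fit.df_model)   # 2.0  -> p predictors (intercept not counted here)
print(fit.df_resid)   # 27.0 -> n - p - 1 = 30 - 2 - 1
```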
DF = the number of independent values or quantities that can be assigned to a statistical distribution; in other words, the number of values that are free to vary in a data set :)
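A toy illustration of "free to vary" (the numbers are hypothetical): once the sample mean is fixed, only n - 1 of the n values can be chosen freely, and the last one is forced by the constraint.

```python
import numpy as np

fixed_mean = 10.0
free_values = np.array([8.0, 12.0, 9.0, 13.0])   # n - 1 = 4 chosen freely
n = len(free_values) + 1

# The nth value is determined by the mean constraint:
last_value = n * fixed_mean - free_values.sum()
sample = np.append(free_values, last_value)

print(last_value)      # 8.0  -- not free to vary
print(sample.mean())   # 10.0 -- the constraint holds
```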
The more degrees of freedom you have for the mean square error term for a given factor, the greater your power to detect a treatment effect. In more complex experimental designs, such as split-plot or repeated-measures designs, you will have more than one error term.

Every time you add something to the model (e.g. a treatment factor, a blocking factor, a covariate) or relax your assumptions (e.g. run the model with heterogeneous variances), it will "cost" degrees of freedom to estimate that effect or parameter. Both the variation explained by the new effect or parameter you want to estimate, and the degrees of freedom you need to estimate it, will come out of the associated error term.

If the new term explains a lot of variation (a large sum of squares) for just a few degrees of freedom, then the df are well spent: the error mean square shrinks and your ability to detect a treatment effect increases. This is why block designs are powerful. If you add something unimportant or irrelevant (e.g. an unrelated covariate or ineffective blocks) that does not explain much variation (a small sum of squares), then your power to detect treatment effects will decrease. A rough sketch of the df side of this tradeoff follows.
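Here is a rough sketch using scipy's F distribution (the df values are hypothetical) of why error df matter: for a fixed treatment df, the critical F value shrinks as the error df grow, so smaller treatment effects become detectable.

```python
from scipy.stats import f

df_treatment = 3          # e.g. 4 treatment levels -> 3 df
for df_error in (5, 10, 20, 60):
    # 95th percentile of the F(df_treatment, df_error) distribution,
    # i.e. the critical value for a test at alpha = 0.05
    crit = f.ppf(0.95, df_treatment, df_error)
    print(f"error df = {df_error:2d} -> critical F = {crit:.2f}")

# error df =  5 -> critical F = 5.41
# error df = 10 -> critical F = 3.71
# error df = 20 -> critical F = 3.10
# error df = 60 -> critical F = 2.76
```

This is only half the story, of course: an effective blocking factor also shrinks the error sum of squares, which is why spending a few df on good blocks usually pays off.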