The degrees of freedom of a quantity equal the dimension of the linear space concerned. The dimensions of two linear subspaces whose intersection contains only the null element are additive, and so the corresponding degrees of freedom are additive.
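Stated as a worked equation, for subspaces U and V of a finite-dimensional space (notation introduced here only for illustration):

```latex
U \cap V = \{0\}
\quad\Longrightarrow\quad
\dim(U + V) = \dim U + \dim V .
```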
When n vectors of the type (xi, yi) are given and we fit a 1-dimensional equation of the type Y = a + bX, 1 degree of freedom is allotted to regression in the Analysis of Variance table. Since the total degrees of freedom are (n - 1), the degrees of freedom available for error variation are (n - 2). Similarly, when n vectors of the type (xi, yi, zi) are used to fit a 2-dimensional equation of the type Z = a + bX + cY, 2 degrees of freedom are allotted to regression, and so the degrees of freedom available for error variation are (n - 3).
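A minimal numerical sketch of this accounting, using numpy's least-squares routine (the data here are made up for illustration):

```python
import numpy as np

# Illustrative data: n points (x_i, y_i)
rng = np.random.default_rng(0)
n = 10
x = np.arange(n, dtype=float)
y = 2.0 + 0.5 * x + rng.normal(scale=1.0, size=n)

# Fit Y = a + bX by least squares
X = np.column_stack([np.ones(n), x])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
fitted = X @ coef

# ANOVA decomposition about the mean
ss_total = np.sum((y - y.mean()) ** 2)     # df = n - 1
ss_reg = np.sum((fitted - y.mean()) ** 2)  # df = 1
ss_err = np.sum((y - fitted) ** 2)         # df = n - 2

print(f"SS_total = {ss_total:.4f}  vs  SS_reg + SS_err = {ss_reg + ss_err:.4f}")
print(f"df: total {n - 1} = regression 1 + error {n - 2}")
```

The sums of squares add exactly because the regression and residual subspaces are orthogonal, which is the geometric content of the additivity of degrees of freedom described above.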
The error degrees of freedom in the first case are (n - 2) not because the model has 2 parameters but because degrees of freedom are additive: subtracting the 1 degree of freedom due to regression from the total (n - 1) leaves (n - 2) for error. Similarly, in the second example the error degrees of freedom are (n - 3) not because the model has 3 parameters but because the same subtraction gives (n - 3).
The number of parameters has nothing to do with it. If the model is 1-dimensional, the degree of freedom for regression is 1; if it is 2-dimensional, the degrees of freedom for regression are 2.
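The same sketch extended to the two-predictor case shows the regression degrees of freedom following the dimension of the fitted surface (again with made-up data):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 12
x = rng.uniform(0, 10, size=n)
y = rng.uniform(0, 10, size=n)
z = 1.0 + 0.4 * x - 0.7 * y + rng.normal(scale=1.0, size=n)

# Fit Z = a + bX + cY: a 2-dimensional regression surface
M = np.column_stack([np.ones(n), x, y])
coef, *_ = np.linalg.lstsq(M, z, rcond=None)
fitted = M @ coef

ss_total = np.sum((z - z.mean()) ** 2)     # df = n - 1
ss_reg = np.sum((fitted - z.mean()) ** 2)  # df = 2
ss_err = np.sum((z - fitted) ** 2)         # df = n - 3

print(f"df: total {n - 1} = regression 2 + error {n - 3}")
print(f"SS check: {ss_total:.4f} vs {ss_reg + ss_err:.4f}")
```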
However, when a non-linear model with k parameters is fitted, the analysis of variance shows the degrees of freedom allotted to error as (n - k). Can such an analysis be valid?
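For concreteness, the convention being questioned looks like this in practice. The sketch below fits a hypothetical non-linear model Y = a·exp(bX) with scipy's curve_fit and reports the customary (n - k) error degrees of freedom; the model and data are assumptions chosen purely for illustration:

```python
import numpy as np
from scipy.optimize import curve_fit

def model(x, a, b):
    # Hypothetical non-linear model with k = 2 parameters
    return a * np.exp(b * x)

rng = np.random.default_rng(2)
n = 15
x = np.linspace(0.0, 2.0, n)
y = model(x, 1.5, 0.8) + rng.normal(scale=0.1, size=n)

params, _ = curve_fit(model, x, y, p0=[1.0, 1.0])
k = len(params)
ss_err = np.sum((y - model(x, *params)) ** 2)

# Conventional allocation: error df shown as n - k
print(f"error df shown as n - k = {n - k}")
print(f"residual mean square = {ss_err / (n - k):.5f}")
```

Unlike the linear cases above, here there is no linear subspace of dimension k whose complement justifies the subtraction, which is precisely the point at issue.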