When I try to drop those terms, the R² value decreases. I am also left with many terms whose sum of squares and df values are 0, while the p-values and all other values fall within range.
In truth, the more variables you give a polynomial model, the better its R² will be. The R² would stay the same only if there were absolutely no slope to the change in that variable, and in practice that rarely happens: even a random variable will show some apparent change. That being said, these terms are not statistically significant, so they are either (1) relevant but the response does not change much with them, or (2) irrelevant. Most of those terms are usually lumped into the error sum of squares. So it is up to you whether to include them, but understand that the drop in model R² does not mean they are statistically significant in terms of your response.
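To make that concrete, here is a minimal sketch in Python (assuming NumPy is available; the data and variable names are made up for illustration, not from the original question) showing that adding a purely random predictor still nudges R² upward even though it is not significant:

import numpy as np

rng = np.random.default_rng(0)
n = 30
x = rng.uniform(0, 10, n)
y = 2.0 + 1.5 * x + rng.normal(0, 1.0, n)   # true response depends on x only
noise = rng.normal(0, 1.0, n)               # irrelevant random predictor

def r_squared(X, y):
    # Ordinary least squares fit, then R^2 = 1 - SS_res / SS_tot
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - (resid @ resid) / ((y - y.mean()) ** 2).sum()

X1 = np.column_stack([np.ones(n), x])          # intercept + x
X2 = np.column_stack([np.ones(n), x, noise])   # plus the random term

print("R^2 without the random term:", r_squared(X1, y))
print("R^2 with the random term:   ", r_squared(X2, y))  # never lower, usually slightly higher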
Prof. Tschopp gives you an excellent response. On the other hand, you wrote that when you include many terms in the model you obtain terms with sum of squares and df equal to 0? Be careful: if "df" means "degrees of freedom", you are doing something wrong. The residual degrees of freedom (in a univariate model) are the total number of observations minus the number of estimated parameters. If you have 0 degrees of freedom, it is not just a matter of the extra terms: this value applies to the whole set of model parameters, and statistical validation becomes impossible. Zero degrees of freedom means you have as many parameters as observations. In that case you must eliminate parameters or perform more experiments.
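A quick illustration of that check (the numbers are hypothetical, not from the poster's design):

n_observations = 20   # hypothetical number of experimental runs
n_parameters = 20     # intercept plus every polynomial term being estimated

residual_df = n_observations - n_parameters
print("Residual degrees of freedom:", residual_df)

if residual_df <= 0:
    # Saturated model: no error estimate, hence no valid p-values or F-tests
    print("Drop terms from the model or perform more experiments.")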
Another option is to calculate the prediction R². Classical R² = 1 − (residual sum of squares)/(total sum of squares). Prediction R² = 1 − PRESS/(total sum of squares): https://en.wikipedia.org/wiki/PRESS_statistic
This value can be negative even if the classical R² is close to 1. That indicates the model fits the observations closely but is unable to make good predictions.
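For anyone who wants to compute both quantities, here is a hedged sketch in Python (assuming NumPy; the overfitted polynomial example is made up) that uses the standard leave-one-out shortcut for ordinary least squares, PRESS = sum over i of (e_i / (1 − h_ii))², where h_ii are the hat-matrix leverages:

import numpy as np

def r2_and_prediction_r2(X, y):
    # Classical R^2 and prediction R^2 (via PRESS) for an OLS fit of y on X
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    ss_res = resid @ resid
    ss_tot = ((y - y.mean()) ** 2).sum()

    Q, _ = np.linalg.qr(X)          # hat matrix is Q Q', so leverages are row sums of Q^2
    h = (Q ** 2).sum(axis=1)
    press = ((resid / (1.0 - h)) ** 2).sum()

    return 1.0 - ss_res / ss_tot, 1.0 - press / ss_tot

# Made-up example: a deliberately overfitted polynomial (degree 8 on 12 points)
rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 12)
y = 1.0 + 2.0 * x + rng.normal(0.0, 0.3, x.size)
X = np.vander(x, 9, increasing=True)   # columns 1, x, x^2, ..., x^8

r2, pred_r2 = r2_and_prediction_r2(X, y)
print("classical  R^2:", round(r2, 3))
print("prediction R^2:", round(pred_r2, 3))   # typically much lower, possibly negative

The overfitted model tracks the observations almost exactly (high classical R²), but its prediction R² drops sharply because each left-out point is predicted poorly.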