When fixing the design of agricultural experiments such as CRD, RBD, LSD, SPD, etc., most scientists caution that the error degrees of freedom should be greater than 12. Can anybody tell me the reason behind fixing it at 12?
Here is a layman's explanation. Higher error df gives a more stable estimate of the error mean square (error mean square = error sum of squares / error df), and consequently higher precision. But it also costs more to run the experiment; lower error df means lower precision. Look up an F table (0.05): for a treatment df of 5 and error df of 1, the critical F is 230. Increase the error df to 2 and F becomes 19.3; increase it to 5 and F falls to 5.05. Thus increasing the error df reduces the critical F. This rate of reduction eases off at about 12, when F becomes 3.11. Further increases in error df are not compensated by the fall in F: the critical F for error df 20 is 2.71, and for 120 it is 2.29. Hence 12 is a kind of trade-off between cost and precision. There is no harm if your error df is less, but you will then need larger differences between treatments (a higher treatment sum of squares) to reject the null hypothesis, since the table values are higher at lower error df.
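The pattern described above can be checked numerically. This is a minimal sketch, assuming SciPy is available, that tabulates the upper 5% critical F values for a treatment df of 5 as the error df grows:

```python
# Sketch: reproduce the critical F values quoted above.
# f.ppf(0.95, dfn, dfd) gives the upper 5% point of the F distribution
# with numerator (treatment) df = dfn and denominator (error) df = dfd.
from scipy.stats import f

treatment_df = 5
for error_df in (1, 2, 5, 12, 20, 120):
    crit = f.ppf(0.95, treatment_df, error_df)
    print(f"error df = {error_df:3d}  ->  F(0.05; 5, {error_df}) = {crit:.2f}")
```

Running this shows the steep drop in the critical value up to about error df = 12, after which each extra degree of freedom buys very little.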
ANOVA usually tests at least 2 types and 2 conditions. Depending on the observation error of the measured variable, one determines the optimal number of replicates, which is seldom less than 4 because of spatial inhomogeneity. https://www.google.it/url?sa=t&rct=j&q=&esrc=s&source=web&cd=2&ved=0ahUKEwiFm9q31rvMAhXDtRQKHbjtDvgQFggkMAE&url=https%3A%2F%2Fwww.ndsu.edu%2Ffaculty%2Fhorsley%2FExptSize.pdf&usg=AFQjCNGM_q8qCd0sofwydMGTkKJT4DgERw&sig2=HzKQOMhsFuGGXK6WSSxf0Q&cad=rja
If the error degrees of freedom increase in the ANOVA, the error mean square becomes smaller because we divide the error sum of squares by a larger value, and as a result the F ratio for the treatment term becomes larger. That is why it is suggested to have more error degrees of freedom.
It shows that below 12 the critical F value increases rapidly. This means the power of the F test decreases significantly and it becomes unable to detect true differences.
Just think about the error df formula for a CRD:
df = t(r - 1)
where t is the number of treatments and r the number of replicates. This means you need more replications to reach 12 if you have fewer treatments.
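Inverting that formula gives the smallest r with t(r - 1) >= 12. A short sketch, using only the standard library (the treatment counts shown are illustrative):

```python
# Sketch: minimum replicates r so that the CRD error df t*(r - 1)
# reaches at least 12, for a given number of treatments t.
import math

def min_replicates(t, target_df=12):
    """Smallest integer r satisfying t*(r - 1) >= target_df."""
    return math.ceil(target_df / t) + 1

for t in (3, 4, 6, 12):
    r = min_replicates(t)
    print(f"t = {t:2d} treatments -> r >= {r} replicates (error df = {t * (r - 1)})")
```

For example, 3 treatments need 5 replicates, while 12 treatments need only 2, which is exactly the point made above: fewer treatments require more replication.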