My models have a squared time variable as a regressor. What heteroscedasticity test(s) are available in this situation? Why is heteroscedasticity more dangerous for nonlinear models than for linear models?
I do not think it is a good idea to "test" for something like heteroscedasticity, because (1) it is generally not a matter of "Do you have it or not?" but rather "How much do you have?", and (2) by "test" you probably mean hypothesis testing, and unless you do a power analysis or other sensitivity analysis, a single, isolated p-value is generally misleading, as conclusions often change simply with a change in sample size.
So, in place of such testing, I recommend studying residual plots and breaking the estimated residuals into two factors: a size-related factor and a random factor. This is very much like what can be done in linear regression; an example of such a study appears in the references below.
You may find the following reference to be helpful:
Carroll and Ruppert (1988), Transformation and Weighting in Regression, Chapman & Hall, Ltd., London, UK.
I think the best thing you can do is also the simplest: make some residual plots (graphs based on them are shown after the initial graphs in the first reference above, by the way). Scatterplots of the estimated residuals from your nonlinear regression would help you see what is happening, and would be a very good start to your analysis.
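As a minimal sketch (Python, with made-up fitted values and residuals standing in for whatever your own nonlinear regression produces), the two plots I have in mind are residuals against fitted values, and absolute residuals against fitted values to show the size-related factor:

```python
# Minimal sketch with made-up numbers: the fitted values and residuals here
# stand in for whatever your own nonlinear regression produces.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 200)
fitted = 2.0 + 0.5 * t + 0.3 * t**2               # placeholder predictions
resid = rng.normal(scale=0.1 * (1.0 + fitted))    # placeholder residuals whose spread grows with size

fig, axes = plt.subplots(1, 2, figsize=(10, 4))

# Residuals against fitted values: look for funnels or other patterns.
axes[0].scatter(fitted, resid, s=10)
axes[0].axhline(0.0, color="grey", lw=1)
axes[0].set(xlabel="fitted value", ylabel="residual", title="Residuals vs fitted")

# Absolute residuals against fitted values: the "size-related factor".
axes[1].scatter(fitted, np.abs(resid), s=10)
axes[1].set(xlabel="fitted value", ylabel="|residual|", title="Spread vs size")

plt.tight_layout()
plt.show()
```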
Cheers - Jim
Article Practical Interpretation of Hypothesis Tests - letter to the...
Article Weighting in Regression for Use in Survey Methodology
Conference Paper Alternative to the Iterated Reweighted Least Squares Method ...
First, about why the tests are needed. If you assume that the error term in your model has a constant variance, you should test that assumption. If heteroscedasticity is present and ignored, your estimates may be quite inefficient and you will not be able to draw valid conclusions about the uncertainty associated with the model.
The tests tell you whether this modelling assumption is seriously violated or not.
Secondly, about what tests are applicable.
Essentially, if you are using a classical regression model with normally and identically distributed errors, it does not matter whether or not a squared time variable appears among the regressors. You can use the same tests as with standard linear models.
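As an illustration (a minimal sketch in Python, with simulated data standing in for your series and statsmodels assumed to be available), the Breusch-Pagan test is applied to a fit containing t and t² exactly as it would be for any other linear model:

```python
# Sketch with simulated data: Breusch-Pagan test for a model with t and t^2.
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_breuschpagan

rng = np.random.default_rng(1)
t = np.linspace(1, 20, 100)
# Simulated response whose error spread grows with time, so the test has something to find.
y = 1.0 + 0.8 * t + 0.05 * t**2 + rng.normal(scale=0.5 + 0.1 * t, size=t.size)

X = sm.add_constant(np.column_stack([t, t**2]))   # design matrix: 1, t, t^2
res = sm.OLS(y, X).fit()

# Breusch-Pagan regresses the squared residuals on the explanatory variables.
lm_stat, lm_pval, f_stat, f_pval = het_breuschpagan(res.resid, res.model.exog)
print(f"Breusch-Pagan LM statistic = {lm_stat:.2f}, p-value = {lm_pval:.4f}")
```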
Thirdly, what to do if you do have heteroscedasticity.
If you include a squared time variable as a regressor, it is quite likely that your variance will increase with time. In that case a square-root or log transformation of the response may help. More generally, a Box-Cox transformation will often stabilise the variance, sometimes to the extent that you can apply a linear regression without the squared time term at all.
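As a rough sketch (Python again, with an artificial strictly positive response; the appropriate transformation should of course be judged from your own data), the Box-Cox transformation chooses the exponent by maximum likelihood:

```python
# Sketch: Box-Cox transformation of a strictly positive response whose
# spread grows with time. The data here are artificial.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
t = np.linspace(1, 20, 100)
y = np.exp(0.1 * t) * rng.lognormal(mean=0.0, sigma=0.3, size=t.size)

y_bc, lam = stats.boxcox(y)          # lambda chosen by maximum likelihood
print(f"estimated Box-Cox lambda: {lam:.2f}")
# A lambda near 0 corresponds to a log transform, near 0.5 to a square root;
# refit the regression on the transformed response and re-check the residual spread.
```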
Just to be clear: using squared time as a regressor does not make your model non-linear (and, conversely, having only time does not necessarily make it linear). Linearity is defined with respect to the parameters, assuming an additive error term.
So, if your model is Y = a + b*t + c*t² + epsilon, you can use all the classical tools of the linear model to check for variance heterogeneity. Besides graphical tools, the Breusch-Pagan test may be an option. Another option is to fit models with an explicit variance model (using weights, for instance) and compare them to the homoscedastic model. Note that if the variance is a function of time with fitted parameters, the model becomes non-linear, at least from the minimisation algorithm's point of view.
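To illustrate the last option (a sketch only, under the working assumption that the error standard deviation is proportional to t, which may or may not match your data), one can compare the homoscedastic fit with a weighted fit that carries that explicit variance model:

```python
# Sketch: homoscedastic fit vs a fit with an explicit variance model.
# Working assumption: sd(error) proportional to t, handled through WLS weights 1/t^2.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
t = np.linspace(1, 20, 100)
y = 1.0 + 0.8 * t + 0.05 * t**2 + rng.normal(scale=0.2 * t, size=t.size)

X = sm.add_constant(np.column_stack([t, t**2]))   # Y = a + b*t + c*t^2 + eps

ols_res = sm.OLS(y, X).fit()                      # homoscedastic fit
wls_res = sm.WLS(y, X, weights=1.0 / t**2).fit()  # variance assumed proportional to t^2

# Treating the weights as known, the two AICs can be compared.
print(f"OLS AIC: {ols_res.aic:.1f}   WLS AIC: {wls_res.aic:.1f}")
# If the weighted fit is clearly better and its standardised residuals show a stable
# spread, the explicit variance model is doing useful work. Estimating the variance
# exponent itself (e.g. sd ~ t^delta) would require a nonlinear likelihood
# maximisation, which is the non-linearity mentioned above.
```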