I am analyzing some time-series data. I wrote a script in R and used two methods from two different packages to calculate the Durbin-Watson (DW) statistic and its p-value. Surprisingly, for the same value of the DW statistic they give me significantly different p-values. Why, and which one is more trustworthy (I assume the one calculated with durbinWatsonTest)? Part of my code is below:

library(lmtest)  # provides dwtest()
library(car)     # provides durbinWatsonTest()

dwtest(model)

durbinWatsonTest(model)

The R output is the following (dwtest first, then durbinWatsonTest):

data:  model
DW = 1.8314, p-value = 0.1865
alternative hypothesis: true autocorrelation is greater than 0

lag Autocorrelation D-W Statistic p-value
  1     0.07658155      1.831371    0.348
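
For reference, here is a minimal reproducible version of the comparison; the airquality data and the Ozone ~ Temp fit are only hypothetical stand-ins for my actual model, and the alternative values come from the two functions' help pages:

library(lmtest)
library(car)

# hypothetical stand-in for my actual model; any lm fit shows the same pattern
model <- lm(Ozone ~ Temp, data = airquality)

dwtest(model)                                      # default alternative: "greater" (one-sided)
durbinWatsonTest(model)                            # default alternative: "two.sided"
durbinWatsonTest(model, alternative = "positive")  # one-sided, for a like-for-like comparison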

Furthermore, durbinWatsonTest from the car package seems to involve some randomness. For the same data (different from the data above), I executed the script from the terminal several times within a couple of seconds, and the output was as follows:

lag Autocorrelation D-W Statistic p-value
  1      0.1181864        1.7536    0.216

lag Autocorrelation D-W Statistic p-value
  1      0.1181864        1.7536    0.204

lag Autocorrelation D-W Statistic p-value
  1      0.1181864        1.7536    0.198

The p-value is different every time I execute the script.
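
If the p-value is computed by simulation, fixing the random seed should make it reproducible; a minimal sketch, assuming the simulate and reps arguments described on durbinWatsonTest's help page:

set.seed(42)                           # fix the RNG so repeated runs give the same p-value
durbinWatsonTest(model)                # simulate = TRUE and reps = 1000 are the defaults
durbinWatsonTest(model, reps = 10000)  # more replications should reduce run-to-run variation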

Any ideas why? Which method gives the correct p-values, dwtest or durbinWatsonTest?
