First, note that when you develop a linear regression, a Durbin–Watson statistic that is less than R-squared is a warning sign of a spurious regression. So you should test each variable's series for a unit root, and if you find that all the variables are non-stationary in levels but stationary after first or second differencing, that is, they are all integrated of the same order, then you can follow the remaining steps using some references in the same field.
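As a sketch of that unit-root check, here is a minimal Dickey–Fuller t-test (constant, no trend) written with NumPy only; the random-walk data are purely illustrative, and -2.86 is the standard 5% critical value for this specification. In practice you would use a packaged ADF test instead.

```python
# Sketch: basic Dickey-Fuller unit-root check (constant, no trend).
# Regress diff(y)_t on a constant and y_{t-1}; the t-statistic on the
# lagged level is compared with the 5% critical value of about -2.86.
import numpy as np

def df_tstat(y):
    """t-statistic on rho in: diff(y)_t = c + rho * y_{t-1} + e_t."""
    dy = np.diff(y)
    X = np.column_stack([np.ones(len(dy)), y[:-1]])
    beta, *_ = np.linalg.lstsq(X, dy, rcond=None)
    resid = dy - X @ beta
    s2 = resid @ resid / (len(dy) - 2)          # residual variance
    cov = s2 * np.linalg.inv(X.T @ X)           # coefficient covariance
    return beta[1] / np.sqrt(cov[1, 1])

rng = np.random.default_rng(42)
y = np.cumsum(rng.normal(size=300))             # random walk: has a unit root

print(df_tstat(y))          # usually above -2.86: cannot reject a unit root
print(df_tstat(np.diff(y))) # strongly negative: the difference is stationary
```

Repeating the test on the first (and, if needed, second) difference of every series is how you establish that they share the same order of integration.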
Thomas, thanks for your answer. Well, in the case of a panel study, having 100 observations is possible, but what about a time-series analysis of a particular country? It's very difficult to pool data over such a long duration.
On a second note, if I have 30 years of annual data (30 observations), which methodology do you suggest as an alternative to VECM analysis?
I am not sure that using 30 observations is a better solution than using 120 obtained with some type of interpolation (linear, at the least). Yes, there are many studies that deal with a small number of observations, but that does not automatically make them relevant. Asymptotically, a small number of observations leads to the problem of inconsistency. Finally, the problem of seasonal fluctuations exists in annual observations as well.
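As an illustration of the interpolation idea, assuming purely hypothetical annual values, linear interpolation with NumPy turns 30 annual points into roughly 120 quarterly ones (117 interior quarter points within the sample span). The usual caveat applies: interpolation manufactures no new information.

```python
# Sketch: expanding 30 annual observations to quarterly frequency by
# linear interpolation (illustrative values only).
import numpy as np

annual = np.linspace(100.0, 160.0, 30)           # 30 hypothetical annual values
t_annual = np.arange(30)                         # year index 0..29
t_quarterly = np.arange(0.0, 29.25, 0.25)        # quarter steps within the span
quarterly = np.interp(t_quarterly, t_annual, annual)

print(len(annual), "->", len(quarterly))         # 30 -> 117
```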
Time-series analysis usually means using past data points to forecast future data points. It seems reasonable to assume that data points that are too 'old' have practically no effect on (correlation with) the most recent data points, let alone the future data.
Data points that are strongly correlated with the newer ones can be used for making the forecast, while those weakly correlated with the newer ones (or not correlated at all) should not be included; otherwise, the forecast will likely be skewed.
Thus, using every available past data point, perhaps several years' worth of data, is not needed for a meaningful forecast; in fact, using a lot of past data points can be detrimental to producing a reasonably accurate one.
The maximum number of 'steps back into the past' at which older data points are still strongly enough correlated with the newer ones can be estimated using the autocorrelation function (ACF) of the time series. The ACF is a measure of the linear interdependence between data points separated by k time units (the time lag).
A possible correlation cut-off value is the lag at the first zero-crossing of the ACF. However, this metric will include quite a few data points that are only weakly correlated with the new data points, with correlation coefficients in the range 0–0.5. Sometimes the cut-off lag is instead defined in the literature as the smallest lag k for which ACF(k) < K, where K = e^-1 ≈ 0.37 or K = 0.5.
At the same time, it is usually accepted that a strong enough linear correlation starts at a coefficient of about 0.6 or greater. Therefore, it is assumed here that the number of past data points used for forecasting (the cut-off value of the time lag) corresponds to an ACF of about 0.6.
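The cut-off lags discussed above can be sketched on a synthetic AR(1) series (phi = 0.8, purely illustrative, with theoretical ACF(k) = 0.8^k), comparing the 0.6 threshold, the e^-1 threshold, and the first zero-crossing:

```python
# Sketch: sample ACF of an AR(1) series and three candidate cut-off lags.
import numpy as np

def acf(x, nlags):
    """Sample autocorrelation for lags 0..nlags."""
    x = x - x.mean()
    denom = x @ x
    return np.array([1.0] + [x[:-k] @ x[k:] / denom for k in range(1, nlags + 1)])

rng = np.random.default_rng(1)
e = rng.normal(size=2000)
y = np.empty_like(e)
y[0] = e[0]
for t in range(1, len(e)):
    y[t] = 0.8 * y[t - 1] + e[t]        # AR(1): theoretical ACF(k) = 0.8**k

r = acf(y, 100)
cut_06 = np.argmax(r < 0.6)             # first lag below 0.6 (theoretically lag 3)
cut_e  = np.argmax(r < np.exp(-1))      # first lag below e^-1 (theoretically lag 5)
cut_0  = np.argmax(r < 0.0)             # first zero-crossing, if any in the window

print(cut_06, cut_e, cut_0)
```

The stricter 0.6 threshold keeps noticeably fewer lags than the e^-1 rule or the zero-crossing, which is exactly the point made above about excluding weakly correlated past points.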
Some examples of the number of past data points required to make a forecast, for several time series, are in the attached file.