Thank you Fabrice for your response. However, I have seen researchers partially relying on autocorrelation functions for exploring stationarity/non-stationarity, and since autocorrelations decay hyperbolically in the LRD case, they conclude that their data is non-stationary.
The paper you provided the link to seems interesting; I will study it in detail. However, if you can share your insight into why long-range dependence does not affect stationarity, or lack thereof, that would help me even more.
the "slow decay in autocorrelation implies non-stationarity" rule of thumb makes sense for short-range memory time series only
a stationary long-range memory time series such as ARFIMA(0, d, 0) has a hyperbolically decaying autocorrelation (but is indeed stationary !)
.
distinguishing between long-range dependence and non-stationarity (to be specified, for instance non-stationarity in the mean) is not an easy topic (see the first reference above)
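That hyperbolic-but-stationary behaviour can be checked numerically. Below is a minimal sketch (Python, standard library only; the function name `arfima_acf` is mine) using the known closed form for the theoretical autocorrelation of a stationary ARFIMA(0, d, 0), which decays like k^(2d - 1), compared with the exponential decay of an AR(1):

```python
import math

def arfima_acf(d, k):
    """Theoretical autocorrelation of ARFIMA(0, d, 0) at lag k, for 0 < d < 1/2.

    rho(k) = Gamma(k + d) Gamma(1 - d) / (Gamma(k - d + 1) Gamma(d)),
    which behaves like k^(2d - 1) for large k (hyperbolic decay), yet
    still tends to 0, consistent with second-order stationarity.
    """
    return math.exp(math.lgamma(k + d) + math.lgamma(1.0 - d)
                    - math.lgamma(k - d + 1.0) - math.lgamma(d))

d = 0.3
for k in (1, 10, 100):
    # second column: AR(1) autocorrelation with phi = 0.9, for comparison
    print(k, round(arfima_acf(d, k), 4), round(0.9 ** k, 4))
```

At lag 1 this gives the textbook value d / (1 - d); by lag 100 the ARFIMA autocorrelation is still of visible size while the AR(1) one has essentially vanished.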
A time series with long-range dependence must exhibit a trend and/or seasonality, and therefore cannot be stationary. Stationarity entails the absence of trend and seasonality.
Thank you Ette for your response. But the papers provided by Fabrice say otherwise. The researchers in these papers are treating LRD processes and stationary processes as different entities, as Fabrice has said, and they are developing testing techniques for stationarity of LRD time series. After reading those papers, would you say that presence/absence of trend is not a decisive factor in the stationarity of a time series?
Hi, if you consider a linear model (i.e., one that can be expressed as an MA(infinity) model), for example an ARFIMA(p, d, q) in which the ARMA component is stationary and invertible, it is said to be LRD for d > 0 and is second-order stationary for |d| < 1/2. For d > 0 the coefficients in its Wold representation are not absolutely summable, but they are still square summable for |d| < 1/2.
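The "not absolutely summable but still square summable" distinction can be seen directly from the MA(infinity) coefficients psi_j = Gamma(j + d) / (Gamma(j + 1) Gamma(d)) of an ARFIMA(0, d, 0), which decay like j^(d - 1). A small sketch (Python, standard library; variable names are mine) tracking both partial sums:

```python
d = 0.3
psi, s_abs, s_sq = 1.0, 1.0, 1.0   # start from psi_0 = 1
checkpoints = {}
for j in range(1, 200001):
    psi *= (j - 1 + d) / j          # recursion for Gamma(j+d)/(Gamma(j+1) Gamma(d))
    s_abs += abs(psi)               # sum of |psi_j|: diverges (grows like j^d)
    s_sq += psi * psi               # sum of psi_j^2: converges for |d| < 1/2
    if j in (1000, 200000):
        checkpoints[j] = (s_abs, s_sq)
print(checkpoints)
```

The absolute partial sums keep growing without bound, while the squared partial sums are essentially flat long before j = 200000 — which is exactly why the process still has a finite variance and is second-order stationary.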
warning : what follows is just handwaving ; there are plenty of excellent books on this topic which make things rigorous
.
if you restrict yourself to ARIMA models and the Box-Jenkins approach, the situation looks like this (warning again, this is just handwaving) :
- persistence in the autocovariance (that is, no sign of the exponential fading a "nice" AR should exhibit) is a hint of non-stationarity
- the cure for that is to difference the time series so as to stationarize it : as we difference, we monitor the autocovariance and stop differencing when it seems to exhibit some "reasonably exponential" decay ; care must be taken not to "over-difference" as we would enter the realm of MA movements with correlated noise (a hint for that is the appearance of a strong negative autocorrelation at lag 1)
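The "strong negative autocorrelation at lag 1" signature of over-differencing can be reproduced in a few lines (a sketch, Python standard library only, helper name mine): differencing a series that is already white noise produces an MA(1) whose theoretical lag-1 autocorrelation is exactly -1/2.

```python
import random

random.seed(0)
x = [random.gauss(0.0, 1.0) for _ in range(200_000)]   # already stationary: white noise
dx = [b - a for a, b in zip(x, x[1:])]                  # "over-difference" it once

def acf1(y):
    """Sample autocorrelation at lag 1."""
    m = sum(y) / len(y)
    num = sum((y[t] - m) * (y[t + 1] - m) for t in range(len(y) - 1))
    den = sum((v - m) ** 2 for v in y)
    return num / den

print(round(acf1(x), 3))   # near 0: nothing to difference away
print(round(acf1(dx), 3))  # near -0.5: the over-differencing signature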
.
this all seems nice and dandy except that for some time series, differencing does not seem to work very well : say, if we do not difference, the time series looks non-stationary, and if we difference once, it looks over-differenced !
such time series exhibit long-range dependence and this is where the "fractional differencing" trick enters : it allows one to fine-tune the differencing so as to properly stationarize the time series, as you are no longer restricted to (1 - L)^n with n integer but have access to all (1 - L)^d with real d
of course you are no longer in the ARIMA class but in the ARFIMA class, and, depending on the value of d, you may have different behaviours for both persistence and stationarity
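The (1 - L)^d expansion is easy to work with in practice: its binomial coefficients obey a simple one-step recursion, so fractional differencing is just a long (here truncated at the series length) linear filter. A sketch under that convention (Python, standard library; function names are mine):

```python
def frac_diff_weights(d, n):
    """First n coefficients of (1 - L)^d: w_0 = 1, w_k = w_{k-1} * (k - 1 - d) / k."""
    w = [1.0]
    for k in range(1, n):
        w.append(w[-1] * (k - 1 - d) / k)
    return w

def frac_diff(x, d):
    """Apply a truncated fractional difference (1 - L)^d to the series x."""
    w = frac_diff_weights(d, len(x))
    return [sum(w[k] * x[t - k] for k in range(t + 1)) for t in range(len(x))]

print(frac_diff_weights(0.4, 5))          # slowly decaying filter for real d
print(frac_diff([1.0, 2.0, 3.0, 4.0], 1.0))  # d = 1 recovers ordinary differencing
```

Note that integer d collapses back to the ARIMA case: for d = 1 the weights are (1, -1, 0, 0, ...), i.e. plain first differencing.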
.
this is nicely summed up in the introduction of the attached paper (beware of the typos but Fig. 1 is ok)
I am not a statistician; I come from a signal processing/electronics engineering background, hence please bear with me :)
What if I applied a wavelet transform to my time series? Applying it transforms the data into a new time-frequency domain. Actually, the wavelet technique is based upon applying two filters to the data, a difference filter and a moving-average filter, hence two coefficient sets are obtained. I have already tried that, and the coefficient sets obtained via difference filtering indicate stationarity when tested by ADF, KPSS and plotting the ACF, while the sets obtained via the moving-average filter exhibit LRD and do not show stationarity. Do these results make statistical sense at all? :)
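For concreteness, the two filters described above correspond to one level of a Haar decomposition: a moving-average branch (the approximation coefficients) and a difference branch (the detail coefficients). A minimal sketch of that single level (Python, standard library; the function name is mine), including the perfect-reconstruction check:

```python
import math

def haar_level1(x):
    """One level of the Haar DWT: a moving-average branch (approximation)
    and a difference branch (detail). len(x) must be even."""
    s = math.sqrt(2.0)
    approx = [(x[2 * k] + x[2 * k + 1]) / s for k in range(len(x) // 2)]
    detail = [(x[2 * k] - x[2 * k + 1]) / s for k in range(len(x) // 2)]
    return approx, detail

a, d = haar_level1([4.0, 2.0, 5.0, 5.0])
print(a, d)   # approximation carries the local mean, detail the local change
```

The original samples are recovered as x[2k] = (a_k + d_k)/sqrt(2) and x[2k+1] = (a_k - d_k)/sqrt(2), so nothing is lost in the split — the two coefficient sets just separate slow (LRD-carrying) and fast (differenced) behaviour.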
i have not seen a wavelet for almost 20 years ... i am afraid i forgot most if not all i had understood (or believed i had understood !)
.
anyway, your results seem consistent with the theory if my memory does not fail me
the exact behaviour depends crucially on the number N of vanishing moments of the wavelet and the Hurst exponent H : if N > H + 1/2, the detail coefficients dj,k should exhibit short-range correlation only ; otherwise, they exhibit long-range behaviour
if your H is larger than 1/2, then it requires N = 2 at least to ensure short-range behaviour ; a Haar wavelet (which has N = 1 only) brings too little decorrelation and your dj,k may still exhibit long-range correlation
.
i have to check in my old bibliography on traffic modelling ... i remember a paper by Abry, Flandrin, Taqqu and Veitch on this topic
In econometrics, Kwiatkowski–Phillips–Schmidt–Shin (KPSS) tests are used for testing a null hypothesis that an observable time series is stationary around a deterministic trend (i.e. trend-stationary) against the alternative of a unit root.
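A bare-bones version of the level-stationarity KPSS statistic makes the null/alternative roles concrete. This is only a sketch (Python, standard library; it uses the crude lag-0 variance in the denominator, whereas the real test uses a Newey-West long-run variance estimator with a lag window, e.g. as in statsmodels' `kpss`):

```python
def kpss_level_stat(x):
    """Simplified KPSS statistic for level stationarity: partial sums of
    demeaned residuals, scaled by n^2 times the (lag-0) residual variance."""
    n = len(x)
    mean = sum(x) / n
    e = [v - mean for v in x]
    s, partial = 0.0, []
    for v in e:
        s += v
        partial.append(s)   # S_t = e_1 + ... + e_t
    sigma2 = sum(v * v for v in e) / n
    return sum(p * p for p in partial) / (n * n * sigma2)

level = [1.0, -1.0] * 50            # bounded fluctuation around a constant mean
trend = [float(t) for t in range(100)]
print(kpss_level_stat(level))       # tiny: consistent with the stationary null
print(kpss_level_stat(trend))       # large: rejects level stationarity
```

Under the null the statistic stays small; under a trend (or a unit root) the partial sums wander and the statistic blows up, which is why KPSS and ADF have their hypotheses the opposite way around.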