Long-range dependency

Long-range dependency (LRD), also called long memory or long-range persistence, is a phenomenon that may arise in the analysis of spatial or time series data. It relates to the rate of decay of statistical dependence between two points as the time interval or spatial distance between them increases, the key feature being that this dependence decays more slowly than exponentially, typically like a power of the separation. LRD is often related to self-similar processes or fields. It has been used in fields as diverse as internet traffic modelling, econometrics, hydrology, linguistics and the earth sciences. Different mathematical definitions of LRD are used for different contexts and purposes.[1][2][3][4][5][6]

Short-range dependence versus long-range dependence

One way of characterizing long-range and short-range dependent stationary processes is in terms of their autocovariance functions. For a short-range dependent process, the coupling between values at different times decreases rapidly as the time difference increases: either the autocovariance drops to zero after a certain time lag, or it eventually decays exponentially. In the case of LRD there is much stronger coupling: the autocovariance function decays like a power of the lag, and hence more slowly than any exponential.
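One common way of making this precise (used, for example, by Beran (1994), although the exact formulation varies between references) is through the asymptotic behavior of the autocovariance function γ(k) of a stationary process:

    \gamma(k) \sim c_{\gamma}\, k^{-\alpha}, \qquad 0 < \alpha < 1, \qquad k \to \infty,

so that the autocovariances are not absolutely summable (their sum over all lags diverges), whereas short-range dependence corresponds to an absolutely summable autocovariance function. In terms of the Hurst parameter introduced below, the exponent is α = 2 − 2H.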

A second way of characterizing long- and short-range dependence is in terms of the variance of the partial sums of consecutive values. For short-range dependence, this variance typically grows in proportion to the number of terms. For LRD, the variance of the partial sums increases more rapidly, typically like a power of the number of terms with exponent greater than 1. A way of examining this behavior uses the rescaled range. This aspect of long-range dependence is important in the design of dams on rivers for water resources, where the summations correspond to the total inflow to the dam over an extended period.[7]
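As an illustration, the following sketch in Python (assuming only NumPy; the function names and block sizes are illustrative choices, not taken from the references above) estimates the Hurst parameter by the classical rescaled-range method: the series is split into blocks, the range of the mean-adjusted partial sums within each block is divided by the block standard deviation, and H is read off as the slope of log(R/S) against log(block size).

    import numpy as np

    def rescaled_range(x):
        """R/S statistic of a single block of observations."""
        y = x - x.mean()            # deviations from the block mean
        z = np.cumsum(y)            # partial sums of the deviations
        r = z.max() - z.min()       # range of the partial sums
        s = x.std(ddof=0)           # standard deviation of the block
        return r / s

    def rs_hurst(x, block_sizes=(16, 32, 64, 128, 256)):
        """Estimate H as the slope of log(R/S) versus log(block size)."""
        avg_rs = []
        for n in block_sizes:
            blocks = [x[i:i + n] for i in range(0, len(x) - n + 1, n)]
            avg_rs.append(np.mean([rescaled_range(b) for b in blocks]))
        slope, _ = np.polyfit(np.log(block_sizes), np.log(avg_rs), 1)
        return slope

    # For independent noise the estimate should be close to 0.5.
    rng = np.random.default_rng(0)
    print(rs_hurst(rng.standard_normal(10_000)))

For an independent series the slope should be close to 0.5, while a slope markedly above 0.5 is one rough indication of long-range dependence; the simple R/S statistic is known to be biased in short samples, and more refined estimators exist.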

These two characterizations are mathematically related, but they are not the only ways to define LRD. When the autocovariance of the process does not exist (heavy-tailed distributions), other definitions of LRD are needed, and these are often framed with the help of self-similar processes.

The Hurst parameter H is a measure of the extent of long-range dependence in a time series (it has a different meaning in the context of self-similar processes). H takes values between 0 and 1. A value of 0.5 indicates the absence of long-range dependence.[8] The closer H is to 1, the greater the degree of persistence or long-range dependence. Values of H less than 0.5 correspond to anti-persistence, which, in contrast to LRD, indicates strong negative correlation between successive values, so that the process fluctuates violently around its mean.
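The role of H can be illustrated with fractional Gaussian noise, the stationary increment sequence of fractional Brownian motion, whose autocovariance function is available in closed form. The short Python sketch below (the helper name fgn_autocov is purely illustrative) evaluates it for an anti-persistent, an uncorrelated and a persistent choice of H.

    import numpy as np

    def fgn_autocov(k, H, sigma2=1.0):
        """Autocovariance of fractional Gaussian noise at integer lag k >= 0."""
        k = np.abs(k)
        return 0.5 * sigma2 * (np.abs(k + 1) ** (2 * H)
                               - 2 * np.abs(k) ** (2 * H)
                               + np.abs(k - 1) ** (2 * H))

    lags = np.arange(1, 6)
    for H in (0.3, 0.5, 0.8):
        print(H, np.round(fgn_autocov(lags, H), 4))
    # H = 0.3: negative, summable covariances (anti-persistence)
    # H = 0.5: zero covariance at all non-zero lags (no memory)
    # H = 0.8: positive covariances decaying roughly like k^(2H-2) (LRD)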

Relation to self-similar processes

Given a stationary LRD sequence, its partial sums, viewed as a process indexed by the number of terms and suitably rescaled, form asymptotically a self-similar process with stationary increments. Conversely, given a self-similar process with stationary increments and Hurst index H > 0.5, its increments (consecutive differences of the process) form a stationary LRD sequence. A similar correspondence holds when the sequence is short-range dependent, but in that case the self-similar process arising from the partial sums can only be Brownian motion (H = 0.5), whereas in the LRD case it is a self-similar process with H > 0.5, most typically fractional Brownian motion.
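Somewhat more explicitly (under suitable conditions on the sequence, for example Gaussianity or linearity; see Beran (1994) or Samorodnitsky (2007) for precise statements, which the following only paraphrases), the correspondence runs through the partial-sum process:

    S_n = \sum_{i=1}^{n} X_i, \qquad \frac{S_{\lfloor Nt \rfloor}}{N^{H} L(N)} \;\Longrightarrow\; B_H(t) \quad \text{as } N \to \infty,

where L is a slowly varying function and B_H is self-similar with stationary increments; in the Gaussian case B_H is fractional Brownian motion with Hurst index H > 0.5. In the other direction, the increments B_H(i) − B_H(i − 1) of fractional Brownian motion form fractional Gaussian noise, whose autocovariance decays like k^(2H−2) up to a constant factor, and which is therefore long-range dependent exactly when H > 0.5.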

Models

Among stochastic models used for long-range dependence, some popular ones are autoregressive fractionally integrated moving average (ARFIMA) models, which are defined for discrete-time processes, while continuous-time models typically start from fractional Brownian motion.
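As a minimal sketch of the discrete-time case (Python with NumPy; the function name, truncation length and seed handling are illustrative choices rather than any standard API), an ARFIMA(0, d, 0) series with memory parameter d = H − 0.5 can be approximated by applying a truncated MA(∞) expansion of the fractional difference filter to white noise:

    import numpy as np

    def arfima0d0(n, d, trunc=1000, rng=None):
        """Simulate ARFIMA(0, d, 0) by truncating the MA(infinity) expansion
        x_t = sum_j psi_j * e_{t-j}, psi_j = Gamma(j + d) / (Gamma(d) Gamma(j + 1))."""
        rng = np.random.default_rng(rng)
        j = np.arange(1, trunc)
        # psi_0 = 1, psi_j = psi_{j-1} * (j - 1 + d) / j  (recursion avoids overflow)
        psi = np.concatenate(([1.0], np.cumprod((j - 1 + d) / j)))
        e = rng.standard_normal(n + trunc)
        # each x_t is a weighted sum of the current and past innovations
        return np.convolve(e, psi, mode="valid")[:n]

    x = arfima0d0(10_000, d=0.3)    # d = 0.3 corresponds to H = 0.8
    print(x.shape, np.corrcoef(x[:-1], x[1:])[0, 1])   # positive lag-1 correlation

The truncation caps the memory at trunc lags, so this is only an approximation of a genuinely long-memory process; exact simulation methods, such as circulant embedding for fractional Gaussian noise, exist but are more involved.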

Notes

  1. Beran, Jan (1994). Statistics for Long-Memory Processes. CRC Press.
  2. Doukhan et al. (2003). Theory and Applications of Long-Range Dependence. Birkhäuser.
  3. Malamud, Bruce D.; Turcotte, Donald L. (1999). "Self-Affine Time Series: I. Generation and Analyses". Advances in Geophysics 40: 1–90. doi:10.1016/S0065-2687(08)60293-9.
  4. Samorodnitsky, Gennady (2007). Long range dependence. Foundations and Trends® in Stochastic Systems.
  5. Beran et al. (2013). Long memory processes: probabilistic properties and statistical methods. Springer.
  6. Witt, Annette; Malamud, Bruce D. (September 2013). "Quantification of Long-Range Persistence in Geophysical Time Series: Conventional and Benchmark-Based Improvement Techniques". Surveys in Geophysics (Springer) 34 (5): 541–651. doi:10.1007/s10712-012-9217-8.
  7. Hurst, H.E.; Black, R.P.; Simaika, Y.M. (1965). Long-term storage: an experimental study. Constable, London.
  8. Beran (1994), page 34.

References

  • Beran, J. (1994) Statistics for Long-Memory Processes, Chapman & Hall. ISBN 0-412-04901-5.

Further reading

  • Brockwell, A.E. (2006). "Likelihood-based analysis of a class of generalized long-memory time series models". Journal of Time Series Analysis 28: 386–407. doi:10.1111/j.1467-9892.2006.00515.x.
  • Granger, C. W. J.; Joyeux, R. (1980). "An introduction to long-memory time series models and fractional differencing". Journal of Time Series Analysis 1: 15–30. doi:10.1111/j.1467-9892.1980.tb00297.x.
  • Ledesma, S. and Liu, D. (2000) "Synthesis of fractional Gaussian noise using linear approximation for generating self-similar network traffic", Computer Communication Review, 30, 417.
  • Ledesma, S., Liu, D. and Hernandez, D. (2007) "Two Approximation Methods to Synthesize the Power Spectrum of Fractional Gaussian Noise", Computational Statistics and Data Analysis Journal, 52 (2), 1047–1062.
  • Witt, A. and Malamud, B. D. (2013) "Quantification of long-range persistence in geophysical time series: Conventional and benchmark-based improvement techniques", Surveys in Geophysics, 34 (5), 541–651. Available online at: http://link.springer.com/article/10.1007/s10712-012-9217-8