Volterra Series

In mathematics, a Volterra series denotes a functional expansion of a dynamic, nonlinear, time-invariant functional. Volterra series are frequently used in system identification. The series was developed by Vito Volterra, beginning in 1887, in analogy to the Taylor series for functions. It has been applied in medicine (biomedical engineering) and biology, especially neuroscience. Its main advantage lies in its generality, allowing the representation of a wide range of systems; it is therefore sometimes referred to as a non-parametric model.

History

Mathematical Theory

The theory of Volterra series can be viewed from two different perspectives: either one considers an operator mapping between two real or complex function spaces, or a functional mapping from a real (complex) function space into the real (complex) numbers. The latter, functional, perspective is the more frequently used, due to the assumed time-invariance of the system.

Continuous Time
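
In continuous time, the expansion is usually written (in generic notation, with input x(t), output y(t) and kernels h_n; these symbols are introduced here for illustration rather than taken from elsewhere in this article) as a sum of multidimensional convolutions of the input with the Volterra kernels:

    y(t) = h_0 + \sum_{n=1}^{N} \int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} h_n(\tau_1, \ldots, \tau_n)\, x(t - \tau_1) \cdots x(t - \tau_n)\, d\tau_1 \cdots d\tau_n

Here h_0 is a constant, h_n is the n-th order Volterra kernel and N is the (possibly infinite) order of the expansion; causality corresponds to kernels that vanish whenever some \tau_i < 0, and finite memory to kernels that vanish outside a bounded interval.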

Discrete Time
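
In discrete time, the corresponding expansion (again in generic notation, with sampled input x(n) and output y(n)) replaces the integrals by sums over the discrete lags:

    y(n) = h_0 + \sum_{p=1}^{P} \sum_{\tau_1 = 0}^{M-1} \cdots \sum_{\tau_p = 0}^{M-1} h_p(\tau_1, \ldots, \tau_p)\, x(n - \tau_1) \cdots x(n - \tau_p)

where P is the order and M the memory length of the model. This is the form assumed by the estimation methods below, in which the kernel values h_p(\tau_1, \ldots, \tau_p) are the coefficients to be identified.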

Let F be a continuous functional that is time-invariant and has finite memory. Then Fréchet's theorem states that this system can be approximated uniformly and to an arbitrary degree of precision by a Volterra series of sufficiently high but finite order. The input set over which this approximation holds encompasses all equicontinuous, uniformly bounded functions. In any physically realizable setting, this constraint on the input set should always hold.
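
Stated more explicitly (a paraphrase in the notation introduced above, with the functional evaluated at a reference time): for every \varepsilon > 0 there exist a finite order N and kernels h_0, h_1, \ldots, h_N such that

    \sup_{x \in K} \left| F[x] - h_0 - \sum_{n=1}^{N} \int \cdots \int h_n(\tau_1, \ldots, \tau_n)\, x(-\tau_1) \cdots x(-\tau_n)\, d\tau_1 \cdots d\tau_n \right| < \varepsilon,

where K is the given set of equicontinuous, uniformly bounded input functions.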

Methods to Estimate the Kernel Coefficients

Estimating the Volterra coefficients individually is complicated, since the basis functionals of the Volterra series (i.e. x^k, k = 1, ..., N) are correlated. This leads to the problem of simultaneously solving a set of integral equations for the coefficients. Hence, estimation of Volterra coefficients is generally performed by estimating the coefficients of an orthogonalized series, e.g. the Wiener series, and then recomputing the coefficients of the original Volterra series. The Volterra series' main appeal over the orthogonalized series lies in its intuitive, canonical structure, i.e. all interactions of the input have one fixed degree. The orthogonalized basis functionals will generally be quite complicated.

An important aspect with respect to which the following methods differ is whether the orthogonalization of the basis functionals is performed over the idealized specification of the input signal (e.g. Gaussian white noise) or over the actual realization of the input (i.e. the pseudo-random, bounded, almost-white version of Gaussian white noise, or any other stimulus). The latter methods, despite their lack of mathematical elegance, have been shown to be more flexible (since arbitrary inputs can easily be accommodated) and more precise (since the idealized version of the input signal cannot always be realized).

Cross-correlation Method

This method, developed by Lee and Schetzen, orthogonalizes with respect to the idealized mathematical description of the signal, i.e. the projection onto the new basis functionals is based on knowledge of the moments of the random signal.
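
For discrete data and Gaussian white-noise input, the Lee-Schetzen estimates of the lowest-order Wiener kernels reduce to sample cross-correlations. The following Python sketch illustrates this idea (the function name and the simple treatment of the diagonal of the second-order kernel are choices made here, not part of the published method):

    import numpy as np

    def lee_schetzen_kernels(x, y, memory):
        # Zeroth-, first- and second-order Wiener kernel estimates by cross-correlation.
        # Assumes x is (approximately) zero-mean white Gaussian noise and y is the
        # measured output; `memory` is the assumed memory length in samples.
        x = np.asarray(x, dtype=float)
        y = np.asarray(y, dtype=float)
        A = x.var()                        # input power level (variance per sample)
        k0 = y.mean()                      # zeroth-order kernel: mean output
        n = np.arange(memory, len(x))      # skip samples without a full input history

        # First-order kernel: k1(tau) = E[y(n) x(n - tau)] / A
        k1 = np.array([np.mean(y[n] * x[n - tau]) for tau in range(memory)]) / A

        # Second-order kernel: k2(t1, t2) = E[(y(n) - k0) x(n - t1) x(n - t2)] / (2 A^2);
        # the diagonal t1 == t2 is biased in this simple estimator and is usually
        # corrected or excluded.
        yc = y[n] - k0
        k2 = np.zeros((memory, memory))
        for t1 in range(memory):
            for t2 in range(t1, memory):
                val = np.mean(yc * x[n - t1] * x[n - t2]) / (2.0 * A ** 2)
                k2[t1, t2] = k2[t2, t1] = val
        return k0, k1, k2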

Exact Orthogonal Algorithm

This method and its more efficient version (the Fast Orthogonal Algorithm) were invented by Korenberg. In this method the orthogonalization is performed empirically, over the actual input. It has been shown to perform more precisely than the cross-correlation method. Other advantages are that arbitrary inputs can be used for the orthogonalization and that fewer data points suffice to reach a desired level of accuracy. Also, estimation can be performed incrementally until some criterion is fulfilled.
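
Korenberg's published algorithm performs the orthogonalization with an efficient recursive scheme; the Python sketch below only illustrates the underlying idea of orthogonalizing the Volterra regressors over the actual recorded input, using a batch QR decomposition as a stand-in for the original recursion (volterra_design_matrix and orthogonal_fit are names chosen here for illustration):

    import numpy as np
    from itertools import combinations_with_replacement

    def volterra_design_matrix(x, memory, order):
        # Columns are the Volterra basis functionals evaluated on the recorded input:
        # a constant, the lagged inputs, products of two lagged inputs, and so on.
        x = np.asarray(x, dtype=float)
        rows = np.arange(memory, len(x))
        lags = np.column_stack([x[rows - tau] for tau in range(memory)])
        cols = [np.ones(len(rows))]
        for p in range(1, order + 1):
            for combo in combinations_with_replacement(range(memory), p):
                cols.append(np.prod(lags[:, list(combo)], axis=1))
        return np.column_stack(cols), rows

    def orthogonal_fit(x, y, memory=10, order=2):
        # Orthogonalize the regressors over the actual input record (here via QR),
        # fit the output in the orthogonal basis, then map back to Volterra coefficients.
        X, rows = volterra_design_matrix(x, memory, order)
        Q, R = np.linalg.qr(X)                        # empirical orthogonalization
        g = Q.T @ np.asarray(y, dtype=float)[rows]    # coefficients in orthogonal basis
        return np.linalg.solve(R, g)                  # recompute Volterra coefficients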

Linear Regression

Linear regression is a standard tool from linear analysis. Hence, one of its main advantages is the widespread availability of standard tools for solving linear regressions efficiently. It has some educational value, since it highlights the basic property of the Volterra series: a linear combination of nonlinear basis functionals. For estimation, the order of the original system should be known, since the Volterra basis functionals are not orthogonal and estimation can thus not be performed incrementally.
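
Using the hypothetical volterra_design_matrix helper from the previous sketch, the whole estimation problem becomes a single ordinary least-squares fit, with the model order and memory length fixed in advance:

    import numpy as np

    def volterra_by_regression(x, y, memory=10, order=2):
        # Fit all Volterra coefficients jointly as one linear least-squares problem;
        # the basis functionals are nonlinear in x, but the model is linear in the
        # coefficients, which is exactly what linear regression exploits.
        X, rows = volterra_design_matrix(x, memory, order)
        h, *_ = np.linalg.lstsq(X, np.asarray(y, dtype=float)[rows], rcond=None)
        return h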

Kernel Method

This method was invented by Franz and Schölkopf and is based on statistical learning theory. Consequently, this approach is also based on minimizing the empirical error (often called empirical risk minimization). Franz and Schölkopf proposed that the kernel method could essentially replace the Volterra series representation, while noting that the latter is more intuitive.
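
A minimal sketch of this idea, assuming kernel ridge regression with an inhomogeneous polynomial kernel as the learning machine (a simplification; the published formulation is more general): a degree-p polynomial kernel on windows of past inputs implicitly spans all Volterra monomials up to order p, so the fitted function represents a Volterra model whose coefficients are never written out explicitly. The function names below are chosen here for illustration.

    import numpy as np

    def delay_embed(x, memory):
        # Rows are sliding windows holding the last `memory` input samples.
        x = np.asarray(x, dtype=float)
        rows = np.arange(memory, len(x))
        return np.column_stack([x[rows - tau] for tau in range(memory)]), rows

    def kernel_volterra_fit(x, y, memory=10, degree=2, ridge=1e-3):
        # Kernel ridge regression: minimize the empirical squared error plus a
        # ridge penalty, expressed entirely through the polynomial Gram matrix.
        X, rows = delay_embed(x, memory)
        t = np.asarray(y, dtype=float)[rows]
        K = (1.0 + X @ X.T) ** degree                          # polynomial kernel matrix
        alpha = np.linalg.solve(K + ridge * np.eye(len(K)), t) # regularized fit

        def predict(x_new):
            Xn, rows_n = delay_embed(x_new, memory)
            return ((1.0 + Xn @ X.T) ** degree) @ alpha, rows_n
        return predict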

Differential Sampling

This method was developed by van Hemmen and coworkers and uses Dirac delta functions to sample the Volterra coefficients.
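
The general idea can be illustrated directly from the expansion for a system whose series stops at second order, with a symmetric second-order kernel (a generic calculation, not a description of van Hemmen's specific procedure). Probing with the two-impulse input x(t) = a\,\delta(t - t_1) + b\,\delta(t - t_2) gives

    y_{12}(t) = h_0 + a\,h_1(t - t_1) + b\,h_1(t - t_2) + a^2 h_2(t - t_1, t - t_1) + 2ab\,h_2(t - t_1, t - t_2) + b^2 h_2(t - t_2, t - t_2),

so subtracting the two single-impulse responses y_1(t) and y_2(t) isolates a single off-diagonal kernel value:

    y_{12}(t) - y_1(t) - y_2(t) + h_0 = 2ab\, h_2(t - t_1, t - t_2).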

Applications

See also

  • Wiener Series
  • Fast Orthogonal Algorithm


References

  • Michael Schetzen, The Volterra and Wiener Theories of Nonlinear Systems (1980).
  • Michael J. Korenberg, Ian W. Hunter, The Identification of Nonlinear Biological Systems: Volterra Kernel Approaches, Annals of Biomedical Engineering (1996), Volume 24.