Volterra series
From Wikipedia, the free encyclopedia
In mathematics, a Volterra series denotes a functional expansion of a dynamic, nonlinear, time-invariant system. Volterra series are frequently used in system identification.
The series was developed by Vito Volterra, beginning in 1887, in analogy to the Taylor series for functions. It has been applied in medicine (biomedical engineering) and in biology, especially neuroscience. Its main advantage lies in its generality: it can represent a wide range of systems. It is therefore sometimes referred to as a non-parametric model.
Mathematical theory
The theory of Volterra series can be viewed from two different perspectives: either one considers an operator mapping between two real or complex function spaces, or a functional mapping from a real (complex) function space into the real (complex) numbers. The latter, functional perspective is used more frequently, owing to the assumed time-invariance of the system.
Continuous time
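In continuous time, a causal, time-invariant, nonlinear system with input x(t) and output y(t) is represented, up to order N, by the expansion

```latex
y(t) = h_0 + \sum_{n=1}^{N} \int_{0}^{\infty} \cdots \int_{0}^{\infty}
  h_n(\tau_1,\dots,\tau_n)\, x(t-\tau_1)\cdots x(t-\tau_n)\, d\tau_1 \cdots d\tau_n
```

The functions h_n are the Volterra kernels of the system; h_1 is the impulse response of its linear part.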
Discrete time
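In discrete time, the analogous expansion for a system with finite memory over the lags 0, ..., M reads

```latex
y(n) = h_0 + \sum_{p=1}^{P} \sum_{\tau_1=0}^{M} \cdots \sum_{\tau_p=0}^{M}
  h_p(\tau_1,\dots,\tau_p)\, x(n-\tau_1)\cdots x(n-\tau_p)
```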
Let F be a continuous functional that is time-invariant and has finite memory. Fréchet's theorem then states that this system can be approximated uniformly, to an arbitrary degree of precision, by a Volterra series of sufficiently high but finite order. The input set over which this approximation holds encompasses all equicontinuous, uniformly bounded functions. In physically realizable settings this constraint on the input set should always hold.
Methods to estimate the kernel coefficients
Estimating the Volterra coefficients individually is complicated, since the basis functionals of the Volterra series (i.e. x^k, k = 1, ..., N) are correlated. This leads to the problem of simultaneously solving a set of integral equations for the coefficients. Hence, estimation of Volterra coefficients is generally performed by estimating the coefficients of an orthogonalized series, e.g. the Wiener series, and then recomputing the coefficients of the original Volterra series. The Volterra series' main appeal over the orthogonalized series lies in its intuitive, canonical structure, i.e. all interactions of the input have one fixed degree. The orthogonalized basis functionals will generally be quite complicated.
An important aspect, with respect to which the following methods differ, is whether the orthogonalization of the basis functionals is performed over the idealized specification of the input signal (e.g. Gaussian white noise) or over the actual realization of the input (i.e. the pseudo-random, bounded, almost-white version of Gaussian white noise, or any other stimulus). The latter methods, despite their lack of mathematical elegance, have been shown to be more flexible (as arbitrary inputs can be easily accommodated) and more precise (because the idealized version of the input signal is not always realizable).
Crosscorrelation method
This method, developed by Lee and Schetzen, orthogonalizes with respect to the mathematical description of the signal, i.e. the projection onto the new basis functionals is based on knowledge of the moments of the random signal.
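As a minimal illustration (a sketch of the first-order case only, not Lee and Schetzen's full procedure): for a Gaussian white-noise input of power A, the first-order kernel can be estimated as h_1(tau) ≈ E[y(n) x(n − tau)] / A. The system being identified below is hypothetical, chosen so the true kernel is known.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical first-order (linear) kernel to recover.
h1_true = np.array([1.0, 0.5, 0.25, 0.125])
M = len(h1_true)

# Gaussian white-noise input with power (variance) A.
N, A = 200_000, 1.0
x = rng.normal(0.0, np.sqrt(A), N)

# System output: convolution of the input with the first-order kernel.
y = np.convolve(x, h1_true)[:N]

# Crosscorrelation estimate: h1(tau) ~ E[y(n) x(n - tau)] / A.
h1_est = np.array(
    [np.mean(y[M:] * x[M - tau : N - tau]) for tau in range(M)]
) / A
```

With enough data the estimate converges to the true kernel; higher-order kernels are estimated analogously from higher-order crosscorrelations.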
Exact orthogonal algorithm
This method and its more efficient version (the fast orthogonal algorithm) were invented by Korenberg. Here the orthogonalization is performed empirically, over the actual input. It has been shown to perform more precisely than the crosscorrelation method. Further advantages are that arbitrary inputs can be used for the orthogonalization and that fewer data points suffice to reach a desired level of accuracy. Also, estimation can be performed incrementally until some criterion is fulfilled.
Linear regression
Linear regression is a standard tool of linear analysis. Hence one of its main advantages is the widespread availability of standard tools for solving linear regressions efficiently. It also has some educational value, since it highlights the basic property of the Volterra series: a linear combination of non-linear basis functionals. For estimation, the order of the original system should be known, since the Volterra basis functionals are not orthogonal and estimation can therefore not be performed incrementally.
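A minimal sketch of this approach for a second-order, memory-3 discrete Volterra model (the coefficients below are made up for illustration): the basis functionals — a constant, the lagged inputs, and all their pairwise products — form the columns of a design matrix, and the kernel coefficients are then obtained by ordinary least squares.

```python
import numpy as np
from itertools import combinations_with_replacement

rng = np.random.default_rng(1)

def volterra_design(x, M):
    """Design matrix of second-order Volterra basis functionals, memory M."""
    N = len(x)
    # Lagged copies: column i holds x(n - i) for n = M-1, ..., N-1.
    lagged = np.stack([x[M - 1 - i : N - i] for i in range(M)], axis=1)
    cols = [np.ones(N - M + 1)]                       # constant term h0
    cols += [lagged[:, i] for i in range(M)]          # first-order terms
    cols += [lagged[:, i] * lagged[:, j]              # second-order terms
             for i, j in combinations_with_replacement(range(M), 2)]
    return np.column_stack(cols)

M = 3
# Hypothetical ground truth: h0, three h1 taps, six h2 entries (i <= j).
theta_true = np.array([0.3, 1.0, 0.5, -0.2, 0.1, 0.0, 0.05, 0.0, 0.0, 0.02])

x = rng.normal(size=5000)
Phi = volterra_design(x, M)
y = Phi @ theta_true + 0.01 * rng.normal(size=len(Phi))

# Ordinary least squares recovers all kernel coefficients in one shot.
theta_est, *_ = np.linalg.lstsq(Phi, y, rcond=None)
```

Note that the design matrix grows combinatorially with order and memory, which is the practical limit of this direct approach.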
Kernel method
This method was invented by Franz & Schölkopf and is based on statistical learning theory. Consequently, this approach is also based on minimizing the empirical error (often called empirical risk minimization). Franz and Schölkopf proposed that the kernel method could essentially replace the Volterra series representation, although noting that the latter is more intuitive.
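The idea can be illustrated with kernel ridge regression and an inhomogeneous polynomial kernel (a generic sketch in the spirit of the method, not Franz and Schölkopf's exact algorithm): a degree-d polynomial kernel on vectors of lagged inputs implicitly spans every Volterra monomial up to order d, so the regression never forms the combinatorially large set of basis functionals explicitly.

```python
import numpy as np

rng = np.random.default_rng(2)

def lag_matrix(x, M):
    """Rows are the lag vectors (x(n), x(n-1), ..., x(n-M+1))."""
    N = len(x)
    return np.stack([x[M - 1 - i : N - i] for i in range(M)], axis=1)

M, d, lam = 3, 2, 1e-6
x = rng.normal(size=400)
X = lag_matrix(x, M)

# Hypothetical target: a simple second-order Volterra response.
y = X[:, 0] + 0.2 * X[:, 0] * X[:, 1]

# Inhomogeneous polynomial kernel of degree d: its feature space contains
# all Volterra monomials in the lagged inputs up to order d.
K = (1.0 + X @ X.T) ** d

# Kernel ridge regression: solve (K + lam*I) alpha = y.
alpha = np.linalg.solve(K + lam * np.eye(len(K)), y)
y_hat = K @ alpha
```

The fitted function is expressed through the weights alpha on the training points; explicit Volterra kernels can afterwards be read off from the polynomial expansion of the kernel if desired.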
Differential sampling
This method was developed by van Hemmen and coworkers and utilizes Dirac delta functions to sample the Volterra coefficients.
See also
- Wiener series
- Fast orthogonal algorithm
References
- Schetzen, Michael (1980). The Volterra and Wiener Theories of Nonlinear Systems.
- Korenberg, Michael J.; Hunter, Ian W. (1996). "The Identification of Nonlinear Biological Systems: Volterra Kernel Approaches". Annals of Biomedical Engineering, Volume 24.