Statistical signal processing
Statistical signal processing is an area of signal processing that deals with signals and their statistical properties, such as their mean and covariance. It is studied primarily within electrical and computer engineering, although important applications exist in almost all scientific fields.
Statistical signal processing is founded on the principle that signals are not deterministic functions. Rather, signals are modeled as functions consisting of both deterministic and stochastic components. A simple example, and a common model of many statistical systems, is a signal x(t) consisting of a deterministic part s(t) with added zero-mean Gaussian noise w(t): x(t) = s(t) + w(t). Given information about a statistical system and the random variable from which it is derived, we can increase our knowledge of the output signal; conversely, given the statistical properties of the output signal, we can infer the properties of the underlying random variable.
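As a minimal sketch of this additive-noise model (the sinusoid, noise level, and trial count below are illustrative assumptions, not taken from the article), the following Python example generates observations x(t) = s(t) + w(t) with zero-mean Gaussian noise w(t) and recovers an estimate of s(t) by averaging independent realizations; averaging N independent realizations reduces the noise variance by a factor of N.

```python
import numpy as np

# Deterministic component s(t): a 5 Hz sinusoid (an assumed, illustrative choice).
t = np.linspace(0.0, 1.0, 1000)
s = np.sin(2 * np.pi * 5 * t)

rng = np.random.default_rng(seed=0)
sigma = 0.5      # standard deviation of the Gaussian noise w(t) (assumed)
n_trials = 200   # number of independent noisy observations (assumed)

# Each observation is x(t) = s(t) + w(t), with w(t) ~ N(0, sigma^2).
x = s + rng.normal(0.0, sigma, size=(n_trials, t.size))

# Averaging independent realizations is a simple estimator of s(t):
# the noise variance of the average falls as sigma^2 / n_trials.
s_hat = x.mean(axis=0)

print(f"noise std per sample: {sigma:.3f}")
print(f"empirical rms error : {np.sqrt(np.mean((s_hat - s) ** 2)):.3f}")
print(f"predicted rms error : {sigma / np.sqrt(n_trials):.3f}")
```

The printed empirical error should closely match the predicted sigma / sqrt(n_trials), illustrating how statistical knowledge of the noise lets us quantify the quality of an estimate of the deterministic component.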
These statistical techniques are developed in the fields of estimation theory, detection theory, and numerous related areas that exploit statistical information about signals and noise to improve performance.
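To make the link to detection theory concrete, here is a minimal Python sketch of a classic binary detection problem (the amplitude, noise level, and midpoint threshold are hypothetical choices, not from the article): deciding whether a known constant signal is present in zero-mean Gaussian noise by comparing the sample mean of the observations against a threshold, which is the form the likelihood-ratio test takes in this setting.

```python
import numpy as np

rng = np.random.default_rng(seed=1)
A, sigma, n = 1.0, 2.0, 50  # signal amplitude, noise std, samples per trial (assumed)

def detect(x, threshold):
    """Decide 'signal present' when the sample mean exceeds the threshold.

    For i.i.d. Gaussian noise and a known constant signal, the
    likelihood-ratio test reduces to exactly this comparison; the
    threshold trades detection probability against false alarms.
    """
    return x.mean() > threshold

threshold = A / 2  # midpoint threshold, a hypothetical choice

# Monte Carlo estimate of the false-alarm and detection probabilities.
trials = 10_000
noise_only   = rng.normal(0.0, sigma, size=(trials, n))
signal_noise = A + rng.normal(0.0, sigma, size=(trials, n))

p_fa = np.mean([detect(x, threshold) for x in noise_only])
p_d  = np.mean([detect(x, threshold) for x in signal_noise])
print(f"false-alarm probability ~ {p_fa:.3f}")
print(f"detection probability   ~ {p_d:.3f}")
```

Raising the threshold lowers the false-alarm probability at the cost of detection probability; sweeping it out traces the detector's receiver operating characteristic.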