Signal subspace

In signal processing, signal subspace methods are empirical linear methods for dimensionality reduction and noise reduction. These approaches have attracted significant interest and investigation in the context of speech enhancement, speech modeling, and speech classification research.

Essentially, the methods apply principal component analysis (PCA) to ensembles of observed time series obtained by sampling, for example by sampling an audio signal. Such samples can be viewed as vectors in a high-dimensional vector space over the real numbers. PCA is used to identify a set of orthogonal basis vectors (basis signals) which capture as much of the energy in the ensemble of observed samples as possible. The vector space spanned by the basis vectors identified by the analysis is then the signal subspace. The underlying assumption is that the information in speech signals is almost completely contained in a small linear subspace of the overall space of possible sample vectors, whereas additive noise is typically distributed isotropically through the larger space (for example when it is white noise).
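
As an illustration of this step, the following sketch (not part of the original article) estimates such a basis with PCA in Python/NumPy; the frame length, the subspace rank, and the synthetic two-sinusoid signal are arbitrary choices made for the example.

    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative observation: two sinusoids (low-rank structure) plus white noise.
    fs = 8000
    t = np.arange(fs) / fs
    clean = np.sin(2 * np.pi * 200 * t) + 0.5 * np.sin(2 * np.pi * 350 * t)
    noisy = clean + 0.3 * rng.standard_normal(clean.shape)

    # Ensemble of sample vectors: overlapping frames of length m.
    m = 40
    frames = np.lib.stride_tricks.sliding_window_view(noisy, m)   # shape (N, m)

    # PCA: eigendecomposition of the sample covariance of the frames.
    cov = np.cov(frames, rowvar=False)                # (m, m) sample covariance
    eigvals, eigvecs = np.linalg.eigh(cov)            # eigenvalues in ascending order
    order = np.argsort(eigvals)[::-1]
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]

    # The r most energetic eigenvectors span the estimated signal subspace.
    r = 4
    signal_basis = eigvecs[:, :r]                     # (m, r) orthonormal basis
    print("fraction of energy captured:", eigvals[:r].sum() / eigvals.sum())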

A certain amount of noise filtering is then obtained by projecting a sample onto the signal subspace: only the component of the sample that lies in the subspace spanned by the first few, most energetic basis vectors is kept, and the remainder, which lies in the orthogonal complement of this subspace, is discarded.
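
The projection step might be sketched as follows, again with illustrative choices of frame length and rank: the subspace is estimated from non-overlapping frames of the noisy signal, and only the in-subspace component of each frame is kept.

    import numpy as np

    rng = np.random.default_rng(1)

    # Same illustrative signal as above: two sinusoids plus white noise.
    fs = 8000
    t = np.arange(fs) / fs
    clean = np.sin(2 * np.pi * 200 * t) + 0.5 * np.sin(2 * np.pi * 350 * t)
    noisy = clean + 0.3 * rng.standard_normal(clean.shape)

    # Non-overlapping frames of length m serve as the sample vectors.
    m, r = 40, 4
    frames = noisy[: len(noisy) - len(noisy) % m].reshape(-1, m)

    # Estimate the signal subspace (PCA) and form the orthogonal projector onto it.
    cov = np.cov(frames, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    basis = eigvecs[:, np.argsort(eigvals)[::-1][:r]]   # leading r eigenvectors
    projector = basis @ basis.T

    # Keep only the in-subspace component of each frame; discard the orthogonal rest.
    denoised = (frames @ projector).reshape(-1)

    def snr_db(reference, estimate):
        return 10 * np.log10(np.sum(reference**2) / np.sum((estimate - reference) ** 2))

    ref = clean[: denoised.size]
    print("SNR before:", snr_db(ref, noisy[: denoised.size]))
    print("SNR after :", snr_db(ref, denoised))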

Signal subspace noise reduction can be compared to Wiener filter methods. There are two main differences. First, Wiener filtering works with a fixed set of basis signals, usually the harmonic sinusoids of the Fourier transform, whereas the basis of the signal subspace is estimated empirically from the observed ensemble. Second, a Wiener filter attenuates each component gradually according to its estimated signal-to-noise ratio, whereas the basic signal subspace approach weights components in an all-or-nothing fashion, keeping those in the signal subspace and discarding the rest.

In the simplest case, signal subspace methods assume white noise, but extensions of the approach to colored noise removal, as well as evaluations of subspace-based speech enhancement for robust speech recognition, have also been reported.
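
One common extension of this kind, sketched below under the assumption that noise-only frames are available for estimating the noise covariance, prewhitens the data so that the noise becomes approximately white, applies the ordinary subspace projection in the whitened domain, and then undoes the whitening.

    import numpy as np

    rng = np.random.default_rng(2)
    m, r = 40, 4

    def color(white):
        # Moving-average filter that turns white noise into correlated ("colored") noise.
        return np.convolve(white, [0.6, 0.3, 0.1], mode="same")

    # Illustrative data: a low-rank signal in colored noise, plus a noise-only recording
    # (assumed available here) from which the noise covariance can be estimated.
    t = np.arange(8000) / 8000
    clean = np.sin(2 * np.pi * 200 * t) + 0.5 * np.sin(2 * np.pi * 350 * t)
    noisy_frames = (clean + color(rng.standard_normal(t.size))).reshape(-1, m)
    noise_frames = color(rng.standard_normal(t.size)).reshape(-1, m)

    # Prewhiten: factor the estimated noise covariance and map the frames into a
    # domain where the noise is approximately white.
    Rn = noise_frames.T @ noise_frames / noise_frames.shape[0]
    L = np.linalg.cholesky(Rn)                        # Rn = L @ L.T
    white_frames = noisy_frames @ np.linalg.inv(L).T

    # Ordinary white-noise subspace projection in the whitened domain.
    cov = white_frames.T @ white_frames / white_frames.shape[0]
    eigvals, eigvecs = np.linalg.eigh(cov)
    basis = eigvecs[:, np.argsort(eigvals)[::-1][:r]]
    projected = white_frames @ basis @ basis.T

    # Undo the whitening to return to the original signal domain.
    denoised = (projected @ L.T).reshape(-1)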
