Generative Topographic Map
From Wikipedia, the free encyclopedia
In the field of machine learning, the generative topographic map (GTM) was introduced in 1996 in a paper by Bishop, Svensen, and Williams as an alternative to the self-organizing map (SOM). The GTM is a probabilistic counterpart to the SOM: it is provably convergent and requires neither a shrinking neighbourhood nor a decreasing step size. The GTM is a generative model of data: the training data are assumed to arise by first probabilistically picking a point in a low-dimensional latent space, mapping that point to the observed high-dimensional input space via a smooth function, and then adding noise in the input space. The parameters of the low-dimensional probability distribution, the smooth map, and the noise are all learned from the training set by the expectation-maximization (EM) algorithm.
The approach is strongly related to density networks, which use importance sampling and a multi-layer perceptron to form a non-linear latent variable model. In the GTM the latent space is a discrete grid of points which is assumed to be non-linearly projected into data space. A Gaussian noise assumption is then made in data space, so that the model becomes a constrained mixture of Gaussians whose likelihood can be maximized by the EM algorithm.
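To make the constrained mixture concrete: the density of a single data point is an equal-weight sum of spherical Gaussians, one centred on the image of each latent grid point. The following minimal NumPy sketch evaluates that density; the function name, argument layout, and array shapes are illustrative assumptions, not from the original paper.

```python
import numpy as np

def gtm_density(t, Y, beta):
    """Density of a data point t (shape (D,)) under a constrained Gaussian
    mixture: K equal-weight spherical Gaussians with centres Y (shape (K, D))
    and noise precision beta. The centres are the images of the latent grid
    points under the learned mapping, so they cannot move independently."""
    K, D = Y.shape
    d2 = ((Y - t) ** 2).sum(axis=1)            # squared distance to each centre
    norm = (beta / (2 * np.pi)) ** (D / 2)     # spherical Gaussian normaliser
    return norm * np.exp(-0.5 * beta * d2).mean()  # equal weights 1/K
```

For a single centre this reduces to an ordinary spherical Gaussian, which makes the "constrained mixture" reading explicit: only the shared mapping and noise level are free, not the individual component positions.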
In theory, an arbitrary nonlinear parametric deformation could be used, with the optimal parameters found by, for example, gradient descent.
The suggested approach to the nonlinear mapping is to use a radial basis function (RBF) network to map the latent space into the data space. The nodes of the RBF network form a feature space, and the nonlinear mapping can then be taken as a linear transformation of this feature space. This approach has the advantage over the density network approach that it can be optimized analytically.
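The pieces above can be sketched end to end: a fixed RBF feature matrix over a latent grid, a linear map into data space, and EM alternation in which the M-step for the linear weights is a weighted least-squares problem (the analytic optimization mentioned above). This is a minimal illustrative sketch, not the authors' reference implementation; the one-dimensional latent grid, Gaussian RBF basis, hyperparameter values, and all names are invented for illustration.

```python
import numpy as np

def gtm_em(data, n_latent=20, n_rbf=5, sigma_rbf=0.3, n_iter=30, seed=0):
    """Toy GTM with a 1-D latent grid, trained by EM. data has shape (N, D)."""
    rng = np.random.default_rng(seed)
    N, D = data.shape
    X = np.linspace(-1.0, 1.0, n_latent)[:, None]            # latent grid (K, 1)
    mu = np.linspace(-1.0, 1.0, n_rbf)[:, None]              # RBF centres (M, 1)
    # Fixed feature matrix: latent points through the RBF basis, plus a bias.
    Phi = np.exp(-0.5 * ((X - mu.T) ** 2) / sigma_rbf**2)    # (K, M)
    Phi = np.hstack([Phi, np.ones((n_latent, 1))])           # (K, M+1)
    W = rng.normal(scale=0.1, size=(Phi.shape[1], D))        # linear map
    beta = 1.0                                               # noise precision
    for _ in range(n_iter):
        Y = Phi @ W                                          # mixture centres (K, D)
        # E-step: responsibility of each latent grid point for each datum.
        d2 = ((data[None, :, :] - Y[:, None, :]) ** 2).sum(-1)   # (K, N)
        logp = -0.5 * beta * d2
        logp -= logp.max(axis=0, keepdims=True)              # for stability
        R = np.exp(logp)
        R /= R.sum(axis=0, keepdims=True)
        # M-step: closed-form weighted least squares for W, then update beta.
        G = np.diag(R.sum(axis=1))
        W = np.linalg.solve(Phi.T @ G @ Phi + 1e-6 * np.eye(Phi.shape[1]),
                            Phi.T @ R @ data)
        Y = Phi @ W
        d2 = ((data[None, :, :] - Y[:, None, :]) ** 2).sum(-1)
        beta = N * D / (R * d2).sum()
    return X, Phi @ W, R
```

Because Phi is fixed, the M-step for W is linear in the unknowns, which is exactly why this RBF formulation can be optimized analytically where a general nonlinear deformation would need gradient descent.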
Application Domains
In data analysis, GTMs are like a nonlinear version of principal component analysis (PCA): they allow high-dimensional data to be modelled as resulting from Gaussian noise added to sources in a lower-dimensional latent space. An example is locating stocks in a plottable 2D space based on the shapes of their high-dimensional time series. Other applications may want fewer sources than data points, as in mixture models.
In generative deformational modelling, the latent and data spaces have the same dimensions, for example, 2D images or 1D audio sound waves. However, we can still use the GTM to model the deformation process. Extra 'empty' dimensions can be added to the source (known as the 'template' in this form of modelling), for example locating the 1D sound wave in 2D space. Further nonlinear dimensions are then added, obtained by combining the original dimensions. The enlarged latent space is then projected back into the 1D data space. The probability of a given projection is, as before, given by the product of the likelihood of the data under the Gaussian noise model and the prior on the deformation parameters. Unlike conventional spring-based deformation modelling, this approach has the advantage of being analytically optimizable. However, it has the disadvantage of being a 'data-mining' approach, i.e. the shape of the deformation prior is unlikely to be meaningful as an explanation of the possible deformations, as it is based on a very high-dimensional, artificially and arbitrarily constructed nonlinear latent space. For this reason the prior has to be learned from data rather than specified by a human expert, as is possible for spring-based models.
Comparison with Kohonen's SOM
A key difference between the GTM and the SOM is that the nodes in the SOM can wander around at will, whereas GTM nodes are constrained by the allowable transformations and the probabilities on those transformations. If the deformations are well-behaved, the topology of the latent space is preserved.
Note that the Kohonen model was created as a biological model of neurons and is a heuristic algorithm. By contrast, the GTM has nothing to do with neuroscience or cognition and is a probabilistically principled model. Thus, it has a number of advantages over the SOM, namely:
- it explicitly formulates a density model over the data.
- it employs a cost function that quantifies how well the map is trained.
- it employs a sound optimization procedure (EM algorithm).
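The cost function referred to above is the log-likelihood of the data under the constrained Gaussian mixture, which EM is guaranteed not to decrease. A hedged sketch of how it could be computed follows; the function name, argument layout, and the use of the log-sum-exp trick are illustrative choices, not taken from the source.

```python
import numpy as np

def gtm_log_likelihood(data, Y, beta):
    """Total log-likelihood of data (N, D) under an equal-weight mixture of
    K spherical Gaussians with centres Y (K, D) and noise precision beta.
    Monitoring this value across EM iterations quantifies training progress."""
    N, D = data.shape
    K = Y.shape[0]
    d2 = ((data[:, None, :] - Y[None, :, :]) ** 2).sum(-1)   # (N, K) distances
    log_norm = 0.5 * D * np.log(beta / (2 * np.pi))
    a = -0.5 * beta * d2
    # Log-sum-exp over the K components (equal weights 1/K), for stability.
    m = a.max(axis=1, keepdims=True)
    ll = log_norm - np.log(K) + m.squeeze(1) + np.log(np.exp(a - m).sum(axis=1))
    return ll.sum()
```

A SOM offers no such quantity, which is the point of the second advantage listed above: with the GTM one can tell, numerically, whether training has converged.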
The GTM method was introduced by Bishop, Svensen and Williams in their 1997 technical report (Technical Report NCRG/96/015, Aston University, UK), which was later published in Neural Computation. It was also described in the PhD thesis of Markus Svensen (Aston, 1998).
See also
- Artificial Neural Network
- Connectionism
- Data mining
- Machine learning
- Neural network software
- Pattern recognition