Generative topographic map

The parameters of the low-dimensional probability distribution, the smooth map, and the noise are all learned from the training data using the expectation–maximization (EM) algorithm.
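The EM training loop can be sketched as follows. This is a minimal illustrative implementation, not the article's reference code: the grid sizes, the RBF basis width, the regularisation constant, and the toy dataset are all assumptions made for the example. The E-step computes the responsibility of each latent grid point for each data point under the Gaussian noise model; the M-step re-fits the smooth map's weights by weighted least squares and re-estimates the noise precision.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: noisy samples along a 1D curve embedded in 2D (assumed example)
t = rng.uniform(-1, 1, 200)
X = np.column_stack([t, t ** 2]) + 0.05 * rng.standard_normal((200, 2))
N, D = X.shape

# Latent grid and RBF basis (illustrative sizes)
K, M = 20, 6                                   # latent points, basis functions
Z = np.linspace(-1, 1, K)[:, None]             # 1D latent grid
mu = np.linspace(-1, 1, M)[:, None]            # RBF centres
sigma = 2.0 / (M - 1)
Phi = np.exp(-0.5 * ((Z - mu.T) / sigma) ** 2)  # K x M design matrix

W = 0.1 * rng.standard_normal((M, D))          # weights of the smooth map
beta = 1.0                                     # inverse noise variance

for _ in range(50):
    # E-step: responsibilities R[k, n] of latent point k for data point n
    Y = Phi @ W                                # K x D projected grid points
    d2 = ((X[None, :, :] - Y[:, None, :]) ** 2).sum(-1)  # K x N sq. distances
    logR = -0.5 * beta * d2
    logR -= logR.max(0)                        # stabilise the softmax
    R = np.exp(logR)
    R /= R.sum(0)                              # each column sums to 1

    # M-step: weighted least squares for W, then update the noise precision
    G = np.diag(R.sum(1))
    W = np.linalg.solve(Phi.T @ G @ Phi + 1e-3 * np.eye(M), Phi.T @ R @ X)
    Y = Phi @ W
    d2 = ((X[None, :, :] - Y[:, None, :]) ** 2).sum(-1)
    beta = N * D / (R * d2).sum()

print(round(1.0 / np.sqrt(beta), 3))           # learned noise standard deviation
```

The learned noise standard deviation should settle near the level of the noise injected into the toy data, since the smooth map absorbs the curve's shape.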

The approach is strongly related to density networks, which use importance sampling and a multi-layer perceptron to form a non-linear latent variable model.

In generative deformational modelling, the latent and data spaces have the same dimensions, for example 2D images or 1D audio waveforms.

Extra 'empty' dimensions are added to the source (known as the 'template' in this form of modelling), for example locating the 1D sound wave in 2D space.
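The embedding step above can be illustrated concretely. In this hedged sketch (the sine template and the particular smooth warp are assumptions for illustration, not from the source), a 1D sound wave is located in 2D space by pairing each sample with its time coordinate, after which a smooth deformation of the plane can move the template in both dimensions.

```python
import numpy as np

# A 1D sound wave sampled at 100 points (the 'template' in this example)
t = np.linspace(0.0, 1.0, 100)
wave = np.sin(2 * np.pi * 5 * t)

# Locate the 1D wave in 2D space: each sample becomes a point (time, amplitude)
template = np.column_stack([t, wave])          # shape (100, 2)

def deform(points, a=0.1):
    """An example smooth deformation of the 2D plane (assumed, for
    illustration): each coordinate is shifted by a small sinusoid of
    the other, so the warp displaces no point by more than `a`."""
    x, y = points[:, 0], points[:, 1]
    return np.column_stack([x + a * np.sin(np.pi * y),
                            y + a * np.sin(np.pi * x)])

deformed = deform(template)
print(template.shape, deformed.shape)
```

Because the template now lives in the same 2D space as the data, the deformation can act on it directly, which is what makes the latent and data spaces dimensionally matched.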

The probability of a given projection is, as before, given by the product of the likelihood of the data under the Gaussian noise model and the prior on the deformation parameters.
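In log space, that product becomes a sum of two terms. The sketch below assumes a Gaussian prior on the deformation parameters with precision `alpha` and Gaussian noise with precision `beta`; these symbols and the helper name are illustrative, not taken from the source.

```python
import numpy as np

def log_posterior(data, prediction, theta, beta=100.0, alpha=1.0):
    """Unnormalised log-probability of a projection: the Gaussian-noise
    log-likelihood of the data given the deformed template, plus a
    Gaussian log-prior on the deformation parameters theta.
    alpha and beta are assumed precision hyperparameters."""
    log_lik = -0.5 * beta * np.sum((data - prediction) ** 2)
    log_prior = -0.5 * alpha * np.sum(theta ** 2)
    return log_lik + log_prior

# A projection that fits the data closely scores higher than one that does not
data = np.array([0.0, 1.0, 2.0])
theta = np.array([0.1, -0.2])
close = log_posterior(data, data + 0.01, theta)
far = log_posterior(data, data + 1.0, theta)
print(close > far)
```

The prior term penalises large deformations, so among projections that fit the data equally well, the smoother one is preferred.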

For this reason the prior is learned from the data rather than hand-crafted by a human expert, as is possible for spring-based models.