These algorithms perform well only on stochastic textures; on anything else they produce unsatisfactory results, because they ignore any structure within the sample image.
Algorithms of that family use a fixed procedure to create an output image, i.e., they are limited to a single kind of structured texture.
This method, proposed by the Microsoft group for internet graphics, is a refined version of tiling that proceeds in three steps. The result is an acceptable texture image that is not too repetitive and does not contain too many artifacts.
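The sketch below illustrates the general tiling-with-random-patches idea in Python/NumPy, not the published algorithm: the output canvas is tiled with the sample and then partly overwritten by randomly placed sample patches. The seam-smoothing step is omitted, and all names and parameters are assumptions for illustration.

```python
import numpy as np

def chaos_mosaic_like(sample, out_h, out_w, patch=32, n_patches=200, rng=None):
    """Illustrative tiling-plus-random-patches synthesis (not the published method).

    1. Tile the output canvas with the sample texture.
    2. Paste randomly chosen sample patches at random output positions.
    3. (A real implementation would additionally smooth the patch seams.)
    """
    rng = np.random.default_rng() if rng is None else rng
    h, w = sample.shape[:2]

    # Step 1: plain tiling of the sample over the output canvas.
    reps = (out_h // h + 1, out_w // w + 1) + (1,) * (sample.ndim - 2)
    out = np.tile(sample, reps)[:out_h, :out_w].copy()

    # Step 2: overwrite random locations with random sample patches.
    for _ in range(n_patches):
        sy = rng.integers(0, h - patch + 1)
        sx = rng.integers(0, w - patch + 1)
        oy = rng.integers(0, out_h - patch + 1)
        ox = rng.integers(0, out_w - patch + 1)
        out[oy:oy + patch, ox:ox + patch] = sample[sy:sy + patch, sx:sx + patch]

    return out
```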
These methods, based on Markov random fields,[3] non-parametric sampling,[4] tree-structured vector quantization,[5] and image analogies,[6] are among the simplest and most successful general texture synthesis algorithms.
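As a rough illustration of the non-parametric, neighborhood-matching idea behind these pixel-based methods (in the spirit of Efros and Leung), the following simplified sketch fills the output pixel by pixel. The exhaustive search, the deterministic best-match selection, and all parameter choices are simplifications, not any author's exact algorithm.

```python
import numpy as np

def synthesize_pixelwise(sample, out_h, out_w, n=9, rng=None):
    """Simplified non-parametric, neighborhood-matching texture synthesis.

    Each unknown output pixel is filled by comparing the already-known part of
    its n x n neighborhood against every neighborhood in the sample and copying
    the centre of the closest match.  Exhaustive search keeps the sketch short
    but makes it very slow.
    """
    rng = np.random.default_rng() if rng is None else rng
    sample = np.asarray(sample, dtype=np.float64)
    h, w = sample.shape[:2]
    half = n // 2
    chan = int(np.prod(sample.shape[2:]))  # 1 for greyscale, 3 for RGB

    # All n x n sample neighborhoods (flattened) and their centre pixels.
    coords = [(y, x) for y in range(half, h - half) for x in range(half, w - half)]
    patches = np.stack([sample[y - half:y + half + 1, x - half:x + half + 1].ravel()
                        for y, x in coords])
    centres = np.stack([sample[y, x] for y, x in coords])

    # Padded canvases so every output pixel sees a full n x n window.
    out = np.zeros((out_h + 2 * half, out_w + 2 * half) + sample.shape[2:])
    known = np.zeros((out_h + 2 * half, out_w + 2 * half), dtype=bool)

    # Seed the top-left corner with a random crop of the sample.
    sy, sx = rng.integers(0, h - n + 1), rng.integers(0, w - n + 1)
    out[half:half + n, half:half + n] = sample[sy:sy + n, sx:sx + n]
    known[half:half + n, half:half + n] = True

    # Fill the remaining pixels in scan-line order.
    for y in range(half, half + out_h):
        for x in range(half, half + out_w):
            if known[y, x]:
                continue
            window = out[y - half:y + half + 1, x - half:x + half + 1].ravel()
            mask = np.repeat(known[y - half:y + half + 1,
                                   x - half:x + half + 1].ravel(), chan)
            # Distance is measured only over pixels that are already synthesized.
            ssd = (((patches - window) ** 2) * mask).sum(axis=1)
            out[y, x] = centres[ssd.argmin()]
            known[y, x] = True

    return out[half:half + out_h, half:half + out_w]
```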
More recently, deep learning methods have been shown to be a powerful, fast, data-driven, parametric approach to texture synthesis.
The work of Leon Gatys[10] is a milestone: he and his co-authors showed that filters from a discriminatively trained deep neural network can be used as effective parametric image descriptors, leading to a novel texture synthesis method.
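The following sketch shows the core of that idea under some assumptions: a pretrained VGG-19 from torchvision stands in for the discriminatively trained network, the layer selection is one common choice rather than a prescribed one, and the optimization of a noise image against the Gram-matrix loss is only indicated in a comment.

```python
import torch
import torch.nn.functional as F
from torchvision.models import vgg19, VGG19_Weights

def gram_matrix(features):
    """Channel-wise feature correlations: the parametric texture descriptor."""
    b, c, h, w = features.shape
    f = features.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

# A discriminatively trained network used purely as a fixed feature extractor.
vgg = vgg19(weights=VGG19_Weights.IMAGENET1K_V1).features.eval()
for p in vgg.parameters():
    p.requires_grad_(False)

LAYERS = {1, 6, 11, 20, 29}  # ReLU outputs after conv1_1 ... conv5_1 (one common choice)

def texture_descriptors(image):
    """Gram matrices of the chosen layers for an ImageNet-normalized (1, 3, H, W) image."""
    grams, x = [], image
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in LAYERS:
            grams.append(gram_matrix(x))
    return grams

def texture_loss(synth, target_grams):
    """Squared Gram-matrix differences, summed over the selected layers."""
    return sum(F.mse_loss(g, t) for g, t in zip(texture_descriptors(synth), target_grams))

# Synthesis then amounts to optimizing the pixels of a noise image so that
# texture_loss(noise, texture_descriptors(example)) becomes small.
```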
In addition, flexible sampling in the noise space makes it possible to create novel textures of potentially infinite output size and to transition smoothly between them.
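A minimal sketch of what sampling in the noise space can look like, assuming some trained, fully convolutional generator (a placeholder here, not a specific published model): enlarging the spatial noise field enlarges the output, and interpolating between two noise fields yields a smooth transition between textures.

```python
import torch

# `generator` stands for a trained, fully convolutional texture generator that
# maps a spatial noise field of shape (1, latent_dim, H, W) to an image; its
# name and latent size are placeholders for illustration.
latent_dim = 32

def sample_noise(h_cells, w_cells):
    """A larger noise field simply produces a larger output texture."""
    return torch.randn(1, latent_dim, h_cells, w_cells)

def transition_noise(h_cells, w_cells):
    """Blend two noise samples from left to right to morph smoothly between textures."""
    z_a, z_b = sample_noise(h_cells, w_cells), sample_noise(h_cells, w_cells)
    alpha = torch.linspace(0.0, 1.0, w_cells).view(1, 1, 1, w_cells)
    return (1.0 - alpha) * z_a + alpha * z_b

# texture = generator(sample_noise(16, 64))      # arbitrarily wide output
# morph   = generator(transition_noise(16, 64))  # smooth left-to-right transition
```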