Variable-order Markov model

In contrast to the Markov chain models, where each random variable in a sequence with a Markov property depends on a fixed number of random variables, in VOM models this number of conditioning random variables may vary based on the specific observed realization.

This realization sequence is often called the context; therefore the VOM models are also called context trees.[1] VOM models are nicely rendered by colorized probabilistic suffix trees (PST).[2] The flexibility in the number of conditioning random variables turns out to be of real advantage for many applications, such as statistical analysis, classification and prediction.[3][4][5]

Consider for example a sequence of random variables, each of which takes a value from the ternary alphabet {a, b, c}.

Specifically, consider the string constructed from infinite concatenations of the sub-string aaabc: aaabcaaabcaaabcaaabc…aaabc.

A VOM model of maximal order 2 can approximate the above string using only five conditional probability components: Pr(a | aa) = 0.5, Pr(b | aa) = 0.5, Pr(c | b) = 1.0, Pr(a | c) = 1.0 and Pr(a | ca) = 1.0; the shorter context b suffices here because Pr(c | ab) = Pr(c | b) = 1.0. Similarly, a VOM model of maximal order 3 can generate the string exactly using only five conditional probability components, all equal to 1.0: Pr(b | aaa), Pr(c | b), Pr(a | c), Pr(a | ca) and Pr(a | caa).
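These counts can be checked directly. The following Python sketch (an illustration, not taken from the cited sources; the helper name context_stats is invented for this example) tallies next-symbol frequencies for each context in a long realization of the periodic string:

from collections import Counter, defaultdict

s = "aaabc" * 1000  # a long realization of the periodic string

def context_stats(s, contexts):
    """Tally next-symbol counts for each given context."""
    stats = defaultdict(Counter)
    for ctx in contexts:
        k = len(ctx)
        for i in range(len(s) - k):
            if s[i:i + k] == ctx:
                stats[ctx][s[i + k]] += 1
    return stats

# The contexts used by the maximal-order-2 VOM model above.
for ctx, counts in context_stats(s, ["aa", "b", "c", "ca"]).items():
    total = sum(counts.values())
    print(ctx, {sym: round(n / total, 2) for sym, n in counts.items()})
# Prints Pr(a | aa) = Pr(b | aa) = 0.5 and
# Pr(c | b) = Pr(a | c) = Pr(a | ca) = 1.0.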

In practical settings there is seldom sufficient data to accurately estimate the exponentially increasing number of conditional probability components as the order of the Markov chain increases.
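To make this growth explicit: a full Markov chain of order $D$ over an alphabet $A$ specifies a next-symbol distribution for each of the $|A|^D$ possible contexts, so (assuming every context is modeled) the number of free parameters is

$$|A|^D \, (|A| - 1).$$

For the ternary alphabet above this gives 6, 18 and 54 parameters for orders 1, 2 and 3, whereas the VOM models in the example need only five components each.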

The variable-order Markov model assumes that in realistic settings, there are certain realizations of states (represented by contexts) in which some past states are independent from the future states; accordingly, "a great reduction in the number of model parameters can be achieved."[1]

Let $A$ be a state space (finite alphabet) of size $|A|$. Consider a sequence with the Markov property $x_1^n = x_1 x_2 \dots x_n$ of $n$ realizations of random variables, where $x_i \in A$ is the state (symbol) at position $i$ ($1 \le i \le n$), and the concatenation of states $x_i$ and $x_{i+1}$ is denoted by $x_i x_{i+1}$.

Given a training set of observed states, $x_1^n$, the construction algorithm of the VOM models[3][4][5] learns a model $P$ that provides a probability assignment for each state in the sequence given its past (previously observed symbols) or future states. Specifically, the learner generates a conditional probability distribution $P(x_i \mid s)$ for a symbol $x_i \in A$ given a context $s \in A^*$, where the $*$ sign represents a sequence of states of any length, including the empty context.

VOM models attempt to estimate conditional distributions of the form $P(x_i \mid s)$, where the context length $|s| \le D$ varies depending on the available statistics. In contrast, conventional Markov models attempt to estimate these conditional distributions by assuming a fixed context length $|s| = D$ and, hence, can be considered special cases of the VOM models.[3][4][5]
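As a concrete illustration of these definitions (a minimal sketch under simplifying assumptions, not one of the cited algorithms; the function names train_vom and predict are invented for this example), the following Python code estimates $P(x_i \mid s)$ by counting every context of length at most $D$ in the training sequence and, at prediction time, backing off to the longest suffix of the current context that was actually observed. Practical VOM learners such as PST construction additionally prune contexts by statistical criteria, which is omitted here.

from collections import Counter, defaultdict

def train_vom(seq, max_order):
    """Count next-symbol occurrences for every context of length <= max_order.

    Returns a dict mapping each observed context (including the empty
    context '') to a Counter of the symbols that followed it.
    """
    counts = defaultdict(Counter)
    for i, sym in enumerate(seq):
        for k in range(min(i, max_order) + 1):  # k = 0 is the empty context
            counts[seq[i - k:i]][sym] += 1
    return counts

def predict(counts, context, max_order):
    """Estimate P(. | context) using the longest observed suffix of context."""
    for k in range(min(len(context), max_order), -1, -1):
        suffix = context[len(context) - k:]
        if suffix in counts:
            total = sum(counts[suffix].values())
            return {sym: n / total for sym, n in counts[suffix].items()}
    return {}

counts = train_vom("aaabc" * 1000, max_order=3)
print(predict(counts, "aaa", 3))  # {'b': 1.0}: aaa determines the next symbol
print(predict(counts, "cb", 3))   # {'c': 1.0}: cb was never seen; backs off to b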

Various efficient algorithms have been devised for estimating the parameters of the VOM model.[4]

VOM models have been successfully applied to areas such as machine learning, information theory and bioinformatics, including specific applications such as coding and data compression,[1] document compression,[4] classification and identification of DNA and protein sequences,[6][1][3] statistical process control,[5] spam filtering,[7] haplotyping,[8] speech recognition,[9] sequence analysis in social sciences,[2] and others.