[1] They provide the basis for general stochastic simulation methods known as Markov chain Monte Carlo, which are used for simulating sampling from complex probability distributions, and have found application in areas including Bayesian statistics, biology, chemistry, economics, finance, information theory, physics, signal processing, and speech processing.
The different instances of Markov processes can be classified by the level of state-space generality and by discrete versus continuous time:
- Discrete time, countable state space: (discrete-time) Markov chain on a countable or finite state space.
- Continuous time, countable state space: continuous-time Markov process or Markov jump process.
- Discrete time, continuous or general state space: Markov chain on a measurable state space (for example, a Harris chain).
- Continuous time, continuous or general state space: any continuous stochastic process with the Markov property (for example, the Wiener process).
Note that there is no definitive agreement in the literature on the use of some of the terms that signify special cases of Markov processes.
[15] However, many applications of Markov chains employ finite or countably infinite state spaces, which have a more straightforward statistical analysis.
Since the system changes randomly, it is generally impossible to predict with certainty the state of a Markov chain at a given point in the future.
[16][17] After the work of Galton and Watson, it was later revealed that their branching process had been independently discovered and studied around three decades earlier by Irénée-Jules Bienaymé.
A stationary distribution $\pi$ is a (row) vector whose entries are non-negative and sum to 1, and which is unchanged by the operation of the transition matrix $P$ on it; it is therefore defined by
$$\pi P = \pi.$$
By comparing this definition with that of an eigenvector we see that the two concepts are related: $\pi = e / \sum_i e_i$ is a normalized ($\sum_i e_i = 1$) multiple of a left eigenvector $e$ of the transition matrix $P$ with eigenvalue 1.
[41] Additionally, in this case $P^k$ converges to a rank-one matrix in which each row is the stationary distribution $\pi$:
$$\lim_{k\to\infty} P^k = \mathbf{1}\pi,$$
where $\mathbf{1}$ is the column vector with all entries equal to 1.
Because there are a number of different special cases to consider, the process of finding this limit, if it exists, can be a lengthy task. (For non-diagonalizable, that is, defective, matrices, one may start with the Jordan normal form of $P$ and proceed with a somewhat more involved set of arguments in a similar way.)
Let $n$ be the number of states; then[55] $\pi$ can be obtained by solving the linear system $\pi (P - I_n) = 0$ subject to the normalization $\sum_i \pi_i = 1$. If a Markov chain has a stationary distribution, then it can be converted to a measure-preserving dynamical system: take the probability space to be the set of all infinite sequences of states, with the measure generated by the stationary distribution and the transition matrix, and take the transformation to be the shift map that advances each sequence by one step.
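To make this concrete, here is a minimal Python sketch (the three-state transition matrix is invented for illustration) that computes $\pi$ both as a normalized left eigenvector and by solving the linear system, and checks numerically that $P^k$ approaches the rank-one matrix $\mathbf{1}\pi$:

```python
import numpy as np

# A hypothetical 3-state transition matrix (rows sum to 1).
P = np.array([[0.9, 0.075, 0.025],
              [0.15, 0.8, 0.05],
              [0.25, 0.25, 0.5]])

# Left eigenvectors of P are right eigenvectors of P.T;
# pick the one with eigenvalue 1 and normalize it to sum to 1.
eigvals, eigvecs = np.linalg.eig(P.T)
e = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1.0))])
pi = e / e.sum()
print("stationary distribution:", pi)  # satisfies pi @ P == pi

# Equivalently, solve pi (P - I) = 0 with the normalization sum(pi) = 1
# by appending the normalization as an extra equation.
n = P.shape[0]
A = np.vstack([(P - np.eye(n)).T, np.ones(n)])
b = np.concatenate([np.zeros(n), [1.0]])
pi_ls, *_ = np.linalg.lstsq(A, b, rcond=None)
print("via linear solve:   ", pi_ls)

# For this irreducible, aperiodic chain, P**k converges to the
# rank-one matrix in which every row equals pi.
Pk = np.linalg.matrix_power(P, 50)
print("max deviation of P^50 rows from pi:", np.abs(Pk - pi).max())
```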
[52] In some cases, apparently non-Markovian processes may still have Markovian representations, constructed by expanding the concept of the "current" and "future" states; for example, a process whose next state depends on the last m states can be recast as a first-order Markov chain whose state is the m-tuple of the most recent values.
Kolmogorov's criterion states that a necessary and sufficient condition for a process to be reversible is that the product of transition rates around any closed loop is the same in both directions.
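As a rough illustration, the following Python sketch checks the criterion on one loop of a three-state chain; the transition rates are invented and chosen to satisfy detailed balance, so the products match:

```python
import numpy as np

# Invented transition rates for a 3-state chain, chosen to satisfy
# detailed balance (so the chain is reversible); the diagonal is unused.
q = np.array([[0.0, 1.0, 2.0],
              [2.0, 0.0, 1.0],
              [4.0, 1.0, 0.0]])

def loop_products(rates, loop):
    """Product of transition rates around a closed loop, in both directions."""
    fwd = bwd = 1.0
    for a, b in zip(loop, loop[1:] + loop[:1]):
        fwd *= rates[a, b]
        bwd *= rates[b, a]
    return fwd, bwd

fwd, bwd = loop_products(q, [0, 1, 2])
print(fwd, bwd)  # equal on every closed loop <=> the chain is reversible
```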
Therefore, Markov chain Monte Carlo methods can be used to draw samples randomly from a black box to approximate the probability distribution of attributes over a range of objects.
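A minimal sketch of this idea, using a random-walk Metropolis sampler against an invented unnormalized target density (the bimodal mixture below is purely illustrative):

```python
import math, random

def target(x):
    # Unnormalized "black-box" density (invented for the example):
    # a bimodal mixture we can evaluate but not sample from directly.
    return math.exp(-(x - 2) ** 2) + 0.5 * math.exp(-(x + 2) ** 2)

def metropolis(n_samples, step=1.0, x0=0.0):
    """Random-walk Metropolis: the samples form a Markov chain whose
    stationary distribution is proportional to target()."""
    x, out = x0, []
    for _ in range(n_samples):
        proposal = x + random.gauss(0.0, step)
        # Accept with probability min(1, target(proposal) / target(x)).
        if random.random() < target(proposal) / target(x):
            x = proposal
        out.append(x)
    return out

samples = metropolis(100_000)
print(sum(samples) / len(samples))  # approximates the target's mean
```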
The classical model of enzyme activity, Michaelis–Menten kinetics, can be viewed as a Markov chain, where at each time step the reaction proceeds in some direction.
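As an illustration of this view, here is a minimal Python sketch of a stochastic (Gillespie-style) simulation of the Michaelis–Menten scheme E + S <-> ES -> E + P, treated as a continuous-time Markov jump process; the rate constants and molecule counts are invented:

```python
import random

# Invented rate constants and initial molecule counts for
# the Michaelis-Menten scheme  E + S <-> ES -> E + P.
k1, k2, k3 = 0.01, 0.1, 0.1
E, S, ES, P = 10, 100, 0, 0
t = 0.0

while S + ES > 0:
    # Propensities of the three reactions in the current state.
    a = [k1 * E * S, k2 * ES, k3 * ES]
    total = sum(a)
    if total == 0:
        break
    # Exponential waiting time, then a reaction chosen proportionally
    # to its propensity: a continuous-time Markov jump process.
    t += random.expovariate(total)
    r = random.uniform(0, total)
    if r < a[0]:            # E + S -> ES
        E, S, ES = E - 1, S - 1, ES + 1
    elif r < a[0] + a[1]:   # ES -> E + S
        E, S, ES = E + 1, S + 1, ES - 1
    else:                   # ES -> E + P
        E, ES, P = E + 1, ES - 1, P + 1

print(f"all substrate converted to product at t = {t:.2f}, P = {P}")
```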
[70] An algorithm based on a Markov chain was also used to focus the fragment-based growth of chemicals in silico towards a desired class of compounds such as drugs or natural products.
Similarly, it has been suggested that the crystallization and growth of some epitaxial superlattice oxide materials can be accurately described by Markov chains.
Claude Shannon's famous 1948 paper A Mathematical Theory of Communication, which in a single step created the field of information theory, opens by introducing the concept of entropy by modeling texts in a natural language (such as English) as generated by an ergodic Markov process, where each letter may depend statistically on previous letters.
Even without describing the full structure of the system perfectly, such signal models can make possible very effective data compression through entropy encoding techniques such as arithmetic coding.
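As a rough sketch of this modeling idea, the following Python snippet fits a first-order character-level Markov model to a toy text (the sample string is invented) and estimates the model's entropy rate in bits per character:

```python
import math
from collections import Counter

text = "the quick brown fox jumps over the lazy dog " * 50  # toy corpus

# Count character bigrams to estimate transition probabilities.
pairs = Counter(zip(text, text[1:]))
singles = Counter(text[:-1])

# Entropy rate of the fitted first-order model, in bits per character:
# H = -sum over (c, c') of p(c, c') * log2 p(c' | c).
n = len(text) - 1
H = 0.0
for (c, c2), count in pairs.items():
    p_pair = count / n
    p_cond = count / singles[c]
    H -= p_pair * math.log2(p_cond)
print(f"estimated entropy rate: {H:.3f} bits/char")
```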
For example, an M/M/1 queue is a CTMC on the non-negative integers where upward transitions from i to i + 1 occur at rate λ according to a Poisson process and describe job arrivals, while transitions from i to i − 1 (for i ≥ 1) occur at rate μ (job service times are exponentially distributed) and describe completed services (departures) from the queue.
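A minimal Python sketch of the M/M/1 dynamics (with invented rates λ = 0.8 and μ = 1.0), comparing the simulated long-run average number in the system against the theoretical value ρ/(1 − ρ) with ρ = λ/μ:

```python
import random

lam, mu = 0.8, 1.0   # arrival and service rates (invented; lam < mu)
n, t = 0, 0.0        # number in system, elapsed time
area = 0.0           # time-integral of the number in system

while t < 100_000:
    # In state n, the total transition rate is lam (+ mu if n > 0);
    # the holding time is exponential, as in any CTMC.
    rate = lam + (mu if n > 0 else 0.0)
    dt = random.expovariate(rate)
    area += n * dt
    t += dt
    # Jump up with probability lam/rate (arrival), else down (departure).
    if random.random() < lam / rate:
        n += 1
    else:
        n -= 1

# Long-run average number in system; theory gives rho/(1 - rho).
print(area / t, (lam / mu) / (1 - lam / mu))
```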
In recent years this has revolutionized the practicability of Bayesian inference methods, allowing a wide range of posterior distributions to be simulated and their parameters found numerically.
[citation needed] In 1971 a Naval Postgraduate School Master's thesis proposed to model a variety of combat between adversaries as a Markov chain "with states reflecting the control, maneuver, target acquisition, and target destruction actions of a weapons system" and discussed the parallels between the resulting Markov chain and Lanchester's laws.
[93] Herbert A. Simon and co-author Charles Bonini used a Markov chain model to derive a stationary Yule distribution of firm sizes.
An example is using Markov chains to exogenously model prices of equity (stock) in a general equilibrium setting.
[101] Markov chains are generally used in describing path-dependent arguments, where current structural configurations condition future outcomes.
An example is the reformulation of the idea, originally due to Karl Marx's Das Kapital, tying economic development to the rise of capitalism.
In current research, it is common to use a Markov chain to model how, once a country reaches a specific level of economic development, the configuration of structural factors (such as the size of the middle class, the ratio of urban to rural residence, and the rate of political mobilization) will generate a higher probability of transitioning from an authoritarian to a democratic regime.
These higher-order chains tend to generate results with a sense of phrasal structure, rather than the 'aimless wandering' produced by a first-order system.
Each half-inning of a baseball game fits a Markov chain model when the state is taken to be the number of runners and outs.
Mark Pankin shows that Markov chain models can be used to evaluate runs created both for individual players and for a team.
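As a drastically simplified sketch of this kind of model (tracking only the number of outs, not the runners, with an invented out probability), an absorbing Markov chain gives the expected number of plate appearances in a half-inning via the fundamental matrix N = (I − Q)^{-1}:

```python
import numpy as np

# Toy half-inning model: transient states are 0, 1, or 2 outs; 3 outs
# is absorbing. Each plate appearance produces an out with probability
# p_out, otherwise the batter reaches base (value invented).
p_out = 0.68
Q = np.array([[1 - p_out, p_out, 0.0],
              [0.0, 1 - p_out, p_out],
              [0.0, 0.0, 1 - p_out]])  # transitions among transient states

# Fundamental matrix N = (I - Q)^{-1}: N[i, j] is the expected number of
# visits to state j starting from i before absorption (the third out).
N = np.linalg.inv(np.eye(3) - Q)
print("expected plate appearances per half-inning:", N[0].sum())
```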
Markov processes are used in a variety of recreational "parody generator" software (see dissociated press, Jeff Harrison,[110] Mark V. Shaney,[111][112] and Academias Neutronium).
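In the same spirit, here is a minimal sketch of such a generator: a second-order word-level chain (the next word depends on the previous two), trained on an invented toy corpus:

```python
import random
from collections import defaultdict

corpus = ("the cat sat on the mat and the dog sat on the rug "
          "and the cat saw the dog").split()  # toy corpus, invented

# Second-order chain: map each pair of consecutive words to the words
# that follow it in the corpus.
chain = defaultdict(list)
for a, b, c in zip(corpus, corpus[1:], corpus[2:]):
    chain[(a, b)].append(c)

state = random.choice(list(chain))
words = list(state)
for _ in range(12):
    followers = chain.get(state)
    if not followers:
        break  # reached a pair that never continues in the corpus
    nxt = random.choice(followers)
    words.append(nxt)
    state = (state[1], nxt)
print(" ".join(words))
```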