Stochastic matrix

A stochastic matrix is a square matrix used to describe the transitions of a Markov chain. Each of its entries is a nonnegative real number representing a probability.

The stochastic matrix was first developed by Andrey Markov at the beginning of the 20th century, and has found use throughout a wide variety of scientific fields, including probability theory, statistics, mathematical finance and linear algebra, as well as computer science and population genetics.

Right stochastic matrices, in which each row sums to 1, act upon row vectors of probabilities by multiplication from the right (hence their name); the matrix entry in the i-th row and j-th column is the probability of transition from state i to state j.

Left stochastic matrices, in which each column sums to 1, act upon column vectors of probabilities by multiplication from the left (hence their name); the matrix entry in the i-th row and j-th column is the probability of transition from state j to state i.
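The two conventions can be illustrated with a small numpy sketch (the 2-state matrix below is an illustrative example, not taken from the text):

```python
import numpy as np

# A 2-state right stochastic matrix: each row sums to 1, and
# entry P[i, j] is the probability of moving from state i to state j.
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

# A row vector of probabilities is updated by multiplying P on the right.
row = np.array([1.0, 0.0])          # start surely in state 0
print(row @ P)                      # distribution after one step: [0.9 0.1]

# The transpose of P is left stochastic (each column sums to 1) and acts
# on column vectors by multiplication from the left.
col = np.array([[1.0], [0.0]])
print(P.T @ col)                    # the same distribution, as a column
```

Note that the two computations describe the same transition, written in the row-vector and column-vector conventions respectively.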

The stochastic matrix was developed alongside the Markov chain by Andrey Markov, a Russian mathematician and professor at St. Petersburg University who first published on the topic in 1906.[3] His initial intended uses were for linguistic analysis and other mathematical subjects like card shuffling, but both Markov chains and matrices rapidly found use in other fields.[3][4]

Stochastic matrices were further developed by scholars such as Andrey Kolmogorov, who expanded their possibilities by allowing for continuous-time Markov processes.[5] By the 1950s, articles using stochastic matrices had appeared in the fields of econometrics[6] and circuit theory.[7] In the 1960s, stochastic matrices appeared in an even wider variety of scientific works, from behavioral science[8] to geology[9][10] to residential planning.[11] In addition, much mathematical work was done through these decades to improve the range of uses and functionality of the stochastic matrix and Markovian processes more generally.

From the 1970s to the present, stochastic matrices have found use in almost every field that requires formal analysis, from structural science[12] to medical diagnosis[13] to personnel management.[14] In addition, stochastic matrices have found wide use in land change modeling, usually under the term Markov matrix.[15]

A stochastic matrix describes a Markov chain Xt over a finite state space S with cardinality α.

The elementwise condition that each row i of P sums to one, ∑j Pi,j = 1, may be more concisely written as P1 = 1, where 1 is the α-dimensional column vector of all ones.
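The identity P1 = 1 can be checked directly (the 3-state matrix below is an illustrative example):

```python
import numpy as np

# The row-sum condition for a right stochastic matrix, written P1 = 1:
# multiplying P by the all-ones column vector reproduces that vector.
P = np.array([[0.2, 0.8, 0.0],
              [0.3, 0.3, 0.4],
              [0.0, 0.5, 0.5]])
ones = np.ones(3)
print(P @ ones)   # -> [1. 1. 1.], i.e. every row of P sums to one
```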

A stationary probability vector π is defined as a distribution, written as a row vector, that does not change under application of the transition matrix; that is, it is defined as a probability distribution on the set {1, …, n} which is also a row eigenvector of the probability matrix, associated with eigenvalue 1: πP = π.

By the Gershgorin circle theorem, all of the eigenvalues of a stochastic matrix have absolute values less than or equal to one.

Additionally, every right stochastic matrix has an "obvious" column eigenvector associated to the eigenvalue 1: the vector 1 used above, whose coordinates are all equal to 1.

Finally, the Brouwer Fixed Point Theorem (applied to the compact convex set of all probability distributions of the finite set {1, ..., n}) implies that there is some left eigenvector which is also a stationary probability vector.

On the other hand, the Perron–Frobenius theorem also ensures that every irreducible stochastic matrix has such a stationary vector, and that the largest absolute value of an eigenvalue is always 1.
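As a minimal numpy sketch (using an illustrative 2-state matrix), such a stationary vector can be computed as a left eigenvector of P for eigenvalue 1, i.e. a right eigenvector of the transpose of P:

```python
import numpy as np

# Find a stationary probability vector pi with pi P = pi: a left
# eigenvector of P for eigenvalue 1, computed as a right eigenvector
# of P transpose. The 2-state matrix here is an illustrative example.
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

eigvals, eigvecs = np.linalg.eig(P.T)
# Pick the eigenvector whose eigenvalue is (numerically) 1.
k = np.argmin(np.abs(eigvals - 1.0))
pi = np.real(eigvecs[:, k])
pi = pi / pi.sum()                  # normalize so the entries sum to 1

print(pi)                           # stationary distribution
print(pi @ P)                       # unchanged by the transition matrix
```

Normalizing by the sum both scales the eigenvector into a probability distribution and fixes its sign.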

In general there may be several such vectors; however, for a matrix with strictly positive entries (or, more generally, for an irreducible aperiodic stochastic matrix), this vector is unique and can be computed by observing that for any i, the limit of (P^k)i,j as k → ∞ equals πj, where πj is the j-th element of the row vector π.

That both of these computations give the same stationary vector is a form of an ergodic theorem, which is generally true in a wide variety of dissipative dynamical systems: the system evolves, over time, to a stationary state.
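This agreement can be observed numerically: raising an irreducible aperiodic stochastic matrix to a high power makes every row approach the same stationary vector (the 2-state matrix below is an illustrative example, whose stationary vector is [5/6, 1/6]):

```python
import numpy as np

# Ergodic behaviour: for an irreducible aperiodic stochastic matrix,
# every row of P^k approaches the same stationary vector pi as k grows,
# so the matrix-power limit agrees with the eigenvector computation.
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

Pk = np.linalg.matrix_power(P, 50)
print(Pk)   # both rows are (numerically) the stationary vector [5/6, 1/6]
```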

Intuitively, a stochastic matrix represents a Markov chain; the application of the stochastic matrix to a probability distribution redistributes the probability mass of the original distribution while preserving its total mass.

Suppose there are a timer and a row of five adjacent boxes. At time zero, a cat is in the first box and a mouse is in the fifth box. The cat and the mouse both jump to a random adjacent box when the timer advances.

Let the random variable K be the time the mouse stays in the game.

The Markov chain that represents this game contains the following five states, specified by the combination of positions (cat, mouse):

State 1: (1,3)
State 2: (1,5)
State 3: (2,4)
State 4: (3,5)
State 5: game over, the cat has caught the mouse

Note that while a naive enumeration of states would list 25 states, many are impossible either because the mouse can never have a lower index than the cat (as that would mean the mouse occupied the cat's box and survived to move past it), or because the sum of the two indices will always have even parity. In addition, the three possible position combinations that lead to the mouse's death, (2,2), (3,3) and (4,4), are combined into one. We use a stochastic matrix, P (below), to represent the transition probabilities of this system:

P =
[  0     0    1/2    0    1/2 ]
[  0     0     1     0     0  ]
[ 1/4   1/4    0    1/4   1/4 ]
[  0     0    1/2    0    1/2 ]
[  0     0     0     0     1  ]

To compute the long-term average or expected value of a stochastic variable Y, for each state Sj and time tk there is a contribution of Yj,k · P(S = Sj, t = tk). Survival can be treated as a binary variable, with Y = 1 for a surviving state and Y = 0 for the terminated state.

Suppose the system starts in state 2, represented by the vector [0, 1, 0, 0, 0].

Here, 1 represents a column vector of all ones that acts as a sum over states.
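As a sketch of this computation, assuming the five-state chain above with the absorbing "game over" state removed, the expected remaining duration from each surviving state is the solution t of (I − Q)t = 1, where Q is the transition matrix restricted to the four surviving states:

```python
import numpy as np

# Expected duration of the cat-and-mouse game. Q is the transition
# matrix restricted to the four surviving states (the absorbing
# "mouse caught" state is dropped); the vector of expected remaining
# lifetimes solves (I - Q) t = 1.
Q = np.array([[0.0,  0.0,  0.5,  0.0 ],
              [0.0,  0.0,  1.0,  0.0 ],
              [0.25, 0.25, 0.0,  0.25],
              [0.0,  0.0,  0.5,  0.0 ]])

t = np.linalg.solve(np.eye(4) - Q, np.ones(4))
print(t)      # expected number of steps from each surviving state
print(t[1])   # from state 2 (cat in box 1, mouse in box 5): 4.5 steps
```

Starting from state 2 the expected value of K is 4.5 time steps.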

Figure: the survival function of the mouse. The mouse will survive at least the first time step.
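The survival function itself can be sketched the same way: with Q the transition matrix restricted to the surviving states (as assumed above) and x0 the initial distribution over those states, the probability that the mouse is still alive after t steps is x0 Q^t 1:

```python
import numpy as np

# Survival function of the mouse: starting in state 2, the probability
# of surviving beyond t steps is x0 @ Q^t @ 1, with Q restricted to
# the four surviving states of the assumed five-state chain.
Q = np.array([[0.0,  0.0,  0.5,  0.0 ],
              [0.0,  0.0,  1.0,  0.0 ],
              [0.25, 0.25, 0.0,  0.25],
              [0.0,  0.0,  0.5,  0.0 ]])
x0 = np.array([0.0, 1.0, 0.0, 0.0])   # start in state 2

for t in range(5):
    surv = x0 @ np.linalg.matrix_power(Q, t) @ np.ones(4)
    print(t, surv)   # at t = 1 the survival probability is still 1
```

The t = 1 value confirms the caption above: the mouse always survives at least the first time step.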