In matrix theory, the Perron–Frobenius theorem, proved by Oskar Perron (1907) and Georg Frobenius (1912), asserts that a real square matrix with positive entries has a unique eigenvalue of largest magnitude and that this eigenvalue is real.
The corresponding eigenvector can be chosen to have strictly positive components, and the theorem also asserts a similar statement for certain classes of nonnegative matrices.
This theorem has important applications to probability theory (ergodicity of Markov chains); to the theory of dynamical systems (subshifts of finite type); to economics (Okishio's theorem,[1] Hawkins–Simon condition[2]); to demography (Leslie population age distribution model);[3] to social networks (DeGroot learning process); to Internet search engines (PageRank);[4] and even to ranking of American football teams.
The exponential growth rate of the matrix powers A^k as k → ∞ is controlled by the eigenvalue of A with the largest absolute value (modulus).
The Perron–Frobenius theorem describes the properties of the leading eigenvalue and of the corresponding eigenvectors when A is a non-negative real square matrix.
For example, for the matrix A with rows (0, 1) and (0, 0), the maximum eigenvalue is r = 0, which is not a simple root of the characteristic polynomial, and the corresponding eigenvector (1, 0) is not strictly positive.
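The example can be checked numerically. Consistently with the eigenvalue r = 0 and the eigenvector (1, 0) given above, a minimal sketch takes the matrix to be the nilpotent matrix with rows (0, 1) and (0, 0):

```python
import numpy as np

# Nilpotent non-negative matrix: both eigenvalues are 0, so r = 0 is a
# double (hence non-simple) root of the characteristic polynomial.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])

eigenvalues = np.linalg.eigvals(A)
print(np.allclose(eigenvalues, 0.0))  # True: r = 0 with multiplicity 2

# (1, 0) is an eigenvector for r = 0, but it is not strictly positive:
v = np.array([1.0, 0.0])
print(np.allclose(A @ v, 0.0 * v))    # True
```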
For such a matrix, although the eigenvalues attaining the maximal absolute value might not be unique, their structure is under control: they have the form ω·r, where r is the maximal absolute value and ω ranges over the complex h-th roots of 1 for some positive integer h called the period of the matrix.
Definition 2: A cannot be conjugated into block upper triangular form by a permutation matrix P; that is, there is no permutation matrix P with PAP^−1 = [[E, F], [0, G]], where E and G are non-trivial (i.e. of size greater than zero) square matrices.
In fact, when A is irreducible, the period can be defined as the greatest common divisor of the lengths of the closed directed paths in GA (see Kitchens[15] page 16).
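This graph-theoretic characterisation can be sketched in code (an illustrative implementation, not taken from the source). For a strongly connected graph, the gcd of all closed-path lengths equals the gcd of level[u] + 1 − level[v] over all edges, where the levels come from a breadth-first search:

```python
import math
from collections import deque
import numpy as np

def period(A):
    """Period of an irreducible non-negative matrix A: the gcd of the
    lengths of the closed directed paths in G_A.  Computed via BFS
    levels: for every edge (u, v), level[u] + 1 - level[v] differs from
    zero by a multiple of the period.  Assumes A is irreducible."""
    n = A.shape[0]
    level = [None] * n
    level[0] = 0
    g = 0
    queue = deque([0])
    while queue:
        u = queue.popleft()
        for v in range(n):
            if A[u, v] > 0:
                if level[v] is None:
                    level[v] = level[u] + 1
                    queue.append(v)
                else:
                    g = math.gcd(g, level[u] + 1 - level[v])
    return g

# A 3-cycle permutation matrix is irreducible with period 3:
C3 = np.array([[0, 1, 0],
               [0, 0, 1],
               [1, 0, 0]])
print(period(C3))                             # 3
print(period(np.array([[1, 1], [1, 0]])))     # 1 (aperiodic)
```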
This shows that the (square) zero-matrices along the diagonal may be of different sizes, that the blocks A_j need not be square, and that h need not divide n.

A matrix A is primitive provided it is non-negative and A^m is positive for some m, and hence A^k is positive for all k ≥ m. To check primitivity, one needs a bound on how large the minimal such m can be, depending on the size of A.[24]

Numerous books have been written on the subject of non-negative matrices, and Perron–Frobenius theory is invariably a central feature.
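A bound of this kind makes primitivity decidable by inspecting a single power. The sketch below (an illustrative implementation, not from the source) uses Wielandt's bound m ≤ (n − 1)² + 1, working only with the 0/1 pattern of the matrix to avoid overflow:

```python
import numpy as np

def is_primitive(A):
    """Check primitivity of a non-negative square matrix A.  If A is
    primitive, then A^((n-1)^2 + 1) is already positive (Wielandt's
    bound), so it suffices to test that single power."""
    n = A.shape[0]
    B = (np.asarray(A) > 0).astype(int)  # 0/1 pattern of A
    P = B.copy()
    for _ in range((n - 1) ** 2):        # raise to exponent (n-1)^2 + 1
        P = ((P @ B) > 0).astype(int)    # re-threshold to keep a 0/1 pattern
    return bool(P.all())

# A positive power appears quickly for this matrix:
print(is_primitive(np.array([[1, 1],
                             [1, 0]])))    # True
# A pure 3-cycle is irreducible but has period 3, hence not primitive:
print(is_primitive(np.array([[0, 1, 0],
                             [0, 0, 1],
                             [1, 0, 0]])))  # False
```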
The inverse of PAP^−1 (if it exists) must have diagonal blocks of the form B_i^−1, so if any B_i is not invertible then neither PAP^−1 nor A is invertible. Conversely, let D be the block-diagonal matrix corresponding to PAP^−1, in other words PAP^−1 with the asterisks zeroised.
If A is row-stochastic then the column vector with each entry 1 is an eigenvector corresponding to the eigenvalue 1, which is also ρ(A) by the remark above.
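This can be verified numerically on a small row-stochastic matrix (the matrix below is chosen for illustration only):

```python
import numpy as np

# A row-stochastic matrix: non-negative, each row sums to 1.
A = np.array([[0.5, 0.5, 0.0],
              [0.2, 0.3, 0.5],
              [0.0, 0.4, 0.6]])

ones = np.ones(3)
print(np.allclose(A @ ones, ones))  # True: the all-ones vector is an
                                    # eigenvector for eigenvalue 1

# The spectral radius rho(A) equals 1:
print(np.isclose(np.abs(np.linalg.eigvals(A)).max(), 1.0))  # True
```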
More generally, it can be extended to the case of non-negative compact operators, which, in many ways, resemble finite-dimensional matrices.
Thus, the theory offers a way of discovering the arrow of time in what would otherwise appear to be reversible, deterministic dynamical processes, when examined from the point of view of point-set topology.
Let ε be half the smallest diagonal entry of A^m and set T = A^m − εI, which is yet another positive matrix.
Moreover, if Ax = λx then A^m x = λ^m x, and thus λ^m − ε is an eigenvalue of T. Because of the choice of m, this point lies outside the unit disk, and consequently ρ(T) > 1.
One of the definitions of irreducibility for non-negative matrices is that for all indices i, j there exists m such that (A^m)_ij is strictly positive.
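For an n × n matrix it suffices to check powers up to m = n − 1, which is equivalent to the standard test that (I + A)^(n−1) is strictly positive. A minimal sketch (an illustrative implementation, not from the source):

```python
import numpy as np

def is_irreducible(A):
    """Test irreducibility of a non-negative square matrix A.  By the
    definition above, A is irreducible iff for every pair (i, j) some
    power A^m has (A^m)_ij > 0; checking m <= n - 1 suffices, which is
    equivalent to (I + A)^(n-1) being strictly positive."""
    n = A.shape[0]
    B = (np.asarray(A) > 0).astype(int) + np.eye(n, dtype=int)  # pattern of I + A
    P = B.copy()
    for _ in range(n - 2):               # raise to exponent n - 1
        P = ((P @ B) > 0).astype(int)    # keep a 0/1 pattern at every step
    return bool(P.all())

print(is_irreducible(np.array([[0, 1, 0],
                               [0, 0, 1],
                               [1, 0, 0]])))  # True: the 3-cycle
print(is_irreducible(np.array([[0, 1],
                               [0, 0]])))     # False: reducible
```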
This section proves that the Perron–Frobenius eigenvalue is a simple root of the characteristic polynomial of the matrix.
Combining the two claims above reveals that the Perron–Frobenius eigenvalue r is a simple root of the characteristic polynomial.
In the case of nonprimitive matrices, there exist other eigenvalues which have the same absolute value as r. The same claim is true for them, but requires more work.
Then f is a real-valued function whose maximum is the Perron–Frobenius eigenvalue r. For the proof, we denote the maximum of f by R; the proof then requires showing that R = r. Inserting the Perron–Frobenius eigenvector v into f, we obtain f(v) = r and conclude r ≤ R. For the opposite inequality, we consider an arbitrary nonnegative vector x and let ξ = f(x).
Thus the minimum row sum gives a lower bound for r and this observation can be extended to all non-negative matrices by continuity.
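These bounds are easy to check numerically on an example matrix (chosen for illustration only): the minimum row sum bounds r from below, and the symmetric argument gives the maximum row sum as an upper bound.

```python
import numpy as np

# Row sums bound the Perron-Frobenius eigenvalue r.
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

r = np.abs(np.linalg.eigvals(A)).max()   # here r = (5 + sqrt(33))/2 ~ 5.372
row_sums = A.sum(axis=1)                 # [3.0, 7.0]
print(row_sums.min() <= r <= row_sums.max())  # True
```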
This means that if A is an irreducible non-negative square matrix then the algebraic and geometric multiplicities of its Perron root are both one.
The power method is a convenient way to compute the Perron projection of a primitive matrix.
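As a sketch (not from the source), power iteration on a hypothetical primitive matrix recovers the Perron eigenvalue and eigenvector; the Perron projection itself can then be assembled from the right and left eigenvectors.

```python
import numpy as np

def perron_pair(A, iters=200):
    """Power iteration for a primitive non-negative matrix: the
    normalised iterates converge to the Perron eigenvector, and the
    Rayleigh-type quotient converges to the Perron-Frobenius
    eigenvalue r."""
    x = np.ones(A.shape[0])          # positive start vector
    for _ in range(iters):
        y = A @ x
        x = y / np.linalg.norm(y)    # renormalise each step
    r = x @ A @ x / (x @ x)
    return r, x

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])           # positive entries, hence primitive
r, v = perron_pair(A)
print(np.isclose(r, np.abs(np.linalg.eigvals(A)).max()))  # True
print(bool((v > 0).all()))                                # True
```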
Here they are overlaid and each generally has complex entries extending to all four corners of the square matrix.
If P is the peripheral projection then the matrix R = AP = PA is non-negative and irreducible, R^h = P, and the cyclic group P, R, R^2, ..., R^(h−1) represents the harmonics of A.
Some donkey work is still needed in order to establish the cyclic properties (6)–(8) but it's essentially just a matter of turning the handle.
More precisely, since M is block-diagonal cyclic, the eigenvalues are {1, −1} for the first block and {1, ω^2, ω^4} for the lower one.[citation needed]

A problem that causes confusion is a lack of standardisation in the definitions.
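The claim tagged [citation needed] can be checked numerically. The matrix below is a guessed stand-in for M, chosen to match the stated spectra (block-diagonal with a 2-cycle and a 3-cycle permutation block); it is an assumption, not taken from the source.

```python
import numpy as np

# Assumed example: block-diagonal matrix with cyclic permutation blocks.
C2 = np.array([[0, 1],
               [1, 0]])
C3 = np.array([[0, 1, 0],
               [0, 0, 1],
               [1, 0, 0]])
M = np.block([[C2, np.zeros((2, 3), dtype=int)],
              [np.zeros((3, 2), dtype=int), C3]])

eigs = np.linalg.eigvals(M)

# First block contributes {1, -1}; the 3-cycle contributes the cube
# roots of unity, i.e. {1, w^2, w^4} with w = exp(2*pi*i/6).
w = np.exp(2j * np.pi / 6)
expected = np.array([1, -1, 1, w**2, w**4])
print(np.allclose(np.sort_complex(eigs), np.sort_complex(expected)))  # True
```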
For the avoidance of doubt, a non-zero non-negative square matrix A such that 1 + A is primitive is sometimes said to be connected.