Matrix exponential

Using the above results, one can easily verify the following consequences of the preceding identity.

Finally, the Laplace transform of the matrix exponential is the resolvent,

∫_0^∞ e^(−ts) e^(tA) dt = (sI − A)^(−1),

for all sufficiently large positive values of s. One of the reasons for the importance of the matrix exponential is that it can be used to solve systems of linear ordinary differential equations.
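As a concrete illustration of the latter point (a sketch assuming SciPy is available; `scipy.linalg.expm` computes the matrix exponential numerically), the system y′ = Ay with initial value y(0) = y0 is solved by y(t) = e^(tA) y0:

```python
import numpy as np
from scipy.linalg import expm

# y'(t) = A y(t) with y(0) = y0 has the solution y(t) = exp(tA) y0.
# For this rotation generator, A @ A = -I, so exp(tA) equals
# [[cos t, sin t], [-sin t, cos t]] and the result can be checked
# against the closed form.
A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])
y0 = np.array([1.0, 0.0])
t = 0.5

y = expm(t * A) @ y0  # [cos(t), -sin(t)]
```

The same one-line solve works for any constant coefficient matrix A; the rotation generator is chosen here only because its exponential is known exactly.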

When A = A(t) is not constant, the system y′(t) = A(t) y(t) has no simple closed-form solution in general, but the Magnus series gives the solution as an infinite sum.

By Jacobi's formula, for any complex square matrix A the following trace identity holds:[3]

det e^A = e^(tr A).
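A quick numerical check of this identity on a random matrix (a sketch, assuming SciPy's `expm`):

```python
import numpy as np
from scipy.linalg import expm

# Jacobi's formula implies det(exp(A)) = exp(tr(A)) for any square A.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))

lhs = np.linalg.det(expm(A))
rhs = np.exp(np.trace(A))
```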

For any real numbers (scalars) x and y, the exponential function satisfies e^(x+y) = e^x e^y. The same identity fails in general for matrices that do not commute; instead, the Lie product formula expresses e^(X+Y) as a limit,

e^(X+Y) = lim_{k→∞} (e^(X/k) e^(Y/k))^k.

Using a large finite k to approximate this limit is the basis of the Suzuki–Trotter expansion, often used in numerical time evolution.
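A minimal sketch of this approximation (assuming SciPy's `expm`), using two non-commuting nilpotent matrices; the naive product e^X e^Y misses the exact answer by a visible margin, while the Trotterized product with large k is close:

```python
import numpy as np
from scipy.linalg import expm

# X and Y do not commute ([X, Y] != 0), so exp(X) exp(Y) != exp(X + Y),
# but (exp(X/k) exp(Y/k))**k converges to exp(X + Y) as k grows.
X = np.array([[0.0, 1.0],
              [0.0, 0.0]])
Y = np.array([[0.0, 0.0],
              [1.0, 0.0]])

exact = expm(X + Y)
naive = expm(X) @ expm(Y)
k = 1000
trotter = np.linalg.matrix_power(expm(X / k) @ expm(Y / k), k)

naive_err = np.linalg.norm(naive - exact)
trotter_err = np.linalg.norm(trotter - exact)
```

The leading Trotter error shrinks like 1/k, governed by the commutator [X, Y].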

For Hermitian matrices there is a notable theorem related to the trace of matrix exponentials: the Golden–Thompson inequality, which states that tr e^(A+B) ≤ tr(e^A e^B) for Hermitian A and B.

The map t ↦ e^(tA) defines a smooth curve in the general linear group which passes through the identity element at t = 0.

In fact, this gives a one-parameter subgroup of the general linear group, since e^(tA) e^(sA) = e^((t+s)A).
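Both subgroup properties can be checked numerically on a random matrix (a sketch, assuming SciPy's `expm`):

```python
import numpy as np
from scipy.linalg import expm

# One-parameter subgroup: exp((s+t)A) = exp(sA) exp(tA), and exp(0) = I.
rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
s, t = 0.3, 0.7

left = expm((s + t) * A)
right = expm(s * A) @ expm(t * A)
```

Note that this works for exponentials of multiples of the same matrix, since tA and sA always commute.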

Finding reliable and accurate methods to compute the matrix exponential is difficult, and this is still a topic of considerable current research in mathematics and numerical analysis.[14][15][16][17] In this section, we discuss methods that are applicable in principle to any matrix, and which can be carried out explicitly for small matrices.[18] Subsequent sections describe methods suitable for numerical evaluation on large matrices.

Since the series has a finite number of terms, it is a matrix polynomial, which can be computed efficiently.
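For instance, when N is nilpotent the series terminates and the exponential is literally a polynomial in N (a NumPy sketch; the matrix below is illustrative):

```python
import numpy as np

# N is nilpotent: N @ N @ N is the zero matrix, so the exponential
# series stops after the N**2 term and exp(N) = I + N + (N @ N)/2 exactly.
N = np.array([[0.0, 1.0, 2.0],
              [0.0, 0.0, 3.0],
              [0.0, 0.0, 0.0]])

expN = np.eye(3) + N + (N @ N) / 2.0
```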

By virtue of the Cayley–Hamilton theorem the matrix exponential is expressible as a polynomial of degree n−1.

Letting a be a root of P, Q_{a,t}(z) is solved from the product of P by the principal part of the Laurent series of f at a: it is proportional to the relevant Frobenius covariant.

In particular, S_t(z), the Lagrange–Sylvester polynomial, is the only Q_t whose degree is less than that of P.

Example: Consider the case of an arbitrary 2×2 matrix,

Recall from above that for an n×n matrix A, exp(tA) amounts to a linear combination of I, A, …, A^(n−1) by the Cayley–Hamilton theorem.

It is easiest, however, to simply solve for these Bs directly, by evaluating this expression and its first derivative at t = 0, in terms of A and I, to find the same answer as above.

But this simple procedure also works for defective matrices, in a generalization due to Buchheim.[22] This is illustrated here for a 4×4 example of a matrix which is not diagonalizable, and the Bs are not projection matrices.

Multiply each exponentiated eigenvalue by the corresponding undetermined coefficient matrix B_i.

If the eigenvalues have an algebraic multiplicity greater than 1, then repeat the process, but now multiplying by an extra factor of t for each repetition, to ensure linear independence.

To solve for all of the unknown matrices B in terms of the first three powers of A and the identity, one needs four equations; evaluating the above expression at t = 0 provides one of them.
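As a sketch of the procedure in the simplest 2×2 case with distinct eigenvalues (the matrix below is a hypothetical example, and SciPy's `expm` is used only as a check): writing exp(tA) = e^(λ1 t) B1 + e^(λ2 t) B2, the value and first derivative at t = 0 give B1 + B2 = I and λ1 B1 + λ2 B2 = A, hence B1 = (A − λ2 I)/(λ1 − λ2) and B2 = (A − λ1 I)/(λ2 − λ1):

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical 2x2 example with distinct eigenvalues (here 1 and 3).
A = np.array([[1.0, 2.0],
              [0.0, 3.0]])
lam1, lam2 = np.linalg.eigvals(A)
I = np.eye(2)

# Undetermined coefficients from the value and first derivative at t = 0:
# B1 + B2 = I and lam1*B1 + lam2*B2 = A.
B1 = (A - lam2 * I) / (lam1 - lam2)
B2 = (A - lam1 * I) / (lam2 - lam1)

t = 0.4
result = np.exp(lam1 * t) * B1 + np.exp(lam2 * t) * B2
```

The formulas are symmetric in the two eigenvalues, so the order in which `eigvals` returns them does not matter.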

The exponential of J_2(16) can be calculated by the formula e^(λI + N) = e^λ e^N mentioned above; this yields[23]
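A numerical sketch of this decomposition (SciPy's `expm` is used only as a check): the Jordan block splits as λI + N with N nilpotent, and since λI commutes with everything, the product formula applies and exp(N) terminates after one term:

```python
import numpy as np
from scipy.linalg import expm

# Jordan block J_2(16) = 16*I + N, where N is the 2x2 nilpotent shift.
lam = 16.0
N = np.array([[0.0, 1.0],
              [0.0, 0.0]])
J = lam * np.eye(2) + N

# lam*I commutes with N, so exp(J) = e^lam * exp(N),
# and N @ N = 0 gives exp(N) = I + N.
expJ = np.exp(lam) * (np.eye(2) + N)
```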

The matrix exponential has applications to systems of linear differential equations.

Recall from earlier in this article that a homogeneous differential equation of the form

y′ = Ay

has solution e^(At) y(0).

We can express a system of inhomogeneous coupled linear differential equations as

y′(t) = A y(t) + b(t).

The second step is possible due to the fact that, if AB = BA, then e^(At) B = B e^(At).

For the inhomogeneous case, we can use integrating factors (a method akin to variation of parameters).
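For a constant forcing term b and invertible A, the resulting integrating-factor solution has the closed form y(t) = e^(tA) y0 + A^(−1)(e^(tA) − I) b. A sketch (the matrices below are hypothetical, and SciPy's `expm` is assumed):

```python
import numpy as np
from scipy.linalg import expm

# y'(t) = A y(t) + b, y(0) = y0, with constant b and invertible A.
# Variation of parameters gives y(t) = exp(tA) y0 + inv(A) (exp(tA) - I) b.
A = np.array([[-1.0, 0.0],
              [0.0, -2.0]])
b = np.array([1.0, 1.0])
y0 = np.array([0.0, 0.0])
t = 1.0

E = expm(t * A)
y = E @ y0 + np.linalg.solve(A, (E - np.eye(2)) @ b)
```

Because A here is diagonal, each component satisfies the scalar equation y_i′ = a_i y_i + 1 with y_i(0) = 0, whose solution (e^(a_i t) − 1)/a_i provides an independent check.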