Eigenvalue perturbation

In mathematics, eigenvalue perturbation is a perturbation approach to finding the eigenvalues and eigenvectors of a system that is perturbed from one with known eigenvectors and eigenvalues. It is useful for studying how sensitive the original system's eigenvectors and eigenvalues are to changes in the system.

This type of analysis was popularized by Lord Rayleigh in his investigation of harmonic vibrations of a string perturbed by small inhomogeneities.

Generalized eigenvalue problems are less widespread, but they are a key tool in the study of vibrations.

They are useful when we use the Galerkin method or the Rayleigh–Ritz method to find approximate solutions of partial differential equations modeling the vibrations of structures such as strings and plates; the paper of Courant (1943) [2] is fundamental.

The finite element method is a widespread particular case.

In classical mechanics, generalized eigenvalue problems may crop up when we look for the vibrations of systems with multiple degrees of freedom close to equilibrium: the kinetic energy provides the mass matrix $M$, and the potential strain energy provides the stiffness (rigidity) matrix $K$. For further details, see the first section of the article of Weinstein (1941, in French) [3].

With both methods, we obtain a system of differential equations, or matrix differential equation,
$$M \ddot{x} + B \dot{x} + K x = 0,$$
with $M$ the mass matrix, $B$ the damping matrix, and $K$ the stiffness matrix. Neglecting damping and looking for harmonic solutions $x = u\, e^{i\omega t}$ leads to the generalized eigenvalue problem
$$K u = \omega^2 M u.$$
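As a concrete illustration, such a generalized eigenvalue problem can be assembled and solved numerically. The sketch below uses a hypothetical three-mass spring chain (all masses and stiffnesses are illustrative values, not taken from the text): the mass matrix comes from the kinetic energy, the stiffness matrix from the strain energy of the springs, and SciPy solves $K u = \omega^2 M u$.

```python
import numpy as np
from scipy.linalg import eigh

# Hypothetical fixed-free chain of three masses m_i linked by springs k_i.
m = np.array([2.0, 1.0, 1.0])      # masses (kinetic energy -> mass matrix)
k = np.array([100.0, 50.0, 50.0])  # spring stiffnesses (strain energy -> stiffness matrix)

M = np.diag(m)
K = np.array([[k[0] + k[1], -k[1],         0.0],
              [-k[1],        k[1] + k[2], -k[2]],
              [0.0,         -k[2],         k[2]]])

# Generalized eigenvalue problem K u = omega^2 M u for the natural modes
omega2, U = eigh(K, M)
omega = np.sqrt(omega2)  # natural angular frequencies, in ascending order
print(omega)
```

Note that `scipy.linalg.eigh(K, M)` returns eigenvectors scaled so that $U^\top M U = I$, which is exactly the mass-matrix normalization used in the perturbation analysis of this article.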

Suppose we have solutions to the generalized eigenvalue problem
$$K_0\, x_{0i} = \lambda_{0i}\, M_0\, x_{0i}, \qquad i = 1, \dots, N, \qquad (1)$$
where $K_0$ and $M_0$ are known matrices; that is, we know the eigenvalues $\lambda_{0i}$ and eigenvectors $x_{0i}$. Now suppose we change the matrices by small amounts: $K = K_0 + \delta K$ and $M = M_0 + \delta M$.

Then we expect the new eigenvalues and eigenvectors to be similar to the original, plus small perturbations:
$$\lambda_i = \lambda_{0i} + \delta\lambda_i, \qquad x_i = x_{0i} + \delta x_i.$$
We assume that the matrices are symmetric and positive definite, and assume we have scaled the eigenvectors such that
$$x_{0j}^\top M_0\, x_{0i} = \delta_{ij}, \qquad (2)$$
where $\delta_{ij}$ is the Kronecker delta.

Now we want to solve the equation
$$(K_0 + \delta K)(x_{0i} + \delta x_i) = (\lambda_{0i} + \delta\lambda_i)(M_0 + \delta M)(x_{0i} + \delta x_i).$$
In this article we restrict the study to first-order perturbation.

Expanding this equation and canceling the zeroth-order terms by use of equation (1), then removing the terms of second or higher order in the perturbations, this simplifies to
$$K_0\, \delta x_i + \delta K\, x_{0i} = \lambda_{0i}\, M_0\, \delta x_i + \lambda_{0i}\, \delta M\, x_{0i} + \delta\lambda_i\, M_0\, x_{0i}. \qquad (3)$$
As the matrix $M_0$ is symmetric, the unperturbed eigenvectors are $M_0$-orthogonal, so we can use them as a basis for the perturbation of the eigenvector:
$$\delta x_i = \sum_{j=1}^{N} \varepsilon_{ij}\, x_{0j}, \qquad (4)$$
where the $\varepsilon_{ij}$ are small constants to be determined.

In the same way, substituting the perturbed quantities in (2) and removing higher-order terms, we get
$$\delta x_j^\top M_0\, x_{0i} + x_{0j}^\top M_0\, \delta x_i + x_{0j}^\top\, \delta M\, x_{0i} = 0.$$

The first-order expansion derived here should be compared with the Bauer–Fike theorem, which provides a bound for eigenvalue perturbation.

Substituting (4) into (3) and rearranging gives
$$K_0 \sum_{j=1}^{N} \varepsilon_{ij}\, x_{0j} + \delta K\, x_{0i} = \lambda_{0i}\, M_0 \sum_{j=1}^{N} \varepsilon_{ij}\, x_{0j} + \lambda_{0i}\, \delta M\, x_{0i} + \delta\lambda_i\, M_0\, x_{0i}. \qquad (5)$$
Because the eigenvectors are $M_0$-orthogonal when $M_0$ is positive definite, we can remove the summations by left-multiplying by $x_{0i}^\top$:
$$x_{0i}^\top K_0\, \varepsilon_{ii}\, x_{0i} + x_{0i}^\top \delta K\, x_{0i} = \lambda_{0i}\, x_{0i}^\top M_0\, \varepsilon_{ii}\, x_{0i} + \lambda_{0i}\, x_{0i}^\top \delta M\, x_{0i} + \delta\lambda_i\, x_{0i}^\top M_0\, x_{0i}. \qquad (6)$$

By use of equation (1), the first term on the left-hand side of (6) equals $\varepsilon_{ii}\, \lambda_{0i}\, x_{0i}^\top M_0\, x_{0i}$, which is identical to the first term on the right-hand side. Canceling those terms in (6) leaves
$$x_{0i}^\top \delta K\, x_{0i} = \lambda_{0i}\, x_{0i}^\top \delta M\, x_{0i} + \delta\lambda_i\, x_{0i}^\top M_0\, x_{0i}.$$
Rearranging gives
$$\delta\lambda_i = \frac{x_{0i}^\top (\delta K - \lambda_{0i}\, \delta M)\, x_{0i}}{x_{0i}^\top M_0\, x_{0i}}.$$
But by (2), this denominator is equal to 1. Thus
$$\delta\lambda_i = x_{0i}^\top (\delta K - \lambda_{0i}\, \delta M)\, x_{0i}.$$

We then get the coefficients $\varepsilon_{ij}$ for $j \neq i$ (here we use the assumption of simple eigenvalues) by left-multiplying equation (5) by $x_{0j}^\top$:
$$\varepsilon_{ij} = \frac{x_{0j}^\top (\delta K - \lambda_{0i}\, \delta M)\, x_{0i}}{\lambda_{0i} - \lambda_{0j}}, \qquad j \neq i.$$
Or, by changing the name of the indices:
$$\varepsilon_{ji} = \frac{x_{0i}^\top (\delta K - \lambda_{0j}\, \delta M)\, x_{0j}}{\lambda_{0j} - \lambda_{0i}}, \qquad j \neq i.$$
To find $\varepsilon_{ii}$, use the fact that
$$x_i^\top M\, x_i = 1$$
implies
$$\varepsilon_{ii} = -\tfrac{1}{2}\, x_{0i}^\top\, \delta M\, x_{0i}.$$
Summarizing: in the case where all the matrices are Hermitian positive definite and all the eigenvalues are distinct, for infinitesimal $\delta K$ and $\delta M$ we obtain, to first order,
$$\lambda_i = \lambda_{0i} + x_{0i}^\top (\delta K - \lambda_{0i}\, \delta M)\, x_{0i},$$
$$x_i = x_{0i}\left(1 - \tfrac{1}{2}\, x_{0i}^\top\, \delta M\, x_{0i}\right) + \sum_{j \neq i} \frac{x_{0j}^\top (\delta K - \lambda_{0i}\, \delta M)\, x_{0i}}{\lambda_{0i} - \lambda_{0j}}\, x_{0j}.$$
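The first-order eigenvalue result, $\delta\lambda_i = x_{0i}^\top (\delta K - \lambda_{0i}\, \delta M)\, x_{0i}$, can be checked numerically. The sketch below (random symmetric positive definite $K_0$ and $M_0$; sizes, seed, and perturbation scale are arbitrary illustrative choices) compares the predicted eigenvalues of a slightly perturbed pencil with the exact ones; the discrepancy is of second order in the perturbation.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
n = 4

def spd(n):
    # random symmetric positive definite matrix
    A = rng.standard_normal((n, n))
    return A @ A.T + n * np.eye(n)

K0, M0 = spd(n), spd(n)
lam0, X0 = eigh(K0, M0)   # eigenvectors normalized so that X0.T @ M0 @ X0 = I

eps = 1e-6
dK = rng.standard_normal((n, n))
dK = eps * (dK + dK.T) / 2   # small symmetric perturbation of K
dM = rng.standard_normal((n, n))
dM = eps * (dM + dM.T) / 2   # small symmetric perturbation of M

# first-order prediction: delta lambda_i = x0i' (dK - lam0i dM) x0i
dlam = np.array([X0[:, i] @ (dK - lam0[i] * dM) @ X0[:, i] for i in range(n)])
lam_pred = lam0 + dlam

# exact eigenvalues of the perturbed problem; agreement is O(eps^2)
lam, _ = eigh(K0 + dK, M0 + dM)
print(np.max(np.abs(lam - lam_pred)))
```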

So far, we have not proved that these higher-order terms may be neglected.

This point may be justified using the implicit function theorem; in the next section, we summarize the use of this theorem in order to obtain a first-order expansion.

Define $f(\lambda, x; K, M) = \big(Kx - \lambda M x,\; x^\top M x - 1\big)$, so that the normalized eigenpairs are exactly the zeros of $f$. As soon as the hypothesis of the theorem is satisfied, the first-order perturbation $(\delta\lambda_i, \delta x_i)$ is provided by the linear system
$$(K_0 - \lambda_{0i} M_0)\, \delta x_i - \delta\lambda_i\, M_0\, x_{0i} = -(\delta K - \lambda_{0i}\, \delta M)\, x_{0i},$$
$$2\, x_{0i}^\top M_0\, \delta x_i = -\, x_{0i}^\top\, \delta M\, x_{0i},$$
whose matrix is the Jacobian matrix of $f$ with respect to $(\lambda, x)$.

In order to use the implicit function theorem, we study the invertibility of this Jacobian, that is, of the linear map
$$(\delta\lambda, \delta x) \mapsto \big((K_0 - \lambda_{0i} M_0)\, \delta x - \delta\lambda\, M_0\, x_{0i},\;\; 2\, x_{0i}^\top M_0\, \delta x\big).$$

Since the unperturbed eigenvectors form an $M_0$-orthonormal basis, we can expand $\delta x$ on this basis; for any right-hand side we then obtain exactly one solution: the component of $\delta x$ along $x_{0i}$ is fixed by the normalization equation, the other components by the assumption of simple eigenvalues, and $\delta\lambda$ by the projection onto $x_{0i}$. Therefore, the Jacobian is invertible.

This is the first-order expansion of the perturbed eigenvalues and eigenvectors.
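The linear system given by the implicit function theorem can also be solved directly, which yields the first-order perturbation of one eigenpair without using the closed-form eigenvector expansion. The sketch below packages the two equations as one bordered $(n{+}1)\times(n{+}1)$ system (this particular packaging, and all matrices and seeds, are our illustrative choices) and checks the eigenvalue part against the closed-form expression.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(2)
n = 3

def spd(n):
    # random symmetric positive definite matrix
    A = rng.standard_normal((n, n))
    return A @ A.T + n * np.eye(n)

K0, M0 = spd(n), spd(n)
lam0, X0 = eigh(K0, M0)

i = 1                      # which eigenpair to perturb
x0 = X0[:, i]

eps = 1e-7
dK = rng.standard_normal((n, n)); dK = eps * (dK + dK.T)
dM = rng.standard_normal((n, n)); dM = eps * (dM + dM.T)

# Bordered system in the unknowns (delta x, delta lambda):
#   (K0 - lam0i M0) dx - dlam M0 x0 = -(dK - lam0i dM) x0
#   2 x0' M0 dx                     = -x0' dM x0
J = np.zeros((n + 1, n + 1))
J[:n, :n] = K0 - lam0[i] * M0
J[:n, n] = -(M0 @ x0)
J[n, :n] = 2 * (M0 @ x0)
rhs = np.concatenate([-(dK - lam0[i] * dM) @ x0, [-(x0 @ dM @ x0)]])

sol = np.linalg.solve(J, rhs)   # invertible when lambda_0i is simple
dx, dlam = sol[:n], sol[n]

# agrees with the closed-form first-order eigenvalue perturbation
print(dlam, x0 @ (dK - lam0[i] * dM) @ x0)
```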

This means it is possible to efficiently do a sensitivity analysis on $\lambda_i$ as a function of changes in the entries of the matrices. (Recall that the matrices are symmetric, so changing the off-diagonal entry $K_{(k\ell)}$ also changes $K_{(\ell k)}$.) To first order,
$$\frac{\partial \lambda_i}{\partial K_{(k\ell)}} = x_{0i(k)}\, x_{0i(\ell)}\, \big(2 - \delta_{k\ell}\big), \qquad \frac{\partial \lambda_i}{\partial M_{(k\ell)}} = -\lambda_i\, x_{0i(k)}\, x_{0i(\ell)}\, \big(2 - \delta_{k\ell}\big).$$
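Such a sensitivity can be checked against a finite difference. For the symmetric pair of entries $K_{(k\ell)} = K_{(\ell k)}$ treated as a single parameter, the first-order formula gives $\partial \lambda_i / \partial K_{(k\ell)} = x_{0i(k)}\, x_{0i(\ell)} (2 - \delta_{k\ell})$; the sketch below (random SPD matrices and indices are illustrative) compares it with a numerical derivative.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(1)
n = 3
A = rng.standard_normal((n, n)); K0 = A @ A.T + n * np.eye(n)
B = rng.standard_normal((n, n)); M0 = B @ B.T + n * np.eye(n)
lam0, X0 = eigh(K0, M0)          # normalized so that X0.T @ M0 @ X0 = I

i, k, l = 0, 0, 2                # sensitivity of lambda_0 to the pair K[0, 2] = K[2, 0]
pred = X0[k, i] * X0[l, i] * (2 - (k == l))   # first-order sensitivity formula

# finite-difference check: bump both symmetric entries together by h
h = 1e-6
dK = np.zeros((n, n))
dK[k, l] = h
dK[l, k] = h
lam, _ = eigh(K0 + dK, M0)
fd = (lam[i] - lam0[i]) / h

print(pred, fd)   # the two values agree up to O(h)
```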

Carrying out such computations by hand quickly becomes tedious; however, you can compute eigenvalues and eigenvectors with the help of online tools such as [1] (see the introduction in WIMS) or using SageMath.

Note that in the above example we assumed that both the unperturbed and the perturbed systems involved symmetric matrices, which guaranteed the existence of $N$ linearly independent eigenvectors.

An eigenvalue problem involving non-symmetric matrices is not guaranteed to have $N$ linearly independent eigenvectors, and the perturbed eigenpairs need not even depend continuously on the perturbation.

A technical report of Rellich [4] on perturbation of eigenvalue problems provides several examples.

« Since in general the individual eigenvectors do not depend continuously on the perturbation parameter even though the operator $A(\varepsilon)$ does, [...] »
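This phenomenon can be made concrete with a matrix family of the kind discussed in this context (the specific coefficients below are our illustrative choice, not necessarily Rellich's exact example): $A(\varepsilon) = e^{-1/\varepsilon^2} \begin{pmatrix} \cos(2/\varepsilon) & \sin(2/\varepsilon) \\ \sin(2/\varepsilon) & -\cos(2/\varepsilon) \end{pmatrix}$ with $A(0) = 0$. The family is smooth in $\varepsilon$ (its entries vanish faster than any power of $\varepsilon$), yet its eigenvectors, $(\cos(1/\varepsilon), \sin(1/\varepsilon))$ and $(-\sin(1/\varepsilon), \cos(1/\varepsilon))$, oscillate without limit as $\varepsilon \to 0$.

```python
import numpy as np

def A(eps):
    # entries vanish faster than any power of eps, so the family is smooth at 0
    t = 1.0 / eps
    return np.exp(-t * t) * np.array([[np.cos(2 * t),  np.sin(2 * t)],
                                      [np.sin(2 * t), -np.cos(2 * t)]])

def top_eigvec(eps):
    # eigenvector of the larger eigenvalue; analytically (cos(1/eps), sin(1/eps))
    w, V = np.linalg.eigh(A(eps))
    return V[:, np.argmax(w)]

# Two sequences eps -> 0 pin the eigenvector to two different directions:
v1 = top_eigvec(1.0 / (2.0 * np.pi))   # 1/eps = 2*pi   -> eigenvector near (1, 0)
v2 = top_eigvec(1.0 / (2.5 * np.pi))   # 1/eps = 5*pi/2 -> eigenvector near (0, 1)
print(v1, v2)
```

Shrinking $\varepsilon$ along $1/(2\pi n)$ keeps the eigenvector near $(1, 0)$, while shrinking it along $1/(\pi/2 + 2\pi n)$ keeps it near $(0, 1)$: the eigenvectors have no limit even though $A(\varepsilon) \to 0$ smoothly.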