It appears to have originally been developed to compute resonance frequencies in the field of structural mechanics.[1]
The inverse power iteration algorithm starts with an approximation $\mu$ for the eigenvalue corresponding to the desired eigenvector and a vector $b_0$, either a randomly selected vector or an approximation to the eigenvector. The method is described by the iteration

$$b_{k+1} = \frac{(A - \mu I)^{-1} b_k}{C_k},$$

where $C_k$ are normalizing constants, usually chosen as $C_k = \|(A - \mu I)^{-1} b_k\|$. It is exactly the same formula as in the power method, except that the matrix $A$ is replaced by $(A - \mu I)^{-1}$. The closer the approximation $\mu$ is to the eigenvalue, the faster the algorithm converges; however, an incorrect choice of $\mu$ can lead to slow convergence, or to convergence to an eigenvector other than the one desired. In practice, the method is used when a good approximation for the eigenvalue is known, and hence one needs only a few (quite often just one) iterations.
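As an illustration, here is a minimal NumPy sketch of the iteration above (function and variable names are illustrative, not from any particular library); each step solves the shifted linear system rather than forming the inverse explicitly:

```python
import numpy as np

def inverse_iteration(A, mu, b0, iters=1):
    """Inverse iteration with fixed shift mu: solve (A - mu*I) x = b_k,
    then normalize, so x equals (A - mu*I)^{-1} b_k up to the constant C_k."""
    M = A - mu * np.eye(A.shape[0])
    b = b0 / np.linalg.norm(b0)
    for _ in range(iters):
        b = np.linalg.solve(M, b)      # apply (A - mu*I)^{-1} to b_k
        b /= np.linalg.norm(b)         # C_k = ||(A - mu*I)^{-1} b_k||
    return b
```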
The basic idea of the power iteration is choosing an initial vector $b$ (either an eigenvector approximation or a random vector) and iteratively calculating $Ab, A^2b, A^3b, \dots$.
Except for a set of zero measure, for any initial vector, the result will converge to an eigenvector corresponding to the dominant eigenvalue.
The inverse iteration applies the same idea to the matrix $(A - \mu I)^{-1}$, so it converges to the eigenvector corresponding to the dominant eigenvalue of the matrix $(A - \mu I)^{-1}$. The eigenvalues of this matrix are $(\lambda_1 - \mu)^{-1}, \dots, (\lambda_n - \mu)^{-1}$, where $\lambda_i$ are the eigenvalues of $A$, and its eigenvectors coincide with those of $A$; the dominant one corresponds to the eigenvalue of $A$ closest to $\mu$, so the method converges to the eigenvector of $A$ whose eigenvalue is closest to $\mu$.
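To make this equivalence concrete, the following sketch (NumPy assumed, names hypothetical) is plain power iteration; feeding it the explicitly inverted shifted matrix reproduces the iterates of the inverse iteration sketch above:

```python
import numpy as np

def power_iteration(M, b0, iters=50):
    """Plain power iteration: repeatedly apply M and normalize,
    computing (up to scaling) M b, M^2 b, M^3 b, ..."""
    b = b0 / np.linalg.norm(b0)
    for _ in range(iters):
        b = M @ b
        b /= np.linalg.norm(b)
    return b

# Inverse iteration is power iteration applied to (A - mu*I)^{-1}:
# v = power_iteration(np.linalg.inv(A - mu * np.eye(len(A))), b0)
```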
The power method is known to converge linearly to the limit, more precisely:

$$\mathrm{Distance}\!\left(b^{\mathrm{ideal}},\, b_k^{\mathrm{PM}}\right) = O\!\left(\left|\frac{\lambda_{\text{subdominant}}}{\lambda_{\text{dominant}}}\right|^{k}\right),$$

hence for the inverse iteration method the analogous result reads:

$$\mathrm{Distance}\!\left(b^{\mathrm{ideal}},\, b_k^{\mathrm{II}}\right) = O\!\left(\left|\frac{\mu - \lambda_{\text{closest to }\mu}}{\mu - \lambda_{\text{second closest to }\mu}}\right|^{k}\right).$$
There are two options: one may choose an algorithm that solves a linear system at each iteration, or one may calculate the inverse $(A - \mu I)^{-1}$ and then apply it to the vector. Both options have complexity $O(n^3)$; the exact number of operations depends on the chosen method.
If the shift $\mu$ is kept fixed, the $O(n^3)$ work needs to be done only once. Calculating the inverse matrix once, and storing it to apply at each iteration, is of complexity $O(n^3) + k\,O(n^2)$. Storing an LU decomposition of $(A - \mu I)$ and using forward and back substitution to solve the system of equations at each iteration is also of complexity $O(n^3) + k\,O(n^2)$.
Inverting the matrix will typically have a greater initial cost but a lower cost at each iteration; conversely, solving systems of linear equations will typically have a lesser initial cost, but require more operations for each iteration.
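A sketch of the factor-once approach, using SciPy's LU routines (the wrapper name is hypothetical): the factorization is computed a single time and only the triangular solves are repeated, giving the $O(n^3) + k\,O(n^2)$ cost mentioned above.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def inverse_iteration_lu(A, mu, b0, iters=5):
    """Factor (A - mu*I) once (O(n^3)); each subsequent iteration only
    performs forward/back substitution (O(n^2))."""
    lu_piv = lu_factor(A - mu * np.eye(A.shape[0]))
    b = b0 / np.linalg.norm(b0)
    for _ in range(iters):
        b = lu_solve(lu_piv, b)        # forward and back substitution
        b /= np.linalg.norm(b)
    return b
```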
If it is necessary to perform many iterations (or few iterations, but for many eigenvectors), then it might be wise to bring the matrix to upper Hessenberg form first (for a symmetric matrix this will be the tridiagonal form). This costs $\tfrac{10}{3}n^3 + O(n^2)$ arithmetic operations using a technique based on Householder reduction, with a finite sequence of orthogonal similarity transforms, somewhat like a two-sided QR decomposition. For symmetric matrices this procedure costs $\tfrac{4}{3}n^3 + O(n^2)$ arithmetic operations using a technique based on Householder reduction.[2][3] Solution of the system of linear equations for the tridiagonal matrix then costs $O(n)$ operations per iteration, so the total complexity grows like $O(n^3) + k\,O(n)$, where $k$ is the number of iterations.
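A sketch of this approach for a symmetric matrix, using SciPy's hessenberg and solve_banded routines (the wrapper name and parameters are illustrative): the reduction is done once, and each iteration solves a tridiagonal system in $O(n)$.

```python
import numpy as np
from scipy.linalg import hessenberg, solve_banded

def inverse_iteration_tridiagonal(A, mu, iters=3, seed=0):
    """For symmetric A: reduce once to tridiagonal T = Q^T A Q, then run
    inverse iteration on T with an O(n) banded solve per step."""
    n = A.shape[0]
    T, Q = hessenberg(A, calc_q=True)       # tridiagonal when A is symmetric

    # Banded storage of (T - mu*I): rows hold super-, main and sub-diagonal.
    ab = np.zeros((3, n))
    ab[0, 1:] = np.diag(T, 1)
    ab[1, :] = np.diag(T) - mu
    ab[2, :-1] = np.diag(T, -1)

    y = np.random.default_rng(seed).standard_normal(n)
    for _ in range(iters):
        y = solve_banded((1, 1), ab, y)      # O(n) tridiagonal solve
        y /= np.linalg.norm(y)
    return Q @ y                             # eigenvector in the original basis
```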
Also transformation to the Hessenberg form involves square roots and the division operation, which are not universally supported by hardware.
On general-purpose processors (e.g. those produced by Intel) the execution time of addition, multiplication and division is approximately equal. But on embedded and/or low-power hardware (digital signal processors, FPGAs, ASICs) division may not be supported in hardware, and so should be avoided.
When implementing the algorithm using fixed-point arithmetic, the choice of the constant $C_k$ is especially important: small values will lead to fast growth of the norm of $b_k$ and to overflow, while large values will cause the vector $b_k$ to tend to zero.
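One way to sidestep division in this setting (an assumption for illustration, not prescribed above) is to pick $C_k$ as a power of two, so that normalization becomes a right shift. A minimal fixed-point sketch of that normalization step, with hypothetical names and word length:

```python
def normalize_pow2(b_fixed, target_bits=15):
    """Rescale an integer (fixed-point) vector by C_k = 2^s without division:
    choose s so the largest entry fits in target_bits, then right-shift."""
    m = int(max(abs(x) for x in b_fixed))
    s = max(0, m.bit_length() - target_bits)   # shift amount, C_k = 2**s
    return [x >> s for x in b_fixed]           # arithmetic shift, no division
```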
Typically, the method is used in combination with some other method that finds approximate eigenvalues: the standard example is the bisection eigenvalue algorithm. Another example is Rayleigh quotient iteration, which is actually the same inverse iteration with the approximate eigenvalue chosen at each step as the Rayleigh quotient of the vector obtained on the previous step of the iteration.
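A sketch of Rayleigh quotient iteration for a symmetric matrix (NumPy assumed, names hypothetical): because the shift is re-estimated from the current iterate at every step, the shifted matrix cannot be pre-factored, but convergence near an eigenpair is very fast.

```python
import numpy as np

def rayleigh_quotient_iteration(A, b0, iters=5):
    """Inverse iteration whose shift is the Rayleigh quotient of the current
    unit-norm iterate; for symmetric A this converges cubically near an eigenpair."""
    n = A.shape[0]
    b = b0 / np.linalg.norm(b0)
    for _ in range(iters):
        mu = b @ A @ b                              # Rayleigh quotient
        b = np.linalg.solve(A - mu * np.eye(n), b)  # one inverse iteration step
        b /= np.linalg.norm(b)                      # near-singular solves still
                                                    # point along the eigenvector
    return b, b @ A @ b                             # eigenvector, eigenvalue estimate
```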
For any induced matrix norm and any eigenvalue $\lambda$ of $A$ one has $\|A\| \ge |\lambda|$, so taking the norm of the matrix as an approximate eigenvalue, one can see that the method will converge to the dominant eigenvector.
In some real-time applications one needs to find eigenvectors for a very large number of matrices per second; in such applications the statistics of the matrices is typically known in advance, and one can take as an approximate eigenvalue the average eigenvalue over some large sample of matrices. Better still, one may calculate the mean ratio of the eigenvalues to the trace or the norm of the matrix, and estimate the average eigenvalue as the trace or norm multiplied by the average value of that ratio.
This approach of estimating an average eigenvalue can be combined with other methods to avoid excessively large error.
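As a concrete reading of this recipe (assuming symmetric matrices and a chosen eigenvalue of interest, here the largest; names are hypothetical):

```python
import numpy as np

def mean_eigenvalue_ratio(sample, which=-1):
    """Mean ratio of a chosen eigenvalue (largest by default) to the trace,
    computed over a sample of symmetric matrices."""
    return float(np.mean([np.linalg.eigvalsh(A)[which] / np.trace(A)
                          for A in sample]))

# For a new matrix A drawn from the same population, the approximate
# eigenvalue used to seed inverse iteration would be:
#   mu = mean_eigenvalue_ratio(sample) * np.trace(A)
```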