In numerical linear algebra, the Bartels–Stewart algorithm is used to numerically solve the Sylvester matrix equation $AX - XB = C$, where $A \in \mathbb{R}^{m \times m}$, $B \in \mathbb{R}^{n \times n}$, and $X, C \in \mathbb{R}^{m \times n}$.
Developed by R.H. Bartels and G.W. Stewart in 1971,[1] it was the first numerically stable method that could be systematically applied to solve such equations.
The algorithm works by using the real Schur decompositions of $A$ and $B$ to transform $AX - XB = C$
into a triangular system that can then be solved using forward or backward substitution.
In 1979, G. Golub, C. Van Loan and S. Nash introduced an improved version of the algorithm,[2] known as the Hessenberg–Schur algorithm.
It remains a standard approach for solving Sylvester equations when $X$ is of small to moderate size.
Let $X, C \in \mathbb{R}^{m \times n}$ and assume that the eigenvalues of $A$ are distinct from the eigenvalues of $B$. Then, the matrix equation $AX - XB = C$ has a unique solution.
The Bartels–Stewart algorithm computes $X$ by applying the following steps:[2]

1. Compute the real Schur decompositions

$$R = U^T A U, \qquad S = V^T B^T V.$$

The matrices $R$ and $S$ are block-upper triangular matrices, with diagonal blocks of size $1 \times 1$ or $2 \times 2$.

2. Set $F = U^T C V$.
3. Solve the simplified system $RY - YS^T = F$, where $Y = U^T X V$. This can be done using forward substitution on the blocks. Specifically, if $s_{k-1,k} = 0$, then

$$\left(R - s_{kk} I\right) y_k = f_k + \sum_{j=k+1}^{n} s_{kj} y_j,$$

where $y_k$ is the $k$th column of $Y$. When $s_{k-1,k} \neq 0$, the columns $[y_{k-1} \mid y_k]$ should be concatenated and solved for simultaneously.

4. Set $X = U Y V^T$.
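For concreteness, the following Python sketch (assuming NumPy and SciPy are available; the function name bartels_stewart is illustrative) carries out these four steps. It uses the complex Schur form, so every diagonal block is $1 \times 1$ and the $2 \times 2$ block handling of the real variant is avoided; it is a minimal sketch under these assumptions, not a production implementation.

```python
import numpy as np
from scipy.linalg import schur

def bartels_stewart(A, B, C):
    """Solve AX - XB = C via Schur decompositions (complex form for simplicity)."""
    m, n = C.shape
    # Step 1: Schur decompositions A = U R U^H and B^H = V S V^H,
    # with R and S upper triangular.
    R, U = schur(A, output='complex')
    S, V = schur(B.conj().T, output='complex')
    # Step 2: transform the right-hand side.
    F = U.conj().T @ C @ V
    # Step 3: solve R Y - Y S^H = F column by column, last column first.
    Y = np.zeros((m, n), dtype=complex)
    I = np.eye(m)
    for k in range(n - 1, -1, -1):
        rhs = F[:, k] + Y[:, k + 1:] @ S[k, k + 1:].conj()
        Y[:, k] = np.linalg.solve(R - S[k, k].conj() * I, rhs)
    # Step 4: transform back.
    X = U @ Y @ V.conj().T
    # For real A, B, C the imaginary part is only rounding error.
    return X.real if all(np.isrealobj(M) for M in (A, B, C)) else X

# Quick check on random data (A and B almost surely share no eigenvalues).
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
B = rng.standard_normal((4, 4))
C = rng.standard_normal((5, 4))
X = bartels_stewart(A, B, C)
print(np.allclose(A @ X - X @ B, C))  # True
```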
Using the QR algorithm, the real Schur decompositions in step 1 require approximately $10(m^3 + n^3)$ flops, so that the overall computational cost is $\mathcal{O}(m^3 + n^3)$.
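As a rough, illustrative estimate (not a figure from the cited papers): for $m = n = 1000$, the Schur step alone accounts for about $10(m^3 + n^3) = 2 \times 10^{10}$ flops, and since the cost grows cubically, doubling both dimensions multiplies the work by roughly a factor of eight.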
In the special case where $B = -A^T$ and $C$ is symmetric, the solution $X$ will also be symmetric. This symmetry can be exploited so that $Y$ is found more efficiently in step 3 of the algorithm.[1]
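As an illustration of this special case (reusing the bartels_stewart sketch above, which is an assumption of this example rather than part of the original algorithm description), setting $B = -A^T$ with a symmetric $C$ yields a symmetric solution; SciPy's scipy.linalg.solve_continuous_lyapunov targets this Lyapunov-type equation directly.

```python
import numpy as np

# Lyapunov-type special case: B = -A^T and C symmetric give a symmetric X.
rng = np.random.default_rng(1)
A = rng.standard_normal((6, 6))
M = rng.standard_normal((6, 6))
C = M + M.T                      # symmetric right-hand side
X = bartels_stewart(A, -A.T, C)  # solves AX + XA^T = C
print(np.allclose(X, X.T))       # True: the symmetry that step 3 can exploit
```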
The Hessenberg–Schur algorithm[2] replaces the decomposition $R = U^T A U$ in step 1 with the decomposition $H = Q^T A Q$, where $H$ is an upper Hessenberg matrix.
This leads to a system of the form $HY - YS^T = F$
that can be solved using forward substitution.
The advantage of this approach is that $H = Q^T A Q$ can be found using Householder reflections at a cost of $\tfrac{5}{3} m^3$ flops, compared to the $10 m^3$ flops required to compute the real Schur decomposition of $A$.
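A corresponding sketch of the Hessenberg–Schur variant (again in Python with SciPy, and again only illustrative) differs from the earlier sketch only in step 1: $A$ is reduced to Hessenberg form, while a Schur decomposition is still computed for $B^T$.

```python
import numpy as np
from scipy.linalg import hessenberg, schur

def hessenberg_schur(A, B, C):
    """Solve AX - XB = C with A reduced only to upper Hessenberg form (real inputs assumed)."""
    m, n = C.shape
    H, Q = hessenberg(A, calc_q=True)           # A = Q H Q^T, H upper Hessenberg
    S, V = schur(B.conj().T, output='complex')  # complex Schur form of B^H
    F = Q.conj().T @ C @ V
    Y = np.zeros((m, n), dtype=complex)
    I = np.eye(m)
    for k in range(n - 1, -1, -1):
        rhs = F[:, k] + Y[:, k + 1:] @ S[k, k + 1:].conj()
        # (H - s_kk I) is upper Hessenberg; a careful implementation solves it
        # in O(m^2) per column (e.g. with Givens rotations). A dense solve is
        # used here only to keep the sketch short.
        Y[:, k] = np.linalg.solve(H - S[k, k].conj() * I, rhs)
    return (Q @ Y @ V.conj().T).real
```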
The subroutines required for the Hessenberg–Schur variant of the Bartels–Stewart algorithm are implemented in the SLICOT library.
These are used in the MATLAB Control System Toolbox.
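Outside of MATLAB, SciPy's scipy.linalg.solve_sylvester provides a comparable dense solver built on LAPACK routines. The snippet below is an illustrative usage, not part of SLICOT or the MATLAB toolbox; note that solve_sylvester uses the sign convention $AX + XB = Q$, so solving $AX - XB = C$ means passing $-B$.

```python
import numpy as np
from scipy.linalg import solve_sylvester

rng = np.random.default_rng(2)
A = rng.standard_normal((5, 5))
B = rng.standard_normal((4, 4))
C = rng.standard_normal((5, 4))

# solve_sylvester solves AX + XB = Q, so pass -B to match AX - XB = C.
X = solve_sylvester(A, -B, C)
print(np.allclose(A @ X - X @ B, C))  # True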
For large $m$ and $n$, the $\mathcal{O}(m^3 + n^3)$ cost of the Bartels–Stewart algorithm can be prohibitive. When $A$ and $B$ are sparse or structured, so that linear solves and matrix-vector multiplies involving them are efficient, iterative algorithms can potentially perform better.
These include projection-based methods, which use Krylov subspace iterations, methods based on the alternating direction implicit (ADI) iteration, and hybridizations that involve both projection and ADI.[3] Iterative methods can also be used to directly construct low-rank approximations to $X$ when solving $AX - XB = C$.