Gauss–Seidel method

In numerical linear algebra, the Gauss–Seidel method, also known as the method of successive displacement, is an iterative method used to solve a square system of linear equations. It is named after the German mathematicians Carl Friedrich Gauss and Philipp Ludwig von Seidel.

Though it can be applied to any matrix with non-zero elements on the diagonals, convergence is only guaranteed if the matrix is either strictly diagonally dominant,[1] or symmetric and positive definite.

It was only mentioned in a private letter from Gauss to his student Gerling in 1823.

Let

    Ax = b

be a square system of n linear equations, where A is the n×n coefficient matrix, x is the vector of unknowns, and b is the known right-hand-side vector.

The matrix A is decomposed into a lower triangular component L_* and a strictly upper triangular component U, so that

    A = L_* + U.

The system of linear equations may be rewritten as:

    L_* x = b - U x.

The Gauss–Seidel method now solves the left hand side of this expression for x, using the previous value of x on the right hand side. Analytically, this may be written as

    x^{(k+1)} = L_*^{-1} (b - U x^{(k)}).

However, by taking advantage of the triangular form of L_*, the elements of x^{(k+1)} can be computed sequentially using forward substitution:

    x_i^{(k+1)} = (1 / a_ii) (b_i - Σ_{j<i} a_ij x_j^{(k+1)} - Σ_{j>i} a_ij x_j^{(k)}),    i = 1, 2, ..., n.

The procedure is generally continued until the changes made by an iteration are below some tolerance, such as a sufficiently small residual.
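As a concrete sketch of this procedure, the loop below implements the forward-substitution update together with a residual-based stopping rule in plain Python. The function name and the default tolerance are our own illustrative choices, not part of any standard library.

```python
def gauss_seidel(A, b, x0=None, tol=1e-10, max_iter=1000):
    """Sketch of Gauss-Seidel iteration; A is a list of rows, b a list."""
    n = len(A)
    x = list(x0) if x0 is not None else [0.0] * n
    for _ in range(max_iter):
        for i in range(n):
            # Entries x[j] with j < i were already updated this sweep;
            # entries with j > i still hold the previous iterate.
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]
        # Stop once the residual ||b - Ax|| is sufficiently small.
        residual = max(abs(b[i] - sum(A[i][j] * x[j] for j in range(n)))
                       for i in range(n))
        if residual < tol:
            break
    return x

# Example with a strictly diagonally dominant system (exact solution (2, 1)):
x = gauss_seidel([[4.0, 1.0], [2.0, 5.0]], [9.0, 9.0])
# x is approximately [2.0, 1.0]
```

Because each x[i] is overwritten as soon as it is computed, the sketch needs only the single storage vector x, in contrast to a Jacobi implementation, which must keep the previous iterate intact during a sweep.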

The element-wise formula for the Gauss–Seidel method is related to that of the (iterative) Jacobi method, with an important difference: in Gauss–Seidel, the computation of x_i^{(k+1)} uses the elements of x^{(k+1)} that have already been computed, and only those elements of x^{(k)} that have not yet been updated in the (k+1)-th iteration.

This means that, unlike the Jacobi method, only one storage vector is required as elements can be overwritten as they are computed, which can be advantageous for very large problems.

However, unlike the Jacobi method, the computations for each element are generally much harder to implement in parallel, since they can have a very long critical path, and are thus most feasible for sparse matrices.

Furthermore, the values at each iteration are dependent on the order of the original equations.

The convergence properties of the Gauss–Seidel method are dependent on the matrix A. Namely, the procedure is known to converge if A is either symmetric positive-definite, or strictly or irreducibly diagonally dominant; the method sometimes converges even when these conditions are not satisfied.

Golub and Van Loan give a theorem for an algorithm that splits A into two parts, A = M - N. Suppose M is nonsingular and let ρ(M^{-1}N) denote the spectral radius of M^{-1}N. Then the iterates x^{(k)} defined by M x^{(k+1)} = N x^{(k)} + b converge to the solution x = A^{-1} b for any starting vector x^{(0)} if ρ(M^{-1}N) < 1.
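For the Gauss–Seidel splitting one may take M = L_* and N = -U, so the iteration matrix is T = -L_*^{-1} U and convergence is governed by ρ(T) < 1. The sketch below checks this condition for a 2×2 system; the helper names are ours, and the closed-form entries of T are our own derivation under the assumption A = [[a, b], [c, d]] with a, d ≠ 0.

```python
import math

def spectral_radius_2x2(T):
    # Eigenvalues of a 2x2 matrix from the characteristic polynomial
    # lambda^2 - tr(T) lambda + det(T) = 0 (illustrative helper only).
    tr = T[0][0] + T[1][1]
    det = T[0][0] * T[1][1] - T[0][1] * T[1][0]
    disc = tr * tr - 4.0 * det
    if disc >= 0:
        r = math.sqrt(disc)
        return max(abs((tr + r) / 2.0), abs((tr - r) / 2.0))
    return math.sqrt(det)  # complex pair: |lambda| = sqrt(det)

def gs_iteration_matrix_2x2(A):
    # For A = L* + U with L* = [[a,0],[c,d]] and U = [[0,b],[0,0]],
    # forward substitution gives T = -L*^{-1} U = [[0, -b/a], [0, cb/(ad)]].
    a, b = A[0]
    c, d = A[1]
    return [[0.0, -b / a], [0.0, c * b / (a * d)]]

rho = spectral_radius_2x2(gs_iteration_matrix_2x2([[4.0, 1.0], [2.0, 5.0]]))
# rho is 0.1, which is < 1, so Gauss-Seidel converges for this system
```

Since one eigenvalue of T is always zero here, the spectral radius for the 2×2 case reduces to |cb/(ad)|, which makes the diagonal-dominance intuition visible: large diagonal entries a and d shrink ρ(T).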

[8] Since elements can be overwritten as they are computed in this algorithm, only one storage vector is needed, and vector indexing is omitted.

The algorithm goes as follows:

    Inputs: A, b
    Output: φ
    Choose an initial guess φ for the solution.
    repeat until convergence
        for i from 1 until n do
            σ ← 0
            for j from 1 until n do
                if j ≠ i then
                    σ ← σ + a_ij φ_j
                end if
            end (j-loop)
            φ_i ← (b_i − σ) / a_ii
        end (i-loop)
        check if convergence is reached
    end (repeat)

A linear system shown as Ax = b is given by A = [[16, 3], [7, -11]] and b = [11, 13].

We want to use the iteration x^{(k+1)} = L_*^{-1}(b - U x^{(k)}), so A must be decomposed into the sum of a lower triangular component L_* = [[16, 0], [7, -11]] and a strictly upper triangular component U = [[0, 3], [0, 0]].

The closer the initial guess is to the final solution, the fewer iterations the algorithm will need.

As expected, the algorithm converges to the solution x ≈ (0.8122, -0.6650).

In fact, the matrix A is strictly diagonally dominant, but not positive definite.
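As a rough numerical sanity check, the matrix-form update x^{(k+1)} = L_*^{-1}(b - U x^{(k)}) can be run in a few lines of Python, with L_*^{-1} applied via forward substitution. The values A = [[16, 3], [7, -11]] and b = [11, 13] are the example system assumed here, and the function name is ours.

```python
def gauss_seidel_step(A, b, x):
    """One sweep of x(k+1) = L*^{-1} (b - U x(k)) by forward substitution."""
    n = len(A)
    x_new = list(x)
    for i in range(n):
        # Entries j < i come from the new iterate (the L* part);
        # entries j > i come from the old iterate (the U part).
        s = sum(A[i][j] * x_new[j] for j in range(i))
        s += sum(A[i][j] * x[j] for j in range(i + 1, n))
        x_new[i] = (b[i] - s) / A[i][i]
    return x_new

A = [[16.0, 3.0], [7.0, -11.0]]
b = [11.0, 13.0]
x = [1.0, 1.0]  # initial guess
for _ in range(20):
    x = gauss_seidel_step(A, b, x)
# x is approximately [0.8122, -0.6650]
```

With a spectral radius of roughly 0.12 for this system, the error shrinks by nearly an order of magnitude per sweep, so 20 sweeps reach machine precision.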

Another linear system Ax = b is given by A = [[2, 3], [5, 7]] and b = [11, 13]. Again, A is decomposed into the sum of a lower triangular component L_* = [[2, 0], [5, 7]] and a strictly upper triangular component U = [[0, 3], [0, 0]].

In a test for convergence we find that the algorithm diverges.

This is because the matrix A is neither diagonally dominant nor positive definite.
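The divergence is easy to observe numerically. The sketch below, with the example values A = [[2, 3], [5, 7]] and b = [11, 13] assumed, tracks the max-norm of the iterates over ten sweeps and shows it growing at every step.

```python
def gs_step(A, b, x):
    """One in-place Gauss-Seidel sweep (illustrative helper)."""
    x = list(x)
    n = len(A)
    for i in range(n):
        s = sum(A[i][j] * x[j] for j in range(n) if j != i)
        x[i] = (b[i] - s) / A[i][i]
    return x

A = [[2.0, 3.0], [5.0, 7.0]]
b = [11.0, 13.0]
x = [1.0, 1.0]
norms = []
for _ in range(10):
    x = gs_step(A, b, x)
    norms.append(max(abs(v) for v in x))
# Each entry of norms exceeds the previous one: the iterates move
# steadily away from the exact solution instead of approaching it.
```

The iteration matrix for this system has spectral radius 15/14 > 1, so the error is multiplied by a factor slightly above one on every sweep, producing the slow but unbounded growth seen here.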

At any step in a Gauss–Seidel iteration, solve the first equation for x_1 in terms of x_2, ..., x_n; then solve the second equation for x_2 in terms of the x_1 just found and the remaining x_3, ..., x_n; and continue in this way through x_n.

Then, repeat the iterations until convergence is achieved, or break off if the solutions begin to diverge beyond a predefined level.

Using the approximations obtained, the iterative procedure is repeated until the desired accuracy has been reached.

The following iterative procedure produces the solution vector of a linear system of equations, using the element-wise formula

    x_i^{(k+1)} = (b_i - Σ_{j<i} a_ij x_j^{(k+1)} - Σ_{j>i} a_ij x_j^{(k)}) / a_ii.
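A straightforward realization of this formula in Python might look as follows. The function name, the iteration limit, and the stopping rule based on the change between successive iterates are illustrative choices; the test system is a standard diagonally dominant example with exact solution (1, 2, -1, 1).

```python
def solve_gauss_seidel(A, b, tol=1e-8, iteration_limit=1000):
    """Element-wise Gauss-Seidel: stops when successive iterates agree."""
    n = len(A)
    x = [0.0] * n
    for _ in range(iteration_limit):
        x_old = list(x)
        for i in range(n):
            s1 = sum(A[i][j] * x[j] for j in range(i))             # updated entries
            s2 = sum(A[i][j] * x_old[j] for j in range(i + 1, n))  # previous entries
            x[i] = (b[i] - s1 - s2) / A[i][i]
        if max(abs(x[i] - x_old[i]) for i in range(n)) < tol:
            break
    return x

A = [[10.0, -1.0, 2.0, 0.0],
     [-1.0, 11.0, -1.0, 3.0],
     [2.0, -1.0, 10.0, -1.0],
     [0.0, 3.0, -1.0, 8.0]]
b = [6.0, 25.0, -11.0, 15.0]
solution = solve_gauss_seidel(A, b)
# solution is approximately [1.0, 2.0, -1.0, 1.0]
```

Note that s2 could equally read from x directly, since entries with j > i have not yet been overwritten in the current sweep; keeping x_old makes the correspondence with the formula explicit at the cost of one extra vector copy per sweep.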