In mathematics, Gaussian elimination, also known as row reduction, is an algorithm for solving systems of linear equations.
For computational reasons, when solving systems of linear equations, it is sometimes preferable to stop row operations before the matrix is completely reduced. In this case, the term Gaussian elimination refers to the process until it has reached its upper triangular, or (unreduced) row echelon form.
Alternatively, a sequence of elementary row operations that clears a single column below its pivot may be viewed as multiplication by a Frobenius matrix.
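For instance, for a 3 × 3 matrix, left multiplication by the Frobenius matrix

\[
F = \begin{pmatrix} 1 & 0 & 0 \\ -\ell_{21} & 1 & 0 \\ -\ell_{31} & 0 & 1 \end{pmatrix},
\qquad \ell_{i1} = \frac{a_{i1}}{a_{11}},
\]

subtracts \(\ell_{i1}\) times the first row from row \(i\), clearing the first column below the pivot \(a_{11}\).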
Because these row operations do not change the solution set, if one's goal is to solve a system of linear equations, then using them can make the problem easier.
The word "echelon" is used here because one can roughly think of the rows being ranked by their size, with the largest being at the top and the smallest being at the bottom.
For example, the following matrix is in row echelon form; its leading coefficients, shown in bold, are the first nonzero entry of each row, and asterisks denote arbitrary entries:
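\[
\begin{bmatrix}
\mathbf{1} & * & * & * \\
0 & 0 & \mathbf{2} & * \\
0 & 0 & 0 & \mathbf{1}
\end{bmatrix}
\]

Each leading coefficient lies strictly to the right of the one in the row above it, and any all-zero rows (none occur here) would sit at the bottom.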
Suppose the goal is to find and describe the set of solutions to the following system of three linear equations in the unknowns x, y, and z:
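\[
\begin{aligned}
2x + y - z &= 8 \\
-3x - y + 2z &= -11 \\
-2x + y + 2z &= -3
\end{aligned}
\]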
The row reduction process can be applied simultaneously to the system of equations and to its associated augmented matrix.
In practice, one does not usually deal with the systems in terms of equations, but instead makes use of the augmented matrix, which is more suitable for computer manipulations.
Row operations are first used to eliminate x from the second and third equations; once y is also eliminated from the third row, the result is a system of linear equations in triangular form, and so the first part of the algorithm is complete. From a computational point of view, it is faster to solve for the variables in reverse order, a process known as back-substitution.
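A minimal Python sketch of both stages, applied to the augmented matrix of the system above (names and layout are illustrative, and no pivoting is performed, which is safe for this particular system):

```python
# Augmented matrix [A | b] of the example system.
A = [[ 2.0,  1.0, -1.0,   8.0],
     [-3.0, -1.0,  2.0, -11.0],
     [-2.0,  1.0,  2.0,  -3.0]]
n = 3

# Forward elimination: create zeros below each pivot, column by column.
for k in range(n):
    for i in range(k + 1, n):
        f = A[i][k] / A[k][k]          # multiplier for row i
        for j in range(k, n + 1):
            A[i][j] -= f * A[k][j]     # subtract f times the pivot row

# Back-substitution: solve for the unknowns in reverse order.
x = [0.0] * n
for i in reversed(range(n)):
    s = A[i][n] - sum(A[i][j] * x[j] for j in range(i + 1, n))
    x[i] = s / A[i][i]

print(x)  # [2.0, 3.0, -1.0]
```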
The method appears in the ancient Chinese mathematical text The Nine Chapters on the Mathematical Art. The first reference to the book by this title is dated to 179 AD, but parts of it were written as early as approximately 150 BC.
According to Grcar,[4] the solution of linear equations by elimination was invented independently in several cultures across Eurasia, beginning in antiquity, and definite examples of the procedure were published in Europe by the late Renaissance (in the 1550s). It is quite possible that mathematicians of that time already considered the procedure elementary and in no need of explanation for professionals, so we may never learn its detailed history beyond the fact that it was by then practiced in several places in Europe.
In 1670, Isaac Newton wrote that all the algebra books known to him lacked a lesson for solving simultaneous equations, which Newton then supplied.[5][6] Cambridge University eventually published his notes as Arithmetica Universalis in 1707, long after Newton had left academic life.
The notes were widely imitated, which made (what is now called) Gaussian elimination a standard lesson in algebra textbooks by the end of the 18th century.
Carl Friedrich Gauss in 1810 devised a notation for symmetric elimination that was adopted in the 19th century by professional hand computers to solve the normal equations of least-squares problems.[7] The algorithm that is taught in high school was named for Gauss only in the 1950s, as a result of confusion over the history of the subject.
Computationally, for an n × n matrix, this method needs only O(n³) arithmetic operations to compute the determinant, while using the Leibniz formula for determinants requires on the order of n · n! operations.
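The contrast is visible in the formula itself: the Leibniz expansion sums over all n! permutations of the columns,

\[
\det(A) = \sum_{\sigma \in S_n} \operatorname{sgn}(\sigma) \prod_{i=1}^{n} a_{i,\sigma(i)},
\]

with each term a product of n entries, which accounts for the factorial cost.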
The number of arithmetic operations required to perform row reduction is one way of measuring the algorithm's computational efficiency.
For example, to solve a system of n equations for n unknowns by performing row operations on the matrix until it is in echelon form, and then solving for each unknown in reverse order, requires n(n + 1)/2 divisions, (2n³ + 3n² − 5n)/6 multiplications, and (2n³ + 3n² − 5n)/6 subtractions,[10] for a total of approximately 2n³/3 operations.
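Summing these counts makes the leading term explicit:

\[
\frac{n(n+1)}{2} + 2 \cdot \frac{2n^3 + 3n^2 - 5n}{6}
= \frac{2n^3}{3} + \frac{3n^2}{2} - \frac{7n}{6}
\approx \frac{2n^3}{3}.
\]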
Independently, and almost simultaneously, Erwin Bareiss discovered another algorithm, based on the following remark, which applies to a division-free variant of Gaussian elimination: every entry that occurs during the computation is the determinant of a submatrix of the original matrix, so that when one starts with integer entries, all divisions in Bareiss's scheme are exact.[12]: 37
Moreover, Hadamard's inequality provides an upper bound on the absolute values of the intermediate and final entries, and thus a bit complexity of \(\tilde{O}(n^5)\), using soft O notation.
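A minimal Python sketch of Bareiss's fraction-free scheme on an integer matrix, here used to compute the determinant (it assumes, for simplicity, that no zero pivot arises; the function name is illustrative):

```python
def bareiss_det(M):
    """Fraction-free elimination on an integer matrix; returns det(M)."""
    A = [row[:] for row in M]  # work on a copy
    n = len(A)
    prev = 1                   # pivot of the previous step (Bareiss divisor)
    for k in range(n - 1):
        # Assumes A[k][k] != 0; a full version would swap rows here.
        for i in range(k + 1, n):
            for j in range(k + 1, n):
                # Each entry computed here is the determinant of a
                # submatrix of M, so the division by `prev` is exact.
                A[i][j] = (A[i][j] * A[k][k] - A[i][k] * A[k][j]) // prev
            A[i][k] = 0
        prev = A[k][k]
    return A[n - 1][n - 1]     # the last pivot is det(M)

print(bareiss_det([[1, 2, 3], [4, 5, 6], [7, 8, 10]]))  # -3
```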
If, for example, the leading coefficient of one of the rows is very close to zero, then to row-reduce the matrix one would need to divide by that number, and any error present in that entry would be amplified.
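A common remedy is partial pivoting. The following minimal Python sketch reduces an m × n matrix to row echelon form in this way (the function name and the list-of-lists layout are illustrative):

```python
def row_echelon_in_place(A):
    """Reduce the m-by-n matrix A to row echelon form in place,
    using partial pivoting."""
    m, n = len(A), len(A[0])
    h, k = 0, 0                # pivot row and pivot column
    while h < m and k < n:
        # Pick the row with the largest absolute entry in column k.
        i_max = max(range(h, m), key=lambda i: abs(A[i][k]))
        if A[i_max][k] == 0:
            k += 1             # no pivot in this column; move right
            continue
        A[h], A[i_max] = A[i_max], A[h]    # swap the pivot row into place
        for i in range(h + 1, m):
            f = A[i][k] / A[h][k]
            A[i][k] = 0.0
            for j in range(k + 1, n):
                A[i][j] -= A[h][j] * f
        h += 1
        k += 1
```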
The transformation is performed in place, meaning that the original matrix is lost, being progressively replaced by its row echelon form. This algorithm differs slightly from the one discussed earlier in that it chooses a pivot with the largest absolute value.
In any case, choosing the largest possible absolute value for the pivot improves the numerical stability of the algorithm when floating-point arithmetic is used to represent numbers.[15] Upon completion of this procedure the matrix will be in row echelon form, and the corresponding system may be solved by back substitution.
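Continuing the sketch above, applying row_echelon_in_place to the augmented matrix of the earlier system and then back-substituting recovers the same solution:

```python
M = [[ 2.0,  1.0, -1.0,   8.0],
     [-3.0, -1.0,  2.0, -11.0],
     [-2.0,  1.0,  2.0,  -3.0]]
row_echelon_in_place(M)

x = [0.0] * 3
for i in reversed(range(3)):
    x[i] = (M[i][3] - sum(M[i][j] * x[j] for j in range(i + 1, 3))) / M[i][i]
print([round(v, 10) for v in x])  # [2.0, 3.0, -1.0]
```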