The version of the Euclidean algorithm described above—which follows Euclid's original presentation—may require many subtraction steps to find the GCD when one of the given numbers is much bigger than the other.
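The contrast between the two forms can be seen in a short sketch (illustrative Python, not Euclid's own formulation): the subtraction-only version loops once per subtraction, while each division step of the remainder-based version replaces an entire run of subtractions by the same divisor.

```python
def gcd_subtraction(a, b):
    """Subtraction-only form of Euclid's algorithm for positive integers.

    Repeatedly subtracts the smaller number from the larger; slow when
    one argument is much bigger than the other.
    """
    while a != b:
        if a > b:
            a -= b
        else:
            b -= a
    return a

def gcd_division(a, b):
    """Division-based form: one remainder step collapses a whole run
    of subtractions by the same divisor."""
    while b != 0:
        a, b = b, a % b
    return a
```

For example, `gcd_subtraction(1000, 3)` needs hundreds of subtractions, while `gcd_division(1000, 3)` finishes in a handful of remainder steps; both return the same result.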
Computations using this algorithm form part of the cryptographic protocols that are used to secure internet communications, and in methods for breaking these cryptosystems by factoring large composite numbers.
[9][10] Factorization of large integers is believed to be a computationally very difficult problem, and the security of many widely used cryptographic protocols is based upon its infeasibility.
The Euclidean algorithm can be visualized in terms of the tiling analogy given above for the greatest common divisor.
[27][31] The algorithm may even pre-date Eudoxus,[32][33] judging from the use of the technical term ἀνθυφαίρεσις (anthyphairesis, reciprocal subtraction) in works by Euclid and Aristotle.
[35] Centuries later, Euclid's algorithm was discovered independently both in India and in China,[36] primarily to solve Diophantine equations that arose in astronomy and in making accurate calendars.
In the late 5th century, the Indian mathematician and astronomer Aryabhata described the algorithm as the "pulverizer",[37] perhaps because of its effectiveness in solving Diophantine equations.
[40] The Euclidean algorithm was first described numerically and popularized in Europe in the second edition of Bachet's Problèmes plaisants et délectables (Pleasant and enjoyable problems, 1624).
The extended Euclidean algorithm was published by the English mathematician Nicholas Saunderson,[41] who attributed it to Roger Cotes as a method for computing continued fractions efficiently.
In 1815, Carl Gauss used the Euclidean algorithm to demonstrate unique factorization of Gaussian integers, although his work was first published in 1832.
In the closing decades of the 19th century, the Euclidean algorithm gradually became eclipsed by Dedekind's more general theory of ideals.
[54][55] Bézout's identity states that the greatest common divisor g of two integers a and b can be represented as a linear sum of the original two numbers a and b; in other words, it is always possible to find integers s and t such that g = sa + tb.
Since the determinant of M is never zero, the vector of the final remainders can be solved using the inverse of M. The top equation gives g = (−1)^(N+1)(m22 a − m12 b), so the two integers of Bézout's identity are s = (−1)^(N+1) m22 and t = (−1)^N m12.
Finding multiplicative inverses is an essential step in the RSA algorithm, which is widely used in electronic commerce; specifically, the equation determines the integer used to decrypt the message.
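A minimal sketch of this use of the extended Euclidean algorithm follows (illustrative Python; the parameters e = 17 and φ = 3120 are toy values commonly used in expositions of RSA, not drawn from this article):

```python
def extended_gcd(a, b):
    """Return (g, s, t) with g = gcd(a, b) and g = s*a + t*b (Bezout's identity)."""
    old_r, r = a, b
    old_s, s = 1, 0
    old_t, t = 0, 1
    while r != 0:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_s, s = s, old_s - q * s
        old_t, t = t, old_t - q * t
    return old_r, old_s, old_t

def mod_inverse(e, m):
    """Multiplicative inverse of e modulo m; exists only when gcd(e, m) == 1."""
    g, s, _ = extended_gcd(e, m)
    if g != 1:
        raise ValueError("inverse does not exist")
    return s % m

# Toy RSA-style setup: with public exponent e = 17 and phi = 3120,
# the decryption exponent is d = e^(-1) mod phi, so 17 * d ≡ 1 (mod 3120).
d = mod_inverse(17, 3120)
```

The Bézout coefficient s produced by the extended algorithm is exactly the inverse, reduced modulo m; no factoring of m is required.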
The sequence of steps constructed in this way does not depend on whether a/b is given in lowest terms, and forms a path from the root to a node containing the number a/b.
Later, in 1841, P. J. E. Finck showed[88] that the number of division steps is at most 2 log2 v + 1, and hence Euclid's algorithm runs in time polynomial in the size of the input.
This proof, published by Gabriel Lamé in 1844, represents the beginning of computational complexity theory,[100] and also the first practical application of the Fibonacci numbers.
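The worst case Lamé identified can be checked directly: consecutive Fibonacci numbers yield a quotient of 1 at every step, so no pair of comparable size needs more division steps. A small sketch (illustrative Python):

```python
def division_steps(a, b):
    """Count the division steps Euclid's algorithm takes on the pair (a, b)."""
    steps = 0
    while b != 0:
        a, b = b, a % b
        steps += 1
    return steps

# Consecutive Fibonacci numbers F_10 = 55 and F_11 = 89 produce a
# quotient of 1 at every step, the slowest possible progress; Lamé's
# bound says the step count never exceeds five times the number of
# decimal digits of the smaller number.
```

Here `division_steps(89, 55)` performs one step per quotient in the continued-fraction expansion of 89/55, which consists entirely of 1s.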
[112] A third average Y(n) is defined as the mean number of steps required when both a and b are chosen randomly (with uniform distribution) from 1 to n.[111] Substituting the approximate formula for T(a) into this equation yields an estimate for Y(n).[113] In each step k of the Euclidean algorithm, the quotient qk and remainder rk are computed for a given pair of integers rk−2 and rk−1. The computational expense per step is associated chiefly with finding qk, since the remainder rk can be calculated quickly from rk−2, rk−1, and qk. The computational expense of dividing h-bit numbers scales as O(h(ℓ + 1)), where ℓ is the length of the quotient.
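For small n, the average Y(n) can be estimated by brute force (illustrative Python; the leading term of the known asymptotic estimate is (12 ln 2/π²) ln n ≈ 0.843 ln n):

```python
def euclid_steps(a, b):
    """Number of division steps for the pair (a, b)."""
    count = 0
    while b:
        a, b = b, a % b
        count += 1
    return count

def average_steps(n):
    """Y(n): mean number of steps when a and b are each drawn
    uniformly from 1 to n (exhaustive average over all n*n pairs)."""
    total = sum(euclid_steps(a, b)
                for a in range(1, n + 1) for b in range(1, n + 1))
    return total / (n * n)
```

Even for modest n the empirical average grows roughly logarithmically, consistent with the estimate obtained by substituting T(a) into the definition of Y(n).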
Since the operation of subtraction is faster than division, particularly for large numbers,[115] the subtraction-based Euclid's algorithm is competitive with the division-based version.
The approximation is described by convergents mk/nk; the numerators and denominators are coprime and obey the recurrence relation mk = qk mk−1 + mk−2 and nk = qk nk−1 + nk−2, where m−1 = n−2 = 1 and m−2 = n−1 = 0 are the initial values of the recursion.
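The convergents can be generated directly from the quotients qk produced by Euclid's algorithm; a minimal sketch (illustrative Python) applying the recurrence above:

```python
def convergents(a, b):
    """Convergents m_k/n_k of the continued fraction of a/b, built from
    the quotients q_k of Euclid's algorithm via
    m_k = q_k*m_{k-1} + m_{k-2},  n_k = q_k*n_{k-1} + n_{k-2}."""
    m2, m1 = 0, 1   # m_{-2} = 0, m_{-1} = 1
    n2, n1 = 1, 0   # n_{-2} = 1, n_{-1} = 0
    result = []
    while b:
        q, r = divmod(a, b)
        m2, m1 = m1, q * m1 + m2
        n2, n1 = n1, q * n1 + n2
        result.append((m1, n1))
        a, b = b, r
    return result
```

The final convergent is a/b itself in lowest terms; for example, `convergents(1071, 462)` ends with (51, 22), since gcd(1071, 462) = 21.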
Finally, dividing r0(x) by r1(x) yields a zero remainder, indicating that r1(x) is the greatest common divisor polynomial of a(x) and b(x), consistent with their factorization.
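The same remainder sequence can be computed mechanically. The sketch below (illustrative Python; coefficient lists are in descending degree, with exact rational arithmetic to avoid rounding) runs Euclid's algorithm on two example polynomials chosen here to share the factor x^2 + x + 2:

```python
from fractions import Fraction

def poly_divmod(num, den):
    """Polynomial long division; returns (quotient, remainder) as
    coefficient lists in descending degree."""
    num = [Fraction(c) for c in num]
    den = [Fraction(c) for c in den]
    quot = []
    while len(num) >= len(den):
        c = num[0] / den[0]
        quot.append(c)
        for i in range(len(den)):
            num[i] -= c * den[i]
        num.pop(0)               # leading term is now zero
    while num and num[0] == 0:   # strip leading zeros of the remainder
        num.pop(0)
    return quot, num

def poly_gcd(a, b):
    """Euclid's algorithm on polynomials; result is normalized to be monic."""
    while b:
        _, r = poly_divmod(a, b)
        a, b = b, r
    lead = a[0]
    return [c / lead for c in a]
```

For instance, (x^2 − 5x + 7)(x^2 + x + 2) and (x^2 + 7x + 3)(x^2 + x + 2) expand to x^4 − 4x^3 + 4x^2 − 3x + 14 and x^4 + 8x^3 + 12x^2 + 17x + 6, and the remainder sequence terminates at a polynomial proportional to x^2 + x + 2.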
[43] This unique factorization is helpful in many applications, such as deriving all Pythagorean triples or proving Fermat's theorem on sums of two squares.
[153] In other words, a greatest common divisor may exist (for all pairs of elements in a domain), although it may not be possible to find it using a Euclidean algorithm.
For example, the unique factorization of the Gaussian integers is convenient in deriving formulae for all Pythagorean triples and in proving Fermat's theorem on sums of two squares.
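A sketch of how the Euclidean step works for Gaussian integers follows (illustrative Python; pairs (x, y) stand for x + yi, and the quotient rounds each coordinate of the exact ratio to the nearest integer, which guarantees the remainder has a strictly smaller norm):

```python
def gauss_gcd(a, b):
    """Euclid's algorithm on Gaussian integers represented as (x, y) = x + yi.
    The result is a greatest common divisor, determined up to a unit
    (1, -1, i, or -i)."""
    def sub(u, v):
        return (u[0] - v[0], u[1] - v[1])
    def mul(u, v):
        return (u[0]*v[0] - u[1]*v[1], u[0]*v[1] + u[1]*v[0])
    while b != (0, 0):
        n = b[0]**2 + b[1]**2                 # norm of b
        num = mul(a, (b[0], -b[1]))           # a * conjugate(b), so a/b = num/n
        q = (round(num[0] / n), round(num[1] / n))
        a, b = b, sub(a, mul(q, b))           # remainder has norm < n
    return a
```

For example, −4 + 7i = (1 + 2i)(2 + 3i) and 1 + 7i = (1 + 2i)(3 + i) have greatest common divisor 1 + 2i up to a unit, so `gauss_gcd((-4, 7), (1, 7))` returns an element of norm 5.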
[142] Unique factorization was also a key element in an attempted proof of Fermat's Last Theorem published in 1847 by Gabriel Lamé, the same mathematician who analyzed the efficiency of Euclid's algorithm; the attempt was based on a suggestion of Joseph Liouville.
[155] Lamé's approach required the unique factorization of numbers of the form x + ωy, where x and y are integers, and ω = e^(2iπ/n) is an nth root of 1, that is, ω^n = 1.
[159] In 1973, Weinberger proved that a quadratic integer ring with D > 0 is Euclidean if, and only if, it is a principal ideal domain, provided that the generalized Riemann hypothesis holds.
Similarly, they have a common left divisor if α = dξ and β = dη for some choice of ξ and η in the ring.
[131][160] Choosing the right divisors, the first step in finding the gcd(α, β) by the Euclidean algorithm can be written as α = ψ0β + ρ0, where ψ0 represents the quotient and ρ0 the remainder.
For instance, one of the standard proofs of Lagrange's four-square theorem, that every positive integer can be represented as a sum of four squares, is based on quaternion GCDs in this way.