Greatest common divisor

In mathematics, the greatest common divisor (GCD) of two or more integers, which are not all zero, is the largest positive integer that divides each of the integers.[1][2] In the name "greatest common divisor", the adjective "greatest" may be replaced by "highest", and the word "divisor" may be replaced by "factor", so that other names include highest common factor, etc.[3][4][5][6]

Historically, other names for the same concept have included greatest common measure.

The case gcd(a, 0) = gcd(0, a) = |a|, the GCD of a number and zero, is important as the terminating step of the Euclidean algorithm.

The above definition is unsuitable for defining gcd(0, 0), since there is no greatest integer n such that 0 × n = 0. However, zero is its own greatest divisor if "greatest" is understood in the context of the divisibility relation, so gcd(0, 0) is commonly defined as 0.

The GCD of a and b is their greatest positive common divisor in the preorder relation of divisibility.

This characterization, that the common divisors of a and b are exactly the divisors of their GCD, is commonly proved by using either Euclid's lemma, the fundamental theorem of arithmetic, or the Euclidean algorithm.

The number 54 can be expressed as a product of two integers in several different ways: 54 × 1 = 27 × 2 = 18 × 3 = 9 × 6. Thus the complete list of divisors of 54 is 1, 2, 3, 6, 9, 18, 27, and 54.
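
As a concrete illustration, here is a minimal Python sketch of this divisor-listing approach; the second operand 24 and the function names are illustrative choices, not taken from the text above.

    def divisors(n):
        """Return the set of positive divisors of n (n > 0), found by trial division."""
        return {d for d in range(1, n + 1) if n % d == 0}

    def gcd_by_listing(a, b):
        """The GCD is the largest element of the intersection of the two divisor sets."""
        return max(divisors(a) & divisors(b))

    print(sorted(divisors(54)))    # [1, 2, 3, 6, 9, 18, 27, 54]
    print(gcd_by_listing(54, 24))  # 6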

Two numbers are called relatively prime, or coprime, if their greatest common divisor equals 1.

The greatest common divisor is useful for reducing fractions to lowest terms. For example, gcd(42, 56) = 14, and therefore 42/56 = (3 × 14)/(4 × 14) = 3/4.
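
A short Python illustration of this use, relying on the standard library's math.gcd (the function name reduce_fraction is an illustrative choice):

    from math import gcd

    def reduce_fraction(num, den):
        """Divide numerator and denominator by their GCD to reach lowest terms."""
        g = gcd(num, den)
        return num // g, den // g

    print(reduce_fraction(42, 56))   # (3, 4)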

In practice, computing the GCD from the prime factorizations of the two numbers (taking each common prime to the smallest power with which it appears) is only feasible for small numbers, as computing prime factorizations takes too long.
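
A Python sketch of this factorization-based method, using naive trial-division factoring (all function names are illustrative):

    from collections import Counter

    def prime_factorization(n):
        """Prime factorization of an integer n > 0 as a Counter {prime: exponent}."""
        factors = Counter()
        d = 2
        while d * d <= n:
            while n % d == 0:
                factors[d] += 1
                n //= d
            d += 1
        if n > 1:
            factors[n] += 1
        return factors

    def gcd_by_factoring(a, b):
        """Take each common prime to the smaller of its exponents and multiply."""
        # Counter intersection keeps common primes with the minimum of the exponents.
        common = prime_factorization(a) & prime_factorization(b)
        g = 1
        for p, e in common.items():
            g *= p ** e
        return g

    print(gcd_by_factoring(48, 180))   # 12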

A more efficient method is the Euclidean algorithm, a variant of the subtraction-based method of Euclid in which the difference of the two numbers a and b is replaced by the remainder of the Euclidean division (also called division with remainder) of a by b. Denoting this remainder as a mod b, the algorithm replaces (a, b) with (b, a mod b) repeatedly until the pair is (d, 0), where d is the greatest common divisor.
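
A minimal Python sketch of this remainder-based iteration (the function name is illustrative):

    def euclid_gcd(a, b):
        """Euclidean algorithm: repeatedly replace (a, b) with (b, a mod b)."""
        while b != 0:
            a, b = b, a % b
        return abs(a)   # abs() so that negative inputs also yield a positive GCD

    print(euclid_gcd(48, 18))   # 48 -> 18 -> 12 -> 6 -> 0, so the GCD is 6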

The efficiency of the binary GCD algorithm results from the fact that, in binary representation, testing parity consists of testing the right-most digit, and dividing by two consists of removing the right-most digit.

Step 1 of the binary algorithm determines d as the highest power of 2 that divides both a and b, and therefore also the highest power of 2 that divides their greatest common divisor.
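
A Python sketch of one common formulation of the binary GCD algorithm; the first loop plays the role of the step described above, extracting the shared power of 2 (names are illustrative):

    def binary_gcd(a, b):
        """Binary GCD: factor out common powers of 2, then reduce odd values by subtraction."""
        if a == 0:
            return b
        if b == 0:
            return a
        # Extract d = 2**k, the highest power of 2 dividing both a and b.
        k = 0
        while (a | b) & 1 == 0:      # both numbers are even
            a >>= 1
            b >>= 1
            k += 1
        while a & 1 == 0:            # make a odd; these factors of 2 are not common
            a >>= 1
        while b != 0:
            while b & 1 == 0:        # likewise strip non-common factors of 2 from b
                b >>= 1
            if a > b:
                a, b = b, a          # keep a <= b
            b -= a                   # gcd(a, b) = gcd(a, b - a); b - a is even since both are odd
        return a << k                # reattach the common power of 2

    print(binary_gcd(48, 18))   # 6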

Here, this length (the total number of digits of the input) is n = log a + log b, and the complexity is thus O(n²).

Lehmer's algorithm is based on the observation that the initial quotients produced by Euclid's algorithm can be determined based on only the first few digits; this is useful for numbers that are larger than a computer word.

In essence, one extracts the initial digits, typically forming one or two computer words, and runs Euclid's algorithm on these smaller numbers, as long as it is guaranteed that the quotients are the same as those that would be obtained with the original numbers.

This process is repeated until the numbers are small enough that the binary algorithm (see above) is more efficient.

In fact, most of the quotients are very small, so a fair number of steps of the Euclidean algorithm can be collected in a 2-by-2 matrix of single-word integers.
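
The following Python sketch illustrates only this observation, not Lehmer's full algorithm: one Euclidean step (a, b) -> (b, a mod b) with quotient q is the linear map with matrix ((0, 1), (1, -q)), so several consecutive steps can be accumulated into a single 2-by-2 matrix and applied to the original pair in one go.

    def collect_steps(a, b, steps):
        """Accumulate up to `steps` Euclidean quotient matrices into one 2x2 matrix M
        such that M applied to the original (a, b) gives the pair after those steps."""
        m00, m01, m10, m11 = 1, 0, 0, 1          # start with the identity matrix
        for _ in range(steps):
            if b == 0:
                break
            q = a // b
            a, b = b, a - q * b                  # one Euclidean step
            # Left-multiply the accumulated matrix by ((0, 1), (1, -q)).
            m00, m01, m10, m11 = m10, m11, m00 - q * m10, m01 - q * m11
        return (m00, m01, m10, m11), (a, b)

    (m00, m01, m10, m11), (a3, b3) = collect_steps(1071, 462, 3)
    print(m00 * 1071 + m01 * 462, m10 * 1071 + m11 * 462)   # (21, 0), the same as ...
    print(a3, b3)                                           # ... the pair after 3 steps

In Lehmer's algorithm the quotients, and hence such a matrix, are computed from only the leading digits of a and b and then applied to the full multi-word numbers.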

When Lehmer's algorithm encounters a quotient that is too large, it must fall back to one iteration of the Euclidean algorithm, with a Euclidean division of large numbers.

Keith Slavin has shown that for odd a ≥ 1, gcd(a, b) = log₂ ∏_{k=0}^{a−1} (1 + e^(2πikb/a)), which is a function that can be evaluated for complex b.[16]

Wolfgang Schramm has shown that gcd(a, b) = ∑_{k=1}^{a} e^(2πikb/a) ∑_{d|a} c_d(k)/d is an entire function in the variable b for all positive integers a, where c_d(k) is Ramanujan's sum.
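
These two identities can be checked numerically for small integer arguments; the following Python sketch evaluates both and compares them with math.gcd (all function names are illustrative):

    import cmath
    from math import gcd, log2

    def slavin_gcd(a, b):
        """Evaluate gcd(a, b), for odd a >= 1, via the product formula above."""
        prod = 1 + 0j
        for k in range(a):
            prod *= 1 + cmath.exp(2j * cmath.pi * k * b / a)
        return log2(abs(prod))

    def ramanujan_sum(d, k):
        """Ramanujan's sum c_d(k): sum of e^(2*pi*i*m*k/d) over 1 <= m <= d with gcd(m, d) = 1."""
        return sum(cmath.exp(2j * cmath.pi * m * k / d)
                   for m in range(1, d + 1) if gcd(m, d) == 1)

    def schramm_gcd(a, b):
        """Evaluate gcd(a, b) via the Ramanujan-sum expansion above."""
        total = 0j
        for k in range(1, a + 1):
            inner = sum(ramanujan_sum(d, k) / d for d in range(1, a + 1) if a % d == 0)
            total += cmath.exp(2j * cmath.pi * k * b / a) * inner
        return total.real

    print(round(slavin_gcd(9, 6), 6), gcd(9, 6))        # 3.0 3
    print(round(schramm_gcd(12, 18), 6), gcd(12, 18))   # 6.0 6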

This means that the computation of the greatest common divisor has, up to a constant factor, the same complexity as multiplication.

More precisely, if the multiplication of two integers of n bits takes a time of T(n), then the fastest known algorithm for greatest common divisor has a complexity O(T(n) log n).

The computation of greatest common divisors thus belongs to the class of problems solvable in quasilinear time.

It is not known whether the computation of the GCD is in the complexity class NC; since NC contains NL, it is also unknown whether a space-efficient algorithm for computing the GCD exists, even for nondeterministic Turing machines.

In 1972, James E. Nymann showed that k integers, chosen independently and uniformly from {1, ..., n}, are coprime with probability 1/ζ(k) as n goes to infinity, where ζ refers to the Riemann zeta function.

This result was extended in 1987 to show that the probability that k random integers have greatest common divisor d is d⁻ᵏ/ζ(k).

In the case k = 2, the probability that the GCD equals d is d⁻²/ζ(2), and since ζ(2) = π²/6 the expected value of the GCD is ∑_{d=1}^{∞} d · d⁻²/ζ(2) = (6/π²) ∑_{d=1}^{∞} 1/d. This last summation is the harmonic series, which diverges, so the expected value of the GCD of two random integers is not finite.
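
A quick Monte Carlo illustration of the case k = 2 in Python (the sample size and range are arbitrary choices); the observed rate should be close to 1/ζ(2) = 6/π² ≈ 0.6079:

    import random
    from math import gcd, pi

    def coprime_fraction(trials=10**6, n=10**6):
        """Estimate the probability that two uniform random integers in [1, n] are coprime."""
        hits = sum(gcd(random.randint(1, n), random.randint(1, n)) == 1
                   for _ in range(trials))
        return hits / trials

    print(coprime_fraction())   # roughly 0.608
    print(6 / pi**2)            # 0.6079271...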

The notion of greatest common divisor can more generally be defined for elements of an arbitrary commutative ring R: an element d of R is a greatest common divisor of a and b if d divides both a and b, and every common divisor of a and b divides d. In general, such an element need not exist for every pair of elements.

If R is an integral domain, then any two GCDs of a and b must be associate elements, since by definition each must divide the other.
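
For example, in the ring of polynomials with rational coefficients the Euclidean algorithm still applies, and a GCD is determined only up to a nonzero constant factor, i.e. up to associates; normalizing to a monic polynomial picks one representative. A minimal Python sketch, with coefficients listed from highest degree to lowest (all names are illustrative):

    from fractions import Fraction

    def trim(p):
        """Drop leading zero coefficients (coefficients run from highest degree to lowest)."""
        i = 0
        while i < len(p) - 1 and p[i] == 0:
            i += 1
        return p[i:]

    def poly_mod(a, b):
        """Remainder of dividing polynomial a by polynomial b over the rationals."""
        a, b = trim(list(a)), trim(list(b))
        while len(a) >= len(b) and a != [Fraction(0)]:
            factor = a[0] / b[0]
            for i in range(len(b)):          # subtract factor * x^(deg a - deg b) * b
                a[i] -= factor * b[i]
            a = trim(a[1:]) if len(a) > 1 else [Fraction(0)]
        return a

    def poly_gcd(a, b):
        """Monic GCD of two rational polynomials via the Euclidean algorithm."""
        a = [Fraction(c) for c in a]
        b = [Fraction(c) for c in b]
        while trim(b) != [Fraction(0)]:
            a, b = b, poly_mod(a, b)
        a = trim(a)
        return [c / a[0] for c in a]         # normalize to the monic associate

    # (x^2 - 1) and (x^2 + 2x + 1) share the factor x + 1.
    print(poly_gcd([1, 0, -1], [1, 2, 1]))   # [Fraction(1, 1), Fraction(1, 1)], i.e. x + 1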

(Indeed, Ernst Kummer used the ideal generated by a and b as a replacement for a GCD in his treatment of Fermat's Last Theorem, although he envisioned it as the set of multiples of some hypothetical, or ideal, ring element d, whence the ring-theoretic term.)

A 24-by-60 rectangle is covered with ten 12-by-12 square tiles, where 12 is the GCD of 24 and 60. More generally, an a-by-b rectangle can be covered with square tiles of side length c only if c is a common divisor of a and b.
Animation showing an application of the Euclidean algorithm to find the greatest common divisor of 62 and 36, which is 2.
A dot plot related to Thomae's function; hatching at the bottom indicates ellipses (i.e. omission of dots due to the extremely high density).