A primality test is an algorithm for determining whether an input number is prime.
Factorization is thought to be a computationally difficult problem, whereas primality testing is comparatively easy (its running time is polynomial in the size of the input).
The simplest primality test is trial division: given an input number n, check whether it is evenly divisible by any prime number between 2 and √n (that is, whether the division leaves no remainder). If so, then n is composite; otherwise, it is prime.
Every positive integer except 1 is divisible by at least one prime number by the Fundamental Theorem of Arithmetic, so only prime divisors need to be tested; moreover, if n = a·b with 1 < a ≤ b, then a ≤ √n, which is why divisors need only be sought up to √n.
A rather simple optimization is to test divisibility by 2 and by just the odd numbers between 3 and √n, since divisibility by an even number greater than 2 implies divisibility by 2.
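A Python sketch of trial division with this optimization (test 2 first, then only odd candidates up to √n); the function name is illustrative.

```python
def is_prime_trial_division(n: int) -> bool:
    """Trial division: test 2, then odd divisors up to the square root of n."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:          # a composite n has a divisor <= sqrt(n)
        if n % d == 0:
            return False
        d += 2                 # skip even candidates
    return True
```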
Observations analogous to the preceding can be applied recursively, giving the Sieve of Eratosthenes.
(Such a list of small primes, for use in trial division, can be computed with the Sieve of Eratosthenes or by an algorithm that tests each new candidate m against all previously found primes up to √m.)
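A minimal Python sketch of the Sieve of Eratosthenes for producing such a list; the bound and names are illustrative.

```python
def sieve_of_eratosthenes(bound: int) -> list[int]:
    """Return all primes <= bound by repeatedly crossing out multiples."""
    is_prime = [True] * (bound + 1)
    is_prime[0:2] = [False, False]         # 0 and 1 are not prime
    for p in range(2, int(bound ** 0.5) + 1):
        if is_prime[p]:
            # p is prime; mark its multiples, starting at p*p, as composite
            for multiple in range(p * p, bound + 1, p):
                is_prime[multiple] = False
    return [i for i in range(2, bound + 1) if is_prime[i]]
```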
A simple but very inefficient primality test uses Wilson's theorem, which states that an integer p > 1 is prime if and only if (p − 1)! ≡ −1 (mod p).
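A direct Python sketch of this test; it is spelled out for clarity, and the roughly n modular multiplications make it far too slow for large n.

```python
def is_prime_wilson(n: int) -> bool:
    """Wilson's theorem: n > 1 is prime iff (n - 1)! ≡ -1 (mod n)."""
    if n < 2:
        return False
    fact = 1
    for i in range(2, n):   # about n multiplications: exponential in the bit length of n
        fact = (fact * i) % n
    return fact == n - 1    # -1 mod n is represented as n - 1
```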
Heuristic tests are tests that seem to work well in practice, but are unproven and therefore are not, technically speaking, algorithms at all.
For example, it is conjectured that if p ≡ 2 or 3 (mod 5), 2^(p−1) ≡ 1 (mod p), and f_(p+1) ≡ 0 (mod p), where f_k is the k-th Fibonacci number, then p is prime; John Selfridge, Carl Pomerance and Samuel Wagstaff together offer $620 for a counterexample. In general, if p ≡ a (mod x^2 + 4), where a is a quadratic non-residue (mod x^2 + 4), then p should be prime when analogous conditions hold for f(x)_k, the k-th Fibonacci polynomial evaluated at x.[2]
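A hedged Python sketch of the concrete conjecture above for p ≡ 2 or 3 (mod 5); Fibonacci numbers are computed modulo p by fast doubling, and all names are illustrative.

```python
def fib_pair_mod(k: int, m: int) -> tuple[int, int]:
    """Return (F(k) mod m, F(k+1) mod m) via the fast-doubling recurrences."""
    if k == 0:
        return (0, 1)
    a, b = fib_pair_mod(k // 2, m)          # a = F(j), b = F(j+1), j = k // 2
    c = (a * ((2 * b - a) % m)) % m         # F(2j)   = F(j) * (2*F(j+1) - F(j))
    d = (a * a + b * b) % m                 # F(2j+1) = F(j)^2 + F(j+1)^2
    return (d, (c + d) % m) if k % 2 else (c, d)

def passes_selfridge_test(p: int) -> bool:
    """Heuristic test for p ≡ 2 or 3 (mod 5): Fermat base 2 plus F(p+1) ≡ 0 (mod p)."""
    assert p > 1 and p % 5 in (2, 3)
    if pow(2, p - 1, p) != 1:               # Fermat condition, base 2
        return False
    return fib_pair_mod(p + 1, p)[0] == 0   # Fibonacci condition
```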
Probabilistic tests are more rigorous than heuristics in that they provide provable bounds on the probability of being fooled by a composite number.
The probability of error can be reduced by repeating the test with several independently chosen values of a; for two commonly used tests, for any composite n at least half the choices of a detect n's compositeness, so k repetitions reduce the error probability to at most 2^−k, which can be made arbitrarily small by increasing k. The basic structure of randomized primality tests is as follows: (1) randomly pick a number a; (2) check some equality, corresponding to the chosen test, involving a and the given number n, and if the equality fails to hold, report that n is composite with a as a witness; (3) repeat from step 1 until the required confidence is reached. After one or more iterations, if n is not found to be a composite number, then it can be declared probably prime.
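A Python sketch of this pick-check-repeat structure, instantiated here with the strong-pseudoprime check of the Miller–Rabin test (one of the commonly used tests, mentioned again below); the default round count is an illustrative choice.

```python
import random

def miller_rabin_round(n: int, a: int) -> bool:
    """One strong test: return True if base a does NOT witness n's compositeness."""
    d, s = n - 1, 0
    while d % 2 == 0:                     # write n - 1 = d * 2^s with d odd
        d //= 2
        s += 1
    x = pow(a, d, n)
    if x == 1 or x == n - 1:
        return True
    for _ in range(s - 1):
        x = (x * x) % n
        if x == n - 1:
            return True
    return False                          # a is a witness: n is certainly composite

def is_probably_prime(n: int, k: int = 20) -> bool:
    """Randomized structure: k independent rounds; error probability at most 2^-k."""
    if n < 4:
        return n in (2, 3)
    if n % 2 == 0:
        return False
    for _ in range(k):
        a = random.randrange(2, n - 1)    # (1) randomly pick a base a
        if not miller_rabin_round(n, a):  # (2) check the test's condition
            return False                  #     failed: n is composite, a is a witness
    return True                           # (3) after k rounds, declare n probably prime
```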
The simplest probabilistic primality test is the Fermat primality test. It works as follows: given an integer n, choose some integer a coprime to n and calculate a^(n−1) modulo n; if the result is different from 1, then n is composite, and if it is 1, then n may or may not be prime. If a^(n−1) (modulo n) is 1 but n is not prime, then n is called a pseudoprime to base a.
Carmichael numbers are pseudoprimes to every base coprime to them, so the Fermat test can never prove such numbers composite. Nevertheless, the Fermat test is often used if a rapid screening of numbers is needed, for instance in the key generation phase of the RSA public-key cryptographic algorithm.
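A minimal Python sketch of the Fermat screening test, following the same pick-and-check structure; the number of rounds is an illustrative choice.

```python
import random

def fermat_screen(n: int, rounds: int = 5) -> bool:
    """Fermat test: n is composite if a^(n-1) mod n != 1 for some base a.
    A True result means only 'probably prime': pseudoprimes (and, for every
    coprime base, Carmichael numbers) can pass."""
    if n < 4:
        return n in (2, 3)
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        # If gcd(a, n) > 1 the congruence below also fails, so n is caught anyway.
        if pow(a, n - 1, n) != 1:
            return False                  # Fermat's little theorem is violated
    return True
```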
Leonard Adleman and Ming-Deh Huang presented an errorless (but expected polynomial-time) variant of the elliptic curve primality test.
Unlike the other probabilistic tests, this algorithm produces a primality certificate, and thus can be used to prove that a number is prime.
A combination of Shor's algorithm, an integer factorization method, with the Pocklington primality test could solve the problem in polynomial time on a quantum computer.[7]
Near the beginning of the 20th century, it was shown that a corollary of Fermat's little theorem could be used to test for primality.[9]
However, as this test requires a partial factorization of n − 1, the running time was still quite slow in the worst case.
(Running time is measured in terms of the size of the input, which in this case is ~ log n, the number of bits needed to represent the number n.) The elliptic curve primality test can be proven to run in O((log n)^6), if some conjectures in analytic number theory are true.
Similarly, under the extended Riemann hypothesis, the deterministic Miller's test, which forms the basis of the probabilistic Miller–Rabin test, can be proved to run in Õ((log n)^4).[10]
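Under the extended Riemann hypothesis every odd composite n has a strong witness a below 2(ln n)^2, so checking all bases up to that bound makes the test deterministic; a Python sketch, reusing miller_rabin_round from the earlier example.

```python
import math

def is_prime_miller_erh(n: int) -> bool:
    """Deterministic Miller test; correctness assumes the extended Riemann hypothesis."""
    if n < 4:
        return n in (2, 3)
    if n % 2 == 0:
        return False
    bound = min(n - 2, math.floor(2 * math.log(n) ** 2))
    for a in range(2, bound + 1):          # ERH: some a <= 2(ln n)^2 witnesses any odd composite n
        if not miller_rabin_round(n, a):   # single strong round, defined above
            return False
    return True
```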
In practice, this algorithm is slower than the other two for sizes of numbers that can be dealt with at all.
Because the implementation of these two methods is rather difficult and creates a risk of programming errors, slower but simpler tests are often preferred.
In 2002, the first provably correct, unconditional, deterministic polynomial-time primality test, the AKS primality test, was invented by Manindra Agrawal, Neeraj Kayal, and Nitin Saxena.[12]
Subsequently, Lenstra and Pomerance presented a version of the test which runs in time Õ((log n)^6) unconditionally.[13]
Agrawal, Kayal and Saxena suggest a variant of their algorithm which would run in Õ((log n)^3) if Agrawal's conjecture is true; however, a heuristic argument by Hendrik Lenstra and Carl Pomerance suggests that it is probably false.
In 1975, Vaughan Pratt showed that there existed a certificate for primality that was checkable in polynomial time, and thus that PRIMES was in NP, and therefore in NP ∩ coNP (membership in coNP was already clear, since a nontrivial factor is a short certificate of compositeness).
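A hedged Python sketch of a checker for Pratt-style certificates: a certificate for n supplies a claimed primitive root a together with the prime factorization of n − 1 and sub-certificates for its odd prime factors; the tuple format is an illustrative choice, not a standard one.

```python
def verify_pratt(n: int, cert) -> bool:
    """Verify a Pratt certificate that n is prime.

    cert has the form (a, [(q, e, sub_cert), ...]), where a is a claimed
    primitive root modulo n and n - 1 equals the product of q**e over the
    listed primes q; each odd q carries its own certificate (None for q = 2).
    """
    if n == 2:
        return True                       # base case: 2 is prime
    a, factors = cert
    product = 1
    for q, e, _ in factors:
        product *= q ** e
    if product != n - 1:                  # the q**e must multiply to exactly n - 1
        return False
    if pow(a, n - 1, n) != 1:             # the order of a must divide n - 1
        return False
    for q, e, sub_cert in factors:
        if pow(a, (n - 1) // q, n) == 1:  # the order must not divide (n - 1) / q
            return False
        if q != 2 and not verify_pratt(q, sub_cert):
            return False                  # each odd prime factor needs its own proof
    return True

# Example: 7 is prime, 6 = 2 * 3, and 3 is a primitive root modulo 7.
cert_3 = (2, [(2, 1, None)])              # certificate that 3 is prime
assert verify_pratt(7, (3, [(2, 1, None), (3, 1, cert_3)]))
```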
The Adleman–Pomerance–Rumely primality test from 1983 put PRIMES in QP (quasi-polynomial time), which is not known to be comparable with the classes mentioned above.
Because of its tractability in practice, polynomial-time algorithms assuming the Riemann hypothesis, and other similar evidence, it was long suspected but not proven that primality could be solved in polynomial time.
The Lucas test relies on the fact that the multiplicative order of a number a modulo n is n − 1 for a prime n when a is a primitive root modulo n. If we can show that some a has multiplicative order n − 1 modulo n, that is, that a is primitive for n, we can conclude that n is prime: the order of a divides φ(n) ≤ n − 1, so an element of order n − 1 forces φ(n) = n − 1, which holds only when n is prime.
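A minimal Python sketch of the Lucas test, assuming the caller supplies the distinct prime factors of n − 1 (the hard part in practice); a is a primitive root exactly when a^(n−1) ≡ 1 (mod n) and a^((n−1)/q) ≢ 1 (mod n) for every prime q dividing n − 1.

```python
def lucas_test(n: int, prime_factors_of_n_minus_1: list[int], a: int) -> bool:
    """Return True if a is shown to have multiplicative order n - 1, proving n prime."""
    if pow(a, n - 1, n) != 1:
        return False                      # the order of a does not divide n - 1
    for q in prime_factors_of_n_minus_1:
        if pow(a, (n - 1) // q, n) == 1:
            return False                  # the order divides (n-1)/q, so it is not n - 1
    return True                           # a has order exactly n - 1: n is prime

# Example: 7 - 1 = 6 = 2 * 3, and 3 is a primitive root modulo 7.
assert lucas_test(7, [2, 3], 3)
```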