Its probabilistic variant remains widely used in practice, as one of the simplest and fastest tests known.
Miller's version of the test is deterministic, but its correctness relies on the unproven extended Riemann hypothesis.
No composite number is a strong pseudoprime to all bases at the same time (contrary to the Fermat primality test for which Fermat pseudoprimes to all bases exist: the Carmichael numbers).
For testing arbitrarily large n, choosing bases at random is essential, as we don't know the distribution of witnesses and strong liars among the numbers 2, 3, ..., n − 2.[b]
However, a pre-selected set of a few small bases guarantees the identification of all composites up to a pre-computed maximum.
However, the example with 341 in a later section shows how the computations performed by the test can sometimes produce a factor of n. For a practical guide to choosing the value of a, see Testing against small sets of bases.
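For concreteness, here is a minimal sketch of the probabilistic test in Python; the bases are drawn at random from 2, ..., n − 2 as discussed above, and the function names and structure are illustrative rather than the article's pseudocode.

```python
import random

def is_strong_probable_prime(n, a):
    """Return True if odd n > 2 is a strong probable prime to base a."""
    # Write n - 1 as 2^s * d with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    x = pow(a, d, n)
    if x == 1 or x == n - 1:
        return True
    for _ in range(s - 1):
        x = x * x % n
        if x == n - 1:
            return True
    return False  # a witnesses the compositeness of n

def miller_rabin(n, k):
    """Probabilistic Miller-Rabin test with k rounds and random bases."""
    if n < 4:
        return n in (2, 3)
    if n % 2 == 0:
        return False
    for _ in range(k):
        a = random.randrange(2, n - 1)  # a uniform in 2, 3, ..., n - 2
        if not is_strong_probable_prime(n, a):
            return False  # composite for certain
    return True  # probably prime
```

A composite n is accepted only if every base drawn is a strong liar, an event whose probability the following bounds control.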
For instance, for most numbers n, this probability is bounded by 8^−k; the proportion of numbers n which invalidate this upper bound vanishes as we consider larger values of n.[8] Hence the average case has a much better accuracy than 4^−k, a fact which can be exploited for generating probable primes (see below).
However, such improved error bounds should not be relied upon to verify primes whose probability distribution is not controlled, since a cryptographic adversary might send a carefully chosen pseudoprime in order to defeat the primality test.
These two probabilities are related by Bayes' law; the computation is sketched below. The expression simplifies because all prime numbers are correctly reported as strong probable primes (the test has no false negatives), and dropping the left part of the denominator then yields a simple upper bound. Hence this conditional probability is related not only to the error measure discussed above, which is bounded by 4^−k, but also to the probability distribution of the input number. In the general case, as said earlier, this distribution is controlled by a cryptographic adversary and thus unknown, so we cannot deduce much about this conditional probability.
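The omitted equations can be reconstructed as follows (a sketch; P denotes the event that the tested number is prime and MR_k the event that it passes k rounds of the test, notation assumed here rather than taken from the text):

```latex
\Pr[\lnot P \mid \mathrm{MR}_k]
  = \frac{\Pr[\lnot P \cap \mathrm{MR}_k]}{\Pr[\mathrm{MR}_k]}
  = \frac{\Pr[\lnot P \cap \mathrm{MR}_k]}
         {\Pr[\lnot P \cap \mathrm{MR}_k] + \Pr[P \cap \mathrm{MR}_k]}
  = \frac{\Pr[\lnot P \cap \mathrm{MR}_k]}
         {\Pr[\lnot P \cap \mathrm{MR}_k] + \Pr[P]}
  \le \frac{\Pr[\lnot P \cap \mathrm{MR}_k]}{\Pr[P]}
  \le 4^{-k}\,\frac{\Pr[\lnot P]}{\Pr[P]}
```

The simplification Pr[P ∩ MR_k] = Pr[P] uses the absence of false negatives, and the final inequality uses the per-round bound Pr[MR_k | ¬P] ≤ 4^−k.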
Taking n as the limit would imply O(n) trials, hence the running time would be exponential with respect to the size log n of the input.
To improve the running time, the challenge is then to lower the limit as much as possible while keeping the test reliable.
If the tested number n is composite, the strong liars a coprime to n are contained in a proper subgroup of the group (Z/nZ)*. This means that if we test all a from a set which generates (Z/nZ)*, one of them must lie outside that subgroup and hence must be a witness for the compositeness of n. Assuming the truth of the extended Riemann hypothesis (ERH), it is known that the group is generated by its elements smaller than O((ln n)^2), which was already noted by Miller.[7] The running time of the algorithm is, in the soft-O notation, Õ((log n)^4) (using FFT-based multiplication).
It is also slower in practice than commonly used proof methods such as APR-CL and ECPP, which give results that do not rely on unproven assumptions.
For theoretical purposes requiring a deterministic polynomial time algorithm, it was superseded by the AKS primality test, which also does not rely on unproven assumptions.
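As an illustration, here is a sketch of the ERH-conditional deterministic variant in Python; it reuses is_strong_probable_prime from the sketch above and assumes the commonly quoted witness bound of roughly 2(ln n)^2, so the exact cut-off used here is an assumption.

```python
import math

def miller_deterministic(n):
    """Miller's deterministic variant; its correctness for every n relies on
    the extended Riemann hypothesis (is_strong_probable_prime as above)."""
    if n < 4:
        return n in (2, 3)
    if n % 2 == 0:
        return False
    # Under ERH, every composite n has a witness a <= 2*(ln n)^2.
    limit = min(n - 2, math.floor(2 * math.log(n) ** 2))
    return all(is_strong_probable_prime(n, a) for a in range(2, limit + 1))
```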
When the number n to be tested is small, trying all a < 2(ln n)^2 is not necessary, as much smaller sets of potential witnesses are known to suffice.
For example, Pomerance, Selfridge and Wagstaff[4] and Jaeschke[12] have verified sets of small bases that suffice below explicit bounds; for instance, testing the single base a = 2 suffices for all n < 2,047, and the first seven primes suffice for all n < 341,550,071,728,321. Using the 2010 work of Feitsma and Galway[13] enumerating all base-2 pseudoprimes up to 2^64, these bounds were extended (see OEIS: A014233); in particular, testing the first twelve primes (2 through 37) suffices for all n < 2^64. The first of the extended results was later shown using different methods by Jiang and Deng.[14] Sorenson and Webster[15] verified the above and calculated precise results beyond 64 bits. Other criteria of this sort, often more efficient (fewer bases required) than those shown above, exist.[10][16][17][18] They give very fast deterministic primality tests for numbers in the appropriate range, without any assumptions.
There is a small list of potential witnesses for every possible input size (at most b values for b‐bit numbers).
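For example, here is a sketch of the common 64-bit implementation pattern in Python, relying on the result cited above that the first twelve primes suffice for all n < 2^64 and reusing is_strong_probable_prime from the earlier sketch:

```python
def is_prime_64bit(n):
    """Deterministic primality test for 0 <= n < 2**64, using the fixed
    base set {2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37}."""
    bases = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)
    if n < 2:
        return False
    if n in bases:
        return True
    if any(n % p == 0 for p in bases):
        return False
    # is_strong_probable_prime is the helper from the first sketch above.
    return all(is_strong_probable_prime(n, a) for a in bases)
```

The preliminary trial division by the same twelve primes handles small inputs and ensures that each base is coprime to n.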
By inserting greatest common divisor calculations into the above algorithm, we can sometimes obtain a factor of n instead of merely determining that n is composite.
Hence, if factoring is a goal, these gcd calculations can be inserted into the algorithm at little additional computational cost.
This leads to a variant of the algorithm with the gcd computations added; a sketch is given below. This is not a probabilistic factorization algorithm, because it is only able to find factors for numbers n which are pseudoprime to base a (in other words, for numbers n such that a^(n−1) ≡ 1 mod n).
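A sketch of such a variant in Python; the overall control flow mirrors the standard test, the names are illustrative, and the gcd check on a nontrivial square root of 1 modulo n is the addition.

```python
import random
from math import gcd

def miller_rabin_with_factor(n, k):
    """Miller-Rabin test (odd n > 3) that reports a factor of n when a
    nontrivial square root of 1 modulo n is encountered."""
    # Write n - 1 as 2^s * d with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(k):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        for _ in range(s):
            y = x * x % n
            if y == 1 and x != 1 and x != n - 1:
                # x^2 = 1 (mod n) with x != +-1, so (x - 1)(x + 1) = 0 (mod n)
                # and gcd(x - 1, n) is a nontrivial factor of n.
                return ("multiple of", gcd(x - 1, n))
            x = y
        if y != 1:
            return "composite"
    return "probably prime"
```

For instance, if n = 341 and the base a = 2 happens to be drawn, the inner loop reaches x = 32 with 32^2 ≡ 1 (mod 341), so the gcd check returns the factor gcd(31, 341) = 31, matching the example with 341 mentioned earlier.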
This algorithm terminates almost surely (since at each iteration there is a chance to draw a prime number).
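A sketch of such a generator in Python, assuming the standard approach of drawing uniformly random odd b-bit integers and returning the first one that passes k rounds of the test (miller_rabin as in the first sketch above):

```python
import random

def generate_probable_prime(b, k):
    """Return a random b-bit probable prime, using k Miller-Rabin rounds
    (miller_rabin is the function from the first sketch above)."""
    assert b >= 2
    while True:
        # Uniform over the odd integers in [2^(b-1), 2^b - 1]:
        # setting the low bit maps each even draw to the next odd number.
        n = random.randrange(2 ** (b - 1), 2 ** b) | 1
        if miller_rabin(n, k):
            return n
```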
If we draw odd integers uniformly in the range [2^(b−1), 2^b − 1], then the probability of drawing a prime is (π(2^b) − π(2^(b−1))) / 2^(b−2), where π is the prime-counting function.
Using an asymptotic expansion of π (an extension of the prime number theorem), we can approximate this probability when b grows towards infinity.
Taking into account the worst-case complexity of each Miller–Rabin test (see earlier), the expected running time of the generator with inputs b and k is then bounded by O(k b^4) (or Õ(k b^3) using FFT-based multiplication).
Using the relation between conditional probabilities (shown in an earlier section) and the asymptotic behavior of the probability of drawing a prime (shown just above), one can derive upper bounds, depending only on b and k, for the probability that this generator outputs a composite number; such bounds were computed by Damgård, Landrock and Pomerance.
One of these error bounds is 4^−k, which holds for all b ≥ 2 (the authors only showed it for b ≥ 51, while Ronald Burthe Jr. completed the proof with the remaining values 2 ≤ b ≤ 50[21]).