Miller–Rabin primality test


The Miller–Rabin primality test or Rabin–Miller primality test is a primality test: an algorithm which determines whether a given number is prime, similar to the Fermat primality test and the Solovay–Strassen primality test. Gary L. Miller discovered it in 1976; Miller's version of the test is deterministic, but its correctness relies on the unproven extended Riemann hypothesis. Michael O. Rabin modified it to obtain an unconditional probabilistic algorithm in 1980. The test is often said to have been discovered by M. M. Artjuhov in 1967, but that is incorrect: a reading of Artjuhov's paper shows that he discovered the Solovay–Strassen test, not the Miller–Rabin test.

Mathematical concepts

Just like the Fermat and Solovay–Strassen tests, the Miller–Rabin test relies on an equality or set of equalities that hold true for prime values, then checks whether or not they hold for a number that we want to test for primality.
First, a lemma about square roots of unity in the finite field Z/pZ, where p is prime and p > 2. Certainly 1 and −1 always yield 1 when squared modulo p; call these the trivial square roots of 1. There are no nontrivial square roots of 1 modulo p. To show this, suppose that x is a square root of 1 modulo p. Then:

x^2 ≡ 1 (mod p)
(x − 1)(x + 1) ≡ 0 (mod p)

In other words, the prime p divides the product (x − 1)(x + 1). By Euclid's lemma it divides one of the factors x − 1 or x + 1, implying that x is congruent to either 1 or −1 modulo p.
Now, let n be prime with n > 2. It follows that n − 1 is even and we can write it as 2^s·d, where s and d are positive integers and d is odd. For each a in (Z/nZ)*, either

a^d ≡ 1 (mod n)

or

a^(2^r·d) ≡ −1 (mod n)

for some 0 ≤ r ≤ s − 1.
To show that one of these must be true, recall Fermat's little theorem, which states that for a prime number n:

a^(n−1) ≡ 1 (mod n)

By the lemma above, if we keep taking square roots of a^(n−1), we will get either 1 or −1. If we get −1, then the second equality holds and we are done. If we never get −1, then when we have taken out every power of 2, we are left with the first equality.
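This chain of square roots can be made concrete with a short Python sketch (standard library only; the helper name square_root_chain is an illustrative choice, not part of the original text). It lists a^d, a^(2d), ..., a^(2^s·d) modulo n; for a prime n the sequence always ends in 1, and if any earlier value differs from 1, the value just before the first 1 is n − 1.

def square_root_chain(n, a):
    # Write n - 1 as 2^s * d with d odd, then return [a^d, a^(2d), ..., a^(2^s * d)] mod n.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    return [pow(a, d * 2**r, n) for r in range(s + 1)]

print(square_root_chain(17, 3))   # [3, 9, 13, 16, 1]: 16 = -1 (mod 17) appears just before the final 1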
The Miller–Rabin primality test is based on the contrapositive of the above claim. That is, if we can find an a such that

a^d ≢ 1 (mod n)

and

a^(2^r·d) ≢ −1 (mod n)

for all 0 ≤ r ≤ s − 1, then n is not prime. We call a a witness for the compositeness of n. Otherwise a is called a strong liar, and n is a strong probable prime to base a. The term "strong liar" refers to the case where n is composite but nevertheless the equations hold as they would for a prime.
Note that Miller–Rabin pseudoprimes are called strong pseudoprimes.
Every odd composite n has many witnesses a. However, no simple way of generating such an a is known. The solution is to make the test probabilistic: we choose a non-zero a in Z/nZ randomly and check whether or not it is a witness for the compositeness of n. If n is composite, most of the choices for a will be witnesses, and the test will detect n as composite with high probability. There is, nevertheless, a small chance that we are unlucky and hit an a which is a strong liar for n. We may reduce the probability of such an error by repeating the test for several independently chosen a.
However, there are diminishing returns in testing many bases, because if n is a pseudoprime to base a, then it seems more likely to be a pseudoprime to another base b.
For testing large numbers, it is common to choose random bases a, as, a priori, we don't know the distribution of witnesses and liars among the numbers 1, 2, ..., n − 1. In particular, Arnault gave a 397-digit composite number for which all bases a less than 307 are strong liars. As expected, this number was reported to be prime by the Maple isprime function, which implemented the Miller–Rabin test by checking the specific bases 2, 3, 5, 7, and 11. However, selection of a few specific small bases can guarantee identification of composites for n less than some maximum determined by said bases. This maximum is generally quite large compared to the bases. As random bases lack such determinism for small n, specific bases are better in some circumstances.

Example

Suppose we wish to determine whether n = 221 is prime. We write n − 1 = 220 as 2^2·55, so that we have s = 2 and d = 55. We randomly select a number a such that 1 < a < n − 1, say a = 174. We proceed to compute:

174^(2^0·55) mod 221 = 174^55 mod 221 = 47, which is neither 1 nor 220
174^(2^1·55) mod 221 = 174^110 mod 221 = 220 = n − 1

Since 220 ≡ −1 (mod n), either 221 is prime, or 174 is a strong liar for 221. We try another random a, this time choosing a = 137:

137^(2^0·55) mod 221 = 137^55 mod 221 = 188, which is neither 1 nor 220
137^(2^1·55) mod 221 = 137^110 mod 221 = 205, which is not 220

Hence 137 is a witness for the compositeness of 221, and 174 was in fact a strong liar. Note that this tells us nothing about the factors of 221 (which are 13 and 17). However, the example with 341 in a later section shows how these calculations can sometimes produce a factor of n.
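These residues are easy to check with Python's built-in three-argument pow (a quick verification sketch, not part of the original example):

n, d = 221, 55
print(pow(174, d, n), pow(174, 2 * d, n))   # 47 220  -> 174 is a strong liar for 221
print(pow(137, d, n), pow(137, 2 * d, n))   # 188 205 -> 137 is a witness for 221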

Miller–Rabin test

The algorithm can be written in pseudocode as follows. The parameter k determines the accuracy of the test. The greater the number of rounds, the more accurate the result.
Input #1: n > 3, an odd integer to be tested for primality
Input #2: k, the number of rounds of testing to perform
Output: “composite” if n is found to be composite, “probably prime” otherwise
write n as 2^r·d + 1 with d odd
WitnessLoop: repeat k times:
    pick a random integer a in the range [2, n − 2]
    x ← a^d mod n
    if x = 1 or x = n − 1 then
        continue WitnessLoop
    repeat r − 1 times:
        x ← x^2 mod n
        if x = n − 1 then
            continue WitnessLoop
    return “composite”
return “probably prime”
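A minimal Python sketch of this procedure follows (the function name is_probable_prime and its interface are illustrative choices, not part of the original pseudocode); it relies on Python's built-in modular exponentiation pow(a, d, n):

import random

def is_probable_prime(n, k):
    # Miller-Rabin test: returns False if n is composite,
    # True if n is probably prime (error probability at most 4^-k).
    if n <= 3 or n % 2 == 0:
        raise ValueError("expects an odd integer n > 3")
    # write n as 2^r * d + 1 with d odd
    r, d = 0, n - 1
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(k):                      # witness loop
        a = random.randrange(2, n - 1)      # random base in [2, n - 2]
        x = pow(a, d, n)
        if x == 1 or x == n - 1:
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False                    # a is a witness: n is composite
    return True                             # no witness found: n is probably prime

For example, is_probable_prime(221, 10) returns False with overwhelming probability, while is_probable_prime(2**31 - 1, 10) returns True (2^31 − 1 is prime).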

Complexity

Using repeated squaring, the running time of this algorithm is O(k log^3 n), where n is the number tested for primality and k is the number of rounds performed; thus this is an efficient, polynomial-time algorithm. FFT-based multiplication can push the running time down to Õ(k log^2 n), where the soft-O notation hides factors that are polylogarithmic in log n.

Accuracy

The error made by the primality test is measured by the probability for a composite number to be declared probably prime. The more bases a are tried, the better the accuracy of the test. It can be shown that if n is composite, then at most 1/4 of the bases a are strong liars for n. As a consequence, if n is composite then running k iterations of the Miller–Rabin test will declare n probably prime with a probability at most 4^−k.
This is an improvement over the Solovay–Strassen test, whose worst-case error bound is 2^−k. Moreover, the Miller–Rabin test is strictly stronger than the Solovay–Strassen test in the sense that for every composite n, the set of strong liars for n is a subset of the set of Euler liars for n, and for many n, the subset is proper.
In addition, for large values of n, the probability for a composite number to be declared probably prime is often significantly smaller than 4^−k. For instance, for most numbers n, this probability is bounded by 8^−k; the proportion of numbers n which invalidate this upper bound vanishes as we consider larger values of n. Hence the average case has a much better accuracy than 4^−k, a fact which can be exploited for generating probable primes. However, such improved error bounds should not be relied upon to verify primes whose probability distribution is not controlled, since a cryptographic adversary might send a carefully chosen pseudoprime in order to defeat the primality test. In such contexts, only the worst-case error bound of 4^−k can be relied upon.
It is important to note that in many common applications of this algorithm, we are not interested in the error bound described above. The above error bound is the probability of a composite number being declared as a probable prime after k rounds of testing. We are often instead interested in the probability that, after passing k rounds of testing, the number being tested is actually a composite number. Formally, if we call the event of declaring n a probable prime after k rounds of Miller–Rabin Y_k, and we call the event that n is composite X, then the above bound gives us Pr(Y_k | X), whereas the quantity of interest is Pr(X | Y_k). Bayes' theorem gives us a way to relate these two conditional probabilities, namely

Pr(X | Y_k) = Pr(Y_k | X) Pr(X) / ( Pr(Y_k | X) Pr(X) + Pr(Y_k | ¬X) Pr(¬X) )

This tells us that the probability that we are often interested in is related not just to the 4^−k bound above, but also to probabilities related to the density of prime numbers in the region near n.
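As an illustration only (these assumptions are not from the original text): suppose candidates are uniform random odd b-bit integers, so the prime number theorem gives roughly Pr(¬X) ≈ 2/(b ln 2); a prime always passes, so Pr(Y_k | ¬X) = 1; and take Pr(Y_k | X) at its worst-case value 4^−k. The formula above can then be evaluated numerically with a short Python sketch:

from math import log

def composite_given_passed(b, k):
    # Rough Bayes computation under the illustrative assumptions stated above.
    p_prime = 2 / (b * log(2))          # approximate density of primes among odd b-bit numbers
    p_composite = 1 - p_prime
    p_pass_if_composite = 4.0 ** (-k)   # worst-case bound used as the value of Pr(Y_k | X)
    numerator = p_pass_if_composite * p_composite
    return numerator / (numerator + 1.0 * p_prime)

print(composite_given_passed(1024, 40))   # about 3e-22 for these illustrative parameters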

Deterministic variants

Miller test

The Miller–Rabin algorithm can be made deterministic by trying all possible a below a certain limit. The problem in general is to set the limit so that the test is still reliable.
If the tested number n is composite, the strong liars a coprime to n are contained in a proper subgroup of the group (Z/nZ)*, which means that if we test all a from a set which generates (Z/nZ)*, one of them must lie outside the said subgroup, hence must be a witness for the compositeness of n. Assuming the truth of the generalized Riemann hypothesis (GRH), it is known that the group is generated by its elements smaller than O((ln n)^2), which was already noted by Miller. The constant involved in the big O notation was reduced to 2 by Eric Bach. This leads to the following conditional primality testing algorithm, known as the Miller test:
Input: n > 1, an odd integer to be tested for primality
Output: “composite” if n is composite, “prime” otherwise
write n as 2^r·d + 1 with d odd
WitnessLoop: for all a in the range [2, min(n − 2, ⌊2(ln n)^2⌋)]:
    x ← a^d mod n
    if x = 1 or x = n − 1 then
        continue WitnessLoop
    repeat r − 1 times:
        x ← x^2 mod n
        if x = n − 1 then
            continue WitnessLoop
    return “composite”
return “prime”
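A Python sketch of this conditional test (the function name miller_test_grh and the use of math.log and math.floor are illustrative choices; the answer “prime” is correct only conditionally on GRH):

from math import floor, log

def miller_test_grh(n):
    # Deterministic Miller test for odd n > 3; relies on Bach's GRH bound 2*(ln n)^2.
    r, d = 0, n - 1
    while d % 2 == 0:
        d //= 2
        r += 1
    limit = min(n - 2, floor(2 * log(n) ** 2))
    for a in range(2, limit + 1):
        x = pow(a, d, n)
        if x == 1 or x == n - 1:
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return "composite"
    return "prime"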
The full power of the generalized Riemann hypothesis is not needed to ensure the correctness of the test: as we deal with subgroups of even index, it suffices to assume the validity of GRH for quadratic Dirichlet characters.
The running time of the algorithm is, in the soft-O notation, Õ((log n)^4).
The Miller test is not used in practice. For most purposes, proper use of the probabilistic Miller–Rabin test or the Baillie–PSW primality test gives sufficient confidence while running much faster. It is also slower in practice than commonly used proof methods such as APR-CL and ECPP which give results that do not rely on unproven assumptions. For theoretical purposes requiring a deterministic polynomial time algorithm, it was superseded by the AKS primality test, which also does not rely on unproven assumptions.

Testing against small sets of bases

When the number n to be tested is small, trying all a below 2(ln n)^2 is not necessary, as much smaller sets of potential witnesses are known to suffice. For example, Pomerance, Selfridge, Wagstaff and Jaeschke have verified that
    if n < 2,047, it is enough to test a = 2;
    if n < 1,373,653, it is enough to test a = 2 and 3;
    if n < 9,080,191, it is enough to test a = 31 and 73;
    if n < 25,326,001, it is enough to test a = 2, 3, and 5;
    if n < 3,215,031,751, it is enough to test a = 2, 3, 5, and 7;
    if n < 4,759,123,141, it is enough to test a = 2, 7, and 61;
    if n < 1,122,004,669,633, it is enough to test a = 2, 13, 23, and 1662803;
    if n < 2,152,302,898,747, it is enough to test a = 2, 3, 5, 7, and 11;
    if n < 3,474,749,660,383, it is enough to test a = 2, 3, 5, 7, 11, and 13;
    if n < 341,550,071,728,321, it is enough to test a = 2, 3, 5, 7, 11, 13, and 17.
Using the 2010 work of Feitsma and Galway enumerating all base-2 pseudoprimes below 2^64, this was extended (with the first result later shown using different methods in Jiang and Deng):
    if n < 3,825,123,056,546,413,051, it is enough to test a = 2, 3, 5, 7, 11, 13, 17, 19, and 23;
    if n < 18,446,744,073,709,551,616 = 2^64, it is enough to test a = 2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, and 37.
Sorenson and Webster verified the above and computed precise results for inputs larger than 64 bits.
Other criteria of this sort, often more efficient than those shown above, exist. They give very fast deterministic primality tests for numbers in the appropriate range, without any assumptions.
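For instance, a fully deterministic test for inputs below 2^64 can run the witness loop over the twelve fixed bases quoted above; here is a minimal Python sketch (the name is_prime_u64 is an illustrative choice, and its correctness assumes the 2^64 result stated above):

def is_prime_u64(n):
    # Deterministic for 0 <= n < 2**64, using the fixed bases 2..37 quoted above.
    if n < 2:
        return False
    bases = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)
    if n in bases:
        return True
    if any(n % p == 0 for p in bases):
        return False
    r, d = 0, n - 1
    while d % 2 == 0:
        d //= 2
        r += 1
    for a in bases:
        x = pow(a, d, n)
        if x == 1 or x == n - 1:
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True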
There is a small list of potential witnesses for every possible input size. However, no finite set of bases is sufficient for all composite numbers. Alford, Granville, and Pomerance have shown that there exist infinitely many composite numbers n whose smallest compositeness witness is at least (ln n)^(1/(3 ln ln ln n)). They also argue heuristically that the smallest number w such that every composite number below n has a compositeness witness less than w should be of order Θ(log n · log log n).

Variants for finding factors

By inserting greatest common divisor calculations into the above algorithm, we can sometimes obtain a factor of n instead of merely determining that n is composite. This occurs for example when n is a probable prime base a but not a strong probable prime base a. We can detect this case in the algorithm by comparing x in the inner loop not only to −1, but also to 1.
If at some iteration 1 ≤ i < r of the inner loop the algorithm discovers that the value of the variable x has become equal to 1, then, knowing that the previous value y of the variable x was checked to be different from ±1, we can deduce that y is a square root of 1 which is neither 1 nor −1. As this is not possible when n is prime, this implies that n is composite. Moreover, since y^2 ≡ 1 (mod n), we know that n divides the product (y − 1)(y + 1), and since y ≢ ±1 (mod n), it divides neither factor on its own.
From this we deduce that gcd(y − 1, n) and gcd(y + 1, n) are non-trivial factors of n. Hence, if factoring is a goal, these GCD calculations can be inserted into the algorithm at little additional computational cost. This leads to the following pseudocode, a variant of the algorithm above in which the inner loop checks for a non-trivial square root of 1 and returns a factor when it finds one:
Input #1: n > 3, an odd integer to be tested for primality
Input #2: k, the number of rounds of testing to perform
Output:
    (“multiple of m”, where m is a non-trivial factor of n), if such a factor is found,
    “composite” if n is otherwise found to be composite,
    “probably prime” otherwise
write n as 2^r·d + 1 with d odd
WitnessLoop: repeat k times:
    pick a random integer a in the range [2, n − 2]
    x ← a^d mod n
    if x = 1 or x = n − 1 then
        continue WitnessLoop
    repeat r − 1 times:
        y ← x^2 mod n
        if y = 1 then
            return (“multiple of”, gcd(x − 1, n))
        x ← y
        if x = n − 1 then
            continue WitnessLoop
    return “composite”
return “probably prime”
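A Python sketch of this variant (using math.gcd; the function name mr_with_factor is an illustrative choice):

import random
from math import gcd

def mr_with_factor(n, k):
    # Miller-Rabin variant that returns a non-trivial factor of n when the inner
    # loop encounters a square root of 1 other than +-1 (a sketch of the pseudocode above).
    r, d = 0, n - 1
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(k):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x == 1 or x == n - 1:
            continue
        for _ in range(r - 1):
            y = pow(x, 2, n)
            if y == 1:
                return ("multiple of", gcd(x - 1, n))   # non-trivial factor found
            x = y
            if x == n - 1:
                break
        else:
            return "composite"
    return "probably prime"

With n = 341, any round that happens to pick the base a = 2 returns ("multiple of", 31), matching the worked example below.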
This algorithm does not yield a probabilistic factorization algorithm because it is only able to find factors for numbers n which are pseudoprime to base a. For other numbers, the algorithm only returns “composite” with no further information.
For example, consider n = 341 and a = 2. We have n − 1 = 340 = 2^2·85. Then 2^85 mod 341 = 32 and 32^2 mod 341 = 1. This tells us that n is a pseudoprime base 2, but not a strong pseudoprime base 2. By computing a GCD at this stage, we find a factor of 341: gcd(32 − 1, 341) = 31. Indeed, 341 = 11·31.
In order to find factors more often, the same ideas can also be applied to the square roots of −1.
This strategy can be implemented by exploiting knowledge from previous rounds of the Miller–Rabin test. In those rounds we may have identified a square root modulo n of −1, say R. Then, when x^2 mod n = n − 1, we can compare the value of x against R: if x is neither R nor n − R, then gcd(x − R, n) and gcd(x + R, n) are non-trivial factors of n.
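A Python sketch of this square-root-of-−1 bookkeeping (the function name mr_remember_root is an illustrative choice; it is a variation on the test above, not a procedure given verbatim in the original text):

import random
from math import gcd

def mr_remember_root(n, k):
    # Remember a square root R of -1 mod n from earlier rounds; if a later round
    # produces a different square root of -1, the two together yield a non-trivial factor.
    r, d = 0, n - 1
    while d % 2 == 0:
        d //= 2
        r += 1
    R = None                                  # known square root of -1 modulo n, if any
    for _ in range(k):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x == 1 or x == n - 1:
            continue
        found_minus_one = False
        for _ in range(r - 1):
            y = pow(x, 2, n)
            if y == n - 1:                    # x is a square root of -1 modulo n
                if R is None:
                    R = x
                elif x != R and x != n - R:
                    return ("multiple of", gcd(x - R, n))
                found_minus_one = True
                break
            x = y
        if not found_minus_one:
            return "composite"
    return "probably prime"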

Generation of probable primes

The Miller–Rabin test can be used to generate strong probable primes, simply by drawing integers at random until one passes the test. This algorithm terminates almost surely. The pseudocode for generating b‐bit strong probable primes is as follows:
Input #1: b, the number of bits of the result
Input #2: k, the number of rounds of testing to perform
Output: a strong probable prime n
while True:
    pick a random odd integer n in the range [2^(b−1), 2^b − 1]
    if the Miller–Rabin test with inputs n and k returns “probably prime” then
        return n
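A Python sketch of this generator, reusing the is_probable_prime function sketched earlier (both names are illustrative choices):

import random

def generate_probable_prime(b, k):
    # Draw uniform random odd b-bit integers until one passes k rounds of Miller-Rabin.
    while True:
        n = random.randrange(2 ** (b - 1) + 1, 2 ** b, 2)   # random odd integer with b bits
        if is_probable_prime(n, k):                          # sketched in an earlier section
            return n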
The error measure of this generator is the probability that it outputs a composite number. Using the fact that the Miller–Rabin test itself often has an error bound much smaller than 4^−k, Damgård, Landrock and Pomerance derived several error bounds for the generator, with various classes of parameters b and k. These error bounds allow an implementor to choose a reasonable k for a desired accuracy.
One of these error bounds is 4^−k, which holds for all b ≥ 2. Again this simple bound can be improved for large values of b: for instance, another bound derived by the same authors holds for all b ≥ 21 and k ≥ b/4, and is smaller than 4^−k as soon as b ≥ 32.
A number of cryptographic libraries use the Miller–Rabin test to generate probable primes. Albrecht et al. were able to construct composite numbers that some of these libraries declared to be prime.