
Miller-Rabin Primality Test Calculator

Test if large numbers are prime using the Miller-Rabin probabilistic primality test. Enter values for instant results with step-by-step formulas.


Formula

Test: a^d mod n and a^(2^i * d) mod n for i = 0 to r-1

The Miller-Rabin test decomposes n-1 as 2^r * d (d odd), then for each test base a checks whether a^d mod n = 1 or whether a^(2^i * d) mod n = n-1 for some i from 0 to r-1. If neither holds, a is a witness and n is certainly composite. Each iteration with an independent random base reduces the worst-case error probability by a factor of at least 4.
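The procedure above can be sketched in a few lines of Python; `miller_rabin` below is a minimal implementation using the built-in three-argument `pow` for modular exponentiation.

```python
import random

def miller_rabin(n, k=10):
    """Return False if n is composite, True if n is probably prime.

    With k random bases the worst-case error probability is (1/4)^k.
    """
    if n < 2:
        return False
    if n in (2, 3):
        return True
    if n % 2 == 0:
        return False
    # Decompose n - 1 as 2^r * d with d odd
    r, d = 0, n - 1
    while d % 2 == 0:
        r += 1
        d //= 2
    for _ in range(k):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)              # a^d mod n
        if x == 1 or x == n - 1:
            continue                  # this base passes
        for _ in range(r - 1):
            x = pow(x, 2, n)          # a^(2^i * d) mod n
            if x == n - 1:
                break                 # this base passes
        else:
            return False              # a is a witness: n is composite
    return True                       # probably prime
```

A "composite" answer is definitive; only "probably prime" carries the (1/4)^k uncertainty.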

Worked Examples

Example 1: Testing 104729 for Primality

Problem: Use the Miller-Rabin test with 10 iterations to determine if 104729 is prime.

Solution:
n = 104729, n-1 = 104728
Decompose: 104728 = 2^3 x 13091, so r = 3, d = 13091
Test base 2: compute 2^13091 mod 104729 and its square chain
Test base 3: compute 3^13091 mod 104729 and its square chain
... (continue for the remaining random bases)
All bases pass the primality conditions.
Error bound: (1/4)^10 = 9.5367e-7
Confidence: 99.9999%

Result: Probably Prime | Confidence: 99.9999% | 10 iterations
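The chain checks from Example 1 can be reproduced directly; the fixed bases 2, 3, 5, 7 below stand in for the calculator's random bases.

```python
n = 104729
r, d = 3, 13091                      # n - 1 = 104728 = 2^3 x 13091
results = {}
for a in (2, 3, 5, 7):               # stand-ins for random bases
    x = pow(a, d, n)                 # a^d mod n
    chain = [x]
    for _ in range(r - 1):
        x = pow(x, 2, n)             # a^(2^i * d) mod n
        chain.append(x)
    # primality condition: a^d = 1, or some chain element equals n - 1
    results[a] = chain[0] == 1 or (n - 1) in chain
```

Every base passes, consistent with 104729 being prime.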

Example 2: Testing 561 (Carmichael Number)

Problem: Test 561, the smallest Carmichael number, using Miller-Rabin.

Solution:
n = 561, n-1 = 560
Decompose: 560 = 2^4 x 35, so r = 4, d = 35
Trial division: 561 / 3 = 187 (factor found!)
561 = 3 x 11 x 17
Note: 561 passes the Fermat test for all coprime bases, but Miller-Rabin detects it as composite. The Fermat test alone would incorrectly identify it as prime.

Result: Composite | Factor found: 3 | 561 = 3 x 187
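Both routes in Example 2 can be checked in a few lines: trial division finds the factor 3 immediately, and base 2 is already a Miller-Rabin witness for 561.

```python
n = 561
# Trial division finds a small factor
factor = next(p for p in range(2, 25) if n % p == 0)
# Miller-Rabin, base 2: 560 = 2^4 x 35, so r = 4, d = 35
r, d = 4, 35
x = pow(2, d, n)
chain = [x]
for _ in range(r - 1):
    x = pow(x, 2, n)
    chain.append(x)
# chain never starts at 1 and never reaches 560, so 2 is a witness
is_witness = chain[0] != 1 and (n - 1) not in chain
```

The chain for base 2 is [263, 166, 67, 1], so 561 is proved composite even before the factorization is known.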

Frequently Asked Questions

How reliable is the Miller-Rabin test and what is the probability of error?

The Miller-Rabin test has a worst-case error probability of at most 1/4 per iteration, meaning each additional random base tested reduces the probability of incorrectly declaring a composite number as prime by a factor of 4. With k iterations, the error probability is at most (1/4)^k. For 10 iterations, the error probability is less than 1 in a million. For 20 iterations, it drops below 1 in a trillion. In practice, the actual error rate is much lower than this theoretical bound because most composite numbers have many witnesses. The test never produces false negatives: if it says a number is composite, that determination is always correct. Only the probably prime result carries uncertainty.
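The quoted figures follow directly from the (1/4)^k bound:

```python
# Worst-case error bound after k Miller-Rabin iterations
bounds = {k: 0.25 ** k for k in (10, 20, 40)}
# k = 10: below one in a million; k = 20: below one in a trillion
```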

What is the difference between deterministic and probabilistic primality testing?

Deterministic primality tests, such as the AKS algorithm discovered in 2002, produce a guaranteed correct answer for any input with zero probability of error. However, deterministic tests are generally slower for very large numbers. Probabilistic tests like Miller-Rabin trade absolute certainty for dramatically faster execution. The Miller-Rabin test runs in O(k * log^3(n)) time with standard arithmetic, making it practical for numbers with hundreds of digits. For numbers below certain thresholds, specific sets of bases make Miller-Rabin deterministic: testing bases 2, 3, 5, and 7 gives correct results for all numbers below 3,215,031,751. In practice, probabilistic tests with sufficient iterations are reliable enough for cryptographic applications such as RSA key generation.
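A deterministic variant for the threshold mentioned above can be sketched as follows: for n below 3,215,031,751, checking only the bases 2, 3, 5, and 7 is sufficient.

```python
def is_prime_small(n):
    """Deterministic Miller-Rabin for n < 3,215,031,751 (bases 2, 3, 5, 7)."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7):
        if n % p == 0:
            return n == p            # handles the bases themselves
    # Decompose n - 1 as 2^r * d with d odd
    r, d = 0, n - 1
    while d % 2 == 0:
        r += 1
        d //= 2
    for a in (2, 3, 5, 7):
        x = pow(a, d, n)
        if x == 1 or x == n - 1:
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False             # a is a witness
    return True
```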

Why is primality testing important for cryptography and computer security?

Modern public-key cryptography, particularly the RSA algorithm, relies fundamentally on the difficulty of factoring large numbers that are the product of two large primes. Generating RSA keys requires finding two large prime numbers, typically 512 to 2048 bits each. The Miller-Rabin test is the industry-standard method for this because it can quickly verify primality of numbers with hundreds of digits. If primality testing were unreliable, cryptographic keys could be generated from non-prime numbers, making them trivially breakable. The security of internet banking, encrypted communications, and digital signatures all depend on fast, reliable primality testing. Most cryptographic libraries use 40 or more iterations of Miller-Rabin, making the error probability astronomically small.

What are witnesses and liars in the context of Miller-Rabin testing?

In Miller-Rabin terminology, a witness is a base value a that proves a number is composite. If testing base a against number n shows that n fails the primality conditions, then a is called a witness to the compositeness of n. A liar is a base that makes a composite number appear to pass the primality test. For any composite odd number n, at most 1/4 of all possible bases between 2 and n-2 are liars, which is why the error probability per iteration is at most 25 percent. Strong pseudoprimes are composite numbers that have many liars, making them harder to detect as composite. The smallest strong pseudoprime to base 2 is 2047, and it requires testing with base 3 as well to correctly identify it as composite.
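Counting liars exhaustively is feasible for small n; the sketch below lists every strong liar for a composite odd n, confirming the 1/4 bound for n = 65 and that 2 is a liar (but 3 is not) for 2047.

```python
def strong_liars(n):
    """All bases a in [2, n-2] for which odd composite n passes Miller-Rabin."""
    # Decompose n - 1 as 2^r * d with d odd
    r, d = 0, n - 1
    while d % 2 == 0:
        r += 1
        d //= 2
    liars = []
    for a in range(2, n - 1):
        x = pow(a, d, n)
        if x == 1 or x == n - 1:
            liars.append(a)          # a^d condition already satisfied
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                liars.append(a)      # some a^(2^i * d) hit n - 1
                break
    return liars
```

For n = 65 this returns [8, 18, 47, 57]: 4 liars out of 62 candidate bases, well under the 1/4 bound.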

How does the decomposition of n-1 into 2^r * d work in the Miller-Rabin test?

The decomposition of n-1 into the form 2^r * d where d is odd is a crucial first step of the Miller-Rabin algorithm. Since n is odd and greater than 2, n-1 is even and can be repeatedly divided by 2. The exponent r counts how many times n-1 is divisible by 2, and d is the remaining odd factor. For example, if n = 221, then n-1 = 220 = 4 * 55 = 2^2 * 55, giving r = 2 and d = 55. This decomposition allows the algorithm to check a sequence of modular exponentiations: a^d, a^(2d), a^(4d), up to a^(2^(r-1) * d). These values form a chain where each is the square of the previous one modulo n, and the pattern of these values reveals whether n is prime or composite.
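The repeated halving described above is a short loop:

```python
def decompose(n):
    """Write n - 1 as 2**r * d with d odd; returns (r, d)."""
    r, d = 0, n - 1
    while d % 2 == 0:      # strip factors of 2
        r += 1
        d //= 2
    return r, d

decompose(221)      # (2, 55): 220 = 2^2 * 55
decompose(104729)   # (3, 13091), as in Example 1
```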

What is the relationship between Fermat's Little Theorem and the Miller-Rabin test?

The Miller-Rabin test is a strengthened version of the Fermat primality test, which is based on Fermat's Little Theorem stating that if p is prime and a is not divisible by p, then a^(p-1) mod p = 1. The Fermat test simply checks whether a^(n-1) mod n equals 1, but it fails for Carmichael numbers, which are composite numbers that pass the Fermat test for all bases coprime to them. The smallest Carmichael number is 561. Miller-Rabin strengthens this by examining the intermediate values during the computation of a^(n-1) mod n, specifically looking at the square root chain derived from the 2^r * d decomposition. This additional analysis catches Carmichael numbers and dramatically reduces false positives compared to the basic Fermat test.
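The contrast is easy to demonstrate with 561: every coprime base passes the Fermat test, yet the intermediate values for base 2 expose a nontrivial square root of 1.

```python
import math

n = 561                         # smallest Carmichael number: 3 x 11 x 17
# Fermat test passes for every base coprime to n
fermat_fooled = all(pow(a, n - 1, n) == 1
                    for a in range(2, n) if math.gcd(a, n) == 1)
# Miller-Rabin inspects intermediate values: 2^140 mod 561 squares to 1
# but equals neither 1 nor 560 -- a nontrivial square root of 1
root = pow(2, 140, n)           # 2^(2^2 * 35), midway along the chain
nontrivial = root not in (1, n - 1) and pow(root, 2, n) == 1
```

A prime modulus admits only 1 and n-1 as square roots of 1, so finding the value 67 here proves 561 composite where the Fermat test saw nothing.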
