| Miller–Rabin primality test | |
|---|---|
| Name | Miller–Rabin primality test |
| Caption | Probabilistic primality testing |
| Inventor | Gary L. Miller; Michael O. Rabin |
| Introduced | 1976; 1980 |
| Related | AKS primality test; Fermat primality test; Solovay–Strassen primality test |
The Miller–Rabin primality test is a probabilistic algorithm for distinguishing prime numbers from composite integers using modular exponentiation and randomly chosen witness bases. Gary L. Miller described a deterministic version of the test conditional on the extended Riemann hypothesis; Michael O. Rabin modified it into an unconditional probabilistic algorithm, yielding a practical and widely used primality check in cryptographic and computational settings. The test trades a small, controllable error probability for speed and is sometimes paired with deterministic methods such as the AKS primality test when an unconditional proof of primality is required.
The test builds on the ideas of the Fermat primality test and the Solovay–Strassen primality test and rests on classical results about arithmetic modulo odd primes due to Leonhard Euler and Carl Friedrich Gauss. Its core step factors n − 1 into a power of two times an odd part, which exposes nontrivial square roots of 1 modulo n. Practical adoption accelerated with the spread of public-key cryptography, notably RSA (cryptosystem), and implementations appear in widely deployed libraries such as OpenSSL, LibreSSL, and software from the GNU Project.
The algorithm first writes n − 1 = 2^s · d with d odd, a decomposition discussed in algorithmic expositions such as Donald Knuth's. For a chosen base a with 2 ≤ a ≤ n − 2, it computes x = a^d mod n by fast modular exponentiation. If x ≡ 1 or x ≡ n − 1 (mod n), the base a is a nonwitness and the round is inconclusive; otherwise x is squared up to s − 1 times, checking after each squaring whether x ≡ n − 1 (mod n). If no squaring produces n − 1, then n is declared composite and a is a witness to its compositeness. Otherwise further randomly chosen bases are tried, and each independent round shrinks the probability of error.
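The procedure above can be sketched in Python; this is an illustrative implementation with names chosen here, not code from any particular library:

```python
import random

def miller_rabin(n: int, rounds: int = 20) -> bool:
    """Probabilistic Miller-Rabin test: True means 'probably prime'."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7):          # small-prime pretests
        if n % p == 0:
            return n == p
    # Write n - 1 = 2^s * d with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)            # fast modular exponentiation
        if x == 1 or x == n - 1:
            continue                # a is a nonwitness this round
        for _ in range(s - 1):      # square up to s - 1 times
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False            # a witnesses that n is composite
    return True                     # no witness found: probably prime
```

With 20 rounds the error probability is at most 4^−20, so the test is reliable enough for most practical filtering.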
Correctness relies on properties of the multiplicative group of integers modulo a prime p: modulo a prime, the only square roots of 1 are ±1, so any other square root of 1 modulo n certifies that n is composite. Michael O. Rabin proved that for any odd composite n > 9, at most one quarter of the bases a in [1, n − 1] are strong liars, so testing k independently chosen random bases bounds the error probability by 4^−k, decreasing exponentially with k. Deterministic variants exist for bounded ranges: Gary L. Miller showed that, assuming the generalized Riemann hypothesis (GRH) connected to Bernhard Riemann's work, it suffices to test all bases a ≤ 2(ln n)^2, and unconditional fixed lists of witness bases have been computed for specific ranges.
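The quarter bound can be verified exhaustively for a small composite. The helper below (an illustrative sketch; the function name is chosen here) enumerates the strong liars of an odd composite n:

```python
def strong_liars(n: int) -> list[int]:
    """Return all bases a in [1, n-1] that are strong liars for odd n."""
    d, s = n - 1, 0
    while d % 2 == 0:               # n - 1 = 2^s * d, d odd
        d //= 2
        s += 1
    liars = []
    for a in range(1, n):
        x = pow(a, d, n)
        if x == 1 or x == n - 1:
            liars.append(a)         # a passes the test despite n composite
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                liars.append(a)
                break
    return liars

# For the odd composite n = 65 = 5 * 13, the liars are a small
# fraction of all bases, consistent with the (n - 1)/4 bound.
liars_65 = strong_liars(65)
```

The bases 1 and n − 1 are always strong liars, which is why practical implementations draw a from 2 ≤ a ≤ n − 2.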
Deterministic selections of bases for 32-bit and 64-bit integers have been established by exhaustive computation; for example, the twelve prime bases 2 through 37 suffice for all n below about 3.3 × 10^24, which covers every 64-bit integer, and such small fixed witness sets appear in production code including OpenBSD and NetBSD. Hybrid approaches combine Miller–Rabin with the deterministic AKS primality test or with elliptic curve primality proving, which builds on work by A. O. L. Atkin. In practice, libraries precede Miller–Rabin with trial division by small primes and with sieving techniques, such as the Atkin–Bernstein sieve, to discard obvious composites cheaply; these pretests are common in open-source ecosystems such as Debian and the Fedora Project.
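A deterministic variant for 64-bit inputs using the fixed witness set just described might look like the following sketch (not taken from OpenBSD or NetBSD source):

```python
def is_prime_u64(n: int) -> bool:
    """Deterministic Miller-Rabin for n < 2**64: the twelve prime
    bases 2..37 are a sufficient witness set for this range."""
    if n < 2:
        return False
    witnesses = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)
    if n in witnesses:
        return True
    if any(n % p == 0 for p in witnesses):   # trial-division pretest
        return False
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for a in witnesses:
        x = pow(a, d, n)
        if x == 1 or x == n - 1:
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False                     # a is a witness: composite
    return True                              # provably prime for n < 2**64
```

Because the witness set is fixed, the result is a proof of primality for the 64-bit range rather than a probabilistic verdict.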
Implementations rely on fast modular exponentiation and multiplication, notably the modular multiplication technique of Peter Montgomery and the square-and-multiply methods codified in textbooks by Donald Knuth and Thomas H. Cormen. The cost per base is O(log^3 n) bit operations with schoolbook multiplication, so testing k bases costs O(k·log^3 n); asymptotically faster multiplication lowers the exponent, and space requirements amount to a few residues of roughly log n bits each. Production implementations also address side-channel resistance, using constant-time arithmetic so that the secret primes handled during key generation do not leak through timing.
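The square-and-multiply exponentiation at the heart of the test can be sketched as follows; Python's built-in three-argument pow performs the same computation:

```python
def modexp(base: int, exp: int, mod: int) -> int:
    """Right-to-left square-and-multiply: O(log exp) modular
    multiplications, equivalent to pow(base, exp, mod)."""
    result = 1
    base %= mod
    while exp > 0:
        if exp & 1:                       # current bit of the exponent set
            result = (result * base) % mod
        base = (base * base) % mod        # square for the next bit
        exp >>= 1
    return result
```

Each of the O(log exp) iterations performs at most two modular multiplications, which is where the log^3 n per-base cost comes from under schoolbook multiplication.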
Widespread use occurs in key generation for RSA (cryptosystem) and Diffie–Hellman key exchange, and in protocols standardized by bodies such as the Internet Engineering Task Force and NIST. Large prime searches such as GIMPS and academic collaborations employ Miller–Rabin as a fast filter before heavier primality proofs. Cryptographic libraries including OpenSSL, Bouncy Castle, and LibreSSL use Miller–Rabin for performance-sensitive primality testing in secure communications. Beyond cryptography, it is a standard tool in computational number theory and algorithmic research.
Category:Primality tests