LLMpedia: The first transparent, open encyclopedia generated by LLMs

Miller–Rabin

Generated by GPT-5-mini
Note: This article was automatically generated by a large language model (LLM) from purely parametric knowledge (no retrieval). It may contain inaccuracies or hallucinations. This encyclopedia is part of a research project currently under review.
Article Genealogy
Parent: Number Theory Hop 4
Expansion Funnel Raw 67 → Dedup 6 → NER 3 → Enqueued 2
1. Extracted: 67
2. After dedup: 6
3. After NER: 3 (rejected: 3; not NE: 1, parse: 2)
4. Enqueued: 2 (similarity rejected: 1)
Miller–Rabin
Name: Miller–Rabin
Caption: Probabilistic primality testing
Field: Computer science, Number theory
Introduced: 1976 (Miller), 1980 (Rabin)
Inventors: Gary L. Miller, Michael O. Rabin

Miller–Rabin is a probabilistic primality testing algorithm used in computational number theory, computational complexity, and cryptography. It strengthens the Fermat primality test, complements deterministic algorithms such as the AKS primality test, and forms a practical backbone for key generation in systems derived from RSA and protocols influenced by Diffie–Hellman. The test is widely implemented in software projects such as OpenSSL and GNU Privacy Guard, and in libraries used by Bitcoin, Ethereum, and other blockchain platforms for efficient checking of large candidate primes.

Introduction

Miller–Rabin builds on earlier work in algorithmic number theory, in particular Fermat's little theorem and deterministic tests explored in academic settings such as Stanford University and MIT. The procedure addresses primality of the large integers that arise in cryptographic contexts, including standards from NIST and implementations in the Linux and Windows cryptographic subsystems. Its probabilistic nature places it among the randomized algorithms studied in complexity classes such as BPP, with related research from institutions including Princeton University, Harvard University, and Bell Labs.

Algorithm

Given an odd integer n > 2, the algorithm first writes n − 1 = 2^s · d with d odd, a decomposition reminiscent of methods used in Euclidean algorithm analyses at institutions like the University of Cambridge and ETH Zurich. For each of one or more random bases a with 2 ≤ a ≤ n − 2, it computes x = a^d mod n, leveraging modular exponentiation techniques linked to implementations from the GNU Project and optimizations cited in work by researchers at IBM and Microsoft Research. The core loop checks whether x ≡ 1 (mod n) or x ≡ −1 (mod n); if neither holds, it squares x up to s − 1 times, looking for x ≡ −1 (mod n), mirroring exponentiation-by-squaring strategies developed at Bell Labs, Microsoft Research, and Intel. If none of these congruences holds, the base a is a witness and n is declared composite; if every tested base fails to witness compositeness, n is a probable prime, a notion framed in literature from the University of California, Berkeley and Cornell University.
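The procedure above can be sketched in Python; this is a minimal illustration, not a production implementation (the function name and the default of 20 rounds are choices made for this sketch, not part of any standard):

```python
import random

def miller_rabin(n, k=20):
    """Return False if n is composite, True if n is a probable prime
    after k rounds with independent random bases."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7):            # trial division by a few small primes
        if n % p == 0:
            return n == p
    # Write n - 1 = 2**s * d with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(k):
        a = random.randrange(2, n - 1)   # random base with 2 <= a <= n - 2
        x = pow(a, d, n)                 # x = a**d mod n
        if x == 1 or x == n - 1:
            continue                     # this base is not a witness
        for _ in range(s - 1):           # square up to s - 1 times
            x = pow(x, 2, n)
            if x == n - 1:
                break                    # not a witness after all
        else:
            return False                 # a witnesses that n is composite
    return True                          # probable prime
```

For a prime input no base is ever a witness, so the function always returns True; for a composite input each random round detects compositeness with probability at least 3/4.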

Correctness and Error Analysis

Correctness analyses draw on number-theoretic results going back to Gauss and later formalizations by mathematicians connected to Princeton University and the University of Oxford. For composite n, most bases a serve as witnesses to compositeness; Rabin proved that for odd composite n > 9, at most one quarter of the bases in [1, n − 1] are nonwitnesses (strong liars), a bound that underlies the error probabilities cited in NIST standards and in cryptographic research from RSA Laboratories and Stanford University. The error probability after k independent random bases is therefore at most 4^(−k), a result used in protocols developed at MIT, Carnegie Mellon University, and the University of Waterloo. Deterministic guarantees for bounded ranges of n rest on results from researchers affiliated with the University of Michigan and the University of Montreal, who identified small witness sets and bounds used in deterministic variants adopted by OpenBSD and Debian.
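The one-quarter bound can be checked exhaustively for a small composite. The helper below (a name invented for this sketch) runs one round of the test for every base in [1, n − 1] and counts the nonwitnesses:

```python
def count_strong_liars(n):
    """For an odd composite n, count the bases a in [1, n - 1] that
    pass one Miller-Rabin round (the nonwitnesses, or strong liars)."""
    # Write n - 1 = 2**s * d with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    liars = 0
    for a in range(1, n):
        x = pow(a, d, n)
        if x == 1 or x == n - 1:
            liars += 1                   # passes immediately
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                liars += 1               # reaches -1 while squaring
                break
    return liars
```

For n = 91 = 7 · 13 this counts 18 strong liars, below the bound of (n − 1)/4 = 22.5.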

Computational Complexity and Implementation

Performance depends on modular exponentiation and on multiplication algorithms such as Karatsuba and Schönhage–Strassen, whose analyses originate from groups at ETH Zurich, IBM Research, and the University of Bonn. For an n-bit integer, a single Miller–Rabin round costs O(n · M(n)) bit operations, where M(n) denotes the cost of multiplying two n-bit numbers, since the exponentiation performs about n modular squarings; practical implementations in OpenSSL, GnuPG, and numerical libraries from NVIDIA and Google optimize this with Montgomery reduction and assembly routines tuned for AMD and Intel processors. Implementers must also generate the random bases with RNGs validated by NIST and entropy sources recommended by the IETF and ISO. Real-world deployments in the Tor Project, Signal, and blockchain clients balance the number of rounds against the performance constraints of environments such as Amazon Web Services and embedded systems built on ARM.
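The cost estimate reflects roughly one modular squaring per bit of the exponent. A textbook square-and-multiply routine makes this visible (illustrative only; production libraries keep operands in Montgomery form and use constant-time code):

```python
def modexp(base, exp, mod):
    """Left-to-right square-and-multiply: one squaring per exponent bit,
    plus one extra multiply per set bit, so about 2 * exp.bit_length()
    modular multiplications in the worst case."""
    result = 1
    base %= mod
    for bit in bin(exp)[2:]:               # most significant bit first
        result = (result * result) % mod   # square on every step
        if bit == '1':
            result = (result * base) % mod # multiply on set bits
    return result
```

With M(n) the cost of one such modular multiplication on n-bit operands, the ~n iterations give the O(n · M(n)) round cost quoted above.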

Variants and Improvements

Deterministic variants leverage results by researchers at the University of Toronto and École Polytechnique that identify finite sets of bases sufficient for all n below large thresholds (for example, the first twelve primes suffice as bases for every n < 2^64), integrating with deterministic algorithms such as the AKS primality test and optimizations from Princeton University and ETH Zurich. Improvements include combined sieving and filtering methods used in projects like PrimeGrid and GIMPS, strong-probable-prime tests with base-selection heuristics analyzed by teams at Stanford University and Microsoft Research, and hybrid approaches integrating elliptic curve primality proving (ECPP) methodologies from CNRS and the University of Bordeaux.
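As an illustration of the fixed-base idea, the sketch below assumes the widely reported result that the first twelve primes form a sufficient witness set for all n < 2^64 (the function name is invented for this example):

```python
def is_prime_64(n):
    """Deterministic Miller-Rabin for n < 2**64, testing the fixed
    base set {2, ..., 37} (the first twelve primes)."""
    if n < 2:
        return False
    bases = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)
    for p in bases:
        if n % p == 0:
            return n == p
    # Write n - 1 = 2**s * d with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for a in bases:                      # fixed bases, no randomness
        x = pow(a, d, n)
        if x == 1 or x == n - 1:
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False                 # a is a witness: n is composite
    return True                          # proven prime for n < 2**64
```

Because the base set is fixed, the answer is exact within the stated range rather than probabilistic.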

Applications and Practical Use Cases

Miller–Rabin is central to public-key cryptography, including RSA and DSA key generation and the generation of domain parameters for elliptic-curve cryptography standards promulgated by NIST and implemented in libraries such as OpenSSL, LibreSSL, and BoringSSL. Cryptocurrencies such as Bitcoin and smart-contract platforms like Ethereum rely on the same libraries for secure key generation. Research groups at Google, Facebook, and Microsoft Research use Miller–Rabin in large-scale cryptographic operations, while open-source communities including Debian, the Fedora Project, and FreeBSD incorporate it into package toolchains and build systems. Its balance of speed and probabilistic assurance makes it suitable both for cloud services from AWS and Azure and for constrained devices in the ARM and Intel ecosystems.
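In key generation the test is typically wrapped in a sampling loop: draw random odd candidates of the target bit length and keep the first probable prime. A simplified sketch under those assumptions (real implementations add small-prime sieving and cryptographically vetted RNGs; the names here are illustrative):

```python
import random

def random_probable_prime(bits, rounds=40):
    """Sample odd candidates with the top bit set until one passes
    `rounds` Miller-Rabin rounds (error bound at most 4**-rounds)."""
    while True:
        # Force the top bit (exact bit length) and the low bit (odd).
        n = random.getrandbits(bits) | (1 << (bits - 1)) | 1
        if _probable_prime(n, rounds):
            return n

def _probable_prime(n, rounds):
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True
```

By the prime number theorem, roughly one in ln(2^bits) candidates is prime, so the loop terminates after a modest number of draws.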

Category:Primality tests