| Solovay–Strassen primality test | |
|---|---|
| Name | Solovay–Strassen primality test |
| Inventor | Robert Solovay; Volker Strassen |
| Introduced | 1977 |
| Field | Number theory; Cryptography; Computational mathematics |
| Input | Odd integer n > 2 |
| Output | "composite" or "probable prime" |
| Time complexity | Randomized polynomial time |
The Solovay–Strassen primality test is a randomized algorithm for assessing whether an integer is prime, introduced by Robert M. Solovay and Volker Strassen in 1977. It combines Euler's criterion with the Jacobi symbol to produce a test with one-sided error: it can prove an integer composite, but can only accumulate probabilistic evidence of primality. It influenced subsequent randomized tests such as the Miller–Rabin primality test and, more distantly, deterministic algorithms like the AKS primality test, and it played a historical role in the development of the probabilistic primality testing used in applications such as RSA (cryptosystem) and public-key infrastructure.
The test arises in the context of algorithmic primality testing and leverages arithmetic properties studied by mathematicians such as Carl Friedrich Gauss and Adrien-Marie Legendre. Solovay and Strassen framed a randomized decision procedure that, for a given odd integer n, uses modular exponentiation and the Jacobi symbol—introduced by Carl Gustav Jacobi as a generalization of the Legendre symbol—to detect deviations from behavior that every odd prime must exhibit. The result became a standard part of the toolkit of computational number theorists and cryptographers.
For an odd integer n > 2 the algorithm repeats the following k times: pick a random integer a with 2 ≤ a ≤ n − 2; compute the Jacobi symbol x = (a|n) using the reduction rules derived from quadratic reciprocity, an algorithm structurally similar to Euclid's algorithm; and compute a^{(n−1)/2} mod n by fast modular exponentiation. If x = 0 (which means gcd(a, n) > 1, exposing a nontrivial factor) or a^{(n−1)/2} ≢ x (mod n), the algorithm outputs "composite" and halts; otherwise, after k trials in which the congruence held every time, it outputs "probable prime". Practical implementations rely on efficient modular-multiplication techniques, such as Peter L. Montgomery's Montgomery multiplication, of the kind found in arbitrary-precision libraries maintained by the GNU Project.
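The loop above can be sketched in Python; this is a minimal illustration rather than a hardened implementation, and the names `jacobi` and `solovay_strassen` are chosen here for exposition:

```python
import random

def jacobi(a, n):
    """Jacobi symbol (a|n) for odd n > 0, via quadratic reciprocity."""
    assert n > 0 and n % 2 == 1
    a %= n
    result = 1
    while a != 0:
        while a % 2 == 0:            # factor out 2s: (2|n) = -1 iff n ≡ 3, 5 (mod 8)
            a //= 2
            if n % 8 in (3, 5):
                result = -result
        a, n = n, a                  # reciprocity: flip sign if both ≡ 3 (mod 4)
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0   # 0 signals gcd(a, n) > 1

def solovay_strassen(n, k=20):
    """False: n is certainly composite.  True: n is prime with error at most 2^-k."""
    if n in (2, 3):
        return True
    if n < 2 or n % 2 == 0:
        return False
    for _ in range(k):
        a = random.randrange(2, n - 1)
        x = jacobi(a, n)
        # An odd prime n satisfies a^((n-1)/2) ≡ (a|n) ≢ 0 (mod n) for every base a.
        if x == 0 or pow(a, (n - 1) // 2, n) != x % n:
            return False
    return True
```

Python's three-argument `pow` performs the fast modular exponentiation; `x % n` maps a Jacobi value of −1 to its representative n − 1 before the comparison.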
For a prime n, Euler's criterion guarantees that a^{(n−1)/2} ≡ (a|n) (mod n) for all a coprime to n, since for prime n the Jacobi symbol coincides with the Legendre symbol. For an odd composite n, Solovay and Strassen proved that at least half of the bases a coprime to n are "witnesses" that violate the congruence, so each random trial detects compositeness with probability at least 1/2; this probabilistic bound parallels the error analyses of Gary Miller and Michael Rabin for the strong pseudoprime test. Consequently, after k independent iterations the probability that a composite is declared "probable prime" is at most 2^{−k}, a bound reflected in the security parameters of standards from bodies such as the National Institute of Standards and Technology.
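The at-least-half witness bound can be checked exhaustively for a small composite. The sketch below (restating the Jacobi symbol so it is self-contained) counts the "Euler liars" of n = 91 = 7 × 13, the coprime bases that satisfy the congruence despite n being composite:

```python
from math import gcd

def jacobi(a, n):
    """Jacobi symbol (a|n) for odd n > 0."""
    a %= n
    result = 1
    while a != 0:
        while a % 2 == 0:
            a //= 2
            if n % 8 in (3, 5):
                result = -result
        a, n = n, a
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0

n = 91                                  # composite: 7 * 13
coprime = [a for a in range(1, n) if gcd(a, n) == 1]
# Euler liars: coprime bases that nevertheless satisfy a^((n-1)/2) ≡ (a|n) (mod n).
liars = [a for a in coprime
         if pow(a, (n - 1) // 2, n) == jacobi(a, n) % n]
print(len(liars), "liars among", len(coprime), "coprime bases")
```

Since the liars are at most half of the φ(91) = 72 coprime bases, every trial against n = 91 exposes compositeness with probability at least 1/2.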
The cost per iteration is dominated by the modular exponentiation and the Jacobi symbol computation; with square-and-multiply exponentiation and a binary-GCD-style Jacobi algorithm, the runtime is polynomial in the bit-length of n. In bit-complexity terms, a single iteration takes O((log n)^3) bit operations with classical multiplication, improving with fast multiplication techniques such as the Schönhage–Strassen algorithm of Arnold Schönhage and Volker Strassen (mathematician) and with Montgomery multiplication due to Peter L. Montgomery. Implementers of cryptographic code also target constant-time routines to mitigate side-channel attacks, and practical implementations appear in cryptographic libraries such as OpenSSL and LibreSSL.
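The dominant cost, modular exponentiation, takes O(log e) modular multiplications under the classical square-and-multiply scheme; a minimal version follows (Python's built-in three-argument `pow` computes the same result, typically faster):

```python
def modexp(a, e, n):
    """Compute a^e mod n by right-to-left binary (square-and-multiply) exponentiation.

    Uses O(log e) modular multiplications; with schoolbook multiplication each
    costs O((log n)^2) bit operations, giving the O((log n)^3) bound per test
    iteration when e is about n."""
    result = 1
    a %= n
    while e > 0:
        if e & 1:                 # multiply in the current power when the bit is set
            result = result * a % n
        a = a * a % n             # square for the next bit
        e >>= 1
    return result
```

Faster multiplication algorithms lower the cost of each of the O(log e) steps, which is where the asymptotic improvements mentioned above enter.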
The Solovay–Strassen test is closely related to the test of Gary L. Miller and its randomized variant by Michael O. Rabin, which uses strong pseudoprimality and achieves a sharper error bound of at most 1/4 per trial. Later developments include elliptic curve primality proving in the Atkin–Morain style and the deterministic polynomial-time AKS primality test of Manindra Agrawal, Neeraj Kayal, and Nitin Saxena. In practice, implementations reduce average-case work with pretests such as trial division by small primes before running the randomized test.
Although supplanted in most production systems by the Miller–Rabin primality test for performance and implementation reasons, the Solovay–Strassen test remains a canonical example in university curricula and in standard textbooks on algorithms and cryptography, such as Introduction to Algorithms by Thomas H. Cormen and coauthors. It informed key-generation procedures for public-key schemes such as RSA and Diffie–Hellman key exchange and the treatment of probabilistic primality testing in cryptographic standards, and it remains common in pedagogical and experimental cryptographic software.
Category:Algorithms