| Bateman–Horn | |
|---|---|
| Name | Bateman–Horn conjecture |
| Field | Number theory |
| Introduced | 1962 |
| Proponents | Paul T. Bateman; Roger A. Horn |
| Status | Open problem |
The Bateman–Horn conjecture is a statement in number theory proposing an asymptotic density for the integers at which one or several given irreducible polynomials simultaneously take prime values. It generalizes conjectures about primes in arithmetic progressions, prime constellations, and prime values of polynomials, and connects to themes in the work of Euclid, Euler, Dirichlet, Legendre, and Bunyakovsky. The conjecture has also influenced computational projects by teams at Princeton University, the Massachusetts Institute of Technology, the University of Cambridge, and the Institute for Advanced Study.
The conjecture gives an asymptotic formula for the number of integers n ≤ x at which k irreducible polynomials f1, f2, …, fk with integer coefficients simultaneously take prime values. It asserts that this counting function is asymptotic to the product of a constant C(f1,...,fk), which encodes local obstructions, and an integral of a reciprocal power of the logarithm, paralleling the formulas in the proofs of the Prime Number Theorem by Hadamard and de la Vallée Poussin. The constant incorporates local densities at each prime p, akin to the factors appearing in Dirichlet's theorem on arithmetic progressions and in the conjectures of Hardy and Littlewood. The statement subsumes special cases such as the twin prime conjecture, studied quantitatively by Viggo Brun, and Bunyakovsky's conjecture on prime values of a single polynomial.
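Written out explicitly (the standard form of the conjectured asymptotic, using the notation of the formal statement below), the count and its constant are:

```latex
\pi_{f_1,\dots,f_k}(x) \;\sim\; \frac{C(f_1,\dots,f_k)}{d_1 d_2 \cdots d_k}
  \int_2^x \frac{dt}{(\log t)^k},
\qquad
C(f_1,\dots,f_k) \;=\; \prod_{p\ \text{prime}}
  \left(1 - \frac{1}{p}\right)^{-k}\left(1 - \frac{N(p)}{p}\right),
```

where d_i = deg f_i and N(p) denotes the number of residues n modulo p for which f_1(n)⋯f_k(n) ≡ 0 (mod p).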
Motivation traces through classical problems about primes of specific forms studied by Fermat, Mersenne, and Sophie Germain; the systematic modern framing arose from the prime k-tuple heuristics of Hardy and Littlewood and from investigations by Bunyakovsky and Schinzel. Bateman and Horn synthesized these earlier heuristics with local-global analysis, drawing on methods of analytic number theory developed at institutions such as the University of Cambridge and the University of Chicago. The conjecture was proposed in 1962, paralleling computational advances at Bell Labs and theoretical outlooks associated with figures such as Atle Selberg, Paul Erdős, and G. H. Hardy.
Let f1,...,fk be distinct irreducible polynomials with integer coefficients and positive leading coefficients such that the product f1⋯fk has no fixed prime divisor, that is, for every prime p there is some n with f1(n)⋯fk(n) not divisible by p. Define π_f(x) as the number of n ≤ x with all fi(n) prime. The conjecture predicts π_f(x) ∼ (C(f1,...,fk) / (d1⋯dk)) ∫_2^x dt / (log t)^k, where di = deg fi and C is an explicit product over primes p of a correction factor reflecting how often the product ∏fi(n) is divisible by p. Examples include k=1 with f(n)=n, recovering the Prime Number Theorem; k=1 with f(n)=an+b and gcd(a,b)=1, connecting to Dirichlet's theorem on arithmetic progressions; k=2 with f1(n)=n and f2(n)=n+2, yielding a quantitative form of the classical twin prime conjecture; and quadratic examples such as f(n)=n^2+1, one of Landau's problems, with roots in questions considered by Euler.
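As a concrete illustration (a sketch, not drawn from the original sources), the following Python script compares the predicted and actual counts for the twin-prime case f1(n) = n, f2(n) = n + 2; the bounds X, P_MAX, and STEPS are arbitrary illustrative choices.

```python
# Empirical check of the Bateman-Horn prediction for the pair f1(n) = n, f2(n) = n + 2.
from math import log

X = 2_000_000        # count n <= X with n and n + 2 both prime
P_MAX = 100_000      # primes used to approximate the Euler product for C
STEPS = 200_000      # subdivisions for the numerical integral

def sieve(limit):
    """Return (list of primes up to limit, primality table) via the sieve of Eratosthenes."""
    is_prime = bytearray([1]) * (limit + 1)
    is_prime[0:2] = b"\x00\x00"
    for p in range(2, int(limit ** 0.5) + 1):
        if is_prime[p]:
            is_prime[p * p :: p] = bytearray(len(is_prime[p * p :: p]))
    return [n for n in range(2, limit + 1) if is_prime[n]], is_prime

primes, is_prime = sieve(X + 2)

# Actual count of n <= X with n and n + 2 both prime.
actual = sum(1 for n in range(2, X + 1) if is_prime[n] and is_prime[n + 2])

# Bateman-Horn constant C = prod_p (1 - 1/p)^(-2) (1 - N(p)/p), where
# N(p) = #{n mod p : n(n + 2) == 0 mod p}; here N(2) = 1 and N(p) = 2 for p > 2.
C = 1.0
for p in primes:
    if p > P_MAX:
        break
    N_p = 1 if p == 2 else 2
    C *= (1 - 1 / p) ** (-2) * (1 - N_p / p)

# Midpoint-rule approximation of the integral of dt / (log t)^2 from 2 to X
# (both polynomials have degree 1, so no degree correction is needed).
h = (X - 2) / STEPS
integral = sum(h / log(2 + (i + 0.5) * h) ** 2 for i in range(STEPS))

predicted = C * integral
print(f"C ~ {C:.6f}")
print(f"actual twin-prime count up to {X}: {actual}")
print(f"Bateman-Horn prediction:           {predicted:.1f}")
```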
Heuristics combine independence assumptions for the primality of the different polynomial values with local correction factors at each prime p. The model treats each value fi(n) as if it were a random integer of comparable size, invoking ideas from probabilistic arguments in work by Cramér, Hardy, Littlewood, and Granville. Local densities come from counting the residues n modulo p for which ∏fi(n) ≡ 0 (mod p), a technique familiar from Hasse-principle-style local-global reasoning and from density computations used by Chebotarev in other contexts. The integral arises from approximating the probability that an integer near t is prime by 1/log t, as in the Prime Number Theorem.
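As a worked instance of this residue counting (an illustration, not taken from the original paper), consider the single polynomial f(n) = n^2 + 1, where (−1 | p) denotes the Legendre symbol:

```latex
N(2) = 1, \qquad N(p) = 1 + \left(\tfrac{-1}{p}\right) \ \text{for odd } p,
\qquad
\left(1 - \tfrac{1}{p}\right)^{-1}\!\left(1 - \tfrac{N(p)}{p}\right)
  = 1 - \frac{1}{\,p-1\,}\left(\tfrac{-1}{p}\right),
```

so the local factor at p = 2 equals 1, and the conjecture predicts #{n ≤ x : n^2 + 1 prime} ∼ (C/2) ∫_2^x dt/log t with C = ∏_{p>2} (1 − (−1|p)/(p − 1)) ≈ 1.3728, the factor 1/2 coming from deg f = 2.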
If true, the conjecture implies infinitely many primes in many specific families: infinitely many twin primes, infinitely many Sophie Germain primes, infinitely many prime values of quadratic polynomials such as n^2+1, and prime k-tuples matching the admissible patterns studied by Hardy and Littlewood. It yields asymptotic counts for primes in polynomial sequences that would refine what sieve methods can prove, such as those developed by Brun, Atle Selberg, Dimitris Koukoulopoulos, and the collaborations around Goldston. It also interacts with conjectures about maximal gaps between primes studied by Maier, Pintz, and Zhang, and would constrain the distributions examined in computational records held by groups at the University of Georgia and the University of Tennessee.
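To make one of these implications concrete (a worked example under the conjecture, not an established result), the Sophie Germain pair f1(n) = n, f2(n) = 2n + 1 has N(2) = 1 and N(p) = 2 for every odd prime, the same local data as the twin-prime pair, so the predicted constant coincides with the twin-prime constant:

```latex
\#\{\,n \le x : n \text{ and } 2n+1 \text{ both prime}\,\}
\;\sim\; 2\prod_{p>2}\left(1 - \frac{1}{(p-1)^2}\right)\int_2^x \frac{dt}{(\log t)^2}.
```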
Extensive computations have tested many cases: twin-prime computations by projects at the University of Tennessee and PrimeGrid; searches for primes of the form n^2+1 by teams using resources at CERN and Lawrence Livermore National Laboratory; and databases curated by OEIS contributors and by researchers at Wolfram Research. Numerical data for small-degree polynomials and admissible k-tuples show good agreement with the predicted densities up to the computational limits documented by groups at MIT, Stanford University, the University of Illinois Urbana-Champaign, and Los Alamos National Laboratory. Large-scale verifications often rely on primality-proving algorithms developed by Atkin and Morain and on sieving infrastructure from GIMPS-style collaborations.
The conjecture is heuristic and unproven; it relies on unproven independence hypotheses analogous to the assumptions in Cramér-model discussions and in the heuristic derivations of Hardy and Littlewood. Limitations include sensitivity to exceptional algebraic relations among the polynomials and to deep issues in the analytic continuation of associated L-functions, as explored in work by Langlands, Davenport, and Iwaniec. A central open problem is to establish any nontrivial case beyond the linear-polynomial results of Dirichlet; partial and conditional progress has come from Vinogradov-type methods, the sieve breakthroughs of Goldston, Pintz, and Yıldırım, and the bounded-gaps results of Zhang with subsequent refinements by Maynard and Tao. Proving the conjecture, or constructing counterexamples, would likely require breakthroughs linking the distribution of polynomial values to deep properties of L-functions, to the automorphic forms studied in the Langlands program, or to new sieve and analytic techniques from researchers at institutions such as Princeton University and ETH Zurich.