| ETH (exponential time hypothesis) | |
|---|---|
| Name | ETH (exponential time hypothesis) |
| Field | Computational complexity theory |
| Proposed | 1999 |
| Proposer | Impagliazzo and Paturi |
| Related | P versus NP problem, NP-completeness |
ETH (exponential time hypothesis). The exponential time hypothesis is a conjecture in computational complexity theory asserting that 3-SAT, and by extension many other NP-complete decision problems, cannot be solved in subexponential time. It refines the P versus NP problem and underpins conditional lower bounds across algorithm design, parameterized complexity, and fine-grained complexity, and it has become one of the most widely used working assumptions in theoretical computer science.
Formally, ETH states that there is a constant c > 0 such that 3-SAT cannot be decided in time O(2^{cn}) by a deterministic Turing machine, where n is the number of variables; equivalently, 3-SAT admits no 2^{o(n)}-time algorithm. Via the sparsification lemma of Impagliazzo, Paturi, and Zane, the same lower bound holds when n is taken to be the number of clauses, which makes the hypothesis robust under the linear-size reductions that underlie the classification of NP-complete problems going back to the Cook–Levin theorem. ETH is strictly stronger than P ≠ NP and should be read against unconditional results such as the time hierarchy theorem, which separates deterministic time classes but does not by itself yield lower bounds for 3-SAT.
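As a concrete baseline, the trivial algorithm tries all 2^n truth assignments; ETH asserts that no algorithm can bring the exponent below cn for every c > 0. A minimal sketch (the function name and DIMACS-style clause encoding are illustrative choices, not from any particular library):

```python
from itertools import product

def brute_force_3sat(clauses, n):
    """Decide satisfiability of a 3-CNF formula by exhaustive search.

    `clauses` is a list of 3-literal clauses; a literal is a nonzero
    integer i (variable i is true) or -i (variable i is false),
    DIMACS-style. Trying all assignments takes Theta(2^n) time;
    ETH says this cannot be improved to 2^{cn} for every c > 0.
    """
    for assignment in product([False, True], repeat=n):
        # The formula is satisfied if every clause has a true literal.
        if all(any(assignment[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            return True
    return False
```

For example, (x1 ∨ x2 ∨ x3) ∧ (¬x1 ∨ ¬x2 ∨ x3) is satisfiable, while (x1) ∧ (¬x1) encoded with repeated literals is not.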
Assuming ETH yields conditional lower bounds for many concrete problems. Hamiltonian Cycle, Independent Set, and Chromatic Number on n-vertex graphs admit no 2^{o(n)}-time algorithms under ETH, because the standard Karp reductions from 3-SAT increase instance size only linearly. In parameterized complexity, ETH implies FPT ≠ W[1] and rules out f(k) · n^{o(k)}-time algorithms for W[1]- and W[2]-complete problems such as k-Clique and k-Dominating Set, where k is the solution size. ETH also interacts with hardness of approximation: combined with the PCP theorem it gives quantitative running-time lower bounds for approximation schemes, complementing assumptions such as the Unique Games Conjecture and its stronger sibling, the Strong Exponential Time Hypothesis.
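The k-Clique bound is a good illustration of what ETH rules out. The obvious algorithm checks all C(n, k) vertex subsets in roughly n^k time, and a known ETH-based result (due to Chen et al.) says the exponent's dependence on k cannot be removed. A minimal sketch, with an assumed adjacency-set representation:

```python
from itertools import combinations

def has_k_clique(adj, k):
    """Check for a k-clique by trying all C(n, k) vertex subsets.

    `adj` maps each vertex to the set of its neighbors. Running time
    is about n^k * k^2. Under ETH, no algorithm decides k-Clique in
    f(k) * n^{o(k)} time for any computable f, so an exponent growing
    with k is essentially unavoidable.
    """
    vertices = list(adj)
    for subset in combinations(vertices, k):
        # A subset is a clique iff every pair inside it is adjacent.
        if all(v in adj[u] for u, v in combinations(subset, 2)):
            return True
    return False
```

A triangle {0, 1, 2} is found as a 3-clique, while a path on three vertices is not.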
Several variants refine ETH. The Strong Exponential Time Hypothesis (SETH) asserts that for every ε > 0 there is a k such that k-SAT cannot be solved in time O(2^{(1-ε)n}); it is the standard assumption in fine-grained complexity, where it implies near-quadratic lower bounds for the Orthogonal Vectors problem and, through reductions, for problems such as Edit Distance and Longest Common Subsequence. Related fine-grained hypotheses concern the 3SUM problem and the All-Pairs Shortest Paths problem. Counting analogues (#ETH, #SETH) transfer the same limits to counting satisfying assignments, and other formulations replace the deterministic machine in the statement with randomized or nondeterministic models.
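The Orthogonal Vectors problem mentioned above asks whether a set of n Boolean vectors contains two with zero inner product. The naive algorithm below runs in O(n^2 · d) time; the OV conjecture, which follows from SETH, says no O(n^{2-ε})-time algorithm exists when the dimension d grows faster than log n. A minimal sketch with an illustrative function name:

```python
def has_orthogonal_pair(vectors):
    """Naive Orthogonal Vectors: test every pair of 0/1 vectors.

    Runs in O(n^2 * d) time for n vectors of dimension d. Under
    SETH, no algorithm solves this in O(n^{2-eps}) time for any
    eps > 0 once d = omega(log n).
    """
    n = len(vectors)
    for i in range(n):
        for j in range(i + 1, n):
            # Two 0/1 vectors are orthogonal iff no coordinate is 1 in both.
            if all(a * b == 0 for a, b in zip(vectors[i], vectors[j])):
                return True
    return False
```

For instance, [1, 0] and [0, 1] are orthogonal, while [1, 1] and [1, 0] are not.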
Evidence for ETH is largely circumstantial: despite decades of intensive algorithmic work, the best known algorithms for 3-SAT still run in time c^n for a constant c strictly greater than 1, and successive improvements have only reduced the base without approaching subexponential time. Partial results include unconditional lower bounds in restricted models such as bounded-depth circuits and resolution proof systems, and a web of subexponential-time equivalences showing that a 2^{o(n)}-time algorithm for any one of a large family of NP-complete problems would yield one for all of them. Parameterized lower bounds conditioned on ETH have been proved for problems such as Dominating Set and Longest Path, and techniques from proof complexity and algebraic methods continue to provide structural insights consistent with the hypothesis.
ETH serves as a lens for fine-grained complexity analyses used in designing algorithms and proving tight conditional lower bounds: it calibrates exact, approximation, and parameterized algorithms by indicating when the known exponential running times are likely optimal, and it underpins conditional hardness for dynamic, streaming, and communication problems. Because its consequences match the best known algorithms for so many problems, the hypothesis remains a central working assumption shaping modern complexity theory.