| 3-SAT | |
|---|---|
| Name | 3-SAT |
| Input | A Boolean formula in conjunctive normal form with three literals per clause |
| Question | Is there a satisfying truth assignment? |
| Complexity | NP-complete |
3-SAT is a decision problem about the satisfiability of Boolean formulas in conjunctive normal form, and it is a central problem in theoretical computer science, logic, and combinatorics. The problem influenced work by researchers connected to Stephen Cook, Richard Karp, and institutions such as Bell Labs, Princeton University, and the IBM T.J. Watson Research Center. 3-SAT appears in the literature alongside landmark results such as the Cook–Levin theorem, the P versus NP problem, the Gödel Prize, and the broader corpus of NP-completeness results.
The formal statement asks whether a propositional formula, expressed as a conjunction of clauses each containing exactly three literals, admits a truth assignment that makes the entire formula true. The definition derives from the general Boolean satisfiability problem in the work of Stephen Cook and was developed further in the reductions used by Richard Karp in his 1972 list of NP-complete problems, which drew on research at the University of California, Berkeley, and Bell Labs. Instances are often encoded following constructions in papers from conferences such as STOC and FOCS and journals such as the Journal of the ACM and the SIAM Journal on Computing.
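As a minimal illustration of the decision problem itself (not how practical solvers work), the following sketch checks satisfiability by exhaustive search over all truth assignments. It assumes the common DIMACS-style literal encoding, where a positive integer i denotes the variable x_i and -i denotes its negation; the function name and encoding are illustrative choices, not from the text above.

```python
from itertools import product

# A 3-CNF formula is a list of clauses; each clause is a tuple of three
# literals. Positive integer i means variable x_i, negative -i means ¬x_i.
def brute_force_3sat(clauses, num_vars):
    """Try all 2^n assignments; return a satisfying one or None."""
    for bits in product([False, True], repeat=num_vars):
        # bits[i] is the truth value assigned to variable i + 1.
        if all(any(bits[abs(l) - 1] == (l > 0) for l in clause)
               for clause in clauses):
            return bits
    return None

# (x1 ∨ x2 ∨ ¬x3) ∧ (¬x1 ∨ x2 ∨ x3) ∧ (¬x1 ∨ ¬x2 ∨ ¬x3)
example = [(1, 2, -3), (-1, 2, 3), (-1, -2, -3)]
print(brute_force_3sat(example, 3))  # → (False, False, False)
```

The exponential loop over `2^n` assignments is exactly the naive upper bound that makes the NP-completeness of 3-SAT interesting: no known algorithm improves on exponential time in the worst case.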
3-SAT is NP-complete, following from the Cook–Levin theorem together with standard polynomial-time reductions that transform arbitrary Boolean formulas into 3-CNF form, as formalized by Richard Karp and discussed in textbooks from publishers such as MIT Press and Cambridge University Press. Its complexity status is tightly connected to the P versus NP problem, the Exponential Time Hypothesis, and complexity classes studied at venues like ICALP and institutions like Carnegie Mellon University. Hardness proofs for problems such as the Travelling Salesman Problem and the Clique problem typically proceed by reduction from 3-SAT, and the problem relates to counting variants such as #3-SAT, which is #P-complete, discussed by researchers affiliated with Harvard University and Stanford University.
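The standard transformation into 3-CNF splits each long clause using a chain of fresh variables, preserving satisfiability: a clause (l1 ∨ … ∨ lk) becomes (l1 ∨ l2 ∨ y1) ∧ (¬y1 ∨ l3 ∨ y2) ∧ … ∧ (¬y_{k-3} ∨ l_{k-1} ∨ lk). A sketch under the same integer literal encoding as above (an assumed convention, not fixed by the text):

```python
def to_3cnf(clauses, num_vars):
    """Clause-splitting reduction from CNF to equisatisfiable 3-CNF.
    Literals use the sign convention: i is x_i, -i is ¬x_i.
    Returns (new_clauses, new_num_vars)."""
    out, fresh = [], num_vars
    for c in clauses:
        c = list(c)
        if len(c) <= 3:
            # Pad short clauses by repeating a literal (same formula).
            while len(c) < 3:
                c.append(c[0])
            out.append(tuple(c))
        else:
            # Chain fresh variables: (l1 ∨ l2 ∨ y1), (¬y1 ∨ l3 ∨ y2), ...
            fresh += 1
            out.append((c[0], c[1], fresh))
            for lit in c[2:-2]:
                out.append((-fresh, lit, fresh + 1))
                fresh += 1
            out.append((-fresh, c[-2], c[-1]))
    return out, fresh

# A 5-literal clause becomes three 3-literal clauses with two fresh variables.
print(to_3cnf([(1, 2, 3, 4, 5)], 5))
# → ([(1, 2, 6), (-6, 3, 7), (-7, 4, 5)], 7)
```

The output is equisatisfiable with the input (not logically equivalent): any satisfying assignment of the original extends to one of the 3-CNF form, and conversely.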
Exact algorithms include the DPLL procedure, conflict-driven clause learning, and backtracking heuristics that were advanced in work at IBM Research, the University of Waterloo, and groups around the Max Planck Institute for Informatics. Practical SAT solvers, such as those developed by teams at the University of California, Berkeley, the University of Oxford, and industry labs like Microsoft Research, implement optimizations inspired by branching strategies from papers at the CAV and SAT conferences. Heuristic and randomized algorithms draw on techniques from researchers affiliated with the Massachusetts Institute of Technology, ETH Zurich, and Princeton University, while parallel and portfolio-based solvers reflect engineering efforts at Google and Amazon.
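The core of the DPLL procedure mentioned above is unit propagation plus case splitting on a literal. The following is a minimal sketch, deliberately omitting pure-literal elimination, watched literals, and the clause-learning machinery of modern CDCL solvers; the literal encoding is the same assumed integer convention as before.

```python
def dpll(clauses, assignment=None):
    """Minimal DPLL: unit propagation plus branching on a literal.
    Clauses are tuples of nonzero ints; i means x_i, -i its negation.
    Returns a satisfying assignment (dict var -> bool) or None."""
    assignment = dict(assignment or {})

    def simplify(cls, lit):
        # Drop clauses satisfied by lit; remove the falsified literal -lit.
        result = []
        for c in cls:
            if lit in c:
                continue
            reduced = tuple(l for l in c if l != -lit)
            if not reduced:          # empty clause: conflict
                return None
            result.append(reduced)
        return result

    # Unit propagation: repeatedly assign literals forced by unit clauses.
    while True:
        unit = next((c[0] for c in clauses if len(c) == 1), None)
        if unit is None:
            break
        assignment[abs(unit)] = unit > 0
        clauses = simplify(clauses, unit)
        if clauses is None:
            return None

    if not clauses:
        return assignment
    # Branch: try the first literal of the first clause, then its negation.
    lit = clauses[0][0]
    for choice in (lit, -lit):
        reduced = simplify(clauses, choice)
        if reduced is not None:
            result = dpll(reduced, {**assignment, abs(choice): choice > 0})
            if result is not None:
                return result
    return None
```

Real solvers replace the naive branching rule with activity-based heuristics and add learned clauses on conflict, but the recursive skeleton is the same.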
Standard NP-completeness proofs reduce 3-SAT to problems such as Vertex Cover, the Hamiltonian path problem, and 3-dimensional matching using gadget constructions found in monographs published by Springer and course materials from Columbia University and Yale University. Many classical reductions leverage combinatorial constructions that reference results from Paul Erdős and reductions discussed at symposia like ICALP and STOC, with proofs appearing in compendia alongside contributions by Michael Sipser and Christos Papadimitriou. The use of 3-SAT as a base problem enables completeness results for optimization problems studied by research groups at ETH Zurich and the University of Toronto.
3-SAT is used to show hardness for scheduling problems studied at INFORMS conferences and for verification tasks in model checking at events like CAV, with practical impact on tools developed at Siemens and Intel. Variants include planar 3-SAT; quantified 3-SAT, whose PSPACE-completeness connects to results by researchers at Rutgers University; and MAX-3-SAT, connected to approximation algorithms investigated at Bell Labs and universities like UC Berkeley. The problem informs cryptographic hardness assumptions referenced in work at RSA Laboratories and theoretical studies at École Polytechnique Fédérale de Lausanne.
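For MAX-3-SAT, a uniformly random assignment satisfies each clause over three distinct variables with probability 1 - (1/2)^3 = 7/8, which is the classical randomized baseline for the approximation algorithms mentioned above. A small sketch (function name and interface are illustrative):

```python
import random

def random_assignment_max3sat(clauses, num_vars, rng=None):
    """Assign each variable uniformly at random and count satisfied clauses.
    Each clause on three distinct variables is falsified only when all
    three literals are false, so it is satisfied with probability 7/8;
    by linearity of expectation, 7/8 of all clauses are satisfied on average."""
    rng = rng or random.Random(0)
    values = {v: rng.random() < 0.5 for v in range(1, num_vars + 1)}
    satisfied = sum(
        any(values[abs(l)] == (l > 0) for l in c) for c in clauses
    )
    return values, satisfied

# Over all 8 sign patterns on 3 variables, any fixed assignment
# falsifies exactly one clause, so exactly 7 of 8 are satisfied.
all_clauses = [(s1 * 1, s2 * 2, s3 * 3)
               for s1 in (1, -1) for s2 in (1, -1) for s3 in (1, -1)]
print(random_assignment_max3sat(all_clauses, 3)[1])  # → 7
```

Håstad's inapproximability result shows this trivial 7/8 guarantee is optimal for MAX-3-SAT unless P = NP, which is part of why the variant is so well studied.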
Empirical evaluation uses benchmark suites assembled by communities around the SAT Competition, hosted by research groups at CWI, Freiburg, and Dresden University of Technology, and reported in the proceedings of the SAT conference and IJCAI. Solver performance comparisons reference datasets curated by teams at the University of Helsinki and Kyoto University, and results are cited in challenge reports involving participants from Facebook AI Research and Google DeepMind. Benchmark designs and empirical methodology relate to standards promulgated by organizations like ACM and IEEE and are discussed in follow-up studies by researchers at Cornell University.
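SAT Competition benchmarks are distributed in the DIMACS CNF format: comment lines start with `c`, a header line `p cnf <vars> <clauses>` gives the counts, and each clause is a sequence of nonzero integers terminated by `0`. A minimal parser sketch (the function name is illustrative):

```python
def parse_dimacs(text):
    """Parse a DIMACS CNF string into (num_vars, clauses)."""
    num_vars, clauses, current = 0, [], []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith('c'):
            continue                      # skip blanks and comments
        if line.startswith('p'):
            _, _, v, _ = line.split()     # "p cnf <vars> <clauses>"
            num_vars = int(v)
            continue
        for tok in line.split():
            lit = int(tok)
            if lit == 0:                  # 0 terminates a clause
                clauses.append(tuple(current))
                current = []
            else:
                current.append(lit)
    return num_vars, clauses

example = """c sample instance
p cnf 3 2
1 -2 3 0
-1 2 -3 0
"""
print(parse_dimacs(example))
# → (3, [(1, -2, 3), (-1, 2, -3)])
```

Accepting clauses that span multiple lines (as this tokenizer does) matters in practice, since competition instances do not always put one clause per line.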