| NP (complexity) | |
|---|---|
| Image credit | Behnam Esfahbod · CC BY-SA 3.0 |
| Name | NP |
| Type | Decision complexity class |
| Introduced | 1971 |
| Notable people | Stephen Cook, Leonid Levin, Richard Karp, Albert R. Meyer, Juris Hartmanis |
| Related classes | P (complexity), co-NP, PSPACE, EXPTIME, BPP |
NP (nondeterministic polynomial time) is the class of decision problems for which a "yes" answer can be verified in polynomial time by a deterministic Turing machine, given a suitable certificate. It plays a central role in theoretical computer science, connecting foundational work by Stephen Cook, Leonid Levin, and Richard Karp to research pursued at institutions such as the Massachusetts Institute of Technology, Bell Labs, and IBM. NP underpins major research programs at conferences such as STOC, FOCS, and ICALP and motivates awards including the Gödel Prize.
NP consists of the languages L for which there exist a polynomial p and a deterministic polynomial-time verifier V such that for every string x, x ∈ L if and only if there exists a certificate y with |y| ≤ p(|x|) satisfying V(x, y) = accept. The formalization uses machine models developed in the work of Alan Turing, refined through von Neumann-era computing, and grounded in the complexity-theoretic foundations advanced by Juris Hartmanis and Richard Stearns. An equivalent characterization defines NP via nondeterministic polynomial-time Turing machines, the formulation standard in textbooks such as Michael Sipser's and in the nondeterministic-computation perspectives explored at Princeton University and Stanford University.
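The verifier-based definition can be made concrete with a toy example. The sketch below (function names and the instance are illustrative, not from the source) checks a certificate for SUBSET-SUM in time polynomial in the input size, exactly the role V plays in the definition:

```python
# Sketch of an NP verifier V(x, y) for SUBSET-SUM.
# Instance x = (nums, target); certificate y = list of indices of a subset.
# The whole check runs in time polynomial in |x|, as the definition requires.
def verify_subset_sum(nums, target, certificate):
    # Certificate must name distinct, valid indices.
    if len(set(certificate)) != len(certificate):
        return False
    if any(i < 0 or i >= len(nums) for i in certificate):
        return False
    # Accept iff the chosen subset hits the target.
    return sum(nums[i] for i in certificate) == target

# A "yes" instance together with a valid witness (4 + 5 = 9):
print(verify_subset_sum([3, 34, 4, 12, 5, 2], 9, [2, 4]))  # True
```

Note the asymmetry the definition captures: finding a certificate may require search, but checking one is cheap.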
Canonical NP languages include the Boolean satisfiability problem, central to Stephen Cook's theorem and to the SAT literature; the Hamiltonian path problem, associated with studies at Bell Labs and with algorithmic graph theory research led by Richard Karp; the Clique and Vertex cover problems, studied in the context of combinatorial optimization at Princeton University; the Subset sum problem, which ties to cryptographic work at RSA Security and number-theoretic investigations by Andrew Yao; and 3-SAT, used in benchmarks at the DIMACS Challenges. Other frequently cited NP problems include the Graph coloring problem, examined in research from the University of Cambridge, and the Knapsack problem, leveraged in applied work at AT&T Labs. These examples recur in textbooks authored by Donald Knuth, Christos Papadimitriou, and Michael Garey.
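To illustrate 3-SAT in this verifier framework, the sketch below (the integer literal encoding is ours, chosen for brevity) checks an assignment in polynomial time, then contrasts that with the exponential brute-force search for a satisfying assignment:

```python
from itertools import product

# A 3-CNF formula as a list of clauses; literal k denotes variable |k|,
# negated when k < 0. This encoding is illustrative, not canonical.
formula = [(1, -2, 3), (-1, 2, 2), (-3, -1, 2)]

def satisfies(formula, assignment):
    # assignment maps variable -> bool; a clause is satisfied when
    # at least one of its literals evaluates to True.
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in formula
    )

# Checking one certificate is polynomial; searching all 2^n assignments
# is exponential -- the gap at the heart of P versus NP.
n = 3
sat = any(
    satisfies(formula, dict(zip(range(1, n + 1), bits)))
    for bits in product([True, False], repeat=n)
)
print(sat)  # True
```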
A language L is NP-complete if L is in NP and every language in NP is polynomial-time reducible to L. The notion was formalized in the seminal results of Stephen Cook and Leonid Levin, and Richard Karp's influential 1972 paper demonstrated its reach with a list of 21 NP-complete problems. Polynomial-time many-one reductions (Karp reductions) and Turing reductions are the central tools, used across studies at Bell Labs, IBM Research, and university groups including MIT CSAIL. NP-completeness underpins complexity-theoretic classifications encountered in competitions such as the ACM International Collegiate Programming Contest and serves as the basis for hardness-preserving transformations in cryptography research at the IACR.
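A minimal example of a Karp reduction is the classical equivalence between Independent Set and Vertex Cover: S is an independent set of G if and only if V \ S is a vertex cover, so the reduction only rewrites the size parameter. The sketch below (helper names are ours; the brute-force deciders exist only to demonstrate the equivalence on a toy graph) makes this concrete:

```python
from itertools import combinations

# Karp (polynomial-time many-one) reduction:
# INDEPENDENT-SET(G, k) maps to VERTEX-COVER(G, n - k), since
# S is an independent set iff the complement V \ S is a vertex cover.
def reduce_is_to_vc(n, edges, k):
    return n, edges, n - k  # the graph is unchanged; only the parameter moves

def has_vertex_cover(n, edges, size):
    # Brute force: some size-`size` set touches every edge.
    return any(
        all(u in c or v in c for (u, v) in edges)
        for c in map(set, combinations(range(n), size))
    )

def has_independent_set(n, edges, size):
    # Brute force: some size-`size` set contains no edge.
    return any(
        all(not (u in s and v in s) for (u, v) in edges)
        for s in map(set, combinations(range(n), size))
    )

# Toy graph: a triangle 0-1-2 plus a pendant vertex 3 attached to 2.
n, edges = 4, [(0, 1), (1, 2), (0, 2), (2, 3)]
for k in range(n + 1):
    assert has_independent_set(n, edges, k) == has_vertex_cover(*reduce_is_to_vc(n, edges, k))
```

The reduction itself runs in constant time here; what matters for NP-completeness is that it is computable in time polynomial in the instance size and preserves yes/no answers exactly.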
The primary open question is whether NP equals P, one of the Millennium Prize Problems posed by the Clay Mathematics Institute. Known containments include NP ⊆ PSPACE and NP ⊆ EXPTIME, with separations and inclusions studied in works at the University of California, Berkeley and Princeton University. The complementary class co-NP raises symmetry questions, exemplified by integer factorization (whose decision version lies in NP ∩ co-NP), studied by Carl Pomerance and Peter Shor; randomized classes such as BPP and quantum classes such as BQP feature in comparisons pursued at IBM Quantum and Google Quantum AI. Structural results such as the Karp–Lipton theorem and relativization barriers have been investigated by researchers at Stanford University and groups led by Scott Aaronson.
Despite worst-case hardness, many NP problems admit useful exact, approximation, or heuristic methods. Exact exponential-time algorithms were advanced by researchers including Richard Karp and applied in optimization work at Bell Labs; approximation algorithms with provable guarantees emerged from research by Vijay Vazirani and Éva Tardos and are applied in industry settings at Microsoft Research and Amazon; parameterized methods such as fixed-parameter tractable (FPT) algorithms were developed by communities around Rod Downey and Michael Fellows and are used in bioinformatics labs at the Broad Institute and the European Bioinformatics Institute. Practical solver technology for SAT and related problems has matured through the DIMACS implementation challenges, the annual SAT competitions, and projects at SRI International.
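A standard example of an approximation algorithm with a provable guarantee is the textbook 2-approximation for Vertex Cover via a maximal matching; the sketch below illustrates the idea (the graph instance is ours):

```python
# Classic 2-approximation for VERTEX-COVER via a maximal matching.
def approx_vertex_cover(edges):
    cover = set()
    for u, v in edges:
        # Pick both endpoints of any edge not yet covered. The picked
        # edges form a matching, and any optimal cover must contain at
        # least one endpoint of each, so |cover| <= 2 * OPT.
        if u not in cover and v not in cover:
            cover.update((u, v))
    return cover

edges = [(0, 1), (1, 2), (0, 2), (2, 3)]
cover = approx_vertex_cover(edges)
# Every edge has at least one endpoint in the returned cover:
print(all(u in cover or v in cover for u, v in edges))  # True
```

The algorithm runs in linear time, in sharp contrast to the NP-hardness of finding an optimal cover, which is the trade-off approximation algorithms are designed around.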
The central open problem, whether NP equals P, has profound implications for cryptographic frameworks such as RSA, for algorithmic game theory investigated at Princeton University, and for optimization domains studied at INFORMS conferences. A resolution could affect verification techniques used by companies such as Google and Microsoft and shift foundational assumptions in results recognized by the Gödel Prize and the broader theoretical mathematics community. Related open questions include NP versus BQP in quantum contexts, structural properties of the polynomial hierarchy PH explored by teams at the University of Toronto and the University of Oxford, and fine-grained complexity conjectures driving research at ETH Zurich and the Max Planck Institutes.