LLMpedia: The first transparent, open encyclopedia generated by LLMs

Cobham–Edmonds thesis

Generated by GPT-5-mini
Note: This article was automatically generated by a large language model (LLM) from purely parametric knowledge (no retrieval). It may contain inaccuracies or hallucinations. This encyclopedia is part of a research project currently under review.
Article Genealogy
Parent: random-access machine (Hop 5)
Expansion Funnel: Raw 73 → Dedup 0 → NER 0 → Enqueued 0
1. Extracted: 73
2. After dedup: 0 (None)
3. After NER: 0
4. Enqueued: 0
Cobham–Edmonds thesis
Name: Cobham–Edmonds thesis
Field: Theoretical computer science
Introduced: 1960s–1970s
Contributors: Cobham; Edmonds
Key concepts: Polynomial time; feasible computation

Cobham–Edmonds thesis

The Cobham–Edmonds thesis asserts that the class of computational problems solvable in deterministic polynomial time captures the informal notion of efficient or feasible computation. It connects the work of Alan Turing and Alonzo Church with later developments by Jack Edmonds and Alan Cobham and situates polynomial-time bounds alongside concepts from the P versus NP problem, David Hilbert's Entscheidungsproblem, and the lambda calculus tradition. The thesis informs research directions in Stephen Cook's complexity theory, Richard Karp's reductions, and Leslie Valiant's algebraic complexity models.

Definition and statement

The thesis proposes that decision problems solvable by deterministic machines in time bounded by a polynomial function of the input size should be considered feasibly computable. This links formal models such as the Turing machine, the random-access machine, and abstract machines studied by John von Neumann and Marvin Minsky to the intuitive notion of efficiency. It identifies the class now called P as the canonical feasible class and contrasts it with classes such as NP and EXPTIME, invoked by Leonid Levin and Jurij Matijasevič in discussions of intractability. The statement influenced definitions of efficient algorithms in works by Donald Knuth, Michael Rabin, and Gordon Plotkin.
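
In standard complexity-theoretic notation, the content of the thesis is often summarized as below; this is a sketch, and the left-hand side of the second line is deliberately informal, which is why the claim is a thesis rather than a theorem.

    \[
      \mathrm{P} \;=\; \bigcup_{k \ge 1} \mathrm{DTIME}\bigl(n^{k}\bigr)
    \]
    \[
      L \text{ is feasibly decidable} \;\Longleftrightarrow\; L \in \mathrm{P}
    \]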

Historical background and contributors

Origins trace to independent observations by Alan Cobham and Jack Edmonds in the 1960s that algorithmic procedures with polynomial-time bounds appear practically viable, building on the earlier formalism of Alan Turing and critiques by Alonzo Church. Cobham's axiomatic account and Edmonds's practical criteria were debated alongside results by John Hopcroft, Juris Hartmanis, and Richard Stearns on time hierarchy theorems. Subsequent formal discussion engaged researchers including Stephen Cook, who framed NP-completeness; Richard Karp, who cataloged NP-complete problems; Leslie Valiant, for algebraic counterparts; and Shafi Goldwasser and Silvio Micali, for cryptographic implications. Institutional contexts included seminars at MIT and Princeton University and conferences such as STOC and FOCS.

Formal characterizations and related classes

Formal work surrounding the thesis connects P to machine models such as the Turing machine, the random-access machine, and circuit families studied by Venkatesan Guruswami and Noam Nisan. Related classes include NP, co-NP, PSPACE, EXPTIME, and probabilistic classes such as BPP examined by Michael Sipser and László Babai. Alternative formalizations involve descriptive complexity from Neil Immerman and Moshe Vardi, relating P to fragments of first-order logic with fixed-point operators, with connections to work by Edsger Dijkstra in program semantics. Algebraic complexity classes such as VP and VNP, developed by Leslie Valiant, provide an analogue in arithmetic circuit complexity.
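
In the same notation, the class relationships referred to above are commonly summarized as follows; among these inclusions only the separation of P from EXPTIME is known to be strict (via the time hierarchy theorem), and the logical characterization of P is the Immerman–Vardi result, which holds over ordered finite structures.

    \[
      \mathrm{P} \subseteq \mathrm{NP} \subseteq \mathrm{PSPACE} \subseteq \mathrm{EXPTIME},
      \qquad \mathrm{P} \subsetneq \mathrm{EXPTIME}
    \]
    \[
      \mathrm{P} \;=\; \mathrm{FO}(\mathrm{LFP}) \quad \text{over ordered finite structures}
    \]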

Implications and applications

Adopting polynomial time as the criterion of feasibility underpins algorithm design in areas influenced by Donald Knuth and Ronald Rivest, computational practice in Google-scale systems inspired by Jeff Dean, and formal cryptographic assumptions used by Silvio Micali and Adi Shamir. It guides hardness reductions pioneered by Richard Karp and the classification of combinatorial problems such as the Traveling Salesman Problem and the Boolean satisfiability problem, which are central to SAT solvers and industrial verification at companies such as Intel and IBM. In computational biology, methods for sequence alignment and phylogenetics draw on the feasibility notion promoted by the thesis and are used in projects at the Broad Institute and the European Bioinformatics Institute. The thesis also shapes complexity-theoretic foundations for machine learning algorithms studied by Yann LeCun and Geoffrey Hinton.
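
As a purely illustrative sketch of the kind of polynomial-time (Karp-style) reduction this viewpoint legitimizes, the following Python snippet maps an Independent Set instance to a Vertex Cover instance; the function names and graph encoding are assumptions made for this example and are not drawn from any of the works cited above.

    # Illustrative Karp-style polynomial-time reduction: Independent Set -> Vertex Cover.
    # It relies on the classical fact that a graph G = (V, E) has an independent set
    # of size >= k if and only if it has a vertex cover of size <= |V| - k.
    # The reduction itself is trivially polynomial; the brute-force checker below is
    # exponential and exists only to verify the mapping on a tiny instance.
    from itertools import combinations

    def independent_set_to_vertex_cover(vertices, edges, k):
        """Map an Independent Set instance (G, k) to a Vertex Cover instance
        (G, |V| - k). The graph encoding (vertex list plus frozenset edges)
        is a hypothetical choice made for this sketch."""
        return vertices, edges, len(vertices) - k

    def has_vertex_cover(vertices, edges, budget):
        """Exponential-time brute-force check for a vertex cover of size
        at most `budget`; suitable for toy instances only."""
        for size in range(budget + 1):
            for cover in combinations(vertices, size):
                chosen = set(cover)
                if all(chosen & edge for edge in edges):
                    return True
        return False

    if __name__ == "__main__":
        # Path graph 0-1-2-3: {0, 2} is an independent set of size 2,
        # so the reduced instance asks for a vertex cover of size <= 2.
        V = [0, 1, 2, 3]
        E = [frozenset({0, 1}), frozenset({1, 2}), frozenset({2, 3})]
        print(has_vertex_cover(*independent_set_to_vertex_cover(V, E, 2)))  # True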

Criticisms and alternative formulations

Critics note that polynomial time does not capture practical efficiency when polynomial degrees or constant factors are large (for example, an algorithm running in time proportional to n^100 lies in P yet is unusable in practice), as argued in contexts involving practical algorithmics and empirical software engineering at Carnegie Mellon University and Stanford University. Alternatives propose refined measures: fixed-parameter tractability by Rod Downey and Mike Fellows, average-case complexity per Leonid Levin, smoothed analysis by Daniel Spielman and Shang-Hua Teng, and resource-bounded reducibility frameworks examined by Harry Lewis and Christos Papadimitriou. Philosophical critiques reference debates between Hilary Putnam and W.V.O. Quine on computational metaphors and relate to foundational work by Warren McCulloch and Walter Pitts on neural versus symbolic computation.
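
For contrast with the single polynomial bound endorsed by the thesis, fixed-parameter tractability can be stated compactly as follows (a standard definition, given here as a sketch: n is the input size, k the parameter, and f any computable function).

    \[
      \text{Cobham--Edmonds feasibility: time } n^{O(1)}
      \qquad\text{versus}\qquad
      \mathrm{FPT}: \text{ time } f(k)\cdot n^{O(1)}
    \]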

Category:Theoretical computer science