| Maximum Satisfiability | |
|---|---|
| Name | Maximum Satisfiability |
| Other names | MaxSAT |
| Domain | Theoretical computer science, Combinatorial optimization |
| Introduced | 1970s, with the development of NP-completeness theory |
| Notable contributors | Thomas J. Schaefer, Pierluigi Crescenzi, Johan Håstad, Christos Papadimitriou |
Maximum Satisfiability (MaxSAT) is an optimization problem in propositional logic: given a Boolean formula in conjunctive normal form, find a truth assignment that maximizes the number of satisfied clauses. It connects central strands of theoretical computer science, from the P versus NP question, the Cook–Levin theorem, and Garey–Johnson-style NP-completeness results, to practical solver engineering exemplified by annual competitions and industrial tools. The problem sits at the interface of progress in SAT solving, algorithmic paradigms ranging from branch and bound to approximation techniques inspired by the probabilistic method, and hardness-of-approximation results derived from the PCP theorem.
Formally, given a set of Boolean variables and a collection of clauses in conjunctive normal form, the task is to find a truth assignment maximizing the number of clauses that evaluate to true; weighted variants assign a nonnegative weight to each clause, and the objective becomes the sum of the weights of the satisfied clauses. The canonical decision version asks whether some assignment satisfies at least k clauses (or total weight at least k), which links the problem to the NP-completeness framework of Stephen Cook and Leonid Levin and to Richard Karp's classical reductions. Variants constrain clause size (for example, Max-k-SAT over k-CNF formulas) and connect to the parameterized analyses developed by Rodney G. Downey and Michael R. Fellows.
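The definition above can be made concrete with a minimal brute-force evaluator. This is an illustrative sketch, not a solver API: the function name `max_sat` and the DIMACS-style literal encoding (positive integer i for variable i, negative for its negation) are choices made here for clarity. It enumerates all 2^n assignments, so it is only feasible for small instances.

```python
from itertools import product

def max_sat(num_vars, clauses, weights=None):
    """Exhaustively find an assignment maximizing the total weight of
    satisfied clauses. Literal l > 0 means variable l; l < 0 means its
    negation. With weights=None, every clause counts 1 (unweighted MaxSAT).
    Exponential in num_vars: illustration only."""
    if weights is None:
        weights = [1] * len(clauses)
    best_value, best_assignment = -1, None
    for bits in product([False, True], repeat=num_vars):
        # A clause is satisfied if any of its literals evaluates to true.
        value = sum(w for clause, w in zip(clauses, weights)
                    if any(bits[abs(l) - 1] == (l > 0) for l in clause))
        if value > best_value:
            best_value, best_assignment = value, bits
    return best_value, best_assignment

# (x1 or x2) and (not x1 or x2) and (x1 or not x2) and (not x1 or not x2):
# every assignment satisfies exactly 3 of the 4 clauses.
clauses = [[1, 2], [-1, 2], [1, -2], [-1, -2]]
print(max_sat(2, clauses)[0])  # 3
```

The example formula is the standard witness that the optimum can fall short of the clause count: all four clauses cannot hold simultaneously, yet any assignment satisfies three.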
Maximum Satisfiability is NP-hard, and its decision version is NP-complete for general instances, a conclusion derived via reductions in the style of the Cook–Levin theorem and presented in algorithm textbooks such as that of Jon Kleinberg and Éva Tardos; notably, Max-2SAT is NP-hard even though 2-SAT itself is solvable in polynomial time. Inapproximability results derived from the PCP theorem, including the work of Johan Håstad, Uriel Feige, and Subhash Khot, establish specific approximation thresholds (for example, 7/8 for Max-3SAT) that cannot be exceeded in polynomial time unless P = NP. Parameterized complexity classifications identify fixed-parameter tractable cases under certain restrictions, following methodology going back to Christos Papadimitriou and later work by Stefan Szeider on structural parameters such as treewidth and backdoor sets.
Exact algorithms use branch and bound, backtracking, and DPLL-style search augmented with clause learning and conflict-driven techniques, building on the success of modern SAT solvers showcased in the DIMACS implementation challenges and the annual SAT Competition. Local search heuristics such as GSAT and WalkSAT, originally influenced by experimental work at institutions like IBM Research and Bell Labs, remain effective on large instances, while approximation algorithms leverage linear programming and semidefinite programming relaxations developed in the tradition of the primal–dual method, with influences from Noga Alon's combinatorial constructions. Advanced preprocessing, kernelization strategies, and hybrid exact–heuristic frameworks reflect contributions from groups at MIT, Stanford University, and the University of California, Berkeley.
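The WalkSAT-style local search mentioned above can be sketched as follows. This is a simplified illustration under stated assumptions, not a reproduction of the original solver: the function name, the `noise` parameter default, and the restart-free loop structure are choices made for this sketch.

```python
import random

def walksat_maxsat(num_vars, clauses, max_flips=1000, noise=0.5, seed=0):
    """WalkSAT-style local search for unweighted MaxSAT.

    Repeatedly picks an unsatisfied clause and flips one of its variables:
    with probability `noise` a random one, otherwise the one whose flip
    yields the highest satisfied-clause count. Returns the best count seen.
    """
    rng = random.Random(seed)
    assign = [rng.choice([False, True]) for _ in range(num_vars)]

    def satisfied(clause):
        # Literal l > 0 means variable l; l < 0 means its negation.
        return any(assign[abs(l) - 1] == (l > 0) for l in clause)

    def score():
        return sum(1 for c in clauses if satisfied(c))

    best = score()
    for _ in range(max_flips):
        unsat = [c for c in clauses if not satisfied(c)]
        if not unsat:
            return len(clauses)  # all clauses satisfied: optimal
        clause = rng.choice(unsat)
        if rng.random() < noise:
            lit = rng.choice(clause)  # random walk step
        else:
            def flip_score(l):
                v = abs(l) - 1
                assign[v] = not assign[v]   # try the flip
                s = score()
                assign[v] = not assign[v]   # undo it
                return s
            lit = max(clause, key=flip_score)  # greedy step
        assign[abs(lit) - 1] = not assign[abs(lit) - 1]
        best = max(best, score())
    return best
```

For comparison, a uniformly random assignment already satisfies each k-literal clause with probability 1 − 2^(−k), the simple baseline that the LP and SDP rounding algorithms improve on.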
Common variants include Weighted MaxSAT, Partial MaxSAT (which separates hard clauses, which must be satisfied, from soft clauses, which should be), and Max-k-SAT, which restricts clause size; each is studied alongside multicriteria and guarded formulations in work led by teams at Carnegie Mellon University and ETH Zurich. Extensions introduce cardinality constraints, XOR clauses, or optimization over quantified Boolean formulas, connecting to the generalized satisfiability landscape examined by researchers at Princeton University and the University of Toronto. Structural extensions exploit graph representations and backdoor decompositions, relating to treewidth and clique-width analyses developed in the communities around Courcelle's theorem and the graph minor theorem.
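The hard/soft split of Partial MaxSAT can be illustrated with a small brute-force sketch; the `partial_maxsat` helper and its interface are assumptions of this illustration, not a standard solver API.

```python
from itertools import product

def partial_maxsat(num_vars, hard, soft):
    """Brute-force Partial MaxSAT: every clause in `hard` must hold;
    among assignments meeting that requirement, maximize the total
    weight of satisfied (clause, weight) pairs in `soft`. Returns None
    if the hard clauses are unsatisfiable. Exponential in num_vars."""
    def sat(clause, bits):
        # Literal l > 0 means variable l; l < 0 means its negation.
        return any(bits[abs(l) - 1] == (l > 0) for l in clause)

    best = None
    for bits in product([False, True], repeat=num_vars):
        if all(sat(c, bits) for c in hard):  # hard clauses are mandatory
            value = sum(w for c, w in soft if sat(c, bits))
            if best is None or value > best:
                best = value
    return best

# Hard: x1 must hold. Soft: prefer not-x1 (weight 5) and x2 (weight 2).
# The weight-5 soft clause conflicts with the hard clause, so the best
# attainable soft weight is 2.
print(partial_maxsat(2, hard=[[1]], soft=[([-1], 5), ([2], 2)]))  # 2
```

The example shows the defining feature of the variant: a heavy soft clause is simply sacrificed when it contradicts a hard constraint.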
MaxSAT solvers are applied to diverse real-world tasks: electronic design automation problems tackled by engineers at Intel and Qualcomm; software package dependency resolution, as addressed in systems produced by Red Hat and Canonical; and automated planning and scheduling studied at NASA and the European Space Agency. In computational biology and bioinformatics, MaxSAT formulations assist in haplotype inference and network reconstruction in projects affiliated with Cold Spring Harbor Laboratory and the Broad Institute. Further applications include configuration management in products by Siemens, test-case generation at companies such as Google and Microsoft, and formal verification efforts at Bell Labs and Hewlett-Packard.
Category:Computational complexity Category:Combinatorial optimization