LLMpedia: The first transparent, open encyclopedia generated by LLMs

Max-Cut

Generated by GPT-5-mini
Note: This article was automatically generated by a large language model (LLM) from purely parametric knowledge (no retrieval). It may contain inaccuracies or hallucinations. This encyclopedia is part of a research project currently under review.
Article Genealogy
Parent: Partition problem (Hop 5)
Expansion Funnel: Raw 64 → Dedup 0 → NER 0 → Enqueued 0
1. Extracted: 64
2. After dedup: 0 (None)
3. After NER: 0
4. Enqueued: 0
Max-Cut
Name: Max-Cut
Field: Theoretical computer science; Combinatorial optimization; Graph theory
Introduced: 1960s
Notable solutions: Goemans–Williamson algorithm; branch-and-cut; semidefinite programming

Max-Cut is a fundamental combinatorial optimization problem on graphs, studied across Alan Turing-era computation, John von Neumann-influenced optimization, and modern European Research Council-funded complexity research. It asks for a partition of a graph's vertices into two sets that maximizes the total weight of edges crossing the partition, linking topics in Claude Shannon-style information theory, Kurt Gödel-adjacent logic, and algorithmic developments at institutions such as the Massachusetts Institute of Technology, Stanford University, Princeton University, and the University of California, Berkeley.

Definition and problem statement

Formally, given a graph G = (V, E) with edge weights w_e, the task is to find a subset S ⊆ V maximizing the total weight of edges with exactly one endpoint in S; this formalization appears in early work at Bell Labs and in publications connected to Richard Karp's list of NP-complete problems. The problem is defined for simple graphs, multigraphs, and weighted graphs, in contexts ranging from Erdős–Rényi random graph models to structured instances arising at IBM Research and Google Research. Practical formulations use adjacency matrices and Laplacian-like representations, as in papers affiliated with the Courant Institute and Microsoft Research.
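The objective above is simple to state in code. The following sketch (a toy example, not from the article) evaluates the cut weight of a candidate subset S for a graph given as a list of weighted edges:

```python
def cut_weight(edges, S):
    """Total weight of edges with exactly one endpoint in S.

    edges: list of (u, v, w) triples; S: iterable of vertices.
    """
    S = set(S)
    return sum(w for u, v, w in edges if (u in S) != (v in S))

# 4-cycle with unit weights: separating opposite vertices cuts all four edges.
square = [(0, 1, 1), (1, 2, 1), (2, 3, 1), (3, 0, 1)]
print(cut_weight(square, {0, 2}))  # → 4
print(cut_weight(square, {0, 1}))  # → 2
```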

Computational complexity

Max-Cut is NP-hard, a status tied to foundational complexity results including the reductions of Stephen Cook and Leonid Levin, and it figures in the P versus NP debates surrounding the Clay Mathematics Institute's Millennium Prize problem and discussions at the Simons Institute. The decision variant is NP-complete under Karp reductions, and hardness results were strengthened by PCP theorem developments from contributors at the University of California, San Diego, Cornell University, and Rutgers University: approximating Max-Cut within a factor better than 16/17 is NP-hard. Tighter inapproximability bounds connect to work by researchers affiliated with Princeton University and the University of Illinois at Urbana-Champaign and rely on conjectures such as the Unique Games Conjecture, discussed by scholars at Columbia University and Rutgers University, under which the Goemans–Williamson approximation ratio is optimal.

Approximation algorithms and heuristics

Semidefinite programming relaxations yield the celebrated Goemans–Williamson approximation algorithm, originating from work at the Massachusetts Institute of Technology and Princeton University; its guarantee of roughly 0.878 times the optimum comes from rounding the SDP solution with a random hyperplane. Heuristic strategies include spectral methods inspired by research at ETH Zurich, clustering approaches used at Yahoo! Research and Facebook AI Research, and local search and simulated annealing in traditions from Los Alamos National Laboratory and Bell Labs. Approximation-preserving reductions and algorithmic improvements have been contributed by teams at Carnegie Mellon University, the University of Toronto, the University of Washington, and University College London.
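The Goemans–Williamson algorithm itself needs a semidefinite programming solver, but the local-search family mentioned above is easy to illustrate. The sketch below (my own minimal example, not from the article) flips one vertex at a time whenever doing so increases the cut; for nonnegative weights the resulting local optimum cuts at least half the total edge weight:

```python
def local_search_maxcut(n, edges):
    """Greedy 1-flip local search for Max-Cut.

    n: number of vertices (labeled 0..n-1); edges: list of (a, b, w) triples.
    Returns a 0/1 side assignment that no single flip can improve.
    """
    side = [0] * n  # start with every vertex on one side (cut weight 0)

    def gain(v):
        # change in cut weight if vertex v switches sides
        g = 0
        for a, b, w in edges:
            if v == a:
                g += w if side[v] == side[b] else -w
            elif v == b:
                g += w if side[v] == side[a] else -w
        return g

    improved = True
    while improved:
        improved = False
        for v in range(n):
            if gain(v) > 0:
                side[v] = 1 - side[v]
                improved = True
    return side

square = [(0, 1, 1), (1, 2, 1), (2, 3, 1), (3, 0, 1)]
print(local_search_maxcut(4, square))  # → [1, 0, 1, 0]: alternating sides cut all four edges
```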

Exact algorithms and solvers

Exact methods combine branch-and-bound, branch-and-cut, and integer programming formulations implemented in commercial and academic solvers such as IBM CPLEX and Gurobi, as well as research prototypes from the Zuse Institute Berlin and INRIA. Fixed-parameter tractable approaches exploit structural parameters studied at Microsoft Research and Tel Aviv University, while exponential-time algorithms with improved bases arise from collaborations at the University of Cambridge and the Technion – Israel Institute of Technology. Computational experiments are run on infrastructures such as the National Energy Research Scientific Computing Center and clusters at Lawrence Berkeley National Laboratory.
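For intuition about what exact solvers must beat, the baseline exponential-time method is exhaustive enumeration of the 2^(n-1) partitions (fixing one vertex's side by symmetry). A minimal sketch (toy example, not from the article):

```python
from itertools import combinations

def max_cut_exact(n, edges):
    """Exhaustive Max-Cut: try every subset of {1..n-1} as one side
    (vertex 0 is fixed on the other side by symmetry). O(2^n * |E|) time."""
    best_val, best_S = 0, set()
    for k in range(n):
        for combo in combinations(range(1, n), k):
            S = set(combo)
            val = sum(w for a, b, w in edges if (a in S) != (b in S))
            if val > best_val:
                best_val, best_S = val, S
    return best_val, best_S

triangle = [(0, 1, 1), (1, 2, 1), (2, 0, 1)]
print(max_cut_exact(3, triangle)[0])  # → 2 (no bipartition of a triangle cuts all 3 edges)
```

Branch-and-bound and branch-and-cut prune this search tree with relaxation bounds and cutting planes rather than enumerating every subset.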

Connections and applications

Max-Cut connects to statistical physics problems studied by researchers at Los Alamos National Laboratory and Santa Fe Institute, mapping to Ising spin glass models analyzed in the context of Nobel Prize-related condensed matter work. In computer vision, researchers at MIT Media Lab and Adobe Research use Max-Cut formulations for segmentation tasks; in VLSI design and electronic design automation, teams at Intel and Qualcomm apply cut-based partitioning. Applications extend to social network analysis in studies by Facebook and Twitter data teams, to computational biology projects at Broad Institute, and to finance optimization explored at Goldman Sachs quant groups.
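The Ising correspondence mentioned above is exact: encoding the two sides as spins s_i in {+1, -1}, an edge is cut precisely when its endpoints carry opposite spins, so the cut weight equals the sum of w_ij (1 - s_i s_j) / 2 over edges, and maximizing the cut is equivalent to minimizing the Ising interaction energy (the sum of w_ij s_i s_j). A numeric check on a toy graph (my own illustration, not from the article):

```python
def cut_from_spins(edges, s):
    # an edge (a, b) is cut exactly when s[a] * s[b] == -1
    return sum(w * (1 - s[a] * s[b]) / 2 for a, b, w in edges)

def ising_energy(edges, s):
    # interaction energy of the same spin assignment
    return sum(w * s[a] * s[b] for a, b, w in edges)

square = [(0, 1, 1), (1, 2, 1), (2, 3, 1), (3, 0, 1)]
spins = {0: +1, 1: -1, 2: +1, 3: -1}  # alternating spins cut every edge
print(cut_from_spins(square, spins))  # → 4.0
print(ising_energy(square, spins))    # → -4
```

With total edge weight W, the identity cut = (W - energy) / 2 holds for every spin assignment, which is why minimizing spin-glass energy solves Max-Cut.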

Variants and generalizations

Variants include the weighted, unweighted, directed, and multicut problems studied by scholars at Yale University and Brown University, the constrained Max-Cut with side constraints investigated at Stanford University and Harvard University, and geometric generalizations linked to discrepancy theory in work at Imperial College London. Generalizations relate to quadratic boolean optimization problems, binary quadratic programming studied in collaborations with École Polytechnique Fédérale de Lausanne and University of Oxford, and to semidefinite relaxations used in approximation schemes developed at University of Pennsylvania and Duke University.
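The binary quadratic programming connection mentioned above can be made concrete: with indicator variables x_i in {0, 1}, the term x_i + x_j - 2 x_i x_j equals 1 exactly when x_i differs from x_j, so Max-Cut is the maximization of a quadratic function of binary variables (a QUBO). A small sketch (toy example, not from the article):

```python
def qubo_objective(edges, x):
    # x_i + x_j - 2*x_i*x_j is 1 iff x_i != x_j, i.e. iff edge (i, j) is cut
    return sum(w * (x[a] + x[b] - 2 * x[a] * x[b]) for a, b, w in edges)

square = [(0, 1, 1), (1, 2, 1), (2, 3, 1), (3, 0, 1)]
x = {0: 1, 1: 0, 2: 1, 3: 0}  # indicator of one side of the partition
print(qubo_objective(square, x))  # → 4, the weight of the cut
```

This quadratic form is what binary quadratic programming solvers and semidefinite relaxations operate on directly.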

Category:Graph theory Category:Combinatorial optimization Category:Theoretical computer science