LLMpedia: The first transparent, open encyclopedia generated by LLMs

MAX-CUT

Generated by GPT-5-mini
Note: This article was automatically generated by a large language model (LLM) from purely parametric knowledge (no retrieval). It may contain inaccuracies or hallucinations. This encyclopedia is part of a research project currently under review.
Article Genealogy
Parent: APX Hop 5
Expansion Funnel Raw 56 → Dedup 0 → NER 0 → Enqueued 0
MAX-CUT
Name: MAX-CUT
Type: Combinatorial optimization problem
Field: Theoretical computer science
First formulated: 1960s
Notable contributors: Richard Karp; Michel Goemans; David Williamson; Johan Håstad; Uriel Feige; Subhash Khot
Related problems: Minimum cut; Minimum s–t cut; Graph coloring; Independent set; Partition problem
Complexity: NP-hard; APX-hard

MAX-CUT is a combinatorial optimization problem seeking a partition of the vertex set of a graph that maximizes the weight or number of edges crossing the partition. It arises in graph theory, combinatorics, and theoretical computer science and connects to practical tasks in networking, statistical physics, and circuit layout. The problem has deep links to algorithm design, approximation algorithms, semidefinite programming, and computational complexity theory.

Problem statement

Given a graph with vertex set V and edge set E (weighted or unweighted), the task is to find a bipartition of the vertices that maximizes the total weight of edges crossing between the two parts. Formally, for a graph G = (V, E) with weight function w: E → R≥0, select S ⊂ V to maximize Σ_{(u,v)∈E, u∈S, v∉S} w(u,v). The decision version asks whether there exists a cut of weight at least K for a given threshold K; it appears on Richard Karp's 1972 list of NP-complete problems, building on the foundational NP-completeness work of Stephen Cook and Leonid Levin and catalogued by Michael Garey and David Johnson. The problem can be expressed as a quadratic binary optimization, and variants studied in the literature include weighted, unweighted, signed, directed, and hypergraph versions.
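The objective above can be made concrete with a minimal brute-force solver; this is a sketch for illustration (function and variable names are hypothetical, not from any standard library), feasible only for small vertex counts since it enumerates all 2^(n−1) bipartitions.

```python
from itertools import combinations

def max_cut_brute_force(n, weighted_edges):
    """Exact MAX-CUT by trying every bipartition (only feasible for small n).

    n: number of vertices, labeled 0..n-1.
    weighted_edges: list of (u, v, w) triples.
    Returns (best_weight, best_side), where best_side is the set S.
    """
    best_weight, best_side = 0, set()
    # Fix vertex 0 outside S: this halves the search space, since a cut
    # and its complement have the same weight.
    for r in range(n):
        for subset in combinations(range(1, n), r):
            s = set(subset)
            cut = sum(w for u, v, w in weighted_edges if (u in s) != (v in s))
            if cut > best_weight:
                best_weight, best_side = cut, s
    return best_weight, best_side
```

For example, on an unweighted triangle the best cut has weight 2 (any bipartition leaves one edge uncut), while on a 4-cycle all four edges can cross.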

Complexity and computational hardness

MAX-CUT is NP-hard, and its decision variant is NP-complete, appearing among Richard Karp's original NP-complete problems. The problem is also APX-hard: it is MAX-SNP-hard in the framework of Christos Papadimitriou and Mihalis Yannakakis, so it admits no polynomial-time approximation scheme unless P = NP. Building on the PCP theorem, developed by researchers including Sanjeev Arora and Madhu Sudan, Johan Håstad showed that approximating MAX-CUT within a factor better than 16/17 is NP-hard. Under the Unique Games Conjecture of Subhash Khot, the Goemans–Williamson approximation ratio is optimal, a result of Khot, Kindler, Mossel, and O'Donnell.

Algorithms and approximations

Exact algorithms include exhaustive search, branch-and-bound, and branch-and-cut methods from integer programming and polyhedral combinatorics. For large instances, approximation algorithms dominate: a 0.5-approximation via a uniformly random cut or a simple greedy method is elementary, while the celebrated 0.87856... approximation was obtained by Michel Goemans and David Williamson via a semidefinite programming (SDP) relaxation solved with interior-point methods, followed by randomized hyperplane rounding. Håstad's inapproximability results and the Unique Games Conjecture bound how far such guarantees can be improved. Practical heuristics include local search, simulated annealing (introduced by Kirkpatrick, Gelatt, and Vecchi, drawing on statistical physics), and spectral methods such as Luca Trevisan's spectral algorithm for MAX-CUT. Study of the cut polytope, in the tradition of polyhedral combinatorics pioneered by Jack Edmonds, connects SDP-based algorithms to integer programming techniques.
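The elementary 0.5-approximation and local search mentioned above can be sketched as follows; this is an illustrative implementation with hypothetical helper names, not a reference version of any published code. A uniformly random assignment cuts each edge with probability 1/2, so the expected cut is half the total weight; at a 1-flip local optimum, every vertex has at least half of its incident weight crossing the cut, which gives the same guarantee deterministically.

```python
import random

def cut_weight(edges, side):
    """Total weight of edges crossing the bipartition encoded by side[v] in {False, True}."""
    return sum(w for u, v, w in edges if side[u] != side[v])

def random_cut(n, seed=0):
    """Assign each vertex a side uniformly at random: each edge crosses with
    probability 1/2, so the expected cut weight is half the total weight."""
    rng = random.Random(seed)
    return [rng.random() < 0.5 for _ in range(n)]

def local_search(n, edges, side):
    """1-flip local search: flip any vertex whose move increases the cut.
    A local optimum cuts at least half the total edge weight."""
    side = list(side)
    improved = True
    while improved:
        improved = False
        for v in range(n):
            # Gain from flipping v: non-crossing incident edges become
            # crossing (+w), crossing ones stop crossing (-w).
            gain = 0
            for a, b, w in edges:
                if v in (a, b):
                    gain += -w if side[a] != side[b] else w
            if gain > 0:
                side[v] = not side[v]
                improved = True
    return side
```

On a 5-cycle (total weight 5, optimum 4), local search started from the all-same assignment reaches a cut of weight 4, comfortably above the guaranteed 2.5.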

Special cases and exact solutions

Certain graph classes admit polynomial-time exact solutions. In planar graphs, MAX-CUT can be solved exactly via reductions to matching computations, building on planarity techniques associated with Kasteleyn; an algorithm of this kind is due to Hadlock. Bipartite graphs trivialize the objective, since cutting along the two color classes puts every edge across the cut, so the maximum cut equals the total edge weight; trees, being bipartite, are likewise trivial, and bounded-treewidth graphs such as series-parallel graphs admit dynamic programming solutions. For dense graphs, approximate max cuts can be obtained via sampling and regularity-based methods, following the regularity lemma of Endre Szemerédi and algorithmic versions developed by Noga Alon and others; Arora, Karger, and Karpinski gave a polynomial-time approximation scheme for dense instances. Exact exponential-time algorithms with improved base constants arise from measure-and-conquer analyses, as in the work of Fedor V. Fomin and collaborators.
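The bipartite special case can be checked directly: a BFS 2-coloring either finds the bipartition (along which every edge crosses, so the maximum cut equals the total weight) or exhibits an odd cycle. The sketch below uses hypothetical names and is meant only to illustrate this observation.

```python
from collections import deque

def bipartition(n, edges):
    """2-color a graph by BFS. Returns the color list if the graph is
    bipartite, else None. For a bipartite graph, cutting along the two
    color classes puts every edge across the cut, so the maximum cut
    equals the total edge weight."""
    adj = [[] for _ in range(n)]
    for u, v, _ in edges:
        adj[u].append(v)
        adj[v].append(u)
    color = [None] * n
    for start in range(n):          # handle disconnected graphs
        if color[start] is not None:
            continue
        color[start] = 0
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if color[v] is None:
                    color[v] = 1 - color[u]
                    queue.append(v)
                elif color[v] == color[u]:
                    return None     # odd cycle: not bipartite
    return color
```

On a 4-cycle the coloring exists and the induced cut captures all four edges; on a triangle the function reports non-bipartiteness.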

Applications and related problems

MAX-CUT models and methods are applied in VLSI circuit partitioning and layout; in statistical physics, where computing a maximum cut corresponds to finding ground states of Ising spin-glass models in the tradition of Lars Onsager's analysis of the Ising model; in community detection in social networks; and in clustering problems in bioinformatics. Related combinatorial problems include Minimum cut and Minimum s–t cut, which, unlike MAX-CUT, are polynomial-time solvable via the max-flow min-cut theorem of Lloyd R. Ford Jr. and D. R. Fulkerson; Graph coloring, investigated by Paul Erdős among many others; the Independent set problem, central to complexity theory; and the Partition problem. Algorithmic paradigms cross-fertilize with the theory of constraint satisfaction problems, in which MAX-CUT is a canonical Boolean 2-CSP.
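The Ising connection is an identity worth spelling out: with spins s_i ∈ {−1, +1} encoding the two sides, each edge contributes w_ij(1 − s_i s_j)/2 to the cut, so max-cut weight = (W_total − min_s Σ w_ij s_i s_j)/2, i.e. maximizing the cut minimizes an antiferromagnetic Ising energy. The sketch below (hypothetical function name, brute force over spins, small n only) verifies this identity numerically.

```python
from itertools import product

def verify_ising_correspondence(n, edges):
    """Numerically check that 2 * (max cut) == W_total - (min Ising energy),
    where the energy is sum of w_ij * s_i * s_j over spins s in {-1,+1}^n."""
    total = sum(w for _, _, w in edges)
    best_cut = 0
    min_energy = float("inf")
    for spins in product((-1, 1), repeat=n):
        cut = sum(w for u, v, w in edges if spins[u] != spins[v])
        energy = sum(w * spins[u] * spins[v] for u, v, w in edges)
        best_cut = max(best_cut, cut)
        min_energy = min(min_energy, energy)
    assert 2 * best_cut == total - min_energy
    return best_cut, min_energy
```

On the unit-weight triangle, the maximum cut is 2 and the minimum Ising energy is −1, and indeed 2·2 = 3 − (−1). This equivalence is why MAX-CUT is a standard benchmark for Ising-machine and QUBO-based solvers.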

Category:Combinatorial optimization