LLMpedia
The first transparent, open encyclopedia generated by LLMs

Semidefinite programming

Generated by GPT-5-mini
Note: This article was automatically generated by a large language model (LLM) from purely parametric knowledge (no retrieval). It may contain inaccuracies or hallucinations. This encyclopedia is part of a research project currently under review.
Article Genealogy
Parent: Sanjeev Arora (Hop 5)
Expansion Funnel: Raw 163 → Dedup 0 → NER 0 → Enqueued 0
1. Extracted: 163
2. After dedup: 0 (None)
3. After NER: 0
4. Enqueued: 0
Semidefinite programming
Name: Semidefinite programming
Field: Optimization
Introduced: 1990s

Semidefinite programming (SDP) is a class of convex optimization problems in which a linear function is optimized over the intersection of the cone of positive semidefinite matrices with an affine subspace defined by linear constraints. It generalizes linear programming and has deep connections to convex geometry, functional analysis, and combinatorics. Originating from work linking matrix theory and convex geometry, semidefinite programming has been developed and applied across operations research, control theory, and theoretical computer science.

Introduction

Semidefinite programming traces its conceptual roots to classical matrix theory and convex geometry, including work on quadratic forms and positive semidefinite matrices associated with David Hilbert, Issai Schur, and John von Neumann. An early landmark was László Lovász's theta function (1979), an efficiently computable graph invariant defined by a semidefinite program. The field took its modern shape in the late 1980s and 1990s, when Yurii Nesterov and Arkadi Nemirovski developed polynomial-time interior-point methods for general conic programs, and the 1996 survey by Lieven Vandenberghe and Stephen Boyd consolidated the subject. The Max-Cut approximation algorithm of Michel X. Goemans and David P. Williamson (1995), and later the sparsest-cut algorithm of Sanjeev Arora, Satish Rao, and Umesh Vazirani, placed semidefinite programming at the center of approximation algorithms in theoretical computer science. Much of this work was carried out at institutions including Bell Labs, IBM Research, Stanford University, Massachusetts Institute of Technology, and Princeton University.

Mathematical Formulation

A standard (primal) semidefinite program optimizes a linear functional over an affine slice of the positive semidefinite cone: minimize ⟨C, X⟩ subject to ⟨A_i, X⟩ = b_i for i = 1, …, m and X ⪰ 0, where X is a symmetric n × n matrix variable, C and the A_i are given symmetric matrices, ⟨A, B⟩ = trace(AᵀB) is the trace inner product, and X ⪰ 0 means X is positive semidefinite. The associated dual program is: maximize bᵀy subject to C − Σ_i y_i A_i ⪰ 0. Feasibility conditions and spectral decompositions rest on classical matrix analysis, including the theory of quadratic forms developed by Cauchy, Cayley, and Sylvester, and the Schur complement named for Issai Schur. The feasible region, an intersection of the positive semidefinite cone with an affine subspace, is a convex set known as a spectrahedron, in the convex-geometry tradition of Hermann Minkowski. When C, the A_i, and X are all diagonal, the problem reduces to a linear program, so linear programming is a special case.
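
The primal formulation above can be illustrated with a tiny hand-built instance. The data below (C, A1, b1, and the candidate point X) are invented for illustration; the sketch only checks feasibility and evaluates the objective, it does not solve the SDP.

```python
import numpy as np

# Toy instance (data invented for illustration):
#   minimize <C, X>  subject to  <A1, X> = b1,  X PSD.
C = np.array([[1.0, 0.0],
              [0.0, 2.0]])
A1 = np.array([[1.0, 0.0],
               [0.0, 1.0]])   # <A1, X> = trace(X)
b1 = 1.0

# A candidate feasible point: X = diag(1, 0).
X = np.array([[1.0, 0.0],
              [0.0, 0.0]])

def inner(A, B):
    """Trace inner product <A, B> = trace(A^T B)."""
    return np.trace(A.T @ B)

# Check the linear constraint and positive semidefiniteness.
assert np.isclose(inner(A1, X), b1)
assert np.min(np.linalg.eigvalsh(X)) >= -1e-9   # all eigenvalues nonnegative

print("objective <C, X> =", inner(C, X))        # 1.0 for this X
```

Positive semidefiniteness is verified here through the eigenvalues of the symmetric matrix, mirroring the spectral characterization used in the text.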

Algorithms and Solvers

Interior-point methods, adapted from linear programming in the wake of Narendra Karmarkar's 1984 algorithm and generalized to the semidefinite cone through the self-concordant barrier theory of Yurii Nesterov and Arkadi Nemirovski, dominate practical SDP solving at small to medium scale; complexity analyses by researchers including James Renegar, Michael J. Todd, and Donald Goldfarb refined the theory. Widely used solver implementations include SeDuMi, SDPT3, SDPA, and MOSEK, with modeling layers such as CVX and CVXPY translating problem descriptions into solver input. For large-scale instances, where forming and factoring the dense Newton systems of interior-point methods becomes too expensive, first-order and operator-splitting methods trade per-iteration cost for slower convergence; their key primitive is projection onto the positive semidefinite cone via eigenvalue decomposition. Practical performance also depends heavily on numerical linear algebra and high-performance computing infrastructure.
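
The projection primitive used by operator-splitting methods can be sketched in a few lines: symmetrize, take an eigendecomposition, and clip the negative eigenvalues to zero. This is the Frobenius-norm projection onto the PSD cone; the example matrix is chosen for illustration.

```python
import numpy as np

def project_psd(S):
    """Project a symmetric matrix onto the PSD cone by clipping
    negative eigenvalues to zero (Frobenius-norm nearest PSD matrix)."""
    S = (S + S.T) / 2.0                       # symmetrize defensively
    w, V = np.linalg.eigh(S)                  # eigendecomposition S = V diag(w) V^T
    return (V * np.maximum(w, 0.0)) @ V.T     # rebuild with clipped spectrum

# Example: an indefinite matrix (eigenvalues 3 and -1) and its projection.
S = np.array([[1.0, 2.0],
              [2.0, 1.0]])
P = project_psd(S)
print(np.round(np.linalg.eigvalsh(P), 6))     # all entries >= 0
```

A full splitting solver alternates this projection with a projection onto the affine constraint set; the eigendecomposition is the dominant per-iteration cost.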

Applications

Semidefinite programming underpins approximation algorithms in combinatorial optimization: the Lovász theta function bounds the independence number and chromatic number of a graph, and the Goemans–Williamson relaxation yields a 0.878-approximation for Max-Cut via randomized hyperplane rounding. Further applications include graph coloring relaxations, sensor network localization, phase retrieval, quantum state tomography, quantum key distribution, and quantum information tasks more broadly, where density matrices are themselves positive semidefinite with unit trace. Control-theoretic uses build on the state-space tradition of Rudolf E. Kálmán: linear matrix inequalities express Lyapunov stability and robust control conditions, influencing robust control and model reduction in aerospace applications. In signal processing and machine learning, SDP relaxations appear in kernel learning, distance metric learning, and covariance estimation, and are studied at industrial research groups such as Google Research, Microsoft Research, and IBM Research.
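
The randomized hyperplane rounding step of the Goemans–Williamson algorithm can be sketched directly. The unit vectors V below are hand-made stand-ins for an SDP solution (not actual solver output), chosen so the rounding recovers the optimal cut of a 4-cycle.

```python
import numpy as np

rng = np.random.default_rng(0)

def hyperplane_round(V, rng):
    """Goemans-Williamson rounding: draw a random hyperplane normal r
    and assign vertex i to the side sign(<r, v_i>)."""
    r = rng.standard_normal(V.shape[1])
    return np.sign(V @ r)

def cut_value(W, x):
    """Total weight of edges crossing the cut given by signs x (+1/-1)."""
    n = len(x)
    return sum(W[i, j] for i in range(n) for j in range(i + 1, n)
               if x[i] != x[j])

# Toy 4-cycle with unit edge weights; its maximum cut has value 4.
W = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)

# Hand-made 'SDP vectors': antipodal unit vectors for adjacent vertices.
V = np.array([[ 1.0, 0.0],
              [-1.0, 0.0],
              [ 1.0, 0.0],
              [-1.0, 0.0]])

x = hyperplane_round(V, rng)
print(cut_value(W, x))   # 4.0 for these antipodal vectors
```

In the real algorithm, V comes from factoring the optimal SDP matrix X = VVᵀ, and the expected cut value is at least 0.878 times the SDP optimum.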

Complexity and Duality

Semidefinite programs can be solved to any fixed accuracy in polynomial time under mild conditions, for example via the ellipsoid method or interior-point methods, though exact complexity questions remain subtle: optimal solutions can require exponentially many bits, and it is not known whether exact SDP feasibility lies in NP in the Turing machine model. Duality theory for semidefinite programs generalizes linear programming duality as studied by George Dantzig, John von Neumann, Leonid Kantorovich, and Tjalling Koopmans, within the modern convex analysis framework of R. Tyrrell Rockafellar and Jean-Jacques Moreau. Unlike linear programming, strong duality can fail for semidefinite programs: a positive duality gap or an unattained optimum is possible. Slater's condition, the existence of a strictly feasible point X ≻ 0, guarantees a zero duality gap and attainment of the dual optimum.
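
Weak duality, and strong duality under Slater's condition, can be checked numerically on a toy pair. For the problem of minimizing ⟨C, X⟩ over trace-one PSD matrices, the dual is to maximize y subject to C − yI ⪰ 0, and the shared optimum is the smallest eigenvalue of C; the matrix C below is invented for illustration.

```python
import numpy as np

# Toy primal/dual pair:
#   (P) min <C, X>  s.t.  trace(X) = 1,  X PSD
#   (D) max y       s.t.  C - y*I PSD
C = np.array([[2.0, 1.0],
              [1.0, 3.0]])

X = np.eye(2) / 2.0                  # primal feasible: trace 1, X > 0 (Slater holds)
y = np.min(np.linalg.eigvalsh(C))    # largest y keeping C - y*I PSD: y = lambda_min(C)

primal = np.trace(C @ X)             # a primal-feasible objective value (= 2.5 here)
assert np.min(np.linalg.eigvalsh(C - y * np.eye(2))) >= -1e-9  # dual feasible
assert y <= primal + 1e-9            # weak duality: dual value <= primal value

# Strong duality: since X = I/2 is strictly feasible, the primal optimum
# equals the dual optimum lambda_min(C).
print("primal value:", primal, " dual value:", y)
```

The gap between the printed values reflects only that X = I/2 is feasible rather than optimal; the true primal optimum coincides with y.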

Extensions and Variants

Extensions include structured cone programs such as second-order cone programming, which sits between linear and semidefinite programming in expressive power, and polynomial optimization hierarchies: the Lasserre hierarchy of Jean Bernard Lasserre and the closely related sums-of-squares approach developed by Pablo Parrilo reduce polynomial optimization to sequences of semidefinite programs. Matrix completion, low-rank recovery, and nuclear norm minimization draw on work by Emmanuel Candès, Terence Tao, Benjamin Recht, and Joel A. Tropp, with the nuclear norm serving as a convex surrogate for matrix rank. Noncommutative analogues of semidefinite programming connect to operator algebra research in the tradition of John von Neumann, Alain Connes, and Dan-Virgil Voiculescu, and emerging variants integrate stochastic programming and robust optimization to handle uncertainty in the problem data.
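
The nuclear norm mentioned above is simply the sum of a matrix's singular values, computable with one SVD. The rank-1 example below is invented for illustration: for an outer product uvᵀ the nuclear norm equals ‖u‖·‖v‖, its single nonzero singular value.

```python
import numpy as np

def nuclear_norm(M):
    """Sum of singular values: the convex surrogate for rank used in
    matrix-completion and low-rank recovery relaxations."""
    return np.linalg.svd(M, compute_uv=False).sum()

# A rank-1 matrix: its nuclear norm is its single singular value ||u||*||v||.
u = np.array([1.0, 2.0])
v = np.array([3.0, 0.0, 4.0])
M = np.outer(u, v)

print(nuclear_norm(M))   # sqrt(5) * 5
```

Minimizing the nuclear norm subject to linear measurement constraints is itself expressible as a semidefinite program, which is how these recovery problems connect back to SDP.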

Category:Optimization