| Mathematical optimization | |
|---|---|
| *(Image: IkamusumeFan, CC BY-SA 4.0)* | |
| Name | Mathematical optimization |
| Caption | Geometric illustration of a constrained optimization problem |
| Domain | Mathematics |
| Subdisciplines | Operations research; Numerical analysis; Control theory |
| Notable people | John von Neumann; George Dantzig; Leonid Kantorovich; Richard Bellman |
Mathematical optimization is the study of selecting the best element from a set of available alternatives under specified criteria and constraints. It unifies methods developed at institutions such as Princeton University, Stanford University, IBM, Bell Labs, and the University of Cambridge, and it informs work at organizations such as NASA, Siemens, McKinsey & Company, and General Electric. The subject builds on classical work by Isaac Newton, Leonhard Euler, and Carl Friedrich Gauss and on modern contributions from John von Neumann, George Dantzig, and Leonid Kantorovich.
Optimization emerged from problems posed at institutions such as the University of Göttingen and Harvard University and matured through wartime and industrial research at the RAND Corporation, Bell Labs, and Los Alamos National Laboratory. Foundational results connect to theorems and techniques associated with Joseph-Louis Lagrange, Pierre-Simon Laplace, Augustin-Louis Cauchy, and Gustav Kirchhoff. Practical optimization permeates projects at Boeing, Airbus, Procter & Gamble, Toyota, and Shell and underlies achievements recognized by prizes such as the Turing Award and the Nobel Memorial Prize in Economic Sciences.
An optimization problem is typically stated by specifying an objective function, decision variables, and constraints. Classical formulations descend from the calculus of variations introduced by Leonhard Euler and were later formalized in the Lagrangian and Hamiltonian frameworks of classical mechanics. Terms such as feasible region, optimum, local optimum, and global optimum rest on analytical results from Karl Weierstrass and Bernhard Riemann. Constrained problems often invoke Lagrange multipliers and the Karush–Kuhn–Tucker conditions, which evolved from work by William Karush, Harold W. Kuhn, and Albert W. Tucker, and they connect to duality concepts explored by John von Neumann and to the dynamic-programming principle of Richard Bellman.
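The Lagrange-multiplier machinery can be sketched numerically. The example below uses an assumed illustrative problem (not from the article): minimizing x² + y² subject to x + y = 1, solved by forming the linear stationarity system of the Lagrangian and applying a small hand-rolled Gaussian elimination.

```python
# Minimal sketch of a Lagrange-multiplier solve (assumed toy problem):
#   minimize f(x, y) = x^2 + y^2   subject to   x + y = 1
# Stationarity of L(x, y, lam) = f(x, y) - lam * (x + y - 1) gives the
# linear system:
#   2x     - lam = 0
#      2y  - lam = 0
#   x + y        = 1

def solve_linear(A, b):
    """Solve A z = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]  # augmented matrix
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            factor = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= factor * M[col][c]
    z = [0.0] * n
    for r in range(n - 1, -1, -1):
        z[r] = (M[r][n] - sum(M[r][c] * z[c] for c in range(r + 1, n))) / M[r][r]
    return z

# Stationarity system in the variables (x, y, lam)
A = [[2.0, 0.0, -1.0],
     [0.0, 2.0, -1.0],
     [1.0, 1.0,  0.0]]
b = [0.0, 0.0, 1.0]
x, y, lam = solve_linear(A, b)
print(x, y, lam)  # x = y = 0.5 with multiplier lam = 1.0
```

By symmetry the minimizer splits the constraint evenly, which the solved system confirms.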
Problems are categorized by structure: linear programs, solved by the simplex method of George Dantzig; integer programs, linked to combinatorial and graph-algorithmic work by Edsger W. Dijkstra and to information-theoretic ideas of Claude Shannon; nonlinear programs, tied to contributions from David Hilbert and Andrey Kolmogorov; convex programs, related to convexity theory advanced at the University of Chicago and the California Institute of Technology; stochastic programs, developed alongside the economics of uncertainty studied by Wassily Leontief and Kenneth Arrow; and dynamic programs, originating in Richard Bellman's work. Specialized families include quadratic programming studied in projects at MIT, semidefinite programming connected to research at the University of Waterloo, and combinatorial optimization problems central to results by Paul Erdős and Avi Wigderson.
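As a concrete instance of the linear-programming class, the sketch below enumerates the vertices of a tiny assumed LP, relying on the fact underlying the simplex method that a bounded feasible LP attains its optimum at a vertex of the feasible polytope. All coefficients are made up for illustration.

```python
from itertools import combinations

# Assumed toy LP (not from the article):
#   maximize 3x + 2y  subject to  x + y <= 4, x <= 3, x >= 0, y >= 0
c = (3.0, 2.0)                                           # objective
A = [(1.0, 1.0), (1.0, 0.0), (-1.0, 0.0), (0.0, -1.0)]   # constraint rows
b = [4.0, 3.0, 0.0, 0.0]                                 # a_i . z <= b_i

def intersect(i, j):
    """Intersection of constraint boundaries i and j, or None if parallel."""
    (a11, a12), (a21, a22) = A[i], A[j]
    det = a11 * a22 - a12 * a21
    if abs(det) < 1e-12:
        return None
    return ((b[i] * a22 - a12 * b[j]) / det,
            (a11 * b[j] - b[i] * a21) / det)

def feasible(p, tol=1e-9):
    return all(ai[0] * p[0] + ai[1] * p[1] <= bi + tol
               for ai, bi in zip(A, b))

# Candidate vertices: feasible intersections of constraint pairs.
vertices = [p for i, j in combinations(range(len(A)), 2)
            if (p := intersect(i, j)) is not None and feasible(p)]
best = max(vertices, key=lambda p: c[0] * p[0] + c[1] * p[1])
print(best)  # (3.0, 1.0), objective value 11
```

Vertex enumeration is exponential in general; the simplex method instead walks from vertex to adjacent vertex, improving the objective at each step.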
Algorithmic paradigms span deterministic and probabilistic approaches: the simplex algorithm of George Dantzig and interior-point methods developed by researchers at AT&T Bell Laboratories; branch-and-bound and cutting-plane methods used in industrial settings such as McKinsey & Company and Ford Motor Company; gradient-based methods rooted in Augustin-Louis Cauchy's steepest descent; Newton and quasi-Newton methods with historical links to Isaac Newton and Carl Friedrich Gauss; dynamic programming from Richard Bellman; and metaheuristics popularized by practitioners at Los Alamos National Laboratory and Sandia National Laboratories, including simulated annealing, inspired by the physical annealing process and the Metropolis algorithm, and genetic algorithms influenced by John Holland. Modern large-scale solvers arise from collaborations among IBM Research, Google, Microsoft Research, and academic centers such as Stanford University and the University of California, Berkeley.
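Cauchy's steepest-descent idea fits in a few lines. The quadratic below is an assumed toy objective, and the fixed step size is chosen against its curvature; this is a sketch, not a production method (practical solvers use line searches or adaptive steps).

```python
# Minimal sketch of steepest descent on an assumed convex quadratic:
#   f(x, y) = (x - 1)^2 + 4 * (y + 2)^2,  minimized at (1, -2).

def grad(x, y):
    """Gradient of f at (x, y)."""
    return (2 * (x - 1), 8 * (y + 2))

x, y = 0.0, 0.0      # starting point
step = 0.1           # fixed step; must be below 2/L, here L = 8
for _ in range(500):
    gx, gy = grad(x, y)
    x, y = x - step * gx, y - step * gy

print(x, y)  # converges to (1.0, -2.0)
```

Each coordinate contracts toward the minimizer by a constant factor per step, so convergence here is geometric.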
Complexity classifications for optimization problems rest on the theory developed by Alan Turing, Stephen Cook, and Richard Karp, with NP-completeness results guiding expectations about tractability for problems such as the traveling salesman problem studied by Karp and by Garey and Johnson. Approximation algorithms trace back to work at Bell Labs and centers such as Princeton University, while hardness results build on the reductions introduced by Cook and Levin. Probabilistic analysis of algorithms draws on contributions from Andrey Kolmogorov, Paul Erdős, and Alfréd Rényi, and connections to polyhedral combinatorics reflect advances by Jack Edmonds.
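The tractability gap can be made concrete. The sketch below compares a brute-force exact solve of a tiny traveling salesman instance (factorial cost, which NP-hardness results lead us to expect in the worst case) with the fast nearest-neighbor heuristic, which carries no optimality guarantee. The city coordinates are invented for illustration.

```python
from itertools import permutations
from math import dist

# Assumed city coordinates (illustrative only).
cities = [(0, 0), (1, 5), (5, 2), (6, 6), (8, 0)]

def tour_length(order):
    """Total length of the closed tour visiting cities in `order`."""
    return sum(dist(cities[order[i]], cities[order[(i + 1) % len(order)]])
               for i in range(len(order)))

# Exact: brute force over (n-1)! tours, fixing city 0 as the start.
best = min(((0,) + p for p in permutations(range(1, len(cities)))),
           key=tour_length)

# Heuristic: greedy nearest neighbor, fast but possibly suboptimal.
def nearest_neighbor(start=0):
    unvisited = set(range(len(cities))) - {start}
    tour = [start]
    while unvisited:
        nxt = min(unvisited, key=lambda j: dist(cities[tour[-1]], cities[j]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tuple(tour)

greedy = nearest_neighbor()
print(tour_length(best), tour_length(greedy))
```

Brute force is fine for five cities but hopeless at fifty; branch-and-bound and approximation algorithms occupy the ground between these two extremes.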
Applications span engineering, science, and policy: trajectory optimization in projects at NASA and the European Space Agency; portfolio optimization at banks such as Goldman Sachs and JPMorgan Chase, using the mean-variance theory of Harry Markowitz; supply-chain and logistics optimization practiced by UPS and DHL; energy dispatch and unit-commitment problems addressed by General Electric and grid operators such as National Grid (Great Britain); and machine learning model training at Google and OpenAI, which employs optimization routines with roots at Bell Labs and AT&T. Case studies include aircraft design at Boeing, drug-discovery collaborations with Pfizer, and network-flow optimization in projects by Cisco Systems.
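Markowitz-style mean-variance reasoning admits a closed form in the two-asset case. The sketch below uses assumed volatilities and correlation, not real market data, and computes the minimum-variance portfolio weight.

```python
# Two-asset minimum-variance portfolio (assumed, made-up parameters).
# With weights w and 1 - w, portfolio variance is
#   V(w) = w^2 s1^2 + (1 - w)^2 s2^2 + 2 w (1 - w) cov
# Setting dV/dw = 0 gives the minimum-variance weight in closed form.

s1, s2, rho = 0.20, 0.10, 0.3   # volatilities and correlation (assumed)
cov = rho * s1 * s2             # covariance between the two assets

w_min = (s2**2 - cov) / (s1**2 + s2**2 - 2 * cov)
variance = (w_min**2 * s1**2 + (1 - w_min)**2 * s2**2
            + 2 * w_min * (1 - w_min) * cov)
print(w_min, variance ** 0.5)   # weight on asset 1 and portfolio volatility
```

Diversification shows up directly: the resulting portfolio variance is below that of holding the lower-volatility asset alone.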