LLMpedia: The first transparent, open encyclopedia generated by LLMs

Revised simplex method

Generated by GPT-5-mini
Note: This article was automatically generated by a large language model (LLM) from purely parametric knowledge (no retrieval). It may contain inaccuracies or hallucinations. This encyclopedia is part of a research project currently under review.
Article Genealogy
Expansion Funnel: Raw 66 → Dedup 0 → NER 0 → Enqueued 0
1. Extracted: 66
2. After dedup: 0
3. After NER: 0
4. Enqueued: 0
Revised simplex method
Name: Revised simplex method
Introduced: 1950s
Field: Mathematical optimization
Related: Simplex method, Linear programming, Interior-point method

The revised simplex method is an algorithmic refinement of the simplex method for solving linear programming problems, focused on efficient matrix computations and sparse data structures. It reformulates the tableau operations to maintain and update a representation of the basis inverse implicitly, enabling the large-scale implementations found in industrial optimization packages from Bell Labs, AT&T, and IBM and in academic software from Stanford University and the Massachusetts Institute of Technology. Developed alongside advances in numerical linear algebra at institutions such as Bell Labs, the University of California, Berkeley, and Princeton University, the method underpins many commercial solvers from FICO and Gurobi as well as open-source efforts such as COIN-OR.

Introduction

The revised simplex method reinterprets the classical tableau-based steps through basis-oriented linear algebra, storing a basis matrix and computing the necessary vectors by solving linear systems rather than maintaining a full tableau explicitly. Pioneering work in numerical algorithms by researchers at Bell Labs and IBM catalyzed adoption, while the theoretical foundations draw on the contributions of John von Neumann and George Dantzig to linear programming. Implementations interact with sparse matrix libraries developed at Lawrence Livermore, Argonne, and Sandia National Laboratories to handle industrial-scale problems from General Motors, Boeing, and Airbus.

Algorithmic Formulation

The method partitions the columns of the constraint matrix A into a feasible basis B and its complement N, performing pivot selection by computing reduced costs and direction vectors via solves with the basis matrix. Core operations include solving B x_B = b for the basic variables, solving B^T y = c_B for the dual vector y used in the reduced costs c_N - N^T y, and selecting entering and leaving variables with a ratio test that references individual columns of A. Early algorithmic variants were documented by researchers at the University of California, Los Angeles, Carnegie Mellon University, and the University of Waterloo, and formalized in textbooks used at Harvard University and Oxford University.
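The iteration described above can be sketched in a few lines of dense linear algebra. This is an illustrative toy, not a production implementation: it calls NumPy's `solve` where real solvers maintain a sparse LU factorization, uses Dantzig's most-negative-reduced-cost rule for the entering variable, assumes a feasible starting basis is supplied, and uses an arbitrary 1e-9 tolerance.

```python
import numpy as np

def revised_simplex(A, b, c, basis, max_iter=100):
    """Minimize c^T x subject to A x = b, x >= 0, from a feasible basis.

    `basis` is a list of column indices whose columns form an invertible
    B with B^{-1} b >= 0. Dense solves stand in for the sparse LU
    factorizations used in production codes.
    """
    m, n = A.shape
    basis = list(basis)
    for _ in range(max_iter):
        B = A[:, basis]
        x_B = np.linalg.solve(B, b)            # basic solution: B x_B = b
        y = np.linalg.solve(B.T, c[basis])     # dual vector: B^T y = c_B
        nonbasic = [j for j in range(n) if j not in basis]
        reduced = c[nonbasic] - A[:, nonbasic].T @ y
        if np.all(reduced >= -1e-9):           # no improving column: optimal
            x = np.zeros(n)
            x[basis] = x_B
            return x, c @ x
        q = nonbasic[int(np.argmin(reduced))]  # entering column (Dantzig rule)
        d = np.linalg.solve(B, A[:, q])        # direction: B d = A_q
        mask = d > 1e-9
        if not mask.any():
            raise ValueError("problem is unbounded")
        ratios = np.full(m, np.inf)
        ratios[mask] = x_B[mask] / d[mask]
        r = int(np.argmin(ratios))             # leaving row (ratio test)
        basis[r] = q
    raise RuntimeError("iteration limit reached")
```

For example, minimizing -x1 - x2 subject to x1 + x2 + s = 2 from the slack basis [2] reaches x = (2, 0, 0) with objective -2 after one pivot.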

Computational Implementation and Complexity

Practical implementations exploit sparse LU factorizations of the basis and update the factors incrementally between pivots (for example via Bartels–Golub or Forrest–Tomlin updates), drawing on techniques introduced at Bell Labs and in research from ETH Zurich. Complexity analysis relates to pivot counts: worst-case exponential behavior is demonstrated by constructions such as the Klee–Minty cube, and Narendra Karmarkar's work on polynomial-time interior-point methods provides a theoretical alternative, while average-case and practical performance are dominated by problem structure arising in applications at Shell, ExxonMobil, and Deutsche Bank. Software packages from FICO, Gurobi, CPLEX, and MOSEK illustrate engineering trade-offs in memory, flop counts, and I/O when handling constraint matrices from Siemens projects.
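The incremental-update idea can be illustrated with the classical product form of the inverse: replacing a single basis column multiplies the basis inverse by one "eta" matrix, so a pivot costs a structured update rather than a full refactorization. The dense sketch below is for exposition only; production codes update sparse LU factors instead of explicit inverses.

```python
import numpy as np

def eta_matrix(d, r):
    """Product-form update for one pivot.

    If column r of the basis B is replaced by a column a_q, and
    d = B^{-1} a_q is the direction vector already computed for the
    ratio test, then the new basis inverse is E @ B^{-1}, where E is
    the identity except in column r.
    """
    m = len(d)
    E = np.eye(m)
    E[:, r] = -d / d[r]   # eliminate the other entries of column r
    E[r, r] = 1.0 / d[r]  # scale the pivot entry to 1
    return E
```

Solvers accumulate these eta matrices in an "eta file" and apply them in sequence to each right-hand side, refactorizing from scratch only when the file grows long or accuracy degrades.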

Numerical Stability and Pivoting Strategies

Numerical robustness requires careful pivot selection and basis maintenance; researchers at INRIA, Los Alamos National Laboratory, and the Max Planck Society have contributed pivoting heuristics and refinement techniques. Strategies such as partial pivoting, relative tolerance tests, and iterative refinement rely on linear algebra routines standardized by BLAS and LAPACK, with verification and scaling methods influenced by studies at NIST. Ill-conditioned bases arising in problems from Goldman Sachs, Morgan Stanley, and Barclays motivate preprocessing and post-solve refinement techniques documented at SIAM conferences and Mathematical Programming Society meetings.
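Iterative refinement, mentioned above, is simple to sketch: solve once, compute the residual, solve against the residual with the same basis, and add the correction back. The sketch below calls NumPy's `solve` at each step for clarity; a real implementation would reuse the existing LU factors of the basis, and the iteration counts and test matrix are illustrative choices.

```python
import numpy as np

def refine_solve(B, b, iters=2):
    """Solve B x = b, then improve x by iterative refinement.

    Each pass computes the residual r = b - B x in working precision
    and adds the correction B^{-1} r back, recovering accuracy lost
    to rounding on ill-conditioned bases.
    """
    x = np.linalg.solve(B, b)
    for _ in range(iters):
        r = b - B @ x                 # residual of the current solution
        x = x + np.linalg.solve(B, r) # correction step
    return x
```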

Variants and Extensions

Extensions include the dual revised simplex method, primal-dual strategies, and crash procedures for finding an initial feasible basis; these were advanced in collaborations among Cornell University, the University of Cambridge, and the California Institute of Technology. Hybrid methods combine revised simplex pivots with barrier or interior-point phases influenced by Karmarkar's work and by implementations in IBM and FICO products. Specialized adaptations address network flow problems studied at MIT and ETH Zurich, and decomposition schemes such as Dantzig–Wolfe are used by firms like McKinsey & Company and in projects at the World Bank.
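The simplest crash procedure can be sketched concretely: for a problem in the form A x <= b with b >= 0 (an assumption of this sketch), appending one slack variable per row yields an all-slack basis that is feasible by construction, avoiding a phase-1 search entirely.

```python
import numpy as np

def slack_crash_basis(A_ineq, b):
    """All-slack crash start for A x <= b with b >= 0.

    Appends an identity block of slack columns; those columns form a
    basis B = I, so the initial basic solution x_B = b is feasible
    without any phase-1 work.
    """
    if np.any(b < 0):
        raise ValueError("this simple crash requires b >= 0")
    m, n = A_ineq.shape
    A = np.hstack([A_ineq, np.eye(m)])   # [A | I] in standard form
    basis = list(range(n, n + m))        # indices of the slack columns
    return A, basis
```

Real crash heuristics go further, pulling structural columns into the starting basis to reduce the pivot count, but the all-slack start is the baseline they are measured against.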

Applications and Performance in Practice

The revised simplex method remains a practical choice for many industrial linear programs: transportation logistics at UPS and FedEx, energy market modeling at E.ON and Exelon, and financial portfolio optimization at JPMorgan Chase and Goldman Sachs. Benchmarks reported by DIMACS and at COMAP competitions compare revised simplex against interior-point solvers on instances from MIPLIB and on real-world datasets from Google and Facebook. Commercial solvers from Gurobi, CPLEX, and MOSEK typically implement highly optimized revised simplex engines alongside interior-point methods, exploiting the simplex method's strengths in the sparse, degenerate, or warm-started scenarios encountered at Amazon and Walmart.

Category:Linear programming methods