| Fourier–Motzkin elimination | |
|---|---|
| Name | Fourier–Motzkin elimination |
| Type | Algorithm |
| Input | System of linear inequalities |
| Output | Equivalent projection of feasible region |
| Introduced | 19th century (Fourier); rediscovered 1936 (Motzkin)
| Authors | Joseph Fourier; Theodore Motzkin |
Fourier–Motzkin elimination is a method for eliminating variables from a system of linear inequalities, producing an equivalent system in fewer variables. Geometrically, it computes the projection of a polyhedron onto a coordinate subspace. The method was described by Joseph Fourier in the 19th century and rediscovered by Theodore Motzkin in 1936, and it underlies basic results in polyhedral theory, linear programming, and combinatorial optimization, fields developed by Hermann Minkowski, Hermann Weyl, and George Dantzig, among others.
Fourier–Motzkin elimination operates on a finite system of linear inequalities with coefficients in an ordered field, typically the rational numbers. Given variables x1, ..., xn, the goal is to eliminate xn and obtain an equivalent system in x1, ..., x_{n−1}. The method partitions the inequalities by the sign of the coefficient of xn (positive, negative, or zero), then combines every positive–negative pair so that xn cancels, yielding bounds on the remaining variables. Geometrically, this computes the projection of the feasible polyhedron onto the first n−1 coordinates.
Start with a system A x ≤ b. To eliminate the variable xn:
- Partition the inequalities into P = {a·x ≤ α : coefficient of xn > 0}, N = {a·x ≤ α : coefficient of xn < 0}, and Z = {a·x ≤ α : coefficient of xn = 0}.
- For each p in P and q in N, form a new inequality as the positive linear combination of p and q in which the coefficients of xn cancel.
- Retain Z unchanged and take its union with all new inequalities; the result is a system in n−1 variables whose solution set is exactly the projection of the original feasible region.
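The steps above can be sketched in Python. This is an illustrative implementation, not code from any standard library; the function name `eliminate_last` and the row encoding `(coeffs, rhs)` for `coeffs · x ≤ rhs` are choices made here.

```python
def eliminate_last(rows):
    """One Fourier–Motzkin step: eliminate the last variable.

    Each row is (coeffs, rhs), encoding sum(coeffs[i] * x[i]) <= rhs.
    Returns an equivalent system over the first len(coeffs) - 1 variables.
    """
    P, N, Z = [], [], []                   # split by sign of the last coefficient
    for coeffs, rhs in rows:
        c = coeffs[-1]
        (P if c > 0 else N if c < 0 else Z).append((coeffs, rhs))
    out = [(coeffs[:-1], rhs) for coeffs, rhs in Z]     # Z passes through
    for ap, bp in P:
        for an, bn in N:
            lam, mu = -an[-1], ap[-1]      # both positive; cancels x_n exactly
            combined = [lam * u + mu * v for u, v in zip(ap[:-1], an[:-1])]
            out.append((combined, lam * bp + mu * bn))
    return out

# Project {x + y <= 4, x - y <= 2, -x <= 0} onto x:
print(eliminate_last([([1, 1], 4), ([1, -1], 2), ([-1, 0], 0)]))
# -> [([-1], 0), ([2], 6)], i.e. 0 <= x and 2x <= 6
```

The combination coefficients `lam = -an[-1]` and `mu = ap[-1]` are both positive, so the resulting inequality is a valid consequence of the pair, and the xn terms cancel by construction.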
Implementations often employ exact rational arithmetic to preserve exactness: each derived inequality is a linear combination of earlier ones, so floating-point rounding error compounds across successive eliminations.
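A minimal illustration of why exact rationals are preferred, using Python's standard `fractions` module (the specific numbers are only an example):

```python
from fractions import Fraction

# Floating-point arithmetic drifts when coefficients are repeatedly combined:
print(0.1 + 0.2 == 0.3)                                       # False
# Exact rationals keep every derived coefficient and bound exact:
print(Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10))   # True
```

With `Fraction` coefficients, the cancellation of the eliminated variable is exact rather than merely approximate, so no spurious near-zero coefficients appear.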
Correctness follows from linearity: any solution of the original system projects to a solution of the eliminated system, and conversely any solution of the projected system can be extended to a solution of the original, because the combined inequalities guarantee that every lower bound on xn is at most every upper bound. Termination is guaranteed because each elimination step reduces the number of variables by one. The number of inequalities, however, can grow substantially at each step.
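The extension argument can be made concrete: after fixing x1, ..., x_{n−1}, rows with positive xn-coefficient give upper bounds on xn and rows with negative coefficient give lower bounds, and the projected system certifies the bounds are compatible. A sketch, with a hypothetical helper `extend` and the same `(coeffs, rhs)` row encoding as above:

```python
from fractions import Fraction

def extend(point, rows):
    """Pick a feasible value for the last variable, given a point that
    satisfies the projection of `rows` (coeffs . x <= rhs)."""
    lo, hi = None, None
    for coeffs, rhs in rows:
        c = coeffs[-1]
        if c == 0:
            continue
        # residual bound on x_n after substituting the fixed coordinates
        bound = Fraction(rhs - sum(a * v for a, v in zip(coeffs[:-1], point)), c)
        if c > 0:
            hi = bound if hi is None else min(hi, bound)
        else:
            lo = bound if lo is None else max(lo, bound)
    # Fourier–Motzkin guarantees lo <= hi for any point in the projection
    return lo if lo is not None else hi

# x + y <= 4 and x - y <= 2 project to x <= 3; extend the point x = 1:
print(extend([1], [([1, 1], 4), ([1, -1], 2)]))   # -1, a feasible y
```

Here x = 1 gives y ≤ 3 from the first row and y ≥ −1 from the second, so any y in [−1, 3] works; the sketch returns the lower end of the interval.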
Fourier–Motzkin elimination suffers worst-case combinatorial explosion: eliminating one variable from m inequalities can produce on the order of m²/4 new ones, so a naive sequence of eliminations exhibits doubly exponential growth. This contrasts with polynomial-time methods for linear programming such as Khachiyan's ellipsoid method and Karmarkar's interior-point method. Practical use therefore depends on removing redundant inequalities and on heuristics from the polyhedral-computation literature; mixed rational–floating-point implementations are sometimes used to trade exactness against speed.
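The blowup is easy to see by counting rows. One elimination replaces the |P| + |N| rows involving xn with |P| · |N| combined rows, which is maximized when P and N are of equal size. An illustrative count, assuming the worst case at every step and no redundancy removal:

```python
def worst_case_rows(p, n, z):
    """Rows after one elimination: every P x N pair plus the untouched Z rows."""
    return p * n + z

count = 16                       # start with 16 inequalities, all mentioning x_n
for step in range(3):            # worst case: rows split evenly into P and N
    count = worst_case_rows(count // 2, count - count // 2, 0)
    print(f"after {step + 1} eliminations: up to {count} inequalities")
# 64, then 1024, then 262144: the count roughly squares at every step
```

Squaring at each of d steps gives roughly m^(2^d) inequalities from m initial ones, which is why redundancy elimination between steps is essential in practice.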
Extensions include block elimination, which removes several variables at once, and projection algorithms based on Farkas' lemma (Jules Farkas) and linear-programming duality (John von Neumann). Fourier–Motzkin elimination also appears as a component of quantifier-elimination procedures for the first-order theory of the reals, in the tradition of Alfred Tarski and George Collins, and as a preprocessing step in automated theorem proving and in integer programming, as in Ralph Gomory's cutting-plane work.
Applications span feasibility checks in linear programming, in the operations-research tradition of George Dantzig; derivation of projected polyhedral descriptions in combinatorial optimization, for example eliminating flow variables from network-flow formulations studied by Jack Edmonds; derivation of bounds in scheduling problems; preprocessing in mixed-integer programming, following developments by Ralph Gomory and Egon Balas; and quantifier elimination in real algebraic geometry. Computational-geometry applications connect to the theory of convex polytopes as developed by Branko Grünbaum.
Category:Algorithms