| Ky Fan minimax theorem | |
|---|---|
| Name | Ky Fan minimax theorem |
| Field | Functional analysis; Game theory; Topology |
| Introduced | 1953 |
| Author | Ky Fan |
The Ky Fan minimax theorem is a foundational result in Functional analysis and Game theory asserting a minimax equality, under convexity and compactness conditions, for functions on product spaces. It generalizes classical results such as the von Neumann minimax theorem, connects to the fixed-point theorems of Brouwer and Kakutani, and relates to variational principles of the kind that arise in Euler–Lagrange equation problems. The theorem has influenced Nash equilibrium theory, optimization in Hilbert space and Banach space settings, and equilibrium existence results in economic models of the Arrow–Debreu type.
Ky Fan's original formulation considers a real-valued function f defined on the product of two convex sets C and D in topological vector spaces such as a Hilbert space or Banach space. Under the assumptions that C is compact and convex (often a subset of a Euclidean space or a locally convex topological vector space), that D is convex, and that f(x,y) is lower semicontinuous and quasi-convex in x and upper semicontinuous and quasi-concave in y, the theorem guarantees the equality sup_{y in D} inf_{x in C} f(x,y) = inf_{x in C} sup_{y in D} f(x,y). The inequality sup inf ≤ inf sup holds for any f; the substance of the theorem is the reverse inequality. This statement extends the von Neumann result for finite-dimensional matrix games and complements the closely related minimax theorem of Sion, linking to compactness results of Tychonoff and separation theorems associated with Hahn–Banach.
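The equality can be illustrated numerically. The sketch below is a grid-based check, not a proof: it evaluates both sides of the minimax identity for the convex-concave function f(x, y) = x² − y² on C = D = [−1, 1] (the choice of f and the grid resolution are illustrative assumptions) and finds that both equal 0, attained at the saddle point (0, 0).

```python
# Grid-based check of the minimax equality for f(x, y) = x**2 - y**2
# on C = D = [-1, 1]. Here f is convex and continuous in x and concave
# and continuous in y, so the theorem predicts sup_inf == inf_sup.

def f(x, y):
    return x * x - y * y

# Uniform grid on [-1, 1] that contains the saddle point x = y = 0.
grid = [-1.0 + 2.0 * k / 100 for k in range(101)]

# sup over y of (inf over x): the "maximin" value.
sup_inf = max(min(f(x, y) for x in grid) for y in grid)

# inf over x of (sup over y): the "minimax" value.
inf_sup = min(max(f(x, y) for y in grid) for x in grid)

print(sup_inf, inf_sup)  # both 0.0, the saddle value at (0, 0)
```

For a function that is not convex-concave the two sides can differ, which is why the quasi-convexity/quasi-concavity hypotheses are essential.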
Ky Fan introduced the theorem in the context of mid-20th-century advances across Topology, Functional analysis, and mathematical Economics, where the existence of equilibria and saddle points had become central. Influences include foundational work by John Nash on equilibrium concepts, John von Neumann on game theory, and fixed-point contributions from Lefschetz and Brouwer. The theorem addressed the limitations of finite-dimensional techniques in the infinite-dimensional settings encountered in research at institutions such as Princeton University, the University of California, Berkeley, and the Institute for Advanced Study. It built on earlier convexity and continuity ideas explored by Hermann Minkowski, Stefan Banach, and Frigyes Riesz, and resonated with the variational methods of David Hilbert.
Proofs of Ky Fan's minimax theorem use tools from topological vector space theory, fixed-point theorems such as those of Kakutani and Brouwer, and convex analysis techniques in the tradition of Werner Fenchel and R. T. Rockafellar. Alternative routes employ separation theorems of Hahn–Banach type and Michael's selection theorems to handle the upper/lower semicontinuity hypotheses. Notable variants include Sion's minimax theorem, extensions to noncompact domains, and generalizations to set-valued maps linked to Glicksberg's fixed-point theorem and to the equilibrium existence results of Arrow and Debreu. Other generalizations adapt the assumptions to L^p spaces and to operator-theoretic settings in the tradition of Weyl and Gelfand.
The Ky Fan minimax theorem underpins existence proofs for Nash equilibria, market equilibria in the Arrow–Debreu model, and saddle points in variational inequalities that arise in Euler–Lagrange equation problems and in control theory associated with Pontryagin's maximum principle. It informs stability analyses in Hamiltonian mechanics and optimization algorithms in numerical linear algebra traceable to John von Neumann's work on computation. In economics, it supports welfare theorems and bargaining solutions connected to the work of John Nash and Kenneth Arrow. In game theory and operations research, applications extend to zero-sum games of the type studied by Oskar Morgenstern and John von Neumann, to variational inequalities in Arrow-style equilibrium models, and to the convex-concave (saddle-point) programming methods used in modern machine learning.
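For finite zero-sum games the minimax equality specializes to von Neumann's theorem: the expected payoff is bilinear in the two mixed strategies, hence convex-concave on a product of simplices. The sketch below (the payoff matrix and the grid search over mixing probabilities are illustrative assumptions, not a method from the literature) checks this for matching pennies and finds that the maximin and minimax values coincide at the game value 0.

```python
# Matching pennies: the row player maximizes, the column player minimizes.
# Mixed strategies p = (t, 1-t) and q = (s, 1-s); the expected payoff is
# bilinear, so von Neumann's theorem (a special case of Fan's) predicts
# that maximin == minimax.
A = [[1.0, -1.0],
     [-1.0, 1.0]]

def payoff(t, s):
    """Expected payoff for row mix (t, 1-t) against column mix (s, 1-s)."""
    p, q = [t, 1.0 - t], [s, 1.0 - s]
    return sum(p[i] * A[i][j] * q[j] for i in range(2) for j in range(2))

mixes = [k / 100 for k in range(101)]  # grid of mixing probabilities

maximin = max(min(payoff(t, s) for s in mixes) for t in mixes)
minimax = min(max(payoff(t, s) for t in mixes) for s in mixes)

print(maximin, minimax)  # both approximately 0, the value of the game
```

The optimal mix for both players is (1/2, 1/2); any deviation by either player can be exploited, which is exactly the saddle-point structure the theorem guarantees.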
Closely related results include the von Neumann minimax theorem, Sion's minimax theorem, and the fixed-point theorems of Brouwer and Kakutani. Generalizations cover set-valued and infinite-dimensional extensions linked to the work of Glicksberg, selection theorems of Michael, and variational principles developed by Ekeland and Clarke. Operator-theoretic and spectral generalizations relate to contributions from Gelfand, Kolmogorov, and Weyl, while stochastic and measure-theoretic extensions draw on ergodic theory by Birkhoff and measure-theoretic foundations due to Andrey Kolmogorov. Modern computational generalizations intersect with convex optimization research by Boyd and Vandenberghe and with equilibrium computation methods advanced at Princeton University and Stanford University.
Category:Theorems in functional analysis