LLMpedia
The first transparent, open encyclopedia generated by LLMs

Minimax theorem

Generated by GPT-5-mini
Note: This article was automatically generated by a large language model (LLM) from purely parametric knowledge (no retrieval). It may contain inaccuracies or hallucinations. This encyclopedia is part of a research project currently under review.
Article Genealogy
Parent: Game Theory (Hop 4)
Expansion funnel: Raw 57 → Dedup 16 → NER 13 → Enqueued 10
1. Extracted: 57
2. After dedup: 16
3. After NER: 13 (rejected 3: not named entities)
4. Enqueued: 10 (similarity rejected: 1)
Minimax theorem
Name: Minimax theorem
Field: Game theory
Introduced: 1928
Key people: John von Neumann
Related concepts: Zero-sum game, Nash equilibrium, Linear programming


The Minimax theorem is a foundational result in Game theory stating that, in a two-player zero-sum game, the largest payoff one player can guarantee (the maximin) equals the smallest loss the opponent can enforce (the minimax). It formalizes optimal strategies in such games, connects to equilibrium concepts such as Nash equilibrium, and provides bridges to Linear programming and functional analysis.

Statement

In modern form the theorem states that for a finite two-player zero-sum game with payoff matrix A, there exist mixed strategies p for the row player (the maximizer) and q for the column player (the minimizer) such that max_p min_q p^T A q = min_q max_p p^T A q. This asserts equality between the lower (maximin) value and the upper (minimax) value of the game, guaranteeing the existence of saddle-point strategies. The statement is tightly connected to convexity results such as von Neumann's original 1928 theorem and to the separation theorems of functional analysis used in its proofs.
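
The gap between pure-strategy values, and how mixed strategies close it, can be checked numerically. The sketch below uses the matching-pennies matrix as an illustration (the matrix and variable names are chosen for this example, not taken from the article):

```python
# Matching pennies: the row player maximizes p^T A q, the column player minimizes.
A = [[1, -1], [-1, 1]]

def payoff(p, q):
    """Expected payoff p^T A q for mixed strategies p (rows) and q (columns)."""
    return sum(p[i] * A[i][j] * q[j] for i in range(2) for j in range(2))

# Pure-strategy values: the lower (maximin) value is strictly below the
# upper (minimax) value, so no pure saddle point exists.
lower = max(min(row) for row in A)                              # -1
upper = min(max(A[i][j] for i in range(2)) for j in range(2))   # +1

# The mixed strategies p = q = (1/2, 1/2) close the gap: the game value is 0,
# and neither player gains by deviating to any pure strategy.
p = q = [0.5, 0.5]
value = payoff(p, q)                                            # 0.0
best_row_deviation = max(payoff([1, 0], q), payoff([0, 1], q))  # 0.0
best_col_deviation = min(payoff(p, [1, 0]), payoff(p, [0, 1]))  # 0.0
```

The equality of `best_row_deviation`, `best_col_deviation`, and `value` is exactly the saddle-point property asserted by the theorem.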

Proofs and Variants

Von Neumann's original proof employed convexity and combinatorial reasoning and was later reinterpreted using the Hahn–Banach theorem from functional analysis, providing an analytic proof that extends to infinite-dimensional settings. Another proof uses duality in Linear programming by casting mixed-strategy optimization as a pair of primal–dual linear programs; complementary slackness then yields optimal strategies. Minimax-type results also appear in Sion's minimax theorem, for quasi-concave–quasi-convex functions on convex subsets of topological vector spaces, and in the Kakutani fixed-point theorem approach that links to equilibrium existence proofs such as Nash equilibrium. Game-theoretic proofs often reference work by John von Neumann, Oskar Morgenstern, and subsequent expansions by John Nash and Lloyd Shapley; functional-analytic variants cite Stefan Banach and Hugo Steinhaus. Alternate formulations include the matrix saddle-point property, the Ky Fan minimax theorem in functional settings, and elementary algebraic proofs such as those given by Hermann Weyl and Lynn Loomis.
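
The linear-programming proof can be made concrete: the row player's problem "maximize v subject to (p^T A)_j ≥ v for every column j, with p on the probability simplex" is a linear program, and its dual is the column player's problem. A minimal sketch, assuming SciPy's general-purpose `linprog` solver is available (any LP solver would do), applied to rock–paper–scissors:

```python
import numpy as np
from scipy.optimize import linprog

A = np.array([[0, -1, 1], [1, 0, -1], [-1, 1, 0]])  # rock-paper-scissors
m, n = A.shape

# Variables x = (p_1, ..., p_m, v); maximize v == minimize -v.
c = np.zeros(m + 1)
c[-1] = -1.0
# One constraint per column j:  v - (p^T A)_j <= 0.
A_ub = np.hstack([-A.T, np.ones((n, 1))])
b_ub = np.zeros(n)
# Simplex constraint: sum(p) == 1 (v has zero coefficient).
A_eq = np.ones((1, m + 1))
A_eq[0, -1] = 0.0
b_eq = [1.0]
bounds = [(0, None)] * m + [(None, None)]  # p >= 0, v free

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
p, v = res.x[:m], res.x[-1]  # optimal strategy (1/3, 1/3, 1/3), game value 0
```

The dual variables of this program recover the column player's optimal q, which is the content of the LP-duality proof.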

Applications

The Minimax theorem underpins algorithmic guarantees in computer science problems such as adversarial learning, regret minimization, and online algorithms, with links to the Minimax algorithm in artificial intelligence for game-playing agents in tournaments like the World Computer Chess Championship. In economics, it informs bargaining models studied by John Nash and strategic decision models in institutions like the Federal Reserve and international negotiations such as the Bretton Woods Conference. In control theory and robust optimization it appears in H∞ control developed at institutions like MIT and Caltech. In statistics it supports decision-theoretic approaches from scholars at Columbia University and Stanford University, influencing hypothesis testing frameworks and estimators studied by recipients of awards like the Nobel Memorial Prize in Economic Sciences. Engineering applications include signal processing at organizations such as Bell Labs and communication systems originally developed by researchers at AT&T and DARPA. The theorem also finds use in military strategy modeling from historical analyses of conflicts like the Battle of Midway and strategic frameworks at institutes such as the RAND Corporation.
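
The Minimax algorithm used by such game-playing agents can be illustrated on a toy two-ply game tree (the tree and its payoffs are invented for this illustration):

```python
def minimax(node, maximizing):
    """Exhaustive minimax over a game tree given as nested lists,
    with numeric leaves holding payoffs to the maximizing player."""
    if isinstance(node, (int, float)):  # leaf: terminal payoff
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# Two-ply tree: the maximizer picks a branch, then the minimizer picks a leaf.
tree = [[3, 5], [2, 9]]
# Branch 1 guarantees min(3, 5) = 3; branch 2 only guarantees min(2, 9) = 2.
best = minimax(tree, True)  # 3
```

Practical engines add depth limits, evaluation functions, and alpha–beta pruning on top of this recursion, but the guarantee they compute is exactly the maximin value above.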

Historical Development

The result was first proved by John von Neumann in his 1928 paper "Zur Theorie der Gesellschaftsspiele", long before the publication of Theory of Games and Economic Behavior (1944), coauthored with Oskar Morgenstern. The theorem's development intersected with advances in functional analysis by mathematicians such as Stefan Banach and Einar Hille, and with linear programming milestones led by George Dantzig. Subsequent contributions by John Nash broadened equilibrium theory in the 1950s, while later generalizations came from Ky Fan and Maurice Sion. Institutional support from universities like Princeton University and research centers such as the Institute for Advanced Study helped propagate applications across disciplines and into government research laboratories including Los Alamos National Laboratory.

Generalizations and Extensions

Generalizations extend the finite-matrix result to infinite action sets via measurable selection theorems and to non-zero-sum settings through equilibrium concepts like the Correlated equilibrium and refinements studied by Reinhard Selten. Functional-analytic extensions use the Hahn–Banach theorem and Riesz representation theorem to handle topological vector spaces; game-theoretic extensions relate to stochastic games studied by Lloyd Shapley and differential games introduced by Rufus Isaacs. Computational extensions connect to the Ellipsoid method and to interior-point algorithms developed by Narendra Karmarkar and, for general convex problems, by Yurii Nesterov and Arkadi Nemirovski. Modern machine learning leverages minimax formulations in generative adversarial networks, introduced by Ian Goodfellow and collaborators at the University of Montreal and developed further at companies such as Google and Facebook.
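
The link between regret minimization and minimax values admits a short numerical sketch: when both players of a zero-sum game run a multiplicative-weights (no-regret) update against each other, their time-averaged strategies approach optimal ones and the empirical payoff approaches the game value. The 2×2 matrix below is a hypothetical example chosen for this sketch:

```python
import math

A = [[2, -1], [-1, 1]]  # row maximizes; value is 0.2 at p = q = (0.4, 0.6)
T, eta = 20000, 0.01
wp, wq = [1.0, 1.0], [1.0, 1.0]  # multiplicative weights for each player
sp, sq = [0.0, 0.0], [0.0, 0.0]  # running sums for time-averaged strategies

for _ in range(T):
    p = [w / sum(wp) for w in wp]
    q = [w / sum(wq) for w in wq]
    for k in range(2):
        sp[k] += p[k]
        sq[k] += q[k]
    # Payoff of each pure row against q, and of p against each pure column.
    row_pay = [sum(A[i][j] * q[j] for j in range(2)) for i in range(2)]
    col_pay = [sum(p[i] * A[i][j] for i in range(2)) for j in range(2)]
    # Maximizer up-weights well-performing rows; minimizer down-weights costly columns.
    wp = [wp[i] * math.exp(eta * row_pay[i]) for i in range(2)]
    wq = [wq[j] * math.exp(-eta * col_pay[j]) for j in range(2)]

p_avg = [s / T for s in sp]
q_avg = [s / T for s in sq]
```

This is the connection made precise by Freund and Schapire: the sum of the two players' average regrets bounds how far the averaged strategy profile is from a saddle point, which is why no-regret dynamics underlie both theoretical proofs of the theorem and practical adversarial training.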

Category:Game theory