LLMpedia: The first transparent, open encyclopedia generated by LLMs

Majority Is Stablest theorem

Generated by GPT-5-mini
Note: This article was automatically generated by a large language model (LLM) from purely parametric knowledge (no retrieval). It may contain inaccuracies or hallucinations. This encyclopedia is part of a research project currently under review.
Article Genealogy
Parent: Subhash Khot (hop 5)
Expansion funnel: Extracted 9 → After dedup 0 (None) → After NER 0 → Enqueued 0
Majority Is Stablest theorem
Name: Majority Is Stablest theorem
Field: Theoretical computer science; probability theory
Proven: 2010
Authors: Mossel, O'Donnell, Oleszkiewicz
Related: Invariance principle; Boolean function analysis

The Majority Is Stablest theorem is a result in discrete probability and theoretical computer science connecting Boolean function analysis, Gaussian isoperimetry, and hardness of approximation. The theorem formalizes the sense in which the majority function maximizes noise stability among functions whose individual influences are all small, linking work in combinatorics, analysis, and computational complexity. Key contributors are Elchanan Mossel, Ryan O'Donnell, and Krzysztof Oleszkiewicz, and the result influenced work by Sanjeev Arora, Subhash Khot, and others on approximation algorithms.

Introduction

The theorem arose from interactions among research programs at institutions including the Massachusetts Institute of Technology, Princeton University, and the Institute for Advanced Study, and from problems posed at venues such as the STOC and FOCS conferences. It builds on earlier developments by Jeff Kahn, Gil Kalai, and Nathan Linial and on influences from Michel Talagrand and Charles Fefferman in the theory of functional inequalities. Connections to Gaussian isoperimetric results trace to work by Christer Borell and to classical results by Hermann Minkowski and Friedrich Riesz, while the analytic framework draws on techniques associated with Charles Stein and Edward Nelson. The theorem played a pivotal role in advances connected to the Unique Games Conjecture and to hardness-of-approximation results explored by Umesh Vazirani, Johan Håstad, and David Zuckerman.

Statement of the Theorem

Informally, the Majority Is Stablest theorem asserts that, among Boolean functions with a given expectation in which every coordinate has low influence, the majority function essentially maximizes noise stability for every fixed correlation parameter ρ ∈ [0,1). Precise formulations use the influences defined by Kahn, Kalai, and Linial and the noise operator studied in harmonic analysis on the Boolean cube, with extremal comparisons against threshold (majority) functions previously considered by Ehud Friedgut and Gil Kalai. The formal statement uses parameters introduced in work by Ryan O'Donnell and Michael Saks and rests on Gaussian analogues proven by Christer Borell; it leverages invariance principles that transfer discrete problems to Gaussian space, as in the work of Mossel and collaborators.
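A standard formal statement (reproduced here from memory of the usual formulation, so parameter conventions may differ slightly from the original paper) reads:

```latex
% Majority Is Stablest (Mossel--O'Donnell--Oleszkiewicz), standard form.
% For every correlation $\rho \in [0,1)$ and every $\varepsilon > 0$
% there exists $\tau > 0$ such that:
\[
f : \{-1,1\}^n \to [-1,1],\quad
\mathbb{E}[f] = 0,\quad
\mathrm{Inf}_i(f) \le \tau \ \text{for all } i
\;\Longrightarrow\;
\mathrm{Stab}_\rho(f) \le 1 - \frac{2}{\pi}\arccos\rho + \varepsilon .
\]
% Here $\mathrm{Stab}_\rho(f) = \mathbb{E}[f(x)f(y)]$ for $\rho$-correlated
% pairs $(x,y)$, $\mathrm{Inf}_i(f) = \sum_{S \ni i} \hat{f}(S)^2$, and the
% bound $1 - \frac{2}{\pi}\arccos\rho$ is $\lim_{n\to\infty}
% \mathrm{Stab}_\rho(\mathrm{Maj}_n)$, the limiting stability of majority.
```

The low-influence hypothesis is essential: dictator functions f(x) = x_i have stability exactly ρ, which exceeds the bound, but each has one coordinate of influence 1.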

Proof Sketch and Techniques

The proof combines discrete Fourier analysis on the Boolean cube, hypercontractivity inequalities originally developed by Edward Nelson and Leonard Gross, and an invariance principle linking multilinear polynomials to Gaussian chaoses, extending ideas from the central limit theorem as treated by Sergey N. Bernstein and Paul Lévy. Key technical ingredients include anticoncentration bounds of Littlewood–Offord type, the influences formalized by Kahn, Kalai, and Linial, and Gaussian isoperimetry attributed to Christer Borell and Vladimir Sudakov. The argument synthesizes methods from approximation theory in the tradition of Antoni Zygmund with contemporary combinatorial techniques employed by Noga Alon and Béla Bollobás, and it draws on hypercontractive semigroup methods related to the Ornstein–Uhlenbeck semigroup studied by Leonard Gross and by researchers in stochastic analysis such as Daniel Stroock.
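To make the Fourier-analytic quantities concrete, the following sketch (illustrative code of my own, not from the original paper) computes noise stability exactly from the Fourier expansion, Stab_ρ(f) = Σ_S ρ^|S| f̂(S)², and compares a dictator with the 5-voter majority:

```python
from itertools import product

def chi(x, S):
    """Character chi_S(x) = product of x_i over coordinates i with S[i] == 1."""
    out = 1
    for xi, si in zip(x, S):
        if si:
            out *= xi
    return out

def fourier_coeffs(f, n):
    """All Fourier coefficients hat{f}(S) = E_x[f(x) * chi_S(x)] of
    f : {-1,1}^n -> R, indexed by 0/1 indicator vectors of the set S."""
    points = list(product([-1, 1], repeat=n))
    return {S: sum(f(x) * chi(x, S) for x in points) / len(points)
            for S in product([0, 1], repeat=n)}

def noise_stability(f, n, rho):
    """Stab_rho(f) = sum over S of rho^{|S|} * hat{f}(S)^2."""
    return sum(rho ** sum(S) * c ** 2 for S, c in fourier_coeffs(f, n).items())

maj5 = lambda x: 1 if sum(x) > 0 else -1   # 5-voter majority (odd n, no ties)
dictator = lambda x: x[0]                  # copies the first voter

rho = 0.5
s_maj = noise_stability(maj5, 5, rho)      # strictly between 1/3 and rho
s_dict = noise_stability(dictator, 5, rho) # exactly rho
```

At ρ = 0.5 the dictator's stability is exactly 0.5, while Maj₅'s lies strictly between the Gaussian limit 1 − (2/π)arccos(0.5) = 1/3 and 0.5; the theorem says no low-influence function with mean zero can beat that limit by more than ε.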

Applications and Consequences

The theorem has direct implications for the theory of hardness of approximation, informing inapproximability results tied to the Unique Games Conjecture and to PCP theorems developed by Johan Håstad and Sanjeev Arora. It underlies the optimal MAX-CUT inapproximability analysis of Subhash Khot, Guy Kindler, Elchanan Mossel, and Ryan O'Donnell, and it bears on social choice theory by formalizing why majority voting is robust to small perturbations of the votes, complementing earlier social choice results by Kenneth Arrow and Amartya Sen. In learning theory, there are links to work by Leslie Valiant and Michael Kearns on noise-tolerant learning. The theorem also intersects with Gaussian comparison inequalities used in probability by Michel Ledoux and Stéphane Boucheron, and with optimization landscapes studied in convex geometry by Grigori Perelman and Paul Erdős.
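The social-choice reading can be made quantitative with a small exact computation (again an illustrative sketch of my own): if each of n votes is flipped independently with probability δ, the probability that the majority outcome changes equals (1 − Stab_ρ(Maj_n))/2 with ρ = 1 − 2δ, and it stays small when δ is small:

```python
from itertools import product

def flip_probability(f, n, delta):
    """Exact P[f(x) != f(y)], where x is uniform on {-1,1}^n and y is
    obtained by flipping each coordinate of x independently with prob delta."""
    total = 0.0
    for x in product([-1, 1], repeat=n):
        for mask in product([0, 1], repeat=n):   # 1 = this coordinate flips
            y = tuple(-xi if m else xi for xi, m in zip(x, mask))
            p_mask = 1.0
            for m in mask:
                p_mask *= delta if m else 1 - delta
            if f(x) != f(y):
                total += p_mask
    return total / 2 ** n

maj3 = lambda x: 1 if sum(x) > 0 else -1   # 3-voter majority

p = flip_probability(maj3, 3, 0.1)         # each vote flipped with prob 10%
```

Here p = 0.136, which matches (1 − Stab_0.8(Maj_3))/2 computed from the Fourier expansion, since δ = 0.1 corresponds to correlation ρ = 1 − 2δ = 0.8: a 10% per-vote noise rate changes the three-voter outcome only about 13.6% of the time.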

Subsequent research generalized the result to non-uniform product measures, to functions on domains studied by Maté Haiman and Doron Zeilberger, and to stability bounds in Gaussian space extending Borell's inequality. Related theorems include majority-like extremal results by Ehud Kalai, Friedgut's junta theorem, and invariance-principle developments by Elchanan Mossel and Joseph Neeman. Work by Irit Dinur, Oded Regev, and Raghu Meka explored algorithmic consequences and derandomization variants, while connections to isoperimetric problems linked to Federbush and Almgren influenced perspectives from geometric measure theory. Ongoing studies involve collaborations across the mathematics and computer science communities of Princeton University, the University of California, Berkeley, and the University of Toronto.

Category:Theorems in theoretical computer science