LLMpedia: The first transparent, open encyclopedia generated by LLMs

Kolmogorov's zero–one law

Generated by GPT-5-mini
Note: This article was automatically generated by a large language model (LLM) from purely parametric knowledge (no retrieval). It may contain inaccuracies or hallucinations. This encyclopedia is part of a research project currently under review.
Article Genealogy
Parent: Borel–Cantelli lemma (Hop 4)
Expansion Funnel Raw 38 → Dedup 0 → NER 0 → Enqueued 0
Kolmogorov's zero–one law
Name: Kolmogorov's zero–one law
Field: Probability theory
Discoverer: Andrey Kolmogorov
Year: 1930s
Related: Borel–Cantelli lemma, Hewitt–Savage zero–one law, Lévy zero–one law

Kolmogorov's zero–one law is a foundational result in Andrey Kolmogorov's axiomatic framework for probability theory. It asserts that certain tail events determined by an infinite sequence of independent random variables have probability either zero or one. Introduced during Kolmogorov's development of measure-theoretic probability in the early 1930s, the law influences modern work on Paul Lévy's stochastic processes and Émile Borel's normal number theory, with applications in statistical physics, functional analysis in the style of John von Neumann, and Norbert Wiener's stochastic calculus.

Statement

The law states that for a sequence of independent sigma-algebras, or of independent random variables indexed by the natural numbers, any event measurable with respect to the tail sigma-algebra has probability 0 or 1. In Kolmogorov's measure-theoretic formulation, the tail sigma-algebra is the intersection over all n of the sigma-algebras generated by the variables from index n onward; consequently, any event that is unaffected by changing finitely many terms of the sequence is trivial, in the sense of having probability 0 or 1. The statement is typically presented alongside related early 20th-century results of Émile Borel and Paul Lévy, and it underpins rigorous treatments in texts influenced by Alfréd Rényi, William Feller, and Joseph Doob.
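The statement above can be written out formally; the following is a standard formulation with conventional notation, not taken verbatim from any particular text:

```latex
Let $(X_n)_{n \ge 1}$ be independent random variables on a probability space
$(\Omega, \mathcal{F}, P)$. The tail sigma-algebra is
\[
  \mathcal{T} \;=\; \bigcap_{n=1}^{\infty} \sigma(X_n, X_{n+1}, X_{n+2}, \dots).
\]
Kolmogorov's zero--one law asserts that every tail event is trivial:
\[
  A \in \mathcal{T} \quad\Longrightarrow\quad P(A) \in \{0, 1\}.
\]
```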

Proofs

Kolmogorov's original proof uses independence, sigma-algebra manipulations, and an approximation argument, and appeared alongside the measure-theoretic developments of Henri Lebesgue that underlie Kolmogorov's axiomatization. Alternative proofs invoke the martingale convergence theorem developed by Joseph Doob or apply the Borel–Cantelli lemma associated with Émile Borel and Francesco Paolo Cantelli. A typical argument shows that any bounded tail-measurable random variable is independent of itself and hence almost surely constant; versions of this argument appear in expositions by William Feller, Patrick Billingsley, and Kiyoshi Itô. Measure-theoretic proofs sometimes draw on structural results from John von Neumann's operator theory or on combinatorial methods in the spirit of Paul Erdős and Alfréd Rényi.
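The self-independence argument sketched above can be compressed into a single display (standard reasoning under the independence hypothesis):

```latex
If $A \in \mathcal{T}$, then $A$ is independent of $\sigma(X_1, \dots, X_n)$
for every $n$, and an approximation (monotone-class) argument upgrades this to
independence from $\sigma(X_1, X_2, \dots)$, which contains $A$ itself. Hence
\[
  P(A) \;=\; P(A \cap A) \;=\; P(A)\,P(A) \;=\; P(A)^2,
\]
and the only solutions of $p = p^2$ are $p = 0$ and $p = 1$.
```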

Examples and applications

Classic examples include the almost-sure behavior of coin-toss sequences studied by Émile Borel in normal number theory, and the limsup and liminf events for independent trials that appear in the Borel–Cantelli framework of Francesco Paolo Cantelli and in the work of Paul Erdős. Applications span percolation thresholds considered in Harry Kesten's work, phase transitions in models descending from Ludwig Boltzmann and Lev Landau, and ergodic-type results in the tradition of George David Birkhoff and John von Neumann. In statistical mechanics and the study of Gibbs measures originated by J. Willard Gibbs and extended by Lars Onsager, tail-triviality ensures that certain macroscopic observables are almost surely deterministic; similar reasoning appears in Norbert Wiener's stochastic process theory and in limit theorems of Andrey Kolmogorov and Aleksandr Khinchin. In randomized algorithms and probabilistic combinatorics in the tradition of Donald Knuth and Paul Erdős, tail events govern almost-sure termination or structural properties, while in information theory building on Claude Shannon, tail sigma-algebras inform analyses of infinite sequences and coding limits.
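As a concrete illustration of the coin-toss example above, the following Python sketch checks a finite-n proxy for the tail event that the running average of fair coin tosses converges to 1/2, which the strong law of large numbers (in agreement with the zero–one law's trivial-tail conclusion) says has probability 1. The function name and parameters are illustrative, not from any library:

```python
import random

def tail_event_holds(n=100_000, tol=0.01, seed=None):
    """Finite-n proxy for the tail event {lim S_n / n = 1/2}:
    check whether the sample mean of n fair coin tosses lies
    within tol of 1/2."""
    rng = random.Random(seed)
    heads = sum(rng.random() < 0.5 for _ in range(n))
    return abs(heads / n - 0.5) < tol

# Tail triviality says the limiting event has probability 0 or 1;
# here it has probability 1, so every independent run is expected
# to satisfy the finite-n proxy.
trials = 50
successes = sum(tail_event_holds(seed=s) for s in range(trials))
print(f"{successes}/{trials} runs near the almost-sure limit")
```

With 100,000 tosses the sample mean deviates from 1/2 by more than 0.01 only with negligible probability (the tolerance is roughly six standard deviations), so all runs should pass, mirroring the "probability 1" side of the dichotomy.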

Relation to other zero–one laws

Kolmogorov's law is one of several zero–one laws. It is distinct from, but related to, the Hewitt–Savage zero–one law of Edwin Hewitt and Leonard J. Savage, which treats exchangeable events for i.i.d. sequences, and to the Lévy zero–one law of Paul Lévy, which states that conditional probabilities of an event along an increasing filtration converge almost surely to the event's indicator. Connections are drawn in treatments by William Feller and in comparative studies of results of Andrey Kolmogorov, Paul Erdős, and Joseph Doob. While Kolmogorov's law applies to independent sequences, the Hewitt–Savage law applies to infinite exchangeable sequences and to de Finetti-type representations due to Bruno de Finetti. Lévy's law often appears in contexts explored by Kiyoshi Itô and Norbert Wiener for Brownian motion and martingale limits.
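The three laws discussed above can be juxtaposed schematically; these are standard statements, with the notation chosen here for illustration:

```latex
\begin{itemize}
  \item Kolmogorov: $(X_n)$ independent and
        $A \in \bigcap_n \sigma(X_n, X_{n+1}, \dots)$
        imply $P(A) \in \{0, 1\}$.
  \item Hewitt--Savage: $(X_n)$ i.i.d.\ and $A$ exchangeable
        (invariant under finite permutations of the indices)
        imply $P(A) \in \{0, 1\}$.
  \item L\'evy: for a filtration $(\mathcal{F}_n)$ and
        $A \in \mathcal{F}_\infty$,
        $E[\mathbf{1}_A \mid \mathcal{F}_n] \to \mathbf{1}_A$ almost surely;
        if $A$ is independent of every $\mathcal{F}_n$, this forces
        $P(A) \in \{0, 1\}$.
\end{itemize}
```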

Generalizations and extensions

Generalizations extend Kolmogorov's conclusion to dependent structures under mixing conditions studied by Harald Cramér and by Kolmogorov's successors, and to ergodic-theoretic frameworks shaped by George David Birkhoff and John von Neumann. Extensions include tail-triviality results for exchangeable arrays surrounding the Aldous–Hoover representations (due to David Aldous and Douglas Hoover), zero–one phenomena in percolation and random graph theory as developed by Béla Bollobás and Paul Erdős, and noncommutative analogues in operator algebras linked to John von Neumann and Alain Connes. Recent research connects Kolmogorov-type triviality to concentration inequalities in the style of Sergei Bernstein and to large-deviation principles in the Donsker–Varadhan framework, while applications continue in probabilistic combinatorics and theoretical computer science influenced by Terence Tao and Timothy Gowers.

Category:Probability theory