| Radon–Nikodym theorem | |
|---|---|
| Name | Radon–Nikodym theorem |
| Field | Measure theory |
| Introduced | 1913 (Radon); 1930 (Nikodym) |
| Named after | Johann Radon; Otto Nikodym |
The Radon–Nikodym theorem is a fundamental result in measure theory that gives conditions under which one measure is absolutely continuous with respect to another and therefore has a density (derivative) with respect to it. The theorem builds on the integration theory developed by Henri Lebesgue and synthesizes subsequent work of Johann Radon and Otto Nikodym; it underpins parts of modern probability theory, functional analysis, and mathematical statistics, with applications reaching into ergodic theory, financial mathematics, and representation theory.
Let (X, Σ) be a measurable space and let μ and ν be σ-finite measures on (X, Σ). If ν is absolutely continuous with respect to μ (written ν ≪ μ), then there exists a Σ-measurable function f : X → [0, ∞), called the Radon–Nikodym derivative and denoted dν/dμ, such that ν(A) = ∫_A f dμ for every A in Σ. Conversely, if such an f exists, then ν is absolutely continuous with respect to μ. For signed measures or complex measures one uses the Lebesgue decomposition theorem to split a measure into an absolutely continuous part and a singular part, and the Radon–Nikodym derivative applies to the absolutely continuous component. The theorem is commonly formulated for σ-finite measures, which ensures that f is unique up to μ-almost everywhere equality.
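In the discrete case the statement is elementary and can be checked directly: when μ and ν are given by point masses on a finite set, the derivative is the pointwise ratio of weights. The weights below are illustrative assumptions, not taken from the text.

```python
# Discrete sketch of the Radon-Nikodym theorem on a finite set X.
# nu << mu here means mu({x}) = 0 implies nu({x}) = 0, and the
# derivative is simply f(x) = nu({x}) / mu({x}) wherever mu({x}) > 0.

X = ["a", "b", "c", "d"]
mu = {"a": 0.5, "b": 0.25, "c": 0.25, "d": 0.0}   # reference measure
nu = {"a": 0.1, "b": 0.6, "c": 0.3, "d": 0.0}     # nu vanishes where mu does

# Radon-Nikodym derivative f = d(nu)/d(mu), defined mu-almost everywhere
# (its value on the mu-null set {d} is irrelevant; we pick 0).
f = {x: (nu[x] / mu[x] if mu[x] > 0 else 0.0) for x in X}

def nu_via_integral(A):
    """Compute nu(A) as the integral of f over A with respect to mu."""
    return sum(f[x] * mu[x] for x in A)

# Verify nu(A) = \int_A f d(mu) on a few measurable sets.
for A in [{"a"}, {"a", "b"}, set(X)]:
    assert abs(nu_via_integral(A) - sum(nu[x] for x in A)) < 1e-12
```

Because the state space is finite, absolute continuity and the existence of the density are equivalent to the single pointwise condition above; the subtlety of the theorem lies entirely in the general σ-finite case.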
The theorem synthesizes work by Johann Radon and Otto Nikodym and builds on integration concepts from Henri Lebesgue. Radon proved the result in 1913 for measures on ℝⁿ absolutely continuous with respect to Lebesgue measure; Nikodym extended it to general σ-finite measure spaces in 1930, and John von Neumann later gave an influential functional-analytic proof. The result was incorporated into the axiomatic framework of Andrey Kolmogorov's 1933 foundations of probability, where it underlies conditional expectation and probability densities. Subsequent dissemination occurred through standard textbooks such as Paul Halmos's Measure Theory and Walter Rudin's Real and Complex Analysis, making the theorem a staple of graduate analysis curricula.
Standard proofs proceed via the Hahn decomposition for signed measures together with the Lebesgue decomposition theorem, or via an exhaustion argument that builds the density as a supremum of candidate functions. An elegant alternative due to John von Neumann applies the Riesz representation theorem in the Hilbert space L²(μ + ν). Functional-analytic perspectives relate the theorem to duality in L^p spaces and, for vector-valued measures, to the Radon–Nikodym property of Banach spaces. Extensions to non-σ-finite settings and to operator-valued measures draw on tools from the theory of operator algebras pioneered by John von Neumann.
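For finite measures, von Neumann's Hilbert-space argument can be sketched as follows (a standard outline, with routine verifications omitted):

```latex
% Sketch of von Neumann's proof for finite measures $\mu, \nu$ with $\nu \ll \mu$.
Set $\lambda = \mu + \nu$ and define $\Lambda(g) = \int_X g \, d\nu$
for $g \in L^2(\lambda)$.  By the Cauchy--Schwarz inequality,
\[
  |\Lambda(g)| \le \int_X |g| \, d\lambda
  \le \lambda(X)^{1/2} \, \|g\|_{L^2(\lambda)},
\]
so $\Lambda$ is bounded, and the Riesz representation theorem gives
$h \in L^2(\lambda)$ with
\[
  \int_X g \, d\nu = \int_X g\,h \, d\lambda
  \qquad \text{for all } g \in L^2(\lambda).
\]
Testing against indicator functions shows $0 \le h \le 1$ $\lambda$-a.e.,
and the identity rearranges to
\[
  \int_X g\,(1-h) \, d\nu = \int_X g\,h \, d\mu .
\]
Since $\nu \ll \mu$, the set $\{h = 1\}$ is $\mu$-null, so
$f = h/(1-h)$ is defined $\mu$-a.e.; applying the identity to
$g = \mathbf{1}_A (1 + h + \cdots + h^n)$ and letting $n \to \infty$
by monotone convergence yields $\nu(A) = \int_A f \, d\mu$ for every
$A \in \Sigma$.
```

The σ-finite case then follows by decomposing X into countably many pieces of finite measure and patching the densities together.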
In probability theory the theorem underlies change-of-measure techniques such as Girsanov's theorem, developed in the context of stochastic calculus by Igor Girsanov and applied throughout mathematical finance. In statistics it justifies likelihood ratios, and Radon–Nikodym derivatives are central to the hypothesis-testing framework of Jerzy Neyman and Egon Pearson. In ergodic theory and dynamical systems the theorem supports comparisons of invariant measures, as in work by Ya. B. Pesin and Anatole Katok. In quantum probability and noncommutative integration it informs the modular theory initiated by Minoru Tomita and developed by Masamichi Takesaki in the setting of von Neumann algebras. Signal processing and inverse problems likewise use measure-change ideas traceable to the theorem, and in economics such methods appear in risk-neutral pricing in the tradition of the Black–Scholes–Merton model.
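The change-of-measure idea behind these applications can be illustrated with importance sampling: an expectation under one measure is computed by sampling from another and reweighting by the Radon–Nikodym derivative. The Gaussian pair below is an illustrative choice, not taken from the text.

```python
# Change of measure via a Radon-Nikodym derivative: estimate a mean
# under nu = N(1, 1) by sampling from mu = N(0, 1) and reweighting
# each sample by the density ratio f = d(nu)/d(mu).
import math
import random

random.seed(0)

def rn_derivative(x, shift=1.0):
    # f(x) = phi(x - shift) / phi(x) for the standard normal density phi,
    # which simplifies to exp(shift * x - shift**2 / 2).
    return math.exp(shift * x - 0.5 * shift * shift)

n = 100_000
samples = [random.gauss(0.0, 1.0) for _ in range(n)]

# E_nu[X] = E_mu[X f(X)]; the exact value is the shift, 1.0.
estimate = sum(x * rn_derivative(x) for x in samples) / n
print(f"importance-sampling estimate of E_nu[X]: {estimate:.3f}")
```

The same ratio f is exactly the likelihood ratio of the Neyman–Pearson lemma, and Girsanov's theorem supplies its analogue for path measures of stochastic processes.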
The Lebesgue decomposition theorem, the Hahn decomposition theorem, and the Riesz representation theorem are closely related; each figures prominently in texts by Paul Halmos and Walter Rudin. The Radon–Nikodym property for Banach spaces, treated systematically in the vector-measure literature, characterizes the spaces in which vector-valued analogues of the theorem hold. Noncommutative generalizations produce versions for weights on von Neumann algebras, associated with Tomita–Takesaki theory and with the work of Murray and von Neumann on operator algebras. Other extensions include versions for signed measures, complex measures, and conditional expectations in the setting of Andrey Kolmogorov's probability axioms. Connections also exist with the Daniell integral approach of Percy John Daniell and with Norbert Wiener's construction of Gaussian measure on path space.
Classical examples include absolutely continuous measures on the real line, where ν has a density f with respect to the Lebesgue measure introduced by Henri Lebesgue; the probability densities used in statistical models in the tradition of Ronald Fisher and Jerzy Neyman furnish explicit Radon–Nikodym derivatives. A canonical counterexample shows that σ-finiteness cannot be dropped: take X = [0, 1] with the Borel σ-algebra, let μ be counting measure (which is not σ-finite) and ν Lebesgue measure. Then ν ≪ μ, since the only μ-null set is the empty set, yet no measurable f satisfies ν(A) = ∫_A f dμ: such an f would give ν({x}) = f(x) = 0 for every point x, forcing ν = 0, while ν([0, 1]) = 1. In the vector-valued setting, measures with values in Banach spaces such as c₀ or L¹[0, 1] can fail to admit Bochner-integrable densities, demonstrating the failure of the Radon–Nikodym property for those spaces.