| Erdős–Rényi model | |
|---|---|
| Name | Erdős–Rényi model |
| Field | Graph theory; Probability theory |
| Notable contributors | Paul Erdős, Alfréd Rényi |
# Erdős–Rényi model
The Erdős–Rényi models are two closely related models for generating random graphs, introduced by Paul Erdős and Alfréd Rényi, and form a cornerstone of modern graph theory and probability theory. They launched the systematic study of random graphs, underpin the probabilistic method in combinatorics, and connect combinatorial enumeration with classical limit theorems of probability in the tradition of Andrey Kolmogorov.
The models originated in a series of papers by Paul Erdős and Alfréd Rényi beginning in 1959, written while Rényi directed the Mathematical Institute of the Hungarian Academy of Sciences; the independent-edge variant was introduced contemporaneously and independently by Edgar Gilbert at Bell Labs. The definitions specify random processes on a fixed labeled vertex set and were designed to study how typical graph properties evolve as edges accumulate, combining Erdős's counting techniques from extremal combinatorics with the asymptotic methods of classical probability.
The two canonical formulations are denoted G(n, M) and G(n, p), both defined on a labeled set of n vertices. In G(n, M) one chooses uniformly at random among all graphs with exactly M edges; this is the formulation studied by Erdős and Rényi themselves and treated rigorously in texts by Béla Bollobás. In G(n, p) each of the n(n−1)/2 possible edges is included independently with probability p, the construction due to Edgar Gilbert that is central to probabilistic-method expositions by Noga Alon and Joel Spencer. When M ≈ p · n(n−1)/2 the two models are asymptotically equivalent for many properties, and detailed comparisons between them appear in surveys by Svante Janson.
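Both formulations can be sampled directly. A minimal sketch in Python using only the standard library; the function names `gnp` and `gnm` are illustrative, not from any particular package:

```python
import random
from itertools import combinations

def gnp(n, p, seed=None):
    # G(n, p): keep each of the n(n-1)/2 possible edges independently
    # with probability p.
    rng = random.Random(seed)
    return [e for e in combinations(range(n), 2) if rng.random() < p]

def gnm(n, m, seed=None):
    # G(n, M): choose a uniformly random graph with exactly m edges,
    # i.e. a uniform m-subset of all possible edges.
    rng = random.Random(seed)
    return rng.sample(list(combinations(range(n), 2)), m)

sample_m = gnm(10, 15, seed=1)   # exactly 15 distinct edges
sample_p = gnp(50, 0.1, seed=2)  # a random number of edges, mean ~122.5
```

Note the structural difference: `gnm` fixes the edge count exactly, while in `gnp` the edge count is itself binomially distributed.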
A central result is the phase transition in the size of the largest connected component near p ≈ 1/n, an analogue of the critical phenomena studied in statistical mechanics. In the subcritical regime p = c/n with c < 1, all components are small, of order log n, a picture established by Erdős and Rényi and refined by Béla Bollobás and Oliver Riordan. In the supercritical regime c > 1, a unique giant component of linear size emerges; its analysis rests on comparisons with Galton–Watson branching processes, formalized in work of Svante Janson and Joel Spencer. Near the critical point the largest components occupy a scaling window of width of order n^(−1/3) and have sizes of order n^(2/3), with fluctuations governed by universal limiting distributions.
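The subcritical/supercritical contrast is easy to observe numerically. A sketch, assuming a plain BFS to measure component sizes; with n = 2000, c = 0.5 should yield only small components while c = 2 should yield a giant component spanning a constant fraction of the vertices:

```python
import random
from collections import deque, defaultdict

def gnp_edges(n, p, rng):
    # One independent coin flip per possible edge.
    return [(u, v) for u in range(n) for v in range(u + 1, n) if rng.random() < p]

def largest_component(n, edges):
    # Size of the largest connected component, via BFS from each unseen vertex.
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    seen, best = set(), 0
    for s in range(n):
        if s in seen:
            continue
        seen.add(s)
        queue, size = deque([s]), 0
        while queue:
            u = queue.popleft()
            size += 1
            for w in adj[u]:
                if w not in seen:
                    seen.add(w)
                    queue.append(w)
        best = max(best, size)
    return best

rng = random.Random(0)
n = 2000
sub = largest_component(n, gnp_edges(n, 0.5 / n, rng))  # c = 0.5 < 1: small pieces
sup = largest_component(n, gnp_edges(n, 2.0 / n, rng))  # c = 2.0 > 1: giant component
```

For c = 2 the giant component's limiting fraction x solves x = 1 − e^(−2x), about 0.80, so `sup` should be near 1600 while `sub` stays logarithmically small.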
Key quantities include the degree distribution, clustering coefficient, diameter, chromatic number, and the presence of fixed subgraphs, topics treated in monographs by Béla Bollobás and by Svante Janson, Tomasz Łuczak, and Andrzej Ruciński. In G(n, p) each degree is binomial Bin(n−1, p) and converges to a Poisson distribution when the mean degree np is held fixed. The asymptotics of the chromatic number were determined by Béla Bollobás, with sharper sparse-regime results by Tomasz Łuczak. Counts of small subgraphs such as triangles and cycles obey Poisson and normal limit laws, treated rigorously by Svante Janson. The threshold for connectivity lies at p = (ln n)/n, one of the original results of Erdős and Rényi.
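The Poisson limit of the degree distribution can be checked empirically. A sketch under the assumption that the mean degree λ = (n−1)p is held fixed; the Poisson prediction e^(−λ) for the fraction of isolated vertices is compared against one sample:

```python
import math
import random

def degree_sequence(n, p, rng):
    # Degrees of one G(n, p) sample; each possible edge is one Bernoulli(p) trial.
    deg = [0] * n
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                deg[u] += 1
                deg[v] += 1
    return deg

rng = random.Random(42)
n, lam = 2000, 3.0
p = lam / (n - 1)  # chosen so the mean degree is exactly lam
deg = degree_sequence(n, p, rng)

mean_deg = sum(deg) / n           # should be close to lam
frac_isolated = deg.count(0) / n  # Poisson limit predicts about exp(-lam) ~ 0.05
```

Each degree is exactly Bin(n−1, p); for fixed λ = (n−1)p this converges to Poisson(λ), so the empirical fraction of degree-0 vertices approaches e^(−λ).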
Algorithms for generating G(n, p) and G(n, M) samples and for detecting connected components are classical topics in algorithmic work by Donald Knuth, Robert Tarjan, and John Hopcroft, and graph-search and union-find routines from that line of work underlie most random-graph simulation libraries. Applications span network science in the tradition of Albert-László Barabási, epidemic models descending from Ronald Ross's compartmental dynamics, and average-case analyses in theoretical computer science by Richard Karp and Leslie Valiant. In industry, random graph ensembles serve as null models and benchmarks in resilience and network analyses at companies such as Google, Facebook, and Microsoft Research.
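Component detection of the kind analyzed in Tarjan's disjoint-set work can be sketched with union-find; the class below is a minimal illustrative implementation, not taken from any particular library:

```python
from collections import Counter

class DisjointSet:
    """Union-find with path halving and union by size."""
    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return
        if self.size[ra] < self.size[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra
        self.size[ra] += self.size[rb]

def component_sizes(n, edges):
    # Merge the endpoints of every edge, then count vertices per root.
    ds = DisjointSet(n)
    for u, v in edges:
        ds.union(u, v)
    return sorted(Counter(ds.find(x) for x in range(n)).values(), reverse=True)

# Components {0, 1, 2}, {3, 4}, {5} -> sizes [3, 2, 1]
sizes = component_sizes(6, [(0, 1), (1, 2), (3, 4)])
```

Union-find processes an edge stream in near-linear time, which is why it is the usual choice when tracking components as edges are added one by one.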
Numerous generalizations extend the Erdős–Rényi framework: the configuration model of Michael Molloy and Bruce Reed for prescribed degree sequences; the preferential-attachment models of Barabási and Albert; the inhomogeneous random graphs of Bollobás, Janson, and Riordan, together with the graphon limits formalized by László Lovász and Balázs Szegedy; stochastic block models, central to the community-detection literature surveyed by Santo Fortunato and Mark Newman; and random geometric graphs analyzed by Mathew Penrose. Connections to percolation theory in the tradition of Harry Kesten and to random matrix theory, investigated by Terence Tao and László Erdős building on Eugene Wigner's spectral analysis, highlight interdisciplinary bridges to statistical mechanics.
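Of the generalizations above, the Molloy–Reed configuration model admits a particularly short sampler: pair up degree "stubs" (half-edges) uniformly at random. A minimal sketch, noting that the pairing may create self-loops and multi-edges, which the model permits:

```python
import random
from collections import Counter

def configuration_model(degrees, seed=None):
    # Pair half-edge "stubs" uniformly at random; self-loops and
    # multi-edges are allowed, as in the Molloy-Reed construction.
    if sum(degrees) % 2 != 0:
        raise ValueError("degree sum must be even")
    rng = random.Random(seed)
    stubs = [v for v, d in enumerate(degrees) for _ in range(d)]
    rng.shuffle(stubs)
    return list(zip(stubs[::2], stubs[1::2]))

edges = configuration_model([3, 2, 2, 2, 1], seed=7)
# Every vertex's prescribed degree is reproduced exactly
# (a self-loop contributes 2 to its vertex).
endpoint_counts = Counter(u for e in edges for u in e)
```

Because every stub is consumed exactly once, the multiset of edge endpoints always reproduces the prescribed degree sequence, whatever the random pairing.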