LLMpedia: The first transparent, open encyclopedia generated by LLMs

Metropolis–Hastings algorithm

Generated by GPT-5-mini
Note: This article was automatically generated by a large language model (LLM) from purely parametric knowledge (no retrieval). It may contain inaccuracies or hallucinations. This encyclopedia is part of a research project currently under review.
Article Genealogy
Parent: Gibbs sampler (Hop 5)
Expansion Funnel: Raw 68 → Dedup 0 → NER 0 → Enqueued 0
1. Extracted: 68
2. After dedup: 0 (None)
3. After NER: 0
4. Enqueued: 0
Metropolis–Hastings algorithm
Name: Metropolis–Hastings algorithm
Type: Markov chain Monte Carlo
Inventor: Nicholas Metropolis, W. K. Hastings
Year: 1953 (Metropolis); generalized 1970 (Hastings)
Field: Statistics, statistical mechanics
Related: Markov chain Monte Carlo, Monte Carlo method

The Metropolis–Hastings algorithm is a Markov chain Monte Carlo method for obtaining a sequence of samples from a probability distribution from which direct sampling is difficult, used to approximate integrals in computational problems encountered by researchers in statistics, physics, chemistry, econometrics, and biology. It builds on ideas introduced by Nicholas Metropolis and generalized by W. K. Hastings, and it underpins modern computational tools developed at institutions such as Los Alamos National Laboratory and universities like Princeton University and the University of Cambridge. The algorithm connects to the theoretical frameworks of Andrey Kolmogorov and Andrey Markov and is widely used in contemporary software maintained by projects at Google, Microsoft Research, Stanford University, and the Massachusetts Institute of Technology.

Introduction

The algorithm emerged from work at Los Alamos National Laboratory and is rooted in the statistical mechanics traditions exemplified by Enrico Fermi and Richard Feynman, adopting Monte Carlo ideas popularized by John von Neumann, Stanislaw Ulam, and Nicholas Metropolis. It addresses sampling problems that arise in Bayesian statistics as practiced at institutions like Harvard University and the University of Chicago, and it connects to computational advances funded by DARPA and to efforts led by figures such as John Tukey and Bradley Efron. The method is central to applied research at centers such as Bell Labs, IBM Research, and laboratories associated with the University of California, Berkeley.

Algorithm

The procedure constructs a Markov chain whose transition proposals are drawn from a proposal distribution q, then applies an acceptance criterion derived from the detailed balance conditions introduced by Ludwig Boltzmann and formalized by J. Willard Gibbs. Implementations often rely on pseudorandom number generators, building on early work at the RAND Corporation and on algorithms analyzed by Donald Knuth. The algorithmic steps are implemented in software stacks maintained by teams at Google, Amazon Web Services, the NumPy project, and academic groups at the University of Oxford and the University of Toronto.
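Concretely, at each step the chain at state x draws a candidate y from the proposal q(y | x) and accepts it with probability

\alpha(x, y) = \min\!\left(1, \frac{\pi(y)\, q(x \mid y)}{\pi(x)\, q(y \mid x)}\right),

where \pi is the (possibly unnormalized) target density; on rejection the chain remains at x. For a symmetric proposal, q(x \mid y) = q(y \mid x), the ratio reduces to the original Metropolis criterion \min(1, \pi(y)/\pi(x)). A minimal sketch in Python with NumPy, assuming a Gaussian random-walk proposal and a user-supplied log-density log_target (both illustrative choices, not prescribed by this article):

```python
import numpy as np

def metropolis_hastings(log_target, x0, n_samples, proposal_scale=1.0, seed=0):
    """Random-walk Metropolis-Hastings: a sketch, not a reference implementation.

    log_target -- function returning the log of an unnormalized target density
    x0         -- initial state (scalar or 1-D array)
    """
    rng = np.random.default_rng(seed)
    x = np.atleast_1d(np.asarray(x0, dtype=float))
    log_p = log_target(x)
    samples = np.empty((n_samples, x.size))
    accepted = 0
    for i in range(n_samples):
        # Symmetric Gaussian proposal, so the q-ratio in alpha cancels.
        y = x + proposal_scale * rng.standard_normal(x.size)
        log_p_y = log_target(y)
        # Accept with probability min(1, pi(y)/pi(x)), computed in log space.
        if np.log(rng.uniform()) < log_p_y - log_p:
            x, log_p = y, log_p_y
            accepted += 1
        samples[i] = x
    return samples, accepted / n_samples

# Usage: sample from a standard normal target.
samples, rate = metropolis_hastings(lambda x: -0.5 * np.sum(x**2), 0.0, 10_000)
```

Because only the ratio \pi(y)/\pi(x) enters the acceptance test, the normalizing constant of the target never needs to be computed, which is the property that makes the method so broadly applicable.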

Properties and Convergence

Convergence properties invoke results in Markov chain theory associated with Andrey Kolmogorov and mixing-time analyses influenced by work at Princeton University and the Institute for Advanced Study. Ergodicity and detailed balance are tied to the Perron–Frobenius theory of nonnegative matrices and to spectral-gap analyses employed by researchers at Bell Labs and the Courant Institute. Theoretical guarantees relate to limit theorems advanced by Andrey Markov and by probabilists at the University of Cambridge and Imperial College London.
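In symbols, the detailed balance condition requires the transition kernel to satisfy

\pi(x)\, q(y \mid x)\, \alpha(x, y) = \pi(y)\, q(x \mid y)\, \alpha(y, x) \quad \text{for all } x, y,

and the acceptance probability \alpha given above is the largest choice that satisfies this identity. Detailed balance makes \pi a stationary distribution of the chain; combined with irreducibility and aperiodicity, it guarantees convergence to \pi from any starting point.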

Practical Considerations and Implementation

Practical tuning (proposal scale, burn-in length, thinning) is informed by empirical studies from research groups at Stanford University, Columbia University, Yale University, and University of Michigan. Efficient implementations exploit linear algebra libraries originating from work at Argonne National Laboratory and numerical techniques promoted by John von Neumann and researchers at Lawrence Berkeley National Laboratory. High-performance computing adaptations have been developed by teams at Oak Ridge National Laboratory and Los Alamos National Laboratory, and integration with probabilistic programming languages is pursued by groups at Carnegie Mellon University and University College London.
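As a hedged illustration of these tuning steps (building on the metropolis_hastings sketch above; the burn-in length, thinning interval, and target acceptance window below are conventional rules of thumb, not values prescribed by this article):

```python
# Post-processing a chain: discard warm-up draws, then thin the remainder.
samples, rate = metropolis_hastings(lambda x: -0.5 * np.sum(x**2),
                                    x0=0.0, n_samples=50_000,
                                    proposal_scale=2.4)  # scale found by trial runs
burn_in, thin = 5_000, 10
chain = samples[burn_in::thin]         # drop burn-in, keep every 10th draw
print(f"acceptance rate: {rate:.2f}")  # ~0.2-0.4 is a common target for random walks
print(f"estimated mean: {chain.mean(axis=0)}")
```

A rough diagnostic loop of this kind, adjusting proposal_scale until the acceptance rate lands in the desired window, is a common informal stand-in for the adaptive schemes discussed in the next section.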

Variants and Extensions

Numerous extensions include componentwise updates and adaptive schemes developed by investigators at the University of Toronto and the University of Washington, Hamiltonian methods influenced by Richard Feynman and formalized in work at Princeton University and Harvard University, and population-based techniques studied at the University of Cambridge and the California Institute of Technology. Connections exist to sequential methods rooted in the work of S. Kullback and to inference in the tradition of R. A. Fisher, and hybrid approaches are implemented in software from Google and in academic collaborations with Y Combinator-backed startups.
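For instance, a componentwise variant (often called Metropolis-within-Gibbs) updates one coordinate at a time with a scalar proposal; a minimal sketch under the same assumptions as the earlier code:

```python
def componentwise_mh(log_target, x0, n_sweeps, scales, seed=0):
    """One-coordinate-at-a-time Metropolis updates (a sketch)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float).copy()
    samples = np.empty((n_sweeps, x.size))
    for i in range(n_sweeps):
        for j in range(x.size):
            y = x.copy()
            y[j] += scales[j] * rng.standard_normal()  # perturb coordinate j only
            # Symmetric scalar proposal: accept with min(1, pi(y)/pi(x)).
            if np.log(rng.uniform()) < log_target(y) - log_target(x):
                x = y
        samples[i] = x  # record the state after a full sweep
    return samples
```

Each coordinate update is itself a valid Metropolis step, so composing them over a full sweep leaves the target distribution invariant.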

Applications

Applications span computational studies in statistical mechanics at Los Alamos National Laboratory, inference problems in Bayesian statistics at Harvard University and Columbia University, molecular modeling at Lawrence Berkeley National Laboratory and the California Institute of Technology, image reconstruction in projects at NASA and the European Space Agency, and econometric analyses at the Massachusetts Institute of Technology and the London School of Economics. The algorithm informs contemporary research collaborations involving the National Institutes of Health, the National Science Foundation, the European Research Council, and industry teams at Google, Microsoft Research, IBM Research, and Amazon Web Services.

Category:Algorithms