| Persistent homology | |
|---|---|
| Name | Persistent homology |
| Field | Algebraic topology, Computational topology, Topological data analysis |
| Introduced | c. 2000 |
| Researchers | Gunnar Carlsson; Herbert Edelsbrunner; Afra Zomorodian; Dmitriy Morozov; Robert Ghrist |
Persistent homology is an algebraic technique in algebraic topology and computational topology that quantifies multiscale topological features in data. It originated around 2000 through interactions among researchers at institutions such as Stanford University, Duke University, and Microsoft Research, and it has influenced work across applied mathematics, computer science, and data science.
Persistent homology studies how homological features (connected components, cycles, and voids) appear and disappear as a filtration parameter associated with a space or dataset varies. Foundational contributors include Gunnar Carlsson, Herbert Edelsbrunner, Afra Zomorodian, Dmitriy Morozov, and Robert Ghrist, linked to programs at the Institute for Advanced Study, Princeton University, and the University of Illinois at Urbana–Champaign. The method builds on invariants from homology, on stability results motivated by work at the Institut des Hautes Études Scientifiques, and on computational frameworks influenced by software projects at Google and IBM Research.
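The appearance and disappearance of degree-zero features (connected components) can be tracked with a union-find structure as edges enter the filtration. A minimal sketch, not tied to any particular library (all names are illustrative):

```python
# Sketch: 0-dimensional persistence over an edge filtration.
# Components are born at scale 0; each merge of two components
# records the death of the younger one. Names are illustrative.

def zero_dim_persistence(n_points, edges):
    """edges: iterable of (filtration_value, u, v).
    Returns (birth, death) pairs; one component never dies."""
    parent = list(range(n_points))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    pairs = []
    for t, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:                    # two components merge at scale t
            parent[max(ru, rv)] = min(ru, rv)
            pairs.append((0.0, t))      # one component dies here
    pairs.append((0.0, float("inf")))   # the essential component
    return pairs
```

For three points joined by edges entering at scales 1, 2, and 3, this yields two finite bars, (0, 1) and (0, 2), plus one infinite bar for the surviving component.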
The formalism uses filtrations of simplicial complexes, such as Vietoris–Rips and Čech complexes, built from metric spaces such as point clouds sampled from manifolds studied in differential and Riemannian geometry. Homology with field coefficients (often Z/2Z) gives vector spaces whose ranks define Betti numbers. Persistence modules are representations parameterized by the real line; over a field, a persistence module corresponds to a graded module over a polynomial ring, so the structure theorem for finitely generated modules over principal ideal domains (rooted in the algebra tradition of Emmy Noether at the University of Göttingen) decomposes it into interval summands. This algebraic decomposition yields barcodes and persistence diagrams.
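As an illustration of the Vietoris–Rips construction, the following sketch builds a filtration from a Euclidean point cloud, assigning each simplex the largest pairwise distance among its vertices (function and parameter names are my own, not from any library):

```python
# Sketch: Vietoris-Rips filtration from a Euclidean point cloud.
# A simplex enters at the largest pairwise distance among its
# vertices, so every face appears no later than its cofaces.
from itertools import combinations
from math import dist

def rips_filtration(points, max_scale, max_dim=2):
    """Return [(filtration_value, vertex_tuple)] in filtration order."""
    n = len(points)
    simplices = [(0.0, (i,)) for i in range(n)]   # vertices at scale 0
    for k in range(2, max_dim + 2):               # edges, triangles, ...
        for verts in combinations(range(n), k):
            val = max(dist(points[i], points[j])
                      for i, j in combinations(verts, 2))
            if val <= max_scale:
                simplices.append((val, verts))
    # sort by entry value, breaking ties by dimension (faces first)
    simplices.sort(key=lambda s: (s[0], len(s[1])))
    return simplices
```

For the right triangle {(0,0), (1,0), (0,1)} at scale 1.5, this yields three vertices, three edges, and one 2-simplex entering at √2.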
Algorithmic computation employs column reduction of a boundary matrix, a procedure similar to Gaussian elimination, together with discrete Morse theory optimizations influenced by work at ETH Zurich and École Polytechnique Fédérale de Lausanne. Key algorithmic contributions originate from teams at Brown University, the University of Illinois at Urbana–Champaign, and the University of Utah, implementing persistence via boundary matrices, pairing algorithms, and union-find data structures inspired by research at Bell Labs and Carnegie Mellon University. Enhancements include sparsification, parallelization on NVIDIA GPUs, and streaming algorithms advanced at Microsoft Research and Facebook AI Research. Complexity analyses reduce persistence computation to matrix multiplication, with the relevant algorithms studied at Stanford University and Princeton University.
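The column-reduction pairing over Z/2 can be sketched with columns stored as Python sets of row indices (a naive sparse representation for illustration; production codes add the optimizations described above):

```python
# Sketch: persistence pairing by column reduction over Z/2.
# columns[j] holds the row indices of the boundary of simplex j,
# with simplices indexed in filtration order. Illustrative names.

def reduce_boundary(columns):
    """Return {birth_index: death_index} simplex pairings."""
    reduced = []       # reduced columns, as sets of row indices
    low_to_col = {}    # lowest nonzero row -> column that owns it
    pairs = {}
    for j, col in enumerate(columns):
        col = set(col)
        # add earlier reduced columns (mod 2) while the lowest
        # nonzero entry collides with an already-owned row
        while col and max(col) in low_to_col:
            col ^= reduced[low_to_col[max(col)]]
        reduced.append(col)
        if col:                       # simplex j kills simplex max(col)
            low_to_col[max(col)] = j
            pairs[max(col)] = j
    return pairs
```

For a filled triangle entered as vertices 0–2, edges 3–5, and the 2-simplex 6, the pairing kills vertices 1 and 2 with the first two edges, leaves vertex 0 essential, and pairs the cycle created by edge 5 with the triangle.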
Persistent homology has been applied to pattern recognition problems at Stanford University and the California Institute of Technology, to sensor network coverage problems at the University of California, Berkeley and Yale University, and to materials science collaborations with Argonne National Laboratory and Lawrence Berkeley National Laboratory. In biology it informs single-cell genomics and protein-folding studies at the Broad Institute and the Wellcome Trust Sanger Institute; in neuroscience it complements connectome analysis at Harvard University and the Max Planck Institute for Human Cognitive and Brain Sciences. Other applications intersect with finance groups at New York University and the London School of Economics, climate modeling teams at NASA and NOAA, and robotics labs at MIT and the Georgia Institute of Technology for motion planning. Cross-disciplinary projects have been supported by the Simons Foundation, the NSF, and ERC funding programs.
Stability theorems ensure that small perturbations of the input data produce controlled changes in persistence diagrams, a line of work developed in collaborations involving Gunnar Carlsson, Herbert Edelsbrunner, and researchers at the University of Chicago and ETH Zurich. The interleaving and bottleneck distances connect to categorical perspectives studied in category theory seminars at the University of Cambridge and the University of Oxford, and to geometric measure theory topics pursued at Stanford University. Extensions to multidimensional persistence draw on representation theory advances from groups at the University of Pennsylvania and on computational hardness results discussed at the Institute for Pure and Applied Mathematics. Recent theory explores links with sheaf theory and cosheaves investigated at the University of Notre Dame and the University of Illinois at Chicago.
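For intuition, the bottleneck distance between two very small diagrams can be computed by brute force over matchings, allowing each point to match either a point of the other diagram or its own diagonal projection (illustrative only; practical implementations use geometric matching algorithms):

```python
# Sketch: brute-force bottleneck distance for tiny diagrams.
# Each diagram point is a (birth, death) pair; unmatched points
# pay the L-infinity distance to the diagonal. Illustrative names.
from itertools import permutations

def bottleneck(d1, d2):
    def diag_cost(p):                  # L-inf distance to the diagonal
        return (p[1] - p[0]) / 2
    # augment each diagram with diagonal slots for the other's points
    a = list(d1) + [None] * len(d2)
    b = list(d2) + [None] * len(d1)
    best = float("inf")
    for perm in permutations(range(len(b))):
        worst = 0.0
        for i, j in enumerate(perm):
            p, q = a[i], b[j]
            if p is None and q is None:
                c = 0.0                # diagonal matched to diagonal
            elif p is None:
                c = diag_cost(q)
            elif q is None:
                c = diag_cost(p)
            else:                      # L-inf distance between points
                c = max(abs(p[0] - q[0]), abs(p[1] - q[1]))
            worst = max(worst, c)
        best = min(best, worst)
    return best
```

Matching (0, 2) to (0, 2.5) costs 0.5, which beats sending both points to the diagonal, so the bottleneck distance between those one-point diagrams is 0.5.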
Widely used software implementations include libraries developed at the University of Illinois at Urbana–Champaign and Duke University, commercial and open-source projects affiliated with Google Research and Microsoft Research, and community tools maintained by groups at the University of Washington, the Max Planck Institute for Mathematics in the Sciences, and Imperial College London. Notable packages have been integrated into ecosystems supported by the Python Software Foundation and the R Project for Statistical Computing, with contributions from developers via GitHub repositories and computational platforms at Amazon Web Services and Google Cloud Platform. Training workshops and tutorials have been offered at meetings of the Society for Industrial and Applied Mathematics, the Association for Computing Machinery, and the International Society for Computational Biology.