LLMpedia
The first transparent, open encyclopedia generated by LLMs

Simmons-Duffin

Generated by GPT-5-mini
Note: This article was automatically generated by a large language model (LLM) from purely parametric knowledge (no retrieval). It may contain inaccuracies or hallucinations. This encyclopedia is part of a research project currently under review.
Article Genealogy
Expansion Funnel: Raw 51 → Dedup 0 → NER 0 → Enqueued 0
1. Extracted: 51
2. After dedup: 0 (None)
3. After NER: 0
4. Enqueued: 0
Simmons-Duffin
Name: Simmons-Duffin
Fields: Physics; Applied Mathematics; Engineering
Known for: Conformal bootstrap techniques; numerical spectral analysis; bounds methods
Notable works: Simmons-Duffin 2015; numerical bootstrap reviews

Simmons-Duffin

Simmons-Duffin is a methodological framework and body of work in theoretical and computational physics associated with the development of numerical conformal bootstrap techniques and functional-analytic bounds. Originating in the mid-2010s, it synthesized ideas from conformal field theory, spectral theory, and optimization to produce rigorous constraints on operator dimensions and correlation functions in quantum field theories. The approach has influenced research across high-energy physics, statistical mechanics, and condensed matter physics, interfacing with lattice gauge theory, AdS/CFT correspondence, and critical phenomena.

History

The framework grew out of earlier efforts in the conformal bootstrap program, pioneered during the revival of nonperturbative methods by figures such as Alexander Polyakov, Sergio Ferrara, and Gerhard Mack, and revived through numerical implementations influenced by Rattazzi et al. and Sheer El-Showk. The specific techniques attributed to the name emerged from work building on the functional-analytic perspectives found in papers by Slava Rychkov and on computational advances from solvers used in research groups at Columbia University and Princeton University. Early computational proofs of concept were connected to studies of the Ising model critical exponents, the O(N) model, and constraints relevant to N=4 supersymmetric Yang–Mills theory and the 3D Ising critical point.

Principles and Mechanics

At its core, the method leverages the axioms of local conformal invariance, encoded in crossing symmetry and unitarity, combined with convex optimization and semidefinite programming techniques developed in numerical analysis and operations research. It constructs linear functionals acting on conformal block decompositions (objects studied by Francis Dolan and Hugh Osborn) to derive rigorous bounds on scaling dimensions and operator product expansion coefficients. Implementations employ conformal blocks for scalar, spinning, and supersymmetric representations, computed using recursion relations related to work by Matthijs Hogervorst and Filip Kos. Numerical stability and spectral gap extraction rely on high-precision arithmetic and eigenvalue solvers from the numerical linear algebra tradition, traced to LAPACK and to iterative methods such as the Arnoldi and Lanczos algorithms.
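The exclusion logic behind these functional bounds can be illustrated with a toy linear program. This is a minimal sketch, not the production pipeline: real computations use conformal blocks and semidefinite solvers such as SDPB, whereas here the "crossing vectors" F(Δ) are invented two-component stand-ins chosen only to make the feasibility problem nontrivial. The structure of the argument, however, is the genuine one: if a linear functional α exists with α(F_identity) = 1 and α(F_Δ) ≥ 0 for every Δ at or above a trial gap, then a crossing equation with positive coefficients cannot hold, and that gap is ruled out.

```python
# Toy sketch of a bootstrap-style exclusion bound via linear programming.
# The "crossing vectors" F(delta) below are stand-ins, NOT real conformal
# blocks; only the logical structure of the feasibility problem is real.
import numpy as np
from scipy.optimize import linprog


def F(delta):
    """Stand-in crossing vector (hypothetical, for illustration only)."""
    return np.array([np.cos(delta), np.sin(delta)])


F_identity = F(0.0)  # contribution of the identity operator, here (1, 0)


def gap_excluded(gap, delta_max=5.0, n_grid=200):
    """Trial gap is excluded if a valid linear functional alpha exists."""
    deltas = np.linspace(gap, delta_max, n_grid)
    # Require alpha . F(delta) >= 0, i.e. -F(delta) . alpha <= 0.
    A_ub = -np.array([F(d) for d in deltas])
    b_ub = np.zeros(n_grid)
    res = linprog(
        c=np.zeros(2),                     # pure feasibility problem
        A_ub=A_ub, b_ub=b_ub,
        A_eq=[F_identity], b_eq=[1.0],     # normalization alpha(F_id) = 1
        bounds=[(None, None), (None, None)],
    )
    return res.success                     # functional found => gap excluded
```

Scanning gap_excluded over trial gaps brackets a toy "bound" for these stand-in vectors: small gaps are allowed (no functional exists), large gaps are excluded. In actual bootstrap studies the same scan, run with true conformal blocks and semidefinite programming, yields the rigorous operator-dimension bounds described above.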

Applications in Science and Engineering

The method has been applied to problems in theoretical particle physics such as constraining operator spectra in theories relevant to the AdS/CFT correspondence, informing holographic model-building in contexts influenced by Juan Maldacena and Edward Witten. In statistical physics it tightened estimates for critical exponents in models like the Ising model, XY model, and O(N) model, complementing Monte Carlo studies associated with Ulli Wolff and lattice field theory efforts at institutions such as CERN and Brookhaven National Laboratory. Condensed-matter applications include constraints on quantum critical points relevant to work on Kondo effect variants and entanglement properties linked to studies by Pasquale Calabrese. Engineering-adjacent numerical techniques informed by this framework have cross-pollinated with optimization practices in computational electromagnetics and signal processing communities tied to research from MIT and Stanford University.

Notable Experiments and Results

Key numerical milestones include precise determinations of scaling dimensions at the three-dimensional Ising fixed point, competitive with results from Monte Carlo methods and high-order epsilon expansion computations by groups connected to Jean Zinn-Justin. Other landmarks are bounds on operator dimensions in supersymmetric theories that validated perturbative and holographic expectations from Nathan Seiberg and in some cases ruled out proposed spectra motivated by model-building in beyond-Standard-Model physics. Collaborative studies combining bootstrap constraints with lattice results from teams at University of Oxford and University of Cambridge produced cross-validated estimates for universal quantities in critical phenomena. Software toolchains implementing these calculations often cite influences from open-source numerical libraries originating at Netlib and solver ecosystems developed in academic consortia.

Criticisms and Limitations

Critiques focus on computational scalability and on the difficulty of extending rigorous bounds to theories with large internal symmetry groups or intricate operator mixing, issues also faced in the large-N expansions pioneered by Edward Witten and Gerard ’t Hooft. The need for high-precision arithmetic and large semidefinite programs makes some problems intractable without the computational resources available at centers like NERSC or other national supercomputing facilities. Interpretational limits arise when relating bounds to dynamical mechanisms in nonconformal settings studied by Leo Kadanoff, and when attempting to incorporate the nonunitary theories relevant to certain statistical models investigated by John Cardy.

Related Approaches and Legacy

Related approaches include the analytic bootstrap, tracing to Mellin-transform techniques used by researchers such as João Penedones; the Lorentzian inversion formula introduced by Simon Caron-Huot; and modular bootstrap strategies in two-dimensional conformal field theory, influenced by John Cardy and studies of modular invariance. Extensions incorporate supersymmetry representations drawing on classification work by Werner Nahm, and applications to scattering amplitudes connected to the legacy of the S-matrix program, including work by Richard Eden. Numerical and algorithmic synergies continue with convex optimization research at institutions like Princeton University and with software advances in semidefinite programming developed by the communities around SDPA and MOSEK.
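The Lorentzian inversion formula mentioned above can be written schematically as follows (a hedged sketch: normalization conventions and the explicit measure μ vary between references, but the structural features — the dimension–spin swap in the block and the double discontinuity — are standard):

```latex
c(\Delta, J) \;\sim\; \kappa_{\Delta+J} \int_0^1 \! dz\, d\bar z \;
  \mu(z, \bar z)\, G_{J+d-1,\; \Delta-d+1}(z, \bar z)\;
  \mathrm{dDisc}\!\left[\mathcal{G}(z, \bar z)\right]
```

Here c(Δ, J) is an OPE function whose poles in Δ encode the operator spectrum, G is a conformal block evaluated with dimension and spin exchanged, and dDisc is the double discontinuity of the correlator, which is positive in unitary theories; this positivity underlies the formula's compatibility with the numerical bounds discussed earlier.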

Category:Theoretical physics