LLMpedia: The first transparent, open encyclopedia generated by LLMs

Dimensional regularization

Generated by GPT-5-mini
Note: This article was automatically generated by a large language model (LLM) from purely parametric knowledge (no retrieval). It may contain inaccuracies or hallucinations. This encyclopedia is part of a research project currently under review.
Article Genealogy
Parent: Yukawa coupling (hop 4)
Expansion Funnel: Raw 94 → Dedup 24 → NER 18 → Enqueued 13
1. Extracted: 94
2. After dedup: 24
3. After NER: 18 (rejected: 6, all as non-named-entities)
4. Enqueued: 13 (similarity rejected: 5)
Dimensional regularization
Name: Dimensional regularization
Field: Theoretical physics
Introduced: 1970s
Inventors: 't Hooft, Veltman, Bollini, Giambiagi

Dimensional regularization is a technique in theoretical physics used to handle divergent integrals arising in perturbative expansions by analytically continuing spacetime dimension. It provides a systematic way to render ultraviolet and infrared divergences finite by treating the number of dimensions as a complex parameter, enabling renormalization in gauge theories and other quantum field theories.

Introduction

Dimensional regularization was developed to control divergences in perturbative calculations in quantum electrodynamics, quantum chromodynamics, and electroweak theory, and it is widely used across the Standard Model, perturbative string theory, and effective field theories. The pioneering contributors were Gerard 't Hooft and Martinus Veltman and, independently, Carlos Bollini and Juan Giambiagi; the technique was subsequently adopted by communities around CERN, Princeton University, MIT, and Stanford University. The method builds on foundational results of Paul Dirac, Richard Feynman, and Julian Schwinger, and draws on mathematical techniques associated with Henri Poincaré, Bernhard Riemann, and Laurent Schwartz.

Mathematical formulation

The procedure replaces integrals over four-dimensional momentum space with integrals over d = 4 − ε (or, in another common convention, d = 4 − 2ε) complex dimensions. Loop integrals are combined using the Feynman parameterization introduced by Richard Feynman and evaluated in terms of the Gamma function studied by Leonhard Euler and Adrien-Marie Legendre. The analytic continuation in d rests on classical complex analysis in the tradition of Bernhard Riemann, and its distributional aspects connect to the theory of distributions developed by Laurent Schwartz. Regularized integrals are expanded around ε = 0 in Laurent series whose simple and higher-order poles encode the divergences removed by renormalization and feed into the renormalization group flows described by Kenneth Wilson. The algebraic handling of gamma-matrix traces uses Dirac algebra identities, continued to d dimensions, while tensor integrals are reduced to scalar ones by projection techniques such as the Passarino–Veltman reduction.
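The continuation just described rests on a single master formula for the Euclidean one-loop integral, stated here in the d = 4 − 2ε convention with Δ denoting the usual Feynman-parameter mass combination:

```latex
\int \frac{d^{d}\ell}{(2\pi)^{d}}\,
\frac{1}{\left(\ell^{2}+\Delta\right)^{n}}
= \frac{1}{(4\pi)^{d/2}}\,
  \frac{\Gamma\!\left(n-\tfrac{d}{2}\right)}{\Gamma(n)}\,
  \Delta^{\,d/2-n},
\qquad
\Gamma(\epsilon)=\frac{1}{\epsilon}-\gamma_{E}+\mathcal{O}(\epsilon).
```

For n = 2 and d = 4 − 2ε the Gamma function Γ(n − d/2) = Γ(ε) supplies the simple pole in ε whose Laurent expansion is the object of the subtraction schemes described in the renormalization section.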

Applications in quantum field theory

Dimensional regularization is standard in calculations of radiative corrections in quantum electrodynamics, quantum chromodynamics, and the electroweak sector of the Standard Model of particle physics. It underlies precision predictions tested at the CERN Large Hadron Collider, the SLAC National Accelerator Laboratory, and the Fermi National Accelerator Laboratory, and it has guided analyses by collaborations such as ATLAS, CMS, and ALEPH. The method is integral to the perturbative programme associated with Steven Weinberg, Sheldon Glashow, Abdus Salam, and Peter Higgs, and is used in computations of beta functions, anomalous dimensions, and cross sections compared against measurements at LEP and the Tevatron. It also appears in the effective field theory approaches developed by Howard Georgi and Weinberg, and in matching computations for chiral perturbation theory.
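As a concrete illustration of how an ε-pole emerges in such a radiative correction, the one-loop scalar bubble in d = 4 − 2ε Euclidean dimensions, I(ε) = Γ(ε) Δ^(−ε) / (4π)^(2−ε), can be Laurent-expanded symbolically; a minimal sketch using SymPy (variable names are illustrative):

```python
# Laurent expansion in epsilon of the one-loop scalar bubble
#   I(eps) = Gamma(eps) * Delta**(-eps) / (4*pi)**(2 - eps),
# the standard d = 4 - 2*eps result for the integral of 1/(l^2 + Delta)^2.
from sympy import symbols, gamma, pi, log, EulerGamma

eps, Delta = symbols('epsilon Delta', positive=True)

I = gamma(eps) * Delta**(-eps) / (4*pi)**(2 - eps)

# Expand around eps = 0 and read off the pole and the finite part.
expansion = I.series(eps, 0, 1).removeO().expand()
pole = expansion.coeff(eps, -1)    # residue of the 1/eps pole
finite = expansion.coeff(eps, 0)   # finite remainder: (-gamma_E - ln Delta + ln 4*pi)/(16*pi^2)

print(pole)    # 1/(16*pi**2)
print(finite)
```

The finite part exhibits exactly the −γ_E + ln 4π constants that the MS-bar scheme absorbs along with the pole.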

Renormalization and poles in ε

Renormalization procedures using dimensional regularization isolate divergences as poles in ε that correspond to the logarithmic and power-law divergences identified in the renormalization program of the Pauli era and later formalized by Gerard 't Hooft and Kenneth Wilson. Subtraction schemes such as Minimal Subtraction (MS), introduced by 't Hooft, and Modified Minimal Subtraction (MS-bar), due to William Bardeen, Andrzej Buras, David Duke, and Taizo Muta, remove these poles and define renormalized couplings; the resulting renormalization group equations build on the subtraction formalism of Nikolay Bogoliubov and on the scaling analyses of Curtis Callan and Kurt Symanzik. Anomalies, such as the chiral anomaly studied by Stephen L. Adler, John S. Bell, and Roman Jackiw, require careful treatment of γ5 under dimensional continuation.
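The MS and MS-bar prescriptions can be stated compactly: with d = 4 − 2ε and renormalization scale μ, MS subtracts the bare 1/ε pole, while MS-bar also absorbs the universal constants that always accompany it,

```latex
\frac{1}{\bar\epsilon} \;\equiv\; \frac{1}{\epsilon}-\gamma_{E}+\ln 4\pi,
\qquad
\mu^{2\epsilon}\!\int\!\frac{d^{d}\ell}{(2\pi)^{d}}\,
\frac{1}{\left(\ell^{2}+\Delta\right)^{2}}
= \frac{1}{16\pi^{2}}\left[\frac{1}{\bar\epsilon}
  + \ln\frac{\mu^{2}}{\Delta}\right] + \mathcal{O}(\epsilon),
```

so in MS-bar the counterterm removes 1/ε̄ and the renormalized result is the logarithm of μ²/Δ alone.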

Variants and extensions

Extensions include dimensional reduction, introduced by Warren Siegel to preserve supersymmetry in regularized computations and applied in studies by Pierre Fayet, Edward Witten, and Luis Álvarez-Gaumé, while analytic regularization and the Pauli–Villars regularization of Wolfgang Pauli and Felix Villars remain alternatives. Techniques mixing dimensional continuation with lattice methods relate to the lattice gauge theory program of Kenneth Wilson and to numerical work at Brookhaven National Laboratory and RIKEN. Modern amplitude methods combine dimensional regularization with the unitarity cuts pioneered by Zvi Bern, Lance Dixon, and David Kosower; bootstrap-style approaches connect to developments by Nima Arkani-Hamed and to the amplituhedron program related to the work of Andrew Hodges.

Examples and computations

Concrete examples include the one-loop vacuum polarization in quantum electrodynamics, vertex corrections in quantum chromodynamics, and self-energy computations for the Higgs boson within the Standard Model; these calculations are central to precision electroweak fits by groups at CERN, SLAC, and Fermilab. Multi-loop computations up to three or four loops employ symbolic algebra systems, using algorithms influenced by 't Hooft's diagrammatic methods and techniques codified by Vladimir Smirnov and Konstantin Chetyrkin. Dimensional-regularization integrals are evaluated using the integration-by-parts identities introduced by Chetyrkin and Tkachov, together with reduction techniques implemented in packages such as FORM, developed by Jos Vermaseren.
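The integration-by-parts identities mentioned above can be checked directly on the one-loop massive tadpole, whose closed form I(n) = Γ(n − d/2) (m²)^(d/2−n) / ((4π)^(d/2) Γ(n)) is known; integrating the total derivative ∂/∂k_μ [k^μ / (k² + m²)^n] over d-dimensional k gives the IBP relation I(n+1) = (2n − d) I(n) / (2n m²). A sketch verifying this symbolically (function and variable names are illustrative):

```python
# Verify the one-loop tadpole IBP relation
#   I(n+1) = (2n - d) / (2*n*m^2) * I(n)
# against the closed-form d-dimensional Euclidean tadpole integral.
from sympy import symbols, gamma, pi, gammasimp, simplify

d, n, m2 = symbols('d n m2', positive=True)

def tadpole(power):
    """Closed form of the d-dimensional Euclidean tadpole with (k^2 + m^2)^power."""
    return gamma(power - d/2) * m2**(d/2 - power) / ((4*pi)**(d/2) * gamma(power))

lhs = tadpole(n + 1)                       # I(n+1) from the closed form
rhs = (2*n - d) / (2*n*m2) * tadpole(n)    # I(n+1) predicted by the IBP identity

# gammasimp collapses the Gamma-function ratios shifted by one unit;
# the ratio reduces to 1, confirming the identity for generic d, n, m^2.
ratio = simplify(gammasimp(lhs / rhs))
print(ratio)
```

At higher loop order the same class of identities, applied systematically, reduces thousands of integrals to a small basis of master integrals.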

Historical development and reception

The technique emerged in the early 1970s from parallel work by Carlos Bollini and Juan Giambiagi in Argentina and by Gerard 't Hooft and Martinus Veltman in the Netherlands, generating discussion across the European Organization for Nuclear Research community and the American Physical Society readership. The approach gained rapid acceptance through its effectiveness in taming divergences in non-Abelian gauge theories and its compatibility with standard renormalization schemes, as analyzed by Steven Weinberg; it has been incorporated into standard textbooks such as Peskin and Schroeder and into review articles in Reviews of Modern Physics and Physics Reports. Some debates concerned the treatment of γ5 and chiral anomalies, engaging researchers including John Collins, Peter Breitenlohner, and Dieter Maison, but the method remains a cornerstone of perturbative quantum field theory.

Category:Quantum field theory