LLMpedia: The first transparent, open encyclopedia generated by LLMs

Passarino–Veltman reduction

Generated by DeepSeek V3.2
Note: This article was automatically generated by a large language model (LLM) from purely parametric knowledge (no retrieval). It may contain inaccuracies or hallucinations. This encyclopedia is part of a research project currently under review.
Passarino–Veltman reduction
Name: Passarino–Veltman reduction
Field: Quantum field theory
Inventors: Giampiero Passarino, Martinus Veltman
Year: 1979
Related concepts: Feynman diagram, One-loop amplitude, Tensor integral

In quantum field theory, the Passarino–Veltman reduction is a systematic algebraic method for decomposing one-loop Feynman diagram amplitudes into a basis of known scalar integrals. Developed by Giampiero Passarino and Martinus Veltman in 1979, the technique transforms complicated tensor integrals, which contain loop momenta in the numerator, into linear combinations of simpler integrals with only propagators in the denominator. This reduction is fundamental for performing perturbative calculations in theories like the Standard Model, enabling the extraction of finite, gauge-invariant results for scattering amplitudes and decay widths.

Introduction and definition

The necessity for the Passarino–Veltman reduction arose from computational challenges in quantum electrodynamics and, later, quantum chromodynamics, where evaluating one-loop amplitudes directly was prohibitively complex. Prior techniques often relied on cumbersome Feynman parameter integrations or suffered from ambiguities related to gauge invariance. The method provides a clear algorithm to express any one-loop tensor integral in terms of a standard set of scalar integrals: the tadpole (A0), bubble (B0), triangle (C0), and box (D0) functions, corresponding to one, two, three, and four internal propagators. This decomposition is crucial for efficient computation in high-energy physics experiments, such as those at the Large Hadron Collider at CERN.
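In one common normalization (conventions for the prefactor and metric signature vary across the literature), the first two members of this scalar basis can be written as:

```latex
A_0(m_0) = \frac{(2\pi\mu)^{4-D}}{i\pi^2} \int \mathrm{d}^D\ell \,
  \frac{1}{\ell^2 - m_0^2},
\qquad
B_0(p^2; m_0, m_1) = \frac{(2\pi\mu)^{4-D}}{i\pi^2} \int \mathrm{d}^D\ell \,
  \frac{1}{(\ell^2 - m_0^2)\,\big((\ell+p)^2 - m_1^2\big)},
```

with the triangle C0 and box D0 defined analogously with three and four propagator factors; D is the dimensional-regularization parameter and μ the associated mass scale.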

Mathematical formulation

The formalism begins by considering a general one-loop tensor integral of rank *R*, an integral over the loop momentum with *R* factors of that momentum in the numerator and *N* propagators in the denominator. By Lorentz covariance, the tensor integral can be expressed as a sum of all possible tensors constructed from the available external momenta and the Minkowski metric tensor, with coefficients (form factors) that are scalar functions of the Lorentz-invariant kinematic variables and the internal masses. These coefficients are then systematically solved for by contracting both sides of the equation with the external momenta and with the metric; each contraction rewrites factors of the loop momentum as differences of inverse propagators, yielding a linear system of equations for the form factors.
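The simplest nontrivial case, the rank-one two-point (bubble) integral, illustrates the whole mechanism. With propagator denominators $D_0 = \ell^2 - m_0^2$ and $D_1 = (\ell+p)^2 - m_1^2$, covariance forces the decomposition onto the single external momentum:

```latex
B^{\mu}(p; m_0, m_1) = \frac{1}{i\pi^2} \int \mathrm{d}^D\ell \,
  \frac{\ell^{\mu}}{D_0 D_1} = p^{\mu} B_1 .
```

Contracting with $p_\mu$ and using the identity $2\,p\cdot\ell = D_1 - D_0 - (p^2 + m_0^2 - m_1^2)$ to cancel propagators yields the form factor in terms of scalar integrals:

```latex
p^2 B_1 = \tfrac{1}{2}\Big[ A_0(m_0) - A_0(m_1)
  - \big(p^2 + m_0^2 - m_1^2\big)\, B_0(p^2; m_0, m_1) \Big].
```

Higher-point, higher-rank integrals follow the same pattern, except that the contractions produce a genuine linear system governed by the Gram matrix of the external momenta.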

Algorithmic procedure

The algorithmic procedure involves several distinct steps. First, the tensor integral is written in its most general covariant form. Next, one contracts the integral with each of the independent external momenta and also with the metric tensor. This contraction produces a set of linear equations where the unknowns are the scalar coefficient functions. Solving this linear system, typically via Cramer's rule or matrix inversion, yields the coefficients explicitly in terms of the simpler scalar integrals. The procedure is implemented in many computer algebra system packages, such as FeynCalc and FORM, and forms the backbone of automated one-loop computation tools like MadGraph and FeynArts.
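The linear-algebra core of the procedure can be sketched in a few lines. The snippet below is a minimal illustration, not any package's actual API: it assumes the contractions of a rank-one tensor integral with the external momenta have already been rewritten in terms of scalar integrals (here supplied as plain numbers), and solves the resulting Gram-matrix system for the form factors.

```python
import numpy as np

def minkowski_dot(p, q):
    """Four-vector dot product with metric signature (+, -, -, -)."""
    return p[0] * q[0] - np.dot(p[1:], q[1:])

def reduce_rank1(momenta, contractions):
    """Solve the Passarino-Veltman linear system for a rank-1 integral.

    Covariance gives C^mu = sum_i p_i^mu C_i, so contracting with each
    external momentum p_j yields  p_j . C = sum_i G_ji C_i  with Gram
    matrix G_ji = p_j . p_i.  `contractions` holds the values p_j . C;
    in a real calculation these come from cancelling propagators, which
    expresses them through lower-point scalar integrals.
    """
    gram = np.array([[minkowski_dot(pj, pi) for pi in momenta]
                     for pj in momenta])
    return np.linalg.solve(gram, np.asarray(contractions))

# Synthetic check: choose form factors, build the contractions from the
# Gram matrix, and verify the reduction recovers them.
p1 = np.array([1.0, 0.0, 0.0, 0.9])
p2 = np.array([1.0, 0.0, 0.4, -0.8])
true_coeffs = np.array([0.7, -1.3])          # stand-ins for C_1, C_2
gram = np.array([[minkowski_dot(a, b) for b in (p1, p2)] for a in (p1, p2)])
rhs = gram @ true_coeffs
print(reduce_rank1([p1, p2], rhs))           # recovers [0.7, -1.3]
```

The same Gram-matrix inversion underlies higher ranks, which is also why the method degrades numerically near exceptional phase-space points where the Gram determinant vanishes.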

Applications in particle physics

The reduction technique has been extensively applied in precision calculations for collider physics. It was instrumental in computing the radiative corrections to the W boson and Z boson masses, a critical test of the electroweak theory developed by Steven Weinberg, Abdus Salam, and Sheldon Glashow. It is also vital for calculating next-to-leading order (NLO) predictions for processes like Drell–Yan production, Higgs boson production via gluon fusion, and top quark pair production at the Tevatron and the Large Hadron Collider. These precise calculations allow for stringent tests of the Standard Model and searches for new physics.

Generalizations and related methods

While the original Passarino–Veltman reduction applies to one-loop integrals with standard propagators, several generalizations have been developed. The Ossola–Papadopoulos–Pittau (OPP) reduction method provides a numerically efficient, integrand-level approach for one-loop amplitudes. For integrals involving massive particles or higher loops, techniques like integration-by-parts identities, used in conjunction with the Laporta algorithm, and differential equation methods have become standard. Furthermore, the development of unitarity cut methods, pioneered by Ruth Britto, Freddy Cachazo, and Bo Feng, offers a complementary, on-shell approach to amplitude calculation that bypasses traditional Feynman diagram expansions.

Category:Quantum field theory
Category:Perturbation theory
Category:Theoretical physics