LLMpedia: The first transparent, open encyclopedia generated by LLMs

Lattice QCD

Generated by GPT-5-mini
Note: This article was automatically generated by a large language model (LLM) from purely parametric knowledge (no retrieval). It may contain inaccuracies or hallucinations. This encyclopedia is part of a research project currently under review.
Article Genealogy
Parent: quark–gluon plasma (Hop 5)
Expansion Funnel: Raw 82 → Dedup 0 → NER 0 → Enqueued 0
Lattice QCD
Name: Lattice Quantum Chromodynamics
Field: Quantum chromodynamics; Theoretical physics; Computational physics
Introduced: 1974
Founder: Kenneth G. Wilson

Lattice QCD

Lattice QCD is a non-perturbative, computational framework for studying Quantum Chromodynamics on a discretized spacetime grid. It enables ab initio calculations of hadronic spectra, matrix elements, and thermodynamic properties, building on Kenneth G. Wilson's lattice formulation, path-integral quantization with roots in work by Paul Dirac, and numerical methods developed at laboratories such as Brookhaven National Laboratory, CERN, and Fermilab. Prominent collaborations such as MILC, RBC, HPQCD, ETM, and JLQCD drive large-scale calculations on supercomputers like Fugaku and Summit and on distributed computing grids.

Introduction

The formulation maps Quantum Chromodynamics, the theory of quarks and gluons, onto a finite lattice to render path integrals amenable to Monte Carlo evaluation, combining Kenneth G. Wilson's lattice formulation with importance-sampling Markov chain Monte Carlo techniques descending from Nicholas Metropolis and algorithmic advances by Michael Creutz. Core goals include predicting the masses of the proton, neutron, pion, kaon, and heavy-flavor hadrons such as charm and bottom quark bound states, and computing weak matrix elements relevant to CKM matrix determinations and tests of Standard Model symmetries at experiments like the LHC and Belle II.
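The importance-sampling idea above can be illustrated with a minimal Metropolis sketch. This toy example updates a one-dimensional free scalar field, not the SU(3) link variables of real lattice QCD; the `local_action` function and step size are illustrative assumptions, showing only the accept/reject rule with weight exp(-ΔS).

```python
import math
import random

def local_action(phi, i, value):
    """Local action of site i for a free scalar with unit mass:
    nearest-neighbor kinetic terms plus a mass term (periodic boundary)."""
    n = len(phi)
    left, right = phi[(i - 1) % n], phi[(i + 1) % n]
    return 0.5 * ((value - left) ** 2 + (right - value) ** 2) + 0.5 * value ** 2

def metropolis_sample(action, n_sites, n_sweeps, step=0.5, seed=0):
    """Metropolis sampling of a toy 1-D lattice field (illustrative only;
    real lattice QCD samples gauge configurations of SU(3) matrices)."""
    rng = random.Random(seed)
    phi = [0.0] * n_sites
    for _ in range(n_sweeps):
        for i in range(n_sites):
            old = phi[i]
            new = old + rng.uniform(-step, step)
            dS = action(phi, i, new) - action(phi, i, old)
            # Accept with probability min(1, exp(-dS)): importance sampling
            # with respect to the Euclidean Boltzmann weight exp(-S).
            if dS <= 0 or rng.random() < math.exp(-dS):
                phi[i] = new
    return phi
```

The same accept/reject structure underlies production algorithms such as Hybrid Monte Carlo, which replaces the local random proposal with a molecular-dynamics trajectory over the whole configuration.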

Formulation and Lattice Discretization

The lattice formulation defines gauge fields as link variables on a hypercubic lattice in Euclidean time, following Wilson's gauge action and alternatives such as the Symanzik-improved actions developed by Kurt Symanzik and refined by Martin Lüscher and Peter Weisz. Lattice fermions confront the fermion doubling problem, addressed by formulations such as Wilson fermions (associated historically with Kenneth G. Wilson), staggered (Kogut–Susskind) fermions tied to John Kogut and Leonard Susskind, domain-wall fermions linked to David Kaplan, overlap fermions arising from Herbert Neuberger, and twisted-mass fermions used by the ETM Collaboration. Gauge fixing relies on the Faddeev–Popov procedure, with techniques refined at CERN and Brookhaven National Laboratory. Observables are constructed from correlation functions whose asymptotic behavior yields hadron masses used by experimental programs at Jefferson Lab and Brookhaven National Laboratory.
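The last point, extracting masses from the asymptotic behavior of correlation functions, can be sketched concretely. For large Euclidean time t, a two-point correlator behaves as C(t) ≈ A·exp(-m·t), so the "effective mass" ln(C(t)/C(t+1)) plateaus at the ground-state mass in lattice units. The function below is a minimal illustration of that standard construction; the synthetic correlator in the usage note is an assumption for demonstration.

```python
import math

def effective_mass(corr):
    """Effective mass from a Euclidean two-point correlator C(t).
    For large t, C(t) ~ A * exp(-m t), so ln(C(t) / C(t+1)) plateaus
    at the ground-state hadron mass m in lattice units."""
    return [math.log(corr[t] / corr[t + 1]) for t in range(len(corr) - 1)]
```

For a noiseless synthetic correlator `[2.0 * math.exp(-0.5 * t) for t in range(8)]`, every entry of the effective mass equals 0.5; in real analyses one fits the plateau region and must also handle excited-state contamination and finite-time boundary effects.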

Numerical Methods and Algorithms

Practical calculations exploit Markov chain Monte Carlo, Hybrid Monte Carlo introduced by Simon Duane and collaborators, adaptive multigrid solvers, and Krylov-subspace linear solvers such as conjugate gradient and BiCGStab used in projects at Oak Ridge National Laboratory and Argonne National Laboratory. Deflation and low-mode acceleration techniques trace intellectual links to Martin Lüscher and collaborations at CERN and DESY. Algorithmic developments like Rational Hybrid Monte Carlo have provenance connected to work at Brookhaven National Laboratory and IBM supercomputing centers. Autocorrelation, ergodicity concerns, and topological freezing are active research themes pursued by teams at RIKEN, the University of Tokyo, and MIT.
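The conjugate gradient solver mentioned above is the workhorse for inverting the lattice Dirac operator (applied to the symmetric positive-definite combination D†D). A minimal textbook sketch, using plain Python lists in place of the distributed, preconditioned vectors of a production code:

```python
def conjugate_gradient(matvec, b, tol=1e-10, max_iter=1000):
    """Plain conjugate gradient for A x = b with A symmetric positive
    definite, as used (with A = D^dagger D) to invert lattice Dirac
    operators. `matvec` applies A to a vector."""
    n = len(b)
    x = [0.0] * n
    r = b[:]               # residual r = b - A x, with x = 0 initially
    p = r[:]               # initial search direction
    rs_old = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = matvec(p)
        alpha = rs_old / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol * tol:      # converged: residual norm below tol
            break
        p = [ri + (rs_new / rs_old) * pi for ri, pi in zip(r, p)]
        rs_old = rs_new
    return x
```

Because the iteration count grows as the quark mass decreases (the Dirac operator becomes ill-conditioned), production codes layer the deflation, low-mode, and multigrid accelerations discussed above on top of this basic scheme.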

Physical Results and Applications

Lattice calculations provide precise determinations of hadron masses (e.g., the proton, neutron, and pion), decay constants relevant to Belle II and LHCb phenomenology, and weak matrix elements informing CKM matrix unitarity tests tied to kaon physics and B meson processes studied at BaBar and Belle. Thermodynamic studies of the quark–gluon plasma interface with heavy-ion programs at RHIC and ALICE, and predictions for the QCD phase diagram connect to experiments at CERN and Brookhaven National Laboratory. Inputs to searches for physics beyond the Standard Model, including hadronic contributions to the muon g-2 anomaly measured by the Fermilab Muon g-2 experiment, depend critically on lattice determinations of hadronic vacuum polarization and light-by-light scattering amplitudes.

Systematic Errors and Continuum Extrapolation

Controlled extrapolations address finite lattice spacing, finite volume effects, and unphysical quark masses by taking the continuum limit and using chiral perturbation theory frameworks linked to Steven Weinberg and effective field theory approaches influential at institutions like the University of Cambridge and Harvard University. Renormalization employs the nonperturbative Rome–Southampton (RI/MOM) method developed by groups in Rome and Southampton, perturbative matching builds on work at SLAC, and operator mixing requires insights developed at Caltech and Princeton University. Systematic uncertainties are scrutinized by global efforts including the Flavour Lattice Averaging Group (FLAG) and flagship lattice collaborations.
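The continuum limit described above is commonly taken by computing an observable at several lattice spacings a and fitting the leading discretization dependence, e.g. O(a) = c0 + c2·a² for an O(a)-improved action; the intercept c0 is the continuum estimate. A minimal least-squares sketch of that fit (the lattice-spacing values in the test are invented for illustration, and real analyses propagate statistical and systematic errors through the fit):

```python
def continuum_extrapolate(spacings, values):
    """Least-squares fit of O(a) = c0 + c2 * a^2, the leading Symanzik
    form for an O(a)-improved action. Returns (c0, c2); c0 is the
    continuum-limit estimate. Illustrative sketch without error analysis."""
    n = len(spacings)
    xs = [a * a for a in spacings]          # fit is linear in a^2
    sx, sxx = sum(xs), sum(x * x for x in xs)
    sy = sum(values)
    sxy = sum(x * y for x, y in zip(xs, values))
    det = n * sxx - sx * sx                 # normal-equation determinant
    c2 = (n * sxy - sx * sy) / det
    c0 = (sy - c2 * sx) / n
    return c0, c2
```

In practice this extrapolation is done jointly with chiral (quark-mass) and finite-volume extrapolations, and the spread over fit forms enters the systematic error budget scrutinized by FLAG-style averages.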

Computational Resources and Software

Large-scale computations run on supercomputers sponsored by agencies such as the DOE and NSF and at national centers like Oak Ridge National Laboratory and Argonne National Laboratory. Community software suites include implementations from the USQCD collaboration, the Chroma software developed at Jefferson Lab, the CPS (Columbia Physics System) code from Columbia University, and the Grid library targeting modern SIMD and GPU architectures. GPU acceleration leverages hardware from NVIDIA and software stacks maintained at Lawrence Livermore National Laboratory and RIKEN.

Historical Development and Future Directions

Origins trace to foundational proposals by Kenneth G. Wilson in the 1970s and numerical experiments by Michael Creutz and early groups at Cornell University and MIT. Subsequent progress followed from algorithmic and hardware revolutions linked to IBM and national supercomputing initiatives, and from collaborative networks across Europe and Japan such as CERN, DESY, and RIKEN BNL Research Center. Future directions emphasize exascale computing efforts coordinated with DOE and international centers, precision tests of Standard Model flavor physics for LHCb and Belle II, real-time dynamics with quantum simulation experiments at Google and IBM Quantum, and synergy with experimental programs at J-PARC and Jefferson Lab.

Category:Quantum chromodynamics