LLMpedia: The first transparent, open encyclopedia generated by LLMs

MMHT

Generated by GPT-5-mini
Note: This article was automatically generated by a large language model (LLM) from purely parametric knowledge (no retrieval). It may contain inaccuracies or hallucinations. This encyclopedia is part of a research project currently under review.
Article Genealogy
Parent: top quark (hop 4)
Expansion Funnel Raw 77 → Dedup 5 → NER 5 → Enqueued 3
MMHT
Name: MMHT

MMHT is a computational framework and model family developed for high-precision fitting of parton distribution functions (PDFs) and for theoretical predictions in perturbative quantum chromodynamics. The project combines perturbative inputs, experimental datasets, and numerical optimization to produce parameterizations used in particle physics phenomenology, collider physics, and global analyses. MMHT is widely used alongside other global fitting efforts and is cited in studies of hadron collider observables, electroweak measurements, and precision tests of the Standard Model.

History

The MMHT program emerged from phenomenology efforts at institutions such as CERN, Fermilab, SLAC National Accelerator Laboratory, DESY, and Brookhaven National Laboratory, following developments in global fits by collaborations such as CTEQ, NNPDF, and HERAPDF. Its early roots trace to global analyses performed in the 1990s by groups at the University of Oxford and the University of Cambridge that collaborated with authors associated with experiments including ATLAS and CMS and with facilities such as the Tevatron and HERA. Milestones in its evolution parallel improvements in perturbative calculations, exemplified by results from quantum chromodynamics workshops, higher-order computations presented at the Moriond conferences, and benchmarking efforts reported to bodies such as the Particle Data Group.

Over successive releases the MMHT methodology incorporated theoretical advances from teams at the Max Planck Institute for Physics, the Institute for Nuclear Theory, the California Institute of Technology, and Princeton University; added datasets from experiments at the Large Hadron Collider, the Tevatron, HERA, and fixed-target experiments such as COMPASS; and responded to theoretical developments reported in journals published by the American Physical Society, the Institute of Physics, and Elsevier. The project has been cited in global-fit comparisons at the Les Houches workshops and in reviews by advisory panels convened by the European Organization for Nuclear Research.

Design and Algorithm

MMHT employs parameterized functional forms for parton distribution functions constrained by perturbative inputs from deep inelastic scattering measurements, Drell–Yan production, prompt-photon data, and jet production cross sections measured by collaborations such as ZEUS, H1, CDF, D0, LHCb, ALICE, ATLAS, and CMS. The fitting engine uses minimization techniques related to methods popularized by groups at the Stanford Linear Accelerator Center and by statistical toolkits developed at CERN, such as ROOT. Regularization strategies draw on ideas from analyses by researchers at Imperial College London and the University of Edinburgh to stabilize extrapolations at small and large momentum fractions.
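The fitting loop can be illustrated with a toy sketch: an MSTW/MMHT-style input form x f(x) = A x^δ (1−x)^η (1 + ε√x + γx) at the starting scale, scored against pseudo-data with an uncorrelated chi-square. The `xf` and `chi2` helpers and all numerical values below are hypothetical illustrations, not MMHT's actual parameterization; real fits use more flexible forms and correlated uncertainties.

```python
import math

def xf(x, A, delta, eta, eps, gam):
    """Toy MSTW/MMHT-style input form at the starting scale:
    x f(x) = A * x**delta * (1-x)**eta * (1 + eps*sqrt(x) + gam*x).
    Illustrative only."""
    return A * x**delta * (1.0 - x)**eta * (1.0 + eps * math.sqrt(x) + gam * x)

def chi2(params, data):
    """Uncorrelated chi-square of the toy form against (x, value, error) points."""
    return sum(((v - xf(x, *params)) / err) ** 2 for x, v, err in data)

# Hypothetical pseudo-data points (x, measured x*f, uncertainty):
data = [(0.01, 0.35, 0.02), (0.1, 0.55, 0.03), (0.5, 0.18, 0.02)]
params = (1.0, 0.3, 3.0, 1.5, -0.5)  # hypothetical parameter values
quality = chi2(params, data)         # a minimizer would iterate on params
```

In a real fit the parameters would be driven to the chi-square minimum by a numerical minimizer, with eigenvector error sets derived from the Hessian at that minimum.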

The theoretical backbone integrates calculations at next-to-leading order and next-to-next-to-leading order, leveraging coefficient functions and splitting kernels computed within dimensional regularization, with techniques refined by collaborations involving MIT, Rutgers University, Yale University, and the University of Manchester. Heavy-flavor treatments reference schemes discussed in the literature by proponents of ACOT, SACOT, and other matched variable-flavor-number approaches used by teams at the University of Hamburg and LPNHE. Evolution is performed with the DGLAP equations, whose kernels have been cross-checked against implementations from the Les Houches Accord and against independent codes developed at Université Paris-Sud.
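At leading order the non-singlet DGLAP equation has a closed-form solution in Mellin-moment space, which gives a compact illustration of how evolution suppresses higher moments while conserving the first (valence-number) moment. The one-loop coupling, the Λ value, and the function names below are illustrative assumptions for this sketch, not MMHT's implementation:

```python
import math

def alpha_s(Q2, Lambda2=0.04, nf=5):
    """One-loop running coupling: alpha_s(Q^2) = 4*pi / (beta0 * ln(Q^2/Lambda^2)).
    Lambda^2 = 0.04 GeV^2 is an illustrative choice, not a fitted value."""
    beta0 = 11.0 - 2.0 * nf / 3.0
    return 4.0 * math.pi / (beta0 * math.log(Q2 / Lambda2))

def gamma_ns(N):
    """LO non-singlet anomalous dimension: Mellin moments of the P_qq
    splitting function in the alpha_s/(2*pi) convention, with CF = 4/3.
    gamma_ns(1) = 0 encodes quark-number conservation."""
    CF = 4.0 / 3.0
    S1 = sum(1.0 / k for k in range(1, N + 1))  # harmonic number
    return CF * (1.5 + 1.0 / (N * (N + 1)) - 2.0 * S1)

def evolve_moment(N, Q02, Q2, nf=5):
    """LO evolution factor for the N-th non-singlet moment:
    q(N,Q^2)/q(N,Q0^2) = [alpha_s(Q0^2)/alpha_s(Q^2)]**(2*gamma_ns(N)/beta0)."""
    beta0 = 11.0 - 2.0 * nf / 3.0
    ratio = alpha_s(Q02, nf=nf) / alpha_s(Q2, nf=nf)
    return ratio ** (2.0 * gamma_ns(N) / beta0)
```

Evolving upward in Q², the N = 1 moment stays fixed while moments with N ≥ 2 shrink, reflecting momentum being radiated into gluons; full singlet evolution couples quarks and gluons and is solved numerically in x-space in production codes.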

Applications

MMHT parameterizations are used to predict cross sections and kinematic distributions for processes measured by ATLAS, CMS, LHCb, and legacy experiments at the Tevatron. Phenomenological studies employing MMHT inform theoretical interpretations of measurements such as Higgs boson production studied by ATLAS and CMS, electroweak precision fits used by groups at CERN and Fermilab, and searches for beyond-Standard-Model signals pursued by teams at SLAC and JLab. Global analyses employing MMHT inputs are referenced in reports by the Particle Data Group and in theory-experiment comparison papers presented at venues including ICHEP and EPS-HEP.

MMHT sets are interfaced to Monte Carlo generators and tools widely used in collider simulations, developed by the groups behind PYTHIA, HERWIG, SHERPA, and matrix-element providers such as MadGraph and MCFM. They contribute to uncertainty estimates for luminosity determinations used by LHC experiments, to proton structure studies relevant for neutrino experiments such as MINERvA and NOvA, and to cosmic-ray interaction modeling used by collaborations such as the Pierre Auger Observatory.
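Such interfacing typically works through grid files that generators interpolate at runtime. The sketch below shows the basic mechanism with bilinear interpolation in (ln x, ln Q²) on a tiny hypothetical gluon grid; real interfaces such as LHAPDF use higher-order interpolation on far denser grids, so this is a stand-in for the idea, not the actual algorithm.

```python
import math

def _locate(knots, v):
    """Return index i with knots[i] <= v <= knots[i+1] (linear search)."""
    for i in range(len(knots) - 2, -1, -1):
        if v >= knots[i]:
            return min(i, len(knots) - 2)
    return 0

def bilinear_xfx(xs, q2s, grid, x, q2):
    """Bilinear interpolation of x*f in (ln x, ln Q^2). grid[i][j] holds
    the tabulated value at (xs[i], q2s[j]). Toy stand-in for a grid-based
    PDF lookup."""
    i, j = _locate(xs, x), _locate(q2s, q2)
    tx = (math.log(x) - math.log(xs[i])) / (math.log(xs[i + 1]) - math.log(xs[i]))
    tq = (math.log(q2) - math.log(q2s[j])) / (math.log(q2s[j + 1]) - math.log(q2s[j]))
    return ((1 - tx) * (1 - tq) * grid[i][j] + tx * (1 - tq) * grid[i + 1][j]
            + (1 - tx) * tq * grid[i][j + 1] + tx * tq * grid[i + 1][j + 1])

# Hypothetical 3x2 grid of x*g(x,Q^2) values (made-up numbers):
xs, q2s = [1e-3, 1e-2, 1e-1], [10.0, 100.0]
grid = [[12.0, 15.0], [5.0, 6.0], [1.2, 1.4]]
```

A generator then only needs the grid file and this lookup; the fitted parameterization itself never has to be re-evaluated during event generation.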

Performance and Benchmarks

Benchmarking of MMHT fits has been performed against alternative global fits such as those from CTEQ-TEA, the NNPDF Collaboration, and ABMP. Comparisons report consistency within quoted uncertainties for many observables measured by ATLAS and CMS, but highlight differences for specific parton flavors in particular kinematic ranges probed by HERA and by fixed-target measurements at SLAC. Validation studies presented at Les Houches workshops compare predictions for Drell–Yan rapidity distributions, jet cross sections recorded by CDF and D0, and Higgs production rates compiled by the LHC Higgs Cross Section Working Group.
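The quoted uncertainties in such comparisons commonly come from Hessian eigenvector sets, combined with the symmetric master formula ΔX = ½ √(Σ_k (X_k⁺ − X_k⁻)²), where X is the observable evaluated on each up/down pair of error sets. A minimal sketch with hypothetical cross-section values:

```python
import math

def hessian_uncertainty(members):
    """Symmetric Hessian uncertainty from eigenvector pairs:
    Delta X = 0.5 * sqrt( sum_k (X_k_plus - X_k_minus)**2 ),
    where members = [(X_1_plus, X_1_minus), ...] are the observable
    evaluated on paired up/down error sets."""
    return 0.5 * math.sqrt(sum((up - dn) ** 2 for up, dn in members))

# Hypothetical cross sections (pb) on two eigenvector pairs around a
# central value of 100.0 pb:
pairs = [(101.0, 99.2), (100.4, 99.8)]
delta = hessian_uncertainty(pairs)
```

Asymmetric variants that separate upward and downward excursions are also used; the symmetric formula above is the simplest common choice.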

Numerical performance depends on implementation choices and on computational resources from centers such as NERSC, CERN openlab, and national supercomputing facilities accessed through PRACE. MMHT fits scale with dataset size and theoretical order, with typical global fits requiring compute campaigns comparable to those reported by analyses at Fermilab and computations performed on clusters at the University of Oxford.

Limitations and Criticism

Critiques of MMHT mirror broader debates in parton fitting: sensitivity to dataset selection raised by analysts within the HERA collaborations, differences in methodological choices highlighted by the teams behind the NNPDF Collaboration and CTEQ-TEA, and the treatment of theoretical uncertainties discussed in Les Houches reports and by panels convened by the European Research Council. Specific points include dependence on the chosen functional forms noted by researchers at Imperial College London, the handling of heavy-quark masses debated by groups at DESY and the University of Hamburg, and the propagation of experimental systematic uncertainties critiqued by statisticians affiliated with the University of Cambridge and Columbia University.
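The systematics-propagation question can be made concrete with the standard nuisance-parameter chi-square, in which a fully correlated shift is profiled out analytically (the minimum over the nuisance parameter is a linear solve). The function and numbers below are an illustrative sketch of that general technique, not the MMHT definition:

```python
def profiled_chi2(data, theory, stat, shift):
    """Chi-square with one fully correlated systematic, via a nuisance
    parameter beta with a unit Gaussian penalty:
        chi2(beta) = sum_i ((d_i - t_i - beta*s_i)/sigma_i)**2 + beta**2.
    Setting d chi2/d beta = 0 gives the analytic minimum used below."""
    r = [d - t for d, t in zip(data, theory)]          # residuals
    num = sum(s * ri / sg**2 for s, ri, sg in zip(shift, r, stat))
    den = 1.0 + sum((s / sg)**2 for s, sg in zip(shift, stat))
    beta = num / den                                   # profiled shift
    chi2 = sum(((ri - beta * s) / sg)**2
               for ri, s, sg in zip(r, shift, stat)) + beta**2
    return chi2, beta

# Hypothetical two-point dataset with a 5% correlated shift per point:
c, b = profiled_chi2([1.2, 2.1], [1.0, 2.0], [0.1, 0.1], [0.06, 0.10])
```

Debates in the literature concern, for example, whether such shifts should be treated multiplicatively or additively, which changes the profiled minimum and hence the fitted parameters.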

Further limitations concern extrapolation to kinematic regions to be probed by future facilities such as the Electron-Ion Collider, and sensitivity to small-x dynamics explored by theorists at Princeton University and the Institute for Advanced Study. Ongoing work addresses these criticisms through cross-comparisons with fits from CTEQ-TEA and the NNPDF Collaboration, and through reanalyses motivated by new data from ATLAS and CMS.

Category:Parton distribution functions