| Feldman–Cousins | |
|---|---|
| Name | Feldman–Cousins method |
| Caption | Confidence belt construction in a Poisson counting experiment |
| Introduced | 1998 |
| Authors | Gary J. Feldman, Robert D. Cousins |
| Area | Statistical inference |
| Applications | High Energy Physics, Particle detector analyses, Neutrino oscillation experiments |
Feldman–Cousins
The Feldman–Cousins procedure is a unified frequentist technique for constructing confidence intervals and upper limits in counting experiments and parameter estimation; it is called unified because a single construction determines whether a two-sided interval or an upper limit is reported, removing the analyst's discretion. It was developed to address anomalous interval behavior in searches for rare processes, was introduced by Gary J. Feldman and Robert D. Cousins, and quickly influenced analyses at facilities such as CERN, Fermilab, and SLAC National Accelerator Laboratory. The method is widely cited in reviews by the Particle Data Group and in analyses by collaborations including Super-Kamiokande, Daya Bay, MINOS, and KamLAND.
The method arose partly in response to debates at institutions such as Brookhaven National Laboratory and Los Alamos National Laboratory about proper frequentist coverage when measuring small signals in the presence of background, a problem later faced by Large Hadron Collider programs such as ATLAS and CMS. Feldman and Cousins critiqued conventional procedures used by groups such as CERN NA31 and pointed to paradoxes encountered in results reported by experiments including E787 and KTeV. Their motivation drew on earlier statistical work by researchers at Johns Hopkins University, the University of Chicago, and Columbia University, and engaged with methodologies discussed at meetings of the American Physical Society and in publications of the Institute of Physics.
The core construct is a confidence belt built from likelihood-ratio ordering so that the resulting intervals have correct frequentist coverage; the ordering principle descends from the Neyman–Pearson likelihood-ratio test, with refinements associated with statisticians at the University of Cambridge and Stanford University. For each hypothesized parameter value one ranks possible outcomes by a test statistic, typically the likelihood ratio of the hypothesis to the best physically allowed fit (an approach also used at Harvard University and the Massachusetts Institute of Technology), and accepts outcomes in decreasing order of that ratio until their summed probability reaches the desired confidence level, a practice adopted in analyses at Belle and BaBar. The belt is analogous to the classical confidence-belt construction of Jerzy Neyman, in the statistical tradition of Karl Pearson, but the ordering criterion prevents intervals from flipping between two-sided and one-sided in an ad hoc fashion, a problem noted in historical contexts involving results from the CERN ISR and SLAC E137.
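As an illustration of the ordering step, the sketch below ranks possible counts for a single hypothesized signal mean in a Poisson experiment with known background and accepts them until the summed probability reaches the confidence level. It is a minimal sketch, not a reference implementation: the function name `acceptance_interval` and the default arguments are illustrative choices, and only standard NumPy/SciPy calls are used.

```python
# Minimal sketch of Feldman-Cousins likelihood-ratio ordering for a Poisson
# counting experiment with known expected background b.  Names and defaults
# are illustrative, not taken from any particular library.
import numpy as np
from scipy.stats import poisson

def acceptance_interval(mu, b, cl=0.90, n_max=200):
    """Counts n accepted for signal hypothesis `mu` at confidence level `cl`."""
    ns = np.arange(n_max)
    probs = poisson.pmf(ns, mu + b)                 # P(n | mu, b)
    mu_best = np.maximum(ns - b, 0.0)               # physically allowed best-fit signal
    ratios = probs / poisson.pmf(ns, mu_best + b)   # R(n) = P(n | mu) / P(n | mu_best)
    accepted, total = set(), 0.0
    for n in np.argsort(-ratios):                   # outcomes in decreasing R(n)
        accepted.add(int(n))
        total += probs[n]
        if total >= cl:                             # stop once the belt covers cl
            break
    return accepted
```

Here `n_max` simply truncates the Poisson tail; it only needs to be large enough that the neglected probability is negligible for the scanned signal means.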
Practitioners implement the procedure for Poisson processes with background, common in searches at the Gran Sasso National Laboratory and in reactor experiments such as Double Chooz, by simulating datasets with detector models developed at KEK and TRIUMF. Collaborations such as IceCube and the Pierre Auger Observatory use the belt to derive limits on signal strengths and cross-sections, incorporating systematic uncertainties in ways similar to approaches in European Organization for Nuclear Research (CERN) analyses. Software implementations draw on numerical routines from the ROOT framework developed at CERN and on computational tools used at the National Institute of Standards and Technology, often combining Feldman–Cousins intervals with the profile-likelihood calculations common in Fermilab analyses.
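The belt is turned into a reported interval by inversion: one scans hypothesized signal means and keeps those whose acceptance region contains the observed count. The sketch below reuses the `acceptance_interval` function above; the scan range, step size, and the name `fc_interval` are arbitrary illustrative choices, and the simple min/max treatment ignores the small gaps that the discreteness of the observable can in principle produce.

```python
import numpy as np

def fc_interval(n_obs, b, cl=0.90, mu_max=30.0, step=0.01):
    """Feldman-Cousins interval on the signal mean for an observed count."""
    # Keep every scanned mu whose acceptance region contains n_obs; the
    # extremes of that set are taken as the interval endpoints.
    accepted = [mu for mu in np.arange(0.0, mu_max, step)
                if n_obs in acceptance_interval(mu, b, cl)]
    return min(accepted), max(accepted)

# Example: with 4 events observed over an expected background of 3.0, the
# lower edge comes out at zero, so the result is effectively a 90% CL
# upper limit on the signal mean.
print(fc_interval(4, 3.0))
```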
The method guarantees frequentist coverage at or above the nominal confidence level across the parameter space under the assumed model, addressing coverage anomalies discussed by statisticians at the University of Oxford and the University of California, Berkeley. Its likelihood-ratio ordering yields intervals with good power against the alternative hypotheses considered by collaborations such as the LIGO Scientific Collaboration and the Virgo Collaboration. Critics at academic centers including Princeton University and the University of Michigan have debated its interpretation relative to Bayesian credible intervals used by groups at institutions such as INPE and Los Alamos National Laboratory; proponents emphasize the method's objective, repeatable properties, which are valued by bodies such as the International Committee for Future Accelerators.
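One way to make the coverage claim concrete is a pseudo-experiment check: generate many Poisson counts at a fixed true signal mean, compute the interval for each observation, and verify that the fraction containing the true value is at least the nominal level. The helper below is an illustrative, unoptimized sketch that reuses the `fc_interval` function above; because the observable is discrete, the estimate should over-cover rather than match 90% exactly.

```python
import numpy as np
from functools import lru_cache

@lru_cache(maxsize=None)
def cached_fc_interval(n_obs, b, cl=0.90):
    # Intervals depend only on the observed count, so cache them by n_obs.
    return fc_interval(n_obs, b, cl)

def coverage(mu_true, b, cl=0.90, n_trials=1000, seed=1):
    """Fraction of pseudo-experiments whose interval contains mu_true."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_trials):
        lo, hi = cached_fc_interval(int(rng.poisson(mu_true + b)), b, cl)
        hits += (lo <= mu_true <= hi)
    return hits / n_trials

# Discreteness of the count forces over-coverage: the estimate should land
# at or above the nominal 0.90, up to Monte Carlo fluctuations.
print(coverage(2.0, 3.0))
```

The per-count caching also illustrates the computational-cost point discussed below: each distinct observed count requires a fresh scan over signal hypotheses, which is what makes the construction heavier than a single likelihood fit.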
Compared with the classical Neyman constructions used in historical analyses at CERN and with the simple "flip-flopping" prescriptions criticized in Particle Data Group reviews, Feldman–Cousins avoids ambiguous switching between one-sided and two-sided intervals, a point debated at symposia of the Royal Society and the European Physical Society. Against Bayesian techniques favored in parts of the astrophysics literature, such as The Astrophysical Journal, and by researchers at the Max Planck Society, it requires no prior distribution of the kind used in analyses from the University of Chicago and Caltech. Compared with the hybrid or profile-likelihood intervals employed at DESY and in searches at HERA, Feldman–Cousins offers a prescription with guaranteed coverage but can be computationally heavier, an issue addressed by software groups at Lawrence Berkeley National Laboratory and Argonne National Laboratory.
Notable applications include upper limits on rare decays reported by collaborations such as NA62, neutrino mixing parameter constraints from T2K and NOvA, and dark-matter direct-detection bounds from experiments such as XENON and LUX-ZEPLIN. It has been used in precision measurements at LEP and in searches for neutrinoless double beta decay by experiments at the Gran Sasso National Laboratory and SNOLAB. Case studies in the proceedings of the International Conference on High Energy Physics and in analyses supported by the Particle Physics and Astronomy Research Council illustrate both practical implementations and debates about computational cost versus interpretive clarity.
Category:Statistical methods