Good Judgment Project
Name: Good Judgment Project
Formation: 2011
Founders: Philip Tetlock; Barbara Mellers
Type: Forecasting tournament; research consortium
Headquarters: Philadelphia, Pennsylvania
Fields: Probabilistic forecasting; decision science


The Good Judgment Project was a forecasting research initiative, founded in 2011 by Philip Tetlock and Barbara Mellers, that applied structured probabilistic judgment and crowd aggregation to improve prediction accuracy on geopolitical, financial, and scientific events. It emerged from academic collaborations and high-profile forecasting tournaments, producing influential findings on human judgment, crowd wisdom, and training interventions. The project worked with academic institutions, private firms, and government bodies to translate its findings into applied forecasting platforms.

History

The project began in 2011 as an entrant in the Aggregative Contingent Estimation (ACE) forecasting tournament run by the Intelligence Advanced Research Projects Activity (IARPA), drawing on prior work by scholars associated with the University of Pennsylvania and the University of California, Berkeley. Early collaborators included researchers linked to Carnegie Mellon University and the University of Pennsylvania Law School, and the initiative gained visibility through reports and briefings for agencies such as the Office of the Director of National Intelligence and programs within the United States Intelligence Community. Founders with backgrounds connected to Princeton University and the University of Chicago helped scale forecasting tools used in subsequent challenge rounds run by IARPA and partner organizations.

Methodology

The project combined individual judgment elicitation, training modules derived from research in judgment and decision-making at institutions like the Annenberg School for Communication and the Wharton School, and algorithmic aggregation methods influenced by work at the Santa Fe Institute and the Massachusetts Institute of Technology. Core practices included structured question design shaped by norms from the RAND Corporation and statistical calibration approaches taught in collaboration with researchers from the University of Michigan and the London School of Economics. Aggregation algorithms drew on techniques similar to the ensemble forecasting used by groups at the National Oceanic and Atmospheric Administration and on probabilistic modeling strategies developed by teams at Google and Microsoft Research.
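Aggregation of the kind described above is often summarized as pooling many individual probability estimates and then "extremizing" the pooled value toward 0 or 1, on the reasoning that independent forecasters each hold only part of the available information. The following Python sketch is a minimal illustration of that idea, not the project's actual code; the function name, the log-odds pooling rule, and the extremizing exponent are assumptions chosen for clarity.

import math

def aggregate_forecasts(probs, extremize_a=2.0):
    """Pool individual probability forecasts for a binary event.

    probs: probabilities in (0, 1), one per forecaster.
    extremize_a: illustrative exponent that pushes the pooled estimate
        toward 0 or 1 after averaging.
    """
    # Average in log-odds space rather than taking a simple mean of
    # probabilities, which handles confident forecasts more gracefully.
    log_odds = [math.log(p / (1 - p)) for p in probs]
    pooled = sum(log_odds) / len(log_odds)
    # Extremize: scale the pooled log-odds before mapping back to a probability.
    pooled *= extremize_a
    return 1 / (1 + math.exp(-pooled))

# Five hypothetical forecasters lean toward "yes" with varying confidence.
crowd = [0.60, 0.70, 0.65, 0.55, 0.75]
print(round(aggregate_forecasts(crowd), 3))  # ~0.78, more extreme than the 0.65 simple mean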

Notable Tournaments and Results

The project’s performance in the multi-year IARPA ACE program and other forecasting tournaments attracted attention for outperforming baseline forecasting models and, by some accounts, analysts in organizations comparable to the Central Intelligence Agency and the National Security Agency. Results were reported alongside other competitive entries from teams associated with the University of Oxford, Harvard University, Stanford University, and independent forecasting platforms inspired by initiatives at the Brookings Institution. Top-performing forecasters who participated had affiliations with institutions such as Columbia University, Yale University, and New York University.
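Accuracy in tournaments of this kind is conventionally measured with the Brier score, the mean squared difference between probability forecasts and realized binary outcomes, where lower values indicate better accuracy. The short Python sketch below shows how a team's score might be compared with an uninformative 50/50 baseline; the forecasts and outcomes are invented purely for illustration.

def brier_score(forecasts, outcomes):
    # Mean squared error between probability forecasts and 0/1 outcomes.
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical forecasts on four resolved yes/no questions.
baseline = [0.50, 0.50, 0.50, 0.50]   # always-50% baseline
team     = [0.80, 0.20, 0.70, 0.10]   # a better-calibrated forecaster
outcomes = [1, 0, 1, 0]               # 1 = event occurred, 0 = it did not

print(brier_score(baseline, outcomes))  # 0.25
print(brier_score(team, outcomes))      # 0.045, i.e. substantially more accurate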

Organizational Structure and Key People

Key academic leaders came from departments and centers at the University of Pennsylvania, the University of Michigan, and the University of California, Berkeley, working with staff from partner organizations comparable to consulting firms such as McKinsey & Company and research nonprofits such as the RAND Corporation. Senior contributors included psychologists and political scientists who had published in venues associated with American Political Science Association conferences and in journals tied to the American Psychological Association. Operational roles often interfaced with policy stakeholders linked to the White House and advisory bodies connected to the National Academies.

Impact and Applications

Findings influenced forecasting practices in policy institutions comparable to the United Nations, financial groups similar to Goldman Sachs, and technology companies comparable to Amazon and IBM. Techniques pioneered in the project informed training programs at think tanks such as the Center for Strategic and International Studies (CSIS) and influenced platform designs for commercial forecasting ventures connected to accelerators and incubators like Y Combinator. Academic uptake occurred in departments at Princeton University, Yale University, and the London School of Economics, while methods fed into risk assessment workflows at foundations comparable to the Bill & Melinda Gates Foundation.

Criticisms and Limitations

Critiques addressed limits noted by scholars from the University of Oxford and commentators in outlets associated with the Brookings Institution and the Council on Foreign Relations, including concerns about question framing, external validity for rare events, and dependence on motivated volunteer participants. Methodological debates referenced literature from the Royal Society and statistical critiques similar to those discussed by researchers affiliated with the Institute for Operations Research and the Management Sciences. The project's applicability to high-stakes, classified forecasting in agencies such as the Defense Intelligence Agency also raised questions about reproducibility and institutional adoption.

Category:Forecasting