LLMpedia: The first transparent, open encyclopedia generated by LLMs

Reproducibility Project (Psychology)

Generated by GPT-5-mini
Note: This article was automatically generated by a large language model (LLM) from purely parametric knowledge (no retrieval). It may contain inaccuracies or hallucinations. This encyclopedia is part of a research project currently under review.
Article Genealogy
Parent: Open Science Framework (hop 4)
Expansion Funnel: Raw 60 → Dedup 0 → NER 0 → Enqueued 0
1. Extracted: 60
2. After dedup: 0
3. After NER: 0
4. Enqueued: 0
Reproducibility Project (Psychology)
Name: Reproducibility Project (Psychology)
Country: United States
Field: Psychology
Published: 2015
Authors: Open Science Collaboration (coordinated by Brian A. Nosek)

Reproducibility Project (Psychology) was a large-scale collaborative effort to assess the replicability of empirical findings in psychology by attempting direct replications of studies published in Psychological Science, Journal of Personality and Social Psychology, and Journal of Experimental Psychology: Learning, Memory, and Cognition; its results appeared in Science (journal) in 2015. Initiated by researchers connected to the University of Virginia and the Center for Open Science, the project sought to quantify reproducibility, standardize replication protocols, and promote transparent practices across laboratories, with contributors from institutions including Princeton University, Harvard University, Stanford University, University of California, Berkeley, and Yale University.

Background and objectives

The project emerged amid growing concern about replicability in psychology, including Daniel Kahneman's public calls for replication of priming research and the Diederik Stapel fabrication case at Tilburg University, which highlighted fraud, questionable research practices, and publication bias. Advocates at the Center for Open Science, building on infrastructure such as the Open Science Framework and engagement from journals like Psychological Science, pursued a coordinated effort to replicate a quasi-random sample of studies from high-impact venues including Journal of Personality and Social Psychology and Journal of Experimental Psychology: Learning, Memory, and Cognition. Objectives included estimating the reproducibility rate, evaluating effect size attenuation, and encouraging adoption of transparent practices of the kind later emphasized by funders such as the National Science Foundation and the National Institutes of Health and by policy advisors from the National Academy of Sciences.

Methods and replication protocol

The project employed a pre-registered replication model, akin to the registered replication reports promoted by editors at Perspectives on Psychological Science, coordinated through the Center for Open Science and the Open Science Collaboration. Teams pre-registered protocols on the Open Science Framework and followed methodological guidance echoed by advocates including John Ioannidis, Paul Meehl, and Andrew Gelman. Replications sampled experiments from articles published in 2008 in Psychological Science, Journal of Personality and Social Psychology, and Journal of Experimental Psychology: Learning, Memory, and Cognition, with sample sizes set by power analyses designed to give high power to detect the original effect sizes. Laboratories at institutions including University of Texas at Austin, University of Michigan, University of Wisconsin–Madison, Duke University, and University of North Carolina at Chapel Hill executed the protocols, in many cases consulting the original authors on materials and design.
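The power analyses described above size each replication so it has a high chance of detecting the original effect if that effect is real. A minimal sketch of such a calculation, assuming a two-sample design and the usual normal approximation (the function name `n_per_group` and the default 80% power are illustrative choices, not the project's exact procedure):

```python
from statistics import NormalDist
import math

def n_per_group(d, alpha=0.05, power=0.80):
    """Approximate per-group sample size for a two-sided two-sample test
    to detect a standardized effect of size d (normal approximation)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # critical value for two-sided alpha
    z_beta = z.inv_cdf(power)           # quantile corresponding to target power
    return math.ceil(2 * ((z_alpha + z_beta) / d) ** 2)

# Smaller original effects demand far larger replication samples:
print(n_per_group(0.8))  # large effect -> 25 per group
print(n_per_group(0.5))  # medium effect -> 63 per group
print(n_per_group(0.2))  # small effect -> 393 per group
```

The cost asymmetry shown here is one reason replication teams often needed substantially more participants than the original studies used.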

Results and key findings

The project reported that a substantial fraction of replications produced smaller or non-significant effects compared to the originals: whereas 97% of the original studies had reported statistically significant results, only about 36% of the replications did, and replication effect sizes averaged roughly half the magnitude of the originals. The findings prompted comparisons to meta-research by John Ioannidis and extensive discussion in outlets such as Science (journal) and Nature. Analyses drew on both frequentist traditions descended from Ronald Fisher and Bayesian critiques, and the results fueled debates about effect size heterogeneity and the role of publication practices in inflating the published literature. The reported reproducibility rate also sparked comparisons to replication efforts in other fields, including preclinical biomedicine.
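Effect size attenuation, one of the project's headline findings, can be summarized as the average ratio of each replication's effect to its original. A minimal sketch with hypothetical (original, replication) effect size pairs; the data here are invented for illustration, not taken from the project:

```python
def attenuation(pairs):
    """Mean ratio of replication effect size to original effect size.

    pairs: iterable of (original, replication) standardized effect sizes.
    """
    pairs = list(pairs)
    return sum(rep / orig for orig, rep in pairs) / len(pairs)

# Hypothetical (original, replication) effect sizes for four studies:
studies = [(0.40, 0.18), (0.55, 0.30), (0.25, 0.05), (0.60, 0.45)]
print(round(attenuation(studies), 2))  # -> 0.49: effects roughly halved
```

A mean ratio near 0.5 matches the qualitative pattern the project reported, where replication effects averaged about half the original magnitude.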

Criticisms and debates

Critiques argued that direct replications sometimes failed to capture contextual moderators present in the original studies. In a prominent 2016 commentary in Science, Daniel Gilbert and colleagues contended that, given sampling error and differences between original and replication protocols, the observed replication rate was statistically consistent with most original findings being true; project members responded that this analysis rested on optimistic assumptions. Methodological debates referenced positions by Daniel Kahneman, Mahzarin Banaji, Richard Nisbett, and statisticians at institutions including University of California, Berkeley and Carnegie Mellon University concerning sampling, replication fidelity, and null hypothesis significance testing. Editorial responses in Psychological Science and commentary from the Association for Psychological Science and the American Psychological Association highlighted tensions between strict reproducibility standards and established experimental traditions in the field. Additional critiques from scholars at University of Amsterdam and University of Cambridge focused on the selection of target studies, inference thresholds, and the pre-registration policies advocated by the Center for Open Science.
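One statistical point underlying these debates is that whenever a replication's power is below 1, even a genuinely true effect will sometimes fail to reach significance, so a nonzero "failure" rate is expected by chance alone. A minimal Monte Carlo sketch under illustrative parameters (the effect size, sample size, and normal approximation are assumptions for the demo, not the project's data):

```python
import random
from statistics import NormalDist

Z_CRIT = NormalDist().inv_cdf(0.975)  # two-sided alpha = 0.05

def replicate_once(d, n, rng):
    """Simulate one two-sample study of a TRUE standardized effect d
    with n participants per group (normal approximation)."""
    se = (2 / n) ** 0.5            # standard error of the mean difference
    obs = rng.gauss(d, se)         # observed standardized difference
    return abs(obs / se) > Z_CRIT  # did the study reach significance?

rng = random.Random(0)
trials = 10_000
d, n = 0.4, 50  # a real, medium-small effect studied with a modest sample
hits = sum(replicate_once(d, n, rng) for _ in range(trials))
# Power here is only about 0.5, so roughly half of fair replications of a
# true effect come out non-significant purely by chance.
print(hits / trials)
```

This is the sense in which critics argued a 36% significance rate need not imply that 64% of the original findings were false; how far sampling error alone can explain the gap was precisely what the two sides disputed.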

Impact on psychology and research practices

The project accelerated reforms, including broader adoption of pre-registration on platforms like the Open Science Framework, data sharing policies at journals such as Psychological Science and Nature, and expanded methodological training at research universities. Funders including the National Institutes of Health, the European Research Council, and the Wellcome Trust placed greater emphasis on reproducibility in grant criteria, while professional societies such as the American Psychological Association and the Association for Psychological Science updated ethical and reporting guidelines. Subsequent initiatives modeled after the project, including the Many Labs studies and coordinated replication consortia in other disciplines, pursued large-scale replications, shaping ongoing dialogues involving figures like John Ioannidis, Andrew Gelman, and Daniel Kahneman about robustness, transparency, and cumulative science.

Category:Psychology