| Reproducibility Project | |
|---|---|
| Name | Reproducibility Project |
| Formation | 2011 |
| Purpose | Assessment of reproducibility in empirical research |
| Headquarters | N/A |
| Region served | Global |
| Field | Scientific research |
Reproducibility Project
The Reproducibility Project is a collective effort to assess the replicability of empirical findings across fields such as psychology, biomedicine, and economics. Initiatives affiliated with the Project involve collaborations among scholars, journals, funders, and institutions to test whether published results can be independently reproduced. The Project has influenced debates among policymakers, editors, and laboratory directors about research transparency, statistical practices, and publication standards.
The Project emerged amid high-profile discussions involving the American Psychological Association, the National Institutes of Health, the Wellcome Trust, the European Research Council, and academic centers such as Harvard University, Stanford University, the University of Oxford, the University of Cambridge, and the University of California, Berkeley. Founders and contributors included researchers associated with the Open Science Framework, the Center for Open Science, the Psychological Science Accelerator, and networks tied to journals such as Science, Nature, and PNAS. Goals emphasized verifying influential findings from labs led by investigators connected to institutions such as Princeton University, Yale University, Columbia University, the Massachusetts Institute of Technology, and the University of Chicago; improving protocols used in work by teams at the Max Planck Society, CNRS, the Karolinska Institutet, and the Australian National University; and informing funders, including the Gates Foundation and the Wellcome Trust, about reproducibility risks.
Major coordinated efforts included large-scale projects led by consortia at the Center for Open Science and the Psychological Science Accelerator, replication networks organized through platforms like the Open Science Framework, and targeted efforts such as reproducibility checks in cancer research labs tied to institutions such as the Dana-Farber Cancer Institute, Johns Hopkins University, and the MD Anderson Cancer Center. High-profile replication campaigns encompassed projects testing literature from journals such as the Journal of Personality and Social Psychology, Nature Neuroscience, The Lancet, and Cell, and discipline-focused efforts linked to societies such as the Society for Neuroscience and the American Economic Association.
Protocols used pre-registered methods on platforms including the Open Science Framework and registered reports submitted to outlets such as Royal Society journals, Elsevier titles, and publications of the American Association for the Advancement of Science. Teams adopted statistical approaches involving power analysis techniques consistent with guidance from the International Committee of Medical Journal Editors, meta-analytic methods associated with authors from the Cochrane Collaboration, and quality assurance practices drawn from laboratories at the National Institute of Standards and Technology, the NIH, and the FDA. Collaborative replications often followed multi-lab designs similar to projects coordinated by the Psychological Science Accelerator and used blinding, randomization, and standardized materials developed with contributors from Stanford University, Princeton University, and University College London.
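As an illustration of the kind of power analysis such protocols rely on, the sketch below uses Python's statsmodels library to compute the per-group sample size a replication would need. The effect size, discount factor, and power target are hypothetical choices for the example, not values taken from any Project protocol.

```python
# Illustrative power analysis for a two-sample replication design.
# All numbers here are hypothetical, not drawn from any Project protocol.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Suppose the original study reported Cohen's d = 0.5; replications are often
# powered against a smaller effect to guard against inflation in the original.
original_d = 0.5
conservative_d = original_d * 0.75  # discount for likely overestimation

n_per_group = analysis.solve_power(
    effect_size=conservative_d,
    alpha=0.05,               # two-sided significance threshold
    power=0.90,               # target probability of detecting the effect
    alternative="two-sided",
)
print(f"Required sample size per group: {n_per_group:.0f}")
```

Running this yields roughly 150 participants per group, which shows why replications powered against discounted effect sizes typically require far larger samples than the original studies.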
Large-scale replication efforts found mixed outcomes: some high-impact results from teams at Princeton University, Harvard University, and Stanford University replicated, while others from groups at the University of Amsterdam, the University of Illinois Urbana-Champaign, and the University of Toronto failed to replicate. Notable replication studies included multi-lab attempts to reproduce classic findings published in Psychological Science, Nature Human Behaviour, Science, and PNAS; targeted replications in preclinical oncology associated with centers such as the Memorial Sloan Kettering Cancer Center and the Fred Hutchinson Cancer Research Center; and registered-report-driven projects coordinated with journals such as eLife and Royal Society Open Science. Meta-analyses by teams affiliated with the Cochrane Collaboration and authors publishing in the BMJ highlighted heterogeneity, publication bias, and analytic flexibility as recurring contributors to non-reproducibility.
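The heterogeneity such meta-analyses report can be made concrete with a minimal DerSimonian-Laird random-effects computation, a standard approach in this literature. The per-study effect sizes and standard errors below are invented for the example and do not come from any study cited here.

```python
# Minimal DerSimonian-Laird random-effects meta-analysis with Cochran's Q
# and I^2. The per-study effect sizes and standard errors are hypothetical.
import numpy as np

effects = np.array([0.42, 0.18, 0.05, 0.31, -0.02])  # per-study estimates
se = np.array([0.10, 0.12, 0.09, 0.15, 0.11])        # their standard errors

w = 1.0 / se**2                        # fixed-effect (inverse-variance) weights
fixed = np.sum(w * effects) / np.sum(w)

# Cochran's Q measures dispersion of study estimates around the pooled value;
# I^2 expresses the share of that dispersion attributable to heterogeneity.
Q = np.sum(w * (effects - fixed) ** 2)
df = len(effects) - 1
I2 = max(0.0, (Q - df) / Q) * 100

# DerSimonian-Laird estimate of the between-study variance tau^2.
C = np.sum(w) - np.sum(w**2) / np.sum(w)
tau2 = max(0.0, (Q - df) / C)

w_re = 1.0 / (se**2 + tau2)            # random-effects weights
random_effect = np.sum(w_re * effects) / np.sum(w_re)

print(f"fixed-effect estimate:   {fixed:.3f}")
print(f"random-effects estimate: {random_effect:.3f}")
print(f"Q = {Q:.2f} on {df} df, I^2 = {I2:.0f}%, tau^2 = {tau2:.4f}")
```

A large I^2 in such a computation signals that study estimates disagree more than sampling error alone would predict, which is one way analytic flexibility and publication bias surface in replication meta-analyses.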
Critiques arose from investigators at institutions including the University of Oxford, the University of Cambridge, Yale University, and Princeton University, arguing that replication attempts sometimes misapplied original protocols, overlooked contextual moderators recognized by laboratories at the Max Planck Society and the Karolinska Institutet, or reduced exploratory findings to confirmatory tests favored by editorial policies at Nature and Science. Methodological limits were noted by statisticians affiliated with the University of California, Los Angeles, Columbia University, and University College London, while ethicists at Georgetown University and the University of Toronto highlighted consent and data-sharing constraints. Funding and incentive structures involving the National Institutes of Health, the European Research Council, and private foundations were also cited as limiting scalable replication.
The Project prompted policy responses from funders and agencies such as the National Institutes of Health, the Wellcome Trust, and the European Commission, and from publishers including Springer Nature and Elsevier. Changes included wider adoption of registered reports at journals like Nature Human Behaviour, expanded data-sharing mandates at PLOS, and training updates at universities such as Harvard University, Stanford University, and the Massachusetts Institute of Technology. Funders including the Gates Foundation and the Wellcome Trust altered grant requirements to emphasize reproducibility standards promoted by organizations such as the Center for Open Science and its Open Science Framework platform.
Future priorities highlighted by stakeholders at the National Institutes of Health, the European Research Council, the Wellcome Trust, and research universities include strengthening infrastructure at the Open Science Framework, expanding multi-lab networks such as the Psychological Science Accelerator, improving statistical education at institutions like the University of Oxford and the University of Cambridge, and integrating reproducibility checks into the peer review systems used by Science, Nature, and PNAS. Recommendations include incentivizing registered reports at professional societies such as the American Psychological Association and the American Economic Association, supporting replication funding through agencies like the NIH and the European Commission, and fostering collaborations among centers such as the Center for Open Science, the Cochrane Collaboration, and major research hospitals.