| Many Labs | |
|---|---|
| Name | Many Labs |
| Type | Collaborative replication consortium |
| Founded | 2012 |
| Lead researchers | Richard A. Klein, Brian A. Nosek, and collaborators |
| Location | Multi-site, international |
| Disciplines | Psychology, Social psychology, Cognitive psychology |
| Notable projects | Many Labs 1, Many Labs 2, Many Labs 3 |
Many Labs is a large-scale, multi-site collaborative replication initiative designed to test the robustness and generalizability of experimental findings across diverse samples and settings. The project brought together researchers from universities, research institutes, and laboratories worldwide to reproduce classic and contemporary psychology findings under harmonized protocols. Many Labs contributed to debates about the replication crisis, metascience, and methodological reform in fields such as social psychology, cognitive psychology, and personality psychology.
Many Labs originated as a coordinated response to concerns raised by high-profile replication failures and controversies in psychology and related fields. It built on earlier collaborative projects and initiatives from institutions like the Center for Open Science, drawing participants from universities such as Princeton University, University of Virginia, Harvard University, Stanford University, University of Oxford, University of Cambridge, University of California, Berkeley, Yale University, and University of Michigan. Many Labs aimed to assess effect heterogeneity across countries including the United States, the United Kingdom, Australia, Canada, Germany, the Netherlands, Sweden, and Japan by replicating experiments originally reported in journals such as Psychological Science, Journal of Personality and Social Psychology, and Nature Human Behaviour.
The consortium involved laboratories and research groups from major centers including University of Pennsylvania, Columbia University, Duke University, University of Chicago, New York University, University of Toronto, McGill University, Australian National University, University of Melbourne, University of Sydney, Erasmus University Rotterdam, KU Leuven, University of Amsterdam, Max Planck Society, and Karolinska Institutet. Collaborators included individual researchers affiliated with organizations like the Society for Personality and Social Psychology, the Association for Psychological Science, and the Royal Society. Many Labs employed a decentralized coordination model inspired by consortia such as the Human Genome Project and large collaborations in neuroscience, relying on shared protocols, common materials, and centralized data aggregation hosted by platforms associated with the Center for Open Science and university repositories at institutions like University of North Carolina and University of California, Los Angeles.
Major outputs included replication waves commonly labeled Many Labs 1, Many Labs 2, and subsequent iterations that targeted phenomena such as the false consensus effect reported in social psychology, the ego depletion literature connected to Roy Baumeister's work, the framing effect from Daniel Kahneman and Amos Tversky's research program, effects related to priming studied by scholars at University College London and Princeton University, and replication tests of classic findings appearing in outlets like Science and Nature. Results demonstrated that some effects, for example certain anchoring phenomena and specific order effects, showed robust replication across many sites, while others, including some priming and ego depletion effects, exhibited substantial heterogeneity or failed to replicate consistently. These findings influenced meta-analyses and systematic reviews conducted by teams at Stanford University, University of Oxford, University College London, and the Max Planck Institute.
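Cross-site heterogeneity of the kind described above is commonly quantified with Cochran's Q statistic and the derived I² index, computed from per-site effect estimates and their sampling variances. The sketch below is illustrative only (the function name and inputs are assumptions, not drawn from any Many Labs analysis code):

```python
def heterogeneity(effects, variances):
    """Inverse-variance pooled effect, Cochran's Q, and I^2
    from per-site effect estimates and their sampling variances."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
    df = len(effects) - 1
    # I^2: approximate fraction of observed variance due to true
    # between-site heterogeneity rather than sampling error
    i2 = max(0.0, (q - df) / q) if q > 0 else 0.0
    return pooled, q, i2

# Identical site estimates imply no heterogeneity (Q = 0, I^2 = 0);
# widely spread estimates push I^2 toward 1.
pooled, q, i2 = heterogeneity([0.5, 0.5, 0.5], [0.1, 0.1, 0.1])
```

Under this convention, an effect that "replicates robustly across sites" shows a non-zero pooled estimate with low I², while the heterogeneous effects mentioned above show high I² even when the pooled estimate is non-zero.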
A hallmark of the project was strict protocol standardization, with pre-registered designs, pre-specified exclusion criteria, and shared materials distributed to sites. Methodological innovations drew on techniques used in large-scale projects from genetics and epidemiology, employing power analyses, hierarchical models, and multi-level meta-analytic approaches developed by statisticians at University of Oxford, Harvard University, Columbia University, and University of Washington. Data management and open data practices leveraged infrastructure promoted by the Center for Open Science and data curators affiliated with University of Michigan and Carnegie Mellon University. Preregistration practices referenced guidelines from the Open Science Framework and reporting standards advocated by editorial boards of journals like Psychological Science and Journal of Experimental Psychology: General.
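The power analyses mentioned above can be illustrated with the standard normal-approximation formula for a two-sided, two-sample comparison. This is a generic textbook sketch, not code from the project's materials, and the function name is an assumption:

```python
import math
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.80):
    """Approximate per-group sample size needed to detect a
    standardized effect size d in a two-sided two-sample test,
    using the normal approximation n = 2 * ((z_a + z_b) / d)^2."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value for two-sided alpha
    z_power = NormalDist().inv_cdf(power)          # quantile for desired power
    return math.ceil(2 * ((z_alpha + z_power) / d) ** 2)

# A medium effect (d = 0.5) needs roughly 63 participants per group
# under this approximation; small effects (d = 0.2) need several hundred.
```

Formulas like this motivated the multi-site design: pooling many modestly sized samples yields the large total N required to detect small effects reliably.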
The consortium catalyzed reforms in publication norms, encouraging journals such as Psychological Science, Nature Human Behaviour, PLOS ONE, Royal Society Open Science, and Journal of Personality and Social Psychology to adopt more rigorous replication and transparency standards. It influenced practices promoted by organizations including the Center for Open Science, the Society for the Improvement of Psychological Science, and funders like the National Science Foundation and the Wellcome Trust. Educational programs at universities including University of California, Davis, University of British Columbia, and University of Texas at Austin incorporated Many Labs findings into curricula on research methods, and professional societies including the American Psychological Association and Association for Psychological Science discussed policy implications at annual conferences.
Critiques of the project emerged from scholars at institutions such as University of Chicago, Princeton University, Yale University, and Columbia University. Commonly noted limitations include potential sampling biases due to heavy reliance on university-affiliated convenience samples from North America and Western Europe, constraints on ecological validity compared to field studies conducted by researchers at University of California, Santa Barbara and University of Michigan, and debates about the interpretation of null results raised by statisticians at Stanford University and University College London. Some commentators in journals like Perspectives on Psychological Science argued that large-scale replications may underrepresent the theoretical diversity found in work by scholars at institutions such as University of Wisconsin–Madison and Indiana University Bloomington.
Category:Replication crisis