| Papers with Code | |
|---|---|
| Name | Papers with Code |
| Type | Online platform |
| Founded | 2017 |
Papers with Code is an online platform that aggregates academic papers and reproducible implementations of machine learning research, linking publications to code, datasets, and evaluation metrics. It serves researchers, engineers, and students by cataloging results across domains such as computer vision, natural language processing, and reinforcement learning. The platform emphasizes reproducibility and benchmarking by connecting peer-reviewed work with open-source repositories and leaderboards.
The project began in 2017, during a period of rapid growth in machine learning research centered on venues such as NeurIPS, ICML, CVPR, ACL, and ICLR, when researchers at organizations like OpenAI, Google Research, Facebook AI Research, and Microsoft Research, as well as academic labs at Stanford University, MIT, the University of Toronto, Carnegie Mellon University, and the University of Oxford, increasingly published code on GitHub and preprints on arXiv. Early adoption was driven by community contributors, including researchers affiliated with labs behind landmark works such as AlexNet, ResNet, the Transformer, BERT, and GANs. Over time the platform added automated scraping, metadata extraction, and integrations with version control services and dataset resources such as Kaggle, ImageNet, COCO, and GLUE. Its growth paralleled debates at venues like AAAI and policy discussions involving the European Commission and national bodies concerned with research transparency.
The service provides searchable listings that link papers to code repositories on platforms such as GitHub, GitLab, and Bitbucket, and it displays experiment results from benchmark suites used at conferences like NeurIPS and CVPR. Leaderboards aggregate metrics from influential benchmarks such as ImageNet, SQuAD, COCO, GLUE, and SuperGLUE, and listings highlight implementations by authors affiliated with Google DeepMind, OpenAI, Apple Machine Learning Research, Huawei Noah's Ark Lab, and universities such as Harvard University and Princeton University. Papers can be tagged with methods and architectures such as convolutional neural networks, recurrent neural networks, and Transformers, including techniques popularized by labs such as DeepMind and OpenAI. Additional services include programmatic API access used by tools from companies and labs like Hugging Face (illustrated in the sketch below), integrations with citation indices such as those maintained by Semantic Scholar, and features adopted by research groups associated with recognition from awards such as the Turing Award and conferences like NeurIPS.
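As an illustration of the kind of programmatic access described above, the following Python sketch queries a REST API for papers matching a search term and lists any linked code repositories. The base URL, endpoint paths (`/papers/`, `/papers/{id}/repositories/`), and response field names used here are assumptions made for illustration, not documented guarantees of the platform's API.

```python
import requests

BASE_URL = "https://paperswithcode.com/api/v1"  # assumed API base URL


def search_papers(query, items=5):
    """Search for papers matching a free-text query (endpoint path and params assumed)."""
    resp = requests.get(
        f"{BASE_URL}/papers/",
        params={"q": query, "items_per_page": items},
    )
    resp.raise_for_status()
    return resp.json().get("results", [])


def linked_repositories(paper_id):
    """List code repositories linked to a given paper (endpoint path assumed)."""
    resp = requests.get(f"{BASE_URL}/papers/{paper_id}/repositories/")
    resp.raise_for_status()
    return resp.json().get("results", [])


if __name__ == "__main__":
    # Print each matching paper's title followed by the URLs of linked implementations.
    for paper in search_papers("image classification"):
        print(paper.get("title"))
        for repo in linked_repositories(paper.get("id")):
            print("  ", repo.get("url"))
```

In this sketch, a search result is assumed to carry an `id` that can be reused to fetch the paper's associated repositories; any real integration would need to follow the endpoints and field names actually documented by the platform.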
The database indexes datasets and benchmarks frequently cited alongside papers, including resources such as ImageNet, COCO, Pascal VOC, SQuAD, and GLUE, as well as corpus-building efforts from teams at the Allen Institute for AI and Google Research. It documents evaluation metrics and leaderboards that compare results contributed by researchers at Facebook AI Research, Microsoft Research, DeepMind, IBM Research, the Toyota Technological Institute at Chicago, and academic groups at the University of California, Berkeley, ETH Zurich, and Cornell University. Benchmark coverage spans areas showcased at conferences such as CVPR, ECCV, ICCV, ACL, and EMNLP, and listings include implementations hosted in repositories linked to labs like Hugging Face and community efforts coordinated through platforms like Kaggle.
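To make the paper–dataset–metric linkage described above concrete, the sketch below models a single leaderboard entry as a small data structure. The field names and example values are hypothetical, chosen only to show how a result ties a paper and its code to a benchmark dataset and an evaluation metric; they do not reflect the platform's actual schema.

```python
from dataclasses import dataclass


@dataclass
class LeaderboardEntry:
    """Hypothetical record linking a paper and its code to a benchmark result."""
    paper_title: str      # publication reporting the result
    repository_url: str   # open-source implementation linked to the paper
    dataset: str          # benchmark dataset, e.g. "ImageNet" or "SQuAD"
    metric: str           # evaluation metric, e.g. "Top-1 Accuracy" or "F1"
    value: float          # reported score used to rank entries on a leaderboard


# Illustrative entry; the paper, repository, and score are made up for the example.
entry = LeaderboardEntry(
    paper_title="Example Image Classifier",
    repository_url="https://github.com/example/example-classifier",
    dataset="ImageNet",
    metric="Top-1 Accuracy",
    value=0.85,
)
print(f"{entry.paper_title}: {entry.metric} = {entry.value} on {entry.dataset}")
```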
Collaboration on the platform reflects the open-source culture embodied by projects and organizations such as GitHub, Hugging Face, the Apache Software Foundation, Mozilla, NumFOCUS, and notable research groups at Stanford University, MIT, the University of Washington, the University of Cambridge, and the University of Toronto. Contributors include authors of seminal works such as the Transformer, BERT, ResNet, U-Net, and YOLO. The site supports reproducibility efforts aligned with initiatives from institutions such as the Institute of Electrical and Electronics Engineers and the Association for Computing Machinery, and it is used by participants in competitions organized by Kaggle and in benchmarking workshops at NeurIPS and ICLR. Community-led curation often surfaces implementations from researchers affiliated with labs like OpenAI, DeepMind, and Facebook AI Research, and with universities including Columbia University and Yale University.
The platform has been cited in discussions about reproducibility, openness, and accelerated innovation within ecosystems involving NeurIPS, ICML, ICLR, ACL, and policy reviews by bodies such as the European Commission and national research councils. Its leaderboards and aggregated links have influenced adoption of models and codebases developed by teams at Google Research, OpenAI, DeepMind, Facebook AI Research, and Microsoft Research and have been referenced in tutorials and curricula at institutions like Stanford University, MIT, Carnegie Mellon University, and Harvard University. Reception among practitioners highlights benefits to replication and engineering productivity, while critiques from academics and policymakers reference concerns raised in venues such as NeurIPS workshops and reports by organizations like the Partnership on AI.
Support and sustainability efforts reflect partnerships and funding mechanisms similar to those used by research tools and platforms originating from collaborations between academic labs and industry groups such as Allen Institute for AI, OpenAI, Google Research, Microsoft Research, and nonprofit entities like Mozilla Foundation. Resource needs mirror those encountered by infrastructure projects in the research ecosystem funded by grants, corporate sponsorships, and pro bono contributions from engineers at GitHub, Hugging Face, Google, Microsoft, and university research groups. Governance and stewardship discussions reference models used by organizations such as the Apache Software Foundation, NumFOCUS, and the Allen Institute for AI.