LLMpedia: The first transparent, open encyclopedia generated by LLMs


Generated by GPT-5-mini
Note: This article was automatically generated by a large language model (LLM) from purely parametric knowledge (no retrieval). It may contain inaccuracies or hallucinations. This encyclopedia is part of a research project currently under review.
Article Genealogy
Parent: ICLR Hop 4
Expansion Funnel: Raw 55 → Dedup 0 → NER 0 → Enqueued 0
OpenReview.net
Name: OpenReview
Developer: Information Extraction and Synthesis Laboratory, University of Massachusetts Amherst
Released: 2016
Programming languages: Python, JavaScript

OpenReview.net

OpenReview.net is an online platform for scholarly peer review and conference management that emphasizes transparency, accountability, and interactivity. It was developed to support peer review workflows for conferences, workshops, and journals associated with major venues in computer science and related fields. The system has been used by prominent events and institutions to experiment with open identities, public reviews, and post-publication discussion.

History

OpenReview.net was initiated by researchers and administrators seeking alternatives to the traditional blind review systems used by the Association for Computing Machinery, the Institute of Electrical and Electronics Engineers, and academic conferences such as NeurIPS, ICML, and CVPR. The platform was developed by the Information Extraction and Synthesis Laboratory at the University of Massachusetts Amherst, and early adopters included program committees from the International Conference on Learning Representations and the International Joint Conference on Artificial Intelligence. It gained visibility during debates over review transparency at events such as NeurIPS 2018 and related controversies over policy decisions at machine learning venues. Over time it attracted participation from researchers affiliated with Stanford University, the Massachusetts Institute of Technology, the University of California, Berkeley, Google Research, Facebook AI Research, DeepMind, and national laboratories such as Lawrence Berkeley National Laboratory.

Platform and Features

The platform provides programmable submission workflows, comment threads, and review forms used by program committees from NeurIPS, ICLR, ACL, and workshops associated with EMNLP. Features include support for both double-blind and open-identity review, configurable rebuttal windows used by organizers from AAAI and IJCAI, and metadata integration compatible with indexing services such as arXiv and with digital libraries maintained by IEEE Xplore and the ACM Digital Library. OpenReview.net supports reviewer bidding and assignment tools similar to systems used at SIGMOD and KDD, implements conflict-of-interest matrices using institutional data (for example, affiliations at Harvard University and Princeton University), and offers APIs enabling integrations developed by teams at Microsoft Research and Amazon Web Services.
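To illustrate what a conflict-of-interest matrix of this kind computes, the following is a minimal sketch, not OpenReview's actual implementation; the rule (flag a reviewer who shares an institution with any author), and all names and affiliations, are hypothetical examples:

```python
# Minimal illustrative conflict-of-interest (COI) matrix.
# Rule (hypothetical): a reviewer conflicts with a submission if they
# share an institution with any of its authors.

def build_coi_matrix(reviewers, submissions):
    """Return {reviewer_id: {submission_id: bool}} marking conflicts.

    reviewers:   {reviewer_id: institution}
    submissions: {submission_id: set of author institutions}
    """
    matrix = {}
    for reviewer_id, institution in reviewers.items():
        matrix[reviewer_id] = {
            submission_id: institution in author_institutions
            for submission_id, author_institutions in submissions.items()
        }
    return matrix

# Hypothetical example data.
reviewers = {"r1": "Univ A", "r2": "Univ B"}
submissions = {"s1": {"Univ A", "Univ C"}, "s2": {"Univ D"}}

coi = build_coi_matrix(reviewers, submissions)
```

Real deployments layer further signals on top of a matrix like this, such as co-authorship history and declared conflicts, before feeding it into reviewer assignment.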

Submission and Review Process

Contributors submit manuscripts and supplemental materials following policies adopted by program chairs from ICLR 2017, NeurIPS 2019, and workshop organizers at COLT and UAI. The review lifecycle can include anonymous submission, public reviews, author responses influenced by practices at EMNLP 2020, and post-publication commentary modeled on open peer review experiments at F1000Research and PeerJ. Committees configure review criteria for each venue; the tension between anonymity and openness has been debated by stakeholders from Google Research, Facebook AI Research, DeepMind, and the University of Toronto, and by editorial boards of journals such as the Journal of Machine Learning Research.
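The lifecycle described above can be modeled as a small state machine. This is an illustrative simplification, not OpenReview's implementation; the state names and transitions are assumptions for the sketch:

```python
# Illustrative review-lifecycle state machine. States and transitions
# are hypothetical simplifications of the process described above.
TRANSITIONS = {
    "submitted": {"reviews_posted"},
    "reviews_posted": {"author_response"},
    "author_response": {"decision"},
    "decision": {"public_discussion"},
    "public_discussion": set(),  # terminal state
}

def advance(state, next_state):
    """Move to next_state, rejecting transitions the workflow forbids."""
    if next_state not in TRANSITIONS.get(state, set()):
        raise ValueError(f"illegal transition: {state} -> {next_state}")
    return next_state

# Walk a submission through a typical path.
state = "submitted"
for step in ["reviews_posted", "author_response", "decision"]:
    state = advance(state, step)
```

A venue-configurable workflow engine generalizes this idea by letting organizers define their own states (extra rebuttal rounds, meta-reviews) and the deadlines attached to each transition.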

Community and Governance

Governance of the platform and its deployments involves collaborations among university groups from Cornell University, steering committees from conferences such as ICLR and NeurIPS, and software contributors from organizations including OpenAI and research labs at IBM Research. Community norms have been shaped by discussions among program chairs, ethics committees at AAAI, and panels at ICML and NeurIPS workshops. Decisions about moderation, data retention, and reviewer conduct have engaged representatives from funding bodies such as the National Science Foundation and policy groups associated with the European Research Council.

Impact and Reception

OpenReview.net has influenced debates about transparency and reproducibility, as highlighted in commentary by scholars at Stanford University, the Massachusetts Institute of Technology, and Harvard University. Proponents cite increased accountability and richer scientific dialogue in venues such as ICLR and NeurIPS. Critics, including members of the editorial board of the Journal of Machine Learning Research and conference organizers at ACL and SIGIR, have raised concerns about reviewer anonymity, harassment, and the potential for strategic behavior, concerns echoed in critiques published by researchers at University College London and the University of Oxford. The platform's role in high-profile acceptance and rebuttal episodes contributed to policy reforms at NeurIPS, influenced meta-research at centers such as the Meta-Research Innovation Center at Stanford, and informed open peer review pilots at publishers including PLOS and Springer Nature.

Category:Academic peer review platforms