LLMpedia: The first transparent, open encyclopedia generated by LLMs

ICLR

Generated by GPT-5-mini
Note: This article was automatically generated by a large language model (LLM) from purely parametric knowledge (no retrieval). It may contain inaccuracies or hallucinations. This encyclopedia is part of a research project currently under review.
Article Genealogy
Parent: Facebook AI Research (hop 4)
Expansion Funnel: Raw 57 → Dedup 3 → NER 2 → Enqueued 2
1. Extracted: 57
2. After dedup: 3
3. After NER: 2 (rejected: 1, not a named entity)
4. Enqueued: 2
ICLR
Name: International Conference on Learning Representations
Abbreviation: ICLR
Discipline: Machine learning
First: 2013
Frequency: Annual

ICLR (International Conference on Learning Representations) is an annual academic conference focused on machine learning and representation learning. It serves as a venue for researchers from institutions such as the Massachusetts Institute of Technology, Stanford University, the University of Toronto, the University of California, Berkeley, and Carnegie Mellon University to present advances in algorithms, theory, and applications. The conference attracts participation from corporate research labs including Google DeepMind, OpenAI, Facebook AI Research, Microsoft Research, and Amazon Web Services, and is closely followed by the communities around related meetings such as NeurIPS, ICML, the AAAI Conference on Artificial Intelligence, and CVPR.

History

ICLR was founded in the early 2010s by researchers working on unsupervised feature learning and representation learning, with roots in groups at the University of Montreal, New York University, the University of Toronto, and Google Research. Early contributors and organizers were connected to influential work by researchers such as Geoffrey Hinton, Yoshua Bengio, Yann LeCun, Andrew Ng, and Ian Goodfellow, which was discussed alongside proceedings from NIPS 2012, ICML 2013, and workshops at NeurIPS. The conference evolved from a workshop format into a full-scale conference with open reviewing and public discussion, drawing attention from funding bodies such as the National Science Foundation and corporate partners including DeepMind. Over successive years ICLR expanded its geographical reach, hosting events in cities such as Montreal, San Francisco, Vancouver, and Lyon, and moving to virtual platforms during the COVID-19 pandemic in line with public-health guidance from organizations such as the World Health Organization.

Organization and Conference Format

ICLR is organized by program chairs appointed from leading universities and research labs; past chairs have been faculty or staff at Princeton University, Columbia University, ETH Zurich, the University of Oxford, and University College London. The format typically comprises oral presentations, poster sessions, spotlight talks, tutorials, and workshops, mirroring structures seen at NeurIPS and CVPR. Keynote and invited talks have featured speakers affiliated with Google DeepMind, OpenAI, Facebook AI Research, Microsoft Research, and academic centers such as the Massachusetts Institute of Technology, Stanford University, and the University of Toronto. Workshops often collaborate with special interest groups connected to the Association for Computational Linguistics and the IEEE, and with funding agencies such as the European Research Council.

Submission and Review Process

Submissions to ICLR follow annually announced calls for papers, with deadlines coordinated among program committees drawn from institutions including Carnegie Mellon University, the University of California, Berkeley, the Massachusetts Institute of Technology, and Stanford University, as well as research labs such as DeepMind and OpenAI. The peer review process uses double-blind or open-review variants, with public commenting features inspired by online platforms such as arXiv and community moderation practices seen on GitHub and preprint servers. Reviewers are recruited from faculty and industry researchers affiliated with the University of Toronto, ETH Zurich, Princeton University, Columbia University, and corporate labs including Google Research and Facebook AI Research. Accepted papers proceed to oral presentations or poster sessions; decisions mirror acceptance-rate dynamics found at NeurIPS, ICML, and ECCV, and influence hiring and grant decisions tied to funding agencies such as the National Science Foundation and the European Research Council.

Notable Papers and Contributions

ICLR has been the venue for influential works in deep learning and representation learning, alongside seminal contributions published in venues such as NeurIPS and ICML. Papers presented at ICLR have advanced architectures and techniques used by practitioners at Google DeepMind, OpenAI, Facebook AI Research, and companies such as Amazon Web Services and Microsoft Research. Notable research threads include developments in generative models connected to researchers such as Ian Goodfellow, diffusion models advanced by teams at OpenAI and Google Research, optimization techniques related to work from Stanford University and the University of California, Berkeley, and interpretability studies connected to groups at MIT and Princeton University. ICLR papers have interacted with benchmark datasets and evaluation suites such as ImageNet, COCO, GLUE, and OpenAI-adjacent evaluation frameworks, influencing downstream deployment at firms including Tesla and NVIDIA.

Awards and Recognition

ICLR confers best-paper awards and honorable mentions, judged by program committees composed of senior researchers from MIT, Stanford University, the University of Toronto, Carnegie Mellon University, and ETH Zurich. Recipients often hold positions at research labs such as Google DeepMind, OpenAI, and Facebook AI Research, or at universities such as Harvard University and Princeton University, and sometimes receive additional recognition through citations tracked in databases such as Google Scholar and Microsoft Academic and through indexing on arXiv. Awarded works frequently shape prize-winning projects and fellowships supported by agencies such as the National Science Foundation and foundations such as the Simons Foundation.

Criticism and Controversies

ICLR has faced criticism paralleling debates at NeurIPS and ICML over topics such as reproducibility, benchmark overfitting, and the influence of corporate sponsorship from Google DeepMind, OpenAI, Facebook AI Research, and Microsoft Research. Concerns raised by academics from the University of California, Berkeley, the University of Washington, and ETH Zurich have focused on peer review quality and the scaling of acceptance processes, echoing broader discussions in commentary published in venues such as Nature and Science. Controversies have included disputes over open-review moderation practices influenced by platforms such as arXiv and GitHub, and debates about ethical impacts referenced alongside policy discussions at the European Commission and national advisory bodies.

Category:Machine learning conferences