| International Conference on Learning Representations | |
|---|---|
| Name | International Conference on Learning Representations |
| Abbreviation | ICLR |
| Established | 2013 |
| Discipline | Machine learning |
| Frequency | Annual |
The International Conference on Learning Representations (ICLR) is an annual scholarly conference devoted to representation learning and deep learning research. It attracts researchers from academic institutions such as the Massachusetts Institute of Technology, Stanford University, the University of Toronto, the University of Oxford, and Carnegie Mellon University, as well as from corporate research labs including Google, Facebook, Microsoft Research, DeepMind, and OpenAI. It sits alongside NeurIPS, ICML, the AAAI Conference on Artificial Intelligence, KDD, and CVPR in the calendar of major machine learning venues.
ICLR was founded in 2013 by Yoshua Bengio and Yann LeCun, with early organizers drawn from research groups shaped by Geoffrey Hinton's work on neural networks. Early iterations built on the deep learning community that had grown up around NIPS (first held in 1987) and on milestones such as the AlexNet breakthrough, which reshaped how deep learning research was disseminated. The conference has been hosted in cities tied to institutions such as New York University and to labs run by companies including Google DeepMind and Facebook AI Research. Over time, ICLR moved from a workshop-style format to a full peer-reviewed conference model, while maintaining ties to related events such as representation-learning workshops, the Workshop on Uncertainty in Artificial Intelligence, and satellite symposia associated with the International Joint Conference on Artificial Intelligence.
ICLR's scope covers representation learning, neural networks, optimization, and their applications, drawing contributors from Princeton University, the University of California, Berkeley, ETH Zurich, École Polytechnique Fédérale de Lausanne, and Tsinghua University. Core topics include deep learning architectures pioneered by groups around Geoffrey Hinton, recurrent models influenced by work at Bell Labs, convolutional networks following Yann LeCun's research, and the generative modeling tradition linked to Ian Goodfellow. The conference also embraces representation learning, unsupervised and self-supervised learning, meta-learning, reinforcement learning advances associated with David Silver's work, adversarial robustness research by Alexey Kurakin and others, and theoretical analyses building on contributions from Shai Shalev-Shwartz and Michael I. Jordan, with participation from researchers at Google Brain, Amazon Web Services, Apple Machine Learning Research, and academic centers such as Columbia University and the University of Washington. Cross-disciplinary applications come from groups at Johns Hopkins University, Imperial College London, the University of Cambridge, and Peking University.
ICLR operates an annual cycle of paper submission, open peer review, and conference presentation, attracting submissions from laboratories such as Facebook AI Research and DeepMind and from universities such as Yale University and Brown University. Reviewing is conducted openly on the OpenReview platform: after an initial paper upload, anonymous program committees, with members from the University of Michigan, the University of Edinburgh, Duke University, and industrial partners including NVIDIA and Intel Labs, evaluate submissions, followed by rebuttal and discussion phases. Accepted works are presented as oral talks, spotlight presentations, or posters at the main meeting and at affiliated workshops on representation learning. The conference program also features keynotes by prominent researchers, award presentations, and tutorial sessions led by experts from institutions such as Cornell University and SRI International.
ICLR has been the venue for influential papers in the lineage of Alex Krizhevsky's convolutional network results, Ian Goodfellow's generative adversarial networks, and Yoshua Bengio's representation-learning agenda. Notable contributions include advances in variational inference popularized by researchers at University College London and algorithmic improvements from labs such as Microsoft Research Cambridge. The conference has helped disseminate breakthroughs that influenced products and services at Google, Facebook, Amazon, and Apple, and has enabled startups spun out of academic groups at the University of Toronto and ETH Zurich. ICLR papers have shaped subsequent research at NeurIPS and ICML and informed practices in industrial deployments at Uber AI Labs and Baidu Research.
ICLR is governed by organizing committees composed of academics and industry researchers from institutions such as Stanford University, Harvard University, the University of California, San Diego, and the University of Montreal. Sponsorship commonly comes from technology companies and research organizations including Google, Facebook, Microsoft Research, NVIDIA, Amazon, Intel, and DeepMind. Financial and logistical support has also involved academic societies and publishers such as the Association for Computing Machinery, along with collaborative initiatives with research institutes such as the Allen Institute for AI and the Vector Institute.
ICLR has faced critiques similar to those aimed at other major machine learning conferences. Researchers from the University of Cambridge and the University of Oxford have raised concerns about review quality and reproducibility, and scholars associated with the MIT Media Lab and the Harvard Kennedy School have debated the societal impacts of AI research. Other controversies have included disputes over commercialization and industry influence involving entities such as Google DeepMind and Facebook AI Research, discussions of diversity and inclusion highlighted by advocates linked to Women in Machine Learning and Black in AI, and methodological criticisms from theoreticians at Princeton University and Columbia University concerning benchmark overfitting and evaluation protocols.
Category:Machine learning conferences