| Conference on Learning Theory | |
|---|---|
| Name | Conference on Learning Theory |
| Abbreviation | COLT |
| Discipline | Computer science, Machine learning, Theoretical computer science |
| Founded | 1988 |
| Frequency | Annual |
The Conference on Learning Theory is an annual scholarly meeting that convenes researchers in machine learning, theoretical computer science, statistics, and related fields to present advances in computational learning theory. It fosters exchange among scholars from institutions such as the Massachusetts Institute of Technology, Stanford University, the University of California, Berkeley, Carnegie Mellon University, and the University of Toronto, and draws participants affiliated with organizations including Google, DeepMind, Microsoft Research, Facebook AI Research, and OpenAI. The conference sits within a network of related events including NeurIPS, ICML, ALT (Algorithmic Learning Theory), and COLT-affiliated workshops, and presents its own paper awards.
The conference emerged in the late 1980s amid research activity at venues such as Carnegie Mellon University, the University of California, San Diego, Harvard University, Princeton University, and the University of Cambridge, with early participants drawn from laboratories including Bell Labs, AT&T Labs, IBM Research, Microsoft Research Redmond, and SRI International. Founding figures and frequent contributors include researchers such as Leslie Valiant, Michael Kearns, Avrim Blum, Richard Karp, and Shai Shalev-Shwartz, along with related researchers such as Yann LeCun and later contributors such as Rong Ge, linking to theoretical work by authors at the University of Toronto and ETH Zurich. Over the decades COLT has rotated through host sites including Cornell University, the University of Washington, Columbia University, the University of Michigan, the University of Oxford, the University of Edinburgh, Tel Aviv University, UC Santa Cruz, and international venues in Paris, Berlin, Tokyo, and Beijing.
Program tracks emphasize formal analyses that trace an intellectual line from Vapnik–Chervonenkis theory and its originators, Vladimir Vapnik and Alexey Chervonenkis, to modern work by scholars at Princeton University, Yale University, Brown University, Duke University, Northwestern University, the University of Pennsylvania, Columbia University, New York University, the University of Maryland, and the University of Chicago. Topics include PAC learning, introduced by Leslie Valiant; online learning, with roots in work by Nick Littlestone and Manfred K. Warmuth; boosting, associated with Robert Schapire and Yoav Freund; regret bounds, connected to Nicolò Cesa-Bianchi and Gábor Lugosi; and stochastic optimization, reflecting contributions from John Duchi and Shai Shalev-Shwartz. Research presented often interfaces with areas represented by the ACM, IEEE, SIAM, NeurIPS, ICML, AAAI, and SIGACT, and touches computational complexity concepts tied to researchers such as Scott Aaronson, László Babai, Oded Goldreich, and Sanjeev Arora.
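As an illustration of the style of guarantee featured in these tracks, the sketch below gives two standard textbook formulations in generic notation: the agnostic PAC criterion and external regret in online learning. The notation is illustrative and is not drawn from any particular COLT paper.

```latex
% Illustrative textbook formulations (generic notation, not taken
% from a specific COLT paper).

% Agnostic PAC learning: with probability at least 1 - delta over an
% i.i.d. training sample, the learned hypothesis \hat{h} is
% near-optimal within the hypothesis class \mathcal{H}:
\[
  \Pr\Bigl[\operatorname{err}(\hat{h}) \le
           \min_{h \in \mathcal{H}} \operatorname{err}(h)
           + \varepsilon\Bigr] \;\ge\; 1 - \delta .
\]

% External regret after T rounds of online learning: the cumulative
% loss of the chosen actions a_t versus the best fixed action in
% hindsight, over losses \ell_t and action set \mathcal{A}:
\[
  \mathrm{Regret}_T \;=\;
    \sum_{t=1}^{T} \ell_t(a_t)
    \;-\; \min_{a \in \mathcal{A}} \sum_{t=1}^{T} \ell_t(a) .
\]
% Multiplicative-weights-style algorithms achieve
% Regret_T = O(sqrt(T log |A|)) for a finite action set, the kind of
% sublinear bound analyzed in the regret literature cited above.
```

Both statements appear in standard references such as Cesa-Bianchi and Lugosi's Prediction, Learning, and Games.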
The conference features peer-reviewed paper sessions, invited talks, tutorials, and poster sessions administered by program committees drawing members from MIT, Stanford University, the University of California, Berkeley, Princeton University, Harvard University, Caltech, EPFL, ETH Zurich, Imperial College London, and Tsinghua University. Steering committees have included representatives from the Association for Computing Machinery, the Society for Industrial and Applied Mathematics, and regional chapters of the IEEE Computer Society, with logistical support from host universities such as the University of California, Los Angeles and the University of Southern California. The submission process parallels the double-blind procedures used at NeurIPS and ICML, with review practices influenced by editorial norms at journals such as the Journal of Machine Learning Research, the Annals of Statistics, Theory of Computing, and the SIAM Journal on Computing.
Proceedings have published influential papers that later appeared in venues such as the Journal of Machine Learning Research, the Annals of Statistics, Proceedings of the ACM, and collections published by Springer and MIT Press. Paper awards at the conference have recognized early work by authors associated with Yale University, the University of Waterloo, the University of British Columbia, McGill University, the Georgia Institute of Technology, the University of Illinois Urbana–Champaign, and Rice University. Distinguished invited lecturers have included scholars affiliated with Princeton University, Stanford University, Harvard University, the University of Cambridge, the University of Oxford, ETH Zurich, University College London, and industrial research groups at Google DeepMind and Microsoft Research. The conference also coordinates best-paper prizes and a best student paper award (the Mark Fulk Award) recognizing theoretical contributions.
Work originated or refined at the conference has shaped foundational theory used by practitioners at Google Research, DeepMind, OpenAI, Facebook AI Research, and startups founded by alumni of Stanford University and MIT. Contributions have influenced algorithmic design in projects associated with TensorFlow, PyTorch, and libraries maintained by teams at Google, Facebook, and Microsoft; they also inform curricula at institutions such as Carnegie Mellon University, the University of Toronto, Imperial College London, and EPFL. Results first presented at COLT connect to complexity-theoretic studies led by researchers at Princeton, Berkeley, and MIT, and to applied research tied to Google Brain, Amazon Web Services, Apple Machine Learning Research, and NVIDIA Research. The conference continues to intersect with broader scientific forums including NeurIPS, ICML, AISTATS, UAI, and ALT, while shaping standards cited in textbooks published by MIT Press, Oxford University Press, and Cambridge University Press.
Category:Machine learning conferences