| AISTATS | |
|---|---|
| Name | AISTATS |
| Discipline | Machine learning; Statistics; Artificial intelligence |
| Country | International |
| First | 1990 |
| Frequency | Annual |
AISTATS, the International Conference on Artificial Intelligence and Statistics, focuses on the intersection of machine learning and statistics. It brings together researchers from diverse institutions to present theoretical advances and empirical studies, fostering exchanges among contributors from Stanford University, the Massachusetts Institute of Technology, the University of Cambridge, Carnegie Mellon University, and the University of California, Berkeley. The venue rotates among regions and has attracted participants affiliated with Google, Microsoft Research, DeepMind, Facebook AI Research, and Amazon research labs.
AISTATS was established to create a dedicated forum for scholars working at the crossroads of statistical theory and algorithmic development, following the growth of communities around events such as NeurIPS, ICML, COLT, UAI, and SIGIR. Early meetings featured contributions from researchers associated with Bell Labs, AT&T Research, the University of Toronto, Princeton University, and Yale University. Over time the conference evolved alongside milestones such as the development of support vector machines at AT&T Bell Laboratories, the resurgence of neural network research at the University of Toronto and Google DeepMind, and the rise of Bayesian nonparametrics from groups at Harvard University and University College London. Organizational structures took inspiration from longstanding forums such as meetings of the Royal Statistical Society and workshops linked to National Science Foundation initiatives.
AISTATS emphasizes methodological advances and rigorous evaluation across topics including probabilistic modeling, optimization, causal inference, and representation learning. Typical topics draw contributions from researchers at ETH Zurich, the University of Oxford, the California Institute of Technology, Imperial College London, and New York University. The scope encompasses work on graphical models related to advances at Microsoft Research Cambridge and Bayesian computation influenced by researchers at Columbia University and Duke University. Other recurring themes reflect collaborations with teams at IBM Research, the Toyota Technological Institute at Chicago, Seoul National University, Tsinghua University, and Peking University.
Annual meetings produce proceedings that collect peer-reviewed full papers and extended abstracts, comparable to the Proceedings of Machine Learning Research archives and the proceedings formats used by ICLR and NeurIPS. The conference has been hosted in cities such as Barcelona, Sydney, Honolulu, Palm Springs, and Athens, often coordinated with workshops and tutorials featuring speakers from ETH Zurich, the University of Washington, Oxford Brookes University, the University of Michigan, and Johns Hopkins University. Proceedings have documented influential contributions later cited alongside works from the Proceedings of the National Academy of Sciences, the Journal of Machine Learning Research, the Annals of Statistics, and publications by authors associated with Princeton Plasma Physics Laboratory and Lawrence Berkeley National Laboratory.
The conference is organized by an elected program committee and a local organizing committee composed of volunteers from academic institutions and research labs, including members from Cornell University, Brown University, the University of Illinois Urbana–Champaign, Northwestern University, and Arizona State University. Sponsorship and partnerships have been provided by corporate research groups and professional societies such as Google Research, Microsoft Research, Amazon Web Services, Facebook AI Research, Intel Labs, IEEE, and the Association for Computing Machinery. Funding models and logistical support have also drawn on regional academic consortia linked to the European Research Council, the Australian Research Council, and national funding agencies.
Papers presented at the conference have influenced directions in sparse modeling, variational inference, and scalable optimization, resonating with foundational work emerging from Bell Labs Research, the Stanford Linear Accelerator Center, and groups at the University of California, San Diego. Awardees and distinguished paper recipients include researchers affiliated with Massachusetts General Hospital and the Broad Institute for interdisciplinary applications, and with the IBM T.J. Watson Research Center for algorithmic innovations. Notable contributions have been acknowledged alongside prizes and recognitions from bodies such as the Royal Society and the MacArthur Foundation, and through ACM and IEEE Fellow elections as authors went on to broader honors.
Submissions follow a single-track format with page limits and anonymity constraints, processed through double-blind peer review managed by area chairs and program committee members drawn from institutions such as the University of Toronto, the University of Edinburgh, McGill University, the University of British Columbia, and the University of Sydney. The review cycle is comparable to those used by NeurIPS and ICML, with reviewers recruited from academic and industrial labs including DeepMind, Google Brain, Facebook AI Research, and Microsoft Research Redmond. Decisions are based on novelty, technical quality, empirical evaluation, and clarity; accepted papers are published in the conference proceedings and often archived in collections associated with professional societies such as the Association for the Advancement of Artificial Intelligence and in repositories maintained by university presses.
Category:Artificial intelligence conferences