| Transactions on Machine Learning Research | |
|---|---|
| Title | Transactions on Machine Learning Research |
| Discipline | Machine learning |
| Abbreviation | TMLR |
| Open access | Yes |
| Country | International |
| History | 2022–present |
| Frequency | Continuous |
Transactions on Machine Learning Research is an open-access, peer-reviewed journal founded to provide an alternative venue for fast publication in machine learning and related subfields. The journal was established by researchers seeking an alternative to the publication practices of flagship conferences such as NeurIPS, ICML, and AAAI and of traditional journals such as JMLR and Nature Machine Intelligence. It operates within a network of institutions and initiatives including MIT, Stanford University, the University of California, Berkeley, OpenAI, and Google Research, while engaging contributors from labs such as DeepMind, Facebook AI Research, Microsoft Research, and Amazon Web Services.
Transactions on Machine Learning Research positions itself alongside established venues such as Proceedings of the IEEE, Science Advances, Communications of the ACM, IEEE Transactions on Neural Networks and Learning Systems, and Neurocomputing. It emerged as debates at meetings such as NeurIPS 2020 and workshops at ICLR highlighted concerns about peer-review reform voiced by figures such as Yoshua Bengio, Yann LeCun, and Geoffrey Hinton. The journal draws on editorial practices familiar from publications such as PLOS ONE, eLife, and F1000Research while seeking the visibility of outlets such as Science, Nature, and PNAS.
The journal's scope includes empirical studies, theoretical advances, algorithms, benchmarks, and applications spanning communities active at KDD, SIGMOD, ACL, CVPR, EMNLP, and ICASSP. Its aims reference best practices promoted by organizations such as the Association for Computing Machinery, the IEEE, and the International Machine Learning Society, and by initiatives such as the Reproducibility Project and the AI Index. Papers often relate to work by researchers at Carnegie Mellon University, the University of Toronto, ETH Zurich, the University of Oxford, and the University of Cambridge, and by labs led by Fei-Fei Li, Andrew Ng, Pieter Abbeel, and Daphne Koller. The journal explicitly welcomes submissions on topics central to projects at IBM Research, Intel Labs, Baidu Research, Alibaba DAMO Academy, and Tencent AI Lab.
Editorial leadership mirrors structures seen at the Journal of Machine Learning Research and Transactions of the Association for Computational Linguistics, with editors and action editors drawn from universities such as Princeton University, Yale University, Columbia University, Cornell University, and New York University. The board includes scientists who have received honors such as the Turing Award, the ACM Prize in Computing, and IEEE Fellow designations; names connected to laboratories at Bell Labs, SRI International, Los Alamos National Laboratory, and Lawrence Berkeley National Laboratory appear among the reviewers. The peer-review workflow, conducted on the OpenReview platform, also takes cues from services such as Publons and the editorial models of Frontiers, with an emphasis on transparency championed by advocates such as Timnit Gebru, Joy Buolamwini, and Cathy O'Neil, who have engaged in public debates in venues such as The New York Times and panels at SXSW.
The journal follows an open-access publishing model analogous to PLOS Biology and eLife and interoperates with preprint repositories such as arXiv and bioRxiv to promote early dissemination, as practiced by researchers at Caltech, Imperial College London, the University of Washington, and the University of Melbourne. It supports Creative Commons licensing and indexing by services including Crossref, the International DOI Foundation, Scopus, and Web of Science. Funding and sponsorship have involved grants and institutional support from organizations such as the Gordon and Betty Moore Foundation, the Wellcome Trust, the European Research Council, the NSF, and DARPA, alongside partnerships with conference organizers for special issues tied to events such as NeurIPS and ICML workshops and CVPR tutorial tracks.
Reception has been debated in editorial columns in Nature, Science, and The Economist, and in commentary from societies including the Association for Computational Linguistics and the Royal Society. The journal's citation patterns intersect with bibliographic databases referencing work by scholars at MIT CSAIL, Berkeley AI Research, the Vector Institute, Mila, and INRIA. Critics compare its model to experiences at the Journal of the ACM, Information Processing Letters, and ACM Transactions on Graphics, while advocates cite improvements noted in analyses by groups at Harvard University, the University of Chicago, the University of Pennsylvania, and Duke University. Debates over peer-review burden echo discussions in panels at AAAI and editorials in Communications of the ACM.
Notable contributions include papers that have influenced benchmarks and toolkits associated with TensorFlow, PyTorch, scikit-learn, and Hugging Face, and datasets curated for ImageNet and COCO. Influential articles have been cited alongside work by pioneers such as Nick Bostrom, Stuart Russell, Judea Pearl, Ian Goodfellow, Ruslan Salakhutdinov, Zoubin Ghahramani, Michael Jordan, Sergey Levine, Richard Sutton, and Shai Shalev-Shwartz. Special issues have aggregated research tied to industrial efforts at NVIDIA, Arm Research, Qualcomm Research, and Xerox PARC, and to government agencies such as NASA and the NSA.
Category:Machine learning journals