| IEEE Transactions on Pattern Analysis and Machine Intelligence | |
|---|---|
| Title | IEEE Transactions on Pattern Analysis and Machine Intelligence |
| Discipline | Computer vision; Machine learning; Pattern recognition |
| Publisher | IEEE Computer Society |
| Country | United States |
| Abbreviation | IEEE Trans. Pattern Anal. Mach. Intell. |
| Frequency | Monthly |
| History | 1979–present |
IEEE Transactions on Pattern Analysis and Machine Intelligence is a monthly peer-reviewed scientific journal covering research in computer vision, pattern recognition, and machine intelligence. The journal is published by the IEEE Computer Society and has been a primary venue for advances linking theory and practice, drawing contributions from institutions such as the Massachusetts Institute of Technology, Stanford University, the University of Oxford, Carnegie Mellon University, and the University of California, Berkeley.
The journal was established in 1979 during a period of growth in pattern recognition research at Bell Labs, SRI International, IBM Research, and Hitachi, and its coverage has paralleled developments at conferences such as the IEEE Conference on Computer Vision and Pattern Recognition, the International Conference on Machine Learning, NeurIPS, the International Joint Conference on Artificial Intelligence, and the European Conference on Computer Vision. Early editorial leadership included figures associated with the University of Illinois Urbana-Champaign, the University of Toronto, Princeton University, Cornell University, and University College London, and the journal has chronicled methodological shifts influenced by work at Microsoft Research, Google Research, Facebook AI Research, and DeepMind.
The journal's scope encompasses algorithms and systems reported from laboratories such as the MIT Computer Science and Artificial Intelligence Laboratory, the Oxford Robotics Institute, ETH Zurich, Tsinghua University, and Peking University, and addresses topics that intersect with applications at NASA, the National Institutes of Health, the European Space Agency, the Toyota Research Institute, and Siemens. Subjects regularly include deep learning methods in the tradition of Geoffrey Hinton, Yann LeCun, Yoshua Bengio, and Ian Goodfellow; image processing pipelines influenced by work from Kodak Research Labs and Canon; object recognition paradigms linked to research at Caltech and NYU; and biometric systems related to studies at NIST and DHS. The journal also publishes contributions on theoretical foundations of computation in the tradition of Alan Turing, on connections to the statistical techniques of Jerzy Neyman and Ronald Fisher, and on applications used in projects at DARPA, the European Commission, the Bill & Melinda Gates Foundation, and the World Health Organization.
The editorial board has historically comprised editors and associate editors affiliated with institutions including Columbia University, Imperial College London, Johns Hopkins University, the University of Cambridge, and the National University of Singapore. Peer review follows standards shared with journals such as Communications of the ACM, Journal of the ACM, Nature Machine Intelligence, and Science Robotics, and draws reviewers from networks including ACM SIGGRAPH, AAAI, the IEEE Signal Processing Society, SIAM, and the Royal Society. Editorial policies have adapted in response to debates involving organizations such as the Committee on Publication Ethics, decisions by the National Academies of Sciences, Engineering, and Medicine, and recommendations from panels at CVPR, ICCV, and ECCV.
The journal is distributed by the IEEE Computer Society through channels similar to titles from Springer Nature, Elsevier, Wiley-Blackwell, and Oxford University Press. Access models have evolved amid discussions involving Plan S, mandates from funding bodies such as the European Research Council, the National Science Foundation, and the Wellcome Trust, and institutional policies from libraries at Harvard University, the University of Michigan, and the University of Tokyo. The publication offers hybrid access arrangements that intersect with repositories such as arXiv, preprint practices advocated at bioRxiv, and data-sharing initiatives promoted by OpenAI, the Allen Institute for AI, and Creative Commons.
The journal has high impact metrics comparable to leading venues such as NeurIPS, ICLR, the CVPR proceedings, and the Journal of Machine Learning Research, and its influence is reflected in IEEE Fellow elevations, Turing Award citations, and ACM honors awarded to its contributors. Reviews in venues such as Nature, Science, and Communications of the ACM, along with commentary from research groups at Google DeepMind, Facebook AI Research, and Microsoft Research, have highlighted its role in shaping fields pursued at Stanford University, MIT, ETH Zurich, and the University of Toronto.
Notable contributions published in the journal include foundational work that influenced architectures from researchers such as Geoffrey Hinton, Yann LeCun, Andrew Ng, Fei-Fei Li, and Jitendra Malik; algorithmic advances in variational methods with ties to David MacKay and Michael I. Jordan; statistical learning theory building on results by Vladimir Vapnik and Corinna Cortes; and practical systems later adopted by industrial teams at Google, Apple, Tesla, Amazon, and Samsung. High-impact topics appearing in the journal include deep convolutional networks, generative models, scene understanding, graph-based learning, and biometric verification, influencing projects such as the DARPA Robotics Challenge, analyses adjacent to the Human Genome Project, and imaging programs at the European Southern Observatory and the National Institutes of Health.
Category:IEEE journals
Category:Computer vision publications