| COLING | |
|---|---|
| Name | COLING |
| Status | Active |
| Genre | Academic conference |
| Frequency | Biennial |
| Discipline | Computational Linguistics |
| First held | 1965 |
| Organizer | International Committee on Computational Linguistics |
COLING (International Conference on Computational Linguistics) is a biennial international conference for researchers in computational linguistics and natural language processing. It brings together academics, industry practitioners, and students from universities and industrial research laboratories worldwide, including institutions such as the Massachusetts Institute of Technology, Stanford University, Carnegie Mellon University, the University of Edinburgh, Peking University, and the National University of Singapore, and corporate labs such as Google Research, Microsoft Research, and IBM Research.
The conference originated in 1965, when early computational linguists at universities such as MIT, Stanford, Cambridge, Oxford, and Edinburgh connected with machine translation and language-processing projects at industrial and government laboratories including Bell Labs, the RAND Corporation, and IBM Research. Over the following decades COLING developed alongside the Association for Computational Linguistics and related venues such as EMNLP, NAACL, EACL, IJCNLP, and LREC, and reflected advances driven by foundational language resources such as the Brown Corpus, WordNet, the Penn Treebank, the British National Corpus, Universal Dependencies, and the Europarl Corpus.
COLING covers topics including computational syntax, semantics, and pragmatics; machine translation; information retrieval and extraction; dialogue systems; speech recognition and synthesis; multilingual and low-resource NLP; corpus linguistics and language resources; machine learning for language, including neural architectures, transformers, attention mechanisms, and embeddings; word sense disambiguation; parsing; discourse analysis; sentiment analysis; question answering; summarization; text generation; evaluation metrics and benchmarking; cross-lingual transfer; morphological analysis; lexical semantics and semantic representation; knowledge graphs and ontology learning; commonsense reasoning; multimodal and vision-and-language processing; privacy-preserving NLP; fairness; interpretability; and reproducibility. Shared tasks are often run in collaboration with community evaluation campaigns such as SemEval, CoNLL, WMT, and IWSLT, and with organizations such as the European Language Resources Association (ELRA).
Typical organizational components include program committees chaired by senior researchers from academia and industry and keynote addresses by prominent figures in computational linguistics and artificial intelligence. Format elements include oral sessions, poster sessions, tutorials, workshops, system demonstrations, doctoral consortia, panels, and industry tracks. Venues have included cities such as Osaka, Geneva, Beijing, and Manchester, among others.
Proceedings typically appear in the ACL Anthology and other digital libraries. Collections include long papers, short papers, system demonstrations, dataset descriptions, and shared task reports, some of which have later informed language-technology standards work at bodies such as ISO, the W3C, and the Unicode Consortium.
COLING confers best paper awards, outstanding demonstration awards, and recognitions for students and young researchers. Award recipients are frequently affiliated with leading academic and industrial NLP groups.
COLING has shaped benchmarks, resource creation, methodology, and community-building across the field, in concert with venues such as ACL, NAACL, EACL, EMNLP, and LREC. Research presented at the conference has influenced standards work at ISO/TC 37, the W3C, and the Unicode Consortium, and has fed into commercial systems such as Google Translate, Microsoft Translator, Amazon Alexa, and Apple Siri, as well as open-source toolkits such as NLTK, spaCy, Stanford CoreNLP, Moses, OpenNMT, Fairseq, Hugging Face Transformers, and AllenNLP, and corpora distributed by the Linguistic Data Consortium.
Category:Computational linguistics conferences