
CoNLL
Name: CoNLL
Discipline: Computational linguistics
Frequency: Annual
Established: 1997

CoNLL

CoNLL (the Conference on Computational Natural Language Learning) is an annual scientific conference and workshop series focused on computational linguistics and natural language processing. It brings together researchers from institutions such as Stanford University, the Massachusetts Institute of Technology, the University of Cambridge, the University of Oxford, and Carnegie Mellon University, as well as companies such as Google, Microsoft, Facebook, and Amazon, to present shared tasks, datasets, and methods. The event is typically co-located with venues such as ACL, EMNLP, NAACL, and EACL and is organized under the Association for Computational Linguistics through its Special Interest Group on Natural Language Learning (SIGNLL).

Overview

CoNLL functions as a focal point for research on sequence labeling, parsing, information extraction, and multilingual processing, attracting authors from the University of Edinburgh, Tsinghua University, Peking University, the University of Tokyo, and ETH Zurich. Presentations commonly cover models from Google DeepMind, architectures inspired by the work of Yann LeCun and Geoffrey Hinton, and evaluation paradigms used by teams at IBM Research, Bloomberg, and Apple Inc. The conference format combines oral sessions, poster sessions, and shared-task workshops, often featuring keynote addresses from researchers affiliated with DARPA, the European Research Council, the National Science Foundation, and industrial labs including DeepMind, OpenAI, and Microsoft Research.

History and Editions

CoNLL began in 1997 and has evolved through collaborations with conferences such as ACL 1997, EMNLP 2000, and NAACL 2005, as well as other international meetings like COLING and IJCAI. Early editions showcased rule-based and statistical models from groups at the IBM Watson Research Center, Siemens, and SRI International. Later editions documented the rise of machine learning paradigms developed at Google Research, research breakthroughs from Facebook AI Research, transformer architectures tied to work by teams at Google Brain and OpenAI, and multilingual initiatives linked to projects at Meta AI and the Allen Institute for AI. Hosts have included universities such as the University of Pennsylvania, Princeton University, the University of California, Berkeley, and the University of Washington, and locations such as Barcelona, Beijing, Maryland, Prague, and Lisbon.

Shared Tasks and Datasets

CoNLL is best known for running influential shared tasks and releasing benchmark datasets employed by groups such as the Stanford NLP Group, Berkeley NLP, Johns Hopkins University, the University of Sheffield, and the University of Illinois Urbana-Champaign. Notable tasks have addressed named entity recognition (the 2003 edition drew its English data from Reuters newswire), dependency parsing (related only indirectly, via the broader generative linguistics community, to frameworks associated with Noam Chomsky), coreference resolution advanced by teams at Google Research, semantic role labeling pursued by researchers at Columbia University, and cross-lingual transfer studied at Facebook AI. Datasets associated with CoNLL editions have been used in evaluations by labs such as Microsoft Research Asia, Tencent AI Lab, Huawei Noah's Ark Lab, Naver Labs, and research groups at the Max Planck Institute for Informatics and CNRS. These datasets are typically distributed in a simple one-token-per-line column format, sketched below.
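The CoNLL-2003 NER release, for example, uses whitespace-separated columns (token, part-of-speech tag, chunk tag, NER tag), blank lines between sentences, and "-DOCSTART-" lines marking document boundaries. The following minimal Python sketch assumes that layout; the file name is illustrative and this is not the official distribution tooling.

```python
from typing import Iterator, List, Tuple

def read_conll(path: str) -> Iterator[List[Tuple[str, ...]]]:
    """Yield sentences from a CoNLL-2003-style column file.

    Assumes one token per line, whitespace-separated columns
    (token, POS tag, chunk tag, NER tag), blank lines between
    sentences, and "-DOCSTART-" lines as document boundaries.
    """
    sentence: List[Tuple[str, ...]] = []
    with open(path, encoding="utf-8") as handle:
        for raw in handle:
            line = raw.strip()
            if not line or line.startswith("-DOCSTART-"):
                if sentence:  # a blank line or document marker ends the sentence
                    yield sentence
                    sentence = []
                continue
            sentence.append(tuple(line.split()))
    if sentence:  # flush a trailing sentence without a final blank line
        yield sentence

# Illustrative usage with a hypothetical file name:
# for sent in read_conll("eng.train"):
#     tokens = [cols[0] for cols in sent]
#     ner_tags = [cols[-1] for cols in sent]
```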

Impact on Natural Language Processing

CoNLL has shaped evaluation culture and benchmarking practices adopted by the Association for Computational Linguistics, industry players including Amazon Web Services, and academic centers such as the Massachusetts Institute of Technology, the University of Cambridge, and the University of Oxford. Results from CoNLL shared tasks have influenced curricula at institutions such as Columbia University, deployment choices at Google Cloud, and standards referenced in workshops at NeurIPS, ICML, and ICLR. The conference contributed to the shift toward deep learning architectures similar to models from Google Brain and consolidated practices used in production systems at Facebook, Twitter, and Slack Technologies.
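One benchmarking convention associated with the CoNLL shared tasks is entity-level scoring: a predicted entity counts as correct only when both its span boundaries and its type match the gold annotation exactly. The sketch below illustrates that idea for BIO-tagged sequences; it is a simplified stand-in for the official evaluation script, and the helper names are illustrative.

```python
from typing import List, Set, Tuple

def bio_to_spans(tags: List[str]) -> Set[Tuple[str, int, int]]:
    """Read a BIO tag sequence into (type, start, end) spans, end exclusive.

    Simplified convention: "B-X" opens an entity of type X, "I-X"
    continues it (or opens one if nothing is open), "O" closes it.
    """
    spans: Set[Tuple[str, int, int]] = set()
    start = etype = None
    for i, tag in enumerate(tags + ["O"]):  # sentinel "O" closes a trailing entity
        label = tag[2:] if "-" in tag else None
        # Close the open entity on "O", on a new "B-", or on a type change.
        if start is not None and (tag == "O" or tag.startswith("B-") or label != etype):
            spans.add((etype, start, i))
            start = etype = None
        # Open a new entity on "B-", or on an "I-" with nothing open.
        if label is not None and start is None:
            start, etype = i, label
    return spans

def entity_f1(gold: List[List[str]], pred: List[List[str]]) -> float:
    """Micro-averaged entity-level F1 with exact span-and-type matching."""
    tp = fp = fn = 0
    for g_tags, p_tags in zip(gold, pred):
        g, p = bio_to_spans(g_tags), bio_to_spans(p_tags)
        tp += len(g & p)
        fp += len(p - g)
        fn += len(g - p)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0
```

Under this convention, a prediction that tags only the first token of a two-token person name scores zero for that entity, since partial overlaps do not count.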

Organization and Sponsorship

CoNLL editions are organized by committees drawn from universities including the University of Pennsylvania, King's College London, the University of Amsterdam, and the University of Toronto, and from research labs such as Microsoft Research, Google Research, and IBM Research. Sponsors have included funding bodies and corporations such as the National Science Foundation, the European Commission, Google, Microsoft, Facebook, Amazon, Alibaba Group, and Baidu, as well as academic publishing efforts connected to the ACL Anthology. Program chairs and organizers have often been affiliated with institutions such as Princeton University, ETH Zurich, Tsinghua University, Peking University, and Carnegie Mellon University.

Category:Computational linguistics conferences