LLMpedia: The first transparent, open encyclopedia generated by LLMs

HCOMP

Generated by GPT-5-mini
Note: This article was automatically generated by a large language model (LLM) from purely parametric knowledge (no retrieval). It may contain inaccuracies or hallucinations. This encyclopedia is part of a research project currently under review.
Article Genealogy
Parent: AAAI Hop 4
Expansion Funnel: Extracted 66 → After dedup 0 → After NER 0 → Enqueued 0
HCOMP
Name: HCOMP
Established: 2009
Discipline: Human Computation
Organizer: International community
Frequency: Annual

HCOMP is an annual conference and workshop series focused on human computation, crowdsourcing, and hybrid human–machine systems. It brings together researchers from computer science, cognitive science, psychology, and human factors to present empirical studies, algorithms, platforms, and evaluations. The event fosters cross-disciplinary exchange among practitioners from organizations such as Google, Microsoft Research, Amazon (whose Mechanical Turk platform features prominently in the field), Stanford University, and the Massachusetts Institute of Technology.

Overview

HCOMP convenes academic researchers, industry engineers, and policy analysts to explore settings where humans and machines collaborate, including crowdsourcing workflows, human-in-the-loop machine learning, and collective intelligence. Typical participants include faculty from Carnegie Mellon University, postdocs from the University of California, Berkeley, engineers from Facebook, and teams from startups spun out of the MIT Media Lab. Its program overlaps thematically with conferences such as CHI, NeurIPS, ICML, and AAAI, and it attracts attendees from labs such as IBM Research and OpenAI.

History

HCOMP grew out of early human computation and distributed problem-solving efforts; the series began as a workshop in 2009 and later became a standalone AAAI conference. Early milestones involved demonstrations from groups at Yahoo! Research and collaborations with platforms such as Mechanical Turk and projects linked to DARPA. Over time, the event evolved through partnerships with university labs at the University of Washington, Columbia University, and New York University. Key moments included special workshops co-located with SIGCHI and joint programs with funding agencies such as the National Science Foundation and corporate sponsors such as Intel and Adobe. HCOMP has documented the field's shift from simple microtask aggregation to complex hybrid systems, influenced by breakthroughs from teams at DeepMind and academic results disseminated at ACL and EMNLP.

Tasks and Benchmarks

HCOMP sessions typically present tasks and benchmarks that evaluate human and hybrid system performance on annotation, synthesis, and decision-making. Common benchmark tasks include image annotation challenges related to datasets such as ImageNet and labeling schemes influenced by work from the Stanford Vision Lab and MIT CSAIL. Language understanding and annotation tasks often reference datasets and evaluation protocols popularized in GLUE, SQuAD, and corpora used by groups at the Allen Institute for AI. Other benchmarks address crowdsourced transcription inspired by archives at the Library of Congress and citizen science efforts like Zooniverse. Comparative evaluations draw on standards used by teams at NIST and metrics promoted at competitions such as Kaggle.
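As an illustration of the kind of agreement metric used in such comparative evaluations of annotation quality, the sketch below computes Cohen's kappa, a standard chance-corrected agreement score between two annotators. The function and the example labels are hypothetical and not drawn from any specific HCOMP benchmark.

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Chance-corrected agreement between two annotators over the same items."""
    assert len(labels_a) == len(labels_b), "annotators must label the same items"
    n = len(labels_a)
    # Observed agreement: fraction of items where both annotators agree.
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected agreement under chance, from each annotator's label frequencies.
    counts_a, counts_b = Counter(labels_a), Counter(labels_b)
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    if expected == 1.0:  # degenerate case: both always give the same label
        return 1.0
    return (observed - expected) / (1 - expected)

# Two annotators disagree on one of four items.
kappa = cohens_kappa(["cat", "dog", "cat", "bird"],
                     ["cat", "dog", "dog", "bird"])
```

A kappa near 1 indicates strong agreement beyond chance; values near 0 indicate agreement no better than random labeling with the same marginal frequencies.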

Methodologies and Tools

Methodologies presented at HCOMP span experimental design, incentive mechanisms, quality control, aggregation algorithms, and hybrid orchestration frameworks. Notable algorithmic approaches discussed include probabilistic graphical models for label aggregation, statistical aggregation methods advanced by researchers at Columbia University, and active learning strategies employed by groups at the University of Toronto. Tools and platforms demonstrated include extensions to Amazon Mechanical Turk, workflow systems inspired by TurKit and projects from the MIT Media Lab, as well as open-source frameworks maintained by communities around GitHub repositories. Evaluation techniques often leverage randomized controlled trials and the statistical analysis methods standard in experimental social science.
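To make the quality-control and aggregation ideas above concrete, here is a minimal, hypothetical sketch combining two common techniques: estimating worker reliability from embedded gold-standard questions, then aggregating labels by accuracy-weighted majority vote. The function names and data structures are illustrative assumptions, not a specific system presented at HCOMP.

```python
from collections import defaultdict

def worker_accuracy(gold_answers, worker_labels):
    """Estimate each worker's accuracy from tasks with known (gold) answers."""
    acc = {}
    for worker, answers in worker_labels.items():
        graded = [(t, lab) for t, lab in answers.items() if t in gold_answers]
        if graded:
            acc[worker] = sum(gold_answers[t] == lab for t, lab in graded) / len(graded)
        else:
            acc[worker] = 0.5  # no gold evidence: treat the worker as uninformative
    return acc

def weighted_vote(task_labels, accuracies):
    """Pick the label for one task, weighting each vote by worker accuracy."""
    scores = defaultdict(float)
    for worker, label in task_labels.items():
        scores[label] += accuracies.get(worker, 0.5)
    return max(scores, key=scores.get)

# Hypothetical usage: g1/g2 are gold questions, t1 is the real task.
gold = {"g1": "pos", "g2": "neg"}
worker_labels = {
    "w1": {"g1": "pos", "g2": "neg", "t1": "pos"},  # perfect on gold
    "w2": {"g1": "pos", "g2": "pos", "t1": "neg"},  # half right on gold
    "w3": {"g2": "neg", "t1": "neg"},               # perfect on gold
}
acc = worker_accuracy(gold, worker_labels)
label = weighted_vote({w: a["t1"] for w, a in worker_labels.items()}, acc)
```

More sophisticated approaches in the literature, such as Dawid–Skene-style estimators, jointly infer worker confusion matrices and true labels via EM rather than relying on gold questions alone.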

Applications

Applications covered at HCOMP range across industry sectors and scientific domains. In healthcare, collaborations between teams at Johns Hopkins University and startups working with Mayo Clinic have explored human-in-the-loop diagnostic annotation. In environmental science, citizen science workflows connect with projects run by National Geographic and NASA data initiatives. In digital humanities, crowdsourced transcription projects tie to collections at the British Library and Smithsonian Institution. Commercial applications presented by participants from Uber, Airbnb, and LinkedIn demonstrate hybrid approaches for content moderation, recommendation tuning, and data labeling. Security and defense applications discussed include threat analysis prototypes influenced by collaborations with labs at MIT Lincoln Laboratory.

Ethical and Societal Implications

HCOMP addresses ethical, legal, and social issues arising from human computation, such as worker payment, consent, privacy, and bias. Discussions often cite labor studies by scholars at the University of Oxford and policy frameworks debated at institutions such as the European Commission and United Nations forums. Research ethics panels reference Institutional Review Board practices at universities including Yale University and the University of Cambridge. Debates at HCOMP examine algorithmic fairness informed by work at the ACM FAccT conference (formerly FAT*) on Fairness, Accountability, and Transparency, transparency initiatives promoted by the Electronic Frontier Foundation, and legal perspectives shaped by scholars at Harvard Law School and Stanford Law School. The community also explores interventions proposed by labor advocates, consumer-protection researchers, and NGOs active in digital rights.

Category:Conferences