LLMpedia
The first transparent, open encyclopedia generated by LLMs

Queer in AI

Generated by GPT-5-mini
Note: This article was automatically generated by a large language model (LLM) from purely parametric knowledge (no retrieval). It may contain inaccuracies or hallucinations. This encyclopedia is part of a research project currently under review.
Article Genealogy
Expansion funnel: Raw 122 → Dedup 0 → NER 0 → Enqueued 0
Queer in AI
Name: Queer in AI
Formation: 2017
Type: Professional association
Headquarters: San Francisco, California
Region served: Global
Website: Official website

Queer in AI is an international advocacy and community organization supporting LGBTQ+ researchers and practitioners in artificial intelligence. It connects members across academia and industry, promotes inclusive research practices, and organizes events to increase the visibility of queer professionals in machine learning. The organization engages with policy, publishes community resources, and collaborates with allied institutions to address bias, safety, and equity in AI systems.

History

Queer in AI emerged during a period of rapid expansion in machine learning communities associated with the NeurIPS, ICML, CVPR, ACL, and AAAI conferences. Founders drew on networks from Google Research, OpenAI, DeepMind, Facebook AI Research, Microsoft Research, IBM Research, and academic labs at Stanford University, the Massachusetts Institute of Technology, the University of California, Berkeley, Carnegie Mellon University, and the University of Toronto. Early organizing paralleled initiatives such as Black in AI, Women in Machine Learning (WiML), and Latinx in AI. The initiative formalized meetups, mentorship, and workshops around events like NeurIPS 2017 and ICLR, responding to debates in venues such as The New York Times, The Guardian, and academic forums including arXiv preprints. Growth was shaped by collaborations with foundations and institutions such as the Mozilla Foundation, the Ford Foundation, the Open Society Foundations, the Allen Institute for AI, and Horizon 2020-funded projects.

Mission and Activities

The group's stated goals center on inclusion in technology workplaces at companies such as Amazon Science, Apple Inc., Salesforce Research, and Uber AI Labs. Activities include mentorship programs linking students from Harvard University, Princeton University, Yale University, the University of Oxford, the University of Cambridge, and ETH Zurich with industry partners at NVIDIA, Intel Labs, Qualcomm Research, and Siemens. It produces resources addressing fairness and safety relevant to policy discussions at the European Commission, the United Nations, the U.S. Department of Homeland Security, and advisory bodies such as NIST. The organization has issued guidance examined by research groups at Berkeley AI Research (BAIR), MIT CSAIL, and the Oxford Machine Learning Research Group, and by think tanks such as the Brookings Institution and the Center for Humane Technology.

Community and Membership

Membership spans professionals and students associated with institutions including Columbia University, University of Washington, University of Michigan, Imperial College London, Peking University, Tsinghua University, Australian National University, University of Toronto Scarborough, and labs at Salesforce Research. Notable participants have included researchers formerly affiliated with Google Brain, Meta AI, DeepMind Ethics & Society, AI Now Institute, and the Partnership on AI. The network facilitates connections with affinity groups such as TransTech Social Enterprises, Lesbians Who Tech, Out in Tech, GLAAD, and Human Rights Campaign. Chapters and local meetups have appeared in cities including San Francisco, London, New York City, Toronto, Berlin, Paris, Tel Aviv, Bangalore, Sydney, Shanghai, and São Paulo.

Research and Publications

Queer in AI members have co-authored peer-reviewed work appearing at venues like NeurIPS, ICML, AAAI, ACL, EMNLP, CHI, and journals such as Nature Machine Intelligence and IEEE Transactions on Pattern Analysis and Machine Intelligence. Topics include algorithmic fairness, dataset auditing, representation learning, privacy-preserving machine learning, and social impact assessments related to datasets such as ImageNet, MS COCO, Wikidata, and corpus efforts connected to Wikipedia. Research teams have included collaborators from Stanford Human-Centered AI, MIT Media Lab, Oxford Internet Institute, Data & Society Research Institute, AI Now Institute, and The Alan Turing Institute. Community-authored reports and white papers have informed policy debates at European Parliament committees, U.S. Congress hearings, and NGO forums hosted by Amnesty International and Human Rights Watch.

Events and Conferences

The organization runs workshops, mentorship sessions, and social events co-located with major conferences such as NeurIPS, ICML, ICLR, CVPR, ACL, and CHI. It has organized panels with speakers from DeepMind, OpenAI, Google Research, Microsoft Research, and universities including Columbia University and Princeton University. Events have featured collaborations with festival and summit partners including SXSW, re:publica, and Ada Lovelace Day, and with policy venues such as World Economic Forum (WEF) side events. Regional workshops and unconference formats have been hosted at institutions including Goldsmiths, University of London; the University of Toronto's Vector Institute; and the University of California, Berkeley's Center for Human-Compatible AI.

Advocacy and Policy Work

Queer in AI engages in advocacy around nondiscrimination and algorithmic accountability in contexts overseen by bodies such as the European Commission, the United States Federal Trade Commission, and the National AI Advisory Committee. It submits testimony and amicus comments alongside organizations such as the Electronic Frontier Foundation, Creative Commons, Access Now, the Center for Democracy & Technology, and Human Rights Watch. Policy priorities include bias mitigation in hiring systems used by corporations including LinkedIn, Indeed, and HireVue, and surveillance impacts tied to vendors such as Clearview AI and Palantir Technologies. The group has participated in consultations with standards bodies such as ISO and with national regulators in Canada and the United Kingdom.

Criticism and Controversies

Critiques have addressed representation, fundraising, and partnerships, echoing debates within organizations such as Black in AI and Women in Machine Learning. Controversies have involved decisions about sponsorship from corporations including Google, Meta Platforms, and Amazon, as well as tensions at events that paralleled disputes at venues like NeurIPS 2020 and policy disagreements referenced in commentary from The Guardian and Wired. Discussions in academic forums on arXiv and in panels at ICML have scrutinized the balance between advocacy and academic independence, data governance practices, and intersectional inclusion, including collaborations with transgender-equality advocates and civil society groups.

Category:LGBT organizations