| European Commission High-Level Expert Group on AI | |
|---|---|
| Name | European Commission High-Level Expert Group on AI |
| Formation | 2018 |
| Dissolution | 2020 |
| Headquarters | Brussels |
| Parent organization | European Commission |
| Region served | European Union |
The European Commission High-Level Expert Group on AI was a temporary advisory body appointed by the European Commission to provide guidance on artificial intelligence policy, ethics, and innovation across the European Union. The group delivered reports and policy recommendations intended to inform proposals by the Juncker Commission, shape the agenda of the von der Leyen Commission, and align EU action with international dialogues such as those of the Organisation for Economic Co-operation and Development and the United Nations.
The High-Level Expert Group on AI was announced by the European Commission in 2018 during the tenure of Jean-Claude Juncker, against a backdrop of rising public debate sparked by incidents involving companies like Cambridge Analytica and research from institutions such as OpenAI and DeepMind. Its formation followed earlier EU initiatives, including the Digital Single Market strategy and advisory actions by the European Data Protection Supervisor after the adoption of the General Data Protection Regulation by the European Parliament and the Council of the European Union. The mandate reflected pressure from Member States including Germany, France, and Estonia, as well as coordination with bodies such as the European Investment Bank.
Membership combined representatives from academia, industry, and civil society, drawn from organizations such as the University of Cambridge, the University of Oxford, ETH Zurich, INRIA, Siemens, SAP SE, Google affiliates, and NGOs such as Amnesty International and the European Consumer Organisation (BEUC). The group included experts connected to institutions like the Max Planck Society, the Massachusetts Institute of Technology, and Tsinghua University, and professionals with prior roles at bodies such as the European Commission, the European Parliament, and national ministries in Italy and Spain. Governance followed European Commission advisory standards and involved liaisons with the European Data Protection Board and consultation events with stakeholders including the European Economic and Social Committee and representatives from the Council of Europe.
The group's mandate was to advise on trustworthy AI principles, risk assessment, and best practices to guide regulation by the European Commission and the European Parliament. Objectives included developing ethical guidelines consonant with values articulated in the Treaty on European Union and the Charter of Fundamental Rights of the European Union, proposing measures to stimulate AI research funding aligned with the Horizon 2020 programme and its successor Horizon Europe, and recommending frameworks for public procurement with agencies like the European Defence Agency and the European Investment Fund. The group aimed to bridge dialogues with international actors including the G7, the G20, and the International Telecommunication Union.
Principal outputs included the "Ethics Guidelines for Trustworthy AI," assessment lists and policy briefs, and a final report on policy and regulatory options that informed subsequent proposals such as the Artificial Intelligence Act (EU proposal). The group produced documents that referenced standards from organizations like the International Organization for Standardization and the Institute of Electrical and Electronics Engineers, and recommended benchmarking approaches used by research centres such as the Center for Data Innovation and think tanks like Bruegel and the Bertelsmann Foundation. Workshops and public consultations engaged stakeholders including European Research Council awardees, recipients of Marie Skłodowska-Curie Actions, and representatives from innovation hubs such as Station F and the Silicon Fen community.
The group's guidelines shaped drafts of the Artificial Intelligence Act (EU proposal), informed amendments debated in the European Parliament's Committee on Civil Liberties, Justice and Home Affairs, and impacted positions taken by Member States during trilogue discussions with the Council of the European Union. Its emphasis on "trustworthy AI" influenced procurement rules adopted by bodies like the European Commission's Directorate-General for Communications Networks, Content and Technology and funding priorities within Horizon Europe. Internationally, the group's outputs were referenced in dialogues at the Organisation for Economic Co-operation and Development and influenced standards discussions at the United Nations Educational, Scientific and Cultural Organization.
Critics from civil society organizations including Access Now, and scholars from institutions like University College London, argued that the group was biased toward industry interests because of member affiliations with companies such as Google and Microsoft. Some Members of the European Parliament questioned the group's transparency and the adequacy of its engagement with privacy advocates linked to the European Data Protection Supervisor and with unions such as the European Trade Union Confederation. Legal scholars citing jurisprudence from the Court of Justice of the European Union, along with commentators from outlets such as Politico Europe and The Economist, debated whether non-binding guidance could meaningfully constrain market actors or whether it risked delaying stronger, binding rules of the kind later proposed by the von der Leyen Commission.
Category:European Commission advisory bodies
Category:Artificial intelligence in Europe