LLMpedia: The first transparent, open encyclopedia generated by LLMs

European Union AI Act

Generated by DeepSeek V3.2
Note: This article was automatically generated by a large language model (LLM) from purely parametric knowledge (no retrieval). It may contain inaccuracies or hallucinations. This encyclopedia is part of a research project currently under review.
European Union AI Act
Name: European Union AI Act
Legislature: European Parliament
Long title: Regulation laying down harmonised rules on artificial intelligence
Territorial extent: European Union
Introduced by: European Commission

The European Union AI Act is the European Union's pioneering comprehensive legal framework for artificial intelligence, proposed by the European Commission in April 2021. It establishes a unified regulatory and legal approach to the development, market placement, and use of AI systems across the single market, aiming to foster innovation while ensuring safety and fundamental rights. The regulation addresses risks posed by specific uses of AI technology and builds trust through clear requirements and oversight mechanisms, positioning the EU as a global standard-setter in digital governance alongside regulations such as the General Data Protection Regulation.

Overview and objectives

The legislative proposal was developed by the European Commission under President Ursula von der Leyen and Commissioner Thierry Breton, following extensive consultations and the 2020 White Paper on Artificial Intelligence. Its primary objective is to ensure that AI systems used in the European Union are safe, transparent, and respectful of fundamental rights as enshrined in the Charter of Fundamental Rights of the European Union. The act seeks to create a predictable legal environment that stimulates investment and innovation in AI technology across the single market, preventing regulatory fragmentation among member states such as France, Germany, and Italy. It aligns with broader EU digital strategy goals, complementing initiatives such as the Digital Services Act and the Digital Markets Act, and aims to enhance the global competitiveness of European actors like DeepL and Mistral AI.

Risk-based classification system

The regulation's core innovation is a four-tier, risk-based classification that dictates the level of regulatory scrutiny applied to different AI systems. Unacceptable-risk applications are prohibited entirely, while high-risk systems, such as those used in critical infrastructure, education, or law enforcement, are subject to strict ex-ante conformity assessments. Limited-risk systems, such as chatbots or emotion recognition systems, face specific transparency obligations, such as informing users that they are interacting with an artificial intelligence. Minimal-risk applications, which constitute the majority of AI systems, including AI-powered video games and spam filters, are largely exempt from new rules and operate under existing legislation such as the General Data Protection Regulation and the Product Liability Directive.
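The tiered logic above can be sketched as a simple lookup. The tier names come from the regulation, but the example use-case keys and the classify() helper are illustrative simplifications: real classification depends on the detailed criteria in the act's annexes, not a flat mapping.

```python
# Illustrative sketch of the AI Act's four-tier risk classification.
# Tier names follow the regulation; the use-case keys and classify()
# helper are hypothetical simplifications for illustration only.

from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "ex-ante conformity assessment required"
    LIMITED = "transparency obligations (e.g. disclose AI interaction)"
    MINIMAL = "no new obligations beyond existing law"


# Example mapping of use-case categories to tiers, following the article text.
USE_CASE_TIERS = {
    "social_scoring_by_public_authorities": RiskTier.UNACCEPTABLE,
    "critical_infrastructure": RiskTier.HIGH,
    "law_enforcement": RiskTier.HIGH,
    "chatbot": RiskTier.LIMITED,
    "emotion_recognition": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
    "video_game_ai": RiskTier.MINIMAL,
}


def classify(use_case: str) -> RiskTier:
    """Return the risk tier for a known use case, defaulting to MINIMAL,
    mirroring the article's note that most AI systems fall in the lowest tier."""
    return USE_CASE_TIERS.get(use_case, RiskTier.MINIMAL)
```

The default-to-minimal choice reflects the article's observation that the majority of AI systems carry minimal risk and face no new obligations.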

Prohibited AI practices

The act explicitly bans a narrow set of AI systems deemed to pose an unacceptable risk to the safety, livelihoods, and rights of people within the European Union. These prohibited practices include AI systems that deploy subliminal techniques or exploit vulnerabilities to materially distort behavior, such as toys using voice assistance to encourage dangerous acts by minors. It also bans the use of real-time remote biometric identification systems in publicly accessible spaces for law enforcement purposes, with narrowly defined exceptions subject to judicial authorization. Other prohibitions cover social scoring by public authorities that leads to detrimental treatment and the use of predictive policing systems based solely on profiling or assessing personality traits.

Requirements for high-risk AI systems

Providers of high-risk AI systems, such as those used in medical devices, recruitment, or access to essential private services, must comply with rigorous requirements before placing their products on the market. These mandates include establishing a robust risk management system, using high-quality training data sets to minimize bias, and maintaining detailed technical documentation for national authorities. Systems must be designed for effective human oversight; achieve appropriate levels of accuracy, robustness, and cybersecurity; and provide clear information to users, with compliance attested in an EU declaration of conformity. Providers based outside the EU, such as OpenAI or Google DeepMind, must appoint an authorized representative within the union to ensure compliance.
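As a rough mental model, the provider-side obligations read as a pre-market checklist. The field names below paraphrase the requirements listed above; they are not the act's own terminology, and a real conformity assessment involves far more than boolean flags.

```python
# Hypothetical pre-market checklist paraphrasing the high-risk provider
# obligations described above. Field names are illustrative, not the
# regulation's terminology.

from dataclasses import dataclass, fields


@dataclass
class HighRiskConformityChecklist:
    risk_management_system: bool = False
    high_quality_training_data: bool = False
    technical_documentation: bool = False
    human_oversight_design: bool = False
    accuracy_robustness_cybersecurity: bool = False
    user_information_provided: bool = False
    eu_authorized_representative: bool = False  # required for non-EU providers

    def ready_for_market(self) -> bool:
        """All obligations must be met before placing the system on the market."""
        return all(getattr(self, f.name) for f in fields(self))
```

The all-or-nothing check mirrors the ex-ante nature of the assessment: a high-risk system cannot be marketed with any obligation outstanding.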

Governance and enforcement

The regulation establishes a decentralized enforcement framework that relies on national market surveillance authorities designated by each member state. A new European Artificial Intelligence Office within the European Commission oversees the rules for general-purpose AI models and coordinates with a board of member state representatives. Non-compliance can result in significant administrative fines, calculated as the higher of a fixed cap or a percentage of the offending company's worldwide annual turnover, with the steepest tier reserved for violations of prohibited practices. The Court of Justice of the European Union serves as the ultimate judicial arbiter for legal disputes arising from the act's application.
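The penalty structure can be illustrated with a small calculation. The caps and percentages below are those of the final adopted text (the higher of the two amounts applies), but the sketch deliberately ignores SME-specific rules, mitigating factors, and proportionality assessments.

```python
# Illustrative maximum-fine calculation following the penalty tiers in the
# final adopted text: the applicable maximum is the HIGHER of a fixed cap
# or a share of worldwide annual turnover. Real fines are set case by case
# and may be far lower; SME rules and mitigating factors are ignored here.

FINE_TIERS = {
    # violation type: (fixed cap in EUR, share of worldwide annual turnover)
    "prohibited_practice": (35_000_000, 0.07),
    "other_obligation": (15_000_000, 0.03),
    "incorrect_information": (7_500_000, 0.01),
}


def max_fine(violation: str, worldwide_turnover_eur: float) -> float:
    """Return the theoretical maximum fine for a violation type."""
    cap, share = FINE_TIERS[violation]
    return max(cap, share * worldwide_turnover_eur)


# A company with EUR 1 billion turnover violating a prohibition faces up to
# max(35_000_000, 0.07 * 1_000_000_000) = EUR 70 million.
```

For smaller companies the fixed cap dominates: a firm with EUR 100 million turnover breaching an ordinary obligation still faces the EUR 15 million ceiling, since 3% of its turnover is only EUR 3 million.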

Timeline and implementation

Following a trilogue agreement between the European Commission, the Council of the European Union, and the European Parliament in December 2023, the final text was formally adopted in 2024. The regulation becomes generally applicable 24 months after its entry into force, with the bans on prohibited practices taking effect after only six months. Specific rules for general-purpose AI models apply after 12 months, while obligations for high-risk systems embedded in regulated products, such as medical devices under the Medical Devices Regulation, have a longer 36-month grace period. This phased implementation gives providers, conformity assessment bodies, and authorities across the EU time to adapt to the new requirements.
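The phased schedule can be sketched as month offsets from entry into force. The 1 August 2024 date below is the entry-into-force date that followed publication in the Official Journal; the simple month arithmetic approximates, to within a day, the act's statutory applicability dates.

```python
# Sketch of the phased applicability schedule, expressed as month offsets
# from entry into force (1 August 2024, following Official Journal
# publication). The naive month arithmetic approximates the statutory
# dates to within a day.

from datetime import date

ENTRY_INTO_FORCE = date(2024, 8, 1)

PHASE_OFFSETS_MONTHS = {
    "prohibited practices": 6,
    "general-purpose AI models": 12,
    "general applicability (most provisions)": 24,
    "high-risk systems in regulated products": 36,
}


def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole months (day-of-month is preserved,
    which is safe here because the anchor is the 1st)."""
    total = d.month - 1 + months
    return date(d.year + total // 12, total % 12 + 1, d.day)


for phase, offset in PHASE_OFFSETS_MONTHS.items():
    print(f"{phase}: applicable from {add_months(ENTRY_INTO_FORCE, offset)}")
```

Running the sketch shows the staggering clearly: prohibitions bite in early 2025, general-purpose model rules in mid-2025, most provisions in mid-2026, and product-embedded high-risk systems only in mid-2027.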