The Council has adopted its general approach to the AI Act. Its goal is to ensure that AI systems placed on the EU market are safe and respect fundamental rights and Union values.
The Council of the EU (the Council) approved its revisions to the draft EU Regulation on Artificial Intelligence (“AI Act”) on December 6, 2022. Before interinstitutional negotiations may commence, the European Parliament must finalize its position.
The Council's General Approach, the product of months of internal Council negotiations, is more business-friendly than the European Commission's (EC) original proposal. It narrows the definition of an AI system and the scope of the AI Act, and adds a layer to the high-risk classification that excludes systems which fall into a high-risk category but are used only as accessories to the relevant decision-making. Some requirements for providers of high-risk AI systems are made more technically feasible and less burdensome. The list of prohibited practices is expanded in some respects and narrowed in others, and penalties are reduced for small and medium-sized enterprises (SMEs).
AI regulation is a priority for Europe. Following its 2019 Ethics Guidelines for Trustworthy AI, the EC launched a three-pronged legal approach to AI regulation: the AI Act, new and amended civil liability rules, and sectoral legislation such as the General Product Safety Regulation, all aimed at fostering trustworthy AI in the EU. The AI Act will also operate alongside the General Data Protection Regulation, the Digital Services Act, the Data Act, and the Cyber Resilience Act.
AI Act Proposal
The EC proposed the AI Act in April 2021. The Proposal takes a cross-sector, risk-based approach and applies to all providers and users of AI systems in the EU, regardless of where they are located. The most harmful AI applications would be banned, while “high-risk” AI systems would have to meet strict requirements; providers of high-risk AI systems bear the heaviest obligations under the Proposal. Limited-risk AI systems would be subject to transparency requirements, while low-risk systems would not. A new “EU AI Board” would coordinate enforcement by national regulators. Companies could face fines of up to €30 million or 6% of global annual turnover.
AI Act Changes
• Scope is limited. AI systems used for research, defence, and national security purposes are excluded from the scope. AI systems used for personal, non-professional activities will be subject only to transparency obligations.
• “AI system” redefined. The Council’s narrower definition requires that an AI system operate with “elements of autonomy” and infer how to achieve a given set of objectives using “machine learning and/or logic- and knowledge-based approaches.” The EC would be empowered to further specify what falls within “logic- and knowledge-based approaches”.
• Some prohibited AI practices are narrowed, while others are broadened. AI systems that deploy harmful “subliminal techniques” or exploit the vulnerabilities of a defined set of groups remain prohibited. The Council adds vulnerable social and economic groups to that list and extends the ban on social scoring to private-sector actors. The Council also broadens the exceptions allowing law enforcement to use real-time facial recognition systems in publicly accessible spaces.
• High-risk AI categories updated. AI systems are high-risk if they relate to products that require a third-party conformity assessment under EU health and safety legislation (e.g., medical devices, radio equipment, and vehicles) or are used for a purpose listed in the Act. To the EC’s list, which includes remote biometric identification, recruitment, and creditworthiness assessment, the Council adds AI systems used in critical digital infrastructure and systems used to assess risk and pricing in life and health insurance. AI systems used by law enforcement for deep-fake detection, crime analytics, and travel-document authentication are no longer classed as high-risk.
• High-risk AI transparency and accountability changes. The Council clarifies record-keeping obligations, the information to be provided to users of high-risk AI systems, and the technical documentation required of SMEs. Risk management systems must address the risks to health, safety, and fundamental rights that are most likely to occur when the AI system is used for its intended purpose; risks arising from “reasonably foreseeable misuse” of the system are excluded. Providers of high-risk AI systems that are already subject to quality management obligations under other EU legislation may reuse components of those compliance programs to meet the AI Act’s requirements.
• Training data and bias detection changes. Providers of high-risk AI systems must ensure “to the best extent possible” that their training data is complete, relevant, representative, and free of errors. The Council limits bias-examination obligations to biases that are likely to affect people’s health and safety or lead to discrimination. Providers of high-risk AI systems must also take measures to mitigate bias arising from feedback loops.
• SME fine threshold and EU AI Board expansion. For enterprises with more than 250 employees and annual turnover exceeding €50 million, the Council sets the maximum fine at €30 million or 6% of worldwide annual turnover; only violations involving prohibited AI systems can attract the maximum fine. For SMEs, the cap is 3% of worldwide annual turnover. A new EU AI Board will coordinate enforcement by national regulators, establish expert groups, and advise the EC on international AI regulation matters.
Before “trilogue” negotiations can begin, the Parliament must finish amending the Proposal; parliamentarians are debating more than 3,000 amendments. Trilogues may commence once Parliament votes on those amendments, expected in the first half of 2023. The law could enter into force by the end of 2023, before the 2024 Parliament elections, after which companies would have two to three years to comply.