The EU AI Act: what you need to know
Lieke De Vries
The European Union (EU) is leading the world in setting rules for artificial intelligence (AI), a technology that has enormous potential but also poses significant risks to human rights, safety and democracy.
On 14 June 2023, the European Parliament adopted its negotiating position on the Artificial Intelligence Act (AI Act), which aims to introduce a common regulatory and legal framework for AI in the EU. Parliament will now negotiate the final text with the Council of the European Union, which represents the member states, before the Act becomes law.
The AI Act is based on the principle of human-centric and trustworthy AI, which means that AI systems should respect human dignity, autonomy, privacy and equality, as well as ensure a high level of protection for health, safety and the environment.
The Act proposes to classify and regulate AI systems according to their level of risk, from unacceptable to minimal. It also establishes obligations for providers and users of AI systems, such as transparency, accountability, quality and security.
Here are some of the key aspects of the AI Act:
Unacceptable risk
AI systems that are considered a clear threat to people's fundamental rights or values will be banned outright. These include:
- Cognitive behavioural manipulation of people or specific vulnerable groups: for example, voice-activated toys that encourage dangerous behaviour in children
- Social scoring: classifying people based on behaviour, socio-economic status or personal characteristics
- Real-time and remote biometric identification systems, such as facial recognition, in public spaces
Some exceptions may be allowed for specific purposes, such as law enforcement or national security, but only under strict conditions and with prior authorisation.
High risk
AI systems that could cause significant harm to people's health, safety or fundamental rights will be considered high risk and subject to strict requirements. These include:
- AI systems that are used in products falling under the EU's product safety legislation, such as toys, aviation, cars, medical devices and lifts
- AI systems falling into eight specific areas, such as education, employment, justice, law enforcement, migration and social security
High-risk AI systems will have to undergo a conformity assessment before being placed on the market or put into service. They will also have to comply with obligations such as:
- Providing clear and accurate information to users about the system's capabilities and limitations
- Ensuring human oversight and intervention in case of errors or risks
- Implementing technical measures to ensure data quality, security and accuracy
- Establishing a risk management system and keeping records of the system's performance and functioning
- Registering the system in an EU database
Other AI systems
AI systems that do not fall into the unacceptable or high-risk categories will be subject to minimal or no regulation. However, some general provisions will apply to all AI systems, such as:
- Prohibiting practices that exploit human vulnerabilities or subliminal techniques to cause physical or psychological harm
- Prohibiting practices that cause users to harm themselves or others by deception or coercion
- Requiring providers of AI systems that interact with humans, such as chatbots, to disclose that users are dealing with a machine
- Requiring providers of AI systems that generate or manipulate content (such as images, videos or text) to disclose that they are artificial
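The tiered structure described so far can be summarised in a toy sketch. The tier names and the mapping of example use cases are illustrative only, drawn from the examples in this article, and are not legal definitions:

```python
# Toy sketch of the AI Act's risk tiers as described in this article.
# Tier labels and example mappings are illustrative, not legal categories.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned outright"
    HIGH = "conformity assessment plus strict obligations"
    LIMITED = "transparency obligations (e.g. disclose AI-generated content)"
    MINIMAL = "little or no regulation"

# Hypothetical mapping of use cases mentioned above to tiers.
EXAMPLES = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "real-time biometric identification in public spaces": RiskTier.UNACCEPTABLE,
    "AI in medical devices": RiskTier.HIGH,
    "AI-assisted hiring tools": RiskTier.HIGH,
    "chatbot interacting with humans": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

for use_case, tier in EXAMPLES.items():
    print(f"{use_case}: {tier.name} -> {tier.value}")
```

The point of the sketch is simply that regulatory burden scales with the tier: everything above MINIMAL carries at least a disclosure duty, and HIGH adds the pre-market conformity assessment.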
Enforcement and sanctions
The AI Act will be enforced by national authorities designated by each member state. These authorities will have powers to monitor compliance, conduct inspections, order corrective measures and impose fines.
The maximum fine for breaching the AI Act will be 6% of the provider's or user's annual worldwide turnover. This ceiling applies to the most serious infringements, such as placing a prohibited AI system on the market, using one, or supplying false information to the authorities.
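As a back-of-the-envelope illustration of the 6% turnover cap, here is a minimal sketch. The function name and the turnover figures are made up for the example; the Act itself sets the cap as a percentage, not a formula:

```python
# Illustrative only: the 6%-of-annual-worldwide-turnover fine ceiling
# described above, applied to hypothetical turnover figures (in EUR).
def max_fine(annual_worldwide_turnover_eur: float, cap_rate: float = 0.06) -> float:
    """Return the maximum fine under a percentage-of-turnover cap."""
    return annual_worldwide_turnover_eur * cap_rate

# A hypothetical provider with EUR 500m worldwide turnover:
print(f"EUR {max_fine(500_000_000):,.0f}")  # EUR 30,000,000
```

For a large multinational, the percentage-based cap can dwarf any fixed-sum penalty, which is why turnover-based fines are the headline deterrent in EU digital legislation.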
The European Commission will also establish a European Artificial Intelligence Board (EAIB), which will consist of representatives from national authorities and relevant EU agencies. The EAIB will advise and assist the Commission on various aspects of the AI Act, such as issuing guidelines, developing standards and ensuring coordination.
Implications and challenges
The AI Act is a landmark initiative that aims to create a legal framework for AI that is consistent with EU values and principles. It also seeks to foster innovation and competitiveness in the EU's digital economy, as well as to promote international cooperation and convergence on AI regulation.
However, the AI Act also faces some challenges and criticisms, such as:
- The complexity and ambiguity of some of the definitions and concepts, such as what constitutes AI, risk or harm
- The potential burden and costs for providers and users of AI systems, especially small and medium-sized enterprises (SMEs) and start-ups
- The possible impact on the development and deployment of AI systems in the EU, especially in comparison to other regions such as the US or China
- The balance between ensuring adequate protection and preserving flexibility and innovation
- The coordination and cooperation among different stakeholders, such as national authorities, EU institutions, industry, civil society and international partners
The AI Act is still a work in progress and will undergo further discussions and revisions before becoming law. It is expected that the final version of the Act will be adopted by 2024 and enter into force by 2026.
The AI Act is a crucial step towards creating a common European approach to AI that is ethical, human-centric and trustworthy. It is also an opportunity for the EU to shape the global governance of AI and to set an example for other countries and regions.