Artificial Intelligence Regulation comes into force in the European Union

The European Union’s Artificial Intelligence Regulation came into force yesterday, 1 August 2024. The legislative act aims to promote the responsible development and deployment of artificial intelligence. Most of its provisions will apply from 2026, but some will already be binding next year.

Proposed by the Commission in April 2021 and approved by the European Parliament and the Council in December 2023, the Artificial Intelligence Regulation addresses the potential risks to the health, safety and fundamental rights of citizens, and establishes clear requirements and obligations for artificial intelligence that are graduated according to the different levels of risk and potential impact.

Definition of Artificial Intelligence

Article 3(1) of the Regulation defines Artificial Intelligence systems as:

“a machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”

Prohibited Applications – Artificial Intelligence of Unacceptable Risk

The new rules prohibit certain AI applications considered to pose unacceptable risks to citizens’ rights, including biometric categorisation systems based on sensitive characteristics and the untargeted scraping of facial images from the Internet or closed-circuit television footage to create facial recognition databases. Emotion recognition in the workplace and in schools, social scoring, predictive policing (when based solely on profiling a person or assessing their characteristics) and AI that manipulates human behaviour or exploits people’s vulnerabilities will also be prohibited.

Obligations applicable to systems considered high risk

Clear obligations are also laid down for other AI systems, classified as high risk owing to their potential for significant harm to health, safety, fundamental rights, the environment, democracy and the rule of law. Examples of high-risk uses of AI include critical infrastructure, education and vocational training, employment, essential public and private services (notably healthcare and banking), certain law enforcement systems, migration and border management, justice and democratic processes (e.g. influencing elections). Such systems must assess and mitigate risks, keep logs of use, be transparent and accurate, and ensure human oversight. Citizens will have the right to lodge complaints about AI systems and to receive explanations of decisions based on high-risk AI systems that affect their rights.

Artificial Intelligence with minimal or no risk

AI systems posing minimal or no risk remain largely unregulated.

Transparency requirements

General-purpose AI systems considered to pose limited risks, as well as the general-purpose AI models on which such systems are based, must comply with certain transparency requirements, including compliance with EU copyright law and the publication of detailed summaries of the content used for training. More powerful general-purpose AI models that may pose systemic risks will have to comply with additional requirements, such as performing model evaluations, assessing and mitigating systemic risks, and reporting serious incidents. Furthermore, artificial or manipulated image, audio or video content (“deep fakes”) must be clearly labelled as such.

Exemptions for law enforcement purposes

The use of remote biometric identification systems by law enforcement authorities is in principle prohibited, except in exhaustively listed and narrowly defined situations. Real-time remote biometric identification may only be deployed if strict safeguards are met, in particular if its use is limited in time and geographical scope and is subject to specific prior judicial or administrative authorisation. Such uses may include, for example, the targeted search for a missing person or the prevention of a terrorist attack. The use of remote biometric identification systems “post factum” (on a deferred basis) is considered a high-risk use case, requiring judicial authorisation linked to a criminal offence.

Innovation support measures

Regulatory sandboxes and real-world testing environments accessible to start-ups will have to be established at national level in the Member States, so that innovative AI can be developed and trained before it is placed on the market. The European Union’s Artificial Intelligence Office is being set up to help companies begin complying with the new rules before they become applicable.

This content is for information purposes only. For a case-specific analysis of your company’s or AI system’s situation, please contact us.