Artificial Intelligence is moving fast—it was only a matter of time before the world’s first comprehensive legal framework to govern this technology was enacted. We are talking about the EU AI Act.
This regulatory framework is not new; in fact, it entered into force on 1 August 2024. Its implementation is being rolled out in phases, with the final requirements taking effect in August 2027.
But what is it exactly, and what does it mean for AI service providers in the European Union? Furthermore, what does it mean for companies using AI? Who should be concerned about it?
Below is a structured breakdown of its architecture, obligations, and relevance to both AI service providers and deployers.
What is the AI Act for
The AI Act has a dual objective:
- To promote the uptake of trustworthy, human-centric AI across the EU single market.
- To ensure the highest protection for health, safety, fundamental rights, democracy, the rule of law, and the environment. [1]
The framework aims to harmonize the rules on AI technology for members across the European Union and prevent fragmented policies that could harm EU citizens. With a single approach, AI-powered goods and services can circulate freely between Member States.
Policy logic
The regulatory approach of the AI Act is risk-based, meaning the greater the potential risk, the stricter the obligations. This is why the regulation is built on different tiers, each matching a specific risk level. The tiers range from outright bans at the top, through heavy compliance requirements for high-risk systems, to simple disclosure rules for limited-risk systems, and no mandatory obligations for the last tier.
The risk pyramid and its four tiers
The following recaps the four tiers covered by the AI Act. For further information, please visit the European Union’s official AI Act page.
Tier 1: Prohibited AI Practices (Article 5)
The first tier covers AI practices that are strictly banned within the European Union; these prohibitions took effect on February 2, 2025. The following practices are considered unacceptable due to their potential risks to the EU’s values and rights.
| Banned practice | Definition |
| --- | --- |
| Subliminal manipulation and deception | AI using techniques beyond a person’s consciousness to influence and distort behaviour, ultimately causing harm |
| Exploiting user vulnerabilities | Targeting people based on age, disability, or socioeconomic situation |
| Social scoring | Evaluating or classifying people based on social behaviour, influencing an individual’s opportunities in unrelated contexts |
| Predictive policing based solely on profiling | Predicting criminal behaviour from personality traits alone |
| Untargeted facial recognition scraping | Building databases by scraping images from the internet or CCTV |
| Workplace and educational emotion inference | Reading emotions in workplaces or schools, except in medical and safety contexts |
| Biometric categorization for sensitive attributes | Inferring race, religion, political opinions, sexual orientation, etc. from biometric data |
The last banned practice is real-time remote biometric identification (RBI) in public spaces for law enforcement, which comes with three narrow exceptions:
- Targeted searches for victims of abduction, human trafficking, or sexual exploitation, and searches for missing persons
- Preventing a specific, substantial, and imminent threat to life or physical safety, or a genuine and foreseeable threat of a terrorist attack
- Locating or identifying suspects of serious crimes
Tier 2: High-Risk AI Systems (Articles 6–27)
This tier comprises the last obligations to take effect: most will apply from 2 August 2026. For product-embedded systems (AI built into products that are already regulated under existing EU legislation), the rules will apply from 2 August 2027. [5,6]
High-risk classification is triggered through two tracks:
Track A covers AI used as a safety component in products already subject to EU product-safety legislation (medical devices, machinery, aviation, vehicles, etc.).
Track B covers AI deployed in eight sensitive use-case areas, all listed in Annex III:
- Biometrics (remote identification, categorization, emotion recognition)
- Critical infrastructure
- Education and vocational training
- Employment and worker management
- Access to essential private and public services
- Law enforcement
- Migration, asylum, and border control
- Administration of justice and democratic processes
Obligations for high-risk providers
| Requirement | Description |
| --- | --- |
| Risk management (Art. 9) | Continuous, lifecycle-spanning risk identification, evaluation, and mitigation |
| Data governance (Art. 10) | Training data must be relevant, representative, free of errors, and examined for bias |
| Technical documentation (Art. 11) | Comprehensive documentation before market placement, retained for 10 years |
| Automatic logging (Art. 12) | Systems must generate event logs enabling traceability throughout their lifetime |
| Transparency (Art. 13) | Clear information to deployers on capabilities, limitations, and correct use |
| Human oversight (Art. 14) | Designed to enable humans to understand, monitor, override, or halt the system |
| Accuracy, robustness, cybersecurity (Art. 15) | Declared accuracy metrics; resilience to errors and adversarial attacks |
| Quality management system (Art. 17) | Lifecycle-spanning documented quality processes |
| Conformity assessment (Art. 43) | Self-assessment for most Annex III systems; mandatory third-party audit for biometric identification systems |
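To make one of these duties concrete: Article 12 requires event logs that keep a system traceable over its lifetime, but it does not prescribe a format. The sketch below is a minimal, hypothetical illustration in Python; the field names and identifiers are our own assumptions, not anything mandated by the Act.

```python
import json
import logging
from datetime import datetime, timezone

# Hypothetical traceability logger. The AI Act only requires that logs
# enable tracing of the system's functioning; the structure is up to you.
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("ai_system.audit")

def log_inference_event(system_id: str, input_ref: str,
                        output_ref: str, operator: str) -> None:
    """Record one inference as a structured, machine-readable event."""
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,    # identifies the AI system and version
        "input_ref": input_ref,    # pointer to the input data, not the data itself
        "output_ref": output_ref,  # pointer to the produced output
        "operator": operator,      # human overseer on duty (cf. Art. 14)
    }
    logger.info(json.dumps(event))

# Example: a hypothetical CV-screening system logging one decision
log_inference_event("cv-screener-v2.1", "doc-8841", "score-8841", "j.doe")
```

Keeping references rather than raw data in the log is a design choice that helps reconcile Article 12 traceability with data-minimization duties under the GDPR.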
But AI service providers are not the only ones facing the AI Act’s obligations. Deployers, meaning organizations using AI systems in their operations (such as AI-powered contract review tools or HR screening solutions), must also comply with a set of obligations under Article 26 of the Act. The table below is non-exhaustive.
Obligations for high-risk deployers
| Requirement | Description |
| --- | --- |
| Use the system as instructed (§1) | Must follow the provider’s instructions for use and not repurpose the AI beyond its intended function |
| Assign qualified human oversight (§2) | Appoint trained people with the authority and resources to supervise, intervene, and override the AI; requires named, competent individuals |
| Monitor systems and report problems (§5) | Watch performance and suspend usage in case of risks; deployers must report incidents to providers and authorities |
| For public bodies, register use (§8) | Public authorities must check the EU database before using a high-risk system; if it is not registered, they cannot use it |
| Cooperate with authorities (§12) | Work with regulators on any action they take regarding the AI system; obstructing this obligation is itself a compliance failure |
Good to know: Non-compliant organizations face fines of up to €15 million or 3% of global annual turnover (whichever is higher), and up to €7.5 million or 1% for supplying incorrect or misleading information.
Tier 3: Limited Risk / Transparency Obligations (Article 50)
Article 50 of the AI Act requires providers and deployers of limited-risk systems to comply with transparency obligations. These systems include:
- Chatbots
- Deepfakes and synthetic media
- Emotion recognition and biometric categorization systems
| Who | Obligation | Exception |
| --- | --- | --- |
| Provider | Tell users when they are interacting with an AI (§1) | Not required if obvious to a reasonable person, or for criminal law enforcement |
| Provider | Mark all AI-generated content (audio, video, text, and image) as AI-generated in a machine-readable format (§2) | Except for standard editing assistance or criminal law enforcement |
| Deployer | Inform users when emotion recognition or biometric categorization is used on them (§3) | Not needed for criminal law enforcement (with safeguards) |
| Deployer | Disclose deepfakes and AI-generated public-interest text (§4) | For creative works, disclosure is required but may not disturb the audience’s enjoyment; for text, it is not required if the content has undergone human review |
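The Act does not prescribe how the machine-readable marking in §2 must be implemented; C2PA content credentials and IPTC’s digital source type vocabulary are emerging conventions. The Python sketch below is illustrative only: it uses the Pillow library to embed a marker in a PNG’s metadata, and the key names are our own assumptions.

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Stand-in for a generated image; in practice this comes from the model
img = Image.new("RGB", (512, 512))

meta = PngInfo()
# Key names are illustrative; the URI below is a real IPTC vocabulary term
# used to flag content created by a trained algorithmic model
meta.add_text("ai_generated", "true")
meta.add_text("digital_source_type",
              "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia")

img.save("output.png", pnginfo=meta)
print("Saved output.png with a machine-readable AI-generation marker")
```

Note that simple metadata tags can be stripped on re-encoding; that is one reason cryptographically signed provenance schemes such as C2PA are gaining traction.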
Tier 4: Minimal Risk
The vast majority of AI systems (spam filters, recommendation engines, video game AI, productivity tools) carry no mandatory obligations. Voluntary codes of conduct are encouraged.
All GPAI models must comply
Chapter V of the AI Act covers General-Purpose AI models (GPAI models). Because the risks these models carry depend on their downstream integration rather than on a predefined use case, they are regulated separately from the risk pyramid. The obligations below entered into force in August 2025.
For this reason, all GPAI providers must:
- Prepare technical documentation
- Provide downstream integrators with capability and limitation information
- Implement a copyright compliance policy
- Publish a sufficiently detailed summary of training data content
In addition, GPAI models considered to carry systemic risk* must also:
- Perform adversarial testing (red-teaming)
- Conduct systematic risk assessment and mitigation
- Report serious incidents to the AI Office
- Apply enhanced cybersecurity measures
Good to know: The GPAI Code of Practice, published in July 2025, provides a voluntary compliance pathway. Its signatories include Amazon, IBM, Google, and Microsoft, among others.
* Presumed when cumulative training compute exceeds 10²⁵ FLOP (floating-point operations), a measure of the total computation used to train the model, not of its processing speed
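To put that threshold in perspective, training compute for dense transformer models is often estimated with the community rule of thumb C ≈ 6 × N × D, where N is the parameter count and D the number of training tokens. The approximation and the model figures below are our own assumptions, not part of the Act:

```python
# Rough back-of-the-envelope check against the AI Act's systemic-risk
# presumption. C ~ 6 * N * D is a common heuristic, not a legal formula.
SYSTEMIC_RISK_THRESHOLD = 1e25  # FLOP, per the Act's presumption

def training_compute(params: float, tokens: float) -> float:
    """Approximate total training FLOP for a dense transformer."""
    return 6 * params * tokens

# Hypothetical model: 70B parameters trained on 15T tokens
compute = training_compute(70e9, 15e12)
print(f"Estimated compute: {compute:.2e} FLOP")  # ~6.3e24 FLOP
print("Presumed systemic risk" if compute >= SYSTEMIC_RISK_THRESHOLD
      else "Below the 1e25 FLOP presumption")
```

Under this estimate, a 70B-parameter model trained on 15 trillion tokens would land just below the presumption, which shows how close today’s frontier training runs sit to the line.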
Who enforces the AI Act
As with other EU-wide regulations, enforcement operates on two levels: EU and national.
EU level enforcement
The European Union established an AI Office to directly enforce GPAI obligations. This office is supported by independent experts and an advisory forum drawing on industry, academia, and civil society.
The European AI Board, for its part, coordinates national application; it counts one representative per Member State. [1]
National level enforcement
Each Member State must designate market surveillance authorities with powers to access documentation, datasets, and even source code, and to order system withdrawal or recall. Notified bodies conduct third-party conformity assessments for biometric identification and product-safety systems.
Some Member States have created new governing bodies to oversee enforcement of the AI Act, while others rely on existing authorities. For instance, Spain created the AESIA (Agencia Española de Supervisión de la Inteligencia Artificial) in 2023, while France and Germany respectively assigned AI oversight to the CNIL (Commission Nationale de l’Informatique et des Libertés) and the BNetzA (Bundesnetzagentur / Federal Network Agency).
Regulatory sandboxes are mandated: every Member State must establish at least one by August 2026. These controlled environments allow providers to develop and test AI under regulatory supervision, with free access for SMEs. As of March 2026, operational sandboxes exist in Denmark, Spain, Italy, Luxembourg, and France, with many others in development.
What comes next for the GC and the Legal Function
The EU AI Act fundamentally redefines the Legal Function’s role in technology governance. No longer merely a compliance checkpoint, the legal function is now a key builder of the AI accountability architecture.
This is not about taking on IT’s responsibilities; it is about ensuring that AI adoption aligns with legal risk tolerance, regulatory requirements, and strategic objectives. In our next article, we explore the challenges for General Counsel, how they can lead this transformation, and what governance structures will separate reactive compliance from strategic leadership.


