The EU AI Act: What Changes for the GC and Legal Function  

General Counsels are now personally accountable for AI governance, not just their IT vendors. The EU AI Act is the first comprehensive AI governance framework of its kind. It entered into force in August 2024 and is being applied progressively across all Member States, with the next major wave of obligations taking effect in August 2026. This shifts AI oversight from a vendor question to a boardroom responsibility.

The GC and the Legal Function are becoming the Act's internal coordinators, turning a regulation into a workable operating model.

AI risk management is not a one-shot project

Unlike many other regulations, the AI Act calls for a roadmap and continuous monitoring. Its phased rollout, with obligations applying in waves since 2024 and the final ones taking effect by August 2027, makes a one-off compliance exercise impossible.

Four key challenges for the GC  

The following list is non-exhaustive, as challenges vary by industry, company size, and, of course, AI readiness. Nevertheless, four common challenges posed by the AI Act stand out for internal legal teams.

Classification in a nuanced context  

The EU's regulatory framework sets out an extensive body of rules for risk-classifying AI tools, whether built and deployed in-house or sourced from third-party providers. The key point to keep in mind is that, under the AI Act, a system that looks low-risk at first can turn out to be high-risk depending on how the tool is used, integrated, or customized.
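To make that point concrete, here is a minimal sketch of a use-aware classification check. The category names and the override rule are illustrative assumptions only, not the Act's actual Article 6 or Annex III tests:

```python
# Illustrative sketch: category names and rules are simplified assumptions,
# not the AI Act's actual classification criteria.
HIGH_RISK_USES = {"employment_screening", "credit_scoring", "biometric_id"}

def effective_risk(vendor_label: str, deployed_use: str) -> str:
    """A tool's effective risk follows its actual use, not its vendor label."""
    if deployed_use in HIGH_RISK_USES:
        # Use, integration, or customization can override a "low-risk" label.
        return "high"
    return vendor_label
```

The design point is that classification must be re-run per deployment context, not recorded once at procurement.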

The role of the GC will expand beyond advising: they must establish clear processes for identifying high-risk AI tools and services and assign, or at least help assign, owners of those decisions.

Choosing the right vendors  

Contract risk will be a major concern for many organizations. Most companies rely on third-party AI tools for internal work (think of an AI-powered contract review tool), and while providers will supply some information, it falls to the GC and their team to ensure there are no legal risk gaps.

Under the regulation, AI providers must give transparent information on system behaviour, training data, and updates; even so, the GC needs to ensure that contracts allocate responsibility clearly. The old model that shifted full liability for any incident onto the vendor no longer holds. GCs must now verify, document, and own the compliance posture of every high-risk AI system they deploy.

A note on data governance:  

Article 10’s requirements on training data quality, representativeness, and bias examination are particularly consequential for legal AI tools trained on historical legal documents, which risk encoding jurisdictional, demographic, or jurisprudential biases. Providers must conduct proactive bias audits, and the GDPR’s data minimization principle creates an inherent tension with the Act’s demand for sufficiently representative datasets. 

Making compliance operational  

A third challenge is making compliance operational. The AI Act requires more than legal interpretation; it demands documentation, training, governance, and ongoing monitoring. For GCs, the real issue is how to embed these controls into procurement, deployment approvals, and periodic reviews.

The “privilege” paradox  

This is perhaps the sharpest tension for GCs: the Act’s transparency and logging requirements (Articles 12 and 13) demand documented audit trails of AI system operations. This sounds standard, but in legal contexts those very logs, prompts, and metadata may become discoverable records that compromise legal professional privilege. 

Several core risk areas have been identified, such as AI misclassification of privileged documents in eDiscovery or AI transcription of privileged communications. For the Legal Function as a whole, these risks must be kept under control.

Consider a scenario where a GC receives a discovery request that encompasses AI-generated logs from internal contract review workflows. Those logs may reveal strategic reasoning, litigation strategy, or confidential client advice, all historically protected under privilege. Now they are audit trails required by law. The tension is real: compliance demands transparency, but privilege depends on confidentiality. 

So how do GCs navigate this?

The answer lies in designing privilege-aware AI governance from the outset. This means implementing role-based access controls, segregating privileged workflows from general business use, and ensuring AI logs distinguish between operational data and protected legal analysis. In practice, it is about engineering privilege into the system architecture and not retrofitting it after deployment. 
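As a sketch of that architectural idea, consider a log schema that tags each entry as operational or privileged at write time, so audit exports can prove an event occurred without disclosing its content. The field names and redaction rule below are illustrative assumptions, not a prescribed standard:

```python
from dataclasses import dataclass, replace
from enum import Enum

class Sensitivity(Enum):
    OPERATIONAL = "operational"   # system metadata, safe for audit export
    PRIVILEGED = "privileged"     # legal analysis, withheld from export

@dataclass(frozen=True)
class LogEntry:
    timestamp: str
    actor_role: str       # role-based access: who may read the payload
    event: str
    sensitivity: Sensitivity
    payload: str

def audit_export(entries: list[LogEntry]) -> list[LogEntry]:
    """Keep the fact that each event occurred, but redact privileged content."""
    out = []
    for e in entries:
        if e.sensitivity is Sensitivity.PRIVILEGED:
            out.append(replace(e, payload="[REDACTED: privileged]"))
        else:
            out.append(e)
    return out
```

The design choice is that sensitivity is decided when the log is written, not when a discovery request arrives: retrofitting the distinction after deployment is exactly what the text above warns against.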

The good news is the AI Act provides a set of practical control obligations to mitigate such risks. Meeting these obligations is not a one-time audit; it’s an ongoing operating model.  

The clock is ticking: as of August 2026, every law firm and legal department deploying AI tools classified as high-risk under the Act must comply with the obligations set out in Article 26, summarized below.

- AI inventory: map every tool against the Annex III categories; tools falling under those categories are automatically considered high-risk.
- Role determinations: providers and deployers must be clearly defined. Notably, white-labeling can, under some circumstances, turn deployers into providers.
- Human oversight protocols: assigned and trained owners with the authority to override the system.
- Log retention systems: deployers must retain AI system logs for a minimum of six months.
- AI literacy training: mandatory since February 2025; training must be available to all users.
- Vendor contracts: must secure technical documentation access, incident notification, and data processing compliance.
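The obligations above can be tracked as a living inventory rather than a one-off audit. A minimal sketch of such a record (the field names and gap checks are illustrative assumptions, not the Act's wording):

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    name: str
    vendor: str
    role: str                  # "provider" or "deployer"
    annex_iii: bool            # falls under an Annex III category (high-risk)
    oversight_owner: str       # trained owner with authority to override
    log_retention_months: int  # Article 26 minimum: six months
    literacy_training: bool    # mandatory since February 2025

def compliance_gaps(r: AISystemRecord) -> list[str]:
    """Flag obvious checklist gaps for a high-risk deployment."""
    gaps = []
    if r.annex_iii and not r.oversight_owner:
        gaps.append("no human-oversight owner assigned")
    if r.annex_iii and r.log_retention_months < 6:
        gaps.append("log retention below the six-month minimum")
    if not r.literacy_training:
        gaps.append("AI literacy training outstanding")
    return gaps
```

Running such a check at procurement, at each deployment approval, and at periodic review is one way to turn the checklist into the ongoing operating model described above.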

The EU AI Act moves information governance from a back-office function to a front-line compliance discipline. For organizations to thrive, the Act must be seen as an opportunity to build internal governance architectures that make AI both defensible and valuable.

The legal function and the GC are uniquely positioned to bring the regulation into operational reality while linking it to company-wide objectives. In practice, this means educating users and embedding compliance controls into AI agents and workflows.

And this is why the true question is not whether legal is involved in AI governance, but how legal can, and must, lead it.
