- New Blog Series: Decoding the EU AI Act
- Decoding the EU AI Act Part 2 – A Deep Dive into Article 2
- Decoding the EU AI Act Part 3 – AI Literacy Article 4
- Decoding the EU AI Act Part 4 – Prohibited Practices Article 5
- Decoding the EU AI Act Part 5 – The EU General-Purpose AI Code of Practice: Copyright Chapter
- Decoding the EU AI Act Part 6 – Obligations for GPAI Providers with Systemic Risk
On 10 July 2025, the European Commission unveiled the General-Purpose AI (GPAI) Code of Practice. Crafted by an independent expert group under the guidance of the EU AI Office, the Code can function as a structured roadmap for GPAI model providers to operationalize their legal obligations under the EU AI Act, particularly under Articles 53 and 55.
While voluntary in nature, the Code carries substantial legal and strategic weight. It serves as a soft law instrument, offering an officially endorsed framework to demonstrate compliance with some of the most complex areas of the AI Act, especially those involving systemic risks, transparency, and copyright compliance.
What is the Function of the Code of Practice?
1. Operationalizing Legal Requirements
The primary function of the Code is to translate the abstract legal obligations of the AI Act into concrete, actionable measures. While the AI Act itself outlines high-level principles and duties for GPAI providers and deployers, the Code provides the “how”.
It maps the legal duties in Article 53 (for all GPAI providers) and Article 55 (for GPAI models with systemic risk) into granular procedures. This includes documentation standards, data transparency expectations, and risk mitigation strategies across various domains (e.g., misuse, bias, IP rights, safety).
For example, the Code transforms the vague requirement to “respect copyright” into precise obligations around crawler behavior, data provenance, and output monitoring.
2. Establishing a Presumption of Compliance
By aligning with the Code, signatories benefit from a rebuttable presumption of conformity with relevant AI Act obligations. This dramatically reduces the compliance burden and provides legal certainty, which is especially useful in cross-border EU settings where enforcement and interpretation may vary among national authorities.
This mirrors how codes of conduct or standard contractual clauses function in other EU regulatory regimes like the GDPR. They serve as recognized compliance instruments that regulators and courts may rely on to assess due diligence and good faith.
3. Providing a (somewhat) Regulatory Safe Harbour
While not a formal safe harbour, adhering to the Code can mitigate the risk of enforcement actions or litigation. EU regulators are likely to focus first on non-signatory firms, or those that deviate from the Code without robust justification.
In practical terms, signing and implementing the Code becomes a strategic hedge, a way to signal responsible innovation and earn goodwill with regulators, civil society, and commercial partners.
4. Promoting Industry Norms and Convergence
The Code plays a norm-setting function within the AI ecosystem. By codifying what “good practice” looks like for GPAI models in Europe, it encourages standardization across providers and fosters interoperability with global regulatory efforts (such as those by the OECD, the UK’s AI Safety Institute, or U.S. Executive Orders on AI).
This helps establish common benchmarks for auditability, transparency, and ethical data governance, especially in contested areas like synthetic media, risk disclosures, and IP compliance.
5. Filling Gaps Pending Full AI Act Implementation
The AI Act’s full application is staggered: GPAI obligations begin on 2 August 2025, while the Commission’s enforcement powers over GPAI providers apply only a year later. The Code serves as a transitional compliance instrument, helping providers prepare before binding enforcement begins.
It also informs the future guidance to be issued by the EU AI Office, making early adoption a forward-looking compliance strategy.
Deep Dive: The Copyright Chapter
Of the Code’s three chapters covering transparency, copyright, and systemic risks, the Copyright Chapter stands out as both legally sensitive and operationally demanding.
It was the subject of extensive consultation with publishers, rights-holders, and AI companies, as it addresses one of the most contentious areas in generative AI, namely the lawful use of training data and mitigation of infringing outputs.
Overview of Copyright Obligations
The Copyright Chapter outlines five core compliance actions that signatories commit to implement:
1. Draft and Maintain a Copyright Policy
- Signatories must create an internal governance policy detailing how they handle copyrighted material.
- This includes policies on dataset curation, training workflows, crawler protocols, and output filtering.
- A designated individual or unit must oversee compliance, audits, and updates.
2. Use Lawfully Accessible Data
- Providers must avoid training on content that sits behind paywalls, is protected by DRM, or is sourced from blacklisted websites known for infringement.
- They must track and document sources of training data and ensure all uses are lawful under EU copyright law, including exceptions for text and data mining (TDM).
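As an illustration of what “track and document sources” could look like in practice, here is a minimal Python sketch of a per-source provenance record. The field names and legal-basis labels are our own assumptions for illustration, not terminology defined by the Code:

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

# Illustrative provenance record; the schema is an assumption,
# not a format prescribed by the Code of Practice.
@dataclass
class SourceRecord:
    url: str
    retrieved_at: str
    legal_basis: str       # e.g. "licence" or "TDM exception (Art. 4 DSM Directive)"
    rights_reserved: bool  # was a machine-readable opt-out detected?
    notes: str = ""

def log_source(url: str, legal_basis: str, rights_reserved: bool) -> dict:
    """Build an auditable record for one acquired training source."""
    rec = SourceRecord(
        url=url,
        retrieved_at=datetime.now(timezone.utc).isoformat(),
        legal_basis=legal_basis,
        rights_reserved=rights_reserved,
    )
    return asdict(rec)
```

Keeping such records per source, rather than per dataset, is what makes later audits and rights-holder inquiries tractable.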
3. Respect Opt-Out Mechanisms
- The Code reinforces the importance of rights-holder reservations, especially those expressed through:
  - robots.txt
  - machine-readable metadata
  - licensing APIs
- Firms must configure their web crawlers and dataset acquisition tools to automatically detect and comply with such reservations.
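For the robots.txt case, honoring a reservation can be as simple as consulting the file before every fetch. The sketch below uses Python’s standard urllib.robotparser; the bot name and URLs are purely illustrative:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical crawler identity. A stable, documented user-agent lets
# rights-holders target reservations at this specific bot.
USER_AGENT = "ExampleGPAIBot"

def may_fetch(robots_txt: str, url: str, agent: str = USER_AGENT) -> bool:
    """Return True only if the site's robots.txt permits crawling `url`."""
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return parser.can_fetch(agent, url)

# Example: a publisher reserves its /articles/ section from this bot.
robots = """
User-agent: ExampleGPAIBot
Disallow: /articles/
"""

print(may_fetch(robots, "https://publisher.example/articles/1"))  # False
print(may_fetch(robots, "https://publisher.example/about"))       # True
```

Detecting reservations expressed via metadata or licensing APIs requires additional tooling, but the compliance principle is the same: check before acquiring.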
4. Prevent Infringing Outputs
- Generative models must incorporate technical safeguards to prevent unauthorized reproduction of protected works.
- This may include:
  - Output filters
  - Similarity detectors
  - Prompt engineering constraints
  - Contractual controls in user terms of service
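By way of illustration only, a crude “similarity detector” can be built from character n-gram overlap. Production systems would rely on more robust fingerprinting or embedding-based matching; the threshold below is an arbitrary assumption:

```python
def char_ngrams(text: str, n: int = 8) -> set[str]:
    """Character n-grams; robust to small edits and whitespace changes."""
    t = " ".join(text.lower().split())
    return {t[i:i + n] for i in range(max(len(t) - n + 1, 1))}

def overlap_ratio(candidate: str, protected: str, n: int = 8) -> float:
    """Share of the protected work's n-grams reproduced in the candidate."""
    cand, prot = char_ngrams(candidate, n), char_ngrams(protected, n)
    return len(cand & prot) / len(prot) if prot else 0.0

def blocks_output(candidate: str, protected_corpus: list[str],
                  threshold: float = 0.6) -> bool:
    """Flag outputs that substantially reproduce any protected text."""
    return any(overlap_ratio(candidate, work) >= threshold
               for work in protected_corpus)
```

The legal question of what counts as “substantial” reproduction is, of course, far more nuanced than any single threshold.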
5. Handle Infringement Complaints Effectively
- Providers must maintain a transparent, accessible complaint-handling mechanism.
- Rights-holders must be able to report infringing outputs and receive timely responses.
- Providers are expected to take corrective action where justified.
Strategic Takeaways for AI Firms and Legal Teams
For legal departments, GCs, and compliance officers advising AI firms, the Code of Practice, especially the copyright section, requires proactive internal alignment:
- Conduct copyright risk assessments of current and future training datasets.
- Draft or revise copyright governance policies in line with the Code’s structure.
- Implement crawler compliance rules, including routine monitoring of rights reservations.
- Collaborate with product teams to integrate output controls.
- Prepare for potential EU audits, especially if your models are classified as GPAI with systemic risk.