The EU’s AI Act (Regulation (EU) 2024/1689) introduces a sweeping compliance regime, but its requirements come into force in phases. In particular, the penalties and enforcement framework applies from 2 August 2025, while many substantive obligations on high-risk AI will only kick in later.
The EU AI Act applies extraterritorially. Article 2(1) specifies that it applies not only to providers and deployers established in the EU, but also to non-EU actors placing AI systems on the EU market or whose systems’ output is used in the EU, regardless of where they are based.
This means that companies outside the EU, including AI model developers and application vendors, must comply if their systems fall within this scope.
This article explains what deployers (users) and so-called “AI wrappers” (entities that fine-tune or build applications on top of general-purpose AI models) need to know about obligations and liabilities starting August 2025.
Key Definitions: Deployers vs. AI Wrappers
The AI Act distinguishes providers of AI (who develop or place models/systems on the market) from deployers (who use AI systems under their authority). Article 3(4) defines a “deployer” as a person or organization that uses an AI system (except in purely personal, non-professional contexts). In practical terms, a corporate client that uses an AI tool (for example, a company deploying an AI-based hiring screen) is a deployer.
The term “AI wrapper” does not appear in the Act itself, but roughly corresponds to what the AI Act calls a “downstream provider.” Article 3(68) defines a downstream provider as a provider of an AI system (including one based on a general-purpose model) that integrates an AI model into its system, whether the model was developed in-house or obtained from another entity. In other words, an “AI wrapper” is typically a software vendor that takes an existing (general-purpose) model and fine-tunes or embeds it into its application (often via API calls). These entities will likely act as providers of the end AI system to their customers, even if they did not train the underlying model themselves.
Phased Entry into Force (2 August 2025 vs. Later)
The AI Act takes effect in stages (Article 113). Critically, Chapter V (general-purpose AI models), Chapter VII (governance), and Chapter XII (penalties and enforcement) apply from 2 August 2025. All other chapters (including most of the high-risk AI requirements in Chapter III, with the exception of Section 4, and the transparency rules in Chapter IV) only apply later – generally from 2 August 2026, with a few provisions as late as 2 August 2027.
In summary:
- In force by 2 Aug 2025: Article 4 (AI literacy), the prohibitions on certain AI practices (Chapter II, Article 5), obligations on providers of general-purpose AI models (Chapter V, Article 53), establishment of the AI Office and governance structures (Chapter VII), and the penalties regime (Chapter XII).
- Deferred until 2026/27: The bulk of the high-risk AI obligations on providers and deployers (Chapter III Sections 1–3, Articles 6–27), the transparency rules for providers/deployers of certain AI (Chapter IV), and post-market monitoring/reporting obligations (Chapters IX–XI).
Current Obligations for Deployers
As of August 2025, deployers have two key sets of enforceable obligations:
1. Compliance with Article 4 – AI Literacy Requirements
Article 4 of the AI Act requires that “…deployers of AI systems shall take measures to ensure, to their best extent, a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf, taking into account their technical knowledge, experience, education and training and the context the AI systems are to be used in, and considering the persons or groups of persons on whom the AI systems are to be used.”
This means:
- Corporate users of AI (deployers) must ensure staff using AI tools, especially high-risk or sensitive ones, are properly trained and understand how to operate them legally.
- This obligation has applied since 2 February 2025 and is enforceable from 2 August 2025, when the penalties regime takes effect.
2. Prohibited AI Practices – Article 5
Deployers must ensure they do not use or benefit from AI systems that engage in certain prohibited practices, such as social scoring, manipulative subliminal techniques, and untargeted scraping of facial images. These bans have applied since 2 February 2025 and are enforceable from 2 August 2025.
Current Obligations for Providers, Including Wrappers
1. Providers of General-Purpose AI Models
If an entity provides a general-purpose AI model (as defined in Article 3(63)), Article 53 imposes immediate obligations, including:
- Preparing and maintaining technical documentation (Annex XI)
- Putting in place a policy to respect EU copyright law (Article 53(1)(c))
- Publishing a summary of the content used for training (Article 53(1)(d))
- Providing usage policies and information to downstream providers (Annex XII)
- Cooperating with national authorities (Article 53(3))
The European Commission has recently released a Code of Practice for General-Purpose AI Models that providers may use to signal compliance.
These obligations are fully enforceable from 2 August 2025. Models placed on the market before that date benefit from a transitional period until 2 August 2027.
2. Wrappers / Downstream Providers of AI Systems
Entities that integrate general-purpose AI (GPAI) models into applications will likely become “providers” of AI systems and may even fall under Chapter III obligations if their system is classified as high-risk. While those Chapter III high-risk rules are not yet enforceable, wrappers must:
- Comply with AI literacy responsibilities under Article 4, ensuring that staff and external developers are equipped to identify legal, technical, and ethical risks and opportunities
- Avoid prohibited uses (Article 5)
- Prepare, if applicable, to meet documentation and risk management requirements beginning in 2026
Enforcement and Penalties Effective 2 August 2025
Chapter XII is in force, with the exception of Article 101, which applies from 2 August 2026. Penalties can now be imposed for violations of applicable provisions.
Administrative Fines:
- Up to €35 million or 7% of global annual turnover, whichever is higher, for prohibited AI practices (Article 5)
- Up to €15 million or 3% of global annual turnover for other violations
- Up to €7.5 million or 1.5% of global annual turnover for supplying incorrect or misleading information to notified bodies or national competent authorities
National authorities are empowered to investigate, issue warnings, or impose fines depending on the severity of the breach.
What’s Not Yet Applicable (2026–2027 Phase)
Not yet enforceable:
- Chapter III obligations on deployers (e.g., human oversight, documentation, impact assessments)
- Transparency requirements (Chapter IV, including Article 50)
- System classification and registration processes
- Information sharing and post-market surveillance (Chapters IX–XI)
Compliance Takeaways
- Deployers: Must already comply with Article 4 AI literacy and avoid prohibited Art. 5 practices. Begin preparing governance and training processes now.
- AI Wrappers / Providers: Those building on GPAI models must comply with Article 53, fulfill Article 4 obligations to facilitate AI literacy downstream, and avoid prohibited Art. 5 practices. Prepare for full provider duties in 2026.
- All actors: Should document compliance steps, review vendor and customer contracts, and implement internal training to align with Articles 4 and 5 now.
A quick reminder: The AI Act Is Not the Only Legal Framework
While the EU AI Act introduces new obligations, both deployers and providers of AI systems must continue complying with other applicable EU laws, including:
- General Data Protection Regulation (GDPR): Obligations around data minimization, transparency, a legal basis for processing, and rights related to automated decision-making.
- Digital Services Act (DSA): For platforms and intermediaries, duties on content moderation, algorithmic accountability, transparency, and systemic risk assessments.
- EU Copyright Law: Providers must ensure lawful use of training data and outputs, especially when using copyrighted material. Deployers also risk liability when publishing or commercializing infringing AI outputs.
These frameworks operate in parallel with the AI Act. Non-compliance with any of them can result in overlapping enforcement and penalties. Integrated governance across legal domains is essential.
Conclusion
2 August 2025 marks the beginning of meaningful legal exposure under the EU AI Act. While not all obligations are yet in force, Article 4 (AI literacy), Article 5 (prohibited practices), and Article 53 (GPAI responsibilities) already create enforceable duties, and penalties under Chapter XII can now be applied.