Decoding the EU AI Act Part 4 – Prohibited Practices Article 5

Article 5 EU AI Act
This entry is part 4 of 6 in the series Decoding the EU AI Act

Welcome to Part 4 of our blog series ‘Decoding the EU AI Act’! Central to the legislation is Article 5, which identifies and prohibits certain AI practices deemed to present “unacceptable risks” to fundamental rights, safety, and democratic values.

With the publication of the European Commission’s Guidelines on Prohibited AI Practices on 4 February 2025, we now have additional clarity on how Article 5 is to be interpreted and enforced. This blog post unpacks the legal framework of Article 5, categorizes the prohibited AI practices, and explores some of the implications for developers, deployers, and users of AI systems in the EU and beyond.

What is Article 5 of the EU AI Act?

Article 5 of the EU AI Act defines a limited set of AI practices that are prohibited outright across the EU. These systems are banned because they pose an “unacceptable risk,” which cannot be sufficiently mitigated by technical or organizational measures.

The prohibitions apply irrespective of whether the provider is located inside or outside the EU, provided the AI system is placed on the market, put into service, or used within the Union.

Categories of Prohibited AI Practices

1. Subliminal Techniques That Distort Behavior (Art. 5(1)(a))

AI systems that use subliminal techniques, i.e. methods operating below a person’s conscious awareness, to materially distort behavior in a way that causes or is reasonably likely to cause significant harm are banned.

2. Exploitation of Vulnerabilities (Art. 5(1)(b))

Exploiting the vulnerabilities of groups such as children, the elderly, or economically disadvantaged individuals in order to manipulate their behavior in a way that causes harm is prohibited.

  • Example: AI targeting financially vulnerable individuals with predatory offers.

3. Social Scoring (Art. 5(1)(c))

The EU AI Act prohibits AI systems that evaluate or classify the trustworthiness of individuals based on their social behaviour or known or predicted personal or personality traits when such evaluations lead to either or both of the following:

  • (i) Detrimental or unfavourable treatment in contexts unrelated to where the data was originally collected, or
  • (ii) Unjustified or disproportionate detrimental or unfavourable treatment.

Importantly, this prohibition applies to both public and private actors. The law targets AI-driven “social scoring” systems—similar in concept to “social credit” regimes—that use behavioural profiling to systematically assess a person’s worthiness and impose consequences accordingly. The EU Commission Guidelines in Sec. 4.2 (154) use the example of an AI-driven creditworthiness scoring system as potentially problematic.

4. Predicting Criminal Offences Based Solely on Profiling (Art. 5(1)(d))

The legal text stipulates: ‘(d) the placing on the market, the putting into service for this specific purpose, or the use of an AI system for making risk assessments of natural persons in order to assess or predict the risk of a natural person committing a criminal offence, based solely on the profiling of a natural person or on assessing their personality traits and characteristics;’

The Guidelines explain that the conditions are cumulative: the AI practice must involve (a) the ‘placing on the market’, (b) the ‘putting into service for this specific purpose’, or (c) the ‘use’ of an AI system. In addition, the AI system must predict that a criminal offence will be committed, and that prediction must be based solely on either (i) the profiling of a natural person or (ii) an assessment of a natural person’s personality traits and characteristics. By contrast, if the AI system merely works in support of a human assessment, it does not fall under this provision.

Illustrative Example:

  • Prohibited Practice: An AI system predicts that an individual is likely to commit theft based on their socioeconomic background, neighborhood, and personality traits (e.g., introversion or impulsiveness), and authorities act on this prediction without objective evidence.
  • Permitted Use: An AI system analyzes objective evidence, such as CCTV footage and digital communication records, to assist investigators in determining whether a person may be involved in a specific robbery.
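The cumulative logic of these conditions can be made a little more tangible with a short sketch. The snippet below models the Art. 5(1)(d) test as a simple boolean check; the field names and the falls_under_art_5_1_d function are our own shorthand, not terminology from the Act or the Guidelines, and such a check can only ever be a rough illustration, never a substitute for a case-by-case legal assessment.

```python
from dataclasses import dataclass

# Illustrative only: a rough boolean model of the cumulative conditions of
# Art. 5(1)(d) as summarised above. Field names are our own shorthand, not
# terms from the AI Act, and this is no substitute for legal analysis.
@dataclass
class CriminalRiskUseCase:
    placed_on_market_put_into_service_or_used: bool      # conditions (a)-(c)
    predicts_commission_of_criminal_offence: bool         # predictive element
    based_solely_on_profiling_or_personality_traits: bool # "based solely on" element
    supports_human_assessment_of_objective_evidence: bool # carve-out noted in the Guidelines

def falls_under_art_5_1_d(use_case: CriminalRiskUseCase) -> bool:
    """Return True if all cumulative conditions are met and the carve-out does not apply."""
    return (
        use_case.placed_on_market_put_into_service_or_used
        and use_case.predicts_commission_of_criminal_offence
        and use_case.based_solely_on_profiling_or_personality_traits
        and not use_case.supports_human_assessment_of_objective_evidence
    )

# The two examples above, expressed in this model:
prohibited_example = CriminalRiskUseCase(True, True, True, False)  # profiling-only prediction
permitted_example = CriminalRiskUseCase(True, True, False, True)   # supports human review of evidence

print(falls_under_art_5_1_d(prohibited_example))  # True  -> prohibited practice
print(falls_under_art_5_1_d(permitted_example))   # False -> outside Art. 5(1)(d)
```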

5. Untargeted Scraping for Facial Recognition Databases (Art. 5(1)(e))

The legal text stipulates: ‘(e) the placing on the market, the putting into service for this specific purpose, or the use of AI systems that create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage;’

Importantly, the prohibition only covers ‘untargeted’ scraping. If the AI system is used to search for a specific individual, the scraping is targeted and therefore not covered by this prohibition.

6. Emotion Recognition in Workplaces and Education Institutions (Art. 5(1)(f))

Using AI to infer emotions in employees or students is banned to protect dignity and autonomy.

  • Example: AI monitoring employee emotions to influence performance evaluations.

Note: the prohibition does not use the defined term ‘emotion recognition system’; it only refers to AI systems that ‘infer emotions’.

7. Biometric Categorisation Based on Sensitive Attributes (Art. 5(1)(g))

AI systems that categorise individuals on the basis of their biometric data in order to deduce or infer sensitive attributes such as race, religious beliefs, or sexual orientation are prohibited, to prevent discriminatory treatment.

  • Example: A service is provided only because the individual is categorised as being of a particular race.

8. Real-Time Remote Biometric Identification in Public Spaces by Law Enforcement (Art. 5(1)(h))

The prohibition only addresses the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for law enforcement purposes; it does not cover the development of such systems or their deployment for other purposes.

Looking Ahead: Compliance and Ethical AI Design

While the prohibited practices in Article 5 represent a relatively narrow set of AI use cases, they set important boundaries for ethical and legally compliant AI development. Companies working with AI should always integrate compliance-by-design principles, including:

  • Fundamental rights impact assessments (FRIAs).
  • Human oversight mechanisms to detect manipulation or profiling.
  • Transparency and explainability.
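As a purely hypothetical illustration of what compliance by design can look like in practice, the sketch below shows a minimal internal screening step that maps planned AI use cases against the Article 5 categories discussed above. The category labels and the screen_use_case helper are our own invention, not an official tool; a checklist like this can only flag use cases for legal review, not replace that review.

```python
# Hypothetical sketch of an internal Article 5 screening step, intended to sit
# alongside FRIAs, human oversight, and transparency measures. The category
# labels and the helper function are our own shorthand, not an official EU tool.
ARTICLE_5_CATEGORIES = {
    "a": "Subliminal or manipulative techniques causing significant harm",
    "b": "Exploitation of vulnerabilities (age, disability, social/economic situation)",
    "c": "Social scoring with unjustified or out-of-context detrimental treatment",
    "d": "Criminal risk prediction based solely on profiling or personality traits",
    "e": "Untargeted scraping of facial images for facial recognition databases",
    "f": "Emotion inference in workplaces and education institutions",
    "g": "Biometric categorisation to infer sensitive attributes",
    "h": "Real-time remote biometric identification in public spaces for law enforcement",
}

def screen_use_case(description: str, flagged_categories: set[str]) -> None:
    """Print which Article 5 categories a planned AI use case touches, for legal review."""
    if not flagged_categories:
        print(f"'{description}': no Article 5 category flagged; continue with risk classification.")
        return
    print(f"'{description}': escalate to legal review, potentially prohibited under:")
    for key in sorted(flagged_categories):
        print(f"  - Art. 5(1)({key}): {ARTICLE_5_CATEGORIES[key]}")

# Example: an HR tool that monitors employee emotions during performance reviews.
screen_use_case("Emotion monitoring in performance evaluations", {"f"})
```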

As we move into the implementation phase of the AI Act, law firms, compliance officers, and AI businesses must remain vigilant. Article 5 is not just a list of prohibitions—it’s a mirror of the EU’s core values in the age of artificial intelligence.

Need Help With Your AI?

Contact us today

Disclaimer:
The content of this blog is provided for general informational purposes only and does not constitute legal advice. While we strive to ensure that the information is accurate and up to date, it may not reflect the most current legal developments or the specific circumstances of your organization. Readers should not act upon any information contained in this blog without first seeking professional legal counsel. No attorney-client relationship is created through your use of or reliance on the information provided herein.

Series Navigation: << Decoding the EU AI Act Part 3 – AI Literacy Article 4 | Decoding the EU AI Act Part 5 – The EU General-Purpose AI Code of Practice: Copyright Chapter >>