What's Completely Banned
The Red Line — Already in Effect
Article 5 of the EU AI Act lists AI practices that are so dangerous or unethical that they are banned outright. This is the "poison" category from our food safety analogy.
Important: these prohibitions have ALREADY been IN EFFECT since February 2, 2025. This is not a future deadline; companies using any of these practices must already have stopped. The fines for violations are the highest in the entire regulation: up to 35 million euros or 7% of global annual turnover, whichever is higher.
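The "whichever is higher" rule simply means the ceiling is the larger of the two figures. A minimal sketch of that arithmetic (the function name is ours, not from the regulation):

```python
def max_article5_fine(global_annual_turnover_eur: float) -> float:
    """Ceiling for an Article 5 violation: the higher of a fixed
    35 million euros or 7% of global annual turnover."""
    FIXED_CAP_EUR = 35_000_000
    TURNOVER_SHARE = 0.07
    return max(FIXED_CAP_EUR, TURNOVER_SHARE * global_annual_turnover_eur)

# A company with 1 billion euros in turnover: 7% (70M) exceeds the 35M floor.
print(max_article5_fine(1_000_000_000))  # 70000000.0

# A company with 100 million euros in turnover: 7% is only 7M, so 35M applies.
print(max_article5_fine(100_000_000))  # 35000000
```

Note that for any large company the turnover-based figure dominates, which is the point: the ceiling scales with the size of the violator.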
The 8 Prohibited Practices
1. Subliminal Manipulation (Article 5(1)(a)): AI that uses tricks below your conscious awareness to change your behavior in harmful ways. Example: an app that flashes imperceptible images to make you anxious so you buy their calming product. Why banned: people cannot protect themselves from something they cannot detect.
2. Exploiting Vulnerabilities (Article 5(1)(b)): AI that takes advantage of someone's age, disability, or difficult economic situation to manipulate them. Example: AI-powered gambling apps specifically targeting people with addiction, or confusing financial products marketed to elderly people through AI. Why banned: it protects those least able to protect themselves.
3. Social Scoring (Article 5(1)(c)): Rating people based on their social behavior, then using that score to treat them differently in unrelated areas. Example: a government system that gives you a citizen score based on your online activity, then uses it to deny you housing or travel. Why banned: your behavior in one area should not determine your rights in another.
4. Individual Criminal Risk Prediction (Article 5(1)(d)): AI that predicts whether a specific person will commit a crime based solely on profiling or personality traits. Example: software that flags someone as likely to commit theft based on their neighborhood, age, or appearance. Why banned: it assumes guilt before any crime. Exception: AI that supports a human assessment based on objective, verifiable facts directly linked to a criminal activity remains allowed.
5. Facial Recognition Database Scraping (Article 5(1)(e)): Building facial recognition databases by scraping photos from the internet or CCTV without consent. Example: a company downloading millions of social media photos to build a face recognition system. This actually happened with a company called Clearview AI. Why banned: mass surveillance without consent violates privacy.
6. Emotion Recognition at Work and School (Article 5(1)(f)): AI that reads your emotions in the workplace or at school. Example: software that monitors whether employees look happy during video calls, or whether students look engaged during online classes. Why banned: people should not be emotionally monitored in places they cannot freely leave. Exception: systems deployed for medical or safety reasons remain allowed.
7. Biometric Categorization by Sensitive Traits (Article 5(1)(g)): AI that uses biometric data like facial features to categorize people by race, political opinions, religion, or sexual orientation. Example: AI that scans faces to guess someone's ethnicity or religion. Why banned: sorting people by sensitive categories using their physical characteristics is inherently discriminatory.
8. Real-Time Facial Recognition in Public Spaces (Article 5(1)(h)): Live facial recognition scanning in publicly accessible spaces for law enforcement. Example: cameras at train stations scanning every face and matching against a police database in real time. Why banned: mass surveillance of everyone just to find a few suspects. Narrow exceptions exist: finding missing children or trafficking victims, preventing imminent terrorist threats, and locating suspects of serious crimes. Even then, prior authorization from a judicial or independent administrative authority is required.
Interactive Exercise
Banned or allowed under the EU AI Act?