The Risk Pyramid — How It All Works
The Big Idea: Risk-Based Regulation
The EU AI Act uses a risk-based approach. The higher the risk to people's rights and safety, the stricter the rules. Think of it like food safety:
Poison is banned — you simply cannot sell it.
Raw meat is high-risk — it needs strict handling, temperature controls, and proper labeling.
Packaged snacks are limited-risk — they need ingredient labels and basic safety checks.
Water is minimal-risk — there are basically no specific food safety rules for it.
The same logic applies to AI: the more an AI system can affect people's lives, the more rules apply to it. This is the fundamental principle of the entire regulation.
The Four Risk Levels
The EU AI Act defines four risk levels, often visualized as a pyramid.
At the top: UNACCEPTABLE RISK (Red). These AI practices are completely banned: no company may use them in the EU, apart from a handful of narrowly defined exceptions. This is covered by Article 5. Examples include social scoring and manipulative AI. These prohibitions have been in force since February 2, 2025.
Second level: HIGH RISK (Orange). These are AI systems that can seriously affect people's lives — think hiring decisions, credit scoring, medical diagnosis. They are allowed, but they must meet strict requirements before going to market. This is the biggest category of rules, covered by Chapter III (Articles 6 through 49).
Third level: LIMITED RISK (Yellow). These are AI systems that interact with people or generate content. The main rule is simple: you must tell people they are dealing with AI. This is the transparency requirement from Article 50.
At the bottom: MINIMAL RISK (Green). Everything else. No specific obligations under the AI Act. Most AI systems fall into this category: a movie recommendation engine, a spam filter, an AI that suggests which font to use.
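To make the pyramid concrete, here is a minimal TypeScript sketch of the four levels and the example systems named above. The type and variable names are illustrative choices of mine, not an official taxonomy:

```typescript
// Illustrative sketch: the four risk levels and example systems from the text.
// Names and structure are hypothetical, not drawn from the Act itself.
type RiskLevel = "unacceptable" | "high" | "limited" | "minimal";

const exampleSystems: Record<RiskLevel, string[]> = {
  unacceptable: ["social scoring", "manipulative AI"],                    // banned (Article 5)
  high: ["hiring decisions", "credit scoring", "medical diagnosis"],      // strict requirements
  limited: ["chatbots", "AI content generators"],                         // transparency (Article 50)
  minimal: ["movie recommendations", "spam filters", "font suggestions"], // no specific obligations
};

// Print each level with its examples, from strictest to least strict.
for (const [level, systems] of Object.entries(exampleSystems)) {
  console.log(`${level}: ${systems.join(", ")}`);
}
```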
Why Classification Matters for You
Your very first job in AI Act compliance is figuring out where each of your AI systems falls on this pyramid. That classification determines everything else:
What you must do — your specific obligations.
What documents you need — technical documentation, risk assessments, fundamental rights impact assessments.
What deadlines apply to you.
What fines you risk if you do not comply.
A company with only minimal-risk AI systems has very few obligations. A company with high-risk AI systems has extensive documentation, testing, and monitoring requirements. A company using banned AI practices faces the highest penalties.
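One way to picture how classification drives everything else is as a lookup table from risk level to headline consequences. The sketch below is my own simplification: the duty lists are heavily abbreviated, and the fine figures are the headline maxima from Article 99 (the higher of the fixed amount or the percentage of worldwide annual turnover):

```typescript
// Simplified sketch of "classification determines everything else".
// Duty lists are abbreviated; fines are the headline maxima under Article 99.
type RiskLevel = "unacceptable" | "high" | "limited" | "minimal";

interface Consequences {
  allowedOnEUMarket: boolean;
  keyDuties: string[];
  maxFine: string; // higher of the fixed amount or share of global annual turnover
}

const consequences: Record<RiskLevel, Consequences> = {
  unacceptable: {
    allowedOnEUMarket: false,
    keyDuties: ["stop the practice entirely"],
    maxFine: "EUR 35 million or 7%",
  },
  high: {
    allowedOnEUMarket: true,
    keyDuties: ["technical documentation", "risk management", "conformity assessment", "post-market monitoring"],
    maxFine: "EUR 15 million or 3%",
  },
  limited: {
    allowedOnEUMarket: true,
    keyDuties: ["tell people they are dealing with AI"],
    maxFine: "EUR 15 million or 3%",
  },
  minimal: {
    allowedOnEUMarket: true,
    keyDuties: [], // no specific obligations under the AI Act
    maxFine: "none under the AI Act",
  },
};
```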
This is exactly what the Witness Classifier tool does — it walks you through a series of questions to determine the correct classification for your AI system.
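To illustrate the flow, here is a toy sketch of such a question-driven classifier. This is my own simplification, not the Witness Classifier's actual implementation or question set; a real assessment involves far more nuance than three yes/no answers:

```typescript
// Toy sketch of a question-driven classifier -- NOT the actual Witness
// Classifier logic; the questions and names here are hypothetical.
type RiskLevel = "unacceptable" | "high" | "limited" | "minimal";

interface Answers {
  usesProhibitedPractice: boolean;      // e.g. social scoring (Article 5)
  servesHighRiskUseCase: boolean;       // e.g. hiring, credit scoring, medical diagnosis
  interactsOrGeneratesContent: boolean; // e.g. chatbots, content generators (Article 50)
}

// Check from the top of the pyramid down: the first match wins, which
// mirrors how the strictest applicable category takes precedence.
function classify(a: Answers): RiskLevel {
  if (a.usesProhibitedPractice) return "unacceptable";
  if (a.servesHighRiskUseCase) return "high";
  if (a.interactsOrGeneratesContent) return "limited";
  return "minimal";
}

// Example: a spam filter touches none of the three triggers.
const spamFilter: Answers = {
  usesProhibitedPractice: false,
  servesHighRiskUseCase: false,
  interactsOrGeneratesContent: false,
};
console.log(classify(spamFilter)); // -> "minimal"
```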
Interactive Exercise
Where does each AI system fall on the risk pyramid?