High-Risk AI — The Heavy Rules
Two Ways to Be High-Risk
There are two pathways to being classified as high-risk under the EU AI Act.
Pathway 1 (Article 6(1)): Your AI system is used as a safety component of a product that is already covered by existing EU safety legislation (listed in Annex I of the Act) — for example, medical devices, cars, machinery, or toys. If that product requires a third-party conformity assessment under that legislation, the AI component is automatically high-risk.
Pathway 2 (Article 6(2) + Annex III): Your AI system is used in one of the specific sensitive areas listed in Annex III. These are areas where AI decisions can significantly affect people's fundamental rights.
Most companies will encounter Pathway 2. Annex III lists 8 categories of high-risk AI use cases.
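The two pathways can be summarized as a small decision sketch. This is an illustration only — the function and flag names are hypothetical, and it deliberately ignores the Article 6(3) exception covered below:

```python
def classification_pathway(is_safety_component: bool,
                           needs_third_party_assessment: bool,
                           listed_in_annex_iii: bool) -> str:
    """Rough sketch of Articles 6(1) and 6(2); ignores the Article 6(3) exception."""
    # Pathway 1: safety component of a product under existing EU safety law
    # that requires third-party conformity assessment
    if is_safety_component and needs_third_party_assessment:
        return "high-risk (Article 6(1))"
    # Pathway 2: use case listed in Annex III
    if listed_in_annex_iii:
        return "high-risk (Article 6(2), subject to the Article 6(3) exception)"
    return "not high-risk under Article 6"
```

Note that the two pathways are independent: an AI system embedded in a regulated product is checked under Pathway 1 even if its use case never appears in Annex III.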
The 8 High-Risk Categories (Annex III)
1. Biometrics: Remote biometric identification (like facial recognition in public spaces), biometric categorization, and emotion recognition systems — beyond simple device unlock.
2. Critical Infrastructure: AI managing electricity grids, water supply, gas, heating, internet networks, or road traffic. If the AI fails, essential services fail.
3. Education and Vocational Training: AI that grades exams, decides school admissions, detects cheating, determines educational paths, or monitors students during tests. These affect access to education.
4. Employment and Worker Management: AI that screens CVs, posts job ads, evaluates candidates, decides promotions, monitors worker productivity, or makes termination decisions. These affect people's livelihoods.
5. Essential Services: AI for credit scoring and creditworthiness assessment, risk assessment and pricing in life and health insurance, emergency services dispatch, and social benefit eligibility. These affect access to essential financial and social services.
6. Law Enforcement: AI used as polygraphs, for evidence evaluation, profiling, crime prediction, and risk assessment. These affect people's freedom and presumption of innocence.
7. Migration and Border Control: AI for visa application assessment, border crossing risk assessment, and irregular migration surveillance. These affect people's freedom of movement.
8. Justice and Democracy: AI assisting courts in fact-finding or sentencing, and AI that could influence election outcomes. These affect the foundations of democratic society.
The Exception — When High-Risk Isn't High-Risk
Article 6(3) provides an important exception. An AI system listed in Annex III is NOT considered high-risk if it meets at least ONE of these conditions:
It performs a narrow procedural task, OR it improves the result of a previously completed human activity, OR it detects decision-making patterns without replacing human judgment, OR it performs preparatory work for an assessment that a human will make.
BUT — and this is critical — this exception NEVER applies if the AI system profiles individuals. Profiling means automated processing of personal data to evaluate personal aspects. If your AI profiles people, it is always high-risk, regardless of how narrow the task is.
Example: AI that sorts incoming CVs alphabetically is not high-risk — it performs a narrow procedural task and profiles no one. AI that ranks CVs by a predicted quality score IS high-risk — scoring candidates on personal aspects is profiling, so the exception cannot apply.
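The Article 6(3) logic above can be sketched as a simple decision function. All names here are hypothetical labels for the four exception conditions and the profiling override, not terms taken from the Act:

```python
from dataclasses import dataclass

@dataclass
class AnnexIIISystem:
    """An AI system falling under one of the Annex III categories.
    Each flag is a hypothetical label for one Article 6(3) condition."""
    narrow_procedural_task: bool
    improves_completed_human_activity: bool
    detects_patterns_without_replacing_judgment: bool
    preparatory_work_only: bool
    profiles_individuals: bool

def is_high_risk(system: AnnexIIISystem) -> bool:
    """Apply the Article 6(3) exception to an Annex III system."""
    # Profiling overrides everything: the system is always high-risk.
    if system.profiles_individuals:
        return True
    # Any ONE of the four conditions is enough to escape the high-risk label.
    exception_applies = (
        system.narrow_procedural_task
        or system.improves_completed_human_activity
        or system.detects_patterns_without_replacing_judgment
        or system.preparatory_work_only
    )
    return not exception_applies

# Alphabetical CV sorter: narrow procedural task, no profiling
cv_sorter = AnnexIIISystem(True, False, False, False, False)
# CV ranker scoring candidate quality: profiling
cv_ranker = AnnexIIISystem(False, False, False, False, True)
print(is_high_risk(cv_sorter))  # False
print(is_high_risk(cv_ranker))  # True
```

The ordering of the checks mirrors the rule's structure: the profiling test comes first because it defeats every exception condition, no matter how narrow the task.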
What High-Risk Providers Must Do — Overview
If your AI system is classified as high-risk, here is what you must implement as a provider. This is an overview — Module 7 covers these in more detail:
Risk Management System (Article 9): Identify, analyze, evaluate, and mitigate risks throughout the AI system's lifecycle.
Data Governance (Article 10): Ensure your training data is relevant, representative, free of errors, and appropriate for the intended purpose.
Technical Documentation (Article 11 + Annex IV): Detailed documentation covering how the system works, its design choices, data sources, testing results, and more.
Record-Keeping and Logging (Article 12): Automatic recording of events while the system operates.
Transparency and Instructions (Article 13): Provide clear instructions to deployers on how to use the system properly.
Human Oversight (Article 14): Design the system so that humans can effectively supervise its operation.
Accuracy, Robustness, and Cybersecurity (Article 15): The system must perform consistently and be resilient against errors and attacks.
CE Marking (Article 48): The familiar CE mark indicating EU compliance.
Post-Market Monitoring (Article 72): Continue monitoring after the system is deployed.
Interactive Exercise
Is this AI system high-risk or not?