April 5, 2026

High-Risk AI Systems Under the EU AI Act: The Complete Guide

Witness Team · 9 min read

Why High-Risk Classification Matters

The EU AI Act's compliance obligations are concentrated almost entirely on high-risk AI systems. Minimal-risk systems face no mandatory requirements. Limited-risk systems need only transparency disclosures. But high-risk systems must meet a comprehensive set of requirements covering documentation, risk management, data governance, human oversight, accuracy, cybersecurity, conformity assessment, registration, and ongoing monitoring.

Getting the classification right is therefore the most consequential step in EU AI Act compliance. Misclassifying a high-risk system as limited risk leaves you exposed to penalties of up to €15 million or 3% of global annual turnover, whichever is higher. Misclassifying a limited-risk system as high risk wastes resources on unnecessary compliance work.
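The penalty ceiling above is a simple arithmetic rule, sketched here for illustration (this is my own simplification: Article 99 applies the higher of the two amounts to undertakings, and a lower-of-the-two rule to SMEs):

```python
def max_penalty_eur(annual_turnover_eur: float, is_sme: bool = False) -> float:
    """Illustrative ceiling for misclassification-type infringements:
    EUR 15 million or 3% of worldwide annual turnover, whichever is
    higher (for SMEs, the Act applies whichever is lower)."""
    fixed_cap = 15_000_000
    turnover_cap = 0.03 * annual_turnover_eur
    return min(fixed_cap, turnover_cap) if is_sme else max(fixed_cap, turnover_cap)
```

For a company with €1 billion in turnover, the 3% figure (€30 million) governs; below €500 million in turnover, the €15 million floor does.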

Two Pathways to High-Risk

The AI Act establishes two independent pathways for high-risk classification. A system only needs to trigger one.

Pathway 1: Safety Component (Article 6(1))

An AI system is high-risk if both conditions are met:

  1. The AI system is used as a safety component of a product, or is itself a product, covered by EU harmonisation legislation listed in Annex I of the AI Act
  2. That product is required to undergo a third-party conformity assessment under the applicable harmonisation legislation

Annex I covers existing EU product safety legislation including:

  • Machinery Regulation (EU) 2023/1230
  • Medical Devices Regulation (EU) 2017/745
  • In-Vitro Diagnostic Medical Devices Regulation (EU) 2017/746
  • Radio Equipment Directive 2014/53/EU
  • Civil Aviation Regulation (EU) 2018/1139
  • Motor Vehicle Type-Approval Regulations
  • Marine Equipment Directive 2014/90/EU
  • Rail Interoperability Directive (EU) 2016/797
  • And others

Examples: An AI system that controls braking decisions in an autonomous vehicle. An AI-powered diagnostic tool classified as a medical device. An AI component managing safety-critical functions in industrial machinery.

Application date: August 2, 2027 (one year later than Pathway 2).

Pathway 2: Annex III Listed (Article 6(2))

An AI system is high-risk if it falls into one of the use case categories listed in Annex III of the AI Act. This is the pathway that applies to most business AI systems.

Application date: August 2, 2026.
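Because the two pathways are independent and either one suffices, the test can be sketched as a small decision function (a hypothetical illustration; the class and field names are my own, not terms from the Act):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AISystem:
    # Pathway 1 (Article 6(1)): safety component of an Annex I product
    is_annex_i_safety_component: bool
    requires_third_party_conformity: bool
    # Pathway 2 (Article 6(2)): matched Annex III use case, if any
    annex_iii_category: Optional[str]  # e.g. "employment", or None

def is_high_risk(system: AISystem) -> bool:
    """Triggering either pathway is sufficient for high-risk status."""
    pathway_1 = (system.is_annex_i_safety_component
                 and system.requires_third_party_conformity)
    pathway_2 = system.annex_iii_category is not None
    return pathway_1 or pathway_2
```

Note that Pathway 1 requires both conditions (Annex I coverage and third-party conformity assessment), while Pathway 2 requires only an Annex III match — subject to the Article 6(3) exception discussed below.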

The Eight Annex III Categories

Category 1: Biometrics

| Sub-area | Description | Examples |
| --- | --- | --- |
| 1(a) | Remote biometric identification | Post-hoc facial recognition, identification in non-public spaces |
| 1(b) | Biometric categorization by sensitive attributes | Categorizing people by ethnicity, gender, or other protected characteristics from biometric data |
| 1(c) | Emotion recognition | Inferring emotions from facial expressions, voice patterns, body language (outside the workplace/education ban under Article 5) |

Note: Real-time biometric identification in public spaces for law enforcement is not high-risk — it is prohibited under Article 5(1)(h), with three narrow exceptions requiring judicial authorization. Emotion recognition in the workplace and education is also prohibited (Article 5(1)(f)), except for medical or safety purposes.

Category 2: Critical Infrastructure

| Sub-area | Description | Examples |
| --- | --- | --- |
| 2(a) | Safety components in the management/operation of critical digital infrastructure, road traffic, or water/gas/heating/electricity supply | AI managing electricity grid load balancing, AI controlling traffic light systems, AI monitoring water treatment processes |

The system must be a safety component — not just any software used by a utility or infrastructure provider. An AI-powered billing system at an energy company is not high-risk. An AI system making decisions about power distribution is.

Category 3: Education and Vocational Training

| Sub-area | Description | Examples |
| --- | --- | --- |
| 3(a) | Determining access, admission, or assignment to educational institutions | AI-powered university admissions, automated school placement |
| 3(b) | Evaluating learning outcomes, including steering the learning process | AI grading systems, adaptive learning platforms that determine curriculum |
| 3(c) | Assessing appropriate education level and influencing access to education | AI that decides what level of education someone should receive |
| 3(d) | Monitoring and detecting prohibited behavior during tests | AI exam proctoring, cheating detection systems |

Category 4: Employment, Worker Management, Access to Self-Employment

| Sub-area | Description | Examples |
| --- | --- | --- |
| 4(a) | Recruitment and selection: targeted job ads, screening/filtering applications, evaluating candidates | AI-powered CV screening, automated interview scoring, AI job ad targeting |
| 4(b) | Decisions on work relationships: promotion, termination, task allocation based on behavior/traits; monitoring and evaluating performance | AI-driven performance reviews, automated task assignment based on employee profiling, AI-powered workforce management |

This category is particularly relevant for SMEs. Any AI tool used in hiring or employee evaluation falls here — including third-party SaaS tools. If you use an AI-powered applicant tracking system, you are a deployer of a high-risk AI system.

Category 5: Access to Essential Services

| Sub-area | Description | Examples |
| --- | --- | --- |
| 5(a) | Evaluating eligibility for public assistance, benefits, or services | AI determining welfare eligibility, healthcare access decisions |
| 5(b) | Evaluating creditworthiness or establishing credit scores | AI credit scoring, automated lending decisions. Exception: fraud detection is explicitly not high-risk |
| 5(c) | Risk assessment and pricing for life and health insurance | AI determining insurance premiums for natural persons |
| 5(d) | Evaluating and classifying emergency calls, triaging emergency patients | AI-powered 112/911 dispatch, emergency room triage systems |

The fraud detection exception in 5(b) is notable. Banks using AI solely for fraud detection do not trigger high-risk classification for that system, even though credit scoring and lending AI does.

Category 6: Law Enforcement

| Sub-area | Description | Examples |
| --- | --- | --- |
| 6(a) | Assessing risk of a person becoming a crime victim | Victim risk assessment tools |
| 6(b) | Polygraphs and lie detectors | AI-based deception detection |
| 6(c) | Evaluating reliability of evidence | AI analyzing evidence in criminal investigations |
| 6(d) | Assessing risk of offending or re-offending | Recidivism prediction (must augment human assessment, not replace it) |
| 6(e) | Profiling during detection, investigation, or prosecution | Criminal profiling AI |

Category 7: Migration, Asylum, and Border Control

| Sub-area | Description | Examples |
| --- | --- | --- |
| 7(a) | Polygraphs and lie detectors in the migration context | AI deception detection at borders |
| 7(b) | Assessing irregular migration risk or health risk | Risk assessment for persons intending to enter the EU |
| 7(c) | Examining asylum, visa, or residence applications | AI processing immigration applications |
| 7(d) | Detecting, recognizing, or identifying persons in the migration context | Exception: travel document verification |

Category 8: Administration of Justice and Democratic Processes

| Sub-area | Description | Examples |
| --- | --- | --- |
| 8(a) | Assisting judicial authorities in interpreting and applying law | AI legal research tools used by courts, AI in alternative dispute resolution |
| 8(b) | Influencing election outcomes or voting behavior | Exception: AI for organizing political campaigns that does not directly interact with voters |

The Article 6(3) Exception: When High-Risk Can Be Downgraded

Not every Annex III system is automatically high-risk. Article 6(3) provides an exception pathway that can downgrade a system that technically falls within an Annex III category.

The exception applies when a system meets both conditions:

Condition 1: Limited Function

The system performs only one of the following:

  • (a) A narrow procedural task (e.g., data format conversion, document sorting)
  • (b) Improving the result of a previously completed human activity (e.g., spell-checking a human-written assessment)
  • (c) Detecting patterns for human review without replacing or influencing the human assessment (e.g., anomaly flagging)
  • (d) A preparatory task for a human assessment (e.g., data gathering before a human makes the decision)

Condition 2: No Profiling

The system does not profile natural persons. The GDPR Article 4(4) definition of profiling applies: automated processing of personal data to evaluate personal aspects such as work performance, economic situation, health, preferences, reliability, behavior, location, or movements.

If profiling is involved, the system is always high-risk — the Article 6(3) exception cannot apply, regardless of the function.
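The two-condition structure can be sketched as follows (a hypothetical illustration; the function labels are my own shorthand for the four Article 6(3) sub-points):

```python
# Shorthand labels for the four limited functions in Article 6(3)(a)-(d)
LIMITED_FUNCTIONS = {
    "narrow_procedural_task",        # (a) e.g. data format conversion
    "improve_completed_human_work",  # (b) e.g. spell-checking an assessment
    "pattern_detection_for_review",  # (c) e.g. anomaly flagging
    "preparatory_task",              # (d) e.g. data gathering for a human
}

def article_6_3_exception_applies(function: str,
                                  involves_profiling: bool) -> bool:
    """Profiling always defeats the exception, regardless of function."""
    if involves_profiling:
        return False
    return function in LIMITED_FUNCTIONS
```

The order of the checks mirrors the rule: profiling is an absolute bar, so it is evaluated first; only then does the limited-function test matter.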

Exception Obligations

Even when the exception applies and the system is downgraded, the provider must:

  1. Document the assessment explaining why the exception applies
  2. Register the system in the EU database with the documented reasoning (Article 49(2))
  3. Notify the relevant market surveillance authority

This is not a "get out of compliance free" card. It is a documented exception with its own obligations.

Provider Obligations for High-Risk Systems

Providers — those who develop high-risk AI systems or place them on the market under their name — bear 19 distinct obligations:

| # | Article | Obligation |
| --- | --- | --- |
| 1 | Art. 9 | Establish and maintain a continuous risk management system |
| 2 | Art. 10 | Implement data governance for training, validation, and testing data |
| 3 | Art. 11 | Prepare and maintain Annex IV technical documentation |
| 4 | Art. 12 | Design automatic logging capabilities into the system |
| 5 | Art. 13 | Provide transparency information and instructions for use |
| 6 | Art. 14 | Design human oversight measures into the system |
| 7 | Art. 15 | Ensure accuracy, robustness, and cybersecurity |
| 8 | Art. 17 | Implement a quality management system |
| 9 | Art. 18 | Retain documentation for 10 years |
| 10 | Art. 19 | Retain auto-generated logs for a minimum of 6 months |
| 11 | Art. 20 | Take corrective actions for non-compliant systems |
| 12 | Art. 21 | Cooperate with authorities upon request |
| 13 | Art. 22 | Appoint an EU authorized representative (if non-EU provider) |
| 14 | Art. 43 | Complete conformity assessment before market placement |
| 15 | Art. 47 | Draw up the EU declaration of conformity |
| 16 | Art. 48 | Affix the CE marking |
| 17 | Art. 49 | Register in the EU database |
| 18 | Art. 72 | Establish a post-market monitoring system |
| 19 | Art. 73 | Report serious incidents within 15 days |

Conformity Assessment: Self or Third-Party?

Most high-risk systems undergo self-assessment based on Annex VI. The provider verifies their own compliance, issues a declaration of conformity, and affixes the CE marking.

Third-party assessment by a notified body (Annex VII) is required only for biometric systems (Annex III, Category 1), and even then only where the provider has not applied harmonised standards or common specifications covering all requirements. Where such standards are fully applied, the provider may choose between the Annex VI self-assessment and the Annex VII notified-body procedure.

Deployer Obligations for High-Risk Systems

Deployers — those who use high-risk AI systems under their authority — have a lighter but still significant set of obligations:

| # | Article | Obligation |
| --- | --- | --- |
| 1 | Art. 26(1) | Use the system according to the provider's instructions |
| 2 | Art. 26(2) | Assign competent, trained persons for human oversight |
| 3 | Art. 26(4) | Ensure input data is relevant and representative |
| 4 | Art. 26(5) | Monitor operation, suspend if risks arise, report incidents |
| 5 | Art. 26(6) | Retain auto-generated logs for a minimum of 6 months |
| 6 | Art. 27 | Conduct a FRIA (only for: public bodies, public service providers, credit scoring deployers, life/health insurance deployers) |
| 7 | Art. 26(11) | Inform persons affected by AI-based decisions |
| 8 | Art. 26(7) | Inform workers and worker representatives when AI is used in the workplace |
| 9 | Art. 50(3) | Inform persons exposed to emotion recognition or biometric categorization |

Role Shifting Warning

Article 25 specifies that a deployer becomes a provider — inheriting all 19 provider obligations — if they:

  • Put their own name or trademark on a high-risk AI system
  • Make a substantial modification to the system
  • Change the intended purpose of any AI system in a way that makes it high-risk

This is a critical consideration for companies that customize or rebrand third-party AI tools.
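The Article 25 triggers are disjunctive — any single one shifts the role. A minimal sketch (the parameter names are my own, not terms from the Act):

```python
def deployer_becomes_provider(rebrands_system: bool,
                              makes_substantial_modification: bool,
                              repurposes_into_high_risk: bool) -> bool:
    """Any one Article 25 trigger is enough: the deployer inherits
    the full set of 19 provider obligations."""
    return (rebrands_system
            or makes_substantial_modification
            or repurposes_into_high_risk)
```

In practice this means a company that merely white-labels a vendor's hiring tool under its own brand crosses the line, even without touching the underlying model.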

Timeline: When Do High-Risk Obligations Apply?

| Pathway | Application Date | Scope |
| --- | --- | --- |
| Annex III (Pathway 2) | August 2, 2026 | All 8 categories listed above |
| Annex I product-embedded (Pathway 1) | August 2, 2027 | Safety components of regulated products |

There is a transitional provision: high-risk AI systems already on the market before August 2, 2026 need to comply only if they undergo a significant design change after that date. The exception: systems intended for use by public authorities must be brought into compliance by August 2, 2030, regardless of design changes.

Getting Started

Classification is the foundation. If you are unsure whether your AI system is high-risk, the Witness AI System Classifier walks you through the decision tree in about three minutes — covering the Article 5 prohibitions check, Annex III category matching, and the Article 6(3) exception assessment. It identifies your risk level, your role, and the specific obligations that apply.
