What You Actually Have to Do
Obligations That Apply to Everyone
Before we get into the obligations specific to high-risk AI, two requirements apply to virtually everyone who touches AI.
AI Literacy (Article 4) — ALREADY IN EFFECT since February 2, 2025. Everyone who provides or deploys AI must ensure their staff have sufficient AI literacy: staff understand what AI is and how it works at a basic level, know its risks and limitations, are trained to use AI systems properly, and receive training appropriate to their role and context. This course you are taking right now helps satisfy this obligation.
Transparency (Article 50) — applies from August 2, 2026. If your AI interacts with people: tell them it is AI, unless that is obvious from the context. If your AI generates or manipulates content such as images, audio, video, or text: mark it as AI-generated in a machine-readable way. If your AI recognizes emotions or categorizes people biometrically: inform the people affected. If your AI creates deepfakes: label them clearly as artificially generated or manipulated.
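To make this concrete, here is a minimal Python sketch of what disclosure and marking could look like in practice. The function names, metadata fields, and disclosure wording are all assumptions for this example; Article 50 prescribes the outcome (disclosure and machine-readable marking), not any particular implementation.

```python
# Illustrative sketch only: names and fields are hypothetical, not mandated
# by the AI Act. Article 50 requires the result, not this implementation.
from dataclasses import dataclass, field


@dataclass
class GeneratedContent:
    """A piece of AI-generated content plus provenance metadata."""
    body: str
    metadata: dict = field(default_factory=dict)


def disclose_chatbot(first_message: str) -> str:
    """Prepend an AI disclosure to the opening message of a conversation,
    unless the AI nature is already obvious from context (Article 50(1))."""
    return "You are chatting with an AI system.\n\n" + first_message


def mark_as_ai_generated(content: GeneratedContent, model_name: str) -> GeneratedContent:
    """Attach a machine-readable marker to generated output (Article 50(2))."""
    content.metadata["ai_generated"] = True
    content.metadata["generator"] = model_name
    return content
```

The key design point is that the marking lives in machine-readable metadata rather than only in human-readable text, so downstream systems can detect it automatically.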
Provider Obligations for High-Risk AI
If you are the provider of a high-risk AI system, here are your core obligations under Article 16:
1. Risk Management System (Article 9): Establish a continuous process to identify, analyze, evaluate, and mitigate risks throughout the AI system's entire lifecycle — from design through deployment and retirement.
2. Data Governance (Article 10): Your training, validation, and testing data must be relevant, sufficiently representative, and as free of errors as possible. You must consider potential biases.
3. Technical Documentation (Article 11 + Annex IV): Prepare comprehensive documentation covering the system's general description, development process, monitoring, and risk management. Annex IV spells out the required contents in detail.
4. Logging (Article 12): Build automatic recording of events into the system so that its operation can be traced and monitored (a logging sketch follows this list).
5. Transparency (Article 13): Provide deployers with clear, understandable instructions about what the system does, its capabilities, limitations, and how to use it properly.
6. Human Oversight (Article 14): Design the system so humans can effectively supervise it, understand its outputs, and intervene or override when necessary.
7. Accuracy, Robustness, and Cybersecurity (Article 15): The system must achieve appropriate levels of accuracy and be resilient against errors, faults, and attempts to exploit its vulnerabilities.
8. Quality Management System (Article 17): Document your overall quality processes covering everything listed above.
9. Conformity Assessment (Article 43): Prove that your system meets all requirements before placing it on the market.
10. CE Marking (Article 48): Apply the CE marking to indicate compliance.
11. EU Database Registration (Article 49): Register the system in the EU public database.
12. Post-Market Monitoring (Article 72): Continue monitoring the system after deployment and collect data on its performance.
13. Incident Reporting (Article 73): Report serious incidents to the relevant national authority.
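To make item 4 concrete, here is a minimal Python sketch of automatic event logging. The logger name, file name, record fields, and the JSON-lines format are assumptions for illustration; Article 12 requires that relevant events be recorded automatically over the system's lifetime but does not mandate a format.

```python
# Minimal sketch of automatic, traceable event logging (Article 12).
# The "hiring_screener" system, field names, and file format are hypothetical.
import json
import logging
from datetime import datetime, timezone

logger = logging.getLogger("hiring_screener")
logging.basicConfig(filename="ai_events.jsonl", level=logging.INFO, format="%(message)s")


def log_event(event_type: str, **details) -> None:
    """Record one traceable event as a JSON line with a UTC timestamp."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event": event_type,
        **details,
    }
    logger.info(json.dumps(record))


# Example: record each prediction and any human override.
log_event("prediction", input_id="cand-1042", score=0.81, model_version="2.3.1")
log_event("human_override", input_id="cand-1042", reviewer="hr-17", decision="advance")
```

Logging both the automated output and the human intervention is what later lets you reconstruct who or what made a given decision.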
Deployer Obligations for High-Risk AI
If you are the deployer of a high-risk AI system, your obligations under Article 26 are:
1. Use the system according to the provider's instructions of use. Do not use the system for purposes the provider did not intend.
2. Ensure human oversight by people who have the necessary competence, training, and authority. The humans supervising the system must be able to understand its outputs and override it.
3. Monitor the system's operation and report issues to the provider. If you identify that the system presents a risk to health, safety, or fundamental rights, inform the provider and the relevant market surveillance authority and suspend its use.
4. Keep the logs generated by the system for at least six months, or longer if required by other EU or national law (a retention sketch follows this list).
5. Conduct a Fundamental Rights Impact Assessment (FRIA) before deploying high-risk AI in certain sensitive areas (Article 27). This is similar to a GDPR Data Protection Impact Assessment but focused on fundamental rights.
6. Report serious incidents to the provider and the relevant national authority.
7. Cooperate with national authorities — provide information and access when requested.
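As referenced in item 4, here is a minimal Python sketch of a retention check for deployer-held logs. The directory layout, file pattern, and the 183-day window are assumptions; treat six months as a floor, not a target, since other EU or national law may require you to keep logs longer.

```python
# Sketch of a retention check for deployer-held logs (Article 26 requires
# keeping them at least six months). Paths and the window are hypothetical.
from datetime import datetime, timedelta, timezone
from pathlib import Path

RETENTION = timedelta(days=183)  # roughly six months; extend if other law requires


def purge_expired_logs(log_dir: Path, now: datetime | None = None) -> list[Path]:
    """Delete log files older than the retention floor; return what was removed."""
    now = now or datetime.now(timezone.utc)
    removed = []
    for path in log_dir.glob("*.jsonl"):
        mtime = datetime.fromtimestamp(path.stat().st_mtime, tz=timezone.utc)
        if now - mtime > RETENTION:
            path.unlink()
            removed.append(path)
    return removed
```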
The FRIA — Fundamental Rights Impact Assessment
Article 27 requires deployers of high-risk AI in certain areas to conduct a Fundamental Rights Impact Assessment before deployment.
This applies specifically to deployers who are: bodies governed by public law, private entities providing public services, or deployers of certain high-risk systems like credit scoring or insurance risk assessment.
The FRIA must describe: the deployer's processes in which the AI system will be used, the period and frequency of use, the categories of people and groups likely to be affected, the specific risks of harm to those groups, the human oversight measures, and the measures to be taken if risks materialize.
Think of it as answering: Who is affected? What could go wrong for them? What safeguards are in place? The Witness FRIA Generator tool walks you through this entire process step by step.
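One way to make Article 27's required contents concrete is to treat them as a structured checklist. The class and field names in this Python sketch are mine, not the Act's, but each field corresponds to one element the FRIA must describe.

```python
# Hypothetical checklist structure mirroring Article 27's required contents.
from dataclasses import dataclass


@dataclass
class FundamentalRightsImpactAssessment:
    deployer_processes: str          # how the AI system fits into your processes
    period_and_frequency: str        # when and how often it will be used
    affected_groups: list[str]       # categories of persons and groups likely affected
    risks_of_harm: list[str]         # specific risks of harm to those groups
    oversight_measures: list[str]    # human oversight arrangements
    mitigation_measures: list[str]   # what happens if a risk materializes
```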