The EU AI Act: A Compliance Checklist for 2026
What Is the EU AI Act?
The EU AI Act (Regulation 2024/1689) is the world's first comprehensive legal framework for artificial intelligence. Adopted in June 2024 and published in the Official Journal of the European Union on July 12, 2024, it establishes harmonised rules for the development, deployment, and use of AI systems across the European Union.
Unlike voluntary frameworks or industry guidelines, the EU AI Act carries real legal force. It applies to any organisation that places an AI system on the EU market or puts one into service within the EU — regardless of where that organisation is headquartered. If your AI system affects people in the EU, this regulation likely applies to you.
The August 2, 2026 Deadline
The EU AI Act entered into force on August 1, 2024, but its obligations are being phased in over time. The most significant wave of requirements — covering high-risk AI systems — takes effect on August 2, 2026. After this date, organisations deploying or providing high-risk AI systems must be fully compliant with the regulation's requirements for technical documentation, risk management, human oversight, data governance, and more.
Some provisions are already enforceable. The ban on prohibited AI practices (Article 5) and AI literacy requirements (Article 4) took effect on February 2, 2025, and general-purpose AI model obligations followed on August 2, 2025. But for most companies building or using AI, August 2, 2026 is the date that matters.
Who Needs to Comply?
The EU AI Act assigns different obligations depending on your role in the AI value chain:
- Providers — Organisations that develop an AI system or have one developed on their behalf and place it on the market or put it into service under their own name or trademark. Providers bear the heaviest compliance burden, including technical documentation, conformity assessment, and post-market monitoring.
- Deployers — Organisations that use an AI system under their authority. Deployers must ensure human oversight, monitor the system in operation, and in some cases conduct a Fundamental Rights Impact Assessment (FRIA).
- Importers and Distributors — Entities in the supply chain that bring AI systems into the EU market or make them available. They have verification and record-keeping obligations.
Note that you can be both a provider and a deployer simultaneously — for example, if you develop an AI system internally and also use it in your own operations.
Step 1: Determine If Your System Is an AI System
The first question is whether your system falls under the regulation at all. Article 3(1) defines an AI system as a machine-based system that is designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers from the input it receives how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.
Not every piece of software qualifies. Simple rule-based automation, traditional statistical methods, or basic data processing may fall outside the definition. The key indicators are autonomy, adaptiveness, and inference-based output generation.
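If you want a first rough pass before involving counsel, the definitional indicators can be phrased as a simple checklist. The Python sketch below is purely illustrative; `SystemProfile` and `likely_ai_system` are hypothetical names, and only a proper legal assessment can settle borderline cases.

```python
from dataclasses import dataclass

@dataclass
class SystemProfile:
    """Answers to the three definitional indicators in Article 3(1)."""
    operates_autonomously: bool    # designed to operate with some level of autonomy
    adapts_after_deployment: bool  # may exhibit adaptiveness after deployment
    infers_outputs: bool           # infers predictions, content, recommendations,
                                   # or decisions from the input it receives

def likely_ai_system(profile: SystemProfile) -> bool:
    """Rough triage only. Autonomy and inference are the load-bearing
    indicators; adaptiveness is optional ('may exhibit') in the definition."""
    return profile.operates_autonomously and profile.infers_outputs

# A fixed, rule-based workflow engine scores False on all three indicators
# and likely falls outside the definition.
legacy_tool = SystemProfile(False, False, False)
print(likely_ai_system(legacy_tool))  # False
```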
Step 2: Classify Your Risk Level
The EU AI Act uses a risk-based approach with four tiers (a short triage sketch follows the descriptions below):
Unacceptable Risk (Prohibited) — Certain AI practices are banned outright under Article 5. These include social scoring, real-time remote biometric identification in publicly accessible spaces for law enforcement purposes (with narrow exceptions), manipulation through subliminal techniques, and exploitation of vulnerabilities of specific groups. If your system falls here, it cannot be deployed in the EU.
High Risk — AI systems listed in Annex III or used as safety components of products covered by EU harmonisation legislation are classified as high-risk. Annex III covers areas like biometric identification, critical infrastructure, education, employment, access to essential services (including credit scoring and insurance), law enforcement, migration, and administration of justice. High-risk systems face the most extensive compliance obligations.
Limited Risk — Systems that interact with people (chatbots), generate synthetic content (deepfakes), or perform emotion recognition or biometric categorisation must meet transparency obligations under Article 50. Users must be informed they are interacting with an AI or viewing AI-generated content.
Minimal Risk — AI systems that do not fall into any of the above categories are subject to minimal regulation. The EU encourages voluntary codes of conduct but imposes no mandatory requirements beyond the general AI literacy obligation.
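To make the tiering concrete, here is a minimal triage sketch in Python. It deliberately compresses Articles 5 and 6, Annex III, and Article 50 into four yes/no questions, so treat it as an orientation aid under those assumptions, not as a classification tool.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "unacceptable risk (Article 5)"
    HIGH = "high risk (Article 6 / Annex III)"
    LIMITED = "limited risk (Article 50 transparency)"
    MINIMAL = "minimal risk"

def classify(prohibited_practice: bool,
             annex_iii_use_case: bool,
             safety_component: bool,
             interacts_or_generates: bool) -> RiskTier:
    # The strictest applicable tier wins, so check in descending order.
    if prohibited_practice:
        return RiskTier.PROHIBITED
    if annex_iii_use_case or safety_component:
        return RiskTier.HIGH
    if interacts_or_generates:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

# A CV-screening tool: employment is an Annex III area, so it is high risk
# even though it also interacts with users.
print(classify(False, True, False, True).value)  # high risk (Article 6 / Annex III)
```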
Step 3: Identify Your Role
Understanding whether you are a provider, deployer, or both determines which specific obligations apply to you. Article 25 also introduces the concept of "role shifting" — if a deployer substantially modifies a high-risk AI system or puts their own name on it, they become a provider and inherit provider obligations.
Ask yourself the following questions; the sketch after this list shows one way to map the answers to roles:
- Did you develop or commission the development of this AI system?
- Are you placing it on the market under your own name or trademark?
- Are you using an AI system developed by someone else?
- Have you made substantial modifications to a system you acquired?
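One way to turn those answers into a provisional role assignment is sketched below. The function name and the question-to-role mapping are illustrative assumptions; edge cases such as white-labelling or what counts as a "substantial" modification need legal review.

```python
def determine_roles(developed_or_commissioned: bool,
                    own_name_or_trademark: bool,
                    uses_system_in_operations: bool,
                    substantially_modified: bool) -> set[str]:
    """Hypothetical mapping of the four questions above to roles.
    Article 25 role shifting: substantially modifying a high-risk system
    (or rebranding it) makes a deployer a provider."""
    roles: set[str] = set()
    if (developed_or_commissioned and own_name_or_trademark) or substantially_modified:
        roles.add("provider")
    if uses_system_in_operations:
        roles.add("deployer")
    return roles

# A system you built and also use in your own operations: both roles apply.
print(determine_roles(True, True, True, False))  # {'provider', 'deployer'}
```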
Step 4: Complete Required Technical Documentation
For high-risk AI systems, Annex IV of the regulation prescribes detailed technical documentation requirements. This documentation must be prepared before the system is placed on the market and kept up to date throughout its lifecycle.
Key sections include:
- General description of the AI system and its intended purpose
- Detailed description of system components and development process
- Information about training, validation, and testing data
- Performance metrics and accuracy levels
- Description of the risk management system
- Changes made throughout the system lifecycle
- Applicable harmonised standards or common specifications
- EU declaration of conformity
This is often the most time-consuming part of compliance. If you already have internal product documentation, risk assessments, or data governance policies, you are likely further along than you think.
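A lightweight way to manage this work is to track the Annex IV sections as a living checklist in version control. The snippet below is an illustrative tracker, not an official template; the section names paraphrase the list above.

```python
# Illustrative Annex IV tracker -- section names paraphrase the list above
# and are not an official template.
ANNEX_IV_SECTIONS = [
    "General description and intended purpose",
    "System components and development process",
    "Training, validation, and testing data",
    "Performance metrics and accuracy levels",
    "Risk management system",
    "Lifecycle changes",
    "Harmonised standards or common specifications",
    "EU declaration of conformity",
]

def documentation_gaps(completed: set[str]) -> list[str]:
    """Annex IV sections still to be drafted, in the order they appear."""
    return [section for section in ANNEX_IV_SECTIONS if section not in completed]

# Starting from existing product docs that already cover two sections:
done = {"General description and intended purpose",
        "Performance metrics and accuracy levels"}
for gap in documentation_gaps(done):
    print("TODO:", gap)
```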
Step 5: Implement a Risk Management System
Article 9 requires providers of high-risk AI systems to establish, implement, document, and maintain a risk management system. This is not a one-time exercise — it must be a continuous, iterative process that runs throughout the entire lifecycle of the AI system.
The risk management system must:
- Identify and analyse known and reasonably foreseeable risks
- Estimate and evaluate risks that may emerge during intended use and reasonably foreseeable misuse
- Evaluate risks based on post-market monitoring data
- Adopt suitable risk management measures
- Ensure that residual risk is acceptable
Document your risk identification methodology, your assessment criteria, the mitigation measures you have implemented, and any residual risks you have accepted along with justification.
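In practice, much of this reduces to keeping a disciplined risk register. The sketch below shows one plausible shape for a register entry, assuming a simple in-house tool; the field names are illustrative, not prescribed by the regulation.

```python
from dataclasses import dataclass, field

@dataclass
class RiskEntry:
    """One row of a hypothetical Article 9 risk register."""
    risk: str                       # known or reasonably foreseeable risk
    source: str                     # e.g. design review, post-market monitoring
    mitigations: list[str] = field(default_factory=list)
    residual_risk_accepted: bool = False
    justification: str = ""         # required whenever residual risk is accepted

register = [
    RiskEntry(
        risk="Reduced accuracy for under-represented demographic groups",
        source="validation testing",
        mitigations=["rebalanced training data", "per-group accuracy thresholds"],
        residual_risk_accepted=True,
        justification="Remaining gap within documented tolerance; reviewed quarterly.",
    ),
]
print(len(register), "risk(s) on the register")
```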
Step 6: Ensure Human Oversight
Article 14 requires that high-risk AI systems be designed to allow effective oversight by natural persons. The specific oversight measures depend on the nature and risk level of the system, but generally include:
- The ability for a human to understand the system's capabilities and limitations
- The ability to correctly interpret the system's output
- The ability to decide not to use the system or to disregard, override, or reverse its output
- The ability to interrupt or stop the system's operation
Document who is responsible for oversight, what training they have received, and what mechanisms exist for intervention.
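A minimal way to capture that documentation is a structured oversight record per system. The structure below is a hypothetical example of what such a record might hold; none of the field names come from the regulation itself.

```python
from dataclasses import dataclass

@dataclass
class OversightRecord:
    """Hypothetical record of Article 14 oversight arrangements for one system."""
    responsible_person: str
    training_completed: str      # evidence the overseer understands the system
    can_override_output: bool    # disregard, override, or reverse the output
    can_stop_system: bool        # interrupt or halt operation
    intervention_procedure: str  # where the override/stop mechanism is documented

record = OversightRecord(
    responsible_person="credit-risk team lead",
    training_completed="system-specific oversight training, January 2026",
    can_override_output=True,
    can_stop_system=True,
    intervention_procedure="runbook section 4: manual review queue and kill switch",
)
assert record.can_override_output and record.can_stop_system
```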
Step 7: Conduct a Conformity Assessment
Before placing a high-risk AI system on the EU market, providers must undergo a conformity assessment (Article 43). For most high-risk AI systems, this can be done through internal assessment based on Annex VI. For remote biometric identification systems, however, third-party assessment by a notified body under Annex VII is required unless the provider has fully applied harmonised standards covering the relevant requirements.
After successful assessment, providers must:
- Draw up an EU declaration of conformity
- Affix the CE marking to the system
- Register the system in the EU database
Penalties for Non-Compliance
The penalty structure is designed to be meaningful. Under Article 99:
- Prohibited practices: Up to 35 million EUR or 7% of total worldwide annual turnover, whichever is higher
- High-risk obligations: Up to 15 million EUR or 3% of global turnover
- Incorrect information to authorities: Up to 7.5 million EUR or 1% of global turnover
For SMEs, the regulation specifies that the lower of the two amounts (fixed or percentage-based) applies, providing some proportionality.
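The "whichever is higher" rule, and its inversion for SMEs, is simple arithmetic, as the sketch below shows with illustrative turnover figures. Treat it as an aid for sizing exposure, not legal advice.

```python
def fine_cap_eur(turnover_eur: float, fixed_cap_eur: float,
                 pct: float, is_sme: bool = False) -> float:
    """Article 99 arithmetic: the higher of the two caps applies,
    except for SMEs, where the lower one does."""
    pct_cap = turnover_eur * pct
    return min(fixed_cap_eur, pct_cap) if is_sme else max(fixed_cap_eur, pct_cap)

# Prohibited-practice tier (35M EUR / 7%) for 1 billion EUR turnover:
print(fine_cap_eur(1_000_000_000, 35_000_000, 0.07))            # 70000000.0
# The same tier for an SME with 10 million EUR turnover:
print(fine_cap_eur(10_000_000, 35_000_000, 0.07, is_sme=True))  # 700000.0
```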
These are not theoretical numbers. EU authorities have demonstrated willingness to enforce technology regulations aggressively, as seen with GDPR fines exceeding 4 billion EUR since 2018.
Getting Started
The compliance deadline is less than five months away. The companies that start now will have time to do this properly. Those that wait until July will be scrambling.
Witness provides free tools to help you get started. Our AI System Classifier determines your risk level in about three minutes, and our compliance toolkit walks you through each documentation requirement with guidance tied directly to the regulation's articles. No consultants, no six-figure contracts — just the tools you need to get compliant.
Check Whether the EU AI Act Applies to You
Free classification in 3 minutes. No sign-up required.
Get started now