EU AI Act, Article 9 — Template for providers of high-risk AI systems
Identify and analyse the known and reasonably foreseeable risks that the high-risk AI system can pose to health, safety or fundamental rights when used in accordance with its intended purpose.
Describe the intended purpose, known risks, reasonably foreseeable risks and the sources that generate them (model choices, training data, deployment environment, user interaction).
Estimate and evaluate the risks that may emerge when the system is used in accordance with its intended purpose and under conditions of reasonably foreseeable misuse.
Cover likelihood, severity, risks from intended use, risks from reasonably foreseeable misuse, and the evaluation methodology applied.
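The likelihood-and-severity evaluation above is often recorded as a simple risk matrix. The sketch below is purely illustrative: the three-point scales, the score cut-offs and the function names are assumptions chosen for the example, not anything prescribed by the Act, and providers should substitute their own documented methodology.

```python
from enum import IntEnum

# Illustrative three-point scales; real templates may use finer granularity.
class Likelihood(IntEnum):
    RARE = 1
    POSSIBLE = 2
    LIKELY = 3

class Severity(IntEnum):
    MINOR = 1
    MODERATE = 2
    CRITICAL = 3

def risk_level(likelihood: Likelihood, severity: Severity) -> str:
    """Map a likelihood/severity pair onto a qualitative risk level.

    The multiplicative score and the band boundaries are example
    assumptions, not values taken from the AI Act.
    """
    score = likelihood * severity
    if score >= 6:
        return "high"
    if score >= 3:
        return "medium"
    return "low"
```

A risk identified as likely and critical would land in the "high" band and therefore demand the strongest mitigation measures; the key point is that the scoring rule is fixed and documented before risks are evaluated.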
Adopt appropriate and targeted risk management measures designed to address the risks identified, giving due consideration to the effects of their combined application.
Describe elimination or reduction measures (Art. 9(5)(a)); mitigation and control measures for risks that cannot be eliminated (Art. 9(5)(b)); information provided to deployers and training where appropriate (Art. 9(5)(c)); and how the combined effect of the measures has been considered (Art. 9(3)).
Describe the residual risks that remain after risk management measures have been applied, judge their acceptability and explain how this information is communicated to deployers.
Residual risks must be judged acceptable; each residual risk and relevant overall residual risk must be communicated to the deployer (Art. 9(5)).
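A residual-risk register can make the acceptability judgement and the deployer communication auditable. The record structure below is a minimal sketch under stated assumptions: the field names and the notion of an "open item" are hypothetical conveniences for this example, not terms from the Act.

```python
from dataclasses import dataclass

@dataclass
class ResidualRisk:
    """One entry in a residual-risk register (illustrative fields only)."""
    risk_id: str
    description: str
    level_after_mitigation: str      # e.g. "low", "medium", "high"
    judged_acceptable: bool
    communicated_to_deployer: bool = False
    rationale: str = ""

def open_items(register: list[ResidualRisk]) -> list[str]:
    """Return IDs of residual risks that are not yet judged acceptable
    or not yet communicated to the deployer, i.e. items still blocking
    sign-off under this example scheme."""
    return [r.risk_id for r in register
            if not r.judged_acceptable or not r.communicated_to_deployer]
```

Keeping both flags on each record means the register itself evidences that every residual risk was individually judged and individually communicated, rather than relying on a single blanket statement.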
Test the high-risk AI system to identify the most appropriate targeted risk management measures and to ensure it performs consistently for its intended purpose and complies with the requirements of Section 2 of Chapter III.
Specify testing procedures, performance metrics, prior defined metrics and probabilistic thresholds (Art. 9(7)), testing frequency and any results produced. Testing may be performed in real-world conditions in accordance with Article 60 where appropriate.
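Because the thresholds must be defined prior to testing, one simple discipline is to keep them in a fixed lookup that the test harness reads but never writes. The helper below is an assumed sketch: the metric names, the example threshold values and the higher-is-better convention are all illustrative, and metrics where lower is better (such as error rates) would need the comparison inverted.

```python
def passes_threshold(metric_name: str,
                     measured: float,
                     thresholds: dict[str, float]) -> bool:
    """Check a measured metric against its pre-defined threshold.

    Assumes higher-is-better metrics (e.g. recall, accuracy); the
    thresholds mapping stands in for the prior defined values the
    provider documents before testing begins.
    """
    if metric_name not in thresholds:
        # A metric without a pre-defined threshold cannot be evaluated.
        raise KeyError(f"no prior defined threshold for {metric_name!r}")
    return measured >= thresholds[metric_name]
```

Failing loudly on an unknown metric, rather than defaulting to pass, keeps the harness honest about which metrics were actually defined in advance.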
Operate the risk management system as a continuous iterative process planned and run across the entire lifecycle of the high-risk AI system, with regular systematic review and updates.
Cover the lifecycle plan, triggers that require an update, review cadence, documentation management and how data from the Article 72 post-market monitoring system feeds back into the risk management system (Art. 9(2)(e)).
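The review-trigger logic described above can be made explicit so that it is the same in the documentation and in practice. In this sketch the trigger catalogue and the 180-day default cadence are invented examples for illustration; each provider would define its own events and cadence in the lifecycle plan.

```python
# Hypothetical trigger catalogue; real triggers come from the provider's
# documented lifecycle plan and post-market monitoring findings.
TRIGGERS = {
    "model_retrained",
    "intended_purpose_changed",
    "serious_incident_reported",
    "post_market_drift_detected",
}

def review_required(events: set[str],
                    days_since_last_review: int,
                    review_cadence_days: int = 180) -> bool:
    """A review is required when any catalogued trigger event has
    occurred, or when the regular review cadence has elapsed,
    whichever comes first."""
    return bool(events & TRIGGERS) or days_since_last_review >= review_cadence_days
```

Treating the cadence and the event triggers as independent conditions ensures that a quiet period without incidents still produces the regular systematic review the process requires.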
Where the high-risk AI system is likely to be accessed by or have an impact on children or other vulnerable groups, assess whether the risk management measures give appropriate consideration to those groups.
State whether the system is accessible to children, identify vulnerable groups potentially impacted, and describe any additional safeguards. Record any integration with financial-sector risk management frameworks if applicable (Art. 9(10)).