Risk Management System (RMS)

EU AI Act, Article 9 — Template for providers of high-risk AI systems

Who must establish a Risk Management System. Under Article 9(1) of Regulation (EU) 2024/1689, providers of high-risk AI systems must establish, implement, document and maintain a risk management system. Per Article 9(2), it must run as a continuous iterative process across the entire lifecycle of the system, with regular systematic review and updates informed by testing and by data gathered from the post-market monitoring system (Article 9(2)(c)).

Organisation details

Organisation name
AI system name
System purpose
RMS owner
Date of assessment
Version

1. Risk identification — Art. 9(2)(a)

Identify and analyse the known and reasonably foreseeable risks that the high-risk AI system can pose to health, safety or fundamental rights when used in accordance with its intended purpose.

Describe the intended purpose, known risks, reasonably foreseeable risks and the sources that generate them (model choices, training data, deployment environment, user interaction).
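The risk sources listed above can be captured in a structured risk register. The sketch below is purely illustrative: the Act prescribes no data model, and every field name here is an assumption a provider would adapt to its own documentation system.

```python
from dataclasses import dataclass, field

@dataclass
class RiskEntry:
    """One row of an illustrative risk register (field names are not prescribed by the Act)."""
    risk_id: str
    description: str          # e.g. "biased scoring outcomes for under-represented groups"
    affected_interest: str    # "health", "safety" or "fundamental rights"
    source: str               # model choice, training data, deployment environment, user interaction
    foreseeable_misuse: bool  # True if the risk arises from reasonably foreseeable misuse
    mitigations: list[str] = field(default_factory=list)

# Hypothetical example entry for a credit-scoring system:
register = [
    RiskEntry("R-001", "Discriminatory outcomes from skewed training data",
              "fundamental rights", "training data", False),
]
```

Keeping one record per identified risk makes it straightforward to trace each risk through estimation (Section 2), measures (Section 3) and residual-risk review (Section 4).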

2. Risk estimation and evaluation — Art. 9(2)(b)

Estimate and evaluate the risks that may emerge when the system is used in accordance with its intended purpose and under conditions of reasonably foreseeable misuse.

Cover likelihood, severity, risks from intended use, risks from reasonably foreseeable misuse, and the evaluation methodology applied.
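One common estimation methodology is a likelihood-by-severity matrix. The Act mandates no particular scale, so the 5x5 bands below are an assumption that a provider would define and justify in its own methodology description.

```python
# Illustrative 5x5 likelihood/severity scoring; scales and band cut-offs are
# assumptions, not requirements of the Act.
LIKELIHOOD = {"rare": 1, "unlikely": 2, "possible": 3, "likely": 4, "almost_certain": 5}
SEVERITY = {"negligible": 1, "minor": 2, "moderate": 3, "major": 4, "critical": 5}

def risk_score(likelihood: str, severity: str) -> int:
    """Score a risk as the product of its likelihood and severity ratings."""
    return LIKELIHOOD[likelihood] * SEVERITY[severity]

def risk_band(score: int) -> str:
    """Map a score to a qualitative band used to prioritise measures."""
    if score >= 15:
        return "high"    # prioritise elimination or reduction (Art. 9(5)(a))
    if score >= 8:
        return "medium"  # mitigation and control measures (Art. 9(5)(b))
    return "low"

# e.g. a "likely" (4) x "major" (4) risk scores 16 and lands in the "high" band
```

Whatever scale is chosen, the same scoring must be applied to both intended-use risks and reasonably foreseeable misuse.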

3. Risk management measures — Art. 9(2)(d), 9(3), 9(5)

Adopt appropriate and targeted risk management measures designed to address the risks identified, giving due consideration to the effects of their combined application.

Describe elimination or reduction measures (Art. 9(5)(a)), mitigation and control measures for risks that cannot be eliminated (Art. 9(5)(b)), information provided to deployers and training where appropriate (Art. 9(5)(c)), and how the combined effect of measures has been considered (Art. 9(3)).

4. Residual risk and communication to deployers — Art. 9(5)

Describe the residual risks that remain after risk management measures have been applied, judge their acceptability and explain how this information is communicated to deployers.

The relevant residual risk associated with each hazard, as well as the overall residual risk, must be judged acceptable and communicated to deployers (Art. 9(5)).

5. Testing — Art. 9(6)–(8)

Test the high-risk AI system to identify the most appropriate targeted risk management measures and to ensure it performs consistently for its intended purpose and complies with the requirements of Section 2 of Chapter III.

Specify testing procedures, performance metrics, prior defined probabilistic thresholds (Art. 9(8)), testing frequency and any results produced. Testing may be conducted under real-world conditions in accordance with Article 60 where appropriate (Art. 9(7)).
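A gate that compares measured metrics against the prior defined thresholds can make the pass/fail decision auditable. In this sketch the metric names and threshold values are illustrative assumptions; the provider defines the actual metrics and thresholds appropriate to the system's intended purpose.

```python
# Prior defined probabilistic thresholds (values are illustrative assumptions).
THRESHOLDS = {"accuracy": 0.95, "false_positive_rate": 0.02}

def passes_thresholds(metrics: dict[str, float]) -> bool:
    """Return True only if every prior defined threshold is met."""
    if metrics["accuracy"] < THRESHOLDS["accuracy"]:
        return False
    if metrics["false_positive_rate"] > THRESHOLDS["false_positive_rate"]:
        return False
    return True
```

Recording each test run's metrics alongside the thresholds in force at the time supports the documentation duty under Article 9(1).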

6. Continuous iteration, review and post-market integration — Art. 9(2)

Operate the risk management system as a continuous iterative process planned and run across the entire lifecycle of the high-risk AI system, with regular systematic review and updates.

Cover the lifecycle plan, the triggers that require an update, review cadence, documentation management, and how data from the Article 72 post-market monitoring system feeds back into the RMS (Art. 9(2)(c)).

7. Specific considerations: children and vulnerable groups — Art. 9(9)

Where the high-risk AI system is likely to be accessed by or have an impact on children or other vulnerable groups, assess whether the risk management measures give appropriate consideration to those groups.

State whether the system is accessible to children, identify vulnerable groups potentially impacted, and describe any additional safeguards. Where the provider is subject to internal risk management requirements under other Union law, record how this RMS is part of, or combined with, those procedures (Art. 9(10)).