EU AI Act, Article 27 — Fundamental Rights Impact Assessment (FRIA) template for deployers of high-risk AI systems
1. Describe the deployer's processes in which the high-risk AI system will be used, in line with its intended purpose.
Cover deployment context, intended purpose, operational environment and the types of decisions the system supports.
2. For how long and how frequently will the system be used?
Include start date, planned duration, usage cadence, expected decision volume and review schedule.
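For deployers who keep the assessment as a structured, machine-readable record, these fields could be captured in a small data structure. The sketch below is a minimal illustration in Python; every field name is an assumption, not part of the Act or of any official template.

    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class UsageSchedule:
        """Illustrative record of when and how often the system is used."""
        start_date: date                   # first date of productive use
        planned_duration_months: int       # planned deployment period
        usage_cadence: str                 # e.g. "continuous" or "daily batch"
        expected_decisions_per_month: int  # expected decision volume
        review_interval_months: int        # how often this assessment is revisited

    schedule = UsageSchedule(
        start_date=date(2026, 1, 1),
        planned_duration_months=24,
        usage_cadence="daily batch",
        expected_decisions_per_month=5000,
        review_interval_months=6,
    )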
3. Which categories of natural persons and groups are likely to be affected by the system's use?
Identify primary affected groups, vulnerable groups, geographic scope, estimated number of affected persons and indirectly affected parties.
4. What specific risks of harm are likely to impact the identified groups?
Take into account the information supplied by the provider under Article 13. For each harm, document its likelihood and severity, any cumulative effects, and specific risks of discrimination and privacy intrusion.
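A common way to record likelihood and severity is a simple risk matrix, consistent with the Act's definition of risk in Article 3(2) as the combination of the probability of harm and its severity. The Python sketch below scores each harm on assumed 1-to-5 scales; the scales, the escalation threshold and the field names are illustrative choices, not values the Act prescribes.

    from dataclasses import dataclass

    @dataclass
    class RiskEntry:
        """Illustrative risk-register entry for one identified harm."""
        harm: str             # description of the specific harm
        affected_group: str   # a group identified in section 3
        likelihood: int       # 1 (rare) to 5 (almost certain), assumed scale
        severity: int         # 1 (negligible) to 5 (critical), assumed scale

        @property
        def score(self) -> int:
            # Combine probability and severity, mirroring Article 3(2).
            return self.likelihood * self.severity

    entry = RiskEntry(
        harm="wrongful denial of a benefit claim",
        affected_group="applicants with incomplete records",
        likelihood=2,
        severity=4,
    )
    needs_escalation = entry.score >= 12  # assumed escalation threshold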
5. Describe the human oversight measures that will be implemented in accordance with the instructions for use.
Cover oversight roles, intervention mechanisms, escalation procedures, override capability and monitoring frequency.
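Where oversight arrangements are kept alongside the deployment configuration, they might be recorded as follows. This is a sketch under assumed key names; nothing here is mandated by the Act or by the instructions for use.

    # Illustrative oversight configuration; all key names are assumptions.
    oversight_config = {
        "oversight_roles": ["case reviewer", "senior caseworker"],
        "intervention": "reviewer can pause or reject any individual output",
        "escalation_path": ["case reviewer", "team lead", "system owner"],
        "override_capability": True,  # humans can overrule the system's output
        "monitoring_frequency": "weekly sample review of 5% of decisions",
    }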
6. What measures will be taken if the identified risks materialise, including governance and complaint-handling arrangements?
Describe preventive measures, corrective measures, the complaint mechanism, the governance structure, review triggers and the notification plan.
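If complaints and incidents are logged centrally, a minimal record could tie each event to the corrective measure taken and to any review it triggers. The Python sketch below is illustrative only; the record layout and field names are assumptions.

    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class IncidentRecord:
        """Illustrative log entry for a materialised risk or complaint."""
        reported_on: date
        description: str
        corrective_measure: str           # action taken to remedy the harm
        triggers_review: bool             # whether this event reopens the assessment
        notified_authority: bool = False  # see the notification section below

    incident = IncidentRecord(
        reported_on=date(2026, 3, 14),
        description="complaint: applicant flagged as high risk in error",
        corrective_measure="manual re-assessment and input data audit",
        triggers_review=True,
    )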
7. Have you conducted a GDPR Article 35 Data Protection Impact Assessment (DPIA)? If yes, how does this FRIA complement it?
Reference the DPIA, note the areas it covers and explain the additional fundamental-rights impacts addressed here.
8. If a risk identified under section 4 cannot be mitigated, describe how the relevant market surveillance authority will be notified without undue delay.
Include the responsible role, reporting channel and information to be provided.
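Before it is sent through the chosen reporting channel, this information could be assembled into a single notification record. The structure below is a sketch only; it is not an official reporting format, and all field names are assumptions.

    # Illustrative notification payload; field names are assumptions.
    notification = {
        "responsible_role": "AI compliance officer",
        "authority": "relevant national market surveillance authority",
        "system": "name and version of the high-risk AI system",
        "unmitigated_risk": "reference to the relevant entry in section 4",
        "measures_attempted": ["threshold recalibration", "added human review"],
        "submitted_without_undue_delay": True,
    }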