Key terms and definitions from Regulation (EU) 2024/1689 (the EU AI Act), explained in plain language.
AI literacy: The obligation for providers and deployers to ensure that their staff and other persons dealing with AI systems on their behalf have a sufficient level of AI literacy. This applies to all AI systems regardless of risk level and is one of the first provisions to take effect (February 2, 2025).
AI system: A machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment, and that infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.
Annex III: The annex listing 8 high-risk areas with specific use cases that automatically classify an AI system as high-risk. The areas include biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration, and justice/democracy.
Annex IV: The annex specifying the required contents of the technical documentation that providers of high-risk AI systems must draw up and maintain. It covers 9 sections, from a general description of the system to post-market monitoring.
Biometric identification: The automated recognition of physical, physiological, or behavioural human features for the purpose of establishing the identity of a natural person by comparing their biometric data to stored reference data. A one-to-many comparison.
CE marking: The marking that must be affixed to high-risk AI systems to indicate conformity with the AI Act. It must be visible, legible, and indelible, and is affixed before the system is placed on the market.
Conformity assessment: The process of verifying that a high-risk AI system meets the requirements of the AI Act before it can be placed on the market. Most systems follow the internal-control self-assessment route (Annex VI), while biometric systems can require third-party assessment by a notified body (Annex VII).
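The route choice described above can be sketched as a small decision function. This is a simplified illustration under stated assumptions, not legal logic: the function name, parameters, and return strings are hypothetical, and real route selection under Article 43 involves further conditions.

```python
def conformity_route(is_biometric_identification: bool,
                     applied_harmonised_standards: bool) -> str:
    """Pick the conformity assessment route for a high-risk AI system.

    Simplified sketch only: biometric identification systems generally
    need a notified body, unless the provider has fully applied the
    relevant harmonised standards; everything else may self-assess.
    """
    if is_biometric_identification and not applied_harmonised_standards:
        # Third-party assessment (Annex VII) by a notified body.
        return "third-party (Annex VII, notified body)"
    # Internal control (Annex VI) self-assessment.
    return "self-assessment (Annex VI internal control)"
```

For example, a CV-screening tool that is high-risk but not biometric would fall into the self-assessment branch regardless of the standards flag.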
Deployer: A natural or legal person, public authority, agency, or other body using an AI system under its authority, except where the AI system is used in the course of a personal, non-professional activity.
Emotion recognition system: An AI system for the purpose of identifying or inferring the emotions or intentions of natural persons on the basis of their biometric data. Use in workplaces and educational institutions is prohibited under Article 5, except for medical or safety reasons.
EU database for high-risk AI systems: A public EU database in which high-risk AI systems must be registered before being placed on the market or put into service. It is managed by the European Commission and accessible to the general public free of charge.
EU declaration of conformity: A document drawn up by the provider for each high-risk AI system, stating that the system meets the requirements of the AI Act. It must be kept up to date and made available to national competent authorities for 10 years after the system is placed on the market.
Fundamental rights impact assessment (FRIA): An assessment that certain deployers of high-risk AI systems must conduct before putting the system into use. It is required for public bodies and for private entities providing public services such as healthcare, education, and insurance.
General-purpose AI model (GPAI model): An AI model that is trained on a large amount of data using self-supervision at scale, displays significant generality, can competently perform a wide range of distinct tasks, and can be integrated into downstream AI systems or applications. Subject to specific transparency and documentation obligations.
Harmonised standards: Technical standards adopted by the European standardisation bodies (CEN, CENELEC, ETSI) at the request of the Commission. AI systems that comply with harmonised standards enjoy a presumption of conformity with the AI Act requirements those standards cover.
High-risk AI system: An AI system that falls under one of the use cases listed in Annex III, or that is a safety component of a product covered by the existing EU harmonisation legislation listed in Annex I. High-risk systems face the most comprehensive obligations under the AI Act.
Intended purpose: The use for which an AI system is intended by the provider, including the specific context and conditions of use, as specified in the information supplied by the provider in the instructions for use, promotional materials, and technical documentation.
Limited risk: AI systems subject to the transparency obligations of Article 50. Users must be informed that they are interacting with AI. Includes chatbots, emotion recognition systems, and AI-generated or manipulated content (deepfakes).
Market surveillance authority: The national authority responsible for carrying out market surveillance of AI systems. Each Member State must designate at least one such authority to supervise the application of the AI Act in its territory.
Minimal risk: AI systems that do not fall into the prohibited, high-risk, or limited-risk categories. These systems face no specific obligations under the AI Act, although voluntary codes of conduct are encouraged.
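The four risk tiers that run through these definitions form a simple precedence order: prohibited practices trump everything, then high-risk, then transparency-only ("limited risk"), then minimal risk. A minimal sketch of that decision logic, with hypothetical parameter names and the caveat that real classification requires legal analysis of the concrete use case:

```python
def risk_tier(prohibited_practice: bool,
              annex_iii_use_case: bool,
              safety_component_annex_i: bool,
              transparency_obligation: bool) -> str:
    """Classify an AI system into the AI Act's four risk tiers.

    Simplified sketch only; the flags are assumed inputs, and the
    tiers are checked in order of precedence.
    """
    if prohibited_practice:
        return "prohibited"      # Article 5 practices are banned outright
    if annex_iii_use_case or safety_component_annex_i:
        return "high-risk"       # full compliance obligations apply
    if transparency_obligation:
        return "limited risk"    # Article 50 transparency duties
    return "minimal risk"        # no specific obligations; voluntary codes
```

For example, a spam filter with none of the flags set lands in the minimal-risk tier, while a recruitment-screening tool (an Annex III use case) is high-risk even if it also triggers transparency duties.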
Notified body: An independent organisation designated by an EU Member State to carry out third-party conformity assessments for certain high-risk AI systems, particularly those involving biometric identification.
Placing on the market: The first making available of an AI system or a general-purpose AI model on the Union market. This triggers key compliance obligations for providers.
Post-market monitoring system: A system that providers of high-risk AI systems must establish and document to actively and systematically collect, analyse, and evaluate data on the performance of their AI systems throughout their lifetime.
Provider: A natural or legal person, public authority, agency, or other body that develops an AI system or a general-purpose AI model, or that has one developed, and places it on the market or puts it into service under its own name or trademark, whether for payment or free of charge.
Putting into service: The supply of an AI system for first use, directly to the deployer or for own use, in the Union for its intended purpose. Distinct from placing on the market: it refers to actual deployment rather than making available.
Real-time remote biometric identification system: A biometric identification system in which the capturing of biometric data, the comparison, and the identification all occur without significant delay. Its use in publicly accessible spaces for law enforcement purposes is prohibited, with narrow exceptions requiring prior authorisation by a judicial authority or an independent administrative authority.
Reasonably foreseeable misuse: The use of an AI system in a way that is not in accordance with its intended purpose but that may result from reasonably foreseeable human behaviour or interaction with other systems. Providers must account for such misuse in their risk management.
AI regulatory sandbox: A controlled framework set up by a competent authority to allow the development, testing, and validation of innovative AI systems for a limited time under regulatory oversight. At least one sandbox must be operational in each Member State by August 2, 2026.
Risk management system: A continuous, iterative process planned and run throughout the entire lifecycle of a high-risk AI system. It involves identifying risks, estimating and evaluating them, adopting risk management measures, and testing those measures.
Serious incident: An incident or malfunction of an AI system that directly or indirectly leads to death, serious harm to health or fundamental rights, serious damage to property, or serious harm to the environment. It must be reported to the market surveillance authorities.
Systemic risk: A risk specific to the high-impact capabilities of general-purpose AI models. A GPAI model is classified as having systemic risk if it has high-impact capabilities or if it is designated as such by the Commission based on specific criteria.
Technical documentation: Comprehensive documentation that providers of high-risk AI systems must prepare before placing their system on the market. It must be kept up to date and made available to national competent authorities upon request.
Prohibited AI practices: AI practices banned outright because they pose a clear threat to people's safety, livelihoods, and fundamental rights. These include social scoring, manipulative AI, real-time remote biometric identification in publicly accessible spaces (with narrow exceptions), and emotion recognition in workplaces and educational institutions.