
EU AI Act Glossary

Key terms and definitions from Regulation (EU) 2024/1689, explained in plain language.

A

AI Literacy

Art. 4

The obligation for providers and deployers to ensure that their staff and other persons dealing with AI systems on their behalf have a sufficient level of AI literacy. This applies to all AI systems regardless of risk level and was one of the first provisions to take effect (February 2, 2025).

Related Tool: Academy

AI System

Art. 3(1)

A machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment, and that infers from input how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.

Related Tool: AI System Classifier

Annex III

Annex III

The annex listing 8 high-risk areas with specific use cases that automatically classify an AI system as high-risk. Areas include biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration, and justice/democracy.

Annex IV

Annex IV

The annex specifying the required contents of the technical documentation that providers of high-risk AI systems must draw up and maintain. Covers 9 sections from general system description to post-market monitoring.

Related Tool: Documentation Generator
B

Biometric Identification

Art. 3(35)

The automated recognition of physical, physiological, or behavioural human features for the purpose of establishing the identity of a natural person by comparing biometric data to stored reference data. This is a one-to-many comparison, as opposed to biometric verification, which is one-to-one.

C

CE Marking

Art. 48

The marking that must be affixed to high-risk AI systems to indicate conformity with the AI Act. The CE marking must be visible, legible, and indelible, and is affixed before the system is placed on the market.

Conformity Assessment

Art. 43

The process of verifying whether a high-risk AI system meets the requirements of the AI Act before it can be placed on the market. Most systems use self-assessment based on internal control (Annex VI), while biometric identification systems may require third-party assessment by a notified body where harmonised standards have not been fully applied.

Related Tool: Conformity Assessment Tool
D

Deployer

Art. 3(4)

A natural or legal person, public authority, agency, or other body using an AI system under its authority, except where the AI system is used in the course of a personal non-professional activity.

Related Tool: Role Classifier
E

Emotion Recognition System

Art. 3(39)

An AI system for the purpose of identifying or inferring emotions or intentions of natural persons on the basis of their biometric data. Use in workplaces and educational institutions is prohibited under Article 5, except for medical or safety reasons.

EU Database

Art. 71

A public EU database where high-risk AI systems must be registered before being placed on the market or put into service. Managed by the European Commission and accessible to the general public free of charge.

EU Declaration of Conformity

Art. 47

A document drawn up by the provider for each high-risk AI system, stating that the system meets the requirements of the AI Act. Must be kept up to date and made available to national competent authorities for 10 years after the system is placed on the market.

F

Fundamental Rights Impact Assessment (FRIA)

Art. 27

An assessment that deployers of high-risk AI systems in certain sectors must conduct before putting the system into use. Required for public bodies and private entities providing essential services like healthcare, education, and insurance.

Related Tool: FRIA Generator
G

General-Purpose AI (GPAI)

Art. 3(63)

An AI model that is trained with a large amount of data using self-supervision at scale, displays significant generality, can competently perform a wide range of distinct tasks, and can be integrated into downstream AI systems or applications. Subject to specific transparency and documentation obligations.

H

Harmonised Standards

Art. 40

Technical standards adopted by European standardisation bodies (CEN, CENELEC, ETSI) at the request of the Commission. AI systems that comply with harmonised standards enjoy a presumption of conformity with the AI Act requirements those standards cover.

High-Risk AI System

Art. 6

An AI system that falls under one of the use cases listed in Annex III or is a safety component of a product covered by existing EU harmonisation legislation listed in Annex I. High-risk systems face the most comprehensive obligations under the AI Act.

Related Tool: AI System Classifier
I

Intended Purpose

Art. 3(12)

The use for which an AI system is intended by the provider, including the specific context and conditions of use as specified in the information supplied by the provider in the instructions for use, promotional materials, and technical documentation.

L

Limited Risk

Art. 50

AI systems subject to transparency obligations under Article 50. Users must be informed they are interacting with AI. Includes chatbots, emotion recognition systems, and AI-generated content (deepfakes).

M

Market Surveillance Authority

Art. 70

The national authority responsible for carrying out market surveillance of AI systems. Each Member State must designate at least one authority to supervise the application of the AI Act in their territory.

Minimal Risk

Recital 28

AI systems that do not fall into prohibited, high-risk, or limited risk categories. These systems face no specific obligations under the AI Act, though voluntary codes of conduct are encouraged.

N

Notified Body

Art. 3(22)

An independent organization designated by an EU Member State to carry out third-party conformity assessments for certain high-risk AI systems, particularly those involving biometric identification.

P

Placing on the Market

Art. 3(9)

The first making available of an AI system or a general-purpose AI model on the Union market. This triggers key compliance obligations for providers.

Post-Market Monitoring

Art. 72

A system that providers of high-risk AI systems must establish and document to actively and systematically collect, analyse, and evaluate data on the performance of their AI systems throughout their lifetime.

Provider

Art. 3(3)

A natural or legal person, public authority, agency, or other body that develops an AI system or a general-purpose AI model, or that has an AI system or model developed, and places it on the market or puts it into service under its own name or trademark, whether for payment or free of charge.

Related Tool: Role Classifier

Putting into Service

Art. 3(11)

The supply of an AI system for first use directly to the deployer or for own use in the Union for its intended purpose. Distinct from placing on the market, as it refers to the actual deployment rather than making available.

R

Real-Time Remote Biometric Identification

Art. 3(42)

A biometric identification system where the capturing of biometric data, comparison, and identification occur without significant delay. Its use for law enforcement purposes in publicly accessible spaces is prohibited, with narrow exceptions requiring prior authorisation by a judicial or independent administrative authority.

Reasonably Foreseeable Misuse

Art. 3(13)

The use of an AI system in a way that is not in accordance with its intended purpose, but which may result from reasonably foreseeable human behaviour or interaction with other systems. Providers must account for this in their risk management.

Regulatory Sandbox

Art. 57

A controlled framework set up by a competent authority to allow the development, testing, and validation of innovative AI systems for a limited time under regulatory oversight. At least one sandbox must be operational in each Member State by August 2, 2026.

Risk Management System

Art. 9

A continuous iterative process planned and run throughout the entire lifecycle of a high-risk AI system. It involves identifying risks, estimating and evaluating them, adopting risk management measures, and testing those measures.

Related Tool: Risk Management Tool
S

Serious Incident

Art. 73

An incident or malfunction of an AI system that directly or indirectly leads to death, serious harm to health or fundamental rights, serious property damage, or serious environmental damage. Must be reported to market surveillance authorities.

Systemic Risk

Art. 51

A risk that is specific to the high-impact capabilities of general-purpose AI models. A GPAI model is classified as having systemic risk if it has high-impact capabilities or is designated as such by the Commission based on specific criteria.

T

Technical Documentation

Art. 11

Comprehensive documentation that providers of high-risk AI systems must prepare before placing their system on the market. Must be kept up to date and made available to national competent authorities upon request.

Related Tool: Documentation Generator
U

Unacceptable Risk

Art. 5

AI practices that are outright prohibited because they pose a clear threat to people's safety, livelihoods, and fundamental rights. Includes social scoring, manipulative AI, real-time biometric identification in public spaces (with narrow exceptions), and emotion recognition in workplaces/education.