EU AI Act Glossary
A comprehensive glossary of key terms in the EU AI Act. Look up definitions, requirements, and article references organized alphabetically.
Last updated: April 7, 2026
A
AI System
A machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers from the input it receives how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.
Article 3(1)
EU AI Act
The European Union's artificial intelligence regulation (Regulation (EU) 2024/1689), which establishes a harmonized framework for the development, placing on the market, and use of AI in the EU. It entered into force on August 1, 2024, with phased compliance deadlines: prohibitions apply from February 2, 2025, general-purpose AI obligations from August 2, 2025, and most remaining provisions from August 2, 2026.
Full regulation text
Annex I
Annex I of the EU AI Act lists the Union harmonization legislation (product safety law covering, for example, machinery, toys, medical devices, and vehicles). An AI system that is a safety component of a product covered by Annex I legislation, or is itself such a product, and is subject to third-party conformity assessment, is classified as high-risk.
Article 6(1)
Annex III
Annex III specifies the 8 categories of high-risk AI systems under the EU AI Act, including recruitment, education, biometrics, infrastructure, and law enforcement applications.
Article 6(2), Article 7
Annex IV
Annex IV specifies the 9 mandatory sections of technical documentation that providers of high-risk AI systems must prepare, including system purpose, training data, testing results, risk assessments, and post-market monitoring plans.
Article 11, Annex IV
B
Biometric Identification
A process of identifying a natural person based on biometric data (e.g., fingerprints, facial geometry, iris patterns) through comparison with a database of known individuals.
Article 3(35), Article 5(1)(h)
Biometric Categorization
Using biometric data to infer or deduce sensitive attributes such as race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation. This practice is prohibited under the EU AI Act.
Article 3(40), Article 5(1)(g)
C
CE Marking
A marking indicating conformity with applicable EU harmonization legislation. Providers of high-risk AI systems must affix the CE marking to the system (or, where that is not possible, to its packaging or accompanying documentation) and draw up an EU declaration of conformity.
Article 47, Article 48
Conformity Assessment
The process of verifying that a high-risk AI system complies with EU AI Act requirements. Depending on the system, assessment is carried out either by the provider itself under internal control (Annex VI) or by a notified body (Annex VII).
Article 43
Common Specification
Technical specifications adopted by the European Commission through implementing acts where harmonized standards are unavailable or insufficient. Compliance with common specifications creates a presumption of conformity with the requirements they cover.
Article 41
D
Deployer
Any natural or legal person, public authority, agency, or other body using an AI system under its authority, except where the system is used in the course of a personal, non-professional activity.
Article 3(4)
DPIA (Data Protection Impact Assessment)
An assessment required under the GDPR before any processing that is likely to result in a high risk to the rights and freedoms of natural persons, which deployments of AI systems often are. The EU AI Act requires deployers of high-risk AI systems to use the information supplied by the provider when carrying out a DPIA.
GDPR Article 35, EU AI Act Article 26(9)
F
FRIA (Fundamental Rights Impact Assessment)
An assessment of the impact of a high-risk AI system on fundamental rights (privacy, non-discrimination, freedom of expression, etc.). Required before first use for deployers that are bodies governed by public law or private entities providing public services, and for deployers of certain other high-risk systems such as creditworthiness assessment and life and health insurance risk pricing.
Article 27
Foundation Model
A large-scale AI model trained on broad data and adaptable to a wide range of downstream tasks. The term appeared in earlier drafts of the regulation; the enacted text instead regulates such models as general-purpose AI models.
Article 3(63)
G
General-Purpose AI (GPAI)
An AI system based on a general-purpose AI model which has the capability to serve a variety of purposes, both directly and when integrated into other AI systems (e.g., systems built on large language models).
Article 3(66)
General-Purpose AI Model
A model, typically trained on large amounts of data using self-supervision at scale, that displays significant generality and can competently perform a wide range of distinct tasks. Providers are subject to transparency and documentation obligations, with additional duties for models posing systemic risk.
Article 3(63), Articles 53-55
H
Harmonized Standard
A technical standard developed by the European standardization bodies (CEN, CENELEC, ETSI) at the Commission's request to specify how EU AI Act requirements can be met. Compliance with harmonized standards whose references are published in the Official Journal creates a presumption of conformity.
Article 40
High-Risk AI System
An AI system that poses a significant risk to health, safety, or fundamental rights, as determined under Article 6: either an AI system that is a safety component of (or is itself) a product covered by Annex I legislation, or one used in an area listed in Annex III. High-risk systems must meet strict requirements including risk management, technical documentation (Annex IV), human oversight, and conformity assessment.
Article 6, Annex I, Annex III
I
Intended Purpose
The use for which an AI system is designed and marketed by the provider, as described in the instructions for use. The intended purpose determines the system's risk classification and which requirements apply; a deployer that changes a system's intended purpose or markets it under its own name may take on provider obligations (Article 25).
Article 3(12)
L
Limited Risk
AI systems that fall outside the prohibited and high-risk categories but interact with people or generate content, such as chatbots and deepfakes. These systems must meet the transparency obligations of Article 50, for example disclosing that a user is interacting with an AI system or that content is artificially generated.
Article 50
M
Minimal Risk
AI systems that present no significant risk to health, safety, or fundamental rights (e.g., spam filters, AI in video games). These systems face no new obligations under the EU AI Act, though providers may voluntarily adhere to codes of conduct.
Article 95
Market Surveillance Authority
National authorities responsible for monitoring AI systems on the market, enforcing compliance, investigating incidents, and taking corrective measures. In Sweden, this includes the Swedish Board for Accreditation and Conformity Assessment (SWEDAC).
Article 3(26), Article 74
N
Notified Body
An independent third-party conformity assessment body designated by an EU member state to assess high-risk AI systems before they are placed on the market. Notified bodies examine the provider's technical documentation and quality management system against the requirements for high-risk AI.
Article 3(22), Article 31
P
Post-Market Monitoring
Ongoing collection and analysis of data on how an AI system performs after it has been placed on the market, including real-world performance, incidents, and emerging risks. Providers must operate a post-market monitoring system based on a documented plan and report serious incidents to market surveillance authorities.
Article 72, Article 73
Provider
A natural or legal person, public authority, agency, or other body that develops an AI system or general-purpose AI model (or has one developed) and places it on the market or puts it into service under its own name or trademark, whether for payment or free of charge.
Article 3(3)
Prohibited AI Practice
AI practices explicitly forbidden under the EU AI Act: subliminal or manipulative techniques, exploitation of vulnerabilities, social scoring, criminal risk prediction based solely on profiling, untargeted scraping of facial images to build facial recognition databases, emotion recognition in the workplace and education, biometric categorization inferring sensitive attributes, and real-time remote biometric identification in publicly accessible spaces for law enforcement.
Article 5
R
Real-Time Remote Biometric Identification
Identifying a natural person in real time by comparing biometric data captured remotely (e.g., facial recognition at a distance) against a database. Use for law enforcement purposes in publicly accessible spaces is prohibited, with narrow exceptions (such as targeted searches for victims of serious crimes) subject to prior authorization.
Article 3(42), Article 5(1)(h)
Risk Management System
A systematic process that high-risk AI system providers must implement to identify, analyze, evaluate, and mitigate risks throughout the system lifecycle, including before and after deployment.
Article 9
Regulatory Sandbox
A controlled environment established by competent authorities in which providers can develop, train, validate, and test innovative AI systems under regulatory supervision for a limited time before placing them on the market.
Article 57
S
Safety Component
A component of a product or AI system that fulfills a safety function, or whose failure or malfunctioning endangers the health and safety of persons or property. An AI system that is a safety component of a product covered by Annex I legislation and subject to third-party conformity assessment is classified as high-risk.
Article 3(14), Article 6(1)
Social Scoring
Evaluating or classifying natural persons over time based on their social behavior or personal characteristics, where the resulting score leads to detrimental or unfavorable treatment in contexts unrelated to the data's origin, or treatment that is unjustified or disproportionate. Prohibited for public and private actors alike.
Article 5(1)(c)
Substantial Modification
A change to an AI system after it has been placed on the market or put into service that was not foreseen in the provider's initial conformity assessment and that affects the system's compliance or modifies its intended purpose. Substantially modified high-risk systems must undergo a new conformity assessment.
Article 3(23), Article 43(4)
T
Technical Documentation (Annex IV)
Comprehensive documentation that high-risk AI system providers must prepare covering system purpose, development process, training data, testing results, known limitations, risk assessments, data governance, human oversight, and post-market monitoring.
Article 11, Annex IV
Transparency Obligations
Requirements for AI system providers and deployers to disclose information about system capabilities, limitations, and operation. High-risk systems must be transparent enough for deployers to interpret and use their output (Article 13), while certain other systems, such as chatbots, emotion recognition systems, and generators of synthetic content, must disclose their artificial nature (Article 50).
Article 13, Article 50
U
Unacceptable Risk
AI systems that pose an unacceptable risk to fundamental rights or safety are prohibited from being placed on the market, put into service, or used in the EU. Examples include subliminal manipulation systems, systems exploiting the vulnerabilities of specific groups, and social scoring.
Article 5
Frequently Asked Questions
What is an AI system under the EU AI Act?
An AI system is a machine-based system that operates with some degree of autonomy and infers from the input it receives how to generate outputs such as predictions, content, recommendations, or decisions affecting physical or virtual environments. This covers machine learning models and neural networks as well as logic- and knowledge-based approaches and statistical methods.
What is the difference between a provider and a deployer?
Providers develop AI systems (or have them developed) and place them on the market or put them into service under their own name. Deployers use AI systems under their authority in a professional context. Some organizations are both, and a deployer that rebrands or substantially modifies a high-risk system can itself become a provider. High-risk systems impose requirements on both roles.
What are high-risk AI systems?
High-risk AI systems are those with significant potential to harm health, safety, or fundamental rights. The EU AI Act identifies 8 areas in Annex III: biometrics, critical infrastructure, education and vocational training, employment, essential private and public services, law enforcement, migration/asylum/border control, and administration of justice and democratic processes. High-risk systems must undergo conformity assessment, have technical documentation (Annex IV), implement risk management, and enable human oversight.
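The Article 6 classification logic can be read as a decision procedure. Below is a minimal illustrative sketch in Python; the function, parameter names, and yes/no simplifications are our own shorthand for the Act's conditions, not an official tool, and the Article 6(3) derogation is only noted in a comment.

from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "unacceptable risk (Article 5)"
    HIGH = "high-risk (Article 6)"
    LIMITED = "transparency obligations (Article 50)"
    MINIMAL = "minimal risk"

def classify(prohibited_practice: bool,
             annex1_safety_component: bool,
             annex3_use_case: bool,
             article50_duty: bool) -> RiskTier:
    # Prohibitions come first: Article 5 practices may not be placed
    # on the market at all.
    if prohibited_practice:
        return RiskTier.PROHIBITED
    # Article 6(1): safety component of (or itself) an Annex I product.
    if annex1_safety_component:
        return RiskTier.HIGH
    # Article 6(2): one of the 8 Annex III areas, subject to the
    # Article 6(3) derogation for systems posing no significant risk.
    if annex3_use_case:
        return RiskTier.HIGH
    # Article 50: e.g. chatbots, emotion recognition, synthetic content.
    if article50_duty:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

# Example: a CV-screening tool falls under Annex III point 4 (employment).
print(classify(False, False, True, False))  # RiskTier.HIGH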
What is Annex IV technical documentation?
Annex IV specifies 9 mandatory sections of technical documentation for high-risk AI systems: (1) a general description of the system and its intended purpose, (2) its elements and development process, including training data, (3) monitoring, functioning, and control of the system, (4) the performance metrics used and their appropriateness, (5) the risk management system under Article 9, (6) relevant changes made over the lifecycle, (7) the harmonized standards or other solutions applied, (8) a copy of the EU declaration of conformity, and (9) the post-market monitoring plan.
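For providers tracking these obligations, the nine sections can be treated as a simple checklist. A minimal sketch in Python, with the section labels paraphrased in our own shorthand rather than the Annex's official wording:

from dataclasses import dataclass, field

# The nine Annex IV sections, paraphrased (not the official text).
ANNEX_IV_SECTIONS = [
    "1. General description and intended purpose",
    "2. Elements and development process (incl. training data)",
    "3. Monitoring, functioning and control",
    "4. Performance metrics and their appropriateness",
    "5. Risk management system (Article 9)",
    "6. Relevant changes over the lifecycle",
    "7. Harmonized standards or other solutions applied",
    "8. Copy of the EU declaration of conformity",
    "9. Post-market monitoring plan",
]

@dataclass
class TechnicalDocumentation:
    """Tracks completion of each Annex IV section for one AI system."""
    system_name: str
    completed: set[str] = field(default_factory=set)

    def missing(self) -> list[str]:
        # Sections not yet marked complete, in Annex IV order.
        return [s for s in ANNEX_IV_SECTIONS if s not in self.completed]

doc = TechnicalDocumentation("cv-screening-tool")
doc.completed.add(ANNEX_IV_SECTIONS[0])
print(f"{len(doc.missing())} of {len(ANNEX_IV_SECTIONS)} sections outstanding")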
What is a FRIA and when is it required?
A FRIA (Fundamental Rights Impact Assessment) evaluates a high-risk AI system's potential impact on fundamental rights such as privacy, non-discrimination, freedom of expression, and human dignity. It is required of deployers that are bodies governed by public law or private entities providing public services, and of deployers of certain other high-risk systems (creditworthiness assessment and life and health insurance risk pricing). The FRIA must be conducted before the system is first used and must inform risk mitigation strategies.