
High-Risk AI Systems: Requirements, Classification and Compliance

Everything you need to know about high-risk AI systems under the EU AI Act — from the Annex III categories to concrete requirements and deadlines.

Last updated: March 14, 2026

What is a high-risk AI system?

Under the EU AI Act (Regulation 2024/1689), AI systems are classified into four risk levels: unacceptable risk, high risk, limited risk, and minimal risk. High-risk AI systems are the category that carries the most extensive requirements for providers and deployers.

An AI system is classified as high-risk if it meets either of two criteria under Article 6:

The first criterion covers AI systems that are safety components of products already regulated by EU harmonisation legislation (Annex I), such as medical devices, machinery, or toys, where that legislation requires third-party conformity assessment. The second criterion, the more relevant one for most businesses, covers AI systems that fall within any of the eight areas listed in Annex III.
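
As a rough mental model, and emphatically not legal advice, the Article 6 test can be sketched as a short decision procedure in Python. The area labels below are illustrative, not terms from the Act:

    # Illustrative sketch of the Article 6 classification logic.
    # A mental model only, not a substitute for legal analysis.

    ANNEX_III_AREAS = {
        "biometrics", "critical_infrastructure", "education",
        "employment", "essential_services", "law_enforcement",
        "migration_border_control", "justice_democracy",
    }

    def is_high_risk(annex_i_safety_component: bool,
                     third_party_assessment_required: bool,
                     annex_iii_area: str | None) -> bool:
        """Approximate the two Article 6 criteria."""
        # Criterion 1 (Article 6(1)): safety component of an Annex I
        # product that requires third-party conformity assessment.
        if annex_i_safety_component and third_party_assessment_required:
            return True
        # Criterion 2 (Article 6(2)): the system falls within an Annex III
        # area. Article 6(3) carves out narrow exceptions not modelled here.
        return annex_iii_area in ANNEX_III_AREAS

    # Example: an AI-based CV screening tool falls under "employment".
    print(is_high_risk(False, False, "employment"))  # True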

Important: The classification determines which obligations apply. A high-risk system requires risk management, data governance, transparency, human oversight, accuracy, and cybersecurity — requirements that must be in place before the system is placed on the market.

To determine whether your AI system is high-risk, you can use our free classification tool that guides you through Article 6 and Annex III step by step.

The 8 Annex III Categories

Annex III of the EU AI Act lists eight areas where AI systems are automatically classified as high-risk. The EU selected these areas because AI use within them can affect people’s fundamental rights, safety, and access to essential services.

1. Biometrics

Remote biometric identification systems, biometric categorisation of natural persons based on sensitive attributes, and emotion recognition systems. Examples: facial recognition in public spaces, systems inferring a person's emotional state from voice or facial expressions. Note that emotion recognition in workplaces and educational institutions is prohibited outright under Article 5.

2. Critical infrastructure

AI as safety components in the management and operation of critical digital infrastructure, road traffic, and the supply of water, gas, heating, and electricity. Examples: AI-controlled traffic management, predictive maintenance of power supply systems.

3. Education and vocational training

Systems that determine access or admission to education, evaluate learning outcomes, assess the appropriate level of education for a person, or monitor and detect prohibited behaviour during tests. Examples: AI-based grading, automated university admissions, adaptive learning platforms that steer students' educational paths.

4. Employment and workforce management

AI for recruitment, screening applications, performance evaluation, and decisions on promotion or termination. Examples: CV screening with AI, automated interview assessment, systems that rank employees.

5. Access to essential public and private services

Systems that assess eligibility for public assistance benefits, evaluate creditworthiness, set risk levels and prices in life and health insurance, or triage emergency calls. Examples: AI-based credit scoring, automated decisions on social benefits, risk classification for life and health insurance.

6. Law enforcement

AI systems for risk assessment of individuals, polygraph tests, evaluation of evidence reliability, and predictive policing. Examples: profiling in criminal investigations, AI-assisted evidence analysis.

7. Migration, asylum and border control

Systems for risk assessment at border crossings, processing asylum applications, and document verification. Examples: automated visa application review, AI-based identity checks at EU external borders.

8. Administration of justice and democratic processes

AI systems assisting judicial authorities in researching and interpreting facts and the law, including systems used in alternative dispute resolution, and systems that may influence the outcome of elections or voting behaviour. Examples: AI-assisted legal research supporting court decisions, tools used in online dispute resolution.

If your AI system falls within any of these eight categories, it is presumed high-risk and therefore subject to all requirements in Chapter III, Section 2 of the EU AI Act. The main exception is Article 6(3): a system that performs only a narrow procedural or preparatory task and does not pose a significant risk to health, safety, or fundamental rights can escape the classification, but this assessment must be documented, and profiling of natural persons always remains high-risk. Not sure? Classify your system here.

Provider Requirements for High-Risk AI Systems

Providers are organisations that develop an AI system, or have one developed, and place it on the EU market or put it into service under their own name or trademark. For high-risk AI systems, Chapter III, Section 2 (Articles 8 to 15) sets out the requirements that must be fulfilled before the system is placed on the market or put into service, including automatic record-keeping and logging capabilities (Article 12). The six most substantial requirement areas are:

1. Risk management system (Article 9)

A documented risk management system shall be established, implemented, and maintained throughout the AI system's lifecycle. It shall identify and analyse the known and reasonably foreseeable risks to health, safety, and fundamental rights, and estimate and evaluate the risks that may emerge when the system is used in accordance with its intended purpose or under conditions of reasonably foreseeable misuse. Appropriate and targeted risk management measures shall then be adopted; a hypothetical risk-register sketch is shown below.
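
As an illustration of that cycle, a risk register recording severity, likelihood, and mitigations is one common starting point. The structure below is a hypothetical sketch, not a format prescribed by the Act:

    # A hypothetical risk-register entry illustrating the Article 9 cycle:
    # identify a risk, estimate it, and record the mitigations adopted.
    from dataclasses import dataclass, field

    @dataclass
    class Risk:
        description: str
        severity: int      # 1 (negligible) .. 5 (critical)
        likelihood: int    # 1 (rare) .. 5 (frequent)
        mitigations: list[str] = field(default_factory=list)

        @property
        def score(self) -> int:
            return self.severity * self.likelihood

    register = [
        Risk("Biased outcomes for under-represented groups", 4, 3,
             ["Rebalance training data", "Add fairness tests to CI"]),
        Risk("Misuse outside the intended purpose", 3, 2,
             ["Restrict deployment contexts in the instructions for use"]),
    ]
    # Review the highest-scoring risks first.
    for r in sorted(register, key=lambda r: r.score, reverse=True):
        print(r.score, r.description)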

2. Data and data governance (Article 10)

Training, validation, and testing data must meet quality criteria. Datasets shall be relevant, representative, as free of errors as possible, and complete with regard to the intended purpose. Special consideration shall be given to potential biases that may lead to discrimination.
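
One concrete, if simplified, check in this spirit is comparing outcome rates across groups in the training data. The sketch below assumes a pandas DataFrame with hypothetical "group" and "label" columns; real bias analysis goes much further:

    # A minimal, hypothetical Article 10-style data check: compare
    # positive-label rates across a protected attribute in training data.
    import pandas as pd

    def positive_rate_by_group(df: pd.DataFrame,
                               group_col: str = "group",
                               label_col: str = "label") -> pd.Series:
        """Share of positive labels per group: a crude first bias signal."""
        return df.groupby(group_col)[label_col].mean()

    df = pd.DataFrame({
        "group": ["A", "A", "B", "B", "B"],
        "label": [1, 0, 0, 0, 1],
    })
    rates = positive_rate_by_group(df)
    print(rates)                      # per-group positive rates
    print(rates.max() - rates.min())  # demographic-parity gap as one metric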

3. Technical documentation (Article 11)

Before the system is placed on the market, technical documentation shall be drawn up in accordance with Annex IV. The documentation shall demonstrate that the system complies with the requirements of Chapter III, Section 2, and provide supervisory authorities with sufficient information to assess the system’s conformity.

4. Transparency and information obligations (Article 13)

High-risk AI systems shall be designed so that their operation is sufficiently transparent for deployers to interpret the system’s output appropriately. Instructions for use shall include clear information about the system’s capabilities, limitations, risk levels, human oversight, and maintenance needs.

5. Human oversight (Article 14)

The AI system shall enable effective human oversight throughout the period of use. The persons exercising oversight shall have the authority, competence, and tools to understand the system’s functions, correctly interpret outputs, and be able to intervene in or override the system’s decisions when necessary.
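
A common engineering pattern for this, sketched below under assumed interfaces, is to auto-accept only high-confidence outputs and route everything else to a human reviewer who can override the system:

    # A hypothetical human-in-the-loop pattern for Article 14-style
    # oversight: low-confidence outputs are escalated to a human.
    from dataclasses import dataclass

    @dataclass
    class Decision:
        value: str
        confidence: float
        decided_by: str  # "model" or "human"

    def decide(model_output: str, confidence: float,
               human_review, threshold: float = 0.8) -> Decision:
        """Auto-accept only above the threshold; otherwise ask a human."""
        if confidence >= threshold:
            return Decision(model_output, confidence, "model")
        return Decision(human_review(model_output), confidence, "human")

    # Example: an uncertain "reject" is routed to manual review.
    result = decide("reject", 0.55, human_review=lambda v: "manual review")
    print(result)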

6. Accuracy, robustness and cybersecurity (Article 15)

High-risk AI systems shall achieve an appropriate level of accuracy, robustness, and cybersecurity throughout their lifecycle. Accuracy levels shall be documented in the instructions for use. The system shall be resilient to errors, inconsistencies, and attempts at manipulation by third parties.
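
One simple robustness probe, shown below as a sketch with an assumed model.predict interface, is to check that small input perturbations do not flip the system's decision:

    # A tiny, hypothetical robustness probe. `model.predict` is an
    # assumed interface, not a specific library API.
    import random

    def perturbation_stability(model, x: list[float],
                               trials: int = 100, eps: float = 0.01) -> float:
        """Fraction of small random perturbations keeping the prediction."""
        base = model.predict(x)
        stable = 0
        for _ in range(trials):
            noisy = [v + random.uniform(-eps, eps) for v in x]
            stable += model.predict(noisy) == base
        return stable / trials

    class ThresholdModel:
        # Toy stand-in for a real model.
        def predict(self, x):
            return int(sum(x) > 1.0)

    print(perturbation_stability(ThresholdModel(), [0.4, 0.7]))  # 1.0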

Penalties: Non-compliance can result in fines of up to €15 million or 3% of global annual turnover, whichever is higher. For SMEs and startups, the cap is whichever of the two amounts is lower, but compliance still requires significant work.
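
The cap itself is simple arithmetic; a worked example (illustrative figures only):

    # Worked example of the fine cap for high-risk violations:
    # up to EUR 15 million or 3% of worldwide annual turnover, whichever
    # is higher (for SMEs: whichever is lower).

    def fine_cap(turnover_eur: float, is_sme: bool = False) -> float:
        fixed, pct = 15_000_000, 0.03 * turnover_eur
        return min(fixed, pct) if is_sme else max(fixed, pct)

    print(fine_cap(2_000_000_000))            # 60,000,000 (3% exceeds 15M)
    print(fine_cap(10_000_000, is_sme=True))  # 300,000 (lower of the two)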

Deployer Obligations for High-Risk AI Systems

Organisations that deploy high-risk AI systems in their operations have their own obligations under Article 26. These apply regardless of whether the system was purchased or licensed from an external provider.

Oversight

Ensure the system is used in accordance with the instructions for use, with appropriate human oversight and technical competence.

FRIA

Carry out a Fundamental Rights Impact Assessment (FRIA) in accordance with Article 27 before the system is put into service. This applies to public bodies, private entities providing public services, and deployers using AI for credit scoring or life and health insurance pricing.

Incident reporting

Report serious incidents immediately, first to the AI system provider and then to the relevant market surveillance authority (Article 26(5)).

Log management

Retain logs generated by the AI system for a period appropriate to its intended purpose, and for at least six months unless other applicable law requires more. A minimal retention sketch follows below.
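
The sketch assumes file-based logs and approximates six months as 180 days; the directory path is hypothetical:

    # A minimal log-retention sketch for the Article 26 duty to keep
    # AI-system logs for at least six months.
    import time
    from pathlib import Path

    RETENTION_DAYS = 180  # >= six months; extend if other law requires more

    def purge_expired_logs(log_dir: Path) -> None:
        """Delete log files older than the retention window."""
        cutoff = time.time() - RETENTION_DAYS * 24 * 3600
        for f in log_dir.glob("*.log"):
            if f.stat().st_mtime < cutoff:
                f.unlink()

    purge_expired_logs(Path("/var/log/ai-system"))  # hypothetical location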

Read more about FRIA and Article 27 — including practical steps and templates.

Technical Documentation under Annex IV

All high-risk AI systems require technical documentation in accordance with Annex IV of the EU AI Act. The documentation consists of nine sections that together provide a complete picture of the system's design, development, performance, and risk management (a short progress-tracking sketch follows the list):

  1. General description of the AI system
  2. Detailed description of the system's elements and development process
  3. Information on the monitoring, functioning and control of the system
  4. Description of the appropriateness of the performance metrics
  5. Description of the risk management system (Article 9)
  6. Description of relevant changes made throughout the system's lifecycle
  7. List of harmonised standards applied in full or in part
  8. Copy of the EU declaration of conformity
  9. Description of the post-market monitoring plan (Article 72)
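
If it helps to organise this work, here is a hypothetical progress tracker over the nine sections; the titles are paraphrased and the workflow is entirely your own choice:

    # A hypothetical progress tracker for the nine Annex IV sections.
    ANNEX_IV_SECTIONS = [
        "General description",
        "Elements and development process",
        "Monitoring, functioning and control",
        "Appropriateness of performance metrics",
        "Risk management system (Art. 9)",
        "Changes through the lifecycle",
        "Harmonised standards applied",
        "EU declaration of conformity",
        "Post-market monitoring plan (Art. 72)",
    ]

    status = {section: False for section in ANNEX_IV_SECTIONS}
    status["General description"] = True  # e.g. mark a section as drafted

    missing = [s for s, done in status.items() if not done]
    print(f"{len(missing)} of {len(ANNEX_IV_SECTIONS)} sections outstanding")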

Read our detailed Annex IV guide or generate the documentation directly with our AI-powered tool.

Timeline: When Do the Requirements Apply?

The EU AI Act entered into force on August 1, 2024, but requirements are phased in gradually. Here are the key dates for high-risk AI systems:

Feb 2, 2025: Prohibition of AI systems with unacceptable risk takes effect.
Aug 2, 2025: Requirements for general-purpose AI models (GPAI) take effect.
Aug 2, 2026: All requirements for high-risk AI systems under Annex III take effect. Deadline for technical documentation, risk management, FRIA, and EU database registration.
Aug 2, 2027: Requirements for high-risk AI systems that are safety components (Annex I) take effect.

Less than 5 months remaining. By August 2, 2026, all high-risk AI systems falling under Annex III must be compliant. Start with classification and documentation now.

Classify Your AI System

Not sure if your AI system is high-risk? Our free classification tool guides you through Article 6, Annex I, and Annex III to determine your risk level — in under 5 minutes.

Here’s how it works:

  1. Answer questions about your AI system, its use case, and impact.
  2. Get a clear classification with reasoning and references to relevant articles.
  3. Proceed with documentation, FRIA, or other actions based on the result.

Ready to get started?

Classify your AI system for free and find out if it’s high-risk. It only takes 5 minutes — no registration required.


Frequently Asked Questions About High-Risk AI Systems

How do I know if my AI system is high-risk?

Your system is high-risk if it is a safety component in a product covered by EU harmonisation legislation (Annex I) that requires third-party conformity assessment, or if it falls within any of the eight categories in Annex III, subject to the narrow exceptions in Article 6(3). Our classification tool can help you determine this.

Do the requirements also apply to small businesses and startups?

Yes, the requirements apply to all organisations that develop or use high-risk AI systems on the EU market, regardless of size. However, the EU has introduced proportionality measures: the fine cap for SMEs is the lower of the fixed amount and the turnover percentage, SMEs may use simplified technical documentation forms, and authorities must take a company's size and economic viability into account when setting penalties.

What is the difference between a provider and a deployer?

A provider develops the AI system, or has it developed, and places it on the market under its own name. A deployer uses the system under its own authority in the course of its professional activities. Both have obligations, but the provider carries the heaviest responsibility for technical documentation, risk management, conformity assessment, and CE marking.

Do I need to conduct a FRIA?

Organisations that are public bodies or private entities providing public services, as well as deployers using high-risk AI systems for creditworthiness assessment or for risk and pricing in life and health insurance, must conduct a FRIA under Article 27 before the system is put into service.

What documentation is required for high-risk AI systems?

Providers must draw up technical documentation in accordance with Annex IV, covering nine sections, from the general system description to risk management and post-market monitoring. Additionally, registration in the public EU database, instructions for use, and an EU declaration of conformity are required.

What happens if I’m not compliant before the deadline?

From August 2, 2026, supervisory authorities can issue fines for non-compliance. For high-risk AI systems, fines can reach up to €15 million or 3% of global annual turnover. Beyond fines, authorities can require corrective measures, including withdrawing the system from the market or taking it out of service.