EU AI Act Timeline: All Deadlines 2024–2027

The EU AI Act phases in its requirements gradually between 2024 and 2027. Here you’ll find a visual overview of all key dates — focusing on what applies to businesses.

Last updated: March 14, 2026

127 days until the high-risk deadline

Timeline: EU AI Act 2024–2027

The EU AI Act was adopted on June 13, 2024 and entered into force on August 1, 2024. Requirements are phased in gradually according to the following timeline:

August 1, 2024

AI Act enters into force

Regulation (EU) 2024/1689, published in the EU Official Journal on July 12, 2024, enters into force. No compliance requirements apply yet; the clocks for the phased deadlines start running.

Completed
February 2, 2025

Prohibited AI practices

Article 5 takes effect. AI practices posing unacceptable risk are banned: social scoring, manipulative or exploitative systems, real-time remote biometric identification in publicly accessible spaces (with narrow exceptions), and emotion recognition in workplaces and education.

Completed
August 2, 2025

GPAI rules apply

Requirements for general-purpose AI (GPAI) models take effect. GPAI model providers must maintain technical documentation, a copyright policy, and transparency information for downstream providers.

Completed
August 2, 2026

High-risk AI — main deadline

All requirements for high-risk AI systems under Annex III take effect. Providers and deployers must comply with Chapter III requirements — including technical documentation (Annex IV), risk management, a fundamental rights impact assessment (FRIA), human oversight, and EU database registration.

127 days remaining
August 2, 2027

Remaining obligations

Requirements for high-risk AI systems that are safety components in products under Annex I (e.g., medical devices, machinery) take effect. These systems follow existing sectoral legislation in parallel with the AI Act.

August 2, 2026: What must be in place?

August 2, 2026 is the critical deadline for the majority of businesses that develop or use AI systems. From this date, all high-risk AI systems falling under Annex III must comply with all requirements.

Specifically, this means the following must be completed and documented:

  • Risk classification: The AI system must be classified under Article 6. The organisation must know whether the system is high-risk and be able to justify the assessment.
  • Technical documentation (Annex IV): Complete documentation with all nine sections — system description, risk analysis, data quality, performance, oversight, and more.
  • FRIA (Article 27): Deployers who are covered must have conducted a fundamental rights impact assessment before the system is put into service.
  • Risk management system: A documented risk management system under Article 9 must be in place and continuously updated.
  • EU database registration: High-risk AI systems must be registered in the EU-wide public database before being placed on the market.

Fines for non-compliance: From August 2, 2026, supervisory authorities can issue fines of up to €15 million or 3% of global annual turnover, whichever is higher, for failure to meet high-risk AI system requirements.
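The fine ceiling is a one-line rule: the higher of a fixed amount and a turnover percentage. A minimal sketch, assuming the Article 99(4) cap for high-risk violations and the SME carve-out in Article 99(6), under which the lower of the two amounts applies (the function name and example turnovers are illustrative):

```python
def high_risk_fine_cap_eur(annual_turnover_eur: float, is_sme: bool = False) -> float:
    """Upper bound on fines for high-risk non-compliance.

    Article 99(4): up to EUR 15 million or 3% of global annual turnover,
    whichever is higher. For SMEs and startups, Article 99(6) applies the
    lower of the two amounts instead.
    """
    fixed = 15_000_000
    pct = annual_turnover_eur * 3 / 100
    return min(fixed, pct) if is_sme else max(fixed, pct)

print(high_risk_fine_cap_eur(2_000_000_000))            # large firm: 3% dominates
print(high_risk_fine_cap_eur(100_000_000, is_sme=True)) # SME: lower amount applies
```

For a firm with €2 billion turnover, 3% (€60 million) exceeds the €15 million floor, so the percentage sets the cap; for a €100 million SME, the lower figure (€3 million) applies.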

Prepare — start now

With less than five months until the main deadline, we recommend starting compliance work immediately. Here are the five most important steps:

  1. Classify your AI system. Use our free classification tool to find out if your system is high-risk.
  2. Map existing documentation. Inventory what technical documentation already exists and identify gaps against Annex IV requirements.
  3. Generate technical documentation. Use our Annex IV tool to create a structured draft that your legal team can review.
  4. Conduct FRIA. If your organisation is covered by Article 27, start the impact assessment well in advance.
  5. Establish ongoing processes. Implement risk management, monitoring, and incident reporting as part of your organisation’s routines.

Need help with FRIA? Read our complete FRIA guide.

FAQ about EU AI Act deadlines

Do all deadlines apply to all AI systems?

No. Different deadlines apply to different categories of AI systems. The prohibitions on unacceptable AI practices have been in effect since February 2025, and the GPAI requirements since August 2025. High-risk AI under Annex III applies from August 2026, and Annex I systems from August 2027. Most businesses should focus on the August 2, 2026 deadline.

What happens if I’m not compliant before August 2, 2026?

Supervisory authorities can issue fines and require the AI system to be taken out of service. Fines can reach €15 million or 3% of global annual turnover, whichever is higher. In practice, authorities are expected to focus initially on the most serious cases, but the risk increases over time.

Are there exemptions for small and medium-sized enterprises?

The requirements apply regardless of company size, but for SMEs and startups the fine is capped at the lower of the fixed amount and the turnover percentage. The EU AI Act also requires supervisory authorities to provide guidance and regulatory sandboxes for innovation. The substantive requirements (documentation, risk management, etc.) nevertheless apply in full.

I use ChatGPT/GPT-4 in my business — am I affected?

It depends on how you use it. If you integrate GPT-4 into a system that falls under Annex III categories (e.g., recruitment, credit scoring), you may be classified as a deployer of a high-risk system and subject to requirements from August 2026. General-purpose use (text processing, brainstorming) is not affected by the high-risk requirements.

Is your AI system ready?

Classify your AI system today and find out which requirements apply — free and without registration.

Classify now