Conformy

EU AI Act for startups and SMEs

Practical guide for small businesses and startups to understand, classify, and comply with the EU AI Act. Focused on proportionality, minimal burden, and cost-effective compliance.

Last updated: 7 April 2026

Does the EU AI Act apply to startups?

Yes. The EU AI Act applies to all companies that develop, distribute, or use AI systems in the EU, regardless of size. There is no blanket exemption for startups or small companies.

But it's crucial to understand: the AI Act is built on a proportionality principle. The smaller your company and the lower the risk your AI system poses, the smaller the compliance burden in practice.

Proportionality is key: A startup using ChatGPT for customer support doesn't need the same documentation as a healthcare AI startup. The AI Act scales requirements by risk, not by company size.

Most AI systems from startups are classified as minimal risk, which means there are no special legal requirements. Many startups need to do nothing at all under the regulation.

If your system is classified as high-risk—for example, AI for recruitment, fintech, healthcare, or education—you must follow more comprehensive requirements. But even then, there are ways to reduce the burden through proportionality rules and regulatory sandboxes.

The key for startups is to correctly classify your system, understand what requirements actually apply, and plan a realistic compliance strategy within limited resources.

SME provisions and startup-friendly measures

The EU AI Act contains several provisions designed to ease compliance for small and medium-sized companies. Here are the key ones:

Proportionality in requirements (Article 17(2))

Article 17(2) requires that a provider's quality management system be implemented in a way that is proportionate to the size of the organisation, and Article 62 obliges member states to take the specific interests and needs of SMEs, including startups, into account. For SMEs, this means:

Requirements for documentation, testing, and risk management can be scaled based on the system's complexity and actual risk. A simple rule-based AI classifier doesn't need the same extensive documentation as a machine learning model. Supervisors are expected to account for small companies' limited resources during compliance reviews.

Reduced fees for conformity assessment (Article 62)

For high-risk AI systems, SMEs can apply for reduced fees for third-party conformity assessment. In some member states, these fees can be reduced by 50% for small companies.

Regulatory sandboxes for testing (Articles 57-58)

The AI Act allows member states to establish regulatory sandboxes: controlled environments where startups can test high-risk AI systems under supervision without having to fully comply with all requirements from day one. Every member state must have at least one sandbox operational by 2 August 2026 (Article 57).

Technical support from supervisory authorities

Member states can provide guidance and technical support to small companies to ease compliance. Several countries (e.g., Germany, France) already offer free consultation services for startups.

Practical implication for startups: You have the right to expect proportional treatment from regulators. Document your efforts and reasoning; showing that you did what you could with available resources counts for a lot.

Startup risk classification—what is your system?

Most startup AI systems fall under minimal or limited risk. Here are some practical examples to classify your system:

Minimal risk (no special legal requirements)

Classic minimal-risk systems include:

  • Customer support chatbots (technical support, FAQ automation)
  • Recommendation systems (e-commerce, music streaming, news)
  • Spam and fraud filtering
  • Simple predictive models for internal operations (e.g., inventory, resource planning)

Limited risk (transparency obligation)

Systems here require users to be informed that they're interacting with AI (a minimal disclosure sketch follows the list):

  • Generative AI for content (text, image, or audio generators): outputs must be labeled as AI-generated
  • Emotion recognition systems that detect how people feel (note: emotion recognition in workplaces and schools is prohibited outright)
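
As a minimal sketch of what the disclosure duty can look like in practice, assuming a simple chat backend; the function names and message wording below are illustrative, not text mandated by the Act:

    # Minimal sketch of an AI disclosure in a chat handler.
    # Message wording and function names are illustrative assumptions.
    from typing import Callable

    AI_DISCLOSURE = "You are chatting with an AI assistant, not a human."

    def handle_first_message(user_message: str,
                             generate_reply: Callable[[str], str]) -> str:
        """Prepend the AI disclosure to the first reply of a conversation."""
        reply = generate_reply(user_message)
        return f"{AI_DISCLOSURE}\n\n{reply}"

    print(handle_first_message("What are your opening hours?",
                               lambda msg: "We are open 9-17 on weekdays."))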

High-risk (extensive documentation and requirements)

Some startup AI becomes high-risk—this category affects people's opportunities and rights:

  • Recruitment AI or CV screeners (impacts job opportunities)
  • Fintech AI for credit assessment (impacts access to capital)
  • EdTech AI that impacts access to education
  • MedTech with AI components (diagnostics, treatment suggestions)

Many startups build AI in one of these high-risk categories without realizing it. An AI-assisted hiring app? High-risk. A credit-scoring API? High-risk. Classification is the first step.

Classify your AI system here in 5 minutes and get a clear answer.

Minimal compliance burden (minimal risk systems)

If your startup system is classified as minimal risk, here's good news: there are no legal requirements under the AI Act. You don't need to create Annex IV documentation or conduct formal risk management.

What you legally must do: almost nothing

For minimal risk systems, the legal requirements are:

  • Nothing system-specific is mandatory under the AI Act; the only general duty is AI literacy (Article 4), i.e. ensuring staff who work with the AI have adequate competence.
  • However, you should have good internal documentation for your own purposes and possible regulatory questions later.
  • You must follow other laws like GDPR (if the system processes personal data) and product safety (if applicable).

Best practice (recommended, not mandatory)

To protect your company and build trust, you should consider:

  • A simple risk assessment: What risks does my AI system pose? Document this in a 1-2 page note.
  • Data quality: Ensure training data is representative and (as much as possible) free from serious bias (a minimal bias-check sketch follows this list).
  • Transparency: Document what the system can and cannot do. If it's a chatbot, tell users it's AI.
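
As a minimal sketch of the bias check mentioned above: compare positive-prediction rates across groups and flag large gaps. The column names, example data, and the four-fifths threshold are illustrative assumptions, not requirements from the Act:

    # Minimal bias check: compare selection rates across groups
    # ("disparate impact" style). Column names, data, and the 0.8
    # threshold (the classic four-fifths rule) are illustrative.
    import pandas as pd

    def disparate_impact_ratio(df: pd.DataFrame,
                               group_col: str = "gender",
                               pred_col: str = "approved") -> float:
        """Ratio of the lowest to the highest positive-prediction rate."""
        rates = df.groupby(group_col)[pred_col].mean()
        return rates.min() / rates.max()

    df = pd.DataFrame({
        "gender":   ["f", "f", "f", "m", "m", "m"],
        "approved": [1,   0,   1,   1,   1,   1],
    })
    ratio = disparate_impact_ratio(df)
    print(f"Disparate impact ratio: {ratio:.2f}")  # investigate if below ~0.8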

Why this matters: If something goes wrong and a regulator later reviews your AI, it's much better to show you made a thoughtful attempt to get it right than to have done nothing at all.

If your system is high-risk: practical checklist for startups

If your AI is classified as high-risk, you must follow the AI Act's high-risk requirements. For startups with small teams, the key is to prioritize, outsource if possible, and use tools. Here's a realistic phased plan:

Phase 1: Classification and planning (1-2 weeks)

Before doing anything else:

  • Confirm the classification. Use our classifier to verify the system is high-risk. Read Annex III to be sure.
  • Understand the requirements. Read Chapter III (Articles 8-15) and Annex IV of the regulation or use our guide.
  • Plan resources. Who on your team owns compliance? What resources are needed (time, possible external help)?

Phase 2: Core requirements (1-3 months)

This phase implements the most important requirements:

  • Risk Management System (RMS): Document identified risks, how they're addressed, and mitigation. Use a simple template or tool (a minimal risk-register sketch follows this list).
  • Technical documentation (Annex IV): Use a documentation generator to speed up the process. Answer questions and get a structured draft.
  • Data quality: Review training data, test the model for bias, document results.
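
As a starting point for the RMS, a machine-readable risk register can be as simple as the sketch below. The fields and example risks are our own choices; the AI Act prescribes what the RMS must cover, not a file format:

    # Sketch of a minimal machine-readable risk register for an RMS.
    # Field names and example entries are illustrative assumptions.
    import json
    from dataclasses import dataclass, asdict

    @dataclass
    class Risk:
        risk_id: str
        description: str
        severity: str      # e.g. "low" / "medium" / "high"
        likelihood: str
        mitigation: str
        owner: str
        status: str = "open"

    register = [
        Risk("R-001", "Training data under-represents applicants over 55",
             "high", "medium", "Re-sample data; measure accuracy per age band",
             "ML lead"),
        Risk("R-002", "Model drift degrades accuracy after deployment",
             "medium", "medium", "Monthly evaluation against a holdout set",
             "CTO"),
    ]

    with open("risk_register.json", "w") as f:
        json.dump([asdict(r) for r in register], f, indent=2)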

Phase 3: Additional measures (ongoing)

After core requirements:

  • Fundamental rights impact assessment (FRIA): required for certain deployers under Article 27, e.g. public bodies and providers of credit scoring or insurance. Use a tool for this.
  • Logging and tracing: Implement basic logging so you can track the system's decisions (a minimal sketch follows this list).
  • Testing and iteration: Test the system, gather feedback, update documentation.
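
A minimal sketch of such decision logging, assuming a JSON-lines file; the field names and the example model are hypothetical, not a format mandated by the Act:

    # Minimal decision log for traceability: one JSON line per prediction.
    # Field names and example values are illustrative assumptions.
    import json
    import uuid
    from datetime import datetime, timezone

    def log_decision(logfile, model_version: str, inputs: dict,
                     output: str, score: float) -> None:
        """Append one traceable prediction record to an open log file."""
        record = {
            "id": str(uuid.uuid4()),
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "model_version": model_version,
            "inputs": inputs,
            "output": output,
            "score": score,
        }
        logfile.write(json.dumps(record) + "\n")

    with open("decisions.jsonl", "a") as f:
        log_decision(f, "credit-model-1.3.0",
                     {"income": 42000, "employment_years": 3}, "approved", 0.87)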

Time to deadline: It's less than four months to 2 August 2026. Start now. If your system is high-risk, you should already be working on this.

Regulatory sandboxes—testing high-risk AI under supervision

Regulatory sandboxes are a key opportunity for startup AI developers. They let you test high-risk AI systems in a controlled environment before you have to meet all formal requirements.

What is a regulatory sandbox?

A regulatory sandbox is an environment created by a supervisory authority where:

  • You can test high-risk AI under supervision without having fully completed conformity assessment.
  • The supervisor provides guidance and technical advice during development.
  • You get a clear timeline and milestones for compliance rather than immediate full compliance.

Benefits for startups

Regulatory sandboxes are designed to ease innovation:

  • Flexibility to iterate and improve AI before full compliance.
  • Direct contact with the supervisor for guidance.
  • Market opportunity to launch earlier than competitors waiting for full compliance.

How do you apply?

The process varies by member state, but the general steps are:

  1. Contact your national AI supervisor (in Sweden: IMY, the Swedish Authority for Privacy Protection, or DIGG) and ask about sandbox programs.
  2. Submit an application describing your AI system, its purpose, risks, and compliance plan.
  3. If approved, you'll get a sandbox agreement with timeline and requirements.
  4. Work under supervision until you're fully compliant or the project ends.

Status 2026: Sandboxes are being established across the EU; every member state must have at least one operational by 2 August 2026. Contact your supervisor now to express interest.

Cost of AI Act compliance—startups

A common question: How much does AI Act compliance cost for a small startup? The answer depends entirely on risk level.

Minimal risk systems: nearly free

If your system is minimal risk, compliance under the AI Act costs €0. No formal requirements means no direct compliance costs, and the internal effort is minimal (perhaps a few hours of review).

Limited risk systems: low cost

Limited risk systems carry transparency obligations. Cost: not much more than minimal risk. You need to document transparency information, but no extensive RMS or Annex IV documentation. Budget: 5-20 hours of work, or €500-2,000 if you hire a consultant.

High-risk systems: significant investment

High-risk is where costs increase. Budget depends on system complexity:

  • DIY approach with tools: 100-300 hours of internal work (equivalent to €10k-30k value) for risk management, documentation, and testing.
  • External consulting: €30k-100k depending on complexity (simple lending API might be €30k, complex medical AI could be €100k+).
  • Third-party conformity assessment: €5k-20k for an independent party to review your documentation.

Costs drop with tools

Tools like Conformy reduce costs. Instead of hiring a consultant for €50k, you can use a documentation generator (€100-1,000) that builds structure and drafts for you. You focus on content, not format. Many startups do compliance for 1/10 of consultant costs with the right tools.


Timeline for startups—milestones to 2 August 2026

Here's a realistic timeline for startups to follow. If you haven't started: begin this week.

Now (April-May 2026): classification and planning

These weeks are about understanding what you need to do:

  • Classify your AI system (takes about five minutes with our classifier).
  • If minimal risk: document a simple risk assessment (takes 1-2 hours). You're nearly done.
  • If high-risk: read this guide fully, read Annex III, and start sketching your compliance plan.

May 2026: core requirements for high-risk systems

If your system is high-risk, this is the critical month. You should:

  • Begin documenting your Risk Management System (RMS).
  • Start gathering or generating Annex IV technical documentation.
  • Test the system for bias, accuracy, and robustness.

2 August 2026: deadline

From this date, requirements are fully in effect:

  • Minimal/limited risk systems must meet their respective requirements (nearly none for minimal).
  • High-risk systems must have documentation and RMS in place. If you're in a regulatory sandbox, you can continue under supervision.

Reality check: You can't do compliance work the last week before the deadline. Start now. Many startups wait too long and end up stressed or skip important steps. It's not worth the risk.

Frequently asked questions for startups

Do startups have to comply with the EU AI Act?

Yes, the AI Act applies to all companies regardless of size. But proportionality is key: most startup AI is minimal risk, meaning almost no legal requirements. Only high-risk systems have extensive requirements.

Are there exemptions for small companies?

No blanket exemption based on size. But the AI Act's proportionality principle means supervisors must account for small companies' resources. Plus, regulatory sandboxes and reduced conformity assessment fees offer relief for SMEs.

What if my startup only serves US customers?

The AI Act applies when an AI system is placed on the EU market, put into service in the EU, or when its output is used in the EU, regardless of where the provider is established (an extraterritorial reach similar to the GDPR). If your system is genuinely marketed and used only outside the EU and its output is never used in the EU, most obligations will not apply, but verify this carefully: serving any EU user or deployer brings you back in scope.

How much does compliance cost for a high-risk startup?

Without tools: €30k-100k for consulting and testing. With tools like documentation generators: 1/5 to 1/10 of that cost by automating structure and drafts. DIY approach costs about 100-300 hours of work.

Can we use regulatory sandboxes?

Yes, if your system is high-risk. Sandboxes allow testing under supervision without immediate full compliance. Every member state must have at least one sandbox operational by 2 August 2026. Contact your supervisor to express interest.

What happens if we don't comply by 2 August 2026?

From this date, supervisors can force you to stop using the AI system or impose significant fines: up to €15 million or 3% of global annual turnover for non-compliance with high-risk requirements, and up to €35 million or 7% for prohibited AI practices (for SMEs, the lower of the two amounts applies). For startups, this could mean the end of the business.

Classify your startup AI today

The first step is always classification. It takes just 5 minutes, and you get a clear answer about what requirements apply to your startup.

Classify now