Classify your AI system under the EU AI Act
Before you can draft compliance documentation, you need to know which risk tier your AI system falls into. This free 4-step classifier walks you through the same decision criteria used in Articles 5–6 and Annex III of Regulation (EU) 2024/1689.
What you’ll get
- A clear risk level — prohibited, high-risk, limited, or minimal
- The specific Annex III category that applies (if any)
- A list of the obligations your system must meet and when they take effect
Describe your AI system
Briefly describe what your AI system does, who it targets, and what industry it’s used in.
Frequently asked questions
Is this classification legal advice?
No. The Conformy classifier is a guided self-assessment tool that applies the EU AI Act’s classification logic to your inputs. It does not replace counsel. For systems flagged as high-risk or prohibited, we recommend reviewing the result with a qualified legal advisor before making compliance decisions.
How accurate is the result?
The classifier uses the same legal criteria found in Articles 5–6 and Annex III of Regulation (EU) 2024/1689. Results are deterministic for clear-cut cases. For edge cases — for example, systems that touch multiple Annex III categories — the tool shows the closest match and flags areas that need human judgment.
What happens after my system is classified?
You get a results page showing your risk level, the obligations that apply (for example: risk management, data governance, human oversight), and the deadlines that attach to each. If you sign up, you can save the classification to an AI system and start filling in the required compliance documents directly from the requirements list.
Do I need to create an account to use this?
No. Classification is free and does not require registration. An account is only needed if you want to save the result, generate compliance documents, or track multiple AI systems over time.
What’s the difference between high-risk and limited-risk systems?
High-risk systems (those used in Annex III areas such as employment, credit scoring, or critical infrastructure) must meet the full compliance regime: risk management, data governance, technical documentation, logging, human oversight, and accuracy, robustness, and cybersecurity requirements. Limited-risk systems, such as chatbots or content generators, only need to meet transparency obligations: users must be told they are interacting with AI.
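The tier logic the classifier applies can be sketched roughly as the decision cascade below. This is a simplified illustration only: the category sets are placeholder examples, not the Act's actual wording, and real classification depends on context the classifier gathers across its four steps.

```python
# Simplified sketch of the risk-tier decision cascade.
# Category sets below are illustrative placeholders, not the
# legal text of Articles 5-6 or Annex III.

PROHIBITED_PRACTICES = {"social scoring", "subliminal manipulation"}
ANNEX_III_AREAS = {"employment", "credit scoring", "critical infrastructure",
                   "education", "law enforcement"}
TRANSPARENCY_ONLY = {"chatbot", "content generation"}

def classify(use_case: str) -> str:
    """Return the risk tier for a described use case.

    Tiers are checked in order of severity: a prohibited practice
    outranks a high-risk area, which outranks transparency-only uses.
    Anything that matches no category falls into the minimal tier.
    """
    if use_case in PROHIBITED_PRACTICES:
        return "prohibited"
    if use_case in ANNEX_III_AREAS:
        return "high-risk"
    if use_case in TRANSPARENCY_ONLY:
        return "limited"
    return "minimal"
```

In practice the ordering matters: a system that touches both a prohibited practice and an Annex III area is prohibited outright, which is why the cascade checks the most severe tier first.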