EU AI Act Risk Classification: The Complete Guide for 2026
The EU AI Act classifies every AI system into one of four risk levels. Your classification determines your obligations — and your penalties for non-compliance. Here's everything you need to know.
⏰ Key Deadline
High-risk AI system obligations take effect August 2, 2026. Non-compliance penalties: up to €35 million or 7% of global turnover.
The Four Risk Levels
The EU AI Act (Regulation 2024/1689) establishes a risk-based framework. Every AI system deployed in or affecting the EU falls into one of these categories:
🚫 Prohibited (Unacceptable Risk)
These AI systems are banned entirely from the EU market:
- Social scoring by governments
- Real-time remote biometric identification in publicly accessible spaces for law enforcement purposes (with narrow exceptions)
- Manipulation techniques targeting vulnerabilities
- Emotion recognition in workplaces and educational institutions
- Untargeted facial recognition database scraping
Enforced since: February 2, 2025
⚠️ High-Risk (Annex III)
Systems in these categories must comply with Articles 9–15, which cover risk management, data governance, technical documentation, record-keeping, transparency, human oversight, accuracy, and cybersecurity:
- Healthcare: AI-assisted diagnosis, triage, medical imaging
- Employment: CV screening, interview analysis, workforce management
- Education: Automated grading, admission decisions
- Financial: Credit scoring, insurance risk assessment
- Law enforcement: Predictive policing, evidence analysis
- Critical infrastructure: Energy, water, transport management
- Migration: Visa processing, border control
Deadline: August 2, 2026
ℹ️ Limited Risk
Systems with transparency obligations — users must know they're interacting with AI:
- Chatbots and conversational AI
- AI-generated content (deepfakes, synthetic media)
- Emotion recognition systems (where not prohibited)
- Biometric categorization systems
Key requirement: Clear disclosure that the user is interacting with AI
✅ Minimal Risk
No mandatory requirements, but voluntary codes of conduct are encouraged:
- Spam filters
- AI-powered video games
- Inventory management
- Most recommendation engines
Note: Even minimal-risk systems benefit from a signed classification certificate for investor and customer confidence.
How to Classify Your AI System
Classification follows a clear decision tree:
- Check Article 5 — Is your system in the prohibited list? If yes, you must discontinue it.
- Check Annex III — Does your system fall into one of the 8 high-risk categories? Consider both the domain and the specific use case. Note that Article 6(3) exempts certain Annex III systems that do not pose a significant risk of harm, but that assessment must be documented.
- Check Article 50 — Does your system interact directly with humans or generate content? If yes, you have transparency obligations (Limited Risk).
- None of the above? — Your system is Minimal Risk with no mandatory obligations.
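The four steps above can be sketched as an ordered check. This is a minimal illustration in Python; the boolean inputs and the `classify` helper are hypothetical simplifications for exposition, not a legal test:

```python
from enum import Enum


class RiskLevel(Enum):
    PROHIBITED = "prohibited"   # Article 5
    HIGH = "high"               # Annex III
    LIMITED = "limited"         # Article 50
    MINIMAL = "minimal"         # everything else


def classify(prohibited_practice: bool,
             annex_iii_use_case: bool,
             interacts_or_generates: bool) -> RiskLevel:
    """Walk the decision tree in order: Article 5 first, then Annex III,
    then Article 50 transparency triggers. Order matters: a prohibited
    practice is banned even if it would also match a later category."""
    if prohibited_practice:
        return RiskLevel.PROHIBITED
    if annex_iii_use_case:
        return RiskLevel.HIGH
    if interacts_or_generates:
        return RiskLevel.LIMITED
    return RiskLevel.MINIMAL


# Example: a CV-screening tool is an Annex III employment use case,
# so it classifies as high-risk even though it also talks to users.
print(classify(False, True, True))  # RiskLevel.HIGH
```

Because the checks run top to bottom, a system can only land in one bucket; the earlier, stricter category always wins.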
🔍 Not sure where your system falls?
Our free classification wizard walks you through the decision tree in 5 minutes and gives you an instant result.
What High-Risk Means: Articles 9–15
If your system is classified as high-risk, you must comply with 7 articles containing 59 individual requirements:
- Article 9 — Risk Management: Continuous risk identification, analysis, and mitigation throughout the AI lifecycle.
- Article 10 — Data Governance: Training, validation, and testing data must be relevant, sufficiently representative, and, to the best extent possible, free of errors. Bias examination is mandatory.
- Article 11 — Technical Documentation: Detailed documentation of design, development, and testing before market placement.
- Article 12 — Record-Keeping: Automatic logging of events for traceability during system operation.
- Article 13 — Transparency: Clear instructions for deployers including capabilities, limitations, and intended purpose.
- Article 14 — Human Oversight: Systems must be designed for effective human oversight, including the ability to override or stop the system.
- Article 15 — Accuracy, Robustness, Cybersecurity: Appropriate levels of accuracy, robustness against errors, and resilience to attacks.
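For tracking progress against these seven articles, the headline obligations can be kept as a simple checklist structure. The summaries below paraphrase the article titles, and the `gap_analysis` helper is an illustrative sketch for internal tracking, not a compliance tool:

```python
# Headline obligation per article (paraphrased; not legal text).
HIGH_RISK_OBLIGATIONS = {
    9: "Risk management system",
    10: "Data and data governance",
    11: "Technical documentation",
    12: "Record-keeping (automatic logging)",
    13: "Transparency and provision of information to deployers",
    14: "Human oversight",
    15: "Accuracy, robustness and cybersecurity",
}


def gap_analysis(evidenced: set[int]) -> list[str]:
    """Return the obligations not yet evidenced, in article order."""
    return [f"Article {art}: {name}"
            for art, name in sorted(HIGH_RISK_OBLIGATIONS.items())
            if art not in evidenced]


# Example: documentation and logging are done, everything else is open.
for gap in gap_analysis({9, 11, 12}):
    print(gap)
```

A structure like this maps naturally onto the evidence folders a notified body or market surveillance authority would expect to see.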
The Cost of Non-Compliance
The EU AI Act has real teeth. Each fine is capped at the higher of a fixed amount or a percentage of global annual turnover:
- Prohibited practices: Up to €35 million or 7% of global annual turnover
- High-risk non-compliance: Up to €15 million or 3% of turnover
- Supplying incorrect, incomplete, or misleading information to authorities: Up to €7.5 million or 1% of turnover
For context: a company with €500M annual revenue faces up to €35M in fines for using a prohibited AI system.
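The arithmetic behind these caps is simply the higher of the fixed amount and the turnover percentage (the penalty tiers are set out in Article 99). A small sketch, where `max_fine` is a hypothetical helper name:

```python
def max_fine(cap_eur: int, pct: int, turnover_eur: int) -> float:
    """Maximum administrative fine: the higher of the fixed cap
    or pct% of worldwide annual turnover."""
    return max(cap_eur, turnover_eur * pct / 100)


# Prohibited-practice tier for a €500M-revenue company:
# 7% of €500M is €35M, exactly matching the fixed cap.
print(max_fine(35_000_000, 7, 500_000_000))   # 35000000

# For a €1B-revenue company, the percentage dominates:
print(max_fine(35_000_000, 7, 1_000_000_000))  # 70000000.0
```

The percentage branch is what makes the regime scale: above €500M in turnover, the 7% figure exceeds the fixed €35M cap.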
Timeline: What's Enforced When
- February 2, 2025: Prohibited AI practices enforced ✅
- August 2, 2025: General-purpose AI (GPAI) model obligations and governance rules
- August 2, 2026: High-risk system requirements (Articles 9–15) ⚠️
- August 2, 2027: Full enforcement, including high-risk systems embedded in products covered by EU harmonisation legislation (Annex I)