Is My AI System High-Risk Under the EU AI Act?
The EU AI Act's high-risk classification triggers the most demanding compliance requirements. Here's a quick framework to determine if your system qualifies.
The 3-Question Test
Before diving into the full Annex III categories, answer these three questions:
1️⃣ Does your AI make or influence decisions about people?
If your AI system affects hiring, creditworthiness, medical treatment, education access, or legal outcomes — it's likely high-risk. The key indicator: does the output materially affect someone's rights, opportunities, or wellbeing?
2️⃣ Does your AI operate in a regulated sector?
Healthcare, finance, education, employment, law enforcement, migration, and critical infrastructure are all Annex III sectors. If your AI touches any of these, assume high-risk until proven otherwise.
3️⃣ Is your AI a safety component of a regulated product?
AI that serves as a safety component of a product covered by the EU harmonisation legislation listed in Annex I — medical devices, vehicles, machinery, toys, aviation equipment — is high-risk under Article 6(1) when that product must undergo third-party conformity assessment. Not every AI touching a CE-marked product qualifies; the third-party assessment requirement is the trigger.
If you answered yes to any of these, your system is almost certainly high-risk. But let's verify with the full Annex III breakdown.
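The three questions above form a simple "any yes ⇒ treat as high-risk" triage rule. A minimal sketch of that rule, purely illustrative (the function and field names are hypothetical, and this is not legal advice):

```python
from dataclasses import dataclass

@dataclass
class TriageAnswers:
    affects_people_decisions: bool  # Q1: hiring, credit, medical, education, legal outcomes
    regulated_sector: bool          # Q2: operates in an Annex III sector
    safety_component: bool          # Q3: safety component of an Annex I product

def likely_high_risk(a: TriageAnswers) -> bool:
    """Conservative triage: any 'yes' means treat the system as
    high-risk pending a full Annex III review."""
    return (a.affects_people_decisions
            or a.regulated_sector
            or a.safety_component)

print(likely_high_risk(TriageAnswers(False, True, False)))  # → True
```

The deliberate design choice is the OR: a single "yes" flags the system, matching the conservative stance recommended at the end of this article.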
The 8 Annex III Categories
The EU AI Act defines exactly 8 areas where AI systems are classified as high-risk:
1. Biometric Identification
Remote biometric identification, biometric categorization by sensitive attributes (race, political opinions, religious beliefs), and emotion recognition
2. Critical Infrastructure
AI managing electricity, gas, water, heating, or digital infrastructure. Traffic management systems.
3. Education & Training
Automated grading, admission decisions, learning analytics that determine educational paths, proctoring systems
4. Employment & Workers
CV screening, interview evaluation, promotion decisions, task allocation, performance monitoring, termination decisions
5. Essential Services
Credit scoring, insurance pricing, emergency dispatch prioritization, benefit eligibility assessment
6. Law Enforcement
Predictive policing, evidence analysis, profiling during investigations, lie detection, recidivism assessment
7. Migration & Border
Visa application assessment, border crossing risk, asylum claim processing, document authenticity verification
8. Justice & Democracy
Judicial fact analysis, sentencing assistance, alternative dispute resolution, election influence assessment
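For an internal compliance checklist, the eight categories above can be encoded as a lookup table. A hedged sketch — the category names paraphrase the list above, and the data structure is an illustrative choice, not anything prescribed by the Act:

```python
# Annex III high-risk categories, keyed by their number in the Act.
ANNEX_III = {
    1: "Biometric identification and categorization",
    2: "Critical infrastructure",
    3: "Education and vocational training",
    4: "Employment and worker management",
    5: "Access to essential services",
    6: "Law enforcement",
    7: "Migration, asylum and border control",
    8: "Administration of justice and democratic processes",
}

def matching_categories(keywords: set[str]) -> list[str]:
    """Return Annex III category names mentioning any given keyword —
    a crude first-pass filter, not a classification decision."""
    return [name for name in ANNEX_III.values()
            if any(k.lower() in name.lower() for k in keywords)]

print(matching_categories({"employment"}))  # → ['Employment and worker management']
```

A real register would tag each AI system with zero or more of these category numbers and route tagged systems into the full conformity workflow.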
Common Misconceptions
“My AI just recommends — it doesn't decide.”
The EU AI Act covers systems that assist or inform human decisions in high-risk areas, not just fully autonomous ones. If a human merely rubber-stamps your AI's output, the system is still high-risk. (Article 6(3) carves out genuinely narrow cases — e.g. purely preparatory or procedural tasks — but you must document why the carve-out applies.)
“We're a startup — surely this doesn't apply to us.”
The Act applies based on risk level, not company size. A 3-person startup building an AI hiring tool has the same obligations as Microsoft. (SMEs do get some support measures, but the core requirements remain.)
“We're not in the EU.”
If your AI system is used by people in the EU or its output affects people in the EU, the Act applies to you. Same extraterritorial reach as GDPR.
“It's just internal tooling.”
Internal HR tools, employee monitoring systems, and internal credit/risk assessment are all explicitly covered. “Internal” doesn't mean “exempt.”
What If I'm Not Sure?
When in doubt, classify conservatively. The penalties for operating a non-compliant high-risk system — up to €15M or 3% of total worldwide annual turnover, whichever is higher — far outweigh the cost of compliance verification.