Article 9 Risk Management: What the EU AI Act Actually Requires
Article 9 is the backbone of EU AI Act compliance for high-risk systems. It mandates a continuous risk management system throughout the entire AI lifecycle — not a one-time checklist. Here's what it means in practice.
⏰ Deadline Alert
Article 9 obligations for high-risk AI systems take full effect on August 2, 2026. Non-compliance penalties reach up to €15 million or 3% of global annual turnover, whichever is higher.
What Article 9 Actually Says
Article 9 of the EU AI Act (Regulation 2024/1689) requires providers of high-risk AI systems to establish, implement, document, and maintain a risk management system. This isn't a one-off assessment — it's a continuous, iterative process that runs throughout the entire lifecycle of the AI system.
The regulation is explicit: the risk management system must be "a continuous iterative process planned and run throughout the entire lifecycle of a high-risk AI system, requiring regular systematic review and updating." This language is intentional. EU lawmakers had seen companies treat compliance as a checkbox exercise and designed Article 9 specifically to prevent that.
The scope is comprehensive. It covers risks to health, safety, and fundamental rights. It applies from design through development, testing, deployment, and ongoing operation. And it must account for the system's intended purpose as well as reasonably foreseeable misuse.
The 4 Mandated Steps
Article 9(2) specifies four concrete steps that every risk management system must include:
Step 1: Identify and Analyse Known and Foreseeable Risks
You must systematically identify risks associated with each high-risk AI system. This includes risks from the intended use, from reasonably foreseeable misuse, and from interaction with other systems. The analysis must consider both the probability and severity of potential harm.
This isn't just about obvious technical failures. You need to consider bias risks, risks to vulnerable populations, risks from adversarial inputs, and risks that emerge from how the system is actually used in the real world — which may differ from your intended use case.
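To make this concrete, here is a minimal sketch of what a single risk register entry might capture at this step. Nothing in Article 9 prescribes a schema; every field name below is an illustrative assumption.

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum

class Severity(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3
    CRITICAL = 4

@dataclass
class RiskEntry:
    """One identified risk, per Article 9(2)(a). Schema is illustrative, not prescribed."""
    risk_id: str
    description: str
    source: str                  # e.g. "intended use", "foreseeable misuse", "system interaction"
    affected_groups: list[str]   # who is harmed if the risk materialises
    probability: float           # estimated likelihood, 0.0 to 1.0
    severity: Severity
    identified_on: date
    mitigations: list[str] = field(default_factory=list)
```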
Step 2: Estimate and Evaluate Risks
Once identified, each risk must be estimated and evaluated. Article 9(2)(b) requires you to assess risks that "may emerge when the high-risk AI system is used in accordance with its intended purpose and under conditions of reasonably foreseeable misuse."
This means quantifying where possible: what is the likelihood of this risk materializing? What is the impact if it does? Who is affected? The evaluation must consider the cumulative effect of risks, not just individual risk items in isolation.
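One common way to quantify this, though the Act prescribes no formula, is a probability times severity score, plus an aggregate view across related risks. A rough sketch, reusing the RiskEntry type from above:

```python
def risk_score(entry: RiskEntry) -> float:
    """Simple probability x severity product; the scale and thresholds are policy choices."""
    return entry.probability * entry.severity.value

def cumulative_score(entries: list[RiskEntry]) -> float:
    """Crude cumulative view: probability that at least one risk in the cluster
    materialises, weighted by the worst severity present. Illustrative only."""
    p_none = 1.0
    for e in entries:
        p_none *= (1.0 - e.probability)
    worst = max(e.severity.value for e in entries)
    return (1.0 - p_none) * worst
```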
Step 3: Evaluate Risks from Post-Market Monitoring
Article 9(2)(c) mandates evaluation of risks based on data gathered from the post-market monitoring system required under Article 72. This creates a feedback loop: real-world performance data feeds back into your risk assessment.
This is where most organizations stumble. It requires infrastructure to collect, analyse, and act on post-deployment data — incident reports, performance metrics, user complaints, and drift detection.
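In practice, that feedback loop can start as a periodic job that flags register entries for re-evaluation whenever deployment evidence contradicts them. A hypothetical sketch, with all function names and thresholds assumed rather than taken from any standard:

```python
def flag_for_reevaluation(register: list[RiskEntry],
                          incidents: list[dict],
                          drift_metric: float,
                          drift_threshold: float = 0.1) -> list[str]:
    """Return risk IDs needing re-assessment based on post-market data (Article 9(2)(c))."""
    flagged: set[str] = set()
    for incident in incidents:
        # Incident reports carry a reference to the risk they relate to, when known.
        related = incident.get("related_risk_id")
        if related:
            flagged.add(related)
    if drift_metric > drift_threshold:
        # Significant data drift can invalidate every probability estimate in the register.
        flagged.update(e.risk_id for e in register)
    return sorted(flagged)
```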
Step 4: Adopt Risk Management Measures
Finally, you must adopt appropriate and targeted risk management measures, the step set out in Article 9(2)(d). Article 9(4) adds that these measures must give "due consideration to the effects and possible interactions resulting from the combined application of the requirements set out in this Section."
Risk treatment options include elimination of risk through design, technical mitigation measures, adequate information and training for deployers, and — as a last resort — restriction of the system's use. The measures must be proportionate to the risk level and consider the state of the art.
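A proportionality rule can be expressed as a simple mapping from score to treatment tier. The thresholds below are invented for illustration; Article 9 requires proportionality but sets no numbers:

```python
def treatment_for(score: float) -> str:
    """Map a risk score (see risk_score above) to a treatment tier. Illustrative thresholds."""
    if score >= 3.0:
        return "restrict use until the risk is designed out"   # last resort
    if score >= 1.5:
        return "apply technical mitigation measures"
    if score >= 0.5:
        return "document; inform and train deployers"
    return "accept as residual risk and monitor"
```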
Documentation Requirements
Article 9 doesn't exist in isolation. It connects directly to Article 11 (Technical Documentation) and Annex IV, which specify what must be documented:
- Risk identification methodology: How you identified risks, what frameworks you used, who was involved
- Risk register: All identified risks with probability and severity assessments
- Mitigation measures: What you did about each risk and why you chose that approach
- Residual risk assessment: What risks remain after mitigation and why they're acceptable
- Testing results: Evidence that your risk management measures actually work
- Update history: When the risk assessment was reviewed, what changed, and why
This documentation must be available to national competent authorities upon request. It must be kept up to date throughout the system's lifecycle. And it must be detailed enough for an external auditor to understand your risk management process.
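To keep the update history auditable, each review can be exported as a dated snapshot. A minimal sketch building on the RiskEntry type above (the snapshot format is an assumption, not an Annex IV requirement):

```python
import json
from dataclasses import asdict
from datetime import date

def export_register(register: list[RiskEntry], reviewed_by: str, path: str) -> None:
    """Write a dated snapshot of the risk register so every revision is traceable."""
    snapshot = {
        "reviewed_by": reviewed_by,
        "reviewed_on": date.today().isoformat(),
        "entries": [asdict(e) for e in register],
    }
    with open(path, "w") as f:
        # default=str serialises date and Severity values the JSON encoder can't handle.
        json.dump(snapshot, f, indent=2, default=str)
```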
How ThoughtProof Comply Automates Article 9 Evidence
The challenge with Article 9 isn't understanding the requirements; it's implementing them at scale. Every output your AI system produces needs a documented risk assessment. Every model update needs re-evaluation. Every deployment change needs fresh risk analysis.
ThoughtProof Comply addresses this through multi-model verification. When your AI system produces an output, ThoughtProof automatically routes it through independent model families for verification. Each verification creates an Epistemic Block — a cryptographically signed record that documents:
- The original output and its context
- Independent assessments from multiple model families
- Consensus or dissent between verifiers
- Confidence scores and uncertainty flags
- Timestamp and immutable attestation
This creates the continuous, documented risk assessment that Article 9 demands — not as a manual process, but as an automated byproduct of your system's operation. Every decision is assessed, every assessment is recorded, and every record is cryptographically verifiable.
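Purely to illustrate the kind of record described above, here is a hypothetical shape for such a block. This is not ThoughtProof's actual schema; every field name is an assumption based on the list of contents given:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class EpistemicBlock:
    """Hypothetical verification record; field names assumed, not ThoughtProof's real API."""
    output: str                  # the original output
    context: str                 # the context it was produced in
    assessments: dict[str, str]  # model family -> independent assessment
    consensus: bool              # agreement or dissent between verifiers
    confidence: float            # aggregate confidence score, 0.0 to 1.0
    uncertainty_flags: list[str]
    timestamp: str               # ISO 8601
    signature: bytes             # cryptographic attestation over the record contents
```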
🔍 Check your Article 9 compliance
Our free classification tool identifies your risk level and shows exactly which Article 9 requirements apply to your system.
Common Mistakes: What Gets Companies Fined
Based on analogous enforcement under GDPR (which shares the same "risk-based approach" philosophy), here are the most common mistakes organizations make with risk management:
Mistake 1: One-Time Assessment
The most common failure. Organizations conduct a risk assessment during development, file it, and never look at it again. Article 9 explicitly requires continuous, iterative risk management. A risk assessment from 2025 will not satisfy an auditor in 2027.
Mistake 2: Ignoring Foreseeable Misuse
Your system is designed for medical imaging analysis, but someone uses it to screen job applicants' health. That's foreseeable misuse, and Article 9 requires you to identify and mitigate it. "We didn't intend that use" is not a defence.
Mistake 3: No Post-Market Feedback Loop
Article 9(2)(c) connects risk management to post-market monitoring (Article 72). Without a functioning feedback loop from deployment data back to risk assessment, your system is non-compliant by design.
Mistake 4: Paper Compliance
Writing a risk management policy without implementing it operationally. Regulators will look for evidence of actual risk management activities — meeting minutes, updated risk registers, incident response records — not just a policy document.
Mistake 5: Treating Residual Risk as Zero
Every AI system has residual risk. Claiming zero residual risk signals either dishonesty or incompetence to auditors. Article 9(5) requires that residual risks be judged acceptable, and relevant residual risks must be communicated to deployers.
Penalties for Non-Compliance
Article 99 of the EU AI Act sets out the penalty framework. For high-risk system requirements like Article 9:
- Standard non-compliance: Up to €15 million or 3% of global annual turnover, whichever is higher
- If the violation also constitutes a prohibited practice: Up to €35 million or 7% of global annual turnover
- Supplying incorrect information to authorities: Up to €7.5 million or 1% of global annual turnover, whichever is higher
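To make the "whichever is higher" mechanics concrete, a quick sketch of the standard high-risk cap:

```python
def max_fine(turnover_eur: float, floor_eur: float = 15_000_000, pct: float = 0.03) -> float:
    """Standard cap for high-risk non-compliance under Article 99: the higher of
    a fixed amount and a percentage of global annual turnover."""
    return max(floor_eur, pct * turnover_eur)

# A provider with EUR 1bn turnover: 3% is EUR 30m, which exceeds the EUR 15m floor.
print(f"EUR {max_fine(1_000_000_000):,.0f}")  # EUR 30,000,000
```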
These penalties apply per infringement. A system with multiple Article 9 failures could face cumulative fines. And unlike GDPR's early years, the EU AI Act enforcement framework is designed to move faster — national competent authorities are already being established.
For SMEs and startups, the regulation provides proportionate caps, but even these can be business-ending. The message is clear: compliance is cheaper than non-compliance.
Getting Started: Practical Steps
If you're building a high-risk AI system, here's how to approach Article 9 compliance pragmatically:
- Classify your system first. You can't do risk management until you know your risk classification level.
- Establish your risk management framework. Choose a methodology; ISO 31000 (general risk management) and ISO/IEC 23894 (AI-specific guidance) are good starting points.
- Build the feedback loop early. Don't bolt on post-market monitoring later. Design it in from the start.
- Automate where possible. Manual risk assessment doesn't scale. Tools like ThoughtProof Comply can automate continuous verification and evidence generation.
- Document everything. If it's not documented, it didn't happen. Build documentation into your workflow, not as an afterthought.