Deep Dive · March 8, 2026 · 9 min read

Article 13 Transparency: What the EU AI Act Requires

Transparency isn't just about telling users they're talking to AI. For high-risk systems, Article 13 demands comprehensive documentation, clear instructions for use, and meaningful disclosure of capabilities and limitations. Here's what that means in practice.

⏰ Key Deadline

Article 13 obligations for high-risk AI systems take effect August 2, 2026. Non-compliance can result in penalties of up to €15 million or 3% of global annual turnover, whichever is higher.

What Article 13 Actually Requires

Article 13 of the EU AI Act (Regulation 2024/1689) establishes that high-risk AI systems "shall be designed and developed in such a way as to ensure that their operation is sufficiently transparent to enable deployers to interpret a system's output and use it appropriately."

This is a design-level requirement, not an afterthought. Transparency must be built into the system from the start. The article specifies that high-risk AI systems must be accompanied by instructions for use that include specific, comprehensive information for deployers.

Article 13(3) lists what these instructions must contain:

  • The identity and contact details of the provider
  • The characteristics, capabilities, and limitations of the system's performance
  • The intended purpose and any preconditions for use
  • The level of accuracy, robustness, and cybersecurity (per Article 15) against which the system has been tested and validated
  • Any known or foreseeable circumstances that may lead to risks to health, safety, or fundamental rights
  • The technical capabilities and characteristics for explaining output
  • The specifications for input data, where applicable
  • Where applicable, information to enable deployers to interpret the output
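
One practical way to keep these items auditable is to maintain them as a machine-readable record alongside the human-readable instructions for use. Below is a minimal sketch of that idea in Python; the field names and example values are ours, not prescribed by the regulation:

```python
from dataclasses import dataclass, field

# Illustrative only: field names are our own shorthand for Article 13(3) items.
@dataclass
class InstructionsForUse:
    provider_identity: str               # Art. 13(3)(a): provider name and contact details
    intended_purpose: str                # Art. 13(3)(b)(i)
    accuracy_metrics: dict[str, float]   # Art. 13(3)(b)(ii): tested accuracy, robustness, cybersecurity
    known_risk_circumstances: list[str]  # Art. 13(3)(b)(iii): foreseeable risk scenarios
    output_explanation: str              # Art. 13(3)(b)(iv): how the output can be explained
    input_data_specs: list[str] = field(default_factory=list)  # Art. 13(3)(b)(vi), where applicable
    interpretation_guidance: str = ""    # Art. 13(3)(b)(vii), where relevant

# Hypothetical example record for a fictional provider.
example = InstructionsForUse(
    provider_identity="ExampleAI GmbH, compliance@example.eu",
    intended_purpose="Triage support for dermatology referrals",
    accuracy_metrics={"overall_accuracy": 0.95, "robustness_score": 0.88},
    known_risk_circumstances=["Performance degrades on images from non-clinical cameras"],
    output_explanation="Per-class confidence scores and saliency maps accompany each prediction",
)
```

Keeping the record structured makes it straightforward to check completeness and to export the same content into the formal instructions for use.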

What "Appropriate Level of Transparency" Means in Practice

The regulation uses the phrase "appropriate type and degree of transparency." This is context-dependent — a medical diagnostic AI requires more transparency than an AI-powered spam filter. But the principle is clear: the people deploying and affected by AI systems must understand what they're dealing with.

In practice, "appropriate transparency" means three things:

1. Functional Transparency

Users and deployers must understand what the system does, what it doesn't do, and where it might fail. This goes beyond marketing materials. You need honest, detailed documentation of:

  • What inputs the system accepts and expects
  • How the system processes those inputs (at a meaningful level of abstraction)
  • What outputs the system produces and how to interpret them
  • Known failure modes and edge cases

2. Performance Transparency

Deployers must know how well the system performs. Article 13 connects to Article 15 here — you must disclose the accuracy metrics, the conditions under which those metrics were measured, and any known performance degradation scenarios.

This means publishing your evaluation results, including performance across different demographic groups if the system affects people differently. Saying "95% accuracy" without context is not transparent — saying "95% accuracy on the test set, with known degradation to 82% for inputs in language X" is.
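
As a minimal sketch of that kind of disclosure, the snippet below computes accuracy per subgroup (here, input language) so the differential can be reported rather than hidden in an aggregate number. The column names and values are invented for illustration:

```python
import pandas as pd

# Illustrative evaluation results: columns and values are invented for this sketch.
results = pd.DataFrame({
    "language": ["en", "en", "en", "et", "et", "et", "mt", "mt"],
    "correct":  [1,    1,    1,    1,    0,    1,    0,    1],
})

# Aggregate accuracy hides the differential...
overall = results["correct"].mean()

# ...per-group accuracy is what meaningful performance disclosure needs.
per_group = results.groupby("language")["correct"].mean()

print(f"Overall accuracy: {overall:.0%}")
print(per_group.to_string())
```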

3. Limitation Transparency

Perhaps the most important and most neglected aspect: what can't your system do? What are its known limitations? Under what conditions should deployers not rely on it?

Article 13(3)(b)(iii) specifically requires disclosure of "any known or foreseeable circumstance", including use in accordance with the intended purpose and reasonably foreseeable misuse, that "may lead to risks to the health and safety or fundamental rights".

Technical Documentation Requirements

Article 13 works in conjunction with Article 11 (Technical Documentation) and Annex IV. Together, they require:

  • General system description: Intended purpose, provider identity, system version, hardware/software requirements
  • Design specifications: System architecture, algorithmic logic, key design choices and rationale
  • Development process: Data requirements, training methodologies, validation and testing procedures
  • Monitoring and updating: How the system is monitored post-deployment, update procedures
  • Interaction with other systems: How the system interfaces with hardware and software
  • Risk management documentation: Connected to Article 9 requirements

This documentation must be created before the system is placed on the market and kept up to date throughout its lifecycle. National authorities can request it at any time.

How Epistemic Blocks Provide Transparency by Design

The fundamental challenge with AI transparency is that most systems are black boxes. You can document what goes in and what comes out, but the reasoning process in between is opaque. This makes it nearly impossible to provide the technical capabilities and characteristics needed to explain the system's output, as Article 13(3)(b)(iv) demands.

ThoughtProof Comply approaches this differently through Epistemic Blocks — structured, cryptographically signed records of AI decision-making that provide transparency as an inherent property of the system, not a bolt-on feature.

Each Epistemic Block contains:

  • The input context: What information the AI system received
  • Multi-model assessments: How independent model families evaluated the same input
  • Consensus analysis: Where models agreed and where they diverged
  • Confidence metrics: Quantified uncertainty for every assessment
  • Dissent records: When a model disagrees, its reasoning is preserved — not suppressed
  • Cryptographic attestation: Immutable proof that the assessment occurred and wasn't altered

This directly addresses Article 13's requirement for "information to enable deployers to interpret the output." Instead of a single opaque decision, deployers see a structured breakdown of how the output was reached, where uncertainty exists, and whether independent assessments agree.
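
As a rough illustration of the concept (this is our sketch, not ThoughtProof Comply's actual schema or API), such a record might be represented like this:

```python
from dataclasses import dataclass
from datetime import datetime

# Illustrative sketch of an "Epistemic Block"-style record; field names are ours.
@dataclass
class ModelAssessment:
    model_family: str   # which independent model family produced this assessment
    verdict: str        # the model's conclusion for the given input
    confidence: float   # quantified uncertainty, 0.0-1.0
    reasoning: str      # preserved even when the model dissents from the consensus

@dataclass
class EpistemicBlock:
    input_context: str                  # what the AI system received
    assessments: list[ModelAssessment]  # one entry per independent model family
    consensus: str                      # where the assessments agree
    dissent: list[ModelAssessment]      # disagreeing assessments, kept rather than suppressed
    created_at: datetime                # when the assessment occurred
    signature: bytes                    # cryptographic attestation over the block contents
```

A deployer reading such a record can see the individual assessments, the consensus, and any dissent, which is exactly the kind of interpretive context Article 13 asks providers to supply.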

Real Examples of Transparency Failures

Understanding what transparency failure looks like helps clarify what compliance requires:

Healthcare: Undisclosed Bias in Diagnostic AI

A diagnostic AI system deployed across European hospitals showed 15% lower accuracy for patients from certain ethnic backgrounds. The performance documentation showed only aggregate accuracy metrics. Under Article 13, the provider would need to disclose this performance differential — and the deploying hospitals would need to know about it to use the system appropriately.

Employment: Opaque Rejection Criteria

An AI CV-screening tool rejected candidates without explaining which factors drove the decision. Deployers (HR departments) couldn't tell candidates why they were rejected, and couldn't evaluate whether the criteria were lawful. Article 13 requires that deployers receive enough information to "interpret the system's output and use it appropriately" — which includes understanding rejection criteria.

Financial Services: Missing Limitation Disclosure

A credit scoring AI was validated on historical data from one geographic region but deployed EU-wide. The system's documentation didn't disclose that performance was only validated for one market. Under Article 13, this limitation must be explicitly communicated to deployers.

Content Moderation: Undisclosed Language Gaps

An AI content moderation system performed well in English but had significant accuracy gaps in smaller EU languages. The instructions for use didn't mention this limitation. Article 13(3)(b)(v) requires disclosure, when appropriate, of the system's performance "regarding specific persons or groups of persons on which the system is intended to be used."

Connecting Transparency to Other Requirements

Article 13 doesn't stand alone. It's part of a system of requirements that reinforce each other:

  • Article 9 (Risk Management): Risk assessments feed into transparency documentation — identified risks must be communicated
  • Article 14 (Human Oversight): Meaningful human oversight is impossible without transparency — oversight requires understanding
  • Article 15 (Accuracy): Performance metrics are a core component of transparency documentation
  • Article 50 (Limited Risk): Even non-high-risk systems have basic transparency obligations (disclosure of AI interaction)

Practical Implementation Steps

Here's how to approach Article 13 compliance:

  1. Start with classification. Determine your risk level to understand which transparency obligations apply.
  2. Audit your current documentation. Compare what you have against Article 13(3) requirements. Identify gaps (see the sketch after this list).
  3. Document honestly. Include limitations, failure modes, and performance differentials. Regulators and deployers respect honesty; they punish concealment.
  4. Build transparency into the system. Tools like ThoughtProof Comply generate transparency artefacts automatically through multi-model verification.
  5. Test with deployers. Give your documentation to a non-technical deployer. Can they understand what the system does, how to use it safely, and when not to trust it? If not, iterate.
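
For step 2, here is a minimal gap-check sketch, assuming your documentation sections are tracked in a simple mapping; the section names are our shorthand for the Article 13(3) items, not official headings:

```python
# Required topics, paraphrasing Article 13(3); names are our shorthand, not official headings.
REQUIRED_SECTIONS = [
    "provider_identity",
    "intended_purpose",
    "accuracy_robustness_cybersecurity",
    "known_risk_circumstances",
    "output_explainability",
    "input_data_specifications",
    "output_interpretation_guidance",
]

def audit_documentation(docs: dict[str, str]) -> list[str]:
    """Return the Article 13(3) topics that are missing or left empty."""
    return [s for s in REQUIRED_SECTIONS if not docs.get(s, "").strip()]

# Hypothetical documentation set with gaps still to close.
current_docs = {
    "provider_identity": "ExampleAI GmbH, compliance@example.eu",
    "intended_purpose": "CV screening support for junior engineering roles",
    "accuracy_robustness_cybersecurity": "See evaluation report v2.1",
    "known_risk_circumstances": "",
    "output_explainability": "Feature attributions per decision",
}

print("Gaps:", audit_documentation(current_docs))
# -> Gaps: ['known_risk_circumstances', 'input_data_specifications', 'output_interpretation_guidance']
```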

🔍 Verify your transparency compliance

Our classification tool identifies your transparency obligations and shows you exactly what documentation Article 13 requires for your system.

Ready to build transparency into your AI system?

Free classification takes 5 minutes. Get a signed compliance certificate from €49.