EU AI Act · March 15, 2026 · 9 min read

EU AI Act Conformity Assessments: The Complete Guide for US Deployers

If your organization deploys high-risk AI systems that touch European users, customers, or data subjects, you will need to complete a conformity assessment under the EU AI Act. This isn't optional. It's the legal gateway between building an AI system and being allowed to put it on the EU market.

For US companies — especially those already navigating Texas TRAIGA and Colorado SB 24-205 — the conformity assessment introduces a different compliance paradigm. Where TRAIGA focuses on prohibited practices and NIST alignment, and Colorado centers on impact assessments for consequential decisions, the EU AI Act mandates a structured technical evaluation across seven distinct requirement categories before your system can carry a CE marking.

This guide walks CISOs, General Counsel, and VP-level compliance leaders through every requirement, explains what triggers the obligation, and shows how TXAIMS Enterprise ($1,499/mo) provides a digital conformity assessment workflow that transforms a months-long paper exercise into a structured, trackable process.

What Triggers a Conformity Assessment

Not every AI system requires a conformity assessment. The obligation applies exclusively to high-risk AI systems as defined in Article 6 and Annex III of the EU AI Act. The four risk categories create a tiered framework: unacceptable risk (banned), high risk (conformity assessment required), limited risk (transparency obligations), and minimal risk (no obligations).

High-risk classification is triggered by two pathways:

  • Annex I products: AI systems that serve as safety components of products already subject to EU harmonized legislation (medical devices, machinery, aviation, automotive, toys, marine equipment). These systems must undergo conformity assessment as part of the product's existing CE marking process.
  • Annex III use cases: AI systems deployed in enumerated high-risk domains — biometric identification, critical infrastructure management, education and vocational training access, employment and worker management, access to essential services (credit, insurance, public benefits), law enforcement, migration and border control, and administration of justice.

For US enterprises, the most common triggers are employment-related AI (resume screening, performance evaluation, workforce management), credit decisioning, insurance underwriting, and biometric systems. If you're deploying any of these in markets that include EU residents, you almost certainly have a conformity assessment obligation.

The penalties for deploying a high-risk system without completing a conformity assessment are severe: up to €15 million or 3% of global annual turnover, whichever is higher (the Act's top tier of €35 million or 7% is reserved for prohibited practices). For a mid-market company with $500M in revenue, that exposure still reaches roughly $15M. This is not theoretical — the high-risk requirements, including conformity assessments, become enforceable on August 2, 2026.

The 7 Requirements: Articles 9 Through 15

The conformity assessment evaluates your AI system against seven mandatory requirements. Each maps to a specific Article and demands documented evidence. Here's what each requires and how sophisticated compliance teams approach them.

Article 9: Risk Management System

Article 9 requires a continuous, iterative risk management system that spans the entire AI system lifecycle. This is not a one-time risk assessment — it's an ongoing process that must be documented, updated, and reviewed throughout the system's operational life.

The risk management system must include:

  • Identification and analysis of known and reasonably foreseeable risks to health, safety, and fundamental rights
  • Estimation and evaluation of risks that may emerge when the system is used in accordance with its intended purpose and under conditions of reasonably foreseeable misuse
  • Adoption of appropriate risk management measures, including design choices and technical safeguards
  • Testing procedures to ensure risks are effectively mitigated, including testing against real-world conditions

For organizations already aligned with the NIST AI RMF, there is substantial overlap. The NIST “Map” and “Measure” functions align directly with Article 9's risk identification and evaluation requirements. TXAIMS Enterprise maps these overlaps automatically, so evidence generated for your NIST compliance posture populates the corresponding Article 9 fields in your conformity assessment.
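
As a concrete illustration, here is a minimal sketch of what a single risk-register entry might look like in Python. The field names and structure are our own illustration — Article 9 does not prescribe a schema, and this is not a TXAIMS format.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class RiskEntry:
    """One entry in an Article 9-style risk register (illustrative structure, not a prescribed schema)."""
    risk_id: str
    description: str                      # known or reasonably foreseeable risk
    affected_interests: list[str]         # health, safety, fundamental rights
    likelihood: str                       # e.g. "low" / "medium" / "high"
    severity: str
    mitigations: list[str] = field(default_factory=list)
    test_evidence: list[str] = field(default_factory=list)   # links to test reports
    last_reviewed: date | None = None

# Example: a foreseeable-misuse risk for a resume-screening system
entry = RiskEntry(
    risk_id="RSK-014",
    description="Model used to rank candidates for roles outside its validated job families",
    affected_interests=["fundamental rights: non-discrimination"],
    likelihood="medium",
    severity="high",
    mitigations=["limit deployment to validated job families", "deployer instructions for use"],
    test_evidence=["reports/out-of-domain-eval-2026-03.pdf"],
    last_reviewed=date(2026, 3, 1),
)
```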

Article 10: Data and Data Governance

Article 10 imposes rigorous requirements on training, validation, and testing datasets. This is where many US companies encounter their first compliance gap — the EU AI Act treats data governance as a first-class requirement, not an afterthought.

Your data governance practices must address:

  • Design choices: Documented rationale for data collection, processing, and labeling methodologies
  • Data preparation: Processes for annotation, aggregation, and feature engineering with quality controls
  • Relevance and representativeness: Evidence that datasets reflect the populations and conditions under which the system will operate
  • Bias examination: Proactive identification and mitigation of possible biases, including bias related to protected characteristics under EU law
  • Data gap identification: Assessment of whether datasets adequately cover the system's intended operating environment

The critical nuance: Article 10 permits processing of special categories of personal data (race, ethnicity, political opinions, health data) strictly for bias monitoring purposes, subject to GDPR safeguards. This is a departure from many US companies' default position of avoiding sensitive attribute collection entirely.
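
To make proactive bias examination concrete, here is a minimal Python sketch that computes per-group selection rates and a disparity ratio. The column names and the 0.8 review threshold are illustrative assumptions, not figures from the Act.

```python
import pandas as pd

def selection_rate_report(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.DataFrame:
    """Per-group selection rate and ratio to the most-favored group (illustrative check only)."""
    rates = df.groupby(group_col)[outcome_col].mean().rename("selection_rate")
    report = rates.to_frame()
    report["disparity_ratio"] = report["selection_rate"] / report["selection_rate"].max()
    # 0.8 is a common rule-of-thumb review threshold, not an EU AI Act number
    report["flag_for_review"] = report["disparity_ratio"] < 0.8
    return report

# Hypothetical screening outcomes (1 = advanced to interview)
screens = pd.DataFrame({
    "gender": ["f", "m", "f", "m", "f", "m", "f", "m"],
    "advanced": [1, 1, 0, 1, 1, 1, 0, 1],
})
print(selection_rate_report(screens, group_col="gender", outcome_col="advanced"))
```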

Article 11: Technical Documentation

Article 11 requires comprehensive technical documentation that must be completed before the system is placed on the market and kept up to date throughout its lifecycle. The documentation must enable authorities to assess the system's compliance with all requirements.

Annex IV specifies the minimum contents:

  • General description of the AI system (intended purpose, developer identity, versions)
  • Detailed description of system elements and development process
  • Information about monitoring, functioning, and control of the system
  • Description of the risk management system (linking back to Article 9)
  • Description of changes made throughout the system lifecycle
  • A list of harmonized standards applied, or alternative means of compliance
  • A copy of the EU declaration of conformity
  • Description of the system's performance, including metrics for the specific persons or groups on which the system is intended to be used

This is where many organizations stall. Producing Annex IV documentation from scratch is a 200+ hour exercise for a single system. TXAIMS Enterprise's conformity assessment dashboard pre-populates documentation fields from your system inventory, risk classifications, and control mappings — reducing that effort to targeted review and gap-filling.
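
One practical pattern is to keep Annex IV content as structured data and generate the narrative document from it. A minimal sketch, with field names that mirror the bullet list above (our own naming, not an official Annex IV schema):

```python
# Keys mirror the Annex IV bullet list above; the names are illustrative, not an official schema.
annex_iv_record = {
    "general_description": {
        "intended_purpose": "Pre-screening of job applications for engineering roles",
        "provider": "Example Corp",
        "version": "2.3.1",
    },
    "system_elements_and_development": "docs/model-card-v2.3.md",
    "monitoring_functioning_control": "docs/monitoring-plan.md",
    "risk_management_system": "risk-register.json",          # links back to Article 9 evidence
    "lifecycle_changes": ["2.2 -> 2.3: retrained on 2025 H2 applicant data"],
    "harmonized_standards_applied": [],                      # or alternative means of compliance
    "declaration_of_conformity": "docs/eu-declaration-of-conformity.pdf",
    "performance": {
        "overall_accuracy": 0.91,
        "subgroup_metrics": "reports/subgroup-performance-2026-Q1.csv",
    },
}
```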

Article 12: Record-Keeping

High-risk AI systems must include automatic logging capabilities that enable traceability of the system's functioning throughout its lifecycle. Article 12 spells out minimum log contents — most explicitly for remote biometric identification systems — including:

  • The period of each use of the system (start and end dates/times)
  • The reference database against which input data was checked
  • Input data that led to a match or result, particularly data triggering further verification by a human
  • Identification of natural persons involved in the verification of results

For US deployers, this requirement often necessitates architectural changes. If your AI system wasn't built with EU-compliant audit logging, retrofitting it can be a significant engineering effort. The conformity assessment evaluates whether your logging architecture meets these standards — not just whether you have “some logging.”
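
As a rough sketch of what such a log record can capture, the snippet below writes one append-only record per use of the system; the field names follow the bullets above but are an illustrative schema, not prescribed wording.

```python
import json
from datetime import datetime, timezone

def log_use(system_id: str, reference_db: str, input_ref: str,
            triggered_human_review: bool, reviewer_id: str | None = None) -> dict:
    """Append one audit record for a single use of a high-risk system (illustrative schema)."""
    record = {
        "system_id": system_id,
        "used_at": datetime.now(timezone.utc).isoformat(),   # period of use
        "reference_database": reference_db,                  # database the input was checked against
        "input_reference": input_ref,                        # pointer to input, not raw personal data
        "triggered_human_review": triggered_human_review,
        "reviewer_id": reviewer_id,                          # natural person verifying the result
    }
    with open("audit_log.jsonl", "a") as fh:                 # in production: append-only, tamper-evident store
        fh.write(json.dumps(record) + "\n")
    return record

log_use("credit-scoring-v4", "applicant-registry", "req-8842",
        triggered_human_review=True, reviewer_id="analyst-071")
```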

Article 13: Transparency and Provision of Information to Deployers

Article 13 requires that high-risk AI systems be designed to operate with sufficient transparency that deployers can interpret the system's output and use it appropriately. This goes beyond a transparency notice — it mandates structured information delivery.

Instructions for use must include:

  • The provider's identity and contact details
  • The system's capabilities and limitations, including known accuracy levels for specific populations
  • Foreseeable misuse scenarios and recommended safeguards
  • The technical measures for human oversight built into the system
  • Specifications for input data (format, quality, domain)
  • Computational and hardware requirements
  • Expected system lifetime and maintenance requirements

The transparency requirement connects directly to broader EU AI Act compliance obligations. Your conformity assessment must demonstrate that deployers receive adequate information to use the system within its validated parameters.
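
Some teams version the instructions for use as structured data alongside each model release so the deployer-facing document never drifts from the deployed system. A minimal sketch with illustrative values:

```python
# Field names mirror the list above; this is an illustrative structure, not a mandated format.
instructions_for_use = {
    "provider": {"name": "Example Corp", "contact": "compliance@example.com"},
    "capabilities_and_limitations": {
        "validated_use": "Pre-screening for engineering roles, EU applicants",
        "known_accuracy": {"overall": 0.91, "per_group": "see subgroup performance report"},
    },
    "foreseeable_misuse": ["ranking candidates for job families outside the validated set"],
    "human_oversight_measures": ["per-decision score explanations", "manual override", "stop control"],
    "input_data_spec": {"format": "JSON, schema v3", "required_fields": ["cv_text", "role_id"]},
    "hardware_requirements": "4 vCPU / 8 GB RAM per inference node",
    "expected_lifetime_and_maintenance": "24 months; re-validation every 6 months",
}
```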

Article 14: Human Oversight

Article 14 requires that high-risk AI systems be designed to allow effective human oversight during the period the system is in use. The measures must be appropriate to the risks, level of autonomy, and context of use.

Human oversight must enable the designated human to:

  • Fully understand the system's capacities and limitations
  • Monitor the system's operation and detect anomalies, dysfunctions, or unexpected performance
  • Interpret the system's output correctly, accounting for the characteristics of the tool and available interpretation methods
  • Decide not to use the system, or to disregard, override, or reverse its output in any particular situation
  • Intervene or interrupt the system through a “stop” mechanism

For enterprise AI systems that operate at scale — processing thousands of credit decisions or resume screenings per day — demonstrating “effective” human oversight is the hardest conformity requirement to satisfy. A rubber-stamp review process will not pass scrutiny. TXAIMS Enterprise helps you document your oversight architecture, escalation thresholds, and human-in-the-loop intervention rates as part of the conformity workflow.
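
For a sense of what effective oversight can look like in code, here is a minimal sketch of a review gate with an override path and a stop control; the threshold, function names, and sample scores are illustrative assumptions, not a prescribed pattern.

```python
STOP_FLAG = False          # a "stop" control: operators can halt automated processing entirely
REVIEW_BAND = 0.15         # illustrative: decisions this close to the threshold go to a human

def decide(score: float, human_review) -> str:
    """Return 'approve', 'deny', or defer to a human reviewer who may override the output."""
    if STOP_FLAG:
        raise RuntimeError("System halted by human operator")
    if abs(score - 0.5) < REVIEW_BAND:
        return human_review(score)         # the human may disregard or reverse the output
    return "approve" if score >= 0.5 else "deny"

# Track the intervention rate as oversight evidence
scores = [0.42, 0.91, 0.55, 0.12]
outcomes = [decide(s, human_review=lambda s: "escalated") for s in scores]
intervention_rate = outcomes.count("escalated") / len(outcomes)
print(outcomes, intervention_rate)   # ['escalated', 'approve', 'escalated', 'deny'] 0.5
```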

Article 15: Accuracy, Robustness, and Cybersecurity

Article 15 is the technical performance requirement. High-risk AI systems must achieve appropriate levels of accuracy, robustness, and cybersecurity throughout their lifecycle.

  • Accuracy: Performance levels must be declared in accompanying instructions for use, including metrics relevant to the system's intended purpose and the persons affected
  • Robustness: Systems must be resilient to errors, faults, and inconsistencies in their operating environment, including adversarial inputs
  • Cybersecurity: Technical solutions must protect against unauthorized third-party manipulation of training data, model parameters, or operational inputs

This requirement directly connects to CISO concerns. If you're responsible for both AI compliance and security posture, Article 15 is where those domains converge. Your conformity assessment must include adversarial testing results, data poisoning defenses, and model integrity verification.
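
As one small piece of robustness evidence, the sketch below measures how often a model's predictions flip under small random input perturbations. It is a toy stability check under assumed interfaces, not a substitute for a full adversarial test suite.

```python
import numpy as np

def perturbation_flip_rate(predict, X: np.ndarray, epsilon: float = 0.01, trials: int = 20) -> float:
    """Share of (sample, trial) pairs where small random noise flips the predicted label."""
    rng = np.random.default_rng(0)
    baseline = predict(X)
    flipped, total = 0, 0
    for _ in range(trials):
        noisy = X + rng.normal(scale=epsilon, size=X.shape)
        flipped += int(np.sum(predict(noisy) != baseline))
        total += len(X)
    return flipped / total

# Toy threshold "model" on synthetic data, standing in for a hypothetical model.predict interface
predict = lambda X: (X.sum(axis=1) > 0).astype(int)
X = np.random.default_rng(1).normal(size=(200, 5))
print(f"flip rate under noise: {perturbation_flip_rate(predict, X):.3f}")
```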

Self-Assessment vs. Third-Party Conformity Assessment

The EU AI Act establishes two conformity assessment pathways, and understanding which applies to your system is essential for planning timelines and budgets.

Self-assessment (Internal Control — Annex VI): Most high-risk AI systems under Annex III can be assessed internally by the provider. You document compliance with Articles 9–15, complete the technical documentation, establish a quality management system, and issue your own EU declaration of conformity. This does not mean the assessment is trivial — it means you bear full responsibility for the assessment's thoroughness.

Third-party assessment (Notified Body — Annex VII): Required for specific categories, most notably real-time remote biometric identification systems used in publicly accessible spaces. A notified body conducts the assessment, reviews documentation, and may audit your quality management system and technical documentation on-site.

| Dimension | Self-Assessment (Annex VI) | Third-Party (Annex VII) |
| --- | --- | --- |
| When required | Most Annex III high-risk systems | Biometric ID in public spaces; certain critical infrastructure |
| Assessor | Provider (internal team) | EU-designated notified body |
| Typical timeline | 8–16 weeks | 16–30 weeks |
| Cost range | Internal resource allocation | €50K–€250K+ per system |
| Ongoing obligation | Maintain QMS, update documentation | Periodic surveillance audits |

For most US enterprises deploying employment, credit, or insurance AI into EU markets, the self-assessment pathway applies. This is the pathway TXAIMS Enterprise is optimized for — providing the structured workflow, toggle checklists, and auto-completion tracking that transform self-assessment from a freeform exercise into a guided process.

Timelines: When Does This Become Enforceable?

The EU AI Act entered into force on August 1, 2024, with a phased enforcement schedule:

  • February 2, 2025: Prohibited AI practices (Chapter II) become enforceable
  • August 2, 2025: General-purpose AI model obligations (Chapter V) become enforceable
  • August 2, 2026: High-risk AI system requirements (Chapters III and IV), including conformity assessments, become enforceable
  • August 2, 2027: High-risk requirements for Annex I products (AI systems embedded in regulated products) become enforceable

That means enterprises have until August 2, 2026, to complete conformity assessments for Annex III high-risk systems. Given that a thorough self-assessment takes 8–16 weeks, organizations should be actively working on assessments now if they haven't started.

NIST Alignment: Where Conformity Assessments Overlap

Organizations already aligned with the NIST AI Risk Management Framework have a meaningful head start. The overlap between NIST AI RMF and EU AI Act conformity requirements is substantial — but not complete.

| EU AI Act Requirement | NIST AI RMF Alignment | Gap |
| --- | --- | --- |
| Art. 9 Risk Management | Map, Measure functions | EU requires fundamental rights impact focus |
| Art. 10 Data Governance | Map 2.3, Measure 2.6 | EU mandates bias monitoring with special-category data |
| Art. 11 Documentation | Govern 1.1, Map 1.1 | Annex IV specificity exceeds NIST guidance |
| Art. 12 Record-Keeping | Govern 1.2 | EU requires automatic logging architecture |
| Art. 13 Transparency | Map 3.4, Manage 2.2 | EU mandates structured deployer instructions |
| Art. 14 Human Oversight | Govern 3.2, Manage 3.1 | EU requires “stop button” mechanism |
| Art. 15 Accuracy/Security | Measure 2.5, Manage 2.3 | EU requires adversarial robustness testing |

TXAIMS Enterprise automatically identifies these overlaps in your compliance posture. When you complete NIST AI RMF controls, the platform flags which conformity assessment fields are satisfied and which gaps remain. This cross-framework mapping is particularly valuable for organizations managing compliance across Texas, Colorado, and the EU simultaneously.
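
If you maintain your own cross-framework mapping, even a simple lookup mirroring the table above lets you query which Articles your existing NIST evidence may already touch. A minimal sketch (the structure is our own illustration, not a TXAIMS export format):

```python
# Mirrors the table above; the dictionary itself is illustrative, not a TXAIMS export format.
ARTICLE_TO_NIST = {
    "Art. 9 Risk Management":    ["Map", "Measure"],
    "Art. 10 Data Governance":   ["Map 2.3", "Measure 2.6"],
    "Art. 11 Documentation":     ["Govern 1.1", "Map 1.1"],
    "Art. 12 Record-Keeping":    ["Govern 1.2"],
    "Art. 13 Transparency":      ["Map 3.4", "Manage 2.2"],
    "Art. 14 Human Oversight":   ["Govern 3.2", "Manage 3.1"],
    "Art. 15 Accuracy/Security": ["Measure 2.5", "Manage 2.3"],
}

completed_nist_controls = {"Map 2.3", "Measure 2.6", "Govern 1.2"}

for article, controls in ARTICLE_TO_NIST.items():
    covered = [c for c in controls if c in completed_nist_controls]
    if covered:
        print(f"{article}: existing NIST evidence may cover {covered}")
```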

How the TXAIMS Conformity Assessment Dashboard Works

TXAIMS Enterprise provides a purpose-built conformity assessment workflow designed for enterprise compliance teams managing multiple AI systems across jurisdictions.

System-level assessment tracking: Each AI system in your inventory gets its own conformity assessment workspace. Toggle checklists map one-to-one to Articles 9–15 requirements, with sub-items corresponding to the specific evidence elements each Article demands.

Auto-completion from existing controls: When your NIST AI RMF posture, TRAIGA prohibited practice screening, or Colorado impact assessment data already satisfies a conformity assessment field, the platform marks it complete and links to the underlying evidence. This eliminates redundant documentation across jurisdictions.

Evidence attachment and versioning: Each checklist item accepts document uploads, links, and narrative evidence. Version history ensures you can demonstrate the state of your compliance posture at any point in time — critical for responding to regulator inquiries months or years after the initial assessment.

Gap analysis reporting: The dashboard surfaces unaddressed requirements with jurisdiction-specific deadlines, priority scoring based on enforcement timelines, and estimated effort to close each gap. Board-ready reports summarize conformity status across your entire AI portfolio.
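
The scoring behind such a report can be as simple as weighing deadline proximity against severity and effort. A minimal sketch with illustrative weights (not the formula TXAIMS uses):

```python
from datetime import date

def gap_priority(deadline: date, effort_days: int, severity: int,
                 today: date = date(2026, 3, 15)) -> float:
    """Higher score = close this gap sooner. Illustrative weights, not the TXAIMS formula."""
    days_left = max((deadline - today).days, 1)
    return severity * 100 / days_left + 0.5 * effort_days

open_gaps = [
    ("Annex IV technical documentation", date(2026, 8, 2), 30, 3),
    ("Adversarial robustness testing",   date(2026, 8, 2), 15, 2),
]
for name, deadline, effort, severity in open_gaps:
    print(f"{name}: priority {gap_priority(deadline, effort, severity):.1f}")
```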

Multi-system portfolio view: For enterprises operating 10, 50, or 100+ AI systems, the portfolio view shows conformity assessment status across all systems, highlighting which are ready for market, which have critical gaps, and which need reassessment after modifications.

Building Your Conformity Assessment Action Plan

For compliance leaders preparing to execute conformity assessments, here's the recommended sequencing:

  • Weeks 1–2: Complete AI system inventory and risk classification. Identify which systems are high-risk under Annex III and confirm the applicable conformity assessment pathway (self-assessment vs. notified body).
  • Weeks 3–4: Map existing controls and documentation to Articles 9–15. Identify reusable evidence from NIST, ISO 42001, TRAIGA, or Colorado compliance activities.
  • Weeks 5–8: Close documentation gaps. This typically involves producing Annex IV technical documentation, formalizing data governance procedures, and documenting human oversight mechanisms.
  • Weeks 9–10: Conduct internal testing and validation. Verify accuracy metrics, run adversarial robustness tests, and validate logging architecture meets Article 12 requirements.
  • Weeks 11–12: Complete the EU declaration of conformity, finalize the CE marking dossier, and register in the EU database (when operational).
  • Ongoing: Maintain the quality management system, update documentation for material system changes, and conduct periodic compliance reviews.

TXAIMS Enterprise at $1,499/mo supports this entire lifecycle — from initial classification through ongoing maintenance — with structured workflows, evidence bundles, and multi-jurisdiction dashboards that keep your conformity assessments current alongside your TRAIGA and Colorado obligations.

Ready to automate your TRAIGA compliance?

TXAIMS screens your AI systems, builds your NIST defense, and generates evidence bundles in minutes.

Start 14-day free trial