March 15, 2026 · 9 min read

EU AI Act for US Companies: What American Businesses Must Know in 2026

The EU AI Act is not a European regulation that American companies can safely ignore. If your AI system processes data from EU citizens, is marketed within the EU, or produces outputs that are used in the EU — the regulation applies to you. Full stop. The extraterritorial scope mirrors GDPR, and the penalties are even more severe: up to €35 million or 7% of global annual turnover, whichever is higher.

For US-based CISOs, General Counsel, and Chief Privacy Officers, the AI Act creates a dual compliance mandate. You're already navigating domestic frameworks like Texas TRAIGA and Colorado SB 24-205. Now you must layer EU requirements on top — requirements that use different classification schemes, demand different documentation, and impose different enforcement mechanisms. This guide breaks down exactly what US companies need to know and do.

Extraterritorial Scope: Why the AI Act Applies to You

Article 2 of the AI Act defines three scenarios where non-EU companies fall under its jurisdiction:

  • You place AI systems on the EU market. If your AI product or service is available for purchase or use by EU customers — even if sold through a distributor, reseller, or SaaS model — you are a “provider” under the Act and subject to its full obligations.
  • Your AI system's output is used in the EU. If your AI system generates predictions, recommendations, decisions, or content that is consumed by persons located in the EU, the Act applies. This captures US companies whose AI products are used by EU-based subsidiaries, partners, or customers.
  • You deploy AI systems that affect EU persons. If you are a US company using AI to make decisions about EU citizens — employment decisions for EU-based employees, credit decisions for EU applicants, or content moderation for EU users — you are a “deployer” under the Act.

The practical implication: any US enterprise with EU customers, EU employees, or EU-facing products almost certainly falls within scope. And unlike some regulations where the extraterritorial provisions are rarely enforced, the EU has already demonstrated with GDPR that it will pursue non-EU companies aggressively. Over €4.5 billion in GDPR fines have been levied, including major actions against US tech companies.
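
To make the scope test concrete, the three Article 2 scenarios reduce to a screening check you can run across an entire AI inventory. The sketch below is illustrative only; the AISystem fields are hypothetical shorthand for questions your legal team would answer, not terminology from the Act.

```python
from dataclasses import dataclass

@dataclass
class AISystem:
    # Hypothetical screening fields -- shorthand for legal questions, not Act terms
    marketed_in_eu: bool               # offered to EU customers, directly or via resellers/SaaS
    output_used_in_eu: bool            # predictions, decisions, or content consumed by persons in the EU
    decisions_affect_eu_persons: bool  # e.g., employment or credit decisions about EU-based people

def in_eu_ai_act_scope(system: AISystem) -> bool:
    """Any one of the three Article 2 scenarios is enough to trigger scope."""
    return (
        system.marketed_in_eu
        or system.output_used_in_eu
        or system.decisions_affect_eu_persons
    )
```

A screen like this is exactly what Step 1 of the action plan at the end of this guide calls for.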

The Four Risk Tiers: Classification That Determines Your Obligations

The AI Act classifies every AI system into one of four risk tiers. Your tier determines your compliance obligations — and there is no opt-out.

Tier 1: Unacceptable Risk (Banned). These AI practices are prohibited entirely within the EU. They include social scoring systems (whether operated by public or private actors), real-time remote biometric identification in public spaces (with narrow law enforcement exceptions), AI systems that exploit vulnerabilities of specific groups (age, disability, social situation), and subliminal manipulation techniques that cause harm. If your AI system falls into this category, you cannot deploy it in the EU under any circumstances. The parallel to TRAIGA's prohibited practices is notable, though the specific practices differ.

Tier 2: High-Risk. This is where the bulk of enterprise compliance obligations concentrate. High-risk AI systems are those listed in Annex III, covering eight domains: biometric identification, critical infrastructure, education and vocational training, employment and worker management, access to essential services (credit, insurance, public benefits), law enforcement, migration and border control, and administration of justice. High-risk systems must comply with Articles 9 through 15:

  • Article 9 — Risk management system. A continuous, iterative process for identifying, analyzing, evaluating, and mitigating risks throughout the system's lifecycle.
  • Article 10 — Data governance. Training, validation, and testing datasets must meet quality criteria including relevance, representativeness, and freedom from errors.
  • Article 11 — Technical documentation. Comprehensive documentation per Annex IV that enables assessment of the system's compliance before it is placed on the market.
  • Article 12 — Record-keeping. Automatic logging of events (“logs”) throughout the system's operational lifetime, with traceability of the system's functioning.
  • Article 13 — Transparency. The system must be designed to enable deployers to interpret outputs and use the system appropriately.
  • Article 14 — Human oversight. Systems must be designed to allow effective human oversight, including the ability to override or reverse AI decisions.
  • Article 15 — Accuracy, robustness, and cybersecurity. Systems must achieve appropriate levels of accuracy and resilience against errors and attacks.

Tier 3: Limited Risk. AI systems that interact with humans (chatbots), generate synthetic content (deepfakes), or perform emotion recognition or biometric categorization must meet transparency obligations. Users must be informed they are interacting with AI, and AI-generated content must be labeled as such.

Tier 4: Minimal Risk. AI systems that pose minimal risk — spam filters, AI-enabled video games, inventory management systems — have no mandatory obligations under the Act, though voluntary codes of conduct are encouraged.
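
Once a system's attributes are established (which does require legal judgment), the tier assignment itself is mechanical. Here is a minimal sketch, assuming the prohibited practices, the eight Annex III domains, and the transparency triggers have been reduced to flat sets of shorthand labels; the strings are ours, not the Act's wording.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = 1  # Tier 1: banned outright in the EU
    HIGH = 2          # Tier 2: Articles 9-15 apply
    LIMITED = 3       # Tier 3: transparency obligations
    MINIMAL = 4       # Tier 4: voluntary codes of conduct only

# Shorthand labels for the categories described above (not the Act's exact wording)
PROHIBITED = {"social_scoring", "subliminal_manipulation",
              "vulnerability_exploitation", "realtime_remote_biometric_id"}
ANNEX_III_DOMAINS = {"biometrics", "critical_infrastructure", "education",
                     "employment", "essential_services", "law_enforcement",
                     "migration_border_control", "administration_of_justice"}
TRANSPARENCY_TRIGGERS = {"chatbot", "synthetic_content",
                         "emotion_recognition", "biometric_categorization"}

def classify(practices: set[str], domain: str, features: set[str]) -> RiskTier:
    """Assign a tier in the Act's order of precedence: banned, then high, then limited."""
    if practices & PROHIBITED:
        return RiskTier.UNACCEPTABLE
    if domain in ANNEX_III_DOMAINS:
        return RiskTier.HIGH
    if features & TRANSPARENCY_TRIGGERS:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL
```

For example, classify(set(), "employment", {"chatbot"}) returns RiskTier.HIGH: the Annex III employment domain takes precedence over the chatbot transparency trigger.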

Conformity Assessment: The EU's Compliance Gatekeeping

For high-risk AI systems, the EU requires a conformity assessment before the system can be placed on the market or put into service. This is fundamentally different from the US approach, where compliance is typically self-assessed and enforced through after-the-fact penalties.

The conformity assessment process varies by domain. For most high-risk systems, providers may conduct an internal conformity assessment using the procedure in Annex VI. However, AI systems used in biometric identification, critical infrastructure management, and certain other sensitive domains require assessment by a notified body — an independent third-party auditor designated by an EU member state.

Upon successful conformity assessment, the system receives a CE marking — the EU's mandatory conformity mark indicating it meets the Act's requirements. The system must also be registered in the EU AI database before deployment. For US companies, this means you need either an in-house team capable of conducting conformity assessments to EU standards, or a relationship with a notified body. Neither is optional for high-risk systems.
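
In code terms, the routing decision is a one-branch function. Treat the NOTIFIED_BODY_DOMAINS set below as an assumption to confirm with counsel; which route applies also depends on factors such as whether harmonized standards were followed.

```python
# Assumed set of Annex III domains routed to third-party assessment;
# the real determination also depends on harmonized standards and the Act's annexes.
NOTIFIED_BODY_DOMAINS = {"biometrics", "critical_infrastructure"}

def assessment_route(domain: str) -> str:
    """Simplified choice of conformity assessment path for a high-risk system."""
    if domain in NOTIFIED_BODY_DOMAINS:
        return "notified_body"  # independent auditor designated by an EU member state
    return "internal"           # provider self-assessment under the Annex VI procedure
```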

The Penalty Structure: Why This Can't Wait

The EU AI Act establishes a three-tier penalty structure that scales with the severity of the violation:

  • Prohibited practices: up to €35 million or 7% of global turnover. Example: deploying banned AI systems (social scoring, subliminal manipulation).
  • High-risk non-compliance: up to €15 million or 3% of global turnover. Example: failing the Articles 9–15 requirements or skipping a required conformity assessment.
  • Incorrect information: up to €7.5 million or 1% of global turnover. Example: supplying false data to notified bodies or national authorities.

For a US company with $5 billion in global revenue, the maximum penalty for deploying a prohibited AI system is $350 million (7% of turnover). For failing to comply with high-risk obligations, it's $150 million. These are not theoretical maximums — the EU has demonstrated with GDPR that it will impose penalties in the billions when warranted. Meta received a €1.2 billion GDPR fine in 2023. Amazon was fined €746 million in 2021.
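
Because every fine is "whichever is higher", exposure is straightforward to estimate. A quick sketch using integer euro amounts (the caps and percentages are from the table above; the turnover figure is illustrative):

```python
def max_fine_eur(turnover_eur: int, cap_eur: int, pct: int) -> int:
    """Fines take whichever is higher: the fixed cap or pct% of global turnover."""
    return max(cap_eur, turnover_eur * pct // 100)

turnover = 5_000_000_000  # illustrative: EUR 5B global annual turnover
print(max_fine_eur(turnover, 35_000_000, 7))  # prohibited practices -> 350000000
print(max_fine_eur(turnover, 15_000_000, 3))  # high-risk non-compliance -> 150000000
print(max_fine_eur(turnover, 7_500_000, 1))   # incorrect information -> 50000000
```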

The Enforcement Timeline: Where We Are Now

The AI Act entered into force on August 1, 2024, but full enforcement is phased:

  • February 2, 2025: Prohibited practices ban takes effect. If you operate any AI system that engages in social scoring, subliminal manipulation, exploitation of vulnerabilities, or real-time biometric identification (outside exceptions), you are already in violation.
  • August 2, 2025: General-purpose AI (GPAI) model obligations take effect. Providers of foundation models and GPAI systems must comply with transparency requirements, copyright compliance, and — for models with systemic risk — adversarial testing and incident reporting.
  • August 2, 2026: High-risk AI system obligations take effect for systems listed in Annex III. This is the critical deadline for most US enterprises. Conformity assessments, technical documentation, human oversight measures, and EU database registration must be complete by this date.
  • August 2, 2027: Full enforcement for all remaining provisions, including AI systems embedded in products already covered by existing EU product safety legislation.

As of March 2026, you have less than five months until the high-risk obligations take effect. If you operate high-risk AI systems that affect EU persons, the time to begin conformity assessment preparation is now, not in June.
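
The phased dates are fixed, so tracking them programmatically is trivial. A minimal sketch, with the dates taken from the timeline above:

```python
from datetime import date

# Phased enforcement dates from the timeline above
DEADLINES = {
    "prohibited_practices": date(2025, 2, 2),
    "gpai_obligations": date(2025, 8, 2),
    "high_risk_obligations": date(2026, 8, 2),
    "full_enforcement": date(2027, 8, 2),
}

def days_remaining(milestone: str, today: date) -> int:
    """Days until a milestone; negative means the obligation already applies."""
    return (DEADLINES[milestone] - today).days

# From this article's publication date:
print(days_remaining("high_risk_obligations", date(2026, 3, 15)))  # 140
```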

Dual Compliance: EU AI Act + Texas TRAIGA

For US companies subject to both Texas TRAIGA and the EU AI Act, the compliance picture is particularly complex. The frameworks share surface-level similarities — both prohibit certain AI practices, both require risk classification, both impose documentation requirements — but the implementation details diverge significantly.

Risk classification mismatch. Texas uses six risk levels (prohibited, unacceptable, high, medium, low, exempt) based on the system's intent and design. The EU uses four tiers (unacceptable, high, limited, minimal) based on the risk to fundamental rights. A system classified as “medium risk” under Texas might be “high-risk” under the EU if it falls into an Annex III domain — or vice versa.

Safe harbor vs. conformity assessment. Texas offers a NIST AI RMF safe harbor as an affirmative defense against enforcement. The EU offers no such safe harbor — compliance requires a formal conformity assessment, and NIST alignment alone does not satisfy EU requirements. However, organizations with strong NIST alignment will find that many NIST controls map to EU requirements (risk management, documentation, testing), giving them a head start.

Documentation format differences. Texas evidence bundles are structured around NIST AI RMF functions (GOVERN, MAP, MEASURE, MANAGE). EU technical documentation follows the Annex IV structure. The underlying information may overlap — risk assessments, testing results, data governance practices — but the format, structure, and specific data points differ.

How TXAIMS Enterprise Handles EU Compliance

TXAIMS Enterprise was designed for organizations that operate across jurisdictions. For EU AI Act compliance specifically, the platform provides:

  • EU risk tier classification. Every AI system in your inventory is automatically classified under the EU's four-tier model in addition to the Texas six-level model and Colorado's binary model. The classification uses Annex III mapping to determine whether a system qualifies as high-risk based on its domain and function.
  • Annex IV documentation generation. For high-risk systems, TXAIMS generates technical documentation packages aligned with the Annex IV structure. The platform maps your existing system metadata, risk assessment data, and compliance artifacts to the EU-required format — eliminating the need to recreate documentation from scratch.
  • Conformity assessment preparation. TXAIMS provides a guided workflow for conformity assessment readiness, identifying gaps between your current documentation and the EU's requirements. For systems requiring notified body assessment, the platform produces the documentation package in the format notified bodies expect.
  • Multi-framework compliance bitmap. A single view showing compliance status across Texas, Colorado, and EU frameworks for every AI system. Your CISO can see at a glance which systems need attention under which framework — and which are fully compliant across all jurisdictions. (A minimal sketch of the bitmap idea follows this list.)
  • Regulatory timeline tracking. The platform monitors upcoming enforcement deadlines and maps them to your specific systems, alerting you when a deadline is approaching for a system that isn't yet compliant.
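
The bitmap concept is easy to picture: one bit per jurisdiction, per system. The sketch below illustrates the idea only; it is not TXAIMS's implementation, and the inventory entries are hypothetical.

```python
from enum import Flag

class Framework(Flag):
    # One bit per jurisdiction; a system's status is the OR of frameworks it satisfies
    NONE = 0
    TEXAS = 1
    COLORADO = 2
    EU = 4
    ALL = TEXAS | COLORADO | EU

# Hypothetical inventory: system name -> frameworks it currently satisfies
inventory = {
    "resume-screener": Framework.TEXAS | Framework.COLORADO,
    "credit-model": Framework.ALL,
}

for name, status in inventory.items():
    gaps = Framework.ALL & ~status  # bits still missing
    print(f"{name}: gaps = {gaps}" if gaps else f"{name}: compliant in all jurisdictions")
```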

At $1,499/mo for the Enterprise tier, TXAIMS provides multi-jurisdiction compliance management that would otherwise require a dedicated EU compliance team, a specialized documentation platform, and significant legal counsel hours. For an enterprise operating 50+ AI systems across multiple jurisdictions, the platform delivers six-figure annual savings before accounting for risk reduction.

What to Do Now: A Five-Step Action Plan for US Enterprises

With the August 2026 high-risk deadline approaching, US companies should take the following steps immediately:

  • Step 1: Inventory your EU-touching AI systems. Identify every AI system that processes EU citizen data, is available to EU customers, or produces outputs used by EU-based persons. Many enterprises discover they have 2–3× more EU-scoped systems than initially estimated.
  • Step 2: Classify each system under the EU's four-tier model. Map each system against Annex III to determine whether it qualifies as high-risk. Pay particular attention to HR systems (employment decisions for EU employees), customer-facing AI (credit, insurance), and content moderation tools.
  • Step 3: Assess conformity assessment requirements. For each high-risk system, determine whether an internal conformity assessment is sufficient or whether a notified body assessment is required. Begin identifying notified bodies in relevant EU member states.
  • Step 4: Begin Annex IV documentation. Start compiling technical documentation for high-risk systems. This includes system descriptions, risk management documentation, data governance records, testing and validation results, human oversight measures, and accuracy metrics. (A simple gap-tracking sketch follows this list.)
  • Step 5: Appoint an EU authorized representative. Non-EU providers of high-risk AI systems must designate an authorized representative established in the EU. This representative serves as your point of contact with EU market surveillance authorities.
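
For Step 4, a structured gap tracker keeps the documentation effort honest. A minimal sketch, using the Articles 9–15 obligations from earlier in this guide as checklist keys (the labels are ours, not the Act's):

```python
# Articles 9-15 obligations as a per-system checklist (labels are ours, not the Act's)
HIGH_RISK_OBLIGATIONS = [
    "art_09_risk_management",
    "art_10_data_governance",
    "art_11_technical_documentation",
    "art_12_record_keeping",
    "art_13_transparency",
    "art_14_human_oversight",
    "art_15_accuracy_robustness_cybersecurity",
]

def open_gaps(evidenced: set[str]) -> list[str]:
    """Return the obligations a high-risk system has not yet evidenced."""
    return [o for o in HIGH_RISK_OBLIGATIONS if o not in evidenced]

# Hypothetical system that has covered risk management and logging so far
print(open_gaps({"art_09_risk_management", "art_12_record_keeping"}))
```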

Start your TXAIMS Enterprise trial to get your EU-touching systems classified and documented before the August 2026 deadline.


Ready to automate your TRAIGA compliance?

TXAIMS screens your AI systems, builds your NIST defense, and generates evidence bundles in minutes.

Start 14-day free trial