Legal Analysis · February 12, 2026 · 5 min read

Responsible Artificial Intelligence Governance Act: The Complete Texas Law Guide

The Responsible Artificial Intelligence Governance Act — officially designated as HB 149 and commonly referred to as TRAIGA (Texas Responsible AI Governance Act) — is the comprehensive AI governance framework signed into Texas law. It establishes prohibited AI practices, creates an enforcement mechanism through the Texas Attorney General, and recognizes NIST AI Risk Management Framework compliance as a statutory affirmative defense.

It is arguably the most operationally significant state-level AI law in effect in the United States. Here is the full statutory analysis.

Legislative History and Enactment

The Responsible Artificial Intelligence Governance Act was introduced as House Bill 149 during the 89th Texas Legislature. It passed both chambers and was signed into law by Governor Abbott. The Act went into effect on January 1, 2026, making Texas one of the first states to enact comprehensive AI governance legislation.

Texas's approach is notable because it represents a red state model for comprehensive AI regulation — combining pro-innovation provisions (regulatory sandbox, NIST safe harbor, no private right of action) with meaningful enforcement teeth ($200K penalties, AG authority). The design reflects a philosophy of enabling AI development while prohibiting specific harmful uses.

What the Responsible Artificial Intelligence Governance Act Covers

The Responsible Artificial Intelligence Governance Act applies to any deployer of artificial intelligence systems that affect people in Texas. A deployer is any person or entity that uses, integrates, or makes available an AI system — including through third-party tools and SaaS platforms with embedded AI capabilities.

The Act does not distinguish by company size, revenue, or industry. If your organization deploys AI systems that interact with, make decisions about, or affect Texas residents, the Responsible Artificial Intelligence Governance Act applies.

Entities Covered

  • Private companies operating in or serving customers in Texas
  • Out-of-state companies whose AI products or services reach Texas consumers
  • Texas state agencies (with additional obligations under SB 1964 and HB 3512)
  • Local government entities and school districts
  • Healthcare providers (with additional obligations under SB 1188)
  • Nonprofits and universities deploying AI in operational contexts

Prohibited Practices Under the Act

Section 2 of the Responsible Artificial Intelligence Governance Act defines seven categories of prohibited AI practices. These are intent-based prohibitions — the statute targets AI systems designed or deployed with the intent to engage in these practices:

  1. Subliminal manipulation — AI designed to influence behavior through techniques the person is not aware of
  2. Vulnerability exploitation — targeting cognitive, physical, or economic vulnerabilities to distort behavior
  3. Social scoring — classifying individuals based on behavioral or social data for unrelated detrimental purposes
  4. Biometric categorization — inferring race, religion, political opinion, or sexual orientation from biometric data
  5. Real-time biometric identification — identification in public spaces (with narrow law enforcement exemptions)
  6. Predictive policing — profiling based solely on personal characteristics without factual basis
  7. Emotion inference — detecting emotions in workplace, school, and law enforcement settings
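
The screening obligation implied by this list can be made concrete. Below is a minimal sketch of a per-system screening record, with the seven categories paraphrased as checklist keys; the `ScreeningRecord` structure, its field names, and the key spellings are illustrative, not drawn from the statute:

```python
from dataclasses import dataclass, field
from datetime import date

# The seven prohibited-practice categories, paraphrased as checklist keys.
# These names are illustrative shorthand, not statutory language.
PROHIBITED_CATEGORIES = [
    "subliminal_manipulation",
    "vulnerability_exploitation",
    "social_scoring",
    "biometric_categorization",
    "realtime_biometric_identification",
    "predictive_policing",
    "emotion_inference",
]

@dataclass
class ScreeningRecord:
    """Documents that one AI system was screened against all seven categories."""
    system_name: str
    screened_on: date
    # category -> (cleared?, reviewer note)
    findings: dict = field(default_factory=dict)

    def clear(self, category: str, note: str) -> None:
        if category not in PROHIBITED_CATEGORIES:
            raise ValueError(f"unknown category: {category}")
        self.findings[category] = (True, note)

    def is_complete(self) -> bool:
        # Complete only when every category has a documented finding.
        return all(c in self.findings for c in PROHIBITED_CATEGORIES)

record = ScreeningRecord("support-chatbot", date(2026, 2, 1))
for cat in PROHIBITED_CATEGORIES:
    record.clear(cat, "no prohibited intent; design docs reviewed")
print(record.is_complete())  # True
```

Because the prohibitions are intent-based, the reviewer note matters: the record should capture why the design and deployment show no prohibited intent, not just a pass/fail flag.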

The intent-based enforcement model is the defining feature of the Act. Unlike impact-based frameworks (such as Colorado's SB 24-205), the Texas Attorney General does not need to prove that an AI system caused measurable harm; the state need only demonstrate that the system was designed or deployed with prohibited intent.

The NIST AI RMF Affirmative Defense

Section 546.103 of the Responsible Artificial Intelligence Governance Act establishes compliance with the NIST AI Risk Management Framework as an affirmative defense in enforcement proceedings. This is the most strategically significant provision in the Act for deployers.

The defense requires demonstrable alignment — documented evidence of compliance across all four NIST AI RMF functions:

  • Govern — AI governance structure, policies, risk appetite, accountability
  • Map — system-by-system risk mapping, context of use, affected populations
  • Measure — testing protocols, performance metrics, bias monitoring, benchmarks
  • Manage — incident response, remediation history, lifecycle management, monitoring
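
The four functions above lend themselves to a simple evidence tracker. Here is a minimal sketch, assuming a hypothetical artifact-count scoring scheme: the function names come from the NIST AI RMF, but the expected artifact counts and the `defense_ready` threshold are illustrative, not statutory:

```python
# Sketch of "scored evidence" across the four NIST AI RMF functions.
# Scoring scheme and thresholds are hypothetical, not from the statute or NIST.
NIST_FUNCTIONS = ("govern", "map", "measure", "manage")

def rmf_coverage(evidence: dict[str, list[str]]) -> dict[str, float]:
    """Per-function coverage: documented artifacts vs. an expected minimum."""
    expected = {  # hypothetical minimum artifact counts per function
        "govern": 4,   # e.g. policy, risk appetite, org chart, accountability matrix
        "map": 3,
        "measure": 3,
        "manage": 4,
    }
    return {
        fn: min(len(evidence.get(fn, [])) / expected[fn], 1.0)
        for fn in NIST_FUNCTIONS
    }

def defense_ready(evidence: dict[str, list[str]]) -> bool:
    # The affirmative defense requires demonstrable alignment across ALL
    # four functions, so flag readiness only when none is undocumented.
    return all(evidence.get(fn) for fn in NIST_FUNCTIONS)

evidence = {
    "govern": ["ai-policy.pdf", "risk-appetite.md"],
    "map": ["system-risk-map.xlsx"],
    "measure": ["bias-audit-q4.pdf"],
    "manage": ["incident-runbook.md"],
}
print(defense_ready(evidence))  # True
```

The design point is the `all(...)` check: partial alignment (three functions documented, one empty) is treated as not defense-ready, mirroring the requirement that evidence span every function.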

This is the first major state AI law to give NIST AI RMF compliance direct statutory weight. Organizations that build and document their NIST alignment before an investigation have a pre-constructed legal defense.

Enforcement Architecture

The Responsible Artificial Intelligence Governance Act creates an enforcement framework with several distinctive features:

  • Exclusive AG enforcement — only the Texas Attorney General can bring enforcement actions. No private right of action, no class action exposure.
  • Penalties up to $200,000 per violation — violations stack per incident, not per system, creating significant cumulative exposure for systemic non-compliance.
  • 60-day cure period — after AG notification of a violation, the deployer receives 60 days to remediate and demonstrate the fix. This provision rewards organizations with pre-built response capabilities.
  • Mitigation factors — NIST alignment, good-faith compliance effort, cooperation with the AG, voluntary disclosure, and remediation history all reduce exposure.
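
The 60-day cure period is where pre-built response capability pays off. A minimal sketch of the deadline math, assuming calendar-day counting (the statute's exact day-counting rules are not reproduced here) and hypothetical internal milestones:

```python
from datetime import date, timedelta

CURE_WINDOW_DAYS = 60  # cure period after AG notification

def cure_deadline(ag_notice: date) -> date:
    """Last day to remediate and demonstrate the fix (calendar days assumed)."""
    return ag_notice + timedelta(days=CURE_WINDOW_DAYS)

def cure_milestones(ag_notice: date) -> dict[str, date]:
    """Hypothetical internal checkpoints for a pre-built cure workflow."""
    return {
        "triage_complete": ag_notice + timedelta(days=7),    # scope the violation
        "fix_deployed": ag_notice + timedelta(days=40),      # remediation live
        "evidence_submitted": cure_deadline(ag_notice),      # documentation to the AG
    }

print(cure_deadline(date(2026, 3, 2)))  # 2026-05-01
```

The 7/40/60-day split is an illustrative schedule, not guidance from the Act; the point is that the deadline is fixed the moment the AG notice arrives, so the workflow should be ready before it does.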

Companion Statutes

The Responsible Artificial Intelligence Governance Act does not operate in isolation. Texas enacted three companion statutes that create sector-specific compliance stacking:

  • SB 1964 (state agencies): AI ethics code, public AI inventory, heightened scrutiny assessments, annual DIR reporting
  • SB 1188 (healthcare providers): patient disclosure when AI assists in care, dark-pattern-free notices, documentation
  • HB 3512 (state employees): annual DIR-certified AI training for employees who use computers for 25% or more of their duties

A private company faces the Responsible Artificial Intelligence Governance Act alone. A government agency faces TRAIGA + SB 1964 + HB 3512. A healthcare provider faces TRAIGA + SB 1188. Understanding your deployer type determines your full compliance surface.

DIR Regulatory Sandbox

The Act includes a provision for the Department of Information Resources (DIR) to operate a regulatory sandbox — a controlled environment where startups and researchers can test AI systems with relaxed compliance requirements. Participants must apply to DIR, agree to quarterly reporting, and operate within defined parameters. This provision reflects the Act's balance between innovation and governance.

How Texas Compares to Other State AI Laws

The Responsible Artificial Intelligence Governance Act represents a distinct regulatory model compared to other state approaches:

  • Texas (intent-based) — targets prohibited intent; NIST safe harbor; AG-only enforcement; 60-day cure
  • Colorado (impact-based) — targets measurable consumer harm; mandatory risk assessments; broader enforcement
  • EU AI Act (risk-classification) — tiered obligations based on risk level; multiple enforcement bodies

The Texas model is being watched nationally as a potential template for other red states seeking to regulate AI without discouraging innovation. The combination of meaningful penalties with pro-deployer provisions (NIST safe harbor, cure period, no private lawsuits, regulatory sandbox) creates a framework that both governs and enables.

Compliance Requirements Summary

Organizations subject to the Responsible Artificial Intelligence Governance Act should implement the following:

  1. AI system inventory — catalog every AI system deployed, including embedded AI in third-party tools
  2. Prohibited practice screening — document that each system has been screened against all seven categories
  3. NIST AI RMF alignment — build scored evidence across Govern, Map, Measure, Manage
  4. Evidence bundle capability — produce compliance documentation on demand for the AG, procurement, boards
  5. 60-day cure readiness — pre-built response workflow with milestones and documentation
  6. Regulatory monitoring — track DIR guidance, AG enforcement actions, and legislative updates
  7. Deployer-specific requirements — activate government or healthcare compliance surfaces as applicable

The full step-by-step compliance checklist maps each requirement to the statute with implementation guidance by deployer type.

Automate Compliance with the Responsible Artificial Intelligence Governance Act

TXAIMS is the compliance platform built specifically for the Responsible Artificial Intelligence Governance Act and its companion statutes. Prohibited practice screening, NIST AI RMF scoring, deployer-type-aware compliance scoring, evidence bundle generation, 60-day cure workflow management, and regulatory monitoring — all automated, all continuous.

Ready to automate your TRAIGA compliance?

TXAIMS screens your AI systems, builds your NIST defense, and generates evidence bundles in minutes.

Start 14-day free trial