What Is the Texas Responsible AI Governance Law?
The Texas Responsible AI Governance Act (TRAIGA) is the state's comprehensive artificial intelligence law. Enacted as House Bill 149 in the 89th Texas Legislature and in effect since January 1, 2026, it is one of the most consequential state-level AI governance frameworks in the United States, and it's enforceable right now.
Why Texas Passed an AI Governance Law
Texas took a pragmatic approach to AI regulation. Rather than following the EU's risk-classification model or Colorado's impact-assessment framework, Texas built a law designed for rapid enforcement with clear boundaries.
The legislative intent was threefold:
- Protect Texans from harmful AI — prohibit specific AI practices that could manipulate, deceive, or discriminate
- Provide a clear compliance pathway — NIST AI RMF as a documented safe harbor, not a vague standard
- Avoid over-regulation — intent-based enforcement rather than mandatory audits or registration systems
This balance is what makes TRAIGA distinctive. It has real teeth (civil penalties of up to $200,000 per violation, enforced by the Texas Attorney General) but also gives organizations a clear roadmap to protection.
The Three Layers of Texas AI Governance
Layer 1: Universal Obligations (All AI Deployers)
Every organization deploying AI systems in Texas must:
- Not engage in prohibited practices — the 7 categories cover manipulation, social scoring, biometric surveillance, discriminatory intent, constitutional rights infringement, vulnerability exploitation, and CSAM (see the screening sketch after this list)
- Be able to demonstrate intent — your AI systems should have documented purposes that clearly fall outside prohibited categories
- Build an affirmative defense — NIST AI RMF alignment isn't mandatory, but it's the only codified defense if the AG investigates
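In practice, "demonstrating intent" comes down to written records. Below is a minimal sketch of what a per-system screening record might look like, assuming a simple Python workflow; the category labels paraphrase the list above, and the `ScreeningRecord` fields and `screen_system` helper are illustrative, not defined by HB 149 or any particular tool.

```python
from dataclasses import dataclass, field
from datetime import date

# The seven prohibited-practice categories named above (paraphrased labels).
PROHIBITED_CATEGORIES = [
    "behavioral_manipulation",
    "social_scoring",
    "biometric_surveillance",
    "discriminatory_intent",
    "constitutional_rights_infringement",
    "vulnerability_exploitation",
    "csam",
]

@dataclass
class ScreeningRecord:
    """One documented finding per prohibited category for a single AI system."""
    system_name: str
    documented_purpose: str
    reviewed_on: date
    findings: dict[str, str] = field(default_factory=dict)  # category -> written rationale

def screen_system(name: str, purpose: str, rationales: dict[str, str]) -> ScreeningRecord:
    """Require an explicit rationale for every category, not just the risky-looking ones."""
    missing = [c for c in PROHIBITED_CATEGORIES if c not in rationales]
    if missing:
        raise ValueError(f"Screening incomplete; no finding for: {missing}")
    return ScreeningRecord(name, purpose, date.today(), rationales)
```

Forcing a written rationale for every category, including the obviously inapplicable ones, is what turns a screening exercise into usable evidence.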
Layer 2: Government-Specific (SB 1964 + HB 3512)
State agencies and government entities face additional mandatory requirements:
- AI ethics code adoption — formal organizational policy governing AI use
- Public AI system inventory — transparent listing of all AI tools in government operations
- Heightened scrutiny assessments — enhanced review for AI systems making decisions affecting rights or benefits
- Annual DIR-certified training — every employee using computers for 25%+ of duties must complete AI training each fiscal year
- Social scoring prohibitions and biometric restrictions — stricter than the standards that apply to the private sector
Layer 3: Healthcare-Specific (SB 1188)
Healthcare providers using AI for patient care must:
- Disclose AI involvement to patients — written notification before or at the time of service
- Use plain language — no technical jargon or legal obfuscation (a sample notice follows this list)
- Avoid dark patterns — disclosures cannot be designed to downplay or obscure the AI's involvement
- Maintain human oversight — a qualified professional must be able to review AI-driven diagnostic or treatment recommendations
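To make the disclosure requirements concrete, here is a minimal sketch of a plain-language patient notice generator. The wording and the `make_patient_disclosure` helper are hypothetical illustrations, not SB 1188-approved language; actual notice text should be reviewed by counsel.

```python
def make_patient_disclosure(provider: str, tool_description: str) -> str:
    """Compose a plain-language AI disclosure for delivery before or at the time of service.

    Illustrative wording only; run real notice language past counsel
    against SB 1188's requirements.
    """
    return (
        f"{provider} uses a computer program ({tool_description}) to help with "
        "parts of your care. A qualified health professional reviews what the "
        "program suggests and makes the final decisions about your diagnosis "
        "and treatment. Ask us if you have questions about how it is used."
    )

# Example: print(make_patient_disclosure("Hill Country Clinic", "an imaging analysis tool"))
```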
The NIST Safe Harbor: Your Most Important Asset
TRAIGA Section 5 creates something unique in American AI regulation: a statutory safe harbor tied to an established federal framework. Here's how it works:
If an organization can demonstrate, through documented evidence, that its AI systems were developed and deployed in alignment with the NIST AI Risk Management Framework, this constitutes an affirmative defense in any enforcement proceeding under this chapter.
This means your NIST alignment evidence isn't just good governance; it's legal protection. The four NIST AI RMF functions map directly to TRAIGA compliance (an evidence-map sketch follows this list):
- GOVERN — organizational policies, roles, and accountability for AI
- MAP — identification of AI systems, their contexts, and stakeholders
- MEASURE — assessment of AI system risks, performance, and trustworthiness
- MANAGE — treatment of identified risks, monitoring, and response plans
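One concrete way to operationalize this mapping is to track, per AI system, which artifacts back each function and flag the gaps. A sketch, assuming a simple Python evidence map; the artifact filenames are hypothetical examples, not NIST or TRAIGA terminology.

```python
# Hypothetical evidence map for one AI system: RMF function -> artifacts on file.
NIST_EVIDENCE_MAP = {
    "GOVERN": ["ai_use_policy.pdf", "accountability_matrix.xlsx"],
    "MAP": ["system_inventory_entry.json", "stakeholder_context_memo.md"],
    "MEASURE": ["bias_eval_report.pdf", "performance_benchmarks.csv"],
    "MANAGE": ["risk_treatment_plan.md", "incident_response_runbook.md"],
}

def coverage_gaps(evidence: dict[str, list[str]]) -> list[str]:
    """Return the RMF functions with no supporting evidence on file."""
    return [fn for fn in ("GOVERN", "MAP", "MEASURE", "MANAGE") if not evidence.get(fn)]

print(coverage_gaps(NIST_EVIDENCE_MAP))  # [] when every function is backed
```

An empty function is a gap in your affirmative defense; surfacing it yourself before the AG does is the whole point of the exercise.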
Enforcement: What Actually Happens
Understanding enforcement architecture is critical for compliance strategy:
- The AG investigates — typically triggered by complaints, media reports, or proactive sector sweeps
- You receive notice — formal notification of alleged violation
- 60-day cure window opens — your chance to remediate without penalty (a deadline sketch follows this list)
- AG reviews remediation — documented cure efforts are evaluated
- Penalty assessment (if the cure fails) — up to $200,000 per violation, weighed against mitigation factors
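The cure window itself is simple date arithmetic, but it is worth computing rather than estimating. A sketch, assuming the 60 days run as calendar days from the date notice is received; confirm the exact counting rule (service date, calendar versus business days) with counsel.

```python
from datetime import date, timedelta

CURE_WINDOW_DAYS = 60  # per the statute's notice-and-cure provision

def cure_deadline(notice_received: date) -> date:
    """Last day to remediate and document the cure, counting calendar days.

    Assumes the window starts on the notice date; confirm the exact
    counting rule with counsel.
    """
    return notice_received + timedelta(days=CURE_WINDOW_DAYS)

# Example: notice received March 3, 2026 -> cure deadline May 2, 2026.
print(cure_deadline(date(2026, 3, 3)))  # 2026-05-02
```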
The organizations that fare best are those with pre-existing compliance documentation. The AG's evaluation of "good faith" turns largely on what you had in place before the investigation began, not on what you scramble to create afterward.
Building Your Compliance Posture Today
The gap between "compliant" and "exposed" isn't technical complexity; it's documentation. Most organizations already operate within TRAIGA's bounds. What they lack is the evidence to prove it. The compliance sequence:
- Classify your deployer type — private, government, or healthcare. Each has different obligation layers.
- Register AI systems — document purpose, data inputs, decision scope, and human oversight for each (a registry sketch follows this list).
- Screen for prohibited practices — systematic review against all 7 categories with documented findings.
- Score NIST alignment — evidence-backed assessment of Govern, Map, Measure, Manage for each system.
- Generate evidence bundles — audit-ready packages that demonstrate your compliance posture to any audience.
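Step 2's system registry can start out very lightweight. A sketch of one registry entry, assuming Python dataclasses; the field names and the example system are illustrative, not prescribed by HB 149.

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    """One registry entry per deployed AI system. Field names are illustrative."""
    name: str
    deployer_type: str      # "private" | "government" | "healthcare"
    purpose: str            # documented intent, kept current
    data_inputs: list[str]  # categories of data the system consumes
    decision_scope: str     # what the system decides or recommends
    human_oversight: str    # who reviews outputs, and when

registry = [
    AISystemRecord(
        name="resume-screener-v2",
        deployer_type="private",
        purpose="Rank applications for recruiter review; no automated rejection.",
        data_inputs=["resume text", "job description"],
        decision_scope="Advisory ranking only; recruiter makes all decisions.",
        human_oversight="Recruiter reviews every ranked list before outreach.",
    ),
]
```

Even a record this simple captures the documented purpose and human-oversight evidence that the universal obligations above turn on.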
TXAIMS was built specifically for TRAIGA compliance. Every screen, scoring model, and evidence bundle is tailored to HB 149's requirements. Start your 14-day free trial — full platform, no credit card, compliance documented in under an hour.
Related Resources
- TRAIGA Is Already In Effect — the law is live, no grace period
- The Complete Guide to TRAIGA (HB 149) — 4,000-word deep dive
- The 7 Prohibited AI Practices — each one with examples
- Penalties and Enforcement — $200K per violation mechanics
- Texas vs Colorado AI Law — how TRAIGA compares to other states
- Employer Compliance Checklist — TRAIGA for HR and workforce AI
Ready to automate your TRAIGA compliance?
TXAIMS screens your AI systems, builds your NIST defense, and generates evidence bundles in minutes.
Start 14-day free trial