Definitive Guide · March 11, 2026 · 12 min read

The Complete Guide to TRAIGA (HB 149): Texas AI Law Section-by-Section

The Texas Responsible AI Governance Act (TRAIGA) — enacted as House Bill 149 — is the most comprehensive state-level AI regulation in the American South. Signed into law on September 1, 2025, and enforceable since January 1, 2026, TRAIGA establishes a new compliance landscape for every organization that deploys AI systems affecting Texans. Law firms and government sites dominate search results for “Texas AI law” and “TRAIGA” — but compliance professionals need actionable, section-by-section guidance that goes beyond legal summaries. This guide is that resource.

Whether you're a compliance officer, CIO, legal counsel, or business leader, this article walks through every major provision of HB 149: what it prohibits, who it applies to, how to build your defense, and what happens when enforcement arrives. We'll also cover the companion statutes — SB 1964 for government agencies, SB 1188 for healthcare, and HB 3512 for state employee training — so you understand the full Texas AI governance regulation stack.

What Is TRAIGA?

TRAIGA stands for the Texas Responsible AI Governance Act. It is the official short name for the comprehensive Texas AI law enacted as House Bill 149. TRAIGA answers the question: “What is the Texas Responsible AI governance law?” — it is the law. The acronym appears throughout the statute and in all official guidance from the Texas Department of Information Resources (DIR) and the Attorney General's office.

TRAIGA takes an intent-based approach to AI regulation. Unlike Colorado's impact-based model (which focuses on whether your AI caused harm), Texas focuses on what your AI system was designed or deployed to do. If your system was built with the intent to engage in a prohibited practice — even if no harm has yet occurred — you are in violation. This design choice means compliance documentation must demonstrate the intended purpose of each AI system and show that prohibited uses were actively screened against.

Why intent matters: the Texas Legislature deliberately chose this framing to avoid the evidentiary challenges of proving causation. Under an impact-based model, the state would need to show that your AI caused specific harm to specific individuals. Under TRAIGA's intent-based model, the state need only show that your system was designed or deployed with prohibited intent. That shifts the burden of proof and makes documentation of legitimate design intent essential to your defense.

The law has four structural pillars: (1) seven prohibited AI practices with bright-line rules, (2) an affirmative defense for NIST AI RMF compliance, (3) exclusive AG enforcement with $200,000-per-violation penalties and a 60-day cure period, and (4) deployer-type awareness that stacks with SB 1964, SB 1188, and HB 3512 for government and healthcare entities.

What Is House Bill 149?

House Bill 149 is the legislative designation for TRAIGA. When someone asks “What is House Bill 149?” — they are asking about the Texas AI law. HB 149 was introduced in the 89th Texas Legislature, passed both chambers with bipartisan support in May 2025, and was signed by Governor Greg Abbott on September 1, 2025. It codifies the Texas Responsible AI Governance Act in the Texas Government Code.

A common confusion: many people search for “Has Texas HB 1481 passed?” — HB 1481 is not the Texas AI bill. The correct bill number is HB 149. If you or your team have been referencing HB 1481, you are looking for the wrong bill. Our dedicated article on HB 1481 vs. HB 149 clears this up in detail.

HB 149's core architecture includes: Section 2 (prohibited practices), Section 5 (NIST AI RMF affirmative defense), Sections 6–8 (enforcement, penalties, cure period), and cross-references to the companion statutes that create additional obligations for government agencies and healthcare providers.

Who Does TRAIGA Apply To?

TRAIGA applies to any deployer of AI systems that affect individuals in Texas. A deployer is a person or organization that deploys, develops, or makes available an AI system for use in Texas. The law is geography-based, not registration-based: you do not need to be headquartered in Texas, incorporated in Texas, or have a physical presence in Texas. If your AI system affects Texans — through hiring decisions, lending, healthcare, customer service, recommendations, or any consequential decision — TRAIGA applies.

Deployer types fall into three broad categories, each with different statutory obligations:

  • Private sector deployers: Companies, nonprofits, and organizations that use AI in commercial or operational contexts. They must comply with TRAIGA's prohibited practice screening and NIST alignment. No additional Texas statutes apply unless they are also healthcare providers.
  • Government agencies: Texas state agencies and local government entities. They face triple compliance — TRAIGA plus SB 1964 (AI ethics code, public inventory, heightened scrutiny) plus HB 3512 (annual DIR-certified AI training for employees who use computers for 25% or more of their job duties).
  • Healthcare providers: Hospitals, clinics, telehealth platforms, and practices using AI in patient care. They face TRAIGA plus SB 1188 (patient-facing AI disclosure before or at the time of service). Government hospitals face all four statutes.

If your organization uses chatbots, hiring tools, lending models, recommendation engines, diagnostic AI, or any AI-powered decision-making that touches Texas residents — you are a deployer. Our deployer-type guide helps you classify your organization and identify which statutes stack.
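
If you want to encode that classification, here is a minimal Python sketch of the statute stack described above. The mapping mirrors this guide's deployer types; the function name, dictionary keys, and structure are illustrative rather than anything defined in the statutes.

```python
# Minimal sketch: which Texas statutes stack on a given deployer type.
# The statute list mirrors this guide; keys and names are illustrative.

STATUTE_STACK = {
    "private": ["HB 149 (TRAIGA)"],
    "government": ["HB 149 (TRAIGA)", "SB 1964", "HB 3512"],
    "healthcare": ["HB 149 (TRAIGA)", "SB 1188"],
    "government_healthcare": ["HB 149 (TRAIGA)", "SB 1964", "SB 1188", "HB 3512"],
}

def applicable_statutes(is_government: bool, is_healthcare: bool) -> list[str]:
    """Return the statutes this guide associates with a deployer profile."""
    if is_government and is_healthcare:
        key = "government_healthcare"  # e.g. a government hospital
    elif is_government:
        key = "government"
    elif is_healthcare:
        key = "healthcare"
    else:
        key = "private"
    return STATUTE_STACK[key]

print(applicable_statutes(is_government=True, is_healthcare=True))
# ['HB 149 (TRAIGA)', 'SB 1964', 'SB 1188', 'HB 3512']
```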

The 7 Prohibited AI Practices

TRAIGA Section 2 defines seven categories of AI practices that are prohibited by intent. If your AI system was designed or deployed with the intent to do any of the following, you are in violation — regardless of whether harm has occurred. Each violation carries up to $200,000 in civil penalties. Our complete list of prohibited practices provides detailed examples for each category; here is the section-by-section summary.

1. Subliminal or Manipulative Techniques

AI systems shall not deploy subliminal techniques beyond a person's consciousness or deliberately manipulative techniques to materially distort behavior. Example: An e-commerce AI that uses micro-targeted emotional triggers calibrated below conscious awareness to drive impulse purchases. A chatbot designed to create artificial urgency through deceptive countdown timers. Compliance: Document the intent behind every user-facing AI interaction. If it influences decisions, the mechanism must be transparent.

2. Exploiting Vulnerabilities

AI systems shall not exploit vulnerabilities of individuals due to age, disability, or specific social or economic situation. Example: A lending AI that routes elderly applicants to higher-interest products. A subscription service that makes cancellation harder for users with identified cognitive load patterns. Compliance: Audit your AI's treatment of protected populations. Implement equitable outcome testing across demographic groups.

3. Social Scoring

AI systems shall not evaluate or classify individuals based on social behavior or personal characteristics in a way that leads to detrimental treatment unrelated to the original context. Example: An HR system that factors social media activity into job fitness. A tenant screening tool that uses neighborhood crime data as a proxy for individual reliability. Compliance: Ensure AI decisions are scoped to their intended context. Cross-context data use is the trigger.

4. Real-Time Biometric Identification in Public Spaces

AI systems shall not perform real-time biometric identification of individuals in publicly accessible spaces, with narrow exceptions for law enforcement under judicial authorization. Example: A retail chain using facial recognition at store entrances. A property management company using biometric scanning in common areas. Compliance: If you use any biometric AI in public-facing spaces, get legal review immediately. The exceptions are extremely narrow.

5. Discrimination in Consequential Decisions

AI systems shall not discriminate based on protected characteristics in employment, housing, credit, education, or public accommodations. Example: A resume screening AI that down-ranks candidates from certain zip codes. A mortgage model with unexplained disparate impact. Compliance: Run disparate impact analysis. Document that your system was not designed with discriminatory intent and actively test for discriminatory outcomes.

6. Deceptive Content Generation Without Disclosure

AI-generated content that could be mistaken for human-created content must include disclosure. Example: AI-generated customer reviews without disclosure. Deepfake-style video testimonials. AI-written articles published as human journalism. Compliance: Label AI-generated content. Implement disclosure mechanisms in any AI content pipeline.

7. Surveillance Without Notice or Consent

AI-powered surveillance systems require notice to affected individuals and, in many contexts, consent. Example: Employee monitoring AI that tracks keystrokes without disclosure. Customer behavior tracking in physical spaces without posted notice. Compliance: Audit every AI system that collects behavioral data. Implement clear notice mechanisms.

The pattern across all seven: intent + documentation. The AG does not need to prove harm occurred — only that the system was designed or deployed with prohibited intent. Your defense is documented intent: showing that you built the system for legitimate purposes and actively screened for prohibited uses.

The NIST AI RMF Safe Harbor Defense

TRAIGA Section 5 establishes the single most valuable compliance provision in the law: demonstrable compliance with the NIST AI Risk Management Framework (AI RMF 1.0) constitutes an affirmative defense in enforcement proceedings. This is not aspirational guidance — it is statutory language. If the Texas AG brings a TRAIGA action against you, and you can demonstrate documented alignment with NIST AI RMF across all four functions, you have a codified legal shield.

The NIST AI RMF has four core functions: GOVERN (policies, roles, accountability), MAP (context and risk identification), MEASURE (testing and evaluation), and MANAGE (risk response and mitigation). Our NIST AI RMF safe harbor guide walks through each function with implementation steps.

What each function requires in practice:

  • GOVERN: a written AI governance policy, a designated compliance owner, and board-level visibility into AI risk.
  • MAP: an AI system inventory with purpose documentation, stakeholder impact mapping, and data provenance assessment.
  • MEASURE: pre-deployment testing, ongoing monitoring, and annual red-team exercises for high-risk systems.
  • MANAGE: remediation plans with tracked milestones, human-in-the-loop gates for consequential decisions, and incident response procedures.

Common mistake: “We follow NIST guidelines” is not an affirmative defense. You need documented, auditable evidence of alignment — scored metrics, evidence artifacts, continuous updates, and audience-ready packaging for the AG, procurement, and your board. A slide deck or dusty policy document does not count. A NIST alignment score above 70 demonstrates meaningful alignment; a score above 85 demonstrates strong alignment.
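
To make the scoring concrete, here is a minimal sketch that averages per-function scores on a 0-100 scale and applies the 70/85 labels used above. The four function names come from the NIST AI RMF; the unweighted average and the labels are assumptions that mirror this guide's scoring language, not anything NIST defines.

```python
# Illustrative only: the scoring method, weights, and thresholds mirror the
# 70/85 guidance in this guide, not anything defined by NIST itself.

FUNCTIONS = ("GOVERN", "MAP", "MEASURE", "MANAGE")

def overall_alignment(scores: dict[str, float]) -> tuple[float, str]:
    """Average per-function scores (0-100) and label the result."""
    missing = [f for f in FUNCTIONS if f not in scores]
    if missing:
        raise ValueError(f"No evidence scored for: {missing}")
    overall = sum(scores[f] for f in FUNCTIONS) / len(FUNCTIONS)
    if overall >= 85:
        label = "strong alignment"
    elif overall >= 70:
        label = "meaningful alignment"
    else:
        label = "insufficient for a credible affirmative defense"
    return overall, label

print(overall_alignment({"GOVERN": 88, "MAP": 74, "MEASURE": 69, "MANAGE": 81}))
# (78.0, 'meaningful alignment')
```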

The strategic value: organizations with demonstrable NIST alignment are less likely to receive AG notices in the first place. The AG prioritizes targets where violations are clear and compliance efforts are absent. Building your NIST defense now is both legal protection and deterrence. Insurance carriers and enterprise procurement teams are increasingly requiring NIST alignment documentation — so the defense you build for the AG also unlocks business opportunities.

Penalties and Enforcement

TRAIGA imposes civil penalties up to $200,000 per violation, enforced exclusively by the Texas Attorney General. There is no private right of action — individual consumers cannot sue you for TRAIGA violations. But the penalty structure is severe: each prohibited practice violation, each undisclosed AI use, each failure to comply with a cure notice is a separate violation. If a single AI system commits the same prohibited practice across 100 consumer interactions, that is potentially 100 separate violations. Our penalties and enforcement guide covers the full breakdown.
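
A quick back-of-the-envelope sketch of that worst-case exposure, using the hypothetical 100-interaction scenario above:

```python
# Hypothetical worst-case exposure math: one prohibited practice repeated
# across many consumer interactions, each counted as a separate violation.
MAX_PENALTY_PER_VIOLATION = 200_000  # statutory cap cited in this guide

interactions = 100  # hypothetical number of affected Texas consumers
max_exposure = interactions * MAX_PENALTY_PER_VIOLATION
print(f"${max_exposure:,}")  # $20,000,000
```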

The AG can also seek injunctive relief — court orders to stop the violating AI system from operating in Texas. For some businesses, that is equivalent to a shutdown. The state can recover attorney's fees and investigation costs on top of penalties.

What triggers an investigation? Consumer complaints are the most common trigger — a pattern of complaints about the same company or system gets attention. Public reporting of AI harms (media coverage of discriminatory hiring tools, biased lending models, or healthcare AI failures) can prompt AG scrutiny. Whistleblower tips from employees who observe prohibited practices are another source. Perhaps most importantly: visible absence of compliance — companies with no AI inventory, no governance documentation, and no NIST alignment are higher-priority targets than those with demonstrable compliance efforts. Cross-state coordination means a violation flagged in another state can trigger Texas scrutiny.

Mitigation factors matter: good-faith compliance efforts, speed of remediation, cooperation with the investigation, scope of harm, and history of violations. First-time violations with documented NIST alignment are treated differently than repeat violations with no compliance infrastructure. The real cost of non-compliance extends beyond fines — procurement disqualification, insurance premium increases, reputational damage, and operational disruption from injunctions can exceed any penalty amount.

The 60-Day Cure Period

TRAIGA provides a 60-day cure period after the Attorney General sends formal notice of an alleged violation. You have 60 calendar days to cure the violation and demonstrate good-faith remediation. If the AG accepts the cure, the matter closes without penalties. If the cure is rejected or not attempted, full enforcement proceeds.

Curing is not just fixing the bug. The AG expects: immediate containment (stop the violating behavior), root cause analysis, remediation plan with specific steps, recurrence prevention (systemic changes), and evidence of completion. A one-paragraph email saying “we fixed it” will not satisfy the cure requirement. Our 60-day cure period strategy guide provides a day-by-day playbook.

A successful cure can avoid penalties of up to $200,000 per violation, but only if you are ready to use the window. Organizations without pre-existing compliance infrastructure typically burn 30+ days just understanding what AI systems they have. By then, the window is half gone. Pre-cure readiness means having an AI inventory, prohibited practice screenings, NIST alignment scores, a cure response playbook, and evidence bundles that can be updated rather than built from scratch.
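
As a simple illustration, here is a sketch that tracks the 60-calendar-day deadline from a hypothetical notice date and keeps the readiness items above as a checklist. The dates, names, and structure are illustrative.

```python
# Minimal sketch: track the 60-calendar-day cure window and the readiness
# items this guide recommends having before a notice ever arrives.
from datetime import date, timedelta

CURE_WINDOW_DAYS = 60  # calendar days from the AG's formal notice

def cure_deadline(notice_date: date) -> date:
    return notice_date + timedelta(days=CURE_WINDOW_DAYS)

READINESS_ITEMS = [
    "AI system inventory",
    "Prohibited practice screenings",
    "NIST alignment scores",
    "Cure response playbook",
    "Evidence bundles",
]

notice = date(2026, 3, 2)  # hypothetical notice date
print(cure_deadline(notice))  # 2026-05-01
```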

Healthcare Requirements (SB 1188)

If you are a healthcare provider using AI in patient care, SB 1188 creates mandatory disclosure obligations on top of TRAIGA. You must disclose AI use to the patient before or at the time of the AI-assisted service. No exceptions for “it's just a tool” or “the doctor made the final call.”

The disclosure obligation applies when AI is used in: diagnostic support, treatment recommendations, triage and scheduling, patient communication (chatbots, virtual assistants), and clinical documentation. The disclosure must be clear, conspicuous, and free of dark patterns — no pre-checked consent boxes, no disclosure buried in paragraph 37 of a 40-page form, no “by continuing, you agree” passive consent.
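
One way to operationalize the requirement is a disclosure gate in the patient-facing workflow: the AI-assisted step does not proceed until an affirmative acknowledgment is recorded before or at the time of service. The sketch below is illustrative only, and the field and function names are not statutory terms.

```python
# Minimal sketch: block an AI-assisted service step until an affirmative
# (not pre-checked, not buried) disclosure acknowledgment has been recorded.
# Field names are illustrative, not terms from SB 1188.
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class DisclosureRecord:
    patient_id: str
    acknowledged: bool                     # must be an affirmative action, never pre-checked
    acknowledged_at: Optional[datetime]    # when the patient acknowledged

def may_proceed_with_ai_step(record: DisclosureRecord, service_start: datetime) -> bool:
    """Disclosure must be acknowledged before or at the time of the AI-assisted service."""
    return (
        record.acknowledged
        and record.acknowledged_at is not None
        and record.acknowledged_at <= service_start
    )
```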

The healthcare provider entity is liable — the hospital, clinic, or practice. Not the individual physician (unless they are the entity). Not the AI vendor. Compliance is an institutional responsibility. Our SB 1188 healthcare disclosure guide covers implementation in detail.

Healthcare providers face both TRAIGA prohibited practice screening and SB 1188 disclosure requirements. If your diagnostic AI exploits patient vulnerabilities or your triage system discriminates, you face violations under both statutes.

Government Requirements (SB 1964 + HB 3512)

Texas state agencies and local government entities face additional obligations beyond TRAIGA. SB 1964 mandates: (1) adoption of a formal AI ethics code aligned with DIR guidance, (2) a public AI system inventory, (3) heightened scrutiny assessments for AI in critical decisions (law enforcement, benefits, licensing, employment, child welfare, parole), and (4) annual reporting to DIR.

HB 3512 requires annual DIR-certified AI training for every state employee who uses computers for 25% or more of their job duties. This captures most state employees — administrative staff, analysts, case workers, IT professionals, managers. Training must be completed each fiscal year (September 1 – August 31), with documented completion records.
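
A minimal sketch of how an agency might track eligibility and fiscal-year attribution, assuming the 25% threshold and the September-to-August fiscal year described above. The function names and logic are illustrative, not DIR requirements.

```python
# Minimal sketch: decide whether an employee falls under the HB 3512 training
# requirement (25%+ computer use) and which Texas fiscal year (Sep 1 - Aug 31)
# a completion date counts toward. Names are illustrative.
from datetime import date

def requires_training(computer_use_fraction: float) -> bool:
    return computer_use_fraction >= 0.25

def fiscal_year(completed: date) -> str:
    start_year = completed.year if completed.month >= 9 else completed.year - 1
    return f"FY {start_year}-{start_year + 1}"

print(requires_training(0.40))          # True
print(fiscal_year(date(2026, 3, 11)))   # FY 2025-2026
```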

An agency that completes TRAIGA screening but ignores SB 1964's ethics code is non-compliant. An agency with a perfect ethics code but no NIST alignment has no affirmative defense. All three statutes — TRAIGA, SB 1964, and HB 3512 — must be addressed simultaneously. Our guides on SB 1964 and HB 3512 provide implementation details.

How to Comply Step-by-Step

Compliance is not a one-time project. TRAIGA requires continuous adherence. Here is the step-by-step sequence, aligned with our TRAIGA compliance checklist:

  1. Phase 1 — Inventory and classification: Catalog every AI system. Include third-party tools, embedded models, and API integrations — not just internally built systems. Many organizations miss AI embedded in enterprise software (HR screening, financial forecasting, communication platforms). Classify each system's risk level using TRAIGA's intent-based tiers. Identify your deployer type (private, government, healthcare). Document data categories for each system — especially biometric, health, and financial data.
  2. Phase 2 — Prohibited practice screening: Screen each system against all seven prohibited categories. Document each screening result — even clean results need to be recorded. The documentation itself is part of your defense. Run this screening quarterly; new features and model updates can alter risk classification.
  3. Phase 3 — NIST AI RMF alignment: Build alignment across Govern, Map, Measure, and Manage. Score each function. Generate evidence artifacts (test results, meeting minutes, policy versions). Aim for 70+ overall; 85+ for high-risk systems. Package everything into audit-ready evidence bundles for the AG, procurement, and your board.
  4. Phase 4 — Deployer-specific requirements: Government agencies: adopt ethics code aligned with DIR guidance, publish public inventory, conduct heightened scrutiny for critical-decision AI, complete HB 3512 training for eligible employees. Healthcare: implement SB 1188 disclosures before or at service, ensure dark-pattern-free formatting. Private sector: establish cure response playbook with assigned roles and 60-day timeline.
  5. Phase 5 — Ongoing operations: Re-screen systems quarterly. Update NIST alignment annually. Generate fresh evidence bundles after every significant system change or at least quarterly. Monitor for legislative updates — DIR rulemaking, AG guidance, and new bills can change requirements. Track training certifications for government deployers.

The organizations that stay compliant are not the ones who checked the box once. They are the ones who built the system to keep the box checked. A static checklist in a spreadsheet gets outdated the moment your AI systems change. Automation — system registration, prohibited practice screening, NIST scoring, evidence bundle generation — keeps compliance continuous.
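
As an illustration of what such an automated register might track for Phases 1 and 2, here is a minimal sketch. The structure, category keys, and field names are illustrative assumptions, not terms from the statute or from TXAIMS.

```python
# Minimal sketch of an automated compliance register covering Phases 1-2:
# an AI system inventory plus quarterly screening results against the seven
# prohibited categories. Structure and field names are illustrative.
from dataclasses import dataclass, field
from datetime import date

PROHIBITED_CATEGORIES = [
    "subliminal_or_manipulative",
    "exploiting_vulnerabilities",
    "social_scoring",
    "realtime_public_biometric_id",
    "discrimination_consequential",
    "undisclosed_ai_content",
    "surveillance_without_notice",
]

@dataclass
class AISystem:
    name: str
    intended_purpose: str           # documented intent is central under TRAIGA
    data_categories: list[str]      # e.g. biometric, health, financial
    screenings: dict[str, list[str]] = field(default_factory=dict)  # date -> flagged categories

    def record_screening(self, screened_on: date, flagged: list[str]) -> None:
        """Record a screening run; clean results are recorded too."""
        unknown = set(flagged) - set(PROHIBITED_CATEGORIES)
        if unknown:
            raise ValueError(f"Unknown categories: {unknown}")
        self.screenings[screened_on.isoformat()] = flagged

chatbot = AISystem(
    name="support-chatbot",
    intended_purpose="Answer billing questions for Texas customers",
    data_categories=["contact", "billing"],
)
chatbot.record_screening(date(2026, 3, 11), flagged=[])  # clean result, still documented
```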

Timeline and Key Dates

Understanding the legislative and enforcement timeline helps you prioritize. Key dates for the Texas AI governance regulation:

  • May 2025: HB 149 passes both chambers of the 89th Texas Legislature with bipartisan support.
  • September 1, 2025: Governor Abbott signs HB 149, SB 1964, SB 1188, and HB 3512 into law. Effective date set.
  • January 1, 2026: All four Texas AI laws become enforceable. AG can bring enforcement actions.
  • Q2 2026 onward: AG ramp-up period. First investigations expected. Organizations with documented compliance posture are lower-priority targets.

For the question “What are the new Texas laws for July 2025?” — the four AI laws were passed during the 2025 legislative session and signed in September 2025. They took effect January 1, 2026. Our new Texas laws July 2025 guide covers the full package.

Frequently Asked Questions

What is the Texas Responsible AI governance law?

The Texas Responsible AI governance law is TRAIGA (Texas Responsible AI Governance Act), enacted as House Bill 149. It defines 7 prohibited AI practices, establishes NIST AI RMF compliance as an affirmative defense, and is enforced by the Texas Attorney General with penalties up to $200,000 per violation. It became enforceable January 1, 2026.

What is House Bill 149?

House Bill 149 (HB 149) is the Texas Responsible AI Governance Act (TRAIGA). It is the state's comprehensive AI regulation law, applying to all organizations deploying AI systems that affect individuals in Texas. It was signed September 1, 2025, and became enforceable January 1, 2026.

Has Texas HB 1481 passed?

If you are searching for HB 1481, you are likely looking for HB 149 — the Texas Responsible AI Governance Act. HB 1481 is a commonly confused bill number. HB 149 passed, was signed, and is enforceable. See our HB 1481 vs. HB 149 clarification.

What are the new Texas laws for July 2025?

Texas passed four AI laws in 2025: HB 149 (TRAIGA), SB 1964 (government AI ethics), SB 1188 (healthcare AI disclosure), and HB 3512 (state employee AI training). All were signed September 1, 2025, and became enforceable January 1, 2026. See our new Texas laws guide for the full breakdown.

What is TRAIGA compliance?

TRAIGA compliance means: (1) screening all AI systems against the 7 prohibited practices, (2) building documented NIST AI RMF alignment as an affirmative defense, (3) maintaining evidence bundles and cure readiness, and (4) meeting deployer-specific requirements (SB 1964 for government, SB 1188 for healthcare, HB 3512 for state employees).

When did the Texas AI law take effect?

TRAIGA (HB 149) became fully enforceable on January 1, 2026. The Texas Attorney General has exclusive enforcement authority. Organizations should have compliance documentation in place now.


This guide is the most comprehensive TRAIGA resource available — but compliance is operational. The organizations that outrank law firm articles in search results are the ones that also out-execute them in compliance. TXAIMS automates prohibited practice screening, NIST alignment scoring, evidence bundle generation, and deployer-specific workflows. Start your 14-day free trial and document your Texas AI compliance before the next enforcement cycle. For more answers, see our FAQ.

Ready to automate your TRAIGA compliance?

TXAIMS screens your AI systems, builds your NIST defense, and generates evidence bundles in minutes.

Start 14-day free trial