TRAIGA Essentials · February 6, 2026 · 4 min read

TRAIGA Compliance Checklist 2026: Every Step Your Organization Needs

TRAIGA (the Texas Responsible AI Governance Act, HB 149) is enforceable now. Whether you're a private deployer, a state agency, or a healthcare provider, the law creates specific obligations you need to meet — and an affirmative defense you want to build. This is the complete checklist, organized by deployer type, with every requirement mapped to the statute.

Phase 1: Inventory and Classification

Before you can comply, you need to know what you're complying with. This phase is non-negotiable regardless of deployer type.

  • Catalog every AI system. Any system that uses machine learning, natural language processing, computer vision, or generative AI to make or assist decisions in Texas falls under TRAIGA. Include third-party tools, embedded models, and API integrations — not just internally built systems.
  • Classify each system's risk level. TRAIGA uses an intent-based model with six tiers: Prohibited Use, Heightened Scrutiny, Healthcare AI, General, Sandbox Participant, and Exempt. Classification determines your obligations downstream.
  • Identify your deployer type. Private sector deployers have different requirements than state agencies (SB 1964) or healthcare providers (SB 1188). Your type determines which statutes stack on top of TRAIGA.
  • Document data categories. For each system, record what personal data it processes, how it's sourced, and whether it touches biometric, health, or financial data.
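The inventory above can be kept as structured records rather than rows in a spreadsheet. A minimal sketch in Python, assuming one record per system; the `RiskTier` values mirror the six tiers listed above, but the field names and example data are illustrative, not drawn from the statute:

```python
from dataclasses import dataclass, field
from enum import Enum

class RiskTier(Enum):
    """The six TRAIGA intent-based tiers."""
    PROHIBITED_USE = "prohibited_use"
    HEIGHTENED_SCRUTINY = "heightened_scrutiny"
    HEALTHCARE_AI = "healthcare_ai"
    GENERAL = "general"
    SANDBOX_PARTICIPANT = "sandbox_participant"
    EXEMPT = "exempt"

@dataclass
class AISystemRecord:
    name: str
    vendor: str                 # third-party tool, embedded model, or "internal"
    risk_tier: RiskTier
    deployer_type: str          # "private", "state_agency", or "healthcare"
    data_categories: list[str] = field(default_factory=list)

inventory = [
    AISystemRecord("resume-screener", "internal", RiskTier.HEIGHTENED_SCRUTINY,
                   "private", ["employment", "personal"]),
]

# Flag systems touching biometric, health, or financial data for extra review
SENSITIVE = {"biometric", "health", "financial"}
sensitive = [s for s in inventory if SENSITIVE & set(s.data_categories)]
```

Keeping the tier as an enum rather than free text prevents the classification drift that makes downstream obligation mapping unreliable.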

Phase 2: Prohibited Practice Screening

TRAIGA Section 2 defines specific AI uses that are prohibited based on intent. This is the highest-stakes compliance check because violations carry up to $200,000 in penalties per incident.

  • Screen for subliminal manipulation. Does the system deploy techniques beyond a person's consciousness to materially distort behavior?
  • Screen for exploitation of vulnerability. Does it target age, disability, or socioeconomic vulnerability to distort behavior in ways that cause harm?
  • Screen for social scoring. Does it evaluate individuals based on social behavior or predicted personality traits for unrelated purposes?
  • Screen for biometric categorization. Does it infer race, political opinion, sexual orientation, or religious belief from biometric data?
  • Screen for real-time biometric identification. Does it use biometric ID in publicly accessible spaces (with narrow law enforcement exceptions)?
  • Screen for predictive policing. Does it profile individuals based solely on personal characteristics to predict criminal behavior?
  • Screen for emotion inference. Does it detect emotions in workplaces, schools, or law enforcement contexts without medical justification?
  • Document each screening result. Even clean results need to be recorded — the documentation itself is part of your defense.
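One way to make the "document each result" step concrete is to emit a record for every screen, pass or fail. A sketch, assuming the seven screens above; the identifiers and record shape are illustrative:

```python
PROHIBITED_PRACTICE_SCREENS = [
    "subliminal_manipulation",
    "exploitation_of_vulnerability",
    "social_scoring",
    "biometric_categorization",
    "realtime_biometric_identification",
    "predictive_policing",
    "emotion_inference",
]

def run_screening(system_name: str, findings: dict[str, bool]) -> list[dict]:
    """Record a result for every screen, even clean ones;
    the record itself is part of the defense."""
    return [
        {
            "system": system_name,
            "screen": screen,
            "violation_found": findings.get(screen, False),
            "reviewed": True,
        }
        for screen in PROHIBITED_PRACTICE_SCREENS
    ]

results = run_screening("support-chatbot", findings={})
```

A clean run still yields seven recorded entries, which is the point: an absent record looks the same as a skipped screen.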

Phase 3: NIST AI RMF Alignment (Affirmative Defense)

TRAIGA Section 546.103 explicitly recognizes compliance with the NIST AI Risk Management Framework as an affirmative defense. This is the single most valuable protection the law offers. Build it.

  • GOVERN. Establish AI governance policies, assign oversight roles, define risk tolerance thresholds, and create accountability structures.
  • MAP. Identify and document AI risks for each system — bias, accuracy, transparency, privacy, and security risks specific to your use case.
  • MEASURE. Implement metrics and testing protocols. Quantify identified risks. Run red-team exercises. Document results.
  • MANAGE. Deploy controls. Assign mitigation owners. Track remediation progress. Create incident response procedures.
  • Score each function. Rate alignment on a 0-100 scale for each of the four functions. An overall score above 70 demonstrates meaningful alignment; above 85 demonstrates strong alignment.
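The scoring step above can be sketched in a few lines. This assumes each function receives a 0-100 score and the overall score is their simple average; the thresholds come from the text, but the averaging rule is an assumption, not something the framework prescribes:

```python
def overall_alignment(scores: dict[str, float]) -> tuple[float, str]:
    """Average the four NIST AI RMF function scores and label the result."""
    functions = ("GOVERN", "MAP", "MEASURE", "MANAGE")
    missing = [f for f in functions if f not in scores]
    if missing:
        raise ValueError(f"missing function scores: {missing}")
    overall = sum(scores[f] for f in functions) / len(functions)
    if overall > 85:
        label = "strong alignment"
    elif overall > 70:
        label = "meaningful alignment"
    else:
        label = "insufficient for a confident affirmative defense"
    return overall, label

score, label = overall_alignment(
    {"GOVERN": 82, "MAP": 75, "MEASURE": 68, "MANAGE": 79}
)
# (82 + 75 + 68 + 79) / 4 = 76.0, which lands in the "meaningful" band
```

Requiring all four scores before computing the average matters: a missing function should fail loudly rather than silently inflate the result.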

Phase 4: Deployer-Specific Requirements

Government Agencies (SB 1964 + HB 3512)

  • Adopt a formal AI ethics code aligned with DIR guidance
  • Conduct heightened scrutiny assessments for AI in critical decisions
  • Maintain a public AI system inventory
  • Ensure every employee who spends 25% or more of their work duties using AI completes DIR-certified annual AI training (HB 3512)

Healthcare Providers (SB 1188)

  • Provide patient-facing disclosures before or at the time of AI-assisted service
  • Ensure disclosure forms are clear, conspicuous, and free of dark patterns
  • Document all disclosures with timestamps and patient acknowledgments

Private Sector

  • Complete prohibited practice screening and NIST alignment (Phases 2-3)
  • Establish a cure response playbook for the 60-day AG notice window
  • Maintain evidence bundles that can be produced on demand

Phase 5: Ongoing Compliance Operations

Compliance isn't a one-time project. TRAIGA requires continuous adherence.

  • Monitor for legislative updates. Texas AI law is evolving. DIR rulemaking, AG guidance, and new bills can change requirements.
  • Re-screen systems quarterly. New features, model updates, and use case changes can alter your risk classification.
  • Update NIST alignment annually. Your affirmative defense erodes if your alignment scores go stale.
  • Generate fresh evidence bundles after every significant system change or quarterly at minimum.
  • Track training certifications. Government deployers need annual renewal documentation per HB 3512.
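The quarterly cadence above reduces to a due-date check. A sketch, assuming you track the last screening date per system; the 91-day interval is an approximation of "quarterly" and the names are illustrative:

```python
from datetime import date, timedelta

RESCREEN_INTERVAL = timedelta(days=91)  # roughly one quarter

def systems_due(last_screened: dict[str, date], today: date) -> list[str]:
    """Return systems whose last screening is older than one quarter."""
    return sorted(
        name for name, screened in last_screened.items()
        if today - screened > RESCREEN_INTERVAL
    )

due = systems_due(
    {"resume-screener": date(2026, 1, 5), "support-chatbot": date(2025, 9, 1)},
    today=date(2026, 2, 6),
)
```

The same pattern extends to NIST re-scoring (annual interval) and evidence-bundle refreshes, so one scheduler can drive all three cadences.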

The Checklist Is Only Useful If It's Automated

A static checklist in a spreadsheet gets outdated the moment your AI systems change. TXAIMS automates every phase: system registration, prohibited practice screening, NIST scoring, deployer-specific requirements, evidence bundle generation, and regulatory monitoring. The checklist runs continuously — not once a quarter.

The organizations that stay compliant aren't the ones who checked the box once. They're the ones who built the system to keep the box checked.

Ready to automate your TRAIGA compliance?

TXAIMS screens your AI systems, builds your NIST defense, and generates evidence bundles in minutes.

Start 14-day free trial