Texas AI Compliance Framework: Risk Assessment Step-by-Step
Search for “Texas AI compliance” or “artificial intelligence act risk assessment” and you'll find JD Supra, Blank Rome, Latham, IAPP — law firms and policy orgs dominating every top result. Their articles describe what the law says. They parse statutory language. They explain the penalties. What they don't tell you is what to do on Monday morning.
That gap matters. When the Texas Attorney General sends a notice, you have 60 days to cure. When a procurement team asks for your AI compliance documentation, they want evidence bundles — not a link to a law firm summary. When your board asks about TRAIGA exposure, they want risk scores and remediation plans. Legal analysis doesn't deliver any of that.
Compliance teams don't need another legal analysis. They need an actionable framework: step-by-step instructions, templates, decision trees, and evidence requirements. This post fills that gap. It's the Texas AI compliance framework that tells you exactly how to comply with TRAIGA (HB 149), build your NIST safe harbor, and prepare for enforcement — with no legal theory, only execution.
Step 1 — AI System Inventory
Before you can assess risk or screen for prohibited practices, you need to know what AI systems you deploy. TRAIGA applies to any system that uses machine learning, natural language processing, computer vision, or generative AI to make or assist decisions affecting individuals in Texas. That includes systems you built, systems you bought, and systems you access via API.
The most common inventory mistake is undercounting. Marketing teams use ChatGPT for copy. HR uses resume-screening AI. Finance uses fraud detection models. Customer service uses chatbots. Each of these is a deployed AI system under TRAIGA. So are embedded recommendation engines, document classification tools, and any SaaS that uses ML under the hood. Cast a wide net. It's easier to exclude systems after review than to discover you missed one during an AG investigation.
What to Catalogue
- System name and purpose — What does it do? What decisions does it influence?
- Deployment context — Internal tool, customer-facing, vendor-provided, embedded in product?
- Data inputs — What data does it process? Personal data, biometric, health, financial?
- Decision outputs — Hiring, lending, healthcare, marketing, fraud detection, content moderation?
- Vendor and ownership — Built in-house, third-party SaaS, open-source model with custom wrapper?
- Geographic scope — Does it touch Texas residents? (If yes, TRAIGA applies.)
Inventory Template
Use a structured template for each system. At minimum: System ID, Name, Purpose, Owner, Data Categories, Decision Type, Risk Tier (to be assigned in Step 4), Last Screened Date. Add fields for Vendor/Provider, API vs. embedded, and Texas-resident touch (yes/no). A spreadsheet works for small inventories; beyond 10–15 systems, consider a dedicated TRAIGA compliance platform that keeps the inventory current as systems change.
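If you keep the inventory in code rather than a spreadsheet, a per-system record might look like the minimal Python sketch below. The `AISystem` dataclass and its field names are illustrative, lifted from the template above, not from the statute:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AISystem:
    """One row in the AI system inventory (fields from the template above)."""
    system_id: str
    name: str
    purpose: str                       # what it does, what decisions it influences
    owner: str                         # person accountable for its compliance
    data_categories: list[str] = field(default_factory=list)  # personal, biometric, ...
    decision_type: str = ""            # hiring, lending, fraud detection, ...
    vendor: str = "in-house"           # third-party SaaS, open-source wrapper, ...
    integration: str = "embedded"      # "api" or "embedded"
    touches_texas_residents: bool = True   # if True, TRAIGA applies
    risk_tier: str | None = None       # assigned in Step 4
    last_screened: date | None = None  # set by prohibited practice screening
```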
Assign an owner to each system — the person accountable for its compliance. Without ownership, inventory maintenance drifts. Schedule quarterly inventory reviews. New systems get added; deprecated systems get archived. Your inventory is the foundation of every downstream step. Incomplete inventories lead to blind spots. Blind spots lead to violations. Start here. See our TRAIGA compliance checklist for a full inventory phase breakdown.
Step 2 — Deployer Type Classification
Texas didn't write one AI law. It wrote four. Which ones apply to you depends on your deployer type. Get this wrong and you either over-comply (wasting resources) or under-comply (exposing yourself to enforcement).
The Seven Deployer Contexts
- Private sector — general — Any business deploying AI affecting Texas residents. Subject to TRAIGA only.
- Private sector — high-impact — AI in hiring, lending, housing, insurance, or critical infrastructure. Same TRAIGA obligations, higher scrutiny from the AG.
- Government agency — state — Texas state agencies, boards, commissions. TRAIGA + SB 1964 (ethics codes, public inventories) + HB 3512 (AI training).
- Government agency — local — Cities, counties, special districts. Same stack as state agencies.
- Healthcare provider — Hospitals, clinics, practitioners using AI in diagnosis or treatment. TRAIGA + SB 1188 (patient disclosure).
- Sandbox participant — Organizations in a DIR-approved sandbox. Modified obligations; check your sandbox terms.
- Exempt — Research, defense, certain law enforcement. Narrow; don't assume you qualify.
Why it matters: A private company needs prohibited practice screening and NIST alignment. A government agency needs all of that plus a public AI inventory, a formal ethics code, and annual DIR-certified training for qualifying employees. A healthcare provider needs the private-sector baseline plus patient-facing disclosures before AI-assisted service. Get your classification right early. See the complete deployer type reference for obligation matrices.
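If you want the statute stack as data, here's a hypothetical lookup keyed by deployer type. The stacks come straight from the list above; the dictionary keys and the `statutes_for` helper are illustrative:

```python
# Statute stack per deployer type, per the seven contexts above (illustrative mapping).
OBLIGATIONS: dict[str, list[str]] = {
    "private_general":     ["TRAIGA"],
    "private_high_impact": ["TRAIGA"],  # same obligations, higher AG scrutiny
    "government_state":    ["TRAIGA", "SB 1964", "HB 3512"],
    "government_local":    ["TRAIGA", "SB 1964", "HB 3512"],
    "healthcare_provider": ["TRAIGA", "SB 1188"],
    "sandbox_participant": ["TRAIGA (modified per sandbox terms)"],
    "exempt":              [],  # narrow; document your qualification
}

def statutes_for(deployer_type: str) -> list[str]:
    """Return the statute stack a given deployer type must comply with."""
    return OBLIGATIONS[deployer_type]
```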
Step 3 — Prohibited Practice Screening
TRAIGA Section 2 defines seven AI uses that are prohibited when done with intent. If your system was designed or deployed with intent to do any of these, you're in violation, with penalties of up to $200,000 per violation. Screening is binary: either you clear each category or you don't.
The Seven Categories and How to Screen Each
- Subliminal manipulation — Does the system deploy techniques beyond conscious awareness to materially distort behavior? Screen: Review all user-facing AI interactions. Document intent. Flag any emotional or behavioral targeting.
- Exploitation of vulnerability — Does it target age, disability, or socioeconomic vulnerability to distort behavior? Screen: Audit treatment of protected populations. Run equitable outcome testing.
- Social scoring — Does it evaluate individuals based on social behavior or predicted traits for unrelated purposes? Screen: Check for cross-context data use. Ensure decisions stay scoped to intended context.
- Biometric categorization — Does it infer race, political opinion, sexual orientation, or religious belief from biometric data? Screen: Map all biometric inputs. Document that no prohibited inferences occur.
- Real-time biometric identification — Does it use biometric ID in publicly accessible spaces? (Narrow law enforcement exceptions apply.) Screen: Identify any live biometric matching in public spaces.
- Predictive policing — Does it profile individuals based solely on personal characteristics to predict criminal behavior? Screen: Review any law enforcement or security AI. Document decision logic.
- Emotion inference — Does it detect emotions in workplaces, schools, or law enforcement without medical justification? Screen: Flag any emotion detection, sentiment analysis in HR/education, or affect recognition.
Document every screening result. Even “clear” results need dated records — the documentation itself is part of your defense. Create a screening log: System ID, Screening Date, Screener, Result per category (Clear/Flag/Review), Notes. If you flag a potential issue, document the mitigation or the decision to accept residual risk. The AG will want to see that you considered each practice deliberately, not that you skipped the exercise.
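A screening log entry can be as simple as the sketch below, which encodes the fields just listed. The `ScreeningEntry` structure and the category slugs are illustrative shorthand, not statutory language:

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum

class ScreenResult(str, Enum):
    CLEAR = "clear"
    FLAG = "flag"
    REVIEW = "review"

# Slugs for the seven prohibited practices screened above.
PROHIBITED_PRACTICES = [
    "subliminal_manipulation", "vulnerability_exploitation", "social_scoring",
    "biometric_categorization", "realtime_biometric_id", "predictive_policing",
    "emotion_inference",
]

@dataclass
class ScreeningEntry:
    """One dated screening log record; even all-clear results get stored."""
    system_id: str
    screening_date: date
    screener: str
    results: dict[str, ScreenResult]  # one result per prohibited practice
    notes: str = ""                   # mitigation or residual-risk decision for flags

    def is_clear(self) -> bool:
        return all(r is ScreenResult.CLEAR for r in self.results.values())
```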
Prohibited practice screening is intent-based. The law asks whether your system was designed or deployed with intent to engage in a prohibited use. That means your documentation should capture design intent: What is this system meant to do? What decisions does it support? If the answer to any of the seven categories is “yes,” you have a violation. If the answer is “no but it could be misused,” document your safeguards. For detailed examples and compliance steps per practice, see our complete prohibited practices list.
Step 4 — NIST AI RMF Alignment
TRAIGA Section 546.103 establishes that demonstrable compliance with the NIST AI Risk Management Framework constitutes an affirmative defense. This is the single most valuable protection the law offers. If the AG investigates, your NIST alignment documentation is your legal shield.
Govern / Map / Measure / Manage
- GOVERN — Establish AI governance policies, assign oversight roles, define risk tolerance, create accountability structures. Deliverables: Written AI governance policy, designated compliance owner, quarterly governance reviews.
- MAP — Identify and document AI risks for each system. Deliverables: System inventory with purpose documentation, stakeholder impact mapping, data provenance and bias risk assessment.
- MEASURE — Quantify risk through testing. Deliverables: Pre-deployment testing protocols, red-team exercises (annual for high-risk), documented metrics and thresholds.
- MANAGE — Deploy controls, assign mitigation owners, track remediation. Deliverables: Remediation plans with milestones, human-in-the-loop gates for consequential decisions, incident response procedures.
Score alignment on a 0–100 scale across all four functions; the AI RMF itself prescribes no numeric scoring, so define the rubric and apply it consistently. An overall score above 70 demonstrates meaningful alignment; above 85 demonstrates strong alignment. “We follow NIST guidelines” is not enough; you need scored, auditable evidence. The AG will ask: What did you do? When did you do it? Who owns it? Can you prove it? Policy documents alone don't answer those questions. Test results, meeting minutes, remediation tracking, and governance review records do.
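As a worked example, here's one way to aggregate per-function scores into the labels above. The unweighted mean is an assumption; weight the functions however your rubric dictates:

```python
def alignment_label(scores: dict[str, float]) -> tuple[float, str]:
    """Aggregate per-function scores (0-100) into an overall alignment label.

    Assumes a simple unweighted mean across Govern/Map/Measure/Manage;
    the 70/85 cutoffs follow the rubric described above.
    """
    overall = sum(scores.values()) / len(scores)
    if overall > 85:
        return overall, "strong alignment"
    if overall > 70:
        return overall, "meaningful alignment"
    return overall, "below safe-harbor target"

print(alignment_label({"govern": 82, "map": 75, "measure": 68, "manage": 79}))
# -> (76.0, 'meaningful alignment')
```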
Prioritize high-impact systems. If you have limited resources, focus NIST alignment efforts on AI in hiring, lending, healthcare, or other consequential decisions first. Lower-impact systems (e.g., internal productivity tools) can follow. But don't leave gaps indefinitely — the AG can investigate any system. See our NIST AI RMF safe harbor guide for the full framework.
Step 5 — Evidence Bundle Assembly
When the Texas AG requests compliance documentation, they want evidence — structured, timestamped, verifiable — not slide decks or internal memos. An evidence bundle is a comprehensive, audience-specific package that proves you took compliance seriously before anyone asked.
What the AG Wants to See
- AI system inventory — Every system, purpose, data inputs, decision outputs, deployment context.
- Prohibited practice screening results — Dated assessments showing you checked each system against all seven practices.
- NIST AI RMF alignment scores — Quantified scores across Govern, Map, Measure, Manage with supporting evidence.
- Intent documentation — Written statements of purpose for each AI system.
- Testing evidence — Red-team results, bias testing, performance monitoring logs.
- Governance artifacts — AI governance policy (current version), meeting minutes, training records, incident response plan.
Different audiences need different bundles: AG (enforcement-focused), procurement (vendor due diligence), board (risk oversight), insurance (loss prevention). Build audience-specific packages. The AG bundle emphasizes prohibited practice clearance and NIST alignment. The procurement bundle emphasizes vendor documentation and testing evidence. The board bundle emphasizes risk trends and remediation progress. One evidence bundle does not fit all. For the full evidence bundle methodology, see evidence bundles for AI compliance.
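In code, audience-specific assembly is just a profile lookup over your artifact store. The profile contents below paraphrase the emphases described above; the key names are illustrative:

```python
# Artifact emphasis per audience, per the bundles described above (illustrative).
BUNDLE_PROFILES: dict[str, list[str]] = {
    "ag":          ["prohibited_practice_screening", "nist_alignment_scores",
                    "intent_documentation", "system_inventory"],
    "procurement": ["vendor_documentation", "testing_evidence", "system_inventory"],
    "board":       ["risk_trends", "remediation_progress", "governance_artifacts"],
    "insurance":   ["incident_response_plan", "testing_evidence", "remediation_progress"],
}

def assemble_bundle(audience: str, artifacts: dict[str, object]) -> dict[str, object]:
    """Pick the artifacts this audience cares about, in priority order."""
    return {key: artifacts[key] for key in BUNDLE_PROFILES[audience] if key in artifacts}
```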
Step 6 — Cure Readiness Planning
TRAIGA gives you 60 days to cure a violation after the AG sends notice. That window is worth up to $200,000 per violation — but only if you're ready to use it. Most organizations aren't. They scramble when the notice arrives. You need a playbook before it does.
60-Day Response Workflow
- Day 1–3: Triage and containment — Assign incident owner. Stop the violating behavior (disable system, update model, pull feature). Preserve evidence.
- Day 4–14: Root cause analysis — Document what went wrong and why. Identify systemic gaps.
- Day 15–30: Remediation plan — Specific, measurable steps. Assign owners. Set milestones.
- Day 31–50: Execute remediation — Implement fixes. Document completion. Test.
- Day 51–60: Recurrence prevention and submission — Document systemic changes. Package evidence. Submit cure response to AG.
A one-paragraph email saying “we fixed it” will not satisfy the cure requirement. You need a structured, documented response. Pre-draft templates: root cause analysis form, remediation plan template, recurrence prevention checklist. Identify who runs the response — legal, compliance, and technical lead. Establish escalation paths. When the notice arrives, you have 60 days. The first 72 hours are critical. For the full playbook, see our 60-day cure period strategy.
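Since the whole workflow keys off the notice date, compute the phase deadlines the day the notice arrives. A minimal sketch, assuming the phase boundaries above:

```python
from datetime import date, timedelta

# (phase, last day of the phase) per the 60-day workflow above.
CURE_PHASES = [
    ("triage_and_containment", 3),
    ("root_cause_analysis", 14),
    ("remediation_plan", 30),
    ("execute_remediation", 50),
    ("prevention_and_submission", 60),
]

def cure_deadlines(notice_date: date) -> dict[str, date]:
    """Map each phase to its deadline, counting from the AG notice date."""
    return {phase: notice_date + timedelta(days=d) for phase, d in CURE_PHASES}

for phase, deadline in cure_deadlines(date(2026, 1, 5)).items():
    print(f"{phase}: {deadline.isoformat()}")
```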
Step 7 — Ongoing Monitoring and Reporting
Compliance isn't a one-time project. TRAIGA requires continuous adherence. Your inventory, screening results, NIST scores, and evidence bundles go stale the moment your AI systems change.
- Re-screen systems quarterly — New features, model updates, and use case changes can alter risk classification.
- Update NIST alignment annually — Your affirmative defense erodes if scores go stale.
- Refresh evidence bundles — After every significant system change, or quarterly at minimum.
- Monitor for legislative updates — DIR rulemaking, AG guidance, and new bills can change requirements.
- Track training certifications — Government deployers need annual renewal documentation per HB 3512.
The organizations that stay compliant aren't the ones who checked the box once. They're the ones who built the system to keep the box checked. Automation is the difference. Manual quarterly reviews slip. Automated monitoring and alerts don't. Integrate compliance into your AI deployment lifecycle: new systems get inventoried before go-live; model updates trigger re-screening; governance reviews happen on schedule.
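The re-screening trigger is simple enough to automate directly. A sketch of the check, assuming a quarterly cadence and a recorded last-change date per system:

```python
from datetime import date, timedelta

QUARTER = timedelta(days=90)

def needs_rescreening(last_screened: date | None, last_changed: date,
                      today: date | None = None) -> bool:
    """Re-screen if never screened, a change postdates the last screen,
    or the quarterly cadence has lapsed."""
    today = today or date.today()
    if last_screened is None:
        return True
    if last_changed > last_screened:        # model update, new feature, new use case
        return True
    return today - last_screened > QUARTER  # quarterly minimum
```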
Deployer-Specific Frameworks
Steps 1–7 apply to all deployers. But your obligations don't stop there. Additional statutes layer on by deployer type.
Government Agencies: SB 1964 + HB 3512
State and local government entities must adopt a formal AI ethics code aligned with DIR guidance, conduct heightened scrutiny assessments for AI in critical decisions, maintain a public AI system inventory, and ensure that all employees who use AI in 25% or more of their duties complete annual DIR-certified AI training per HB 3512. See SB 1964 AI ethics code and HB 3512 training requirements for details.
Healthcare Providers: SB 1188
Healthcare providers must provide patient-facing disclosures before or at the time of AI-assisted service. Disclosures must be clear, conspicuous, and free of dark patterns. Document all disclosures with timestamps and patient acknowledgments. SB 1188 carries its own penalty track ($5,000–$250,000 per violation) separate from TRAIGA. Disclosure failures can trigger license suspension. Integrate disclosure into your clinical workflow — not as an afterthought. See healthcare AI disclosure (SB 1188) for the full requirements.
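If you log disclosures programmatically, each event needs at least a timestamp and an acknowledgment. A hypothetical record shape (none of these field names come from SB 1188):

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class DisclosureRecord:
    """One SB 1188 disclosure event: timestamped, with patient acknowledgment."""
    patient_id: str
    system_id: str                  # which AI system assisted the service
    disclosed_at: datetime          # before or at the time of AI-assisted service
    disclosure_text_version: str    # which approved wording was shown
    acknowledged: bool              # patient acknowledgment captured
    acknowledged_at: datetime | None = None
```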
Risk Assessment Methodology
How do you classify and score AI risk? TRAIGA uses an intent-based model. Risk isn't just about technical capability — it's about how the system is designed and deployed.
Risk Tiers
- Prohibited Use — Designed or deployed with intent to engage in one of the seven prohibited practices. Highest risk; do not deploy.
- Heightened Scrutiny — AI in critical decisions (hiring, lending, housing, healthcare, criminal justice). Requires enhanced documentation and governance.
- Healthcare AI — AI in diagnosis or treatment. SB 1188 disclosure requirements apply.
- General — AI that doesn't fall into the above. Standard TRAIGA obligations.
- Sandbox Participant — In a DIR-approved sandbox. Modified obligations.
- Exempt — Research, defense, certain law enforcement. Narrow.
Scoring Approach
For each system, document: (1) intended use and decision context, (2) data categories affected, (3) population impact, (4) reversibility of decisions. Use this to assign a risk tier. Consequential, irreversible decisions (e.g., hiring, lending denials) warrant Heightened Scrutiny. Reversible, low-stakes decisions (e.g., product recommendations) may warrant General. When in doubt, err toward higher scrutiny — over-documentation is cheaper than under-compliance.
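The tier assignment can be written down as a decision function over the documented factors. The ordering and thresholds below are illustrative, erring toward higher scrutiny as advised above; the tier names match the list above:

```python
CRITICAL_DECISIONS = {"hiring", "lending", "housing", "healthcare", "criminal_justice"}

def assign_risk_tier(decision_context: str, prohibited_intent: bool,
                     healthcare_use: bool, reversible: bool,
                     sandbox: bool = False, exempt: bool = False) -> str:
    """Assign a risk tier from the documented factors (illustrative logic)."""
    if prohibited_intent:
        return "Prohibited Use"       # highest risk; do not deploy
    if exempt:
        return "Exempt"               # narrow; document your qualification
    if sandbox:
        return "Sandbox Participant"  # modified obligations per sandbox terms
    if healthcare_use:
        return "Healthcare AI"        # SB 1188 disclosures apply
    if decision_context in CRITICAL_DECISIONS or not reversible:
        return "Heightened Scrutiny"  # err toward higher scrutiny when in doubt
    return "General"
```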
For NIST alignment, score each of the four functions (Govern, Map, Measure, Manage) on a 0–100 scale. Use a consistent rubric: What evidence exists for each function? What's missing? Aggregate to an overall alignment score. Track scores over time. Trend matters — improving scores demonstrate continuous improvement; declining scores signal erosion of your defense.
Common Compliance Mistakes
- Treating compliance as a one-time project — TRAIGA requires continuous adherence. Inventories, screening, and evidence bundles must be refreshed. Quarterly re-screening and annual NIST updates are minimums, not aspirational.
- Relying on “we follow NIST guidelines” — You need scored, auditable evidence. A policy document is not an affirmative defense. The AG wants to see test results, governance meeting minutes, remediation tracking, and dated artifacts.
- Incomplete inventories — Missing third-party tools, embedded models, or API integrations creates blind spots. Conduct a cross-functional inventory sweep: IT, product, marketing, HR, finance, ops. Each team may have AI they don't think of as “AI.”
- Assuming exempt status — Exemptions are narrow. Research, defense, and certain law enforcement uses may qualify. Document your qualification if you claim one. Don't assume.
- Ignoring deployer-specific statutes — Government and healthcare have additional obligations. SB 1964, SB 1188, and HB 3512 stack on top of TRAIGA. Don't assume TRAIGA alone covers you.
- Waiting for the AG notice to build cure readiness — The 60-day clock starts when the notice arrives. If you don't have a playbook, you've already lost. Build your response workflow now.
- Treating vendor AI as “their problem” — You deploy AI when you use it. Third-party SaaS, embedded models, and API integrations are your responsibility. Include them in your inventory and screening.
Next Steps
This framework gives you the structure. Execution is the hard part. For the full statutory context, see our complete guide to TRAIGA (HB 149). For answers to common questions, visit our FAQ.
A static checklist in a spreadsheet gets outdated the moment your AI systems change. TXAIMS automates every step: system registration, prohibited practice screening, NIST scoring, evidence bundle generation, and regulatory monitoring. The framework runs continuously — not once a quarter. Start your 14-day free trial and build your Texas AI compliance defense before the AG asks for it.
Ready to automate your TRAIGA compliance?
TXAIMS screens your AI systems, builds your NIST defense, and generates evidence bundles in minutes.
Start 14-day free trial