The Complete Texas AI Compliance Bible by Deployer Type
59.1% of Texas businesses are now deploying artificial intelligence. Four AI governance laws are enforceable. Penalties reach $200,000 per violation. And the single most important compliance question isn't “which law applies to me?” It's this:
What type of deployer are you?
That classification — private sector, government agency, or healthcare provider — determines which of the four Texas AI statutes stack on top of each other, what you're required to document, what you're required to disclose, and what the Attorney General expects to see when enforcement begins.
Most compliance guides start with the law. This one starts with you.
The Three Deployer Types Under Texas Law
Texas didn't write a one-size-fits-all AI law. It wrote four statutes that layer differently depending on your organizational classification:
Private Sector Deployers — Any business, startup, or enterprise deploying AI systems that affect individuals in Texas. This includes companies headquartered outside Texas if their AI touches Texas residents. You're subject to TRAIGA (HB 149) and its prohibited practice framework. Your compliance surface is the narrowest of the three types — but the penalties are identical.
Government Agency Deployers — Texas state agencies, boards, commissions, and local government entities using AI in operations or public-facing services. You face the heaviest compliance burden: TRAIGA plus SB 1964's mandatory ethics codes and public inventories, plus HB 3512's annual AI training requirements. Every AI system in your agency must be cataloged and publicly disclosed.
Healthcare Provider Deployers — Hospitals, physician practices, health systems, and any licensed healthcare practitioner using AI for diagnosis, treatment recommendations, or patient interaction. You face TRAIGA plus SB 1188's patient disclosure requirements and practitioner review mandates. If you're a state-affiliated healthcare system, HB 3512's training requirements also apply.
The Obligation Matrix
Here's what most people need: a single reference showing exactly which laws apply to which deployer type and what each one requires.
HB 149 (TRAIGA) — Applies to: ALL deployer types
- 7 prohibited AI practice categories
- NIST AI RMF safe harbor (affirmative defense)
- 60-day cure window for first violations
- $200,000 per violation, AG-enforced
- Government-specific: social scoring ban, biometric ID ban, consumer disclosure mandate
- Healthcare-specific: patient interaction disclosure mandate
SB 1964 — Applies to: GOVERNMENT ONLY
- Mandatory written AI ethics code
- Public AI system inventory (name, vendor, purpose, risk classification, affected populations, deployment date)
- Heightened scrutiny assessments for critical decisions (parole, child welfare, employment, licensing, benefits, law enforcement)
- Annual reporting to the Department of Information Resources (DIR)
SB 1188 — Applies to: HEALTHCARE ONLY
- Patient disclosure before AI-assisted diagnosis or treatment
- Verbal or written format (no mandated template)
- Practitioner must review all AI-generated records per Texas Medical Board standards
- AI use must fall within practitioner's scope of license
- Separate penalty structure: $5,000–$250,000 per violation
- License suspension or revocation by regulatory board
HB 3512 — Applies to: GOVERNMENT + state-affiliated HEALTHCARE
- Annual AI awareness training for employees who use a computer for 25%+ of their duties
- Training must be DIR-certified (at least 5 certified programs required)
- Covers responsible use, accessibility, risk mitigation, and public-sector scenarios
- Pre- and post-training assessments required
- Elected and appointed officers included
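The matrix above lends itself to a simple lookup table, a useful seed for an internal compliance tracker. This is an illustrative sketch, not legal advice; the statute names come from the matrix, everything else (function names, the `state_affiliated` flag) is an assumption:

```python
# Illustrative: which Texas AI statutes stack for each deployer type,
# mirroring the obligation matrix. Not a substitute for counsel.
OBLIGATIONS = {
    "private": ["HB 149 (TRAIGA)"],
    "government": ["HB 149 (TRAIGA)", "SB 1964", "HB 3512"],
    "healthcare": ["HB 149 (TRAIGA)", "SB 1188"],
}

def applicable_statutes(deployer_type: str, state_affiliated: bool = False) -> list:
    """Return the statutes that stack for a given deployer classification."""
    statutes = list(OBLIGATIONS[deployer_type])
    # HB 3512 also reaches state-affiliated healthcare systems.
    if deployer_type == "healthcare" and state_affiliated:
        statutes.append("HB 3512")
    return statutes

print(applicable_statutes("healthcare", state_affiliated=True))
# ['HB 149 (TRAIGA)', 'SB 1188', 'HB 3512']
```

Classifying first and deriving obligations second is exactly the ordering the rest of this guide follows.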
Private Sector Deployers: Your Compliance Checklist
Your exposure is TRAIGA only. That's the good news. The bad news: “only” still carries $200K per violation, and while there is no private right of action, enforcement comes directly from the AG.
1. Inventory every AI system that touches Texas residents. Chatbots, hiring algorithms, lending models, recommendation engines, CRM scoring, fraud detection — all of it.
2. Screen each system against the 7 prohibited practices. The categories include incitement to self-harm or violence, algorithmic discrimination with sole discriminatory intent, constitutional rights infringement, CSAM generation, and deepfake intimate imagery. Social scoring and unconsented biometric identification apply only to government deployers.
3. Document your NIST AI RMF alignment. This is your legal shield. Map your practices to the four NIST functions: Govern, Map, Measure, Manage. The documentation must be timestamped and must predate any enforcement action to qualify as an affirmative defense.
4. Build a cure response plan. When the AG sends a notice, the 60-day clock starts on delivery. Have your response workflow, evidence assembly process, and responsible contacts designated before you need them.
5. Generate evidence bundles. The AG doesn't want a policy PDF. They want timestamped evidence of systematic compliance — screening results, NIST alignment scores, remediation records, system inventories. Build these artifacts now.
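Steps 2 and 5 above can be combined into one artifact: a timestamped screening record per system. Here is a minimal sketch; the five practice labels follow the private-sector categories listed above, while the field names and JSON layout are assumptions, not any statutory schema:

```python
# Illustrative: a timestamped per-system screening record suitable
# for inclusion in an evidence bundle. Field names are assumptions.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

PROHIBITED_PRACTICES = [
    "incitement_to_self_harm_or_violence",
    "algorithmic_discrimination_with_intent",
    "constitutional_rights_infringement",
    "csam_generation",
    "deepfake_intimate_imagery",
]

@dataclass
class ScreeningRecord:
    system_name: str
    results: dict                 # practice -> violation found?
    screened_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def to_evidence_json(self) -> str:
        """Serialize as a timestamped artifact for an evidence bundle."""
        return json.dumps(asdict(self), indent=2)

record = ScreeningRecord(
    system_name="support-chatbot",
    results={p: False for p in PROHIBITED_PRACTICES},
)
print(record.to_evidence_json())
```

The point of the timestamp is the point of step 3: artifacts generated before an enforcement action are worth far more than anything assembled after.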
Government Agency Deployers: Your Compliance Checklist
You face the heaviest compliance load: three statutes stacking simultaneously. TRAIGA gives you the prohibited practices and NIST framework. SB 1964 adds ethics codes, public inventories, and heightened scrutiny. HB 3512 adds mandatory training.
1. Complete everything in the private sector checklist above. TRAIGA applies to you identically, plus two additional prohibited practices: social scoring and unconsented biometric identification.
2. Adopt a written AI ethics code that addresses transparency, oversight procedures, accountability, acceptable use boundaries, and alignment with DIR guidance. This must be a formal, adopted document.
3. Publish your AI system inventory. Every AI system must be publicly cataloged: system name, vendor, purpose, deployment date, risk classification, data inputs, and affected populations.
4. Conduct heightened scrutiny assessments for any AI system used in critical decisions: parole and probation, child welfare, employment, licensing, benefits, and law enforcement. Each assessment must evaluate accuracy, bias risk, transparency, human oversight, and disparate impact potential before deployment.
5. Disclose AI interactions to consumers. Any consumer-facing AI system your agency makes available must disclose to the consumer that they are interacting with AI.
6. Report annually to DIR on AI usage, ethics code compliance, incidents, and heightened scrutiny results.
7. Identify all employees who use a computer for 25%+ of their duties. These employees plus all elected and appointed officers must complete DIR-certified AI training annually.
8. Track training completion with documented pre- and post-training assessments. Only DIR-certified programs qualify.
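Steps 3 and 4 above pair naturally: the public inventory entry is also where a heightened-scrutiny flag belongs. A minimal sketch follows; the seven inventory fields and six critical decision areas come from the checklist, while the types and method are assumptions:

```python
# Illustrative: one SB 1964 public-inventory entry with a check for
# whether heightened scrutiny is triggered. Types are assumptions.
from dataclasses import dataclass
from typing import List, Optional

CRITICAL_DECISION_AREAS = {
    "parole", "child_welfare", "employment",
    "licensing", "benefits", "law_enforcement",
}

@dataclass
class InventoryEntry:
    system_name: str
    vendor: str
    purpose: str
    deployment_date: str          # ISO date
    risk_classification: str
    data_inputs: List[str]
    affected_populations: List[str]
    decision_area: Optional[str] = None

    def needs_heightened_scrutiny(self) -> bool:
        """True when the system feeds one of the critical decision areas."""
        return self.decision_area in CRITICAL_DECISION_AREAS

entry = InventoryEntry(
    system_name="benefits-eligibility-model",   # hypothetical system
    vendor="ExampleVendor",                     # hypothetical vendor
    purpose="Screens benefit applications",
    deployment_date="2025-01-15",
    risk_classification="high",
    data_inputs=["application_form", "income_records"],
    affected_populations=["benefit applicants"],
    decision_area="benefits",
)
print(entry.needs_heightened_scrutiny())  # True
```

Run against the full inventory, a check like this surfaces every system that needs a pre-deployment assessment before it goes into the annual DIR report.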
Healthcare Provider Deployers: Your Compliance Checklist
Your compliance surface spans TRAIGA and SB 1188, with HB 3512 adding on if you're a state-affiliated system. The critical distinction: SB 1188 carries its own penalty structure separate from TRAIGA — up to $250,000 per violation plus license action.
1. Complete the private sector TRAIGA checklist (items 1–5 above). All prohibited practices apply, and the NIST safe harbor defense is available.
2. Disclose AI use to patients when AI is involved in diagnosis or treatment. The disclosure must be provided no later than the date the service or treatment is first provided. Verbal or written — the statute doesn't mandate a format, but document that you made the disclosure.
3. Review all AI-generated records according to Texas Medical Board standards. Practitioners cannot accept AI recommendations without proper review and documentation. The AI is a tool, not a decision-maker.
4. Verify scope-of-license compliance. AI use must fall within the practitioner's existing scope of license, certification, or authorization. An AI system cannot extend a practitioner's scope.
5. Cross-check against other regulatory constraints. SB 1188 explicitly requires that the AI use isn't restricted by other state or federal laws. HIPAA, CMS guidance, and FDA clearance requirements remain fully in force.
6. Implement anti-discrimination safeguards. AI algorithms used in healthcare cannot discriminate based on race, age, gender, disability, or other protected characteristics.
7. If state-affiliated: complete HB 3512 AI training requirements for all qualifying employees and officers, using only DIR-certified programs.
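Because SB 1188 mandates no disclosure format, the documentation in step 2 above is what matters. A minimal log-entry sketch, assuming illustrative field names and a timeliness rule drawn from the "no later than the date the service is first provided" requirement:

```python
# Illustrative: a patient-disclosure log entry for SB 1188 compliance.
# The statute mandates no template; this simply documents that the
# disclosure happened and when. Field names are assumptions.
from dataclasses import dataclass
from datetime import date

@dataclass
class DisclosureLog:
    patient_id: str
    ai_system: str
    disclosure_date: date
    first_service_date: date
    format: str  # "verbal" or "written"

    def is_timely(self) -> bool:
        """Disclosure must occur no later than the first service date."""
        return self.disclosure_date <= self.first_service_date

log = DisclosureLog(
    patient_id="anon-0001",              # hypothetical identifier
    ai_system="triage-assistant",        # hypothetical system
    disclosure_date=date(2025, 3, 1),
    first_service_date=date(2025, 3, 1),
    format="written",
)
print(log.is_timely())  # True
```

An entry like this, retained per patient encounter, is the difference between asserting compliance and proving it.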
The Stacking Effect
This is the part most coverage misses.
A private sector company deploying one AI chatbot faces one statute and five prohibited practice categories.
A state agency deploying that same chatbot faces three statutes, seven prohibited practice categories, a mandatory ethics code, a public inventory, heightened scrutiny assessments (if the chatbot makes critical decisions), annual DIR reporting, and mandatory employee AI training.
A hospital system deploying that same chatbot for patient triage faces two to three statutes, seven prohibited practice categories, patient disclosure requirements, practitioner review mandates, scope-of-license verification, and a separate penalty track that can reach $250,000 per violation plus license revocation.
Same chatbot. Three completely different compliance surfaces.
This is why deployer-type classification is step zero — not step five — of any Texas AI compliance program.
The Universal Defense: NIST AI RMF
Regardless of your deployer type, one legal mechanism is available to all three: the NIST AI Risk Management Framework affirmative defense.
TRAIGA explicitly codifies NIST AI RMF 1.0 alignment as an affirmative defense against enforcement action. This is not a guideline. It's a statutory safe harbor — the only one in Texas AI law.
The framework has four functions:
- Govern — Establish AI governance structures, roles, and accountability
- Map — Identify and document AI risks in context
- Measure — Quantify and track AI risks with appropriate metrics
- Manage — Implement controls to mitigate identified risks
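The four functions above can be captured as a single timestamped artifact. A sketch, assuming illustrative evidence filenames and a layout of my own invention (the function names come from the framework; nothing else here is prescribed by it):

```python
# Illustrative: a timestamped NIST AI RMF alignment map. The four
# function names are from the framework; filenames are hypothetical.
from datetime import datetime, timezone
import json

alignment = {
    "documented_at": datetime.now(timezone.utc).isoformat(),
    "functions": {
        "Govern":  {"evidence": ["ai-governance-charter.pdf"], "complete": True},
        "Map":     {"evidence": ["risk-register.xlsx"], "complete": True},
        "Measure": {"evidence": ["bias-audit-q1.pdf"], "complete": False},
        "Manage":  {"evidence": [], "complete": False},
    },
}

# Persist now: even a partial map timestamped today is a defensible artifact.
print(json.dumps(alignment, indent=2))
```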
The critical requirement: your NIST alignment must be documented and timestamped before an enforcement action. Building your defense after the AG sends a letter is too late.
What to Do Monday Morning
Regardless of your deployer type, three things you can do this week:
1. Classify yourself. Private, government, or healthcare. If you span categories (a state hospital system, a government contractor deploying patient-facing AI), you face the combined obligations of every applicable type.
2. Inventory your AI. Every system that touches a Texas resident. Name it, categorize it, document what it does and who it affects. You can't comply with what you haven't cataloged.
3. Start your NIST documentation. Even a partial mapping to Govern, Map, Measure, and Manage — timestamped today — is infinitely more defensible than a complete mapping timestamped after an AG notice.
The companies that prepare before enforcement become the ones that define the standard. The ones that wait become the case studies.
Ready to automate your TRAIGA compliance?
TXAIMS screens your AI systems, builds your NIST defense, and generates evidence bundles in minutes.
Start 14-day free trial