Healthcare · February 1, 2026 · 3 min read

SB 1188: What Texas Healthcare Providers Must Disclose About AI

Texas SB 1188 creates a straightforward but far-reaching requirement: if you're a healthcare provider using AI in patient care, you must disclose that use to the patient before or at the time of the AI-assisted service. No exceptions for “it's just a tool” or “the doctor made the final call.”

What Triggers the Disclosure Requirement

The disclosure obligation applies when AI is used in:

  • Diagnostic support — AI that assists in reading imaging, lab results, or symptom analysis
  • Treatment recommendations — AI systems that suggest treatment plans, drug interactions, or care pathways
  • Triage and scheduling — AI that determines urgency or priority of care
  • Patient communication — chatbots, virtual assistants, or AI-drafted patient messages
  • Clinical documentation — AI-generated notes, summaries, or coding

The trigger is broad: if AI meaningfully participates in a process that affects the patient's care experience, disclosure is required.

What the Disclosure Must Include

SB 1188 doesn't prescribe exact language, but it does require:

  1. Clear identification that AI is being used
  2. The nature of the AI's role — what it does in the care process
  3. That a human provider remains responsible for final decisions
  4. How to opt out or request human-only review where applicable

The disclosure must be delivered before or at the time of the AI-assisted service. Post-hoc notification — burying it in discharge paperwork — doesn't satisfy the statute.

The Dark Pattern Trap

SB 1188 explicitly prohibits dark pattern disclosures. What does that mean in practice?

  • No pre-checked consent boxes that patients have to opt out of
  • No disclosure buried in a 40-page intake form at paragraph 37
  • No “by continuing, you agree” passive consent mechanisms
  • No confusing language designed to minimize the AI's role

The standard is informed, accessible, honest disclosure. If a reasonable patient wouldn't understand that AI was involved after reading your disclosure, it doesn't meet the standard.

Who Is Liable

The healthcare provider entity is liable — the hospital, clinic, or practice. Not the individual physician (unless they're the entity). Not the AI vendor. This means compliance is an institutional responsibility, not something you can delegate to individual providers making ad hoc disclosures.

Implementation Checklist

  1. Inventory every AI system in your care workflow — including vendor-provided tools you may not think of as “AI”
  2. Create disclosure templates for each category of AI use (diagnostic, treatment, triage, communication, documentation)
  3. Integrate disclosures into intake workflows — not as a separate form, but woven into the care process
  4. Train front-desk and clinical staff on when and how to deliver disclosures
  5. Document every disclosure — date, time, method, patient acknowledgment
  6. Audit quarterly — are disclosures actually being delivered? Are new AI systems being added without corresponding disclosures?
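Steps 5 and 6 of the checklist lend themselves to a structured log rather than ad hoc paperwork. A minimal sketch in Python of what such a record and quarterly audit could look like — the field names (`ai_system`, `method`, and so on) are illustrative assumptions, not terms from the statute:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class DisclosureRecord:
    # Hypothetical fields covering step 5: date, time, method, acknowledgment
    patient_id: str
    ai_system: str          # which inventoried AI tool (step 1)
    category: str           # diagnostic, treatment, triage, communication, documentation
    delivered_at: datetime  # when the disclosure was made
    service_at: datetime    # when the AI-assisted service occurred
    method: str             # e.g. "verbal", "intake workflow", "patient portal"
    acknowledged: bool

def timing_compliant(rec: DisclosureRecord) -> bool:
    """SB 1188 requires disclosure before or at the time of service."""
    return rec.delivered_at <= rec.service_at

def quarterly_audit(records, inventoried_systems):
    """Step 6: flag late disclosures, and inventoried AI systems
    that have no disclosures on file at all."""
    late = [r for r in records if not timing_compliant(r)]
    covered = {r.ai_system for r in records}
    uncovered = set(inventoried_systems) - covered
    return late, uncovered
```

The audit deliberately checks in both directions: disclosures that were delivered too late, and AI systems added to the workflow (step 1's inventory) without any disclosures recorded against them.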

Where TRAIGA Adds Complexity

Healthcare providers aren't just subject to SB 1188. TRAIGA's prohibited practices apply too. If your diagnostic AI exploits patient vulnerabilities or your triage system discriminates based on protected characteristics, you're facing both SB 1188 disclosure violations and TRAIGA prohibited practice violations.

TXAIMS for Healthcare handles both: SB 1188 disclosure templates with dark-pattern-free formatting, plus full TRAIGA prohibited practice screening and NIST alignment. One platform, both statutes. Start your trial.
