Compliance Guide · February 18, 2026 · 5 min read

Do Companies Have to Disclose If They Use AI?

Short answer: yes, in specific contexts. There is no single blanket “you must disclose all AI use” law in the United States. But across Texas, Colorado, and the EU, disclosure requirements are already enforceable — and they hit harder than most companies realize.

Here is exactly who must disclose, what they must disclose, and to whom.

Texas: Three Disclosure Triggers

1. Healthcare AI — SB 1188 (Mandatory)

Texas SB 1188 is the most explicit AI disclosure law in the state. If you are a healthcare provider using AI in patient treatment or diagnosis, you must notify the patient before the AI-assisted service is rendered.

This is not optional. It applies to:

  • AI-assisted diagnostic tools
  • AI-generated treatment recommendations
  • AI triage and screening systems
  • AI-powered clinical decision support

The disclosure must be clear, specific, and documented. “We may use technology” buried in a 12-page consent form does not satisfy the requirement.

2. Government Biometric ID — TRAIGA Section 2(7) (Prohibited Without Consent)

TRAIGA's seventh prohibited practice bars government agencies from using AI for biometric identification without obtaining informed consent. This is effectively a disclosure-plus-consent requirement — the individual must be told AI is being used for biometric purposes and must affirmatively agree.

Violation: up to $200,000 per instance, enforced by the Texas AG.

3. Government AI Inventory — SB 1964 (Public Disclosure)

SB 1964 requires Texas government agencies to maintain and publicly disclose an inventory of all AI systems they deploy. This isn't patient-facing disclosure — it's institutional transparency. The public has a right to know what AI systems government agencies are running.

Colorado: Consumer Notice Before Consequential Decisions

Colorado SB 24-205 (enforceable June 2026) requires deployers of high-risk AI systems to provide consumers with notice before the AI system makes or substantially contributes to a consequential decision. This covers:

  • Employment decisions (hiring, firing, promotion)
  • Education admissions and financial aid
  • Credit and lending decisions
  • Insurance underwriting and pricing
  • Housing availability and terms

The consumer must also be told how to appeal the decision and request human review.

EU AI Act: Transparency for All AI Interactions

The EU AI Act requires disclosure whenever an AI system interacts with a person:

  • Chatbots — must disclose they are AI (not optional)
  • Deepfakes — AI-generated images, audio, and video must be labeled
  • AI-generated content — text produced by AI must be identified as such when published for public consumption
  • Emotion recognition — subjects must be informed

Penalties for transparency violations: up to €15 million or 3% of global turnover.

The Disclosure Matrix: Who Must Disclose What

| Deployer Type | Framework | Disclosure Required | To Whom |
| --- | --- | --- | --- |
| Healthcare (TX) | SB 1188 | AI use in treatment/diagnosis | Patient, before service |
| Gov agency (TX) | TRAIGA §2(7) | Biometric AI identification | Individual, with consent |
| Gov agency (TX) | SB 1964 | Full AI system inventory | Public |
| High-risk (CO) | SB 24-205 | AI in consequential decisions | Consumer, before decision |
| All (EU) | EU AI Act | AI interaction + AI-generated content | Any person interacting with AI |
| Private (TX) | TRAIGA | No blanket mandate, but strongly recommended | Customers, employees |

Why Disclose Even When Not Required

For private sector Texas companies, TRAIGA does not mandate universal AI disclosure. But there are three strategic reasons to disclose proactively:

1. NIST AI RMF alignment. The Govern function of the NIST AI Risk Management Framework includes transparency as a core principle. Documenting your disclosure practices strengthens the affirmative defense that protects you from $200K/violation penalties.

2. Deceptive trade practices exposure. Texas's Deceptive Trade Practices Act predates TRAIGA. If a customer reasonably believed they were interacting with a human when they were actually interacting with AI, that's a potential DTPA claim — regardless of TRAIGA.

3. Multi-state compliance. If your AI system touches consumers in Colorado, the EU, or any of the 24+ states with AI legislation pending, proactive disclosure puts you ahead of the compliance curve rather than scrambling to retrofit.

How to Build a Disclosure Framework

  1. Inventory your AI touchpoints — Every place a customer, patient, employee, or citizen interacts with AI
  2. Map to legal requirements — Which touchpoints trigger mandatory disclosure under which statute?
  3. Draft clear language — Avoid legalese. “This analysis uses artificial intelligence” beats “automated decisioning technology may be employed”
  4. Timestamp and archive — Every disclosure delivered should be logged with a timestamp for your evidence bundle
  5. Review quarterly — New AI deployments mean new disclosure obligations
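The inventory, mapping, and timestamp-and-archive steps above can be sketched in a few lines of Python. This is a minimal illustration, not a reference implementation: the record fields, statute labels, and file name are assumptions for the example, not language drawn from any statute.

```python
import json
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone
from pathlib import Path

# Illustrative record structure -- field names are assumptions,
# not terms of art from SB 1188, TRAIGA, or any other statute.
@dataclass
class DisclosureRecord:
    touchpoint: str        # where the person encounters the AI system
    framework: str         # statute that triggers the disclosure
    disclosure_text: str   # the exact language shown to the person
    recipient: str         # patient / consumer / public
    delivered_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def log_disclosure(record: DisclosureRecord, archive: Path) -> None:
    """Append one timestamped disclosure to a JSON-lines archive."""
    with archive.open("a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

# Example: a healthcare intake chatbot disclosing AI use before service.
record = DisclosureRecord(
    touchpoint="patient intake chatbot",
    framework="TX SB 1188",
    disclosure_text="This screening uses artificial intelligence.",
    recipient="patient",
)
log_disclosure(record, Path("disclosure_log.jsonl"))
```

An append-only, timestamped log like this is the raw material for the evidence bundle: each line records what was disclosed, to whom, under which statute, and exactly when.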

The companies that disclose voluntarily look transparent. The ones forced to disclose after an AG investigation look evasive. Same information, radically different optics.

Ready to automate your TRAIGA compliance?

TXAIMS screens your AI systems, builds your NIST defense, and generates evidence bundles in minutes.

Start 14-day free trial