The 7 Prohibited AI Practices Under TRAIGA (With Examples)
TRAIGA doesn't regulate all AI. It draws bright lines around specific prohibited practices. If your AI system was designed or deployed with the intent to do any of the following, you're in violation, with penalties of up to $200,000 per violation.
Here's the complete list from HB 149 Section 2, with examples that make each one concrete.
1. Subliminal or Manipulative Techniques
The law says: AI systems shall not deploy subliminal techniques beyond a person's consciousness or deliberately manipulative techniques to materially distort behavior.
Examples: An e-commerce AI that uses micro-targeted emotional triggers calibrated below conscious awareness to drive impulse purchases. A chatbot designed to create artificial urgency through deceptive countdown timers personalized by user anxiety profiles.
What to do: Document the intent behind every user-facing AI interaction. If it influences decisions, the mechanism must be transparent.
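TRAIGA doesn't prescribe a documentation format, so any schema you adopt is your own choice. Here's a minimal Python sketch of a machine-readable intent record; the field names are illustrative, not a legal standard.

```python
"""Machine-readable intent record. TRAIGA does not prescribe a
format; these fields are illustrative, not a legal standard."""
import json
from dataclasses import asdict, dataclass, field
from datetime import date


@dataclass
class IntentRecord:
    system_name: str                 # the AI system being documented
    stated_purpose: str              # why the system exists
    influence_mechanisms: list[str]  # every way it steers user decisions
    transparency_notes: str          # how each mechanism is disclosed to users
    reviewed_by: str                 # accountable human reviewer
    review_date: str = field(default_factory=lambda: date.today().isoformat())


record = IntentRecord(
    system_name="checkout-recommender",
    stated_purpose="Suggest complementary products at checkout",
    influence_mechanisms=["ranked suggestions", "inventory-based deadlines"],
    transparency_notes="Suggestions labeled 'Recommended for you'; "
                       "deadlines reflect actual stock rules",
    reviewed_by="compliance@example.com",
)

# Append to an audit log so intent is documented before deployment.
print(json.dumps(asdict(record), indent=2))
```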
2. Exploiting Vulnerabilities
The law says: AI systems shall not exploit vulnerabilities of individuals due to age, disability, or specific social or economic situation.
Examples: A lending AI that identifies elderly applicants and routes them to higher-interest products. A subscription service that uses cognitive load patterns to make cancellation harder for users with identified disabilities.
What to do: Audit your AI's treatment of protected populations. Implement equitable outcome testing across demographic groups.
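Outcome testing can start as simply as comparing favorable-outcome rates across groups in your decision logs. A minimal Python sketch, assuming illustrative group labels and an internal review threshold of your own choosing:

```python
"""Sketch of equitable outcome testing: compare favorable-outcome
rates across demographic groups. Group labels and the gap threshold
are illustrative assumptions, not statutory values."""
from collections import defaultdict

# (group, favorable_outcome) pairs pulled from decision logs.
decisions = [
    ("under_65", True), ("under_65", True), ("under_65", False),
    ("65_plus", True), ("65_plus", False), ("65_plus", False),
]

totals: dict[str, int] = defaultdict(int)
favorable: dict[str, int] = defaultdict(int)
for group, outcome in decisions:
    totals[group] += 1
    favorable[group] += outcome  # True counts as 1

rates = {g: favorable[g] / totals[g] for g in totals}
for group, rate in rates.items():
    print(f"{group}: {rate:.0%} favorable")

# Flag any gap above an assumed internal review threshold.
REVIEW_THRESHOLD = 0.10
gap = max(rates.values()) - min(rates.values())
if gap > REVIEW_THRESHOLD:
    print(f"Review needed: {gap:.0%} gap between best and worst group")
```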
3. Social Scoring
The law says: AI systems shall not evaluate or classify individuals based on social behavior or personal characteristics in a way that leads to detrimental treatment unrelated to the original context.
Examples: An HR system that factors in an applicant's social media activity to determine job fitness. A tenant screening tool that uses neighborhood crime data as a proxy for individual reliability.
What to do: Ensure AI decisions are scoped to their intended context. Cross-context data use is what triggers this prohibition.
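One way to enforce this in code is a purpose-limitation guard at the data-access layer: tag every field with the contexts it was collected for, and reject requests from any other context. A sketch with an illustrative field taxonomy:

```python
"""Purpose-limitation guard: block cross-context data use at the
feature-access layer. The field-to-context taxonomy is illustrative."""

# Each field is tagged with the contexts it was collected for.
ALLOWED_CONTEXTS = {
    "credit_history": {"lending"},
    "rental_payment_record": {"tenant_screening"},
    "social_media_activity": {"marketing_optin"},
}


def fetch_features(fields: list[str], decision_context: str) -> list[str]:
    """Return fields only if their collection context matches this decision."""
    blocked = [f for f in fields
               if decision_context not in ALLOWED_CONTEXTS.get(f, set())]
    if blocked:
        raise PermissionError(
            f"Cross-context use blocked in '{decision_context}': {blocked}"
        )
    return fields


# An HR screening model asking for social media data fails loudly.
try:
    fetch_features(["social_media_activity"], decision_context="hiring")
except PermissionError as err:
    print(err)
```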
4. Real-Time Biometric Identification in Public Spaces
The law says: Real-time biometric identification is prohibited in publicly accessible spaces, with narrow exceptions for law enforcement acting under judicial authorization.
Examples: A retail chain using facial recognition at store entrances to identify known shoplifters. A property management company using biometric scanning in common areas.
What to do: If you use any biometric AI in public-facing spaces, get legal review immediately. The exceptions are extremely narrow.
5. Discrimination in Consequential Decisions
The law says: AI systems shall not discriminate based on protected characteristics in employment, housing, credit, education, or public accommodations.
Examples: A resume screening AI that systematically down-ranks candidates from certain zip codes. A mortgage approval model with unexplained disparate impact across racial demographics.
What to do: Run disparate impact analysis. Document that your system was not designed with discriminatory intent and actively test for discriminatory outcomes.
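A common screening heuristic is the EEOC's four-fifths rule: flag any group whose selection rate falls below 80% of the highest group's rate. Note that the rule comes from federal employment guidance, not from TRAIGA itself, and the counts below are illustrative:

```python
"""Disparate impact screen using the EEOC four-fifths rule, a common
heuristic from employment guidance rather than a TRAIGA-mandated test.
Applicant counts are illustrative."""

# Applications and selections per group, e.g. from a resume screener's logs.
outcomes = {
    "group_a": {"applied": 400, "selected": 200},  # 50% selection rate
    "group_b": {"applied": 300, "selected": 105},  # 35% selection rate
}

rates = {g: c["selected"] / c["applied"] for g, c in outcomes.items()}
benchmark = max(rates.values())  # rate of the most-selected group

for group, rate in rates.items():
    impact_ratio = rate / benchmark
    status = "OK" if impact_ratio >= 0.8 else "POTENTIAL DISPARATE IMPACT"
    print(f"{group}: rate={rate:.0%} impact_ratio={impact_ratio:.2f} -> {status}")
```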
6. Deceptive Content Generation Without Disclosure
The law says: AI-generated content that could be mistaken for human-created content must include disclosure.
Examples: AI-generated customer reviews posted without disclosure. Deepfake-style video testimonials. AI-written news articles published as human journalism.
What to do: Label AI-generated content. Implement disclosure mechanisms in any AI content pipeline.
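In practice, that means routing every generation path through a disclosure step before publishing. A minimal sketch, with illustrative label wording and metadata fields:

```python
"""Disclosure step for an AI content pipeline: every generated item
carries a visible label plus machine-readable provenance metadata.
Label wording and field names are illustrative, not statutory."""
import json
from datetime import datetime, timezone

DISCLOSURE = "This content was generated with the assistance of AI."


def publish_with_disclosure(content: str, model_name: str) -> dict:
    """Attach the human-visible label and provenance metadata."""
    return {
        "body": f"{content}\n\n{DISCLOSURE}",
        "metadata": {
            "ai_generated": True,
            "model": model_name,
            "generated_at": datetime.now(timezone.utc).isoformat(),
        },
    }


post = publish_with_disclosure("Five tips for faster onboarding...", "example-llm-v1")
print(post["body"])
print(json.dumps(post["metadata"], indent=2))
```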
7. Surveillance Without Notice or Consent
The law says: AI-powered surveillance systems require notice to affected individuals and, in many contexts, consent.
Examples: Employee monitoring AI that tracks keystrokes and screen activity without disclosure. Customer behavior tracking in physical spaces without posted notice.
What to do: Audit every AI system that collects behavioral data. Implement clear notice mechanisms.
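A defensible pattern is a default-deny gate in front of the monitoring pipeline: collect nothing unless notice (and consent, where required) is on record. A sketch assuming a hypothetical consent registry:

```python
"""Default-deny notice-and-consent gate in front of a monitoring
pipeline. The registry shape is an assumption, not a design TRAIGA
prescribes."""

# user_id -> whether notice was given and consent captured.
consent_registry = {
    "u_1001": {"notified": True, "consented": True},
    "u_1002": {"notified": True, "consented": False},
}


def may_monitor(user_id: str, consent_required: bool) -> bool:
    """Monitor only with documented notice (and consent where required)."""
    record = consent_registry.get(user_id)
    if record is None or not record["notified"]:
        return False  # no documented notice: never collect
    return record["consented"] if consent_required else True


for uid in ("u_1001", "u_1002", "u_9999"):
    print(uid, "->", "monitor" if may_monitor(uid, consent_required=True) else "skip")
```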
The Pattern: Intent + Documentation
Every prohibited practice comes back to intent. The Texas Attorney General doesn't need to prove that harm occurred, only that the system was designed or deployed with prohibited intent. Your defense is documented intent: showing that you built the system for legitimate purposes and actively screened for prohibited uses.
TXAIMS automates prohibited-practice screening across all seven categories, generates the documentation the AG wants to see, and flags systems that need remediation before enforcement finds them.
Related Resources
- The Complete Guide to TRAIGA (HB 149) — see how prohibited practices fit into the full law
- NIST AI RMF: Your Affirmative Defense — build the safe harbor that protects you from violations
- TRAIGA Employer Compliance Checklist — prohibited practices #3 and #5 hit employers hardest
- TRAIGA Penalties and Enforcement — $200K per violation for prohibited practices
- AI in Hiring & HR — how prohibited practices affect recruitment AI
Ready to automate your TRAIGA compliance?
TXAIMS screens your AI systems, builds your NIST defense, and generates evidence bundles in minutes.
Start 14-day free trial