SB 1964: The Texas AI Ethics Code for Government Agencies
While TRAIGA (HB 149) applies to all AI deployers in Texas, SB 1964 creates additional obligations specifically for state agencies and local government entities. It mandates an AI ethics code, a public system inventory, and heightened scrutiny assessments — requirements that don't apply to private sector companies.
If you're a CIO, compliance officer, or IT director at a Texas government agency, this is your implementation guide.
What SB 1964 Requires
SB 1964 has four core mandates:
1. Formal AI Ethics Code Adoption
Every Texas state agency or local government entity using AI must adopt a formal, written AI ethics code. This isn't discretionary — it's a statutory requirement. The ethics code must:
- Align with DIR guidance. The Department of Information Resources has published model ethics frameworks. Your code should reference and build on these, not start from scratch.
- Define acceptable AI use. Specify what types of AI applications the agency permits, restricts, and prohibits — beyond TRAIGA's baseline prohibitions.
- Establish accountability. Name the office, role, or committee responsible for AI governance. Vague ownership leads to vague compliance.
- Include oversight procedures. Define how AI systems are reviewed, approved, monitored, and retired.
- Address transparency. Specify what information about AI use is made available to the public and to affected individuals.
2. Public AI System Inventory
Agencies must maintain a catalog of every AI system in use and make it available for public review. The inventory should include:
- System name and vendor
- Purpose and function (what decisions it makes or assists)
- Data inputs and sources
- Affected populations (employees, constituents, applicants)
- Risk classification
- Date deployed and last reviewed
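The fields above map naturally onto a structured record, which makes the inventory easy to publish and keep current. A minimal sketch in Python — the field names, risk tiers, and `to_public_row` helper are illustrative choices, not anything SB 1964 prescribes:

```python
from dataclasses import dataclass, asdict
from datetime import date
from enum import Enum

class RiskLevel(Enum):
    # Illustrative tiers; SB 1964 does not prescribe specific labels
    LOW = "low"
    MODERATE = "moderate"
    HIGH = "high"

@dataclass
class AISystemRecord:
    """One entry in the public AI system inventory."""
    name: str
    vendor: str
    purpose: str                      # what decisions it makes or assists
    data_inputs: list[str]
    affected_populations: list[str]   # employees, constituents, applicants
    risk: RiskLevel
    deployed: date
    last_reviewed: date

def to_public_row(rec: AISystemRecord) -> dict:
    """Flatten a record for publication (e.g. a CSV or JSON export)."""
    row = asdict(rec)
    row["risk"] = rec.risk.value
    row["deployed"] = rec.deployed.isoformat()
    row["last_reviewed"] = rec.last_reviewed.isoformat()
    return row
```

Modeled this way, the "living document" obligation becomes routine: editing one record and re-exporting keeps the public view in sync with what's actually deployed.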
This is a significant transparency obligation. Private sector companies don't have to publish their AI inventories — agencies do. Treat the inventory as a living document that updates whenever systems change.
3. Heightened Scrutiny Assessments
AI systems used to make or assist critical decisions require a heightened scrutiny assessment before deployment. Critical decisions include:
- Law enforcement actions (surveillance, predictive policing, forensic analysis)
- Benefits determinations (eligibility, amounts, denials)
- Licensing and permitting decisions
- Employment decisions (hiring, promotion, termination for agency staff)
- Child welfare assessments
- Parole and probation recommendations
The assessment must document the system's accuracy, bias risk, transparency level, human oversight mechanisms, and potential for disparate impact. It must be completed before the system is deployed, not after.
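The five documentation areas amount to a pre-deployment gate: no documented finding in any area, no deployment. A minimal sketch of that gate, assuming Python — the field names and the `ready_to_deploy` helper are illustrative, not statutory language:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class HeightenedScrutinyAssessment:
    """Pre-deployment assessment for a critical-decision AI system.

    Each field holds the documented finding for one required area;
    None means that area has not been assessed yet.
    """
    system_name: str
    accuracy_findings: Optional[str] = None
    bias_risk_findings: Optional[str] = None
    transparency_findings: Optional[str] = None
    human_oversight_findings: Optional[str] = None
    disparate_impact_findings: Optional[str] = None

    def missing_areas(self) -> list[str]:
        """List the required areas that still lack documented findings."""
        areas = {
            "accuracy": self.accuracy_findings,
            "bias risk": self.bias_risk_findings,
            "transparency": self.transparency_findings,
            "human oversight": self.human_oversight_findings,
            "disparate impact": self.disparate_impact_findings,
        }
        return [name for name, finding in areas.items() if not finding]

    def ready_to_deploy(self) -> bool:
        # Deployment is blocked until every area is documented
        return not self.missing_areas()
```

Wiring a check like this into the procurement or change-management workflow enforces the "before deployment, not after" rule mechanically rather than by memory.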
4. Annual Reporting to DIR
Agencies must report their AI usage, ethics code compliance, incidents, and heightened scrutiny assessment results to DIR annually. This report feeds into statewide AI governance oversight and can trigger DIR guidance updates.
How SB 1964 Stacks with TRAIGA and HB 3512
Government agencies face triple compliance — and the statutes don't just coexist, they compound:
- TRAIGA (HB 149): Prohibited practice screening, NIST AI RMF alignment, 60-day cure readiness — same as private sector.
- SB 1964: Ethics code, public inventory, heightened scrutiny assessments, annual DIR reporting — government-only obligations layered on top.
- HB 3512: Annual DIR-certified AI training for all employees using computers for 25%+ of their duties — another government-only requirement.
An agency that completes TRAIGA screening but ignores SB 1964's ethics code is non-compliant. An agency with a perfect ethics code but no NIST alignment has no affirmative defense. All three statutes must be addressed simultaneously.
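The "all three simultaneously" point can be made concrete: an agency's posture is the conjunction of three separate checklists, and finishing one statute's list moves the others not at all. A sketch — the requirement lists are condensed from this article, not exhaustive statutory text:

```python
# Condensed from the article's summary of each statute; not exhaustive.
REQUIREMENTS: dict[str, list[str]] = {
    "TRAIGA (HB 149)": [
        "prohibited practice screening",
        "NIST AI RMF alignment",
        "60-day cure readiness",
    ],
    "SB 1964": [
        "ethics code",
        "public inventory",
        "heightened scrutiny assessments",
        "annual DIR reporting",
    ],
    "HB 3512": [
        "DIR-certified annual training",
    ],
}

def compliance_gaps(completed: dict[str, set[str]]) -> dict[str, list[str]]:
    """Outstanding items per statute; the agency is compliant only when all are empty."""
    return {
        statute: [item for item in items if item not in completed.get(statute, set())]
        for statute, items in REQUIREMENTS.items()
    }

def is_compliant(completed: dict[str, set[str]]) -> bool:
    return all(not gaps for gaps in compliance_gaps(completed).values())
```

An agency that has finished every TRAIGA item but nothing under SB 1964 still reports four open gaps — which is exactly the failure mode described above.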
Implementation Timeline
- Week 1-2: Draft or adopt AI ethics code aligned with DIR guidance. Identify accountability officer/committee.
- Week 2-4: Build complete AI system inventory. Classify each system by risk level and decision type.
- Week 4-6: Conduct heightened scrutiny assessments for all critical-decision AI systems.
- Week 6-8: Complete TRAIGA prohibited practice screenings and begin NIST alignment scoring.
- Week 8-10: Identify all HB 3512-eligible employees. Schedule DIR-certified training.
- Ongoing: Update inventory on system changes. Re-assess heightened scrutiny annually. Maintain training compliance through fiscal year.
Common Mistakes Agencies Make
- Copy-pasting another agency's ethics code. DIR guidance provides a framework, but your code must reflect your specific AI use cases, risk profile, and decision domains. A generic code won't survive scrutiny.
- Treating the inventory as a one-time project. New AI tools get deployed constantly — embedded AI in SaaS, pilot programs, department-level purchases. Without a process for capturing new systems, the inventory goes stale in months.
- Missing embedded AI. Agencies often inventory their obvious AI systems (chatbots, analytics dashboards) but miss AI embedded in enterprise tools — HR software with AI screening, financial systems with predictive models, communication platforms with sentiment analysis.
- No heightened scrutiny for “assistive” AI. SB 1964 covers AI that assists critical decisions, not just AI that makes autonomous decisions. A system that recommends benefits eligibility to a human reviewer still requires heightened scrutiny.
Automate Government Compliance
TXAIMS is deployer-type aware. When you register as a state agency or local government, the platform automatically activates SB 1964 requirements — ethics code tracking, public inventory management, heightened scrutiny assessment workflows — alongside TRAIGA screening and HB 3512 training compliance. One platform, all three statutes, scored as a unified compliance posture.
Ready to automate your TRAIGA compliance?
TXAIMS screens your AI systems, builds your NIST defense, and generates evidence bundles in minutes.
Start 14-day free trial