Texas Considers Sweeping AI Legislation: 5 Things Employers Need to Know


A Texas lawmaker recently introduced a potentially groundbreaking bill that could force Texas employers to comply with the nation’s most comprehensive state-level AI standards. Introduced by Rep. Giovanni Capriglione (R) on December 23, the Texas Responsible AI Governance Act (TRAIGA) takes a risk-based approach to AI regulation similar to European-style regulatory schemes, with significant implications for employers across industries. Here are five things businesses need to know and what you might expect in the future – for Texas employers and for businesses across the country.

1. High-Risk AI Systems Targeted for Oversight

TRAIGA (officially HB 1709, which you can read here) classifies AI systems by their potential impact on “consequential decisions,” such as employment, housing, healthcare, and education. According to the bill, “consequential decisions” are defined as those that materially affect an individual’s access to or conditions of these essential services, including hiring or other employment matters.

If an AI system can substantially influence such decisions, it is considered “high-risk” – and therefore employers using such tools would have to:

  • Conduct semiannual impact assessments;
  • Document how these systems are trained and tested; and
  • Report steps taken to prevent algorithmic discrimination.

This mirrors the EU AI Act and Colorado’s AI law, both of which also apply a risk-based framework to AI systems with potentially high societal impacts. However, TRAIGA’s definitions are broader and more ambiguous than Colorado’s, while its reporting requirements are more stringent than the EU’s – potentially creating even greater compliance burdens for businesses.

2. Key Obligations for Developers, Deployers, and Distributors

TRAIGA would impose distinct obligations based on an entity’s role:

  • Developers: Must maintain extensive documentation of training data and disclose potential algorithmic risks.
  • Deployers (such as employers): Must conduct regular bias audits and inform individuals when AI impacts decision-making. However, the bill does not create any affirmative obligation on the part of employers to accommodate applicants or workers who would like to opt out of being subject to AI decision-making.
  • Distributors: Must withdraw or disable non-compliant systems when issues arise.

3. Banned and Restricted AI Applications

The law would prohibit AI uses deemed to present “unacceptable risks,” including:

  • Social scoring;
  • Developing inferences based on sensitive personal attributes (e.g., race, color, religion, disability, sex, national origin, age, etc.);
  • Capturing biometric identifiers using AI; and
  • AI designed to manipulate human behavior.

4. The Texas AI Council and Regulatory Sandbox

TRAIGA would establish a state-level AI Council to issue guidelines and monitor compliance. Additionally, it would create a “regulatory sandbox” allowing businesses to test innovative AI applications under limited regulatory scrutiny. This echoes global AI regulatory strategies – but this type of strategy has faced skepticism from critics who argue the council may lack the resources to effectively oversee such a complex domain.

5. Bigger Picture Implications

While Texas is traditionally known for its pro-business, low-regulation stance, TRAIGA marks a surprising departure. AI regulations of this magnitude are more commonly seen in progressive states like Colorado and Illinois, and in proposed legislation from California and New York.

  • Could this suggest that concerns about algorithmic bias and AI governance might cross ideological lines and become bipartisan issues?
  • Or is this an aberrant bill doomed to fail in a red state that doesn’t want to put the brakes on innovation and business opportunities?

What’s Next?

As of now, the bill remains in its early stages, with no further legislative action since its December 2024 introduction. The bill’s author, Rep. Capriglione, is no stranger to the tech world – he owns a private equity company and worked in computer engineering roles for close to a decade before assuming office. He also appears to be in the good graces of party leadership, as he was just handed a high-visibility role in the state’s House DOGE committee, so there is a chance this bill could gain traction. If enacted, TRAIGA would reshape the AI landscape in Texas and serve as a model – or cautionary tale – for other states.

However, the overall chances of the bill passing in its current form remain below 50%. The issue of AI regulation is somewhat dividing state Republican party leaders who are trying to balance the apparent need for some sort of regulation with the goal of encouraging innovation and business growth.

What Should You Do?

For now, monitor developments closely and prepare for potential compliance requirements. If you subscribe to FP’s Insight system, we’ll track this bill and provide a detailed update and compliance guide should the proposed law gain traction. In the meantime, companies should always consider staying one step ahead of potential regulation and compliance challenges by implementing an AI governance program, including:

  • Reviewing existing AI-driven processes, particularly in HR and hiring.
  • Auditing AI tools for algorithmic bias and documenting results.
  • Training staff to oversee high-risk AI applications effectively.

Conclusion

We’ll continue to monitor developments in this ever-changing area and provide the most up-to-date information directly to your inbox, so make sure you are subscribed to Fisher Phillips’ Insight System. If you have questions, contact your Fisher Phillips attorney, the authors of this Insight, any attorney in our Texas offices, or any attorney in our AI, Data, and Analytics Practice Group.