State Spotlight: Utah
Key Takeaways
- With companies increasingly integrating artificial intelligence into their workflows, it is important to monitor and adapt to a patchwork of AI disclosure laws across jurisdictions.
- In this inaugural alert in a series spotlighting state AI laws and regulations, we examine Utah’s law, which requires individuals in certain occupations to disclose the use of generative AI at the outset of high-risk interactions with consumers.
Companies are increasingly integrating artificial intelligence into their workflows to provide services to consumers. There is an ongoing debate over the extent to which businesses should be regulated in their use of artificial intelligence, and as a result, the regulatory landscape is rapidly evolving.
In this inaugural edition of our State Spotlight series, we will examine the current landscape of AI regulation at the federal and state levels, using Utah’s recently enacted AI disclosure law to highlight the kinds of issues that companies—particularly those that operate in consumer-facing environments—should monitor. We’ll then provide practical guidance on how companies can strengthen their AI disclosure compliance in the face of this swiftly evolving landscape. Future installments in this series will examine AI regulations and compliance in other states that have adopted similar laws.
The Federal Landscape
The “One Big Beautiful Bill Act,” signed July 4, 2025, originally included a 10-year moratorium preventing states from regulating artificial intelligence. The moratorium aimed to foster AI innovation and avoid a fragmented regulatory landscape across states. Critics argued the moratorium would undermine states’ ability to protect against AI-related harm. After a failed compromise to shorten the moratorium to five years, the Senate voted 99–1 to strike the provision, allowing individual states to regulate AI.
Just weeks later, on July 23, 2025, the Trump administration released the AI Action Plan. While not a binding order, this plan recommends that federal agencies withhold AI-related discretionary funding from states with a burdensome “AI regulatory climate” that would waste those funds. What qualifies as “burdensome” remains undefined, leaving federal agencies wide latitude to pressure states that implement aggressive AI oversight.
A Regulatory Patchwork of States: Utah’s Disclosure Law
Despite uncertainty at the federal level, several states have already enacted laws regulating the use of AI. Utah was among the first, with SB 226 taking effect on May 7, 2025. This law requires an “individual providing services in a regulated occupation” to disclose the use of “generative AI” at the outset of “high-risk” AI interactions. The bill defines these terms as follows:
- “Regulated occupations” are those that require a license or state certification to practice, such as doctors, financial advisors, or attorneys.
- “Generative AI” is defined narrowly as a system designed to simulate human conversation with a consumer and that uses non-scripted outputs, with limited or no human oversight.
- “High-risk” interactions include collecting sensitive personal information or providing advice for significant personal decisions.
Many businesses wonder whether these laws apply to them. To provide clarity, we have developed a set of hypotheticals illustrating how Utah’s disclosure requirement might play out in real-world scenarios. As a general rule, businesses can comply with SB 226 in one of two ways: by requiring human oversight of the use of AI, or by configuring the AI to make the necessary disclosure(s) itself.
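To make the second compliance path concrete, the sketch below shows one way a chatbot could be configured to deliver the disclosure itself at the outset of a high-risk interaction. The class name, the disclosure wording, and the placeholder model call are all illustrative assumptions, not model legal language or a compliance-approved implementation.

```python
# Hypothetical sketch: a chatbot wrapper that emits an AI-use disclosure
# at the start of any interaction flagged as high-risk. All names and the
# disclosure wording are illustrative only.

DISCLOSURE = (
    "Notice: You are interacting with generative artificial intelligence, "
    "not a human."
)

class DisclosingChatSession:
    def __init__(self, high_risk: bool):
        self.high_risk = high_risk
        self.disclosed = False
        self.transcript: list[str] = []

    def respond(self, user_message: str) -> str:
        # Make the disclosure once, at the outset of a high-risk interaction.
        if self.high_risk and not self.disclosed:
            self.transcript.append(DISCLOSURE)
            self.disclosed = True
        reply = self._generate(user_message)
        self.transcript.append(reply)
        return reply

    def _generate(self, user_message: str) -> str:
        # Placeholder standing in for the actual generative-model call.
        return f"[model reply to: {user_message}]"

session = DisclosingChatSession(high_risk=True)
session.respond("Can you review my retirement plan?")
print(session.transcript[0])  # the disclosure precedes the first model reply
```

Whether a particular interaction counts as “high-risk,” and what disclosure language suffices, are legal questions that counsel should answer; the point of the sketch is only that the disclosure logic can be enforced in the application layer rather than left to the model.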
Takeaways
Utah is not alone. Other states, like California, have also enacted their own AI disclosure rules. The patchwork of state laws may soon require businesses to manage compliance across multiple states.
In this period of uncertainty, with no single federal framework in place, companies should map where and how AI is deployed in their workflows, especially in consumer-facing or high-risk contexts. Maintaining internal records will not only help satisfy current obligations, but also position businesses to adapt quickly as new disclosure laws emerge. Businesses should also familiarize themselves with proposed legislation, if any, in jurisdictions of interest and consult legal counsel to anticipate future regulations that may require compliance.
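One lightweight way to maintain such an internal inventory is a structured record per AI deployment that can be filtered for disclosure review. The fields below are illustrative assumptions, not a compliance standard; the categories should be adapted with counsel.

```python
# Hypothetical sketch of an internal AI-use inventory; field names and
# categories are illustrative, not a legal or compliance standard.
from dataclasses import dataclass, field

@dataclass
class AIDeploymentRecord:
    system_name: str
    business_function: str          # e.g., customer support, financial advice
    consumer_facing: bool
    high_risk: bool                 # e.g., collects sensitive data or advises on significant decisions
    disclosure_method: str          # e.g., "human oversight" or "AI-delivered disclosure"
    jurisdictions: list[str] = field(default_factory=list)

inventory = [
    AIDeploymentRecord(
        system_name="support-chatbot",
        business_function="customer support",
        consumer_facing=True,
        high_risk=False,
        disclosure_method="AI-delivered disclosure",
        jurisdictions=["UT", "CA"],
    ),
]

# Flag deployments that warrant disclosure review under laws like Utah's SB 226.
needs_review = [r for r in inventory if r.consumer_facing and r.high_risk]
```

A record like this makes it straightforward to answer, for any jurisdiction, which deployments are consumer-facing and high-risk and how each one currently satisfies its disclosure obligation.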
Be sure to watch for future installments in this series examining state AI compliance laws. To learn more about how Knobbe Martens can help your organization navigate the complex AI regulatory environment, click here.