On March 13, 2024, Utah Gov. Spencer Cox signed into law S.B. 149 (AI Law), making Utah the first U.S. state to enact a major artificial intelligence (AI) statute governing private-sector AI use. Coincidentally, the European Parliament adopted the EU AI Act1 the same day. The AI Law, which takes effect May 1, 2024, was incorporated into Utah's consumer protection statutes. Its key elements include establishing liability for inadequate or improper disclosure of generative AI (GenAI)2 use and creating the Office of Artificial Intelligence Policy (Office) to administer a state AI program.

Disclosures

Although it is technically not the first U.S. law to address a consumer's interaction with GenAI, at least in certain narrow circumstances,3 Utah's AI Law is the most far-reaching and comprehensive to date. Under the AI Law, a business or natural person that uses GenAI to interact with an individual in connection with commercial activities regulated by Utah's Division of Consumer Protection (Division) must clearly and conspicuously disclose to the individual that he or she is interacting with GenAI and not a human. This requirement applies, however, only if the individual prompts or asks the GenAI to disclose whether he or she is interacting with a human.

The AI Law also imposes more stringent disclosure obligations on persons providing the services of "regulated occupations," such as clinical mental health, dentistry, and medicine.4 When using GenAI to provide such regulated services, these persons must prominently disclose that an individual is interacting with GenAI. Unlike the provisions addressing GenAI disclosure outside professional occupations/services, this obligation applies regardless of whether the individual has asked the GenAI if he or she is interacting with a human. Additionally, for regulated service-related GenAI disclosures, the AI Law specifically requires that the disclosure be provided verbally at the start of an oral exchange or conversation and by electronic message before a written exchange.
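To illustrate how the two disclosure regimes differ in practice, the minimal sketch below models one turn of a written (chat) exchange. It is a hypothetical example only: the function names, the keyword-matching heuristic for detecting an "are you human?" question, and the disclosure wording are assumptions made for illustration, not language drawn from the statute.

```python
# Hypothetical compliance sketch of the AI Law's two GenAI disclosure regimes.
# Function names, the keyword heuristic, and the disclosure wording are
# illustrative assumptions, not statutory language.

AI_DISCLOSURE = "You are interacting with generative AI, not a human."


def asks_if_human(user_message):
    """Rough heuristic: did the user ask whether they are talking to a human or a bot?"""
    text = user_message.lower()
    return any(phrase in text for phrase in (
        "are you human", "are you a bot", "is this ai", "am i talking to a person",
    ))


def respond(user_message, regulated_occupation, exchange_started, generate_reply):
    """Assemble one turn of a written GenAI exchange.

    Regulated occupations (e.g., mental health, dentistry, medicine): disclose
    prominently before the written exchange begins, whether or not the user asks.
    Other regulated commercial activity: disclose clearly and conspicuously, but
    only when the user asks whether they are interacting with a human.
    """
    messages = []
    if regulated_occupation and not exchange_started:
        messages.append(AI_DISCLOSURE)   # unprompted, up-front disclosure
    elif asks_if_human(user_message):
        messages.append(AI_DISCLOSURE)   # disclosure on request
    messages.append(generate_reply(user_message))
    return messages


# Example: a non-regulated retail chatbot discloses only when asked.
print(respond("Are you a bot?", regulated_occupation=False, exchange_started=True,
              generate_reply=lambda m: "Happy to help with your order."))
```

A production system would, of course, require far more robust question detection and disclosure placement than this keyword check suggests; the sketch is meant only to show where each regime's trigger sits in the conversation flow.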

In a novel preemptive maneuver, the AI Law expressly bars a person from avoiding consumer protection or fraud liability by blaming the GenAI itself as an intervening factor.

Enforcement

The AI Law grants the Division enforcement authority for violations, allowing the Division director to impose administrative fines of up to $2,500 per violation. It further permits the Division to seek in court a declaratory judgment that a particular act or practice violates the AI Law, injunctive relief, fines of up to $2,500 per violation (in addition to any administrative fines), and disgorgement, with payment of the disgorged sums to the individuals harmed by the violation. If the Division prevails in such an action, it is also entitled to recover its attorneys' fees, investigative fees, and court costs.

The Office of Artificial Intelligence Policy

The AI Law includes the Artificial Intelligence Policy Act (AIPA), which creates the Office within the Department of Commerce. The AIPA sets forth the Office's duties as follows:

(a) running the AI Learning Laboratory Program (Learning Lab);

(b) consulting with state businesses and stakeholders about regulatory proposals;

(c) engaging in rulemaking concerning, among other things, application fees and procedures for participation in the Learning Lab; criteria for invitation to, acceptance into, and removal from the Learning Lab; data usage limitations and cybersecurity criteria for Learning Lab participation; and consumer disclosures for Learning Lab participants; and

(d) annually reporting to the Business and Labor Interim Committee the Learning Lab's proposed agenda, its outcomes and related findings, and recommended legislation arising from such findings.

The AI Learning Laboratory Program

The Learning Lab's purpose is to analyze and research AI risks, benefits, impacts, and policy implications in order to produce findings and legislative recommendations that inform Utah's regulatory framework. It also aims to promote AI technology development in Utah and, together with AI companies, to evaluate the effectiveness and viability of current, potential, and proposed AI legislation.

One benefit of acceptance into the Learning Lab is that participants using or seeking to use AI technology in Utah may apply to enter into a "regulatory mitigation agreement" with the Office and other relevant state agencies for a 12-month period (with a single 12-month extension available under certain circumstances). A regulatory mitigation agreement essentially allows a Learning Lab participant to develop and test AI technology while enjoying certain benefits with respect to potential liability arising from the AI testing (e.g., delayed restitution payments, a cure period before penalties are assessed, and reduced civil fines during the participation term).

Conclusion

The AI Law's provisions proscribing the deceptive use of GenAI may result in large monetary penalties if businesses do not comply with the applicable disclosure requirements. However, unlike the EU AI Act, the AI Law has little impact on the regulation of the development of AI technology.5 Rather, the focus is on the end use of an already-developed technology. Nevertheless, the AI Law's enactment may signal a coming wave of state-level AI regulation, with numerous AI bills already introduced in state legislatures across the nation.6

Footnotes

1. See GT Alert.

2. The AI Law defines "[g]enerative artificial intelligence" to "mean[] an artificial system that: (i) is trained on data; (ii) interacts with a person using text, audio, or visual communication; and (iii) generates non-scripted outputs similar to outputs created by a human, with limited or no human oversight."

3. Note that in 2018, California enacted the Bolstering Online Transparency Act (BOT Act), which allows businesses and individuals to avoid liability for deceptive "bot" usage by posting a clear, conspicuous disclosure reasonably designed to inform users that they are interacting with the bot. Cal. Bus. & Prof. Code § 17941 (eff. Jan. 1, 2019). However, compared to the AI Law, the BOT Act is narrow in that it makes unlawful only bot usage "to communicate or interact with [a] person in California online, with the intent to mislead the ... person about [the bot's] artificial identity for the purpose of knowingly deceiving the person about the content of the communication in order to incentivize a purchase or sale of goods or services in a commercial transaction or to influence a vote in an election." Cal. Bus. & Prof. Code § 17941(a) (emphasis added).

4. The AI Law defines "[r]egulated occupation" to "mean[] an occupation regulated by the Department of Commerce that requires a person to obtain a license or state certification to practice the occupation."

5. See generally Utah S.B. 149; Caitlin Andrews, Private-sector AI bill clears Utah Legislature, IAPP, March 6, 2024 (last visited March 29, 2024).

6. See, e.g., CA AB 2013 (2024) (concerning AI training data transparency); CA AB 2930 (2024) (concerning requirements for deployers of automated decision tools); CA SB 970 (2024) (concerning deepfakes); VA HB 747 (2024) (concerning AI development); CO HB 24-1147 (2024) (concerning the use of a deepfake in communication related to a candidate for elected office); NY AB 7106 (2023) (requiring political communications to disclose their creation with the assistance of AI).
