
What You Need To Know About Utah's New AI Policy Act


Adapting to the Changing Ad Tech Environment

On the evening of Friday, March 15, 2024, Utah Governor Spencer Cox signed the Artificial Intelligence Policy Act, which takes effect on May 1, 2024. Its main focus is to promote the responsible use of AI by private-sector businesses while complementing the privacy laws already in place across the country. As one of the first states to pass an AI safety bill, Utah's approach emphasizes keeping humans involved in AI-driven processes.

 

Key Definitions


  • Artificial Intelligence: Defined as any technology capable of performing tasks or exhibiting behaviors that typically require human intelligence, such as problem-solving, decision-making, and learning.


  • Algorithm: A set of instructions or rules followed by a computer program to perform a specific task or solve a problem. Algorithms serve as the foundations of AI systems, governing their behavior and functionality.


  • Bias: Refers to systematic errors or inaccuracies in decision-making processes that result in unfair treatment or discrimination against certain individuals or groups. Mitigating bias is essential for ensuring fairness and equality in AI applications.

 

Scope

The AI Policy Act extends its reach across a broad spectrum, addressing multiple facets of AI integration, utilization, and advancement within Utah's private sector. Its scope spans fields as diverse as healthcare, finance, transportation, and education, illustrating the widespread impact of AI across industries. The Act also establishes a new Office of AI Policy and an accompanying AI Learning Lab Program, giving developers of emerging technology a supervised space to work safely. Participants who choose to take part in the program can develop and test AI tools while mitigating potential regulatory enforcement, under the oversight of the Office of AI Policy, which administers the program.

 

Enforcement and regulations

The principal aim of this legislation is to bring the use of generative AI within Utah's existing consumer protection statutes. Using a generative AI model for unlawful purposes may incur criminal charges. Further, non-compliance with the law carries the possibility of an administrative fine of up to $2,500 and a civil penalty of up to $5,000.


Intending to hold creators accountable rather than the technology itself, the AI Policy Act makes two subtle yet significant contributions to this discussion, mandating AI disclosure in two main scenarios:


  • Disclosure in Regulated Activities:

    • When a person employs, initiates, or facilitates interactions with Generative Artificial Intelligence in activities regulated and enforced by the Utah Division of Consumer Protection.

    • Disclosure is required if the interacting individual asks or prompts for it, ensuring they know they are interacting with AI and not a human (a minimal sketch of one way to implement this appears after this list).

  • Disclosure in Regulated Occupations:

    • In occupations regulated by the Utah Department of Commerce that require a license or certification, where generative AI is permitted, its use must be disclosed prominently at the outset of the interaction, whether or not the consumer asks.
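
How a business meets the on-request disclosure obligation is left open by the statute. As a purely illustrative sketch, and assuming a simple chat backend where the reply generator is passed in as a callable (the patterns, wording, and function names below are our own assumptions, not anything prescribed by the Act), a chatbot might watch for questions about whether the user is talking to a machine and attach a disclosure to its reply:

```python
import re

# Illustrative only: one way a chatbot backend might honor the
# "disclose on request" rule for activities regulated by the Utah
# Division of Consumer Protection. The statute does not prescribe
# an implementation; patterns and wording here are assumptions.

AI_QUESTION_PATTERNS = [
    r"\bare you (an? )?(ai|bot|robot|computer)\b",
    r"\bam i (talking|chatting|speaking) (to|with) a (human|person|bot|ai)\b",
    r"\bis this (an? )?(ai|bot|automated)\b",
]

DISCLOSURE = (
    "You are interacting with generative artificial intelligence, "
    "not a human representative."
)

def user_asked_about_ai(message: str) -> bool:
    """Return True if the user appears to be asking whether they are talking to AI."""
    text = message.lower()
    return any(re.search(pattern, text) for pattern in AI_QUESTION_PATTERNS)

def respond(message: str, generate_reply) -> str:
    """Generate a reply and prepend the disclosure when the user asks about AI."""
    reply = generate_reply(message)
    if user_asked_about_ai(message):
        return f"{DISCLOSURE}\n\n{reply}"
    return reply
```

A real deployment would likely route the "is this AI?" check through the model itself or a classifier rather than regular expressions, but the compliance logic stays the same: detect the question, then disclose clearly before continuing.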

 

Provisions

Businesses looking to develop AI technologies need not feel threatened by possible regulatory violations. With provisions designed to encourage innovation without fear, the Act shifts the focus to:


  • Transparency and Accountability- The Act emphasizes the importance of transparency and accountability in AI systems, requiring developers and users to disclose crucial information such as data sources, algorithms employed, and potential biases or risks associated with AI applications.


  • Data Protection- In alignment with existing privacy laws, the Act establishes safeguards to protect individuals' privacy rights and sensitive personal data used in AI systems. It sets forth guidelines for data collection, processing, and storage to ensure compliance with privacy regulations.


  • Fairness and Bias Mitigation- Addressing concerns about algorithmic bias and discrimination, the Act mandates measures to mitigate bias and promote fairness in AI systems. Regular audits and assessments of AI algorithms are required to identify and rectify any biases or disparities in outcomes.
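
The Act does not spell out what such an audit should measure. As one illustrative possibility, not a requirement of the law, an audit could track how far an AI system's positive-outcome rates differ across groups, a gap often described as a demographic parity difference. The group labels and example data below are synthetic assumptions:

```python
from collections import defaultdict

# Illustrative audit metric: difference in positive-outcome rates
# across groups (demographic parity gap). The Act mandates audits
# but not any specific metric; this is one common starting point.

def selection_rates(records):
    """records: iterable of (group_label, outcome) pairs, where outcome is 0 or 1."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, outcome in records:
        totals[group] += 1
        positives[group] += outcome
    return {group: positives[group] / totals[group] for group in totals}

def demographic_parity_gap(records):
    """Largest difference in selection rate between any two groups."""
    rates = selection_rates(records)
    return max(rates.values()) - min(rates.values())

# Synthetic example: loan-approval decisions tagged with a group label.
decisions = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
print(selection_rates(decisions))         # A is roughly 0.67, B roughly 0.33
print(demographic_parity_gap(decisions))  # roughly 0.33
```

In practice an audit would look at several metrics (error rates, false-positive gaps, and so on) and at how they trend over time, rather than a single number.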

 

Safe practice and business impact

Deliberately opting for a less restrictive approach, the legislation prioritizes innovation by extending existing consumer protection regulations to cover AI use, aligning with evolving technology while safeguarding consumer interests. When integrating AI tools into your business, whether through in-house development or external vendors, prioritize robust contracts, conduct thorough risk assessments, provide comprehensive employee training, and establish human oversight for ethical deployment.


Businesses can prepare for the Act by making the most of their testing periods to mitigate possible penalties. Participants in the AI Learning Lab Program are given a 12-month period to test and revise their AI products; should a violation of the regulations occur during this period, the participant is given a cure period and a further reduced penalty. Beyond that, businesses can implement stringent measures such as those mentioned above to ensure compliant and ethical deployment of AI tools, and can deploy systems that automatically track and disclose AI usage to consumers, as sketched below.
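
What such a tracking-and-disclosure system looks like is up to each business; nothing in the Act prescribes a format. The sketch below, with hypothetical field names, log path, and disclosure wording, simply records each AI-assisted reply to an internal audit log and appends a notice to the customer-facing text:

```python
import json
import time
from pathlib import Path

# Illustrative audit trail for AI-assisted responses. The Act does not
# require any particular logging format; the fields, file path, and
# disclosure text below are assumptions made for this sketch.

LOG_PATH = Path("ai_usage_log.jsonl")
DISCLOSURE = "This response was generated with the assistance of AI."

def ai_response(prompt: str, generate_reply, channel: str) -> str:
    """Generate a reply, record that AI was used, and attach a disclosure notice."""
    reply = generate_reply(prompt)
    record = {
        "timestamp": time.time(),
        "channel": channel,           # e.g. "support_chat" or "email"
        "prompt_chars": len(prompt),  # store sizes, not content, to limit data retention
        "reply_chars": len(reply),
        "disclosed": True,
    }
    with LOG_PATH.open("a", encoding="utf-8") as log_file:
        log_file.write(json.dumps(record) + "\n")
    return f"{reply}\n\n{DISCLOSURE}"
```

Keeping the log append-only and free of message content is one way to align this kind of tracking with the data protection provisions discussed above.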
