
What Telemarketers Need to Know About Colorado's New AI Regulations

Colorado's new AI regulations are setting a precedent in the fight against algorithmic discrimination. The Colorado AI Act mandates stringent oversight for developers and deployers of high-risk AI systems, ensuring fair treatment across sectors like education, employment, and healthcare. Compliance is essential to avoid severe penalties and promote ethical AI usage.

The Centennial State officially leads the nation in taking steps to comprehensively regulate the use of artificial intelligence (AI) technology and prevent algorithmic discrimination.

On May 17, 2024, Governor Jared Polis signed Colorado Senate Bill 24-205, the Colorado AI Act (CAA), into law. Unlike the more limited AI laws enacted in states like Florida and Utah, the CAA takes a risk-based approach to AI regulation, targeting the developers and deployers of AI systems that are deemed to have a high risk of “algorithmic discrimination.”

Covered Technology

The CAA applies to all developers and “deployers” (i.e., companies using the technology) of “high-risk artificial intelligence systems” that do business in Colorado. A “high-risk AI system” is defined by the CAA as one that makes, or is a “substantial factor” in making or altering the outcome of, a consequential decision. A decision is consequential if it affects the cost, provision, or denial of any of the following:

  • Education or education opportunities
  • Employment or employment opportunities
  • Financial or lending services
  • Essential government services
  • Healthcare services
  • Housing
  • Insurance, or
  • Legal services.

The law also covers the use of AI to generate any content, decision, prediction, or recommendation about a consumer that is then used to make a consequential decision about that consumer.

Excluded Technology

An AI system is not considered “high-risk” if it is intended to perform a narrow procedural task or to detect decision-making patterns (or deviations from them), so long as it is not intended to replace human assessment without review.

Additionally, anti-fraud technology that does not use facial recognition, databases and data storage, cybersecurity technology, and firewalls are not considered “high-risk” AI systems. The same is true of generative AI tools, such as chatbots designed to provide users with information, provided these tools are barred from generating discriminatory or harmful content.
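To make the scope concrete, a company triaging its AI inventory might encode the covered domains and exclusions above as a first-pass screening check. The Python sketch below is a hypothetical, simplified reading of the definitions summarized in this section; the domain names, exclusion labels, and is_high_risk function are illustrative assumptions, not statutory terms, and no substitute for legal analysis.

```python
# Hypothetical first-pass CAA screening: covered domains and exclusions
# are simplified from the summaries above; this is not legal advice.

# Domains in which a consequential decision triggers "high-risk" status.
CONSEQUENTIAL_DOMAINS = {
    "education", "employment", "financial_or_lending",
    "essential_government_services", "healthcare", "housing",
    "insurance", "legal_services",
}

# Exclusions summarized above: narrow procedural tasks, pattern/deviation
# detection subject to human review, anti-fraud tools without facial
# recognition, cybersecurity, firewalls, databases and data storage, and
# guarded informational chatbots.
EXCLUDED_USES = {
    "narrow_procedural_task", "pattern_detection_with_human_review",
    "anti_fraud_no_facial_recognition", "cybersecurity", "firewall",
    "database_or_data_storage", "guarded_informational_chatbot",
}

def is_high_risk(domain: str, substantial_factor: bool,
                 use_type: str | None = None) -> bool:
    """True if a system plausibly meets the CAA's definition: it makes,
    or is a substantial factor in making, a consequential decision in a
    covered domain, and no exclusion applies."""
    if use_type in EXCLUDED_USES:
        return False
    return substantial_factor and domain in CONSEQUENTIAL_DOMAINS

# Example: a resume screener that substantially influences hiring decisions.
print(is_high_risk("employment", substantial_factor=True))  # True
```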

What is Algorithmic Discrimination?

The law requires developers and deployers of high-risk AI systems to exercise reasonable care to avoid algorithmic discrimination, which occurs when the use of AI results in unlawful differential treatment of, or a disparate impact on, someone in a class protected from discrimination, such as race, color, ethnicity, national origin, religion, sex, age, disability, or veteran status.

In contrast, AI systems used to increase diversity or rectify past discrimination do not engage in algorithmic discrimination under the law. In addition, discriminatory acts or omissions resulting from the use of AI do not constitute algorithmic discrimination under the CAA if performed by a private club or other establishment that is not open to the public.

Developer Responsibilities

The CAA imposes a duty of care on developers of high-risk AI systems to protect consumers from any risks of algorithmic discrimination associated with the AI systems they develop. The law creates a rebuttable presumption that developers exercised reasonable care if they follow certain procedures when developing high-risk AI systems, chiefly the following (a sketch of one way to track these disclosures appears after the list):

  • Providing deployers of the high-risk AI system with certain information, including the purpose, intended use, intended benefits, potential risks, and any foreseeable algorithmic discrimination associated with using the AI system, along with information on how to manage those risks.
  • Providing deployers with the documentation required to conduct an impact assessment of the system.
  • Disclosing to deployers and the Colorado Attorney General any known or reasonably foreseeable risks of algorithmic discrimination within 90 days of discovering such risks.
  • Releasing information to the general public summarizing the types of high-risk AI systems they develop or modify.
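One way a developer might operationalize these disclosure items is to keep a structured record per system and hand it to each deployer. The field names below are hypothetical labels for the content listed above, not language from the statute.

```python
# Hypothetical record of the CAA developer disclosures described above.
from dataclasses import dataclass, field

@dataclass
class DeveloperDisclosure:
    system_name: str
    purpose: str
    intended_use: str
    intended_benefits: str
    # Known or foreseeable risks, incl. algorithmic discrimination.
    known_risks: list[str] = field(default_factory=list)
    # How deployers can manage those risks.
    risk_mitigations: list[str] = field(default_factory=list)
    # Documentation deployers need to run an impact assessment.
    impact_assessment_docs: list[str] = field(default_factory=list)

# Example packet for a hypothetical hiring tool.
packet = DeveloperDisclosure(
    system_name="ResumeRanker",
    purpose="Rank job applicants for recruiters",
    intended_use="Assist, not replace, human screening",
    intended_benefits="Faster initial review of applications",
    known_risks=["Potential disparate impact on protected classes"],
    risk_mitigations=["Periodic bias audits", "Human review of rejections"],
    impact_assessment_docs=["model_card.pdf", "training_data_summary.pdf"],
)
```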

Deployer Responsibilities

The CAA also imposes a duty of reasonable care on companies that deploy high-risk AI systems to protect consumers from any known or foreseeable risks of algorithmic discrimination, and it likewise creates a rebuttable presumption that a deployer exercised the requisite level of care if it follows certain procedures.

Among other things, deployers must review each high-risk AI system at least annually for any evidence of algorithmic discrimination, inform consumers about decisions made using the system, and provide them with an opportunity to correct any erroneous information used to make a consequential decision. Companies with more than 50 employees that use the AI system in certain ways have additional obligations to meet their duty of care.
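The annual-review requirement lends itself to a simple recurring check. The sketch below assumes a 365-day cadence as a conservative reading of “at least annually”; the function and data model are hypothetical.

```python
# Hypothetical reminder for the CAA's annual discrimination review.
from datetime import date, timedelta

REVIEW_INTERVAL = timedelta(days=365)  # conservative "at least annually"

def review_overdue(last_reviewed: date, today: date | None = None) -> bool:
    """True if the system's last algorithmic-discrimination review
    is more than a year old."""
    today = today or date.today()
    return today - last_reviewed > REVIEW_INTERVAL

print(review_overdue(date(2024, 1, 15), today=date(2025, 6, 1)))  # True
```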

Additional Disclosure Obligations

Any developer or deployer that makes or uses an AI system intended to interact with consumers is required to disclose to those consumers that they are interacting with an AI system and not a live person.
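In practice, this can be handled by leading every consumer-facing session with a clear notice. The wording and helper below are illustrative assumptions, not statutory text.

```python
# Hypothetical AI-interaction disclosure for a consumer-facing chatbot.
AI_DISCLOSURE = (
    "You are chatting with an automated AI assistant, not a live person."
)

def open_chat_session(first_message: str) -> list[str]:
    """Prepend the AI disclosure to the start of every session."""
    return [AI_DISCLOSURE, first_message]

for line in open_chat_session("Hi! How can I help you today?"):
    print(line)
```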

Enforcement

Enforcement of the CAA rests solely with the Colorado Attorney General, who is also empowered to adopt additional rules to implement the law. Any violation of the Colorado AI Act will be considered an unfair trade practice. Thankfully, the law does not include a private right of action.

The Road Ahead

Developers and companies that use AI to make consequential decisions should keep a close eye on the CAA and its requirements and prepare for compliance. The alternative is to steer clear of doing business in Colorado, but it is only a matter of time before other states enact equally expansive statutes regulating the use of AI technology to curb the potential effects of algorithmic discrimination.

One such state is Connecticut, whose legislature is seeking to pass SB 2, titled “An Act Concerning Artificial Intelligence,” which appears to mirror the CAA in many ways.

FAQ 1: What is the primary purpose of the Colorado AI Act (CAA)?

Answer: The primary purpose of the Colorado AI Act (CAA) is to regulate the use of high-risk artificial intelligence systems to prevent algorithmic discrimination. The CAA targets developers and deployers of AI systems that significantly impact consequential decisions, ensuring these systems do not unlawfully discriminate against protected classes such as race, gender, and age.

FAQ 2: Who is affected by the Colorado AI Act?

Answer: The Colorado AI Act affects all developers and deployers of high-risk AI systems doing business in Colorado. This includes companies that use AI to make or influence consequential decisions in areas like education, employment, financial services, healthcare, housing, insurance, and legal services. It also applies to AI systems that create content, predictions, or recommendations about consumers used in consequential decision-making.

FAQ 3: What are the responsibilities of AI developers and deployers under the Colorado AI Act?

Answer: Under the Colorado AI Act, developers are required to exercise reasonable care to avoid algorithmic discrimination and provide deployers with detailed information about their AI systems, including potential risks and ways to manage them. Deployers must review AI systems annually for evidence of discrimination, inform consumers about AI-based decisions, and allow consumers to correct erroneous information. Both developers and deployers must disclose to consumers when they are interacting with an AI system.
