Keypoint: If the bill is signed into law, Colorado will become the first state to enact legislation regulating the use of high-risk artificial intelligence systems.
On May 8, the Colorado legislature passed the Colorado Artificial Intelligence Act (SB 205). If Governor Jared Polis signs the bill, Colorado will become the first state to enact legislation that broadly addresses the use of artificial intelligence, in particular its use in high-risk activities. The bill is co-sponsored by Senate Majority Leader Robert Rodriguez and Representatives Manny Rutinel and Brianna Titone.
Below, we first provide context and background on the bill and then summarize its provisions.
I. Background
A. Bill Drafting Origins and Process
Starting last summer, a bipartisan group of lawmakers from nearly thirty states participated in a multi-state artificial intelligence workgroup led by Connecticut Senator James Maroney and facilitated by the Future of Privacy Forum. The workgroup was designed to educate state lawmakers interested in the topic and to coordinate approaches across states to promote interoperability. It met seven times and heard from AI experts across many fields. Senator Maroney also separately chaired a Connecticut lawmaker workgroup on the same topic.
After the multi-state workgroup concluded, Senators Maroney and Rodriguez coordinated their bill-drafting efforts, circulating an initial draft to stakeholders before the Colorado and Connecticut legislative sessions opened. Senator Maroney then filed SB 2 in early February and engaged in further stakeholder outreach that resulted in multiple rounds of revised drafts. Senator Rodriguez filed SB 205 on April 10; the bill largely tracked the then-current version of Connecticut SB 2, although it included some Colorado-specific terms, such as a provision allowing for Attorney General rulemaking that was not present in the Connecticut bill.
Senator Maroney’s bill passed the Connecticut Senate on April 24 but was ultimately blocked from a vote in the Connecticut House after Governor Ned Lamont threatened a veto. Connecticut’s legislative session (like Colorado’s) adjourned on May 8. Colorado Governor Polis has not yet taken a public position on the Colorado bill.
B. Innovation versus Regulation
During the workgroup and legislative process, a stated goal was to create a legislative structure that allows continued innovation while providing basic guardrails to protect consumers. The drafters attempted to strike this balance in a few ways.
First, the bill imposes general duties of care on developers and deployers (defined below) to use reasonable care to protect consumers from known or reasonably foreseeable risks of algorithmic discrimination. However, it creates a rebuttable presumption that developers and deployers used such reasonable care if they follow the bill’s requirements, such as providing disclosures and documentation. The bill also creates an affirmative defense for developers and deployers that discover and cure violations while complying with a recognized AI risk management framework. This structure was designed to incentivize actors to engage in upfront risk mitigation activities.
Second, the bill does not contain a private right of action and is only enforceable by the Attorney General’s office.
Third, as originally introduced, the bill contained a section regulating general-purpose artificial intelligence models. That section was removed to narrow the bill’s focus and to address concerns raised during the stakeholder process, including during the Senate committee hearing.
Whether the bill’s structure ultimately strikes an appropriate balance between these two interests remains to be seen. Advocates from different organizations have argued both that the bill does not go far enough and that it goes too far.
C. Delayed Effective Date and Workgroup
If the bill is signed by the Governor and becomes law, it will not go into effect until February 1, 2026, almost two years after passage. In a separate bill (HB 1468), Colorado lawmakers created a workgroup to continue studying the bill’s provisions. This structure gives the legislature another session to make amendments (if needed) and addresses criticisms that the bill was rushed. The bill also permits (but does not require) the Colorado Attorney General’s office to engage in rulemaking. Those familiar with the CCPA’s implementation will recall that the California legislature similarly deferred the CCPA’s effective date and passed several amendments during the next legislative session.
II. Summary
A. Scope
The bill creates a new Part 16 in Colorado’s Consumer Protection Act (where the Colorado Privacy Act resides as Part 13). At its core, the bill is anti-discrimination legislation focused on the use of high-risk artificial intelligence systems (although section 6-1-1604 applies more broadly, as explained below).
The bill generally applies to developers and deployers of high-risk artificial intelligence systems. A “developer” is a person doing business in Colorado that develops or intentionally and substantially modifies an artificial intelligence system. A “deployer” is a person doing business in Colorado that deploys a high-risk artificial intelligence system. “Person” is defined in C.R.S. § 6-1-102(6) as “an individual, corporation, business trust, estate, trust, partnership, unincorporated association, or two or more thereof having a joint or common interest, or any other legal or commercial entity.”
An artificial intelligence system is defined as “any machine-based system that, for any explicit or implicit objective, infers from the inputs the system receives how to generate outputs, including content, decisions, predictions, or recommendations, that can influence physical or virtual environments.”
A high-risk artificial intelligence system is “any artificial intelligence system that, when deployed, makes, or is a substantial factor in making, a consequential decision.” The term excludes several types of technologies, such as anti-fraud technology that does not use facial recognition, anti-malware, data storage, databases, and artificial intelligence-enabled video games and chat features (to name a few), so long as they do not make, and are not a substantial factor in making, a consequential decision.
The bill defines a “consequential decision” as “a decision that has a material legal or similarly significant effect on the provision or denial to any consumer of, or the cost or terms of: (a) education enrollment or an education opportunity; (b) employment or an employment opportunity; (c) a financial or lending service; (d) an essential government service; (e) health-care services; (f) housing; (g) insurance; or (h) a legal service.” As originally introduced, the bill also included “a criminal justice remedy” and “an essential good or service” in the definition of “consequential decisions,” but those categories were removed.
B. Developer Duties
Section 6-1-1602 applies to developers and creates a general duty of care, stating “a developer of a high-risk artificial intelligence system shall use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination arising from the intended and contracted uses of the high-risk artificial intelligence system.”
“Algorithmic discrimination” is defined as “any condition in which the use of an artificial intelligence system results in an unlawful differential treatment or impact that disfavors an individual or group of individuals on the basis of their actual or perceived age, color, disability, ethnicity, genetic information, limited proficiency in the English language, national origin, race, religion, reproductive health, sex, veteran status, or other classification protected under the laws of this state or federal law.”
Section 6-1-1602 goes on to create a rebuttable presumption that a developer used reasonable care if they comply with the requirements in that section.
Pursuant to section 6-1-1602, developers must make available to deployers or other developers of high-risk artificial intelligence systems:
- A general statement describing the reasonably foreseeable uses and known harmful or inappropriate uses of the high-risk artificial intelligence system;
- Documentation disclosing things such as a high-level summary of the type of data used to train the high-risk artificial intelligence system and known or reasonably foreseeable limitations of the system;
- Documentation describing things such as how the system was evaluated for performance and mitigation of algorithmic discrimination and the intended outputs of the high-risk system;
- Any additional documentation reasonably necessary to assist the deployer in understanding the outputs and monitoring the performance of the system for risks of algorithmic discrimination; and
- Documentation and information necessary for a deployer to complete an impact assessment.
Developers also must make available on their website or in a public use case inventory: (1) a statement summarizing the types of high-risk artificial intelligence systems that the developer has developed or intentionally and substantially modified and currently makes available to a deployer or other developer; and (2) information about how the developer manages known or reasonably foreseeable risks of algorithmic discrimination that may arise.
Finally, if a developer learns that its high-risk artificial intelligence system has been deployed and has caused, or is reasonably likely to have caused, algorithmic discrimination, the developer must disclose that fact to the Attorney General and to all known deployers or other developers within ninety days of discovery.
C. Deployer Duties
Section 6-1-1603 applies to deployers. Like section 6-1-1602, this section creates a duty of care for deployers to “use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination” and creates a rebuttable presumption that deployers used reasonable care if they follow the requirements of the section.
The section requires deployers to:
- Risk Management Program. Implement a risk management policy and program to govern their deployment of a high-risk artificial intelligence system (the requirements of which are outlined in the bill);
- Impact Assessment. Complete an impact assessment for the high-risk artificial intelligence system or contract with a third party to complete that assessment (the requirements of which are outlined in the bill);
- Notifications to Consumers. Notify consumers if the deployer uses a high-risk artificial intelligence system to make, or be a substantial factor in making, a consequential decision concerning the consumer, and provide the consumer with a statement disclosing information such as the purpose of the system and the nature of the consequential decision, as well as, if applicable, information regarding the right to opt out of profiling under the Colorado Privacy Act;
- Appeal. If the high-risk artificial intelligence system is used to make a consequential decision that is adverse to the consumer, provide certain information to the consumer regarding that decision and provide the consumer an opportunity to appeal it, which must, if technically feasible, allow for human review; and
- Website Disclosures. Make available on their websites a statement summarizing information such as the types of high-risk artificial intelligence systems that are currently deployed by the deployer and how the deployer manages known or reasonably foreseeable risks of algorithmic discrimination.
Certain types of small businesses do not need to comply with the risk management, impact assessment, and website disclosure requirements referenced above.
Finally, if a deployer discovers that a high-risk artificial intelligence system has caused algorithmic discrimination, it must notify the Attorney General within ninety days of discovery.
D. Duty to Disclose Use of Artificial Intelligence Systems
Section 6-1-1604 requires a deployer or other developer that deploys or otherwise makes available an artificial intelligence system intended to interact with consumers to disclose to each consumer that they are interacting with an artificial intelligence system, unless that fact would be obvious to a reasonable person.
E. Exemptions
Section 6-1-1605 sets forth over five pages of exemptions, which are too extensive to summarize here. Of note, the bill does not contain any entity-level exemptions. Rather, exemptions are primarily based on whether the developer or deployer’s use of the high-risk artificial intelligence system is subject to some other regulatory oversight or, in certain cases, a “substantially equivalent or more stringent” law or regulation.
F. Enforcement
The bill is enforceable exclusively by the Colorado Attorney General; there is no private right of action. The Attorney General is authorized to require developers and deployers to provide certain information regarding their documentation.
In any enforcement action, there is an affirmative defense if the developer, deployer, or other person discovers and cures the violation and is otherwise in compliance with NIST’s Artificial Intelligence Risk Management Framework, another nationally or internationally recognized risk management framework for artificial intelligence, or a risk management framework designated by the Attorney General.
G. Effective Date
If the bill becomes law, it will go into effect on February 1, 2026.
H. Rulemaking
The Attorney General’s office is authorized, but not required, to promulgate regulations. The bill identifies a list of rulemaking topics.