Navigating Colorado’s New Artificial Intelligence Act (CAIA)

By: Kim Adamson

The Colorado Artificial Intelligence Act (CAIA), also known as Senate Bill 24-205, was signed into law by Colorado Governor Jared Polis in May 2024 and will take effect on February 1, 2026. The law makes Colorado the first state in the nation to enact comprehensive AI regulation aimed at preventing algorithmic discrimination. It regulates the private sector's use of AI systems and imposes reasonable-care requirements on Colorado employers to protect consumers from bias in high-risk AI systems. The law applies to both developers and deployers conducting business in Colorado, regardless of their physical location.

When Governor Polis signed the CAIA, he also sent a letter to Colorado's General Assembly expressing concerns, shared by legislators, stakeholders, and industry leaders, about the need to "fine-tune" the CAIA before its effective date to adequately protect technology, competition, and innovation in the state. The Colorado Artificial Intelligence Impact Task Force was subsequently established, comprising policymakers, industry insiders, and legal experts. After months of meetings and discussions, the Task Force published its Report and Recommendations in February 2025.

On April 28, 2025, Colorado Senate Majority Leader Robert Rodriguez introduced Senate Bill 25-318 (the “Artificial Intelligence Consumer Protections”), which was assigned to the Business, Labor, & Technology Committee. The bill aimed to amend the existing CAIA. However, the bill was postponed indefinitely by the Senate on May 5, 2025.

On May 5, 2025, Governor Polis, Attorney General Phil Weiser, Denver Mayor Mike Johnston, and others sent another letter to the Colorado legislature requesting that the CAIA's effective date be delayed until January 2027. Efforts to postpone or amend the law during the 2025 legislative session were unsuccessful, however, and the CAIA is set to take effect as signed into law on February 1, 2026. Supporters of a delay have urged the Governor to call a special legislative session to push back implementation and propose amendments that protect privacy and fairness without hindering innovation and business in the state.

First, let us define some of the terms used in the CAIA: 

  • Algorithmic Discrimination refers to unlawful differential treatment or impact that harms an individual or group of individuals based on protected characteristics, such as age, sex, race, color, ethnicity, national origin, religion, disability, veteran status, genetic information, reproductive health, and limited English proficiency.
  • Consequential Decisions refer to decisions with a material legal or similarly significant effect on areas such as employment and educational opportunities, housing, insurance, healthcare services, and financial or lending services. For employers, this directly impacts decisions regarding hiring, promotions, compensation, performance evaluations, and terminations.
  • Consumers are Colorado residents, including employees and job applicants.
  • Deployers are those who utilize AI systems and must exercise reasonable care to safeguard consumers from algorithmic discrimination related to these systems. Employers are deemed deployers when they use an AI system to make or assist in making employment-related decisions, such as reviewing resumes, conducting AI-driven video interviews, and making other hiring and promotion choices.
  • Developers are those who create or substantially modify AI systems and must use reasonable care to protect consumers from known or reasonably foreseeable risks of algorithmic discrimination. Employers may be considered developers if they create or substantially modify an AI system.
  • High-risk AI Systems are defined as those systems, including predictive AI systems, that make, or are a substantial factor in making, "consequential decisions." The definition generally excludes generative AI systems such as ChatGPT unless the system is used in making consequential decisions.

As Deployers of AI, Employers Face Several Critical Obligations:

  • Reasonable Care to Prevent Discrimination: Employers must exercise “reasonable care” to protect against known or foreseeable risks of algorithmic discrimination in high-risk AI systems. Compliance with recognized risk management frameworks, such as the National Institute of Standards and Technology (NIST) AI Risk Management Framework, can provide a rebuttable presumption of reasonable care and diligence.
  • Risk Management Policy and Program: Employers deploying high-risk AI systems must implement a robust risk management policy and program. This program should outline the principles, processes, and personnel involved in identifying, documenting, and mitigating the risks of algorithmic discrimination.
  • Impact Assessments: Annual impact assessments are required for each high-risk AI system and must be conducted within 90 days following any substantial modification. These assessments should evaluate potential discrimination risks, examine data inputs and outputs, and outline measures taken to mitigate risks.
  • Consumer Notification:
    • Prior to Consequential Decisions: Employers must notify consumers (including job applicants and employees) when a high-risk AI system is used to make, or is a substantial factor in making, a consequential decision. This disclosure should describe the AI system’s purpose, the nature of the decision, and provide contact information.
    • Adverse Decisions: If an adverse consequential decision is made by or substantially influenced by an AI system, the employer must provide a statement detailing the principal reasons for the decision, the degree to which the AI system contributed, and the types and sources of data processed by the system.
  • Opportunity to Correct and Appeal: Individuals subject to an adverse consequential decision must be given an opportunity to correct any incorrect personal data used by the AI system and to appeal the decision, ideally with human review.
  • Public Disclosure: Deployers must make clearly and readily available on their websites information about the types of high-risk AI systems they deploy, how they manage discrimination risks, and the nature, source, and extent of the information collected and used by those systems.
  • Reporting to the Attorney General: Employers must notify the Colorado Attorney General within 90 days of discovering that a high-risk AI system has caused or is reasonably likely to cause algorithmic discrimination.

Exemptions: The law provides certain exemptions, notably for organizations with fewer than 50 employees that do not use their own data to train or enhance AI systems. However, employers should carefully review the specific criteria for these exemptions, as other conditions may apply. 

Tips and Next Steps for Employers to Proactively Prepare for CAIA Before the Effective Date of February 1, 2026:

  1. Inventory AI Use: Conduct a thorough audit of all current and planned AI systems used in your organization, particularly those involved in “consequential decisions” related to employment (e.g., resume screening, performance management, promotion recommendations). Identify which systems qualify as “high-risk.”
  2. Understand Your Role (Developer vs. Deployer): Determine if your organization acts solely as a “deployer” of third-party AI systems or if you also function as a “developer” by creating or substantially modifying AI. Each role carries distinct responsibilities. If the organization has a developer role, review and implement the requirements stated in the CAIA.
  3. Establish an AI Governance Framework: Develop a comprehensive AI governance strategy that aligns with the CAIA’s requirements. This includes:
    • Risk Management Policy: Create a written policy detailing how your organization identifies, assesses, mitigates, and monitors algorithmic discrimination risks. Consider adopting or aligning with established frameworks, such as NIST’s AI Risk Management Framework.
    • Impact Assessment Protocol: Design a process for conducting and documenting annual impact assessments for high-risk AI systems.
    • Data Governance: Establish clear guidelines for data collection, storage, use, and security, especially for data used to train or inform AI systems.
  4. Review Vendor Contracts: If you utilize third-party AI systems, scrutinize your contracts with AI developers and vendors. Ensure they provide the necessary documentation, disclosures, and assurances regarding compliance with the CAIA’s developer obligations. Understand how liability is allocated in the case of algorithmic discrimination.
  5. Develop Notification Procedures: Draft clear and concise notices for job applicants and employees regarding the use of AI in consequential decisions. Create a process for providing detailed explanations for adverse decisions and facilitating requests for data correction and appeals.
  6. Train Employees: Educate HR, legal, compliance, and other relevant teams on the CAIA’s requirements, potential risks of algorithmic discrimination, and internal policies and procedures for AI use.
  7. Monitor AI Systems for Bias: Implement ongoing monitoring and testing of AI systems to detect and address any algorithmic discrimination. This is crucial for demonstrating “reasonable care” and for meeting the reporting requirement to the Attorney General.
  8. Stay Informed: The AI regulatory landscape is rapidly evolving. Continue to monitor guidance from the Colorado Attorney General’s office and legislative developments at both the state and federal levels. While attempts to delay and amend the CAIA failed in the 2025 session, the conversation around AI regulation is ongoing, and future adjustments are possible and expected.
  9. Legal Counsel: Engage with legal counsel experienced in AI and employment law to assist with compliance assessments, policy drafting, and ongoing risk management.
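
The bias monitoring described in step 7 can take many forms, and the CAIA does not prescribe a specific statistical test. As one illustrative example only, the sketch below applies the EEOC's "four-fifths rule" heuristic, a common first-pass screen for adverse impact in selection outcomes; the group names and counts are hypothetical, and any real monitoring program should be designed with legal counsel.

```python
# Illustrative sketch (not a CAIA requirement): screening AI-driven
# selection outcomes with the EEOC four-fifths rule heuristic.
# A group is flagged if its selection rate falls below 80% of the
# highest group's selection rate. Group names and counts are hypothetical.

def selection_rates(outcomes):
    """Map each group to its selection rate.

    outcomes: dict of group name -> (number selected, total applicants)
    """
    return {group: selected / total
            for group, (selected, total) in outcomes.items()}

def four_fifths_check(outcomes):
    """Return True for groups meeting the 80% threshold, False otherwise."""
    rates = selection_rates(outcomes)
    top_rate = max(rates.values())
    return {group: (rate / top_rate >= 0.8)
            for group, rate in rates.items()}

# Hypothetical outcomes from an AI-assisted resume screen:
outcomes = {
    "group_a": (48, 100),  # 48% selection rate (highest)
    "group_b": (30, 100),  # 30% rate; 30/48 = 62.5% of the top rate
}
print(four_fifths_check(outcomes))  # group_b falls below the 80% threshold
```

A failed check does not itself establish algorithmic discrimination, but documenting this kind of ongoing testing can support a showing of "reasonable care" and inform whether a report to the Attorney General is warranted.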

Colorado’s AI Act represents a significant step toward the responsible deployment of AI. By taking proactive measures now, employers can navigate this new regulatory environment, mitigate potential risks, and promote a fair and equitable approach to AI in their operations.
