Colorado’s Artificial Intelligence Act and Workplace Investigations

By: Abigail Leinsdorf Garber

Developments in artificial intelligence (AI) are emerging at a rapid pace, and companies are eager to identify ways in which they can rely on AI to reduce costs and improve efficiency. While workplace investigations may seem like a perfect place to automate, employers should be aware of Colorado’s Artificial Intelligence Act (CAIA), how it limits the use of AI to conduct such investigations, and, more generally, what they might be losing by turning to AI instead of seasoned workplace investigators.

Some workplace investigators rely on AI technology to summarize interview notes, generate interview questions, identify inconsistencies within transcripts, modify tone, or perform other data-synthesis tasks that might otherwise take significant time. Given the massive strides AI technology has made in recent years, it is not far-fetched to envision software that claims to accurately analyze investigation data and produce investigation findings in the near future. (See below for why, legal limitations aside, it would be wise to think twice before relying on such software.) Enter the CAIA.

Governor Polis signed the CAIA (C.R.S. § 6-1-1701) into law on May 17, 2024, and it is set to go into effect on February 1, 2026 (though some government officials, including Polis, AG Phil Weiser, and Denver Mayor Mike Johnston, have requested a delayed effective date of January 2027). The CAIA covers “high-risk” AI systems and defines such a system as one “that, when deployed, makes, or is a substantial factor in making, a consequential decision.” C.R.S. § 6-1-1701(9)(a). “Consequential decision” is defined in the law as a “decision that has a material legal or similarly significant effect on the provision or denial to any consumer of, or the cost or terms of . . . employment or an employment opportunity.” Id. at § 6-1-1701(3)(b). A “consumer” is defined as a Colorado resident, so any local employee would qualify. See id. at § 6-1-1701(4).

Though there is not yet a body of case law showing how courts will interpret the CAIA, a workplace investigation yielding substantiated or unsubstantiated findings would presumably be a “substantial factor in making a consequential decision,” such as the termination of one’s employment.

Notably, the law does not treat as “high-risk” AI technologies that “perform a narrow procedural task” or “detect decision-making patterns or deviations from prior decision-making patterns and [are] not intended to replace or influence a previously completed human assessment without human review.” Id. at § 6-1-1701(9)(b). In other words, AI technologies employed by investigators to perform the specific tasks listed above (identifying inconsistencies, generating interview questions, etc.), none of which make or are a substantial factor in making a “consequential decision,” are not considered high-risk AI technologies and are therefore not covered by the CAIA.

When it comes to AI-generated workplace investigation findings, however, the CAIA imposes stringent requirements on any employer intending to rely on them. A “deployer” (i.e., an employer relying on AI to investigate a workplace complaint) must “use reasonable care to protect” against “any known or reasonably foreseeable risks of algorithmic discrimination” by maintaining a risk management policy and program governing the AI system in use. Id. at § 6-1-1703(1)-(2). Deployers or any third party contracted by the deployer (i.e., an investigations firm) must also conduct an impact assessment and repeat the assessment at least annually or within 90 days after modifying the AI system. See id. at § 6-1-1703(3). The CAIA lists eight separate required components of the impact assessment, including whether the deployer used data to customize the system, what metrics it used to evaluate the system’s performance, and a description of how the system is monitored once deployed. See id.

In addition, a deployer must notify “the consumer” (i.e., the impacted employee) that it relied on a high-risk AI system to make, or allowed such a system to be a substantial factor in, a consequential decision concerning the consumer, see id. at § 6-1-1703(4); make a statement on its website disclosing that it relies on such a system, see id. at § 6-1-1703(5); and notify the Colorado Attorney General if it discovers algorithmic discrimination, see id. at § 6-1-1703(7).

In sum, the CAIA requires employers who want to automate workplace investigations to become intimately familiar with the inner workings of any system they use, ensure the technology does not result in discrimination, and provide transparency to employees, the public, and, if necessary, the government about its use.

CAIA aside, workplace investigations like those conducted by ILG cannot be automated without losing much of their value. Sure, interview questions can be prepared ahead of time, but they are also generated on the spot based on a witness’s in-the-moment response to a previous question. AI-generated witness summaries may save time on other investigators’ reports, but a thoughtful report that analyzes, not just summarizes, all necessary facts and considers the bigger picture of the given workplace culture is uniquely human in its creation. While many investigators consider report-writing the ditch-digging of investigations, at ILG, we look forward to and enjoy writing good reports. Thinking through the evidence and what it means, not just whether the pieces fit together, is the reason we do this work.

 
