By: Elizabeth Rooney
Artificial intelligence is transforming nearly every corner of the workplace. But when it comes to impartial workplace investigations, the line between “helpful tool” and “ethical landmine” is sharper than ever. As this technology sweeps into our industry, it is imperative that we – as experienced workplace investigators – decide how we will, and will not, use it.
Here are some thoughts about drawing that line …
Where AI Could Be Used in Workplace Investigations
- Administrative Support: Sorting, Organizing, and Surfacing Information
AI can safely assist with the logistics of investigations. This could include:
- Sorting (and sorting through) large volumes of documents
- Identifying duplicate files
- Flagging relevant keywords in emails, documents or chat logs
Anything AI finds has to be checked, and that checking must be done by a human. We all know investigations are only as good as the data we consider. AI may be able to find that data, but we need to confirm its findings.
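For readers who want to see what this kind of assistance actually involves, here is a minimal sketch in Python. It is illustrative only – the folder name, file types and keywords are hypothetical placeholders, not a recommendation of any particular tool:

```python
import hashlib
from pathlib import Path

# Hypothetical case folder and keywords -- substitute your own.
CASE_FOLDER = Path("case_files")
KEYWORDS = ["complaint", "harassment", "retaliation"]

def file_hash(path: Path) -> str:
    """Content hash used to spot byte-for-byte duplicate files."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

seen = {}  # digest -> first file seen with that content
for doc in sorted(CASE_FOLDER.glob("**/*.txt")):
    digest = file_hash(doc)
    if digest in seen:
        print(f"Duplicate: {doc} matches {seen[digest]}")
        continue
    seen[digest] = doc

    # Flag documents mentioning keywords -- candidates for human review.
    text = doc.read_text(errors="ignore").lower()
    hits = [kw for kw in KEYWORDS if kw in text]
    if hits:
        print(f"Review {doc}: mentions {', '.join(hits)}")
```

Note that a script like this only surfaces candidates; deciding what a flagged document actually means remains the investigator's job.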
- Witness Summaries and (Better) AI Transcripts
These tools have been with us for a while, but their accuracy has improved dramatically. AI-powered transcription can provide nearly instantaneous text versions of interviews. These tools can be used to extract key quotes or facts, flag inconsistencies (for human review) and help investigators digest a large volume of interview data more quickly.
While not perfect, these tools meaningfully reduce administrative burden on investigators and their teams.
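As an illustration of how quote extraction can work, here is a hedged sketch that pulls candidate quotes out of a timestamped transcript. The transcript format and sample text are hypothetical; real transcription tools emit their own formats:

```python
import re

# A hypothetical transcript format: "[HH:MM:SS] Speaker: utterance"
LINE = re.compile(r"\[(\d{2}:\d{2}:\d{2})\]\s+(\w+):\s+(.*)")

def quotes_about(transcript: str, topic: str) -> list:
    """Return (timestamp, speaker, utterance) lines mentioning a topic.
    This surfaces candidate quotes; the investigator decides what they mean."""
    hits = []
    for line in transcript.splitlines():
        m = LINE.match(line)
        if m and topic.lower() in m.group(3).lower():
            hits.append(m.groups())
    return hits

sample = """\
[00:01:12] Interviewer: When did the incident occur?
[00:01:15] Witness: The meeting on March 3rd, after the reorg was announced.
[00:04:02] Witness: I reported it the next morning."""

for ts, speaker, text in quotes_about(sample, "march"):
    print(f"{ts} {speaker}: {text}")
```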
- Creating Concise Compilations of Investigator-created Content (With Oversight)
Some of the latest-generation AI tools can take long investigation reports or data compilations and create more succinct summaries for use in verbal reports or in abbreviated written reports. This carries risk, because AI – even the most advanced versions – still hallucinates, and hallucinations can only be caught by a human who knows the facts of the case.
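To make the oversight point concrete, here is a hedged sketch of a summarization step with a mandatory human-review checkpoint. It assumes the OpenAI Python SDK purely for illustration – the model name is a placeholder, and confidential case material should only go to a service your firm has vetted:

```python
from openai import OpenAI  # assumes the OpenAI Python SDK; any provider would do

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def draft_summary(report_text: str) -> str:
    """Ask a model for a DRAFT summary. Every statement in the draft
    must be verified against the report by a human who knows the case,
    because models can hallucinate details."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; use your firm's approved model
        messages=[
            {"role": "system",
             "content": "Summarize this investigation report concisely. "
                        "Do not add facts that are not in the text."},
            {"role": "user", "content": report_text},
        ],
    )
    return response.choices[0].message.content

# The human-review checkpoint is not optional.
draft = draft_summary(open("report.txt").read())  # hypothetical file
print("DRAFT -- verify every statement before any use:\n", draft)
```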
Where AI Should Never Be Used in Workplace Investigations
- Judgement-related Tasks
Many steps of the investigation process require human judgement – preferably well-trained, well-informed and experienced human judgement:
- Scoping the investigation
- Planning strategies and decisions
- Creating witness outlines
- Evaluating credibility (more on that below)
- Interpreting evidence for relevance, reliability, context, conflicts, and cultural or situational factors
- Distinguishing between intent and impact
AI can help with logistics and document processing, but every stage that involves interpretation, fairness, credibility, ethics or context requires human judgement.
- Making Credibility Determinations
AI cannot and must not be used to evaluate whether a witness’s narrative is credible. Credibility assessments are based on objective criteria, but they involve a critical weighing exercise that implicates judgement, experience and context. This is particularly true because AI cannot reliably account for:
- Cultural communication differences
- Trauma-affected recall
- Emotional regulation under stress
- The impact or importance of history and power dynamics between people
- The potential effects of fear of retaliation
AI has no lived experience, and it cannot meaningfully contextualize human behavior. Credibility assessment requires empathy, and AI has none. A meaningful assessment might depend upon understanding why someone might hesitate, why trauma affects memory, or how fear of retaliation shapes communication. Empathy helps investigators evaluate credibility challenges without assuming deception. AI cannot empathize with fear, shame, grief or trauma.
Humans can be trained to navigate this complexity. AI cannot (and should not).
- Reaching Findings
AI works in probabilities, not truth; it produces likelihoods, not facts. Reaching findings is not just a mechanical process: it involves evaluating context, considering intent and impact, and balancing conflicting accounts. These tasks involve judgement, not pattern recognition.
Human behavior is not a dataset. Findings often hinge on messy, incomplete or nonlinear accounts. AI systems interpret patterns, not people. Human behavior, especially under stress, does not fit clean patterns.
Findings require an understanding of organizational context. Every organization has a unique culture and its own power dynamics. There are historical events that may be implicated, and there can be unspoken norms. AI cannot understand this context; it can only analyze inputs.
AI struggles with ambiguity, which is precisely what humans are built to handle. We often face inconclusive witness statements, conflicting evidence, partial corroboration, nuanced circumstances. Shades of gray. Human investigators can sit with that ambiguity, weigh it and reach holistic findings.
Findings require accountability. It is what “I find” as an investigator that counts; that is how we, as workplace investigators, build trust and legitimacy. AI cannot be held accountable or explain its reasoning. Findings without human judgement collapse trust, because no one stands behind them.
Should We Use AI at All? The Environmental and Ethical Concerns of Using AI
Finally, it is important to remember that AI is not “free.” A $20-a-month subscription may feel almost free, but the technology comes with significant environmental and ethical costs.
These include enormous energy consumption and carbon emissions. Modern generative AI requires massive computational power, consuming staggering amounts of electricity. The growth of AI, and the commensurate growth in data center construction, exacerbates climate change and deepens reliance on fossil fuels unless renewables are explicitly integrated.
AI infrastructure uses huge volumes of water for cooling. This is particularly sensitive and important in drought-prone regions like Colorado, where the decision to expand AI data center operations can directly affect community water security.
The ethical considerations regarding AI are deep and wide. They include issues around knowledge extraction, questionable use models, and the intense tension between the uncontrolled expansion of this technology, the energy and planetary resources it requires, and its actual utility to humankind.
These burdens fall disproportionately on lower-income and marginalized communities. Mining for the minerals this technology depends on often occurs in regions where ecologically destructive extraction has left behind toxic pollutants. Environmental justice concerns regarding the use and expansion of this technology are real and must be weighed.
In Conclusion
This technology could make workplace investigations more organized and more efficient. It could also be misused to harm investigation participants by making credibility assessments and reaching findings that should be reserved for human investigators. If you or your firm decide to take this technology on, put up the guardrails first.
And it is a legitimate decision to say you don’t want to use this tool. This technology is not free, not really, no matter how cheap it is for you each month. There are real costs to the world, to real people and to our future that come with this technology – and with the companies that are bringing it to us with very little constraint and little idea of how to mitigate these impacts. AI is changing our world, in ways we can see coming, and in ways we probably can’t yet see. It is important for all of us to stay informed, and to make the decisions that will best serve our clients and our communities.

