Using Generative AI in Investigations: Concerns and Pitfalls

By: Yoyo Rita

It’s so convenient, so readily available, and so fast. It can parse human language better than ever before. It can read a million books and identify patterns to discuss with a user. It can produce a complete (albeit invariably unexceptional) essay on any topic. But is using generative AI in our workplace investigations field ethical, effective, or desirable? I would argue that it is not, and that one need not look far into recent research to see why.

You may have heard of artificial intelligence models like OpenAI’s ChatGPT and Google’s Gemini demonstrating a behavior dubbed “hallucination,” in which the AI presents false information to human users: everything from inventing nonexistent names when asked to cite authors, to spouting bizarre made-up historical facts,[1] to citing fake legal precedents that have shown up in the courts at alarming rates.[2] Studies suggest that AI hallucinations are only on the rise,[3] making the veracity of AI-generated information all the more tenuous.

This is doubly true when considering the implications of using generative AI in workplace investigations. Aside from the hallucination problem (i.e., the real risk that generative AI fabricates information, which would then taint any materials derived from it), generative AI presents a minefield of other ethical concerns for investigators. Namely, providing AI models with confidential, sensitive materials (e.g., witness interview notes), for instance to assist with data analysis or to draft an initial scope document, could constitute a serious breach of attorney-client privilege and endanger the confidentiality of an investigation.

It is important to note that, as of the date of this article, neither AWI nor other workplace investigations associations such as SHRM have addressed AI usage in their Guiding Principles. We are therefore operating in an ethical vacuum in which we must decide our best professional course of action. While we can, and many do, save time by using generative AI for certain tasks in workplace investigations (e.g., AI-generated transcripts), we must proceed with caution in this new, uncharted territory, lest we endanger the ethical principles that make us trustworthy to our clients across industries.

[1] https://www.newscientist.com/article/2479545-ai-hallucinations-are-getting-worse-and-theyre-here-to-stay/

[2] https://www.technologyreview.com/2025/05/20/1116823/how-ai-is-introducing-errors-into-courtrooms/

[3] https://www.axios.com/2025/06/04/fixing-ai-hallucinations
