Potential Discrimination in AI Tools

By: Sally Roller

On November 20, 2024, the U.S. Department of Education Office for Civil Rights (“OCR”) released a new resource titled Avoiding the Discriminatory Use of Artificial Intelligence to assist K-12 schools and institutions of higher education.[1]  This new resource explains the legal analysis OCR uses to determine whether a school is using artificial intelligence (AI)[2] in a discriminatory manner.  AI technology can enhance opportunities and increase educational equity for students but can also contribute, on a large scale, to discrimination.  OCR’s new resource provides examples of conduct that constitute grounds for an investigation under the following laws: Title VI of the Civil Rights Act of 1964 (Title VI), which prohibits discrimination on the basis of race, color, or national origin; Title IX of the Education Amendments of 1972 (Title IX), which prohibits discrimination on the basis of sex; and Title II of the Americans with Disabilities Act of 1990 (ADA) and Section 504 of the Rehabilitation Act of 1973 (Section 504), which prohibit discrimination on the basis of disability.

Avoiding the Discriminatory Use of Artificial Intelligence explains how OCR will enforce anti-discrimination laws with respect to schools’ use of AI.  The resource includes examples of the types of incidents that may lead OCR to open an investigation.  Avoiding the Discriminatory Use of Artificial Intelligence does not have the force of law, but it provides helpful insight into what circumstances may trigger an OCR investigation, which might ultimately result in fines and penalties for violations.

OCR provides 21 examples where the agency would likely have reason to open an investigation based on the facts provided in the examples.  Some of these examples include:

  • Example 1 (Title VI): A high school English teacher assigns the class a book report. After the students turn in their reports, the teacher grades them but decides to also run the reports through a free online service that inaccurately claims it can spot use of generative AI and prevent cheating and plagiarism. Unbeknownst to the teacher, the service has a low error rate when evaluating essays written by native English speakers for use of generative AI but has a high error rate when evaluating essays written by non-native English speakers. The detector flags as AI-generated the essays written by the class’s only two students who are English learners (ELs), though they did not use AI or plagiarize. As a result, the teacher gives them both a failing grade and writes them up for academic dishonesty. A parent of one of the students appeals the student’s grade and academic dishonesty charge. The principal tells the parent that she believes the AI-detection checker over the student’s objections and that the school will continue to use the checker to ensure all papers are free from plagiarism. OCR would likely have reason to open an investigation based on this complaint. Based on the facts, as alleged, the students who are ELs may not be able to participate equally and meaningfully in the standard instructional program of the school.
  • Example 7 (Title IX): A student uses a generative AI application to send repeated and continuous sexually explicit messages to a classmate’s school email. That classmate’s parents report the issue to administrators, asking the school to take action to stop the messages. The principal states that because she does not have access to the tool the student is using to spam the classmate’s email, she cannot investigate further or prevent the messages from being sent. OCR would have reason to open an investigation based on this complaint. Based on the facts, as alleged, the student receiving the sexually explicit messages may have experienced prohibited harassment about which the school knew and failed to appropriately respond.
  • Example 8 (Title IX): A high school student discovers that a public anonymous social media account is posting images of a female student and other classmates that depict photographs of the students’ faces with AI-created nude bodies (sometimes referred to as “deepfake nudes”) and include sexually explicit comments and tags. Each new post by the account becomes a popular topic of discussion among students at the school. Another student brags to friends at the school about creating the account and is then reported to administrators. A third student reports the social media account to her parents, who in turn file a complaint with the school. The school tells the parents that it is aware of the account due to multiple reports and has reported it to the police but that it cannot do anything further to ameliorate the situation until the police complete their investigation. OCR would have reason to open an investigation based on this complaint. Based on the facts, as alleged, the students may have experienced prohibited harassment about which the school knew and failed to appropriately respond.
  • Example 13 (Section 504): An AI test proctoring software uses facial recognition technology and eye movement tracking to monitor students for behavior that indicates they might be cheating during exams. A student with a disability receives a failing grade for an exam after the software flags her behavior as suspicious and her professor accuses her of cheating. The student appeals the grade because her vision impairment causes eye movements that the software falsely flagged as suspicious. The student also requests that she receive an academic adjustment so that she does not have to take tests using this particular proctoring software. The university threatens to expel the student if she is flagged for the same behavior again and does not respond to the student’s request. OCR would have reason to open an investigation based on this complaint. Based on the facts, as alleged, the student may not have been provided with necessary academic adjustments.

[1] U.S. Dep’t of Educ. Office for Civil Rights, Avoiding the Discriminatory Use of Artificial Intelligence (Nov. 20, 2024), available at https://www.ed.gov/media/document/avoiding-discriminatory-use-of-ai.

[2] The U.S. Dep’t of Educ. defines AI as a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments.