University of Washington Leads the Way in Tackling AI-Generated Sexual Abuse

In July 2025, the University of Washington (“UW”) became a trailblazer among universities in addressing the challenges AI-generated images have presented on campuses.  UW updated its sexual violence policy to include AI-generated sexual abuse in its definition of “sexual exploitation.”[1]  With this change, UW now prohibits AI-generated sexual abuse on campus.

UW’s policy on sexual harassment now defines sexual exploitation as the following:

[S]exual exploitation involves taking nonconsensual or abusive advantage of another for the purposes of sexual arousal or gratification, financial gain, or other personal benefit. Examples of sexual exploitation include:

  1. Transmitting, distributing, publishing, or threatening to transmit, distribute, or publish photos, video, or other recordings or images of a private and sexual nature — including consensual sexual activity — without the consent of the subject(s);
  2. Taking, making, sharing, or directly transmitting photographs, films, digital images, or generated images of the private body parts of another person without that person’s consent;
  3. Prostituting another person; or
  4. Knowingly allowing another to surreptitiously watch otherwise consensual sexual activity.[2]

UW’s policy frames sexual exploitation as a form of sexual misconduct that goes beyond nonconsensual sexual contact or assault.

AI-generated sexual abuse involves using generative AI to create fake but realistic sexual images or videos, often called deepfakes, of a person without that person’s consent.[3]  Examples of such synthetic images include deepfake videos that digitally place a child’s face on an adult’s body and digitally altered photos of real children that make them appear nude or engaged in sexual acts.[4]

UW’s inclusion of AI-generated sexual abuse in its policies did not appear overnight.  The inclusion occurred after months of advocacy by Survivors + Allies in Law (SAIL), a student organization at UW’s School of Law that champions survivors of sexual violence, sexual harassment, and gender-based abuse.  SAIL worked with UW’s Title IX Office to develop the policy change. “As legal professionals, I believe we have a duty to think about the negative implications of emerging technology,” said Kendra Kalaf, co-director of SAIL’s Technology Committee.[5] Kalaf continued, “Non-consensual AI-generated imagery isn’t a harmless byproduct of technology — it’s a new form of sexual harassment.”

UW’s definition of sexual exploitation is deliberately broad: it extends beyond the classic view of sexual assault to encompass the misuse of sexual images. The University’s policy change reflects a modern understanding of the harm that AI-generated sexual images inflict on college students.

[1] EO No. 81 Prohibiting Discrimination, Harassment, and Sexual Misconduct, available at: http://policy.uw.edu/directory/po/executive-orders/eo-81-prohibiting-discrimination-harassment-and-sexual-misconduct/

[2] SGB 210 Student Conduct Policy for Discriminatory and Sexual Harassment, Intimate Partner Violence, Sexual Misconduct, Stalking, and Retaliation, available at: https://policy.uw.edu/directory/sgp/sgp-210-student-conduct-policy-for-discriminatory-and-sexual-harassment-intimate-partner-violence-sexual-misconduct-stalking-and-retaliation/

[3] What Is Synthetic or AI-Generated CSAM?, available at: https://rainn.org/get-the-facts-about-csam-child-sexual-abuse-material/what-about-ai-generated-csam-like-deepfakes/

[4] Id.

[5] The Legal Argument for Universities to Fight AI-Generated Sexual Abuse, available at: https://www.kcba.org/?pg=News-Bar-Bulletin&blAction=showEntry&blogEntry=132018
