Fit to Lead: November 2024
Artificial intelligence (AI) continues to skyrocket in popularity with each passing day. For students, AI tools like ChatGPT and DALL-E play a role in everyday tasks, serving as homework helpers for tackling complicated assignments and convenient tools for creating striking images.
Unfortunately, for all of AI’s practical applications, there have been countless incidents of abuse and misuse. AI has been used to clone real people’s voices or likenesses and manipulate photos, videos, and audio clips. In some cases, the purpose of such content is to entertain, like with videos of presidents playing video games. However, others are designed to intentionally mislead the viewer, such as ads using doctored videos of celebrities to scam consumers. These edited digital media are known as deepfakes.
The deepfake epidemic has spread to the world of K–12 education, with devastating results. A particularly egregious example is the editing of students’ and educators’ faces onto intimate photos and videos in order to humiliate and extort the victims. The recent influx of deepfake attacks in schools is spurring lawmakers to pursue legislation against the misuse of AI.
In this article, we break down how deepfakes are impacting schools and what lawmakers are doing to combat the issue. We also share practical tips on how educators can join the fight against deepfake abuse.
Deepfakes’ Impact
AI deepfake incidents have surged 3,000% in the past year, and we are seeing more of these attacks affecting K–12 school communities across the United States. With the growing prevalence of deepfake generators and software, anyone can be a victim or a perpetrator, including teachers, principals, and students.
In schools, perpetrators have exploited deepfake software for multiple purposes, from spreading damaging disinformation about educators to extorting students. Below are real-life examples of the damage deepfake abuse has caused in school communities.
Defamation
At a high school in Maryland, an athletic director forged an audio recording of the principal making disparaging comments about Black students and Jewish individuals. The clip was posted online, causing the principal to be placed on leave and inciting a flood of hate messages on social media. Not only were the principal and his family put at risk, but staff expressed feeling unsafe at school.
Firearm Violence Threats
Students at a high school in New York created an AI-generated recording of their principal going on a racist tirade against students and making a shooting threat against the school. TikTok removed the video, and disciplinary action was taken against the perpetrators, but the damage it had caused was irreversible, with students of color fearing for their safety after the incident.
Sexual Abuse
In Texas, a middle school teacher’s face was superimposed onto a pornographic video as a “prank.” The video was AirDropped to staff and students.
Adam Dodge, founder of EndTAB (Ending Tech-Enabled Abuse), commented on the incident: “This is a form of abuse. This is a crime. This is a form of sexual violence. It’s not just a goof or something fun to do online. It actually harms people and ruins lives.”
Alarmingly, school-age children are also being targeted with doctored explicit content, which is often distributed and sold on deepfake porn websites and apps. In February of this year, five eighth graders in California were accused of using generative AI to create fake nude photos of 16 classmates. The images were circulated among other students via messaging apps. One student said that, after the incident, they and their peers were afraid to attend school, fearing they could be the next targets. According to the president of the Cyber Civil Rights Initiative, Mary Anne Franks, “The problem with image-based abuse is once the material is created and out there, even if you punish the people who created them, these images could be circulating forever.”
Lawmakers’ Response
U.S. lawmakers are pursuing legislation at the federal and state levels to mitigate the risks of AI deepfake abuse.
In September of 2023, Representative Yvette D. Clarke introduced the Defending Each and Every Person from False Appearances by Keeping Exploitation Subject to Accountability Act of 2023, also known as the DEEPFAKES Accountability Act. The act aims to “protect national security against the threats posed by deepfake technology and to provide legal recourse to victims of harmful deepfakes.”
Additionally, Representative Anna G. Eshoo sponsored the Protecting Consumers from Deceptive AI Act in March of this year, which would “establish task forces to…ensure that audio or visual content created or substantially modified by generative artificial intelligence includes a disclosure acknowledging the generative artificial intelligence origin of such content.”
As of July, the Disrupt Explicit Forged Images and Non-Consensual Edits Act of 2024 (also known as the DEFIANCE Act of 2024) had passed the Senate. Introduced by Senator Richard J. Durbin, this act would “improve rights to relief for individuals affected by non-consensual activities involving intimate digital forgeries.”
Individual U.S. states have also proposed or enacted laws to regulate deepfake abuse. Ten states, including California, Texas, and New York, have passed laws prohibiting the creation and distribution of sexually explicit deepfakes. Depending on the state, violations are punishable by jail time or fines. In several states, victims can also sue for damages.
What You Can Do
As legislation is drafted and enacted to curb the misuse of AI, it is crucial that educators take action to arm themselves against this threat and defend those being targeted.
- Engage in media literacy education. Make sure that you and your students are trained in how to identify and respond to fake images and content generated by AI. Consider looking into a digital literacy curriculum, such as the Social Media Literacy Curriculum by the nonprofit organization Digital4Good (shown above).
- Join the Social Media Task Force. Digital4Good is assembling a task force of educators to develop sample policies for schools to help prevent deepfake incidents and provide support for victims and survivors. Whether you’re a teacher, administrator, or resource officer, please consider joining the task force today.
- Call the Cyber Civil Rights Initiative helpline. If you or someone you know has been targeted by deepfake abuse, preserve all evidence and call the CCRI Image Abuse Helpline at 1-844-878-2274, which is available free of charge 24/7.
- Request to have the images taken down. Create a case with StopNCII.org, a free tool designed to support victims of Non-Consensual Intimate Image (NCII) abuse. If the victim is under 18, the National Center for Missing & Exploited Children may be able to remove the content through their Take It Down service.
Conclusion
The proliferation of malicious deepfake abuse has caused incalculable damage to the lives of students and educators across the nation. Not only does it jeopardize the safety and well-being of victims, but it also affects the entire school community. It ultimately impairs the ability of students and staff to feel safe at school.
Digital literacy education is a vital resource for preventing and mitigating the impact of deepfake abuse. At a minimum, educators should hold discussions with students about how to identify deepfakes and walk them through the steps of reporting malicious content.
The next time you see a shocking video or photo on social media, don’t take it at face value. If it’s someone you know, check if the person’s mannerisms match up with their real-life interactions. More importantly, if the content is malicious or intentionally misleading, don’t hesitate to report it and encourage friends and colleagues to do the same. The more reports a post gets, the faster it can get taken down.
Janiyah Gaston is a senior at Southern Illinois University Carbondale and a public relations intern at Digital4Good, where Marisa McAdams is an administrative assistant. Learn more at icanhelp.net.
References
Finley, B. (2024, April 25). Athletic director used AI to frame principal with racist remarks in fake audio clip, police say. Associated Press. apnews.com/article/ai-artificial-intelligence-principal-audio-maryland-baltimore-county-pikesville-853ed171369bcbb888eb54f55195cb9c
Hurtado, D. (2023, April 19). Aldine ISD middle school teacher demands accountability after face used in ‘deep fake’ porn video. ABC13. abc13.com/texas-teacher-deep-fake-video-aldine-isd-investigation-shotwell-middle-school-pornographic-prank-public-safety/13157020/
Jimenez, K., Weise, E., & Santucci, J. (2024, January 26). Were Taylor Swift explicit AI photos illegal? US laws are surprising and keep changing. USA Today. usatoday.com/story/news/nation/2024/01/26/was-deepfake-taylor-swift-pornography-illegal-can-she-sue/72359653007/
Onfido. (2024). Identity fraud report 2024. onfido.com/landing/identity-fraud-report/
Paul, M. L. (2023, March 14). Students made a racist deepfake of a principal. It left parents in fear. The Washington Post. washingtonpost.com/nation/2023/03/14/racist-deepfakes-carmel-tiktok/
Tenbarge, K. (2024, March 8). Beverly Hills middle school expels 5 students after deepfake nude photos incident. NBC News. nbcnews.com/tech/tech-news/beverly-hills-school-expels-students-deepfake-nude-photos-rcna142480