
“Is this you?”: How AI Is Being Used to Exploit Teens and What You Can Do About It

Picture this: You are a regular teenage girl. As with most teenage girls, you like to post about yourself on your Instagram. Innocent pictures, like random school pictures, hangouts with your friends, and birthday posts. You also text your friends whenever you’re bored until one day, a text pops up stating the following.

“Is this you?”

You open it and find sexually explicit content of yourself. You know you have never done anything like what these videos show, so how is your face plastered onto them?

“That’s not me.”

Although most people assume that deepfakes and AI-generated videos are just for funny TikTok posts, charged political content (which falls under misinformation), or creating art without the need for an artist, they can also be used to make sexually explicit content of regular people, like yourself. The story outlined at the start of this article isn’t a figment of the imagination. It is the shared experience of South Korean schoolgirls who have endured years of sexual abuse through chat apps like Telegram, in chats filled with teenage boys and relatives rather than the stereotypical “old man online” character attached to sexual abusers. As one PBS article details, the effects of nonconsensual deepfake content on these victims’ lives have been devastating; one victim attempted suicide after finding videos of herself in explicit situations (Source).

It doesn’t stop in South Korea, either. As one Thorn article states, an estimated 1 in 10 minors reported knowing of cases where peers created nudes of other kids. It is a widespread epidemic, and although many victims suffer from depression, anxiety, and deep isolation afterward, there has been little to no legislative push around nonconsensual deepfake content. However, there are some steps you can take yourself, as provided by the National Sexual Violence Resource Center.

Navigating Nonconsensual AI Imagery

Let’s be real about deepfakes.

What To Do

If you see something messed up online, report it ASAP on whatever app you found it on.
If the deepfake involves CSAM, go to Take It Down (run by the National Center for Missing and Exploited Children) and report it there. They actually help get it removed.
If the person behind this goes to your school, tell a guidance counselor or social worker. It’s not snitching—it’s making sure they don’t do this to anyone else.
If your friend is the victim and they’re too scared to speak up, have their back and encourage them to report it, or report it for them.
Talk about this. With your friends, on social media, with family—people need to know this is happening. The more people call it out, the more pressure there is to stop it.

What NOT To Do

Don’t ignore it. If you see something shady, don’t just keep scrolling. Pretending it’s not there won’t make it go away.
Don’t share it. Even if it’s to say “WTF is this?”—sharing makes it spread faster and makes things worse for the victim.
Don’t joke about it. This isn’t just “drama” or some internet scandal. It ruins real people’s lives.
Don’t blame the victim. They didn’t “deserve” this or “put themselves in that situation.” Deepfakes can happen to anyone.
Don’t stay quiet. If it’s happening to someone you know, speak up. Silence just helps the people doing this get away with it.

Nicole F.

2024-2025 Youth Innovation Council Member

Need to talk?

Text NOFILTR to 741741 for immediate assistance.