The use of artificial intelligence (AI) to create highly realistic fake nude images is becoming more prevalent, and campaigners warn that this disturbing trend is being "normalized". The problem is particularly alarming among teenagers: a survey by Internet Matters found that 13% of adolescents have encountered nude deepfakes. The NSPCC has warned that a new harm to children is emerging as a result.

In response, Ofcom plans to introduce new codes of practice requiring internet companies to curb the illegal sharing of fake nudes. But victims such as social media influencer Cally Jane Beech contend that existing laws need to be more robust. Beech discovered that an AI-generated nude image of her had circulated online after someone manipulated a clothed photograph from her underwear brand. She described the result as alarmingly convincing: "It looked so realistic, like nobody but me would know. It was like looking at me, but also not me." Her frustration deepened when her report to police went nowhere; officers seemingly lacked the tools to tackle what is a global problem.

Assistant Chief Constable Samantha Miller highlighted this inadequacy when addressing MPs, saying "the system is failing" because of inconsistent practice across police forces. New legislation outlawing the creation of AI-generated nudes is expected next year, but the situation remains precarious: although creating fake nudes of minors is already illegal, the proliferation of easily accessible apps designed to manipulate images increases the risk of victimization, with women overwhelmingly the targets.

Experts such as Professor Clare McGlynn have observed a significant surge in sexually explicit deepfakes, with one notorious website receiving approximately 14 million visits a month. McGlynn said: "These nudify apps are easy to get from the app store, they're advertised on TikTok, so young people are downloading them and using them. We've normalized the use of these nudify apps." That normalization has sharpened concern about the psychological toll on victims, many of whom suffer profound emotional distress.

An anonymous victim, referred to as "Jodie", described being targeted with a fake sexual video uploaded to a pornographic site. The perpetrator had her intimate photos turned into sexually explicit content, and the trauma was compounded by the discovery that the culprit was her own best friend. Jodie's perseverance led to a conviction, but only for the offensive comments posted alongside the fake images, because soliciting the creation of such content is not an offence under current law, a gap that underscores the urgency of legal reform.

Deepfake images are also increasingly used as tools for bullying, particularly against young girls in schools, and the potential harm is alarming. Cally noted that the practice is often dismissed as a joke, ignoring the severe psychological repercussions for victims, who are frequently left feeling isolated and frightened.

The NSPCC has noted a rise in calls to its helpline about nude deepfakes, stressing the need for protective measures tailored to children's safety online. This emerging harm not only calls for legal redress but also raises pressing questions about how society adapts to the rapid advance of digital technology and its implications for mental health.

As the government prepares to legislate against the creation of deepfakes, victims such as Cally and Jodie, and those who advocate for them, fear that new laws may not go far enough to cover the solicitation and distribution of harmful content. The urgency of effective legal protection against AI-generated fake nudes has never been clearer.

For anyone experiencing emotional distress or suicidal thoughts, help is available from services such as Samaritans (116 123 in the UK, or local branches in the US).