The number of AI-generated child sexual abuse videos online has surged alarmingly, as those with malicious intent exploit advances in the technology. The Internet Watch Foundation (IWF) reports that AI-created videos depicting child sexual abuse material (CSAM) increased sharply, reaching 1,286 verified videos in the first half of 2025 alone. In stark contrast, only two such videos were recorded in the same period last year.
This year, more than 1,000 of these videos fall under category A abuse, the most severe classification of material. The IWF attributes the trend to heightened investment in AI producing powerful video-generation models, which offenders are exploiting to create content that is now almost indistinguishable from real imagery.
Reflecting this disturbing trend, URLs featuring AI-generated child sexual abuse content rose 400% in just six months. The IWF received reports of 210 such URLs this year, up from 42 in the same period last year, with each webpage potentially containing hundreds of abusive images alongside the growing volume of video content.
One IWF analyst noted that individuals involved in these crimes themselves acknowledge the rapid progression of AI capabilities, observing that they become adept at one tool only for a superior model to emerge shortly thereafter. Reports indicate that many of the most realistic AI abuse videos are created by fine-tuning basic AI models on a small number of real CSAM videos.
Derek Ray-Hill, interim chief executive of the IWF, expressed serious concern over these developments, warning that AI-generated CSAM could significantly outpace efforts to regulate and combat online abuse. The repeated reuse of existing victims' images means the volume of CSAM can expand without any new victims being directly abused, a dynamic that risks fueling further child exploitation and trafficking.
To address this crisis, the UK government is implementing strict measures to combat AI-generated CSAM. New regulations will make it illegal to create, distribute, or possess AI tools designed for producing abusive imagery, with penalties of up to five years in prison for violators. Additionally, manuals instructing on the abuse of AI tools for these purposes will also be outlawed, carrying a potential sentence of three years.
Announcing these measures, Home Secretary Yvette Cooper emphasized the urgent need to tackle child sexual abuse both online and offline. The Protection of Children Act 1978, which criminalizes any indecent photograph or pseudo-photograph of a child, remains the legal framework governing these offenses.