AMARILLO, Texas (KFDA) – Experts in child safety are increasingly concerned about children's access to artificial intelligence applications, particularly given the alarming rise in AI-related child sexual exploitation cases. The National Center for Missing and Exploited Children reported an increase of more than 1,000% in exploitation incidents involving generative AI, receiving 67,000 such reports in 2024 alone.
Child advocacy professionals Courtney Ma and Sarahbeth Cook from The Bridge Children’s Advocacy Center highlight that many parents remain unaware of the myriad AI applications their children might be using. Some of these apps allow users to create their own virtual partners, customizing interactions in disturbing ways. Ma states, “There are also apps that basically you can create your own partner. It can be male, female, it can talk how you want it.” These include platforms like Candy.AI, C.AI, and ChatGPT, which can enable troubling interactions.
Additionally, the emergence of AI undressing applications and content creation tools raises further concerns. Ma explains, “There are AI apps where you can upload a picture and it undresses the person,” illustrating the potential for harmful exploitation.
Deepfake technology compounds the threats posed by these applications. Experts warn that many users cannot differentiate between authentic images and those manipulated by AI, which can result in significant harm. “It’s not good for kids to have access to an app that can make videos of someone doing something sexual,” Ma says, pointing to the risks of such content being shared.
To combat these dangers, Ma and Cook recommend open communication between parents and children regarding AI usage. They stress the importance of setting clear boundaries and establishing healthy technology usage habits. “We have to tell our kids, ‘This isn’t real.’ We need to help our kids keep healthy boundaries with technology, including AI,” Ma advises.
Cook emphasizes the critical role of human connection, suggesting that parents should engage with their children meaningfully rather than relying on technology for companionship. “Kids are just looking for a connection, and as parents, we need to have a human connection to address their needs,” she asserts.
For those affected by the circulation of sexually explicit images, whether real or AI-generated, services like Take It Down offer free assistance in removing such material or stopping its spread online. Additionally, stopncii.org helps individuals threatened with the release of intimate images, providing crucial support to vulnerable youth.
The need for vigilance and informed discussions about the implications of AI tools is ever more pressing as technology continues to evolve, underscoring the critical role of parental guidance.