Artificial intelligence (AI) is being exploited by a range of criminals, from paedophiles to scammers, according to Alex Murray, the national police lead for AI. During a recent address at the National Police Chiefs’ Council conference, he emphasized that the accessibility of AI is driving its rapid adoption by criminals, and that the police must step up their efforts to stay ahead of emerging threats.
Murray highlighted the alarming rise of AI-driven fraud, particularly ‘AI heists’ in which criminals use deepfake technology to impersonate executives on video calls. The tactic has led to significant financial losses: in one case, a finance worker was duped into transferring £20.5 million after a video conference convinced them they were speaking with their company’s chief financial officer. Such scams underscore the need for greater vigilance and adaptation by law enforcement.
The use of generative AI to create child abuse imagery is another significant concern. Murray indicated that paedophiles are using the technology to generate thousands of illegal images. The case of Hugh Nelson, who was sentenced to 18 years for producing such content, is a stark reminder of the scale of the problem. These activities not only involve grave legal and ethical violations but also point to the urgent need for effective regulation and police intervention.
Another troubling development is in sextortion, where criminals use AI to manipulate images and extort money from victims. Historically, such exploitation relied on real images shared by individuals, but AI now enables the creation of synthetic images for blackmail, further complicating the landscape of online crime.
Murray also discussed AI’s potential in cybercrime, stating that hackers leverage AI to identify vulnerabilities in software for cyber-attacks. Additionally, the government’s independent reviewer of terrorism legislation, Jonathan Hall, pointed out that AI could incite criminal and terrorist activities through so-called ‘chatbot radicalization.’ The ease of creating convincing AI personas poses new challenges for security services.
Given the pace of AI advancement, Murray said he expects a rise in a range of crime types, including fraud and child exploitation, over the coming years, and that policing agencies must act swiftly to adapt. The accessibility and effectiveness of AI tools for crime point to a pressing need for enhanced monitoring and proactive measures by law enforcement to ensure public safety in an increasingly digital world.