Dr. Katie Paxton-Fear, a cybersecurity expert and ethical hacker, has uncovered alarming methods cyber criminals use to access sensitive data with the help of artificial intelligence (AI). Partnering with Vodafone Business, she is shedding light on the newest threat facing UK businesses: AI phishing scams. Recent research shows that young office workers, particularly those aged 18 to 24, are especially vulnerable to these scams, highlighting an urgent need for greater cybersecurity awareness among younger employees.
The research points to an ‘age gap’ in cybersecurity preparedness, with nearly half of Gen Z workers failing to update their passwords regularly. With the majority of UK firms reporting that they feel ill-equipped to handle sophisticated phishing attacks, the potential risks are escalating rapidly. Dr. Paxton-Fear emphasizes how easily scammers can exploit these vulnerabilities using AI technology.
Dr. Paxton-Fear outlines the five-step approach hackers use to execute AI-driven voice cloning scams, a form of voice phishing commonly referred to as ‘vishing’. The process unfolds as follows:
1. Reconnaissance: The hacker identifies a victim and gathers personal information from their social media profiles, such as those of Chris Donnelly, a businessman and the target in this demonstration.
2. Voice cloning: After downloading a video containing the victim’s voice, the hacker needs just three seconds of audio to create a convincing imitation; AI software recreates the victim’s speech patterns.
3. Initial contact: The hacker contacts an employee, claiming to be their boss, and primes them for a follow-up call, building trust in the process.
4. The call: Using the AI-generated voice, the hacker phones the employee with a directive about an urgent invoice payment, heightening the sense of urgency and apparent legitimacy.
5. The payoff: The hacker waits to see whether the employee carries out the instruction; if they comply, the scam has succeeded.
Chris Donnelly expressed his shock at how easily Dr. Paxton-Fear breached his company’s defenses, underscoring the pressing need for enhanced cybersecurity measures. The demonstration serves as a wake-up call for businesses to bolster their security protocols and to educate employees continuously about AI phishing tactics.
Dr. Paxton-Fear warns that AI technology allows attackers to craft highly personalized and convincing messages, making it harder to differentiate between legitimate communications and phishing attempts. Businesses of all sizes are therefore urged to adopt advanced security measures, such as improved detection systems and staff training, to safeguard against AI-driven threats.