This blog is part of a series highlighting new or fast-evolving threats in consumer security, focusing on the role of AI in designing realistic scams. The evolution of AI has allowed those with malicious intent to enhance the scale, speed, and personalization of social engineering, thereby increasing the effectiveness of scams in 2025.

AI-Powered Social Engineering

As cybercriminals adapt to new technologies, they tend to replicate whatever methods prove successful. The rapid advancement of AI in 2025 played a pivotal role in transforming how scams are executed. Particularly striking was the mainstream adoption of voice cloning technology, which enabled scammers to impersonate individuals convincingly.

Voice Cloning Tactics

Previously, scams of this kind predominantly involved impersonating friends and family members. In 2025, however, scammers advanced their methods, often impersonating senior officials within the US government. This shift marks a troubling trend: cybercriminals are broadening their tactics well beyond victims' personal networks. In one startling case in Florida, for instance, a woman received an AI-generated voice message that mimicked her daughter's pleas for help and transferred thousands of dollars to the scammer.

Agentic AI: A New Frontier

A newer threat is agentic AI: personalized AI agents designed to autonomously gather information and conduct targeted scams. These agents exploit publicly available or stolen information to create customized phishing lures that feel personal and genuine to their victims. Scammers have also used AI's capacity to sustain extended conversations, convincing victims they are interacting with a real person while sensitive information, such as Social Security numbers or credit card details, is harvested.

Exploiting Social Media

Cybercriminals are increasingly combing through social media data to craft their attacks. Data obtained from breaches is combined with information posted online to target victims of romance scams, sextortion, and other fraudulent schemes. Social media platforms have further amplified these schemes by accelerating the spread of fake products and misleading AI-generated disinformation.

Malware Campaigns and AI Vulnerabilities

AI platforms are not only tools for enhancing scams but have also been used directly in malware campaigns. Researchers documented instances where AI tools such as Claude were employed to automate attacks, from compromising systems to generating ransom notes. OpenAI has reported thwarting more than 20 criminal campaigns globally since early 2024 that sought to misuse its technology.

The Future of AI and Cybersecurity

The implications of these developments are profound. As both attackers and defenders harness AI's capabilities, the cybersecurity landscape is changing dramatically. Security teams find that AI helps automate detection and identify attack patterns more efficiently, while cybercriminals use it to refine their social engineering tactics and accelerate their operations.

Importance of Verification

Looking ahead, the critical challenge will not only be technological but also psychological. The line between AI-generated and genuine communications is blurring, placing significant importance on verifying identities. As the sophistication of scams increases, so too does the urgency of bolstering defenses against these emergent threats.

To mitigate these risks, it's crucial to avoid assuming that a familiar voice, face, or message is genuine, and to remain vigilant against the evolving nature of scams. Tools such as Malwarebytes can help safeguard against these new threats and keep our devices protected.