The AI Cyberwar Is Here: CrowdStrike Report Exposes How Attackers Use AI to Automate Vishing, Phishing, & Espionage
Generative AI models are rapidly evolving into full-fledged instruments in the arsenals of cyber adversaries. This trend is underscored in CrowdStrike’s 2025 annual report, which highlights a sharp increase in the use of artificial intelligence not merely for boilerplate phishing, but as a foundational element in scalable, sophisticated, and increasingly covert operations.
Attackers have moved beyond generating simple phishing emails and are now leveraging AI in technically intricate campaigns, where models serve not as assistants, but as integral components of the offensive infrastructure. CrowdStrike notes that large language models (LLMs) are now employed to automate malware development, vulnerability analysis, and scripting—augmenting rather than replacing the attackers’ well-established tactics, whether state-sponsored or cybercriminal in nature.
One of the most alarming signals is the surge in vishing (voice phishing) incidents: in the first half of 2025 alone, the number of such attacks surpassed the total recorded for all of 2024. Hands-on-keyboard intrusions—those in which attackers operate live inside compromised environments—have likewise risen by 27%, underscoring the growing synergy between human adversaries and generative tools.
The report also casts a spotlight on vulnerabilities within AI systems themselves. In April, for instance, threat actors exploited CVE-2025-3248—a flaw in Langflow AI, a popular framework for building AI agents, whose code-validation endpoint executed submitted Python without requiring authentication—to achieve unauthenticated remote code execution. This marks a pivotal shift: AI services are no longer merely targets of deception; they are now part of the production infrastructure under attack—infrastructure that was often presumed safe.
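The report does not walk through the flaw's mechanics, but defenders can at least establish exposure quickly. Below is a minimal sketch, not CrowdStrike's tooling: it assumes a pip-installed deployment and that Langflow 1.3.0 is the patched release named in public advisories, checks only the installed package version, and is no substitute for vendor guidance.

```python
# Minimal exposure check: flag a locally installed Langflow older than the
# release that public advisories cite as patched for CVE-2025-3248.
from importlib.metadata import PackageNotFoundError, version

from packaging.version import Version  # `pip install packaging` if missing

PATCHED = Version("1.3.0")  # patched release per public advisories (verify against your advisory source)

try:
    installed = Version(version("langflow"))
except PackageNotFoundError:
    print("langflow is not installed in this environment")
else:
    if installed < PATCHED:
        print(f"langflow {installed} predates {PATCHED}: CVE-2025-3248 likely applies; upgrade")
    else:
        print(f"langflow {installed} is at or above the patched release")
```

A version check like this catches only the known CVE; deployments that expose unauthenticated API endpoints to the internet warrant a broader review regardless of version.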
Social engineering, too, has acquired unprecedented potency. Neural networks craft personalized phishing messages, construct digital personas with convincing backstories, and generate counterfeit documents capable of deceiving even seasoned HR professionals. The report emphasizes that GenAI can now translate propaganda into multiple languages, produce credible media assets, and disseminate disinformation at a pace no human operation could match.
Particular emphasis is placed on cyberespionage operations involving fraudulent IT personnel. CrowdStrike highlights the Famous Chollima group, linked to North Korea, which successfully secured employment at foreign companies 320 times over the past year. Their tactics include fabricated résumés, AI-generated cover letters, deepfake interviews, and English-language correspondence—also crafted with the help of neural networks. These operatives often perform legitimate work for months, concealing their presence while quietly accessing sensitive corporate data.
According to CrowdStrike, such campaigns have been enabled by tools like Microsoft Copilot and VSCodium, which help North Korean operatives manage the workload of multiple simultaneous jobs and bridge language barriers; each individual is reportedly able to work for three or four companies at once. Over the past 12 months, incidents of this nature have increased by 220%, pointing to a highly organized system. Analysts estimate that a dedicated division of North Korea’s 75th Bureau may be generating hundreds of millions of dollars annually through these schemes—funds that are funneled into national defense programs.
To counter such threats, organizations must overhaul their hiring processes: verifying candidates through professional social networks, deploying deepfake detection during video interviews, correlating a candidate’s stated location with the network origin of their connections (a minimal sketch of such a check follows), and delivering tailored training for HR departments and cybersecurity teams. As CrowdStrike’s experts stress, the threat is no longer theoretical—it is an operational reality.
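As one illustration of the geolocation point, here is a minimal sketch of an interview-time location consistency check. It assumes the free ip-api.com JSON endpoint and an illustrative placeholder IP; any production version would use the organization's own GeoIP source and treat the result as one signal among many, not proof of fraud.

```python
# Sketch: flag candidates whose connection origin contradicts their stated country.
import requests


def observed_country(ip: str) -> str | None:
    """Return the ISO country code geolocated for an IP, or None on lookup failure."""
    resp = requests.get(f"http://ip-api.com/json/{ip}", timeout=5)
    data = resp.json()
    return data.get("countryCode") if data.get("status") == "success" else None


def location_mismatch(candidate_ip: str, claimed_country: str) -> bool:
    """True when geolocation succeeds and disagrees with the claimed country."""
    observed = observed_country(candidate_ip)
    # A failed lookup is a soft signal for manual review, not evidence of fraud.
    return observed is not None and observed != claimed_country.upper()


if __name__ == "__main__":
    # 203.0.113.7 is a reserved documentation address, used purely as a placeholder.
    if location_mismatch("203.0.113.7", "US"):
        print("Connection origin contradicts the claimed location; escalate for review")
```

Determined operatives route traffic through residential proxies in the claimed country, so this check is most useful in combination with the identity verification and deepfake screening the report recommends.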