AI is the New Malware: AI-Generated NPM Package Infects 1,500+ with Stealthy Crypto-Stealer
A malicious package discovered in the NPM ecosystem by researchers at Safety turned out to be far more than a simple trojan for cryptocurrency theft—it stood as a striking example of an attack orchestrated with significant assistance from artificial intelligence. Masquerading under the name “NPM Registry Cache Manager”—allegedly a tool for license validation and registry optimization in Node.js—the module was, in fact, an “Enhanced Stealth Wallet Drainer,” a program designed to siphon funds from cryptocurrency wallets across Windows, macOS, and Linux platforms.
Upon activation by the user, the infected package began scanning for crypto wallets, transferring all discovered assets to a designated address on the Solana network. Intriguingly, it would leave a small residual balance in each wallet—enough to cover transaction fees for potential future withdrawals. This subtle tactic helped avoid suspicion and increased the likelihood of the operation remaining undetected. The success of the attacks was corroborated by a list of executed transactions shared by the researchers. Within just two days, nineteen variants of the malware had been uploaded to NPM.
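The "leave a little behind" tactic the researchers describe comes down to simple arithmetic: rather than zeroing out a wallet, the drainer transfers the balance minus a small fee reserve. The sketch below illustrates only that calculation; the reserve value, function names, and amounts are assumptions for illustration and are not taken from the actual Kodane code.

```javascript
// Hypothetical sketch of the "residual balance" tactic: leave just
// enough lamports (Solana's smallest unit) to cover future transaction
// fees, so the wallet never reads as suspiciously empty.
// FEE_RESERVE_LAMPORTS is an assumed buffer, not a value from the malware.
const FEE_RESERVE_LAMPORTS = 5000n;

function drainAmount(balanceLamports) {
  // Transfer everything except the reserve; skip wallets that cannot
  // even cover the reserve.
  if (balanceLamports <= FEE_RESERVE_LAMPORTS) return 0n;
  return balanceLamports - FEE_RESERVE_LAMPORTS;
}

// A wallet holding 1 SOL (1,000,000,000 lamports) would be drained
// down to the 5000-lamport reserve:
console.log(drainAmount(1_000_000_000n)); // 999995000n
console.log(drainAmount(3000n));          // 0n (too small to bother)
```

Leaving the fee reserve serves a double purpose: the wallet still appears active to its owner, and the attacker retains enough balance to fund a follow-up withdrawal if new assets arrive.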
Particular attention was paid to the code style and accompanying documentation. The authors noted that the comments and project descriptions were “too polished to be true”—written in flawless English, structurally sound, and technically persuasive. The documentation repeatedly featured the word “Enhanced” and included atypical elements for seasoned developers—such as emojis. According to Paul McCarty of Safety, this was a telltale sign of code generated by AI models, particularly Claude. He remarked that such platforms frequently insert emojis into code without functional justification—an anomaly rarely seen in code written by experienced human programmers.
Additionally, the structure of the markdown files, the tone of the comments, and an overabundance of console.log statements all bore the hallmarks of AI-generated output rather than the deliberate craftsmanship of a human developer. These clues strongly suggest that much of the malicious code was created with generative AI tools.
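The indicators the researchers point to lend themselves to simple heuristics: emojis embedded in source code, an unusually high density of console.log calls, and buzzwords such as "Enhanced" in the documentation. The sketch below shows how such markers could be flagged; the regexes and thresholds are illustrative assumptions, not a production detector or anything used by Safety.

```javascript
// Minimal heuristic sketch for the AI-style markers described in the
// report. Thresholds and patterns are assumptions for illustration.
const EMOJI_RE = /[\u{1F300}-\u{1FAFF}\u{2600}-\u{27BF}]/u;

function aiStyleIndicators(source, readme) {
  const lines = source.split("\n");
  // Fraction of source lines containing a console.log call.
  const logDensity =
    lines.filter((l) => l.includes("console.log")).length / lines.length;
  return {
    emojiInCode: EMOJI_RE.test(source),       // emojis rarely appear in hand-written code
    excessiveLogging: logDensity > 0.2,       // assumed threshold
    buzzwordDocs: /\bEnhanced\b/.test(readme) // recurring marketing buzzword
  };
}

const verdict = aiStyleIndicators(
  'console.log("🚀 Enhanced cache warmed");\nconsole.log("done");',
  "# Enhanced Stealth Wallet Drainer"
);
console.log(verdict); // { emojiInCode: true, excessiveLogging: true, buzzwordDocs: true }
```

No single marker is conclusive on its own; the point is that several of them co-occurring in one package is a pattern worth a closer look during dependency review.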
The module was uploaded on July 28 and, within two days, began to be flagged en masse as malicious. Despite swift intervention from security systems, the package had already been downloaded over 1,500 times by that point. The precise geographic distribution of infections remains unknown. The name “Kodane,” chosen for the module, is also noteworthy—it means “child” in Japanese, though in this context, it likely served as a distracting flourish, unrelated to the substance of the attack.
This investigation underscores a growing trend: generative AI is now being leveraged not only to accelerate legitimate software development but also to mask malicious functionality behind the veneer of polished documentation and meticulously styled code. The result is a new tier of threat, where the more convincing the presentation, the harder it becomes to distinguish malware from a legitimate tool.