AI Coding Assistants Under Attack: “Slopsquatting” Malware Exploits AI Hallucinations for Supply Chain Compromise
A new threat has emerged in the realm of AI-assisted programming, known as “slopsquatting.” This attack has become particularly dangerous amid the surging popularity of AI coding assistants like Claude Code CLI, OpenAI Codex CLI, and Cursor AI—tools widely adopted by developers for automatic code generation and dependency suggestion.
Unlike the more familiar typosquatting attacks, which rely on user misspellings, slopsquatting exploits the hallucinations of AI models themselves. These systems may generate convincingly realistic, yet entirely fictitious, library names—such as “starlette-reverse-proxy.” These suggestions appear plausible and seamlessly integrated into the coding context, leading developers, particularly during rapid prototyping or so-called “vibe coding,” to skip manual verification.
Cybercriminals preemptively register these fabricated dependencies in public repositories like PyPI, embedding them with malicious payloads. When a developer, trusting the AI’s recommendation, installs such a package without scrutiny, the malware is surreptitiously introduced into their environment.
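Before trusting such a suggestion, a developer or their tooling can at least interrogate the registry's own metadata. The sketch below is a minimal illustration in Python, assuming the public PyPI JSON API (which exposes per-release upload timestamps); it flags names that do not exist at all or whose first release is suspiciously recent, using the hallucinated name from the example above:

```python
import json
import urllib.error
import urllib.request
from datetime import datetime, timedelta, timezone

PYPI_JSON = "https://pypi.org/pypi/{name}/json"  # public PyPI metadata endpoint

def check_package(name: str, max_age_days: int = 90) -> str:
    """Classify an AI-suggested package as missing, suspiciously new, or established."""
    try:
        with urllib.request.urlopen(PYPI_JSON.format(name=name), timeout=10) as resp:
            data = json.load(resp)
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return "MISSING: package is not on PyPI -- a candidate for slopsquatting"
        raise

    # Find the earliest upload time across all published releases.
    uploads = [
        datetime.fromisoformat(f["upload_time_iso_8601"].replace("Z", "+00:00"))
        for files in data["releases"].values()
        for f in files
    ]
    if not uploads:
        return "SUSPICIOUS: package exists but has no uploaded files"

    age = datetime.now(timezone.utc) - min(uploads)
    if age < timedelta(days=max_age_days):
        return f"SUSPICIOUS: first release is only {age.days} days old"
    return f"OK: established package, first released {age.days} days ago"

# The package name below is the hallucinated example cited in this article.
print(check_package("starlette-reverse-proxy"))
```

A recent first upload is not proof of malice, but absence or extreme recency is a useful signal to pause before installing.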
Even agents equipped with online validation mechanisms are susceptible to such errors. Tests across a hundred web development tasks showed that large language models routinely hallucinate between two and four nonexistent packages per session, particularly in response to complex queries. Advanced reasoning models offer only partial improvement, and even then not consistently.
Cursor AI, which leverages the Model Context Protocol (MCP) for real-time validation, demonstrated the lowest rate of false dependencies. However, it too faltered in edge cases involving cross-ecosystem name borrowing (for instance, a package name lifted from the npm ecosystem and assumed to exist on PyPI) or small distortions of fragments of legitimate package names. Such missteps create exploitable openings for attackers.
Conventional safety practices—such as checking for a package’s existence in a registry—are insufficient. Malicious actors can reserve plausible-looking names in advance and populate them with deceptive, harmful code. Combating slopsquatting demands a holistic and layered approach to security.
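One concrete layer is an organization-maintained allowlist that AI-suggested dependencies must pass before installation, independent of whether the name happens to resolve in a public registry. The following sketch is illustrative only; the allowlist file name and its format are assumptions, not an established standard:

```python
import sys
from pathlib import Path

# Hypothetical allowlist of vetted dependencies, one "name==version" per line.
ALLOWLIST_FILE = Path("approved-packages.txt")

def load_allowlist(path: Path) -> set[str]:
    """Read the allowlist, ignoring blank lines and comments."""
    return {
        line.strip().lower()
        for line in path.read_text().splitlines()
        if line.strip() and not line.startswith("#")
    }

def gate(requested: list[str]) -> int:
    """Return non-zero if any requested dependency is not on the allowlist."""
    approved = load_allowlist(ALLOWLIST_FILE)
    rejected = [pkg for pkg in requested if pkg.lower() not in approved]
    for pkg in rejected:
        print(f"BLOCKED: {pkg} is not an approved dependency; request manual review.")
    return 1 if rejected else 0

if __name__ == "__main__":
    # Example: python gate.py "starlette-reverse-proxy==0.1.0"
    sys.exit(gate(sys.argv[1:]))
```

Wired into a pre-commit hook or CI job, such a gate turns "the AI suggested it" into "a human approved it" for every new dependency.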
Organizations are advised to adopt cryptographically signed Software Bills of Materials (SBOMs) to trace the origin of all dependencies. Integrating vulnerability scanning tools—like OWASP dep-scan—into CI/CD pipelines can help detect threats prior to deployment.
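To make the SBOM step concrete, the sketch below verifies a detached Ed25519 signature over a CycloneDX-style JSON SBOM and then lists the recorded components. It assumes the third-party cryptography package and illustrative file names; a scanner such as OWASP dep-scan would then consume the same SBOM for vulnerability lookups:

```python
import json
from pathlib import Path

# Third-party dependency assumed for this example: `pip install cryptography`.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

# File names below are illustrative assumptions for the sketch.
SBOM_PATH = Path("sbom.cdx.json")           # CycloneDX JSON SBOM
SIG_PATH = Path("sbom.cdx.json.sig")        # detached Ed25519 signature
PUBKEY_PATH = Path("sbom-signing-key.pub")  # 32-byte raw Ed25519 public key

def verify_sbom_signature() -> None:
    """Raise InvalidSignature if the SBOM was altered after signing."""
    public_key = Ed25519PublicKey.from_public_bytes(PUBKEY_PATH.read_bytes())
    public_key.verify(SIG_PATH.read_bytes(), SBOM_PATH.read_bytes())

def list_components() -> list[str]:
    """Return 'name==version' for every component recorded in the SBOM."""
    sbom = json.loads(SBOM_PATH.read_text())
    return [
        f"{c['name']}=={c.get('version', '?')}"
        for c in sbom.get("components", [])
    ]

if __name__ == "__main__":
    try:
        verify_sbom_signature()
    except InvalidSignature:
        raise SystemExit("SBOM signature check failed: do not trust this inventory.")
    for component in list_components():
        print(component)
```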
Moreover, new dependencies should be installed within isolated environments, such as Docker containers or ephemeral virtual machines, with minimal external access. This allows for sandboxed testing, shielding core infrastructure from potential harm.
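A lightweight version of this isolation can be scripted around Docker itself: fetch the package on the host, then install and import it inside a throwaway container started without network access, so the code under test cannot call home during the trial. The image tag, mount paths, and the trial package below are assumptions for illustration:

```python
import subprocess
import tempfile
from pathlib import Path

def sandbox_test(package: str, import_name: str, image: str = "python:3.12-slim") -> bool:
    """Download a package on the host, then install and import it in an offline container."""
    with tempfile.TemporaryDirectory() as workdir:
        wheel_dir = Path(workdir)
        # Fetch the distribution and its dependencies on the host.
        subprocess.run(
            ["pip", "download", package, "--dest", str(wheel_dir)],
            check=True,
        )
        # Install from the local directory only, inside a container with no network.
        result = subprocess.run(
            [
                "docker", "run", "--rm",
                "--network", "none",              # no outbound connections during the trial
                "-v", f"{wheel_dir}:/wheels:ro",  # mount downloaded artifacts read-only
                image,
                "sh", "-c",
                f"pip install --no-index --find-links /wheels {package} "
                f"&& python -c 'import {import_name}'",
            ],
        )
    return result.returncode == 0

if __name__ == "__main__":
    # Hypothetical usage: trial the hallucinated package name from the article.
    ok = sandbox_test("starlette-reverse-proxy", "starlette_reverse_proxy")
    print("sandbox trial passed" if ok else "sandbox trial failed or package rejected")
```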
Additional safeguards include manual approval of unfamiliar dependencies, multi-step validation of AI-generated suggestions, execution monitoring, the use of immutable container base images, and comprehensive system logging of all actions.
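For the execution-monitoring and logging points, Python's built-in audit hooks (PEP 578) offer one inexpensive option: a hook registered at interpreter startup can record imports, file access, process creation, and network connections made while a newly added dependency is exercised. A minimal sketch, with an illustrative selection of watched events:

```python
import sys

# Audit events worth recording while exercising an untrusted dependency;
# this selection is illustrative, not exhaustive.
WATCHED_EVENTS = {"import", "open", "socket.connect", "subprocess.Popen"}

def audit_hook(event: str, args: tuple) -> None:
    """Log selected runtime actions; PEP 578 hooks cannot be removed once added."""
    if event in WATCHED_EVENTS:
        print(f"[audit] {event}: {args!r}", file=sys.stderr)

sys.addaudithook(audit_hook)

# Anything imported or executed after this point is visible to the hook,
# e.g. importing an AI-suggested package and calling into it.
import json  # triggers an "import" audit event as a harmless demonstration
json.loads("{}")
```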
AI assistants remain powerful assets, but their propensity for fabricating data necessitates constant oversight. The fusion of cutting-edge tools, stringent policies, and human vigilance is essential to mitigate risks and ensure the integrity of modern software development.