Critical Flaw in Wix’s Base44 AI Platform Allowed Access to Private Enterprise Apps
Base44, a widely used platform for AI-assisted application development, was recently found to be critically vulnerable due to a glaring misconfiguration in its authentication system. The flaw allowed malicious actors to gain unrestricted access to private applications created by other users—simply by knowing their identifier, the “app_id.” This piece of information, far from confidential, is readily visible in the URL and manifest.json file of any public project.
Engineers at Wiz discovered that two vital API endpoints—auth/register
and auth/verify-otp
—were left completely unprotected. By leveraging only the app_id
, any individual could register an account, confirm it via a one-time code, and then access applications that did not belong to them.
In effect, identity verification mechanisms, including Single Sign-On (SSO), were bypassed entirely. The flaw was so elementary that, according to researcher Gal Nagli, one merely had to register for the desired app and log in through SSO—at which point the platform granted access to sensitive data with no further checks.
Following the vulnerability report on July 9, 2025, the platform’s maintainers acted swiftly, patching the issue in under 24 hours. No visible signs of exploitation have been detected thus far. Nevertheless, the incident underscores a sobering truth: even relatively novel paradigms like so-called “vibe coding”—where AI generates code from natural language prompts—are not only revolutionizing development workflows but also introducing new vectors of risk, especially when foundational safeguards are overlooked.
Security lapses in AI ecosystems are becoming an increasingly pressing concern. In the wake of the Base44 incident, researchers are now drawing attention to a cascade of attacks on leading generative AI systems—including Claude, Gemini, ChatGPT, and Grok. Notably, Google’s team recently uncovered a critical cluster of vulnerabilities in the CLI version of Gemini: lack of contextual validation, susceptibility to malicious prompts, and a misleading user interface—all enabling the execution of arbitrary code without the user’s awareness upon opening a crafted file.
In a separate case, researchers were able to deceive Claude Desktop via a specially crafted email sent to Gmail, tricking the AI into altering the message and lifting its own restrictions. Grok 4, developed by xAI, was compromised using two techniques—Echo Chamber and Crescendo—which circumvented filters and accessed prohibited functions without explicitly malicious queries. Alarmingly, its defense mechanisms were effective in less than 1% of attempts. A similar evasion was achieved against ChatGPT, where a guessing game led the model to disclose valid Windows product keys. Meta’s LLaMA firewall also fell short, succumbing to lexical tricks like language switching, leetspeak, and invisible character insertion.
Beyond behavioral manipulation of the models themselves, attention is now shifting to infrastructure-level threats. Researchers at Snyk have introduced a novel methodology called Toxic Flow Analysis (TFA), which enables preemptive detection of AI system vulnerabilities—before any attack materializes. This approach simulates potential compromise chains, encompassing agent logic manipulation, tool poisoning, and interception of model control protocols (MCP).
All these developments point to a single, inescapable conclusion: the faster generative AI technologies evolve, the broader the attack surface becomes—and the more imperative it is to embed security not as an afterthought, but as a foundational element of platform architecture.