IBM Study: 97% of Breached Firms Lacked Basic AI Safeguards, Exposing Critical Data
In their relentless pursuit of artificial intelligence, companies are neglecting the very foundation of digital resilience—security. This stark conclusion emerges from an IBM analysis of cyberattack data collected over the past year, revealing that threat actors have already begun exploiting vulnerabilities in corporate AI systems.
The Cost of a Data Breach 2025 study, which examined breaches at 600 organizations worldwide between March 2024 and February 2025, found that one in eight companies (13%) reported a breach involving its AI models or applications. Alarmingly, nearly all affected organizations (97%) admitted they had failed to implement even basic AI access controls.
The consequences of these oversights were severe. One-third of the impacted organizations experienced operational disruptions and loss of sensitive information. One in four reported financial losses, while one in six suffered reputational damage. Though these figures may seem modest, experts caution that as AI adoption accelerates, the associated risks will grow in step.
The primary attack vector: supply chains. Adversaries frequently infiltrate systems through compromised applications, APIs, and plugins, most often via third-party cloud service providers.
Particularly insidious is the rise of so-called “shadow AI”—the unauthorized use of AI tools by employees without the knowledge of IT or data security teams. These unsanctioned tools operate outside official oversight, introducing unforeseen vulnerabilities into the corporate ecosystem.
At the heart of the issue lies a pervasive lack of governance. An overwhelming 87% of organizations report having no risk management policies in place for AI. Two-thirds fail to conduct regular security audits of their AI systems, and three-quarters do not test their models for resilience against adversarial attacks.
This is not the first warning sign. Last year, numerous large enterprises suspended the rollout of Microsoft Copilot-based assistants after discovering that employees were granted access to sensitive data far beyond their clearance levels.
Gartner analysts forecast that by the end of 2025, at least 30% of corporate generative AI projects will be abandoned due to poor data quality, insufficient risk oversight, escalating costs, and an ambiguous return on investment.
Companies, fearing obsolescence in a competitive landscape, are rushing headlong into AI integration—often at the expense of security. Suja Viswesan, Vice President of Security and Runtime Products at IBM, cautions that “the absence of fundamental protective measures exposes sensitive data and leaves AI models defenseless against manipulation.”
As AI becomes increasingly enmeshed in business operations, the cost of inaction continues to rise. At stake is not just financial capital, but customer trust, operational transparency, and sovereign control over internal systems.