The Dark Side of AI: “Undressing” Services Thrive on Major Tech Platforms, Raking in Millions
Each month, millions of users visit websites offering so-called “undressing” services: platforms that use generative AI to turn ordinary images of women and girls into fabricated nude depictions. A recent investigation by Indicator has revealed that, despite the efforts of some lawmakers and tech companies, the industry remains active, highly profitable, and deeply embedded in the infrastructure of the world’s largest technology companies.
An analysis of 85 such websites showed that many are served by companies like Google, Amazon, and Cloudflare. Of those, 62 rely on Amazon or Cloudflare for cloud or CDN services, and 54 use Google’s sign-in system. A wide range of other tools, including payment gateways and hosting providers, is also in play, sourced from both major corporations and lesser-known firms.
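The excerpt does not detail how the survey attributed sites to providers, but infrastructure audits of this kind are typically automated from public signals. Below is a minimal, hypothetical sketch in Python of how a site’s CDN, hosting, and sign-in provider might be fingerprinted via HTTP headers, reverse DNS, and embedded script references; the domain and heuristics are illustrative assumptions, not Indicator’s actual tooling.

```python
import socket
import requests

def infrastructure_fingerprint(domain: str) -> dict:
    """Probe public signals for traces of major infrastructure providers.

    Illustrative sketch only; these heuristics are simplified assumptions,
    not the investigation's actual methodology.
    """
    findings = {"cdn": None, "hosting": None, "google_auth": False}

    # CDN check: Cloudflare identifies itself in the Server response header.
    resp = requests.get(f"https://{domain}", timeout=10)
    if resp.headers.get("Server", "").lower() == "cloudflare":
        findings["cdn"] = "Cloudflare"

    # Hosting check: reverse DNS of the resolved IP often reveals AWS
    # (hostnames ending in amazonaws.com). When a site is proxied through
    # a CDN this sees the CDN's IP, so real surveys combine more signals.
    ip = socket.gethostbyname(domain)
    try:
        rdns = socket.gethostbyaddr(ip)[0]
        if rdns.endswith("amazonaws.com"):
            findings["hosting"] = "Amazon (AWS)"
    except socket.herror:
        pass  # no PTR record published

    # Sign-in check: pages offering "Sign in with Google" reference
    # accounts.google.com in their markup or scripts.
    if "accounts.google.com" in resp.text:
        findings["google_auth"] = True

    return findings

# Hypothetical usage against a placeholder domain:
print(infrastructure_fingerprint("example.com"))
```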
This ecosystem sustains an illicit industry with an estimated annual turnover of up to $36 million, a figure derived from user counts, subscription fees, and sales of the “credits” used to generate the images. Researchers note, however, that actual revenue may be significantly higher because of off-site activity, such as transactions and promotions on Telegram.
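For a sense of how such an estimate is composed, the arithmetic reduces to visits × paying share × average spend, annualized. The toy calculation below uses purely hypothetical placeholder numbers, chosen only to land in the same order of magnitude as the reported figure; none of them come from the investigation.

```python
# Toy estimate of the kind described above. Every figure is a hypothetical
# placeholder, not data from the Indicator investigation.
monthly_visits = 18_000_000   # assumed combined monthly visits across sites
paying_share = 0.005          # assumed fraction of visitors who pay
avg_monthly_spend = 30.0      # assumed spend per payer (subscriptions + credits), USD

annual_turnover = monthly_visits * paying_share * avg_monthly_spend * 12
print(f"Estimated annual turnover: ${annual_turnover:,.0f}")
# -> Estimated annual turnover: $32,400,000
```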
At the heart of the issue lies the inaction of tech companies whose tools and platforms enable the dissemination of content that violates both legal boundaries and ethical standards. A spokesperson for AWS said the company responds to reports of abuse, while Google asserted it is working on solutions and has already taken steps to limit access. Still, little has changed: the sites remain online, growing their audiences and expanding monetization through adult-industry ads, affiliate programs, and sponsored links.
Beyond the clear financial incentives, operators of these sites appear intent on embedding themselves in the adult content industry as legitimate players. According to Indicator, the platforms are adapting to potential restrictions by using fake domains, intermediary sites, and evasion tactics to bypass automated controls. One of the most common methods is to route sign-ins via Google or Apple through neutral-looking URLs that disguise the true destination, allowing the sites to sidestep content filters.
The threat is becoming increasingly global. Most visitors come from the United States, India, Brazil, Mexico, and Germany. In recent months there has also been a surge in referral traffic from third-party sites, not just search engines. The platforms’ popularity has even attracted cybercriminals, who are now creating malicious clones of these services laced with malware.
Despite growing regulatory scrutiny, efforts to dismantle these platforms remain scattered and inconsistent. Several lawsuits have been filed in the U.S. against similar websites. Microsoft has identified developers producing deepfakes of celebrities, and Meta has sued a company that advertised an AI-based “undressing” app. The U.K. has moved to outlaw the creation of explicit deepfakes, while the “Take It Down Act,” signed by Donald Trump, requires hosting companies to remove such imagery within 48 hours of a valid request.
Still, experts maintain that unless the major tech players take decisive action to cut off infrastructure support for these exploitative platforms, the issue will persist. Piecemeal enforcement is not enough; systemic, coordinated measures are needed. Should technology providers collectively refuse to serve them, these sites would be relegated to obscure corners of the internet, drastically shrinking their reach and profitability. That would not eradicate the problem entirely, but it could significantly curtail its scale.