Deepfake Scam Targets WPP CEO with AI Voice Clone

The head of WPP, the world’s largest advertising group, was targeted by a sophisticated fraud attempt involving deepfake technology, including AI voice cloning. CEO Mark Read alerted leadership in an email, cautioning employees against responding to calls purportedly from top executives. He highlighted the exceptionally high quality of the deepfakes used, which points to significant resources at the criminals’ disposal.


Fraudsters created a WhatsApp account using a publicly available image of Read and used it to arrange a Microsoft Teams meeting that appeared legitimate because Read and another senior WPP executive seemed to be participating. During the meeting, they played a cloned voice of the CEO along with video footage taken from YouTube, and impersonated Read in the meeting chat. The target was the head of one of WPP’s agencies, whom the fraudsters tried to persuade to set up a new business in order to extract money and personal data. The attempt was unsuccessful.

A WPP representative confirmed that the incident was thwarted thanks to the vigilance of its employees, including the targeted executive. The company did not disclose when the attack occurred or which other executives were impersonated.

Over the past year, attacks using deepfake technology in the corporate environment have risen sharply, and the incident involving the head of WPP is just the tip of the iceberg. Artificial intelligence for synthesizing and cloning human voices has already enabled cybercriminals to steal tens of millions of dollars from banks and financial companies worldwide, causing serious concern in corporate cybersecurity.

One of the most notable cases came to light in 2021, when a fraudulent scheme using deepfakes was exposed in an attempt to deceive the investment bank Goldman Sachs out of $40 million. The head of a digital media startup admitted to impersonating another person and disguising his voice with specialized software in order to mislead financiers about a bogus deal.

Experts are sounding the alarm: modern AI-based multimedia synthesis technologies are becoming ever more accessible and convincing. Cybercriminals need only a few samples of someone’s voice to create a virtually indistinguishable fake. Combined with social engineering, such deepfakes open up vast opportunities for phishing, impersonation, and large-scale financial fraud.

Read also listed several signs to watch out for, including requests for passports, money transfers, and mentions of “secret purchases, deals, or payments that no one knows about.”

WPP, a public company with a market capitalization of about $11.3 billion, stated on its website that it is combating fake sites that misuse its brand and is cooperating with the appropriate authorities to stop the fraud.

Experts strongly recommend that companies review their information security policies and identity verification procedures in light of these new threats. Stringent multi-factor authentication and in-depth analysis of biometric data are necessary to protect against cyber-attacks that exploit synthetic media.