
With Great Power (AI) Comes Great Threats

  • gbaloria333
  • Mar 26
  • 5 min read

Key Points

  • Awareness and proactiveness can help prevent online AI scams, but some companies still fall victim as these attacks grow more sophisticated.

  • The Ferrari deepfake scam involved an executive identifying a fake CEO call by asking a specific question, saving the company from potential fraud.

  • Deepfake scams are becoming more common; cases such as the attempted scam on WPP's CEO and a Hong Kong firm's $26 million loss highlight the growing threat.


Incident Overview

In March 2025, a Ferrari executive received suspicious WhatsApp messages and a call impersonating CEO Benedetto Vigna, using AI to mimic his voice. The executive noticed mechanical intonations and asked, "What was the title of the book you recommended recently?" The call ended abruptly, revealing the scam, and Ferrari launched an internal investigation.


Broader Context

Deepfake scams are on the rise, with other high-profile cases including WPP's CEO Mark Read being targeted and a Hong Kong company losing $26 million in a deepfake video call. A Deloitte poll shows 25.9% of executives experienced deepfake incidents in the past year, with 50% expecting more attacks soon. These scams exploit AI to create convincing imitations, posing risks to businesses of all sizes.


Lessons for Professionals

To protect against deepfake scams, verify communication sources, look for inconsistencies, ask specific questions, be cautious with sensitive information, and stay informed about cybersecurity trends. Companies like Arup are already training employees to spot such frauds, emphasizing the need for proactive measures.


Detailed Analysis: Deepfake Scams and the Ferrari Incident

This section provides a comprehensive analysis of the Ferrari deepfake scam, its implications, and the broader context of online AI scams, aiming to inform professionals and encourage proactive measures. The incident, reported in early 2025, underscores the growing threat of deepfake technology and offers valuable lessons for businesses.


Incident Details

On a mid-morning Tuesday in March 2025, a Ferrari NV executive received unexpected WhatsApp messages purportedly from CEO Benedetto Vigna, discussing a confidential acquisition and requesting help with a Non-Disclosure Agreement (NDA). The messages came from an unfamiliar number, and the profile picture, while an image of Vigna, differed from his usual account. The executive grew suspicious and, during a subsequent phone call, noticed the voice, though convincingly mimicking Vigna's southern Italian accent, had slight mechanical intonations. To verify, the executive asked, "What was the title of the book you recommended to me a few days ago?" (the book being Decalogue of Complexity: Acting, Learning and Adapting in the Incessant Becoming of the World by Alberto Felice De Toni). The call ended abruptly, confirming the scam. Ferrari then opened an internal investigation, and representatives declined to comment further.


This incident highlights how deepfake technology can create near-perfect voice imitations, but subtle inconsistencies, like mechanical tones, can be detected with vigilance. The executive's quick thinking, leveraging a personal detail, was crucial in thwarting the scam, potentially saving Ferrari from financial and reputational damage.


Broader Context and Rising Threat

Deepfake scams, powered by generative AI, are becoming increasingly sophisticated and frequent, targeting businesses worldwide. The Ferrari case is not isolated; similar incidents include:

  • In May 2024, WPP CEO Mark Read was targeted in an unsuccessful deepfake scam involving a Microsoft Teams call, as reported by The Guardian.

  • In February 2024, a Hong Kong multinational lost HK$200 million ($26 million) after employees were deceived by deepfake video calls impersonating the CFO, as detailed by CNN.

  • Chinese state media reported a case in Shanxi province in 2024, where a financial employee lost $262,000 to a deepfake video call, illustrating the threat to smaller entities.


These scams are escalating: a 2024 Deloitte poll found that 25.9% of executives had experienced deepfake incidents targeting financial data in the prior 12 months, and 50% expected attacks to increase, as noted by Incode. Trend Micro's 2024 report indicates that deepfake tools are readily available in underground markets, accessible even to unsophisticated cybercriminals, amplifying the risk.


Implications for Businesses

Deepfake scams pose a significant threat to businesses, eroding trust in digital communications. Experts like David Fairman, CIO and CSO at Netskope APAC, warn that the popularization of generative AI, such as OpenAI's ChatGPT launched in 2022, has fueled this trend, as reported by CNBC. José Palacio, Global Head of Threat Detection at Santander, notes that a 15-minute video is enough to clone a voice, underscoring how easy deepfakes are to create, as reported in Santander Stories.

Despite the potential for prevention through awareness, some companies still fall victim, as in the Hong Kong case, where employees failed to verify identities. This underscores the need for robust protocols, especially given that financial losses can reach into the millions, as evidenced by the $26 million transferred in that scam.


Preventive Measures and Best Practices

To mitigate deepfake scams, businesses can adopt the following strategies, drawn from various cybersecurity resources:

  • Employee Education: Train staff to recognize deepfakes, using examples of legitimate and fake media, as suggested by BofA Business.

  • Verification Protocols: Implement multi-factor authentication for sensitive transactions and verify identities through multiple channels, as recommended by Coro Cybersecurity (a minimal sketch of such a check follows this list).

  • Secure Communications: Use encrypted and secure communication tools, being cautious with unscheduled requests, as advised by The SSL Store.

  • Specific Questioning: Encourage employees to ask personal, specific questions, as the Ferrari executive did, to verify identities, a tactic supported by IT Governance Blog.

  • Stay Informed: Keep up with deepfake detection technologies, such as AI-driven tools, to stay ahead, as noted by Onix Systems.
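
To make the verification-protocol and specific-questioning tactics concrete, below is a minimal sketch, in Python, of how an out-of-band check for sensitive requests might be structured. Everything here is a hypothetical illustration: the contact directory, names, numbers, and the verify_out_of_band function are assumptions for this example, not part of any system used by Ferrari or the vendors cited above.

```python
# Hypothetical sketch of an out-of-band verification step for sensitive requests.
# All names and numbers are illustrative assumptions, not a real product or the
# actual process used in the incident described above.
from dataclasses import dataclass

# Directory of contacts keyed by employee ID, sourced from an internal system
# of record -- never from the incoming message itself.
KNOWN_CONTACTS = {
    "ceo-001": {"name": "CEO", "callback_number": "+39-000-000-0000"},
}

@dataclass
class SensitiveRequest:
    claimed_sender_id: str   # who the message claims to be from
    incoming_number: str     # number the request actually arrived from
    action: str              # e.g. "sign NDA", "wire transfer"

def verify_out_of_band(request: SensitiveRequest,
                       challenge_answered_correctly: bool) -> bool:
    """Return True only if both independent checks pass."""
    contact = KNOWN_CONTACTS.get(request.claimed_sender_id)
    if contact is None:
        return False
    # Check 1: the request must come from the number already on file,
    # or be re-confirmed by calling that number back directly.
    number_matches = request.incoming_number == contact["callback_number"]
    # Check 2: a personal challenge question (as in the Ferrari case)
    # must be answered correctly on a live call.
    return number_matches and challenge_answered_correctly

# Example: a request from an unfamiliar number fails even if the challenge
# question were somehow answered.
req = SensitiveRequest("ceo-001", "+1-555-000-1234", "sign NDA")
print(verify_out_of_band(req, challenge_answered_correctly=False))  # False
```

The design mirrors the Ferrari executive's approach: neither signal is trusted on its own. The request must arrive from, or be re-confirmed on, a number already on file, and it must pass a personal challenge question on a live call.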


Companies like Arup, involved in the Hong Kong scam, are already training employees to spot deepfakes, indicating a proactive approach, as reported by CNBC. This reinforces the point that a little awareness and proactiveness can save you from online AI scams, though the challenge lies in scaling these measures across organizations.


Impact on Smaller Businesses

While large firms like Ferrari and WPP often make headlines, the Shanxi province case shows smaller businesses are equally at risk, with losses like $262,000 potentially devastating for such entities. This detail, less discussed, emphasizes the need for universal awareness, not just in corporate giants.


Table: Recent Deepfake Scam Cases

Below is a table summarizing notable deepfake scam incidents, highlighting the scale and impact:

| Company/Location | Date | Loss Amount | Description |
| --- | --- | --- | --- |
| Ferrari NV | March 2025 | Prevented | Executive identified fake CEO call; no loss reported. |
| WPP Plc | May 2024 | Prevented | CEO Mark Read targeted; scam thwarted by vigilant employees. |
| Hong Kong Multinational | Feb 2024 | $26 million | Employees deceived by deepfake video call; transferred funds to fraudsters. |
| Shanxi Province, China | 2024 | $262,000 | Financial employee tricked via deepfake video call; funds transferred. |

Conclusion and Call to Action

The Ferrari incident serves as a cautionary tale: awareness and proactiveness are crucial, yet some companies still fall victim due to the evolving nature of AI scams. Professionals are encouraged to implement the above measures and share experiences in the comments to foster a community approach to combating deepfakes. Have you encountered a deepfake scam? How did you handle it? Let's discuss best practices to safeguard our organizations.

