‘Ferrari executives have clearly been taking their scepticism vitamins’
Earlier in July, a Ferrari NV executive received unexpected messages on WhatsApp, seemingly from CEO Benedetto Vigna.
The messages hinted at a significant acquisition and asked for the executive’s help, but they came from a number other than Vigna’s usual business line.
The profile picture, though depicting Vigna, differed subtly from his usual one, which raised suspicion.
The executive then received a live phone call in which an AI-generated version of Vigna’s voice convincingly mimicked his southern Italian accent.
The caller explained that the unfamiliar number was necessary because the discussion was confidential: the deal involved complications related to China and required a currency-hedge transaction.
Despite the convincing impersonation, the executive noticed subtle mechanical intonations in the voice.
To verify the caller’s identity, the executive asked about a book Vigna had recently recommended, a question only the real Vigna could answer.
The impostor could not answer it, and the executive ended the call.
The incident highlighted the use of deepfake technology in the attempted corporate fraud.
Ferrari tightened its cybersecurity measures in response to the incident, and the company is now exploring AI-driven solutions to detect anomalies in communications.
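Ferrari has not described how such detection would work; as a purely illustrative sketch, the Python snippet below flags an incoming “CEO” message whose metadata does not match stored records. All field names, numbers, hashes, and thresholds here are hypothetical, not Ferrari’s actual system.

```python
# Hypothetical sketch: flag suspicious "CEO" messages from metadata mismatches.
# Field names, stored records, and example values are illustrative only.
from dataclasses import dataclass

@dataclass
class InboundMessage:
    sender_number: str       # number the message arrived from
    profile_photo_hash: str  # hash of the sender's profile picture
    claims_to_be: str        # display name the sender presents

# Known-good records for executives (in practice, from a directory service).
KNOWN_EXECUTIVES = {
    "Benedetto Vigna": {
        "numbers": {"+39-REGISTERED-LINE"},   # registered business lines
        "photo_hashes": {"a1b2c3d4"},         # hashes of approved photos
    }
}

def anomaly_score(msg: InboundMessage) -> int:
    """Count metadata mismatches; anything above zero gets escalated."""
    record = KNOWN_EXECUTIVES.get(msg.claims_to_be)
    if record is None:
        return 2  # claims to be an executive we have no record of
    score = 0
    if msg.sender_number not in record["numbers"]:
        score += 1  # arrived from an unregistered number
    if msg.profile_photo_hash not in record["photo_hashes"]:
        score += 1  # profile picture differs from the approved one
    return score

msg = InboundMessage("+44-UNKNOWN-LINE", "ffff0000", "Benedetto Vigna")
if anomaly_score(msg) > 0:
    print("Escalate: verify the sender over a trusted channel before replying.")
```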
Multi-factor authentication has been implemented to ensure the authenticity of interactions.
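Ferrari has not disclosed which factors it relies on; assuming a standard time-based one-time-password scheme, a minimal verification step using the pyotp library might look like the sketch below. The secret handling and the comparison to the book question are illustrative only.

```python
# Illustrative TOTP second-factor check, assuming a shared secret was
# provisioned beforehand. A generic pattern, not Ferrari's implementation.
import pyotp

# In practice the secret lives in a secure store, never in source code.
shared_secret = pyotp.random_base32()
totp = pyotp.TOTP(shared_secret)

def verify_second_factor(code_from_caller: str) -> bool:
    """Accept the interaction only if the caller supplies a valid current code."""
    return totp.verify(code_from_caller, valid_window=1)

# A caller who cannot produce the current code fails verification,
# much as the impostor failed the book question on the phone call.
print(verify_second_factor(totp.now()))  # True: legitimate party
print(verify_second_factor("000000"))    # almost certainly False: impostor
```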
Ferrari’s quick response underscores the importance of scepticism and rigorous identity checks in preventing such sophisticated scams in the future.
‘Impersonating a Ferrari CEO is like trying to race in a lawnmower at Monaco’