The Growing Threat of Deepfake Video and Audio Calls

Envision a business call in which a scammer convincingly impersonates your boss to get at confidential information or company funds. Being duped by such a deepfake and falling prey to fraud is easier than you might think, as AI now makes it possible to create realistic video in real time.
The first wake-up call (both literally and figuratively) rang out in 2019, when a story leaked about the CEO of a British energy company who was conned by a fake call, supposedly from headquarters, instructing him to transfer $249,000 to a supplier. It later emerged that the British CEO had been the victim of fraudsters: the real boss had issued no such order. An investigation led by an insurance company found that the call had been made using AI-based voice-simulation software.

By the summer of 2023, analysts at Mandiant (a Google subsidiary specializing in cybersecurity) were reporting that adoption of deepfake video technology was on the rise. The software was openly marketed on hacker forums and Telegram channels, with scammers claiming it would boost the success of extortion and fraud schemes. The ability to impersonate someone else during a video call, making the interaction feel more personal, was touted as a game-changer. Deepfakes were priced affordably at $20 per minute or $250 for a complete video, with a trial session costing just $200, putting the technology within alarmingly easy reach of potential offenders.
"Since we have all gotten used to poor audio and video quality on virtual calls, a deepfake is still good enough to deliver a dangerous social engineering attack,"
cybersecurity experts caution.
In February 2024, another striking case of deception came to light: an employee at a Hong Kong subsidiary of a multinational firm paid out $25 million following a fraudulent video call. The scammer, exploiting deepfake technology, impersonated the company's CFO, and he was not alone: deepfake replicas of other employees joined the call. The victim conversed with these AI clones at length without suspecting a thing. The scammers had initially contacted the employee with a request for a significant payment, and it was the video call that ultimately dispelled any doubts the victim had.

Hong Kong police report that this method of using video deepfakes for financial deception is not an isolated occurrence; several suspects have recently been arrested for similar crimes.

Cybersecurity specialists at IBM, one of the world's leading providers of software and hardware, have warned about "audio-jacking," a scam that uses AI to mimic a real person's voice during a call. Generative AI needs only a three-second recording of the original voice to convincingly replicate your boss's speech.
"The applications of this threat range from financial crimes to disseminating disinformation in critical communications,"
the experts stress.
So how can you avoid falling for the increasingly sophisticated tricks of scammers? The advice from IBM's experts essentially boils down to awareness: the person laughing at your jokes during a work video call could be AI-generated, and a scammer could cut into a phone conversation at any moment. If your conversation partner provides payment details, ask them to rephrase or reorder the information. That way you may catch the glitches characteristic of AI-generated audio, potentially saving yourself from being scammed.