An AFP journalist views an example of a “deepfake” video manipulated using artificial intelligence. Source: AFP

Deepfakes — the murky next-gen threat coming to Asia

  • Experts ranked deepfakes as one of the most serious AI crime threats 
  • EY has warned firms to be on the alert for so-called ‘synthetic media’

We’ve all seen those videos of face-swapped individuals, often a parody of a popular film with a different celebrity’s face superimposed over another’s via the power of artificial intelligence (AI). These videos and images have come to be known as deepfakes, and while many are indeed made for harmless fun, the incredible realism of these videos leaves plenty of room for malicious behavior.

Deeptrace, a deepfake detection technology firm, found that the number of detectable deepfake videos on the internet more than doubled to 49,081 in just the six months between January and June 2020. And while deepfake activity was peaking in the West last year, the first instances of convincing deepfakes here in Asia are starting to surface.

One of the first reported incidents of face-swapping was in a Chinese TV series, after actress Liu Lu was blacklisted in the country and her contract terminated. All of Lu’s scenes had already been filmed for the series, so producers decided to replace her face with another actress’s. Meanwhile in India, doctored videos of Bharatiya Janata Party (BJP) Delhi unit president Manoj Tiwari surfaced just before the local elections. In the deepfaked videos, produced in two languages, a face-swapped Tiwari was seen criticizing his opponents while appealing for votes for his own party.

When the term was first coined, the idea of deepfakes triggered widespread concern mostly centered around the misuse of the technology in spreading misinformation, especially in politics. Another concern that emerged revolved around bad actors using deepfakes for extortion, blackmail, and fraud for financial gain. 

Last year, a UK-based energy firm fell victim to a fraudulent financial scheme. Bad actors used AI-based software to impersonate a chief executive’s voice and demand a transfer of about US$244,000 to a Hungarian supplier.

The CEO of the UK energy firm thought he was speaking with his boss, the chief executive of the firm’s German parent company. He was not: he was speaking to an AI-generated voice that mirrored that of the Germany-based CEO. Cybercrime experts have called this case “unusual,” noting that it is one of the first known voice-spoofing attacks in Europe to clearly capitalize on AI.

A view of the ‘Deepfake’ stand inside the Congress center ahead of the annual meeting of the World Economic Forum. Source: AFP

Investigators traced the money and found that it was transferred from the Hungarian bank account to another bank in Mexico, and from there it was distributed to other locations. 

Rüdiger Kirsch, a fraud expert, told the Wall Street Journal that “whoever was behind this incident appears to have used AI-based software to successfully mimic the German executive’s voice by phone.

“The UK CEO recognized his boss’ slight German accent and the melody of his voice on the phone.”

This is a clear example of how deepfakes capitalize on personalization, where generative AI has the ability to create new yet strikingly similar models based on given datasets. This includes mimicking the finer details of a person’s speech right down to their accent, intonation, and even cadence — that specific speech melody that the duped UK CEO described hearing on the call. 

This tailor-made financial deception can only evolve to become more sophisticated, potentially aiming for larger scams with even bigger sums at stake. Synthetic media that are highly realistic, customizable, and scalable can be wielded to suit the strategy and capabilities of bad actors. 

Ashwin Goolab, a consulting partner at EY for Africa, India, and the Middle East, remarked that advances in deepfake technology make it an emerging threat to business. 

“It’s now easier than ever to fabricate realistic graphical, audio, video and text-based media of events that never occurred, making synthetic, or fake, media one of the biggest new cyber threats to business,” Goolab shared.

Goolab also pointed out the risks that financial firms face if targeted by technologically savvy bad actors leveraging deepfakes for defamation, manipulation, and fraud. 

“A well-timed, sophisticated deepfake video of a CEO saying their company won’t meet targets could send the share price plummeting. Phony audio of an executive admitting to bribing officials is prime fodder for extortion,” Goolab continued. “If released, these could cause serious reputational damage, alienate customers, impact revenue, and contribute to the volatility of financial markets.”

Deepfake content with malicious intent needs to be nipped in the bud, and some action has already been taken. Professional networking platform LinkedIn intercepted close to 19.5 million fake accounts at the registration stage alone and rooted out another two million after registration. In one case, a fake LinkedIn account named ‘Katie Jones’ was used to spy on and phish information from connections; an AI-generated profile picture was enough to dupe unsuspecting professionals into connecting and potentially sharing sensitive information. 

Government bodies are also collaborating with tech giants to combat synthetic content. Ahead of the November election, the US government and TikTok joined forces to ban deepfakes and fight misinformation. Tech giants like Facebook and Microsoft have teamed up with academic institutions in the US for a Deepfake Detection Challenge.

Even as detection and verification tools are progressively refined to root out deepfakes, synthetic content is evolving in step. Hence, a larger, more sustainable framework for fighting deepfakes is crucial.
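Detection tools of this kind ultimately rest on measurable differences between genuine and generated signals. As a loose, hypothetical illustration only (not any vendor’s actual method; the function names and the two synthetic waveforms are invented for this sketch, which assumes NumPy is available), the snippet below reduces two toy “voice” signals to coarse spectral fingerprints and scores their similarity:

```python
import numpy as np

def spectral_fingerprint(signal: np.ndarray, n_bins: int = 64) -> np.ndarray:
    """Reduce a mono audio signal to a coarse magnitude-spectrum fingerprint."""
    spectrum = np.abs(np.fft.rfft(signal))
    # Pool the spectrum into a fixed number of bins so clips of
    # different lengths can be compared on equal terms.
    pooled = np.array_split(spectrum, n_bins)
    return np.array([chunk.mean() for chunk in pooled])

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Score two fingerprints: 1.0 means identical spectral shape."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Two synthetic "voices": same fundamental frequency, different overtones.
t = np.linspace(0, 1, 16_000, endpoint=False)
genuine = np.sin(2 * np.pi * 120 * t) + 0.5 * np.sin(2 * np.pi * 240 * t)
spoofed = np.sin(2 * np.pi * 120 * t) + 0.5 * np.sin(2 * np.pi * 360 * t)

score = cosine_similarity(spectral_fingerprint(genuine),
                          spectral_fingerprint(spoofed))
print(f"similarity: {score:.3f}")
```

Even these two deliberately different signals score fairly close because they share a fundamental frequency; production detectors rely on far richer learned features rather than a single similarity number, which is partly why detection remains an arms race.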

Jon Bateman writes for the Carnegie Endowment for International Peace: “it would be foolish to pin hopes on a silver bullet technology that reliably detects deepfakes. […] Real solutions will blend technology, institutional changes, and broad public awareness.”