Deepfake video clips started to appear across the internet in late 2017. By the beginning of 2019, more than 7,900 deepfake videos existed online, and a mere nine months later that figure had nearly doubled to 14,678. The technology has continued to proliferate since then, and its ongoing development poses a threat to institutions, businesses, and personal reputations.

In early 2020, a band of cyber criminals used a fake voice to impersonate a company director. The phony voice told a bank manager about a new business acquisition that required a transfer of $35 million. Believing he was speaking with a genuine executive, the bank manager began moving the funds straight into the criminals' accounts.

This corporate heist represents the second known instance of fake voice technology being leveraged for damaging purposes. The first occurred a year earlier, in 2019, when the manager of a UK firm received a call from fake voice tech impersonating the CEO of the group's parent company. At the request of the caller, believed to be the CEO, the manager transferred more than $250,000 to an external account.

How can organizations guard against deepfake threats?

  • Companies can provide awareness training around deepfake technologies. Training can focus on how hackers typically use these technologies and how to detect them.
  • Never trust, always verify. Employees who receive phone calls requesting immediate financial transfers should verify the request through a second, independent channel.
  • IT teams can deploy software-based detection tools that flag synthetic audio and video.
  • In preparation for a potential deepfake attack, organizations can maintain customized incident response strategies. Roles, responsibilities and an action plan should be outlined and communicated to relevant personnel.
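The "never trust, always verify" rule above can be expressed as a simple policy check. The sketch below is purely illustrative: the threshold, field names, and function are hypothetical assumptions, not a real product's API.

```python
# Hypothetical sketch: decide when a transfer request must be
# confirmed through a second channel (e.g., a call back to a known
# number). The threshold and request fields are assumptions.

APPROVAL_THRESHOLD_USD = 10_000  # assumed policy threshold

def requires_out_of_band_verification(request: dict) -> bool:
    """Return True when a transfer request should be re-verified
    through an independent second channel before execution."""
    high_value = request.get("amount_usd", 0) >= APPROVAL_THRESHOLD_USD
    # Voice and video channels are exactly where deepfakes operate,
    # so requests arriving over them always trigger re-verification.
    spoofable_channel = request.get("channel") in {"phone", "voicemail", "video_call"}
    return high_value or spoofable_channel

request = {"amount_usd": 35_000_000, "channel": "phone"}
print(requires_out_of_band_verification(request))  # True
```

A real policy would live in workflow tooling rather than code, but the point stands: the decision to verify should be mechanical, not left to an employee's judgment under pressure.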

Large-scale deepfake monitoring, tracking and removal

  • Improved digital archiving will allow people and algorithms to more easily identify fake voice and fake video clips.
  • While computers can spot many of today’s deepfakes, current technology remains imperfect. Many deepfakes go undetected. However, Microsoft and, separately, the Defense Advanced Research Projects Agency (DARPA) are working on programs to help spot synthetic media. Development of computer tech to pinpoint deepfakes will improve outcomes.
  • The Content Authenticity Initiative, a joint effort by Twitter, Adobe, The New York Times, the BBC and others, aims to create mechanisms that can verify the authenticity of published digital content. This effort is intended to help consumers avoid 'fake news'.
  • Another idea on the horizon is the development of a ‘universal timestamp,’ which would provide an unalterable chronology of digital publications. This can potentially help people prove that certain authentic content existed prior to the development of fake, spin-off content.

In summary

Artificial intelligence and machine learning tools make it possible to create artificial versions of any video or any voice. The publication of and widespread damage associated with fictitious content is likely to increase in coming years.

By way of comparison, the falsification of documents has been possible for centuries. However, the difficulty of doing so, document authentication tactics, and legal penalties have broadly dissuaded people from the practice. As a global society, there are measures that we can take to disrupt the distribution of deepfakes and limit the societal harm they cause.

For more expert insights into deepfake technology, see Cyber Talk’s past coverage. Lastly, to get cutting-edge insights, analysis and resources in your inbox each week, sign up for the Cyber Talk newsletter.