Micki Boland is a global cyber security warrior and evangelist with Check Point Technologies’ Office of the CTO. Micki has over 20 years in ICT, cyber security, emerging technology and innovation. Micki’s focus is helping customers, system integrators, and service providers reduce risk through the adoption of emerging cyber security technologies. Micki is an ISC2 CISSP and holds a Master of Science in Technology Commercialization from the University of Texas at Austin, and an MBA with a global security concentration from East Carolina University.
In this two-part interview series, Micki Boland discusses the new realities created by deepfake technology. Individuals and organizations can suffer grievous consequences due to the spread of deepfakes. Can detection technology keep up? Get outstanding expert insights below.
For those who may be unfamiliar, tell us about what deepfake videos are…
A deepfake combines deep learning with image, video, and audio inputs to produce high-quality video or audio of a person doing and saying things that never happened. Deepfakes are typically created with a type of neural network called a Generative Adversarial Network (GAN). GANs are used to create fake videos for political disinformation, as well as for political satire and celebrity attention-getting pieces.
Voicefakes are increasingly being utilized for impersonation attacks. Lyrebird is a deep-learning voice platform used to create voice imitations that realistically mimic any person's voice; Google's Voice Builder does this as well.
For the techies: One of the technologies that helps make deepfakes so realistic is the Generative Adversarial Network (GAN), a class of machine learning model. A GAN pairs two neural networks, a generator and a discriminator. The generator takes training data and learns how to recreate it, while the discriminator tries to distinguish the training data from the recreated data produced by the generator. The two "artificial intelligence actors" play this adversarial game repeatedly, each getting iteratively better at its job. To see an NVIDIA GAN in action, navigate to the website "This Person Does Not Exist". If you want to learn more and work with GANs, visit DeepFaceLab.
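For readers who want to see the adversarial game in code, here is a minimal toy sketch (my own illustration, not from any production deepfake tool). The "real" data is just numbers drawn near 4.0, the generator learns a single shift parameter, and the discriminator is a tiny logistic classifier; the alternating updates mirror the generator-versus-discriminator loop described above.

```python
import numpy as np

# Toy 1-D GAN: real data ~ N(4.0, 0.5). The generator learns one
# parameter theta (fake sample = theta + noise); the discriminator is
# a logistic classifier D(x) = sigmoid(w*x + b). Gradients are hand-derived.

rng = np.random.default_rng(0)

REAL_MEAN = 4.0
theta = 0.0          # generator parameter
w, b = 0.0, 0.0      # discriminator parameters

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

lr_d, lr_g, batch = 0.02, 0.05, 32

for step in range(3000):
    # Discriminator update: push D(real) toward 1 and D(fake) toward 0.
    real = rng.normal(REAL_MEAN, 0.5, batch)
    fake = theta + rng.normal(0.0, 0.5, batch)
    d_real = sigmoid(w * real + b)
    d_fake = sigmoid(w * fake + b)
    gs_real = d_real - 1.0     # grad of cross-entropy loss w.r.t. logit, real labelled 1
    gs_fake = d_fake           # grad w.r.t. logit, fake labelled 0
    w -= lr_d * np.mean(gs_real * real + gs_fake * fake)
    b -= lr_d * np.mean(gs_real + gs_fake)

    # Generator update: push D(fake) toward 1, i.e. fool the discriminator.
    fake = theta + rng.normal(0.0, 0.5, batch)
    d_fake = sigmoid(w * fake + b)
    theta -= lr_g * np.mean((d_fake - 1.0) * w)   # ds/dtheta = w

print(f"generator mean after training: {theta:.2f} (real mean {REAL_MEAN})")
```

After training, the generator's output distribution has drifted from its starting mean of 0 toward the real mean of 4, exactly because the discriminator kept punishing samples it could tell apart. Real deepfake GANs apply the same loop to millions of image pixels rather than a single number.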
We’ve reached an “ominous point” where deepfake tech can easily be abused. What should we do now?
Indeed. Things will only get more interesting! Deepfake technology, along with other AI technologies, can be utilized in irregular warfare and proxy attacks; in disinformation and disruption campaigns to manipulate and influence public opinion; as a means to foment criminal violence; as a method of infiltrating organizations to conduct fraud, scams, and harassment; to hijack legitimate human accounts for impersonation; and to distribute malware.
How good are tech-based deepfake detectors?
We may soon see a "Claymation"-style "Battle of the AI" video: deepfake detector versus deepfake! Right now, the quest is on to detect deepfakes, and AI is the answer. In 2020, the Deepfake Detection Challenge (DFDC) was launched by the Partnership on AI's Media Integrity Steering Committee together with a consortium of academic groups, AWS, Facebook, and Microsoft. The goal of the DFDC was to drive innovation in building new technologies to help detect deepfakes and manipulated media. Building on the DFDC, Microsoft launched its "synthetic media" detector, Video Authenticator, in September 2020.
Why is it becoming increasingly difficult to identify deepfake videos?
The creator of a deepfake does not have to be an expert to make believable deepfake videos; the creator need only feed the GAN as much input data on the target as possible. Target images, video clips, and sound bites can be obtained via the internet, social media, mobile applications, and online news platforms. In August 2020, Law Technology Today reported that "300,000,000 photos are uploaded to Facebook every day, and 46,740 photos are uploaded to Instagram every minute". More input data enables the GAN to generate higher-quality deepfake videos. Humans, meanwhile, have limited ability to detect deepfake videos, and the platform on which a person consumes a deepfake video is also an important consideration: if the person trusts the platform hosting or disseminating the video, and considers it a reliable and authentic media source, then "believability" is far easier to achieve.
What kinds of threats do deepfakes pose for organizations?
A very clear example of a deepfake used in direct financial fraud involved a voice fake. The target was a UK-based firm, and a voice fake of its CEO was used to request an Electronic Funds Transfer (EFT) to an offshore account. The voice fake was so good, so well matched to the CEO's German accent and manner of speaking, that the fraudsters successfully received EFTs twice before being stopped on a third attempt, and only because of technical issues with the transfer, not the voice fake! The fraudsters took the company for over 500K GBP.
And for individuals within those organizations, what risks can deepfakes pose?
First, individuals within organizations should always maintain their guard and stay aware of their online and digital surroundings, including social media, online news, mobile applications, and media platforms. Deepfakes and other AI technologies provide an alternative method for scamming a person or an organization. Individuals need to be aware that they may be targeted for "friending" by social media trolls. In addition, using automated bots, deepfakes can be deployed as clickbait, paired with phishing campaigns, and used in concert with highly targeted, well-crafted social engineering attacks (as with the UK firm above). The adversary may seek to gather information and intelligence about the individual, their family, their networks, and their company or organization.
Second, users and organizations can protect themselves and reduce risk and exposure by building awareness and by enforcing governance, risk, and compliance (GRC), ethical, and cybersecurity guardrails on corporate use and consumption of social media, online media, and mobile applications.
Did you like this interview? Check back for part two next week!