Keely Wilkins is an Evangelist with the Office of the CTO as well as a Pre-Sales Security Engineer in Virginia. She has worked in the technology industry for nearly thirty years, holds an MS in Cybersecurity, and has earned a variety of certifications. Keely endeavors to find balance among transparency, predictability, and security.

In this outstanding interview, Keely Wilkins discusses deepfakes, deep learning, synthetic content, the weaponization of information, and how these elements affect your future.

How are deepfakes made?

Making a deepfake starts with thousands of source data files (video, audio, and images), a vision of the intended end result, and an Artificial Intelligence (AI) capable of learning the data, re-creating it from the measurements taken during its learning phase, and assessing its own accuracy. The AI goes through numerous iterations of learning, generating, and assessing before it declares its synthesized output complete. The human at the controls may then fine-tune aspects of the synthesized content to make it more convincing to the eyes and/or ears of other humans.

The US Government Accountability Office provides a detailed definition here.
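
To make the learn-generate-assess cycle described above more concrete, here is a minimal sketch of one common technique behind synthetic media, a generative adversarial network, trained on toy one-dimensional data rather than real video, audio, or images. Every model, size, and number here is a hypothetical stand-in; an actual deepfake pipeline is vastly larger, but the feedback loop is the same.

```python
# Minimal sketch of the generate-and-assess loop behind much synthetic media
# (a generative adversarial network), shown on toy 1-D data. All names and
# sizes are hypothetical stand-ins, not an actual deepfake pipeline.
import torch
import torch.nn as nn

torch.manual_seed(0)

def real_samples(n):
    # "Source data": samples from the distribution we want to imitate
    # (a stand-in for genuine footage).
    return torch.randn(n, 1) * 0.5 + 3.0

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
assessor  = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))  # the "assessing" half

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
a_opt = torch.optim.Adam(assessor.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(2000):  # numerous iterations of learning, generating, assessing
    # 1. The assessor learns to tell real source data from synthesized output.
    real = real_samples(64)
    fake = generator(torch.randn(64, 8)).detach()
    a_loss = loss_fn(assessor(real), torch.ones(64, 1)) + \
             loss_fn(assessor(fake), torch.zeros(64, 1))
    a_opt.zero_grad(); a_loss.backward(); a_opt.step()

    # 2. The generator adjusts until its output is judged "real" by the assessor.
    fake = generator(torch.randn(64, 8))
    g_loss = loss_fn(assessor(fake), torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

print("mean of synthesized samples:", generator(torch.randn(1000, 8)).mean().item())
```

After enough rounds, the synthesized samples become statistically hard to tell from the source data; swapping the toy numbers for faces and voices is what makes the output convincing to human eyes and ears.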

How does synthetic content help people?

There are many examples of deepfake technology used for good.

Positive uses include:

  • Editing movie content to change its rating without having to reshoot scenes
  • Generating content in films that allows projects to move forward when an actor is no longer available
  • Replicating lost artistic masterpieces using a combination of AI and 3D printing
  • Improving diagnostic and prediction accuracy for medical patients
  • Recreating or generating a voice to help people communicate
  • Recreating historical events for immersive educational experiences
  • Building adaptive chatbot conversations for online assistance

The short film “In the Event of a Moon Disaster” demonstrates the potency of deepfake technology as of 2019. The face of the president is nicely replicated, but take note of his jawline, specifically where it overlaps his neck; the AI had trouble smoothing that out. This film won numerous awards in 2020; refer to the About section for additional information.

How is deepfake technology being weaponized?

Anything can be used as a weapon, given intent and opportunity. The word “deepfake” itself sounds sinister. In contrast, the term “synthetic content” sounds much more approachable. They refer to the same thing.

Weaponizing a technology whose primary function is to mimic human behavior, appearance, and speech is no longer difficult. Feed all available source data into the AI for its learning process, then craft a malicious, fraudulent output. The newly generated video or audio is then distributed through existing communication channels: email, SMS, social media, and local or national news agencies.

If it is a well-crafted influence campaign, there will be multiple deepfakes in circulation aimed at the same objective. The intent behind the deepfakes helps to determine whether criminal, religious, or moral laws are being violated.

Consider what a criminal could accomplish if they could:

  • Fabricate “evidence” of anyone doing or saying anything
  • Edit historical video or audio of world leaders to alter narratives
  • Produce fake medical documents
  • Influence the actions of world leaders with synthetic content

In recent months, there has been an uptick in the use of deepfakes to secure remote interviews for tech jobs. The criminal steals or appropriates the identity of a viable candidate from LinkedIn (or a similar site), applies for a job that requires admin-level access, and tries to get hired for a remote position. The FBI PSA on this scam is available at ic3.gov.

What are some ways people can spot a deepfake?

Identifying, or spotting, a deepfake can be challenging. There is a saying in cyber security: attackers only have to be right once; defenders have to be right every time. With deepfakes, the creator need only build consensus to advance fiction as fact. Compound that with spin doctors across social media platforms, and a deepfake campaign can grow legs, influencing people to act on false information.

According to a VMware study that polled 125 cyber security and incident response professionals, email was the top delivery method for deepfake attacks, accounting for 78% of them.

Knowing this, we can prioritize our security efforts on email, phishing, content-filtering, and other established prevention mechanisms.
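
As one illustration of what prioritizing email can look like in practice, the hypothetical sketch below flags inbound messages that carry audio or video attachments so they can be routed for closer inspection before delivery. The file types, function names, and the flag-for-review policy are assumptions made for this example, not a prescribed control or a feature of any particular product.

```python
# Hypothetical sketch: flag inbound email carrying audio/video attachments
# for additional review before it reaches end users. The policy details
# (what to flag, what happens next) are illustrative assumptions only.
from email import policy
from email.parser import BytesParser

SUSPECT_TYPES = ("video/", "audio/")          # media that could carry synthetic content
SUSPECT_EXTENSIONS = (".mp4", ".mov", ".mp3", ".wav", ".m4a")

def needs_review(raw_message: bytes) -> bool:
    """Return True if the message carries media attachments worth a closer look."""
    msg = BytesParser(policy=policy.default).parsebytes(raw_message)
    for part in msg.walk():
        ctype = part.get_content_type()
        fname = (part.get_filename() or "").lower()
        if ctype.startswith(SUSPECT_TYPES) or fname.endswith(SUSPECT_EXTENSIONS):
            return True
    return False

if __name__ == "__main__":
    sample = (b"From: a@example.com\r\nTo: b@example.com\r\n"
              b"Subject: test\r\nContent-Type: text/plain\r\n\r\nhello")
    print(needs_review(sample))   # False: plain text, no media attachments
```

A filter like this does not decide whether media is synthetic; it simply narrows the volume of content that warrants deeper analysis by the existing email and content-filtering controls.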

Deepfake detection tools like Sensity and Operation Minerva are available and continue to improve anti-deepfake technologies, but such detection is not built into today’s security stack.

Since not all deepfakes are malicious, the objective cannot be to prevent the distribution of all deepfake content. Doing so could prevent good (educational, medical, etc.) content from being shared. The objective, from a cyber security perspective, is to prevent all malicious content from gaining access to our organizations.

Sensity, a threat intelligence company based in Amsterdam, found that 86% of the time, anti-deepfake technologies accepted deepfake videos as real. It is an interesting statistic, but consider how deepfakes work: countless iterations of creation and assessment. The AI learns from each iteration.

There is disparity in the maturity of the AIs used to create synthetic media versus the AIs used to detect it.

We need to get to a point where deepfake detection mechanisms are more proactive than reactive.  As we approach that milestone, we must remain mindful of the good uses of deepfake technology.  Just because content is flagged as a deepfake doesn’t mean it is malicious.  How do we programmatically determine intent?  Existing AI models for behavioral analysis will be key in making such determinations and those AIs will learn from each encounter with synthetic content.

In the meantime, it is not appropriate to burden the end user with making high-value decisions on the fly about the legitimacy of media content. Deepfakes are designed to trick people.

For individuals, situational awareness is the key defense against all social-engineering attempts. Take a step back and assess the situation before taking any action.

  • Is your first response to the video/image/audio one of emotion?
  • Is there a sense of urgency to act?
  • Does the message seem reasonable or sensational?
  • Can you validate the information through independent channels (e.g., factcheck.org, PolitiFact, or Snopes)?
  • Are you prompted to download a file, enter login credentials, or provide other PII?

In real-time situations, like remote interviews, do not take the humanness of the subject for granted. The accuracy of the synthetic media depends on the source data of the person being imitated. Source data capturing a person’s profile (side view) is difficult to obtain, so it is difficult to fake. AIs also have difficulty with the nuances of skin tone, hair color and texture, light reflections in eyes, the shape and movement of mouths, and the glint of jewelry. It is acceptable to be reasonably and politely paranoid in remote interviews.

What can be done to manage the use of deepfake technology?

There is an effort to watermark original content and then protect it with blockchain technology. This shows promise. The challenge I see with this is the volume of original content that would have to be authenticated, watermarked, and protected.
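
To make the authenticate-and-protect idea concrete, here is a minimal sketch assuming a simple content-fingerprint approach: original media is hashed and recorded in a registry, and any altered copy no longer matches. The registry below is simulated with a plain dictionary standing in for a blockchain or other tamper-evident ledger; the file names, registry structure, and provenance note are hypothetical.

```python
# Hypothetical sketch: fingerprint original media so copies can later be
# checked against a tamper-evident registry. A plain dictionary stands in
# for the blockchain/ledger component here.
import hashlib
from pathlib import Path

registry: dict[str, str] = {}   # fingerprint -> provenance note (ledger stand-in)

def fingerprint(path: Path) -> str:
    """SHA-256 hash of the file contents; changes if even one byte is altered."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def register_original(path: Path, note: str) -> str:
    fp = fingerprint(path)
    registry[fp] = note
    return fp

def is_registered(path: Path) -> bool:
    return fingerprint(path) in registry

if __name__ == "__main__":
    sample = Path("original_clip.bin")            # hypothetical original content
    sample.write_bytes(b"\x00" * 1024)
    register_original(sample, "studio master copy")
    print(is_registered(sample))                  # True: matches the registered original
    sample.write_bytes(b"\x01" + b"\x00" * 1023)  # simulate tampering or re-synthesis
    print(is_registered(sample))                  # False: fingerprint no longer matches
```

The sketch also hints at the scale problem mentioned above: every piece of original content would need to be fingerprinted and registered before a mismatch means anything.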

The watermark-plus-blockchain effort, combined with deepfake detection technology, could be an interim solution as the technology matures to a point where some level of regulation is applied. The question of intent is not addressed in this scenario. Combining these two technologies with behavioral analysis AIs may be the long-term answer.

AI Ethics Committees must continue to be involved in the development of the technology as well as the intended use of it.

Are deepfakes illegal?

The technology itself is benign.  How the technology is used may result in legal action, but that must be examined on a case-by-case basis.

Is there a way to report the malicious use of deepfake technology?

If the incident occurs on your work accounts, devices, or services, follow your organization’s procedures for reporting social-engineering or cyber attacks.

If the incident occurs on your personal accounts, devices, or subscription services you can report it directly to the FBI at www.ic3.gov.

This topic is evolving. Look for more articles on this and related topics in the near future. For more from this author, click here and here.

Lastly, to receive more timely cyber security news, insights into emerging trends and cutting-edge analyses, please sign up for the cybertalk.org newsletter.