Are you prepared for a world in which a malicious entity creates manipulative, convincing deepfake videos of your top executives reciting untruths? Modern deepfake technology is eerily powerful and persuasive, but many organizations and individuals are unprepared to respond to deepfake threats. Here’s what’s happening right now…

Broadcast journalist and “CBS Mornings” co-host Gayle King, actor Tom Hanks and YouTube personality MrBeast have all recently fought back against unsanctioned deepfake videos of themselves circulating on social media.

On Monday, King posted a warning on Instagram and shared a snippet of a video that used her likeness to advertise a weight loss product. “People keep sending me this video and asking about this product, and I have NOTHING to do with this company,” King wrote. She added the words “fake video” to the corresponding image.

“I’ve never heard of this product or used it! Please don’t be fooled by these AI videos,” she said. Representatives for King have requested removal of the fake video from corresponding platforms.

It’s not just King…

Academy Award winner Tom Hanks recently shared a warning with his 9.5 million online followers, citing a similar scam. “There’s a video out there promoting some dental plan with an AI version of me. I have nothing to do with it,” Hanks wrote on Instagram.

Tom Hanks warns social media followers of a deepfake video.

YouTube star MrBeast, whose real name is Jimmy Donaldson, has also recently reported a deepfake scam ad that uses his likeness in an iPhone giveaway video.

“If you’re watching this video, you’re one of the 10,000 lucky people who will get an iPhone 15 Pro for just $2,” the ad announced to viewers. “I’m MrBeast and I’m doing the world’s largest iPhone 15 giveaway. Click the link below to claim yours now.”

Over the years, social media sites have struggled to contend with deepfake content, an issue that other media outlets have brought to light by creating their own deepfakes of public figures.

Legal recourse and regulations

At present, few concrete laws exist around deepfakes and unauthorized AI-generated content, both within the U.S. and around the world.

In Hollywood, the use of actors’ AI-based likenesses is on the negotiating table between the Screen Actors Guild-American Federation of Television and Radio Artists (SAG-AFTRA) and major film studios.

SAG-AFTRA is advocating for protections against the use of its members’ likenesses, voices and performances without explicit consent or compensation. The labor union calls AI’s capacity to mimic creative expression a “real and immediate threat” to actors’ work.

Is your organization prepared?

The aforementioned incidents highlight the need for companies to proactively develop strategies and tools to combat deepfake threats.

  • When it comes to content circulating on social media, companies should encourage employees to verify the authenticity of questionable information and report suspicious content.
  • Brand monitoring tools can assist organizations in quickly spotting false information or deepfake content that could result in reputational damage.
  • Deepfake detection and analysis tools are also available to help identify manipulated content. These tools typically leverage AI and machine learning algorithms to analyze videos, images and audio.
  • Collaborate with cyber security and digital forensics professionals who specialize in deepfake detection and incident response. They can help create and implement custom strategies for mitigating deepfake-related risks.
  • Organizations should also explore legal options for addressing deepfake-related incidents. Determine which potential legal actions can be taken against those who create and perpetuate malicious deepfake content that targets the organization.
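The brand-monitoring step above can be approximated in its simplest form as keyword-based flagging: surface posts that pair a protected name with scam-typical phrasing, then route them to a human reviewer. The sketch below is purely illustrative; the name and phrase lists are hypothetical examples, and a production tool would rely on the AI-driven detection services described above rather than string matching.

```python
# Minimal sketch of a brand-monitoring heuristic (illustrative only).
# PROTECTED_NAMES and SCAM_PHRASES are hypothetical watchlists; a real
# deployment would use a dedicated monitoring/detection service.

PROTECTED_NAMES = {"mrbeast", "tom hanks", "gayle king"}
SCAM_PHRASES = {"giveaway", "click the link", "claim yours", "weight loss"}

def flag_post(text: str) -> bool:
    """Flag a post that mentions a protected name alongside at least
    one scam-typical phrase, for escalation to human review."""
    lowered = text.lower()
    mentions_name = any(name in lowered for name in PROTECTED_NAMES)
    mentions_scam = any(phrase in lowered for phrase in SCAM_PHRASES)
    return mentions_name and mentions_scam

posts = [
    "I'm MrBeast and I'm doing the world's largest giveaway. Click the link below!",
    "Great interview with Gayle King on CBS Mornings today.",
]
flagged = [p for p in posts if flag_post(p)]
print(len(flagged))  # count of posts escalated for review
```

A heuristic like this trades precision for speed: it will flag some legitimate posts (the reviewer filters those out), but it catches the scam-ad pattern seen in the MrBeast example, where a celebrity name and a "click the link" hook appear together.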

For more insights into deepfakes, voicefakes and AI-related fraud, please see CyberTalk.org’s past coverage. Lastly, to receive timely cyber security insights and cutting-edge analyses, please sign up for the cybertalk.org newsletter.