Biden audio deepfakes alarm experts

February 14th – Last month, deepfake robocalls that impersonated U.S. President Joe Biden raised alarm among government officials.

In New Hampshire, an AI-generated version of Biden's voice urged voters not to go to the polls in this year's primary.

Statement authentication

The emergence of these robocalls, combined with a general rise in deepfakes and AI-based content, has prompted White House advisors to devise a statement authentication plan.

At present, White House officials are exploring cryptographic verification.

Ben Buchanan, Special Advisor for AI, told Business Insider that the team is looking into “cryptographically verifying our own communications so that when people see a video of the President on whitehouse.gov, they know this is a real video…and there’s some signature in there that does that.”

Cryptographic verification

Cryptographic verification means that a piece of content is signed with a private key, producing a digital signature that travels with it. The corresponding public key is made available so that anyone can check the signature. If an attacker alters the content, the signature no longer matches it, or is stripped out entirely.

As a result, verification against the known public key fails, and viewers can clearly see that the content was manipulated.
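To make the mechanics concrete, below is a minimal sketch of content signing and verification, assuming Python's cryptography library and Ed25519 keys. The keys, messages, and workflow are illustrative assumptions, not the White House's actual scheme.

```python
# Minimal sketch: sign content with a private key, verify with the
# public key, and show that tampering breaks verification.
# Assumes the Python "cryptography" library and Ed25519 keys;
# the content bytes below are hypothetical examples.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# The publisher generates a key pair and keeps the private key secret.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

# Signing: produce a digital signature over the content bytes.
content = b"Official video published on whitehouse.gov"
signature = private_key.sign(content)

# Verification: anyone holding the public key can check the signature.
try:
    public_key.verify(signature, content)
    print("Signature valid: content is unaltered.")
except InvalidSignature:
    print("Signature invalid: content was modified or forged.")

# Changing even one byte of the content causes verification to fail.
tampered = b"Official video published on whitehouse.gov!"
try:
    public_key.verify(signature, tampered)
    print("Signature valid.")
except InvalidSignature:
    print("Tampered content detected: signature does not verify.")
```

In practice, the signature would be embedded in or distributed alongside the video file, and the public key published at a trusted location so viewers' software can perform this check automatically.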

Deepfake proliferation

The President is not the only target of malicious robocalls or disturbing deepfakes. New research shows a 704% increase in "face swap" deepfake fraud attempts against identity verification systems. Threat actors are now using virtual cameras, emulators, and free or low-cost deepfake tools.

To illustrate what that looks like in practice: earlier this month, Hong Kong police reported that a team of cyber criminals leveraged deepfakes to trick an employee of a multinational company into sending them $25 million.

In the scam, cyber criminals weaponized deepfakes on a video call, leading the employee to believe that he was talking to the CFO of the company and other colleagues.

“Employees may still assume today that live audio or video cannot be faked…security teams should see this as another threat to their organizations and [should] update their practices and training accordingly,” stated Nick France, CTO at Sectigo.

Related resources

  • Shocking AI deepfakes convincingly imitate celebrities & fool fans – Read story
  • AI misinformation: World’s biggest short-term threat – See story
  • How governments can address today’s cyber security challenges – Watch video
  • Cyber security predictions for 2024 – Click here