Micki Boland is a global cyber security warrior and evangelist with Check Point Technologies’ Office of the CTO. Micki has over 20 years in ICT, cyber security, emerging technology and innovation. Micki’s focus is helping customers, system integrators, and service providers reduce risk through the adoption of emerging cyber security technologies. Micki is an ISC2 CISSP and holds a Master of Science in Technology Commercialization from the University of Texas at Austin, and an MBA with a global security concentration from East Carolina University.
In this two-part interview series, expert Micki Boland discusses deepfake technologies and how they’re distorting our reality. Get amazing insights from an extremely knowledgeable expert in the field. Did you miss part one of this series? Check it out here!
Are organizations/authorities thinking about or removing open source deepfake tools from the web?
As it stands, there appears to be limited interest in constraining or removing the open source deepfake tools available on the web. As with all AI, ethical utilization is an important topic of consideration and concern for data scientists and the AI community, as well as for individuals, enterprise organizations, and governments. The number one shortcoming of AI is its lack of abstract reasoning and real-world common sense. While deepfake tools are being rapidly developed and shared in the open source community, there is no ethical oversight regarding how these tools are being utilized, and we are already seeing that lack of common sense play out.
If we can no longer trust what we see, will people potentially write off real events, like humanitarian tragedies, as fake?
This is a valid concern. RAND Corp coined the phrase “Truth Decay” to describe this blurring of the line between fact and fiction. With deepfakes, and with social media and online news now serving as primary sources of information, it becomes increasingly difficult for consumers of media and content to discern fact from fiction. I will give you a very concrete false flag attribution situation involving deepfake technology.
In 2017, Syrian fighter jets dropped chemical munitions on the village of Khan Al Shekhoun, killing 100 people and severely wounding 200 more. A false flag operation (falsely attributing the attack to an enemy or adversary) was then conducted. This resulted in a Twitter storm surrounding two competing narratives: 1) the Syrian president was responsible for the chemical attack, and 2) the false narrative, launched by the Russian and Syrian governments, attributing the attack to US and NATO forces. The false narrative was disseminated roughly 10X more widely. When social media influencer Mike Cernovich (with more than 500,000 followers) was persuaded to spread the false narrative, the fake news story became the number one trending topic on Twitter within 24 hours.
Hypothetically—Could the US Supreme Court be fooled by a deepfake?
Wow, this is a seriously thoughtful question! Deepfakes, and the platforms and technologies used to disseminate them, should be on the SCOTUS radar. As far as deepfakes go, the areas most likely to involve legal challenges are deepfakes used for political candidate or election deception, nonconsensual deepfake pornography, and child exploitation. Digging into the research, I found only one or two articles in this arena.
Only two states, Virginia and California, have laws dealing with faked or deepfaked media. While 46 states have some ban on revenge pornography, those laws provide no recourse for nonconsensual deepfake pornography. The UK bans revenge porn, but the law does not encompass anything that has been faked. No other country appears to have national laws dealing with this. In the future, we will likely see deepfake legal challenges ranging from criminal deception and defamation to privacy violations and copyright infringement.
How is the cyber security industry working to block the proliferation of deep fakes?
In the United States, from a policy perspective, there is increasing debate around treating social media platforms as publishers in the context of the Communications Decency Act (CDA) Section 230. Under Section 230 today, a social media platform is not liable for media posted by its users (this would include deepfakes or other synthetic media, such as newsfakes).
Right now, there is growing demand from private citizens to hold technology platforms accountable for disseminating harmful or slanderous content uploaded by their users. Social media platforms have a stake in detecting deepfakes and blocking their proliferation, both for integrity and in preparation for weakened protections under Section 230. This is an area where social media platforms will need deepfake detection technology that accurately identifies real deepfakes; otherwise, the result could be the censoring of factual content.
Anything else that we haven’t asked, but that you’d like to share?
I will seek to learn more about the legal perspectives and evolving usage of deepfake technology, and I would love to have another discussion with Cyber Talk around this specific area. Thank you, Cyber Talk!