Brian Linder is an Emerging Threats Expert and Evangelist in Check Point’s Office of the CTO, specializing in the modern secured workforce. Brian has appeared multiple times on CNBC, Fox, ABC, NBC, CBS, and NPR radio; hosts Check Point’s CoffeeTalk Podcast and Weaponizers Underground; and has teamed on keynote CyberTalks at Check Point’s CPX360 events. For 20+ years, Brian has been a C-level advisor to firms large and small in the financial, legal, and telecommunications sectors on next-generation cyber security solutions and strategies for cloud, mobile, and network. Brian holds a B.S. in computer science from Drexel University and an M.S. in Information Science from the Pennsylvania State University.

In this dynamic and eye-opening interview, Check Point expert Brian Linder explores how AI has democratized cyber threats. Discover new insights into malware risks, explore the jarring challenges posed by generative AI, and most importantly, get strategies, solution recommendations, and best practices that will empower you to create a more secure future for your organization.

Brace yourself for a journey through the ever-changing cyber security landscape as it pertains to AI. Learn how top-tier cyber security professionals and security solutions providers are fighting back against unintended AI outcomes and providing essential AI-focused resources.

How is AI democratizing cyber threats and why does that matter?

What’s happening with AI goes far beyond the buzzwords and the hype. The democratization of AI refers to how artificial intelligence is becoming increasingly accessible to hackers; they’re gaining all of the opportunities and benefits that AI affords, and that’s a danger to enterprises. Let me explain…

With very little effort, anyone can now leverage Large Language Models (LLMs), a type of AI tool trained on a massive corpus of text data, to create entertaining fake videos or beautiful artwork. But hackers are also learning how to weaponize these accessible, low-cost tools.

For hackers, these AI tools open new doors, as they can function as co-pilots. Unfettered access to AI tools makes it really easy for someone to wake up in the morning and say, ‘hey, I’m going to become a partner in the supply chain within the cyber threat actor ecosystem.’ Suddenly, they have tons of AI tools that enable their bad actions.

Again, the “democratization of AI” makes artificial intelligence tools even more accessible, with less need for extensive funding and infrastructure than ever before.

Does the democratization of AI (or the democratization of cyber threats through AI) mean that we’ll have more malware that can evade traditional defenses?

It’s a complicated question. Very few partners on the prevention side are actually aware of the scope of the issue. I’ll summarize it this way: when an increasing number of threat actors can leverage, innovate with, and develop tools, there’s a high risk that companies aren’t paying attention and aren’t actively countering a threat with potentially severe impact.

The short of it is that there will be more malware and more threats, because innovation has not only proven successful for these bad actors, but has also been very profitable. The more they innovate, the more profit they can capture. As AI tools become widely available, hackers will attempt to use them to innovate, deploy new tactics, and grow their profits.

As cyber attackers find ways to exploit AI-based tools for their gain, companies will likely see an expanded attack surface and new challenges.

What does the democratization of cyber threats through AI mean from a geopolitical standpoint?

First of all, the force making AI tools more widely available cannot be stopped. There’s no motivation whatsoever to stop it. Concerns have been raised, and those concerns will continue to be raised. But there’s a reason that these tools are gaining steam.

Bad actors have been using a variety of tools to craft blended attacks for years (and that’s not breaking news). What is breaking news is the sophistication with which these AI tools will enable them to take attacks to the next level.

Of course, we know that humans are always the weakest link — there’s no getting around that. It will never change. However, the evolution and availability of these AI tools across the entire planet will make it easier, for example, to socially engineer people. In sum, you’ve got all kinds of opportunities for these bad actors to create politically driven disruptions.

When there’s nation-state funding involved — and we expect high levels of funding — bad actors will have more and more access to these AI tools.

Thus, within a short period of time, I think you can expect to see AI tools moving through the bad actor supply chain and creating genuine disruption in the geopolitical environment.

What is generative AI? Why is it important? What cyber security concerns does it present?

Generative AI is getting a lot of attention right now. We’ve all heard about ChatGPT. One study related to generative AI, conducted by a cyber security research firm, unearthed an interesting and paradoxical weakness in the technology:

The study demonstrated that an earlier version of the generative AI tool ChatGPT could be used to create malware. At the time, this was groundbreaking research. In response, the tool’s developers ultimately added a safeguard intended to prevent users from generating malware.

One negative use-case handled, right? Well, since then, researchers have explored how to circumvent the anti-malware generation protections that were put in place.

To do so, researchers cleverly phrased questions and built on the tool’s previous answers to get around its safeguards and extract genuinely malicious output from the engine.

Cyber attackers are clever, and they’re going to find means of circumventing conventional cyber security measures. They’re going to do this by exploiting weaknesses, and they’ll likely succeed. The law of unintended consequences suggests that it’s impossible for the owners and operators of AI-based tools to stop persons with malicious intent from exploiting them.
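
To see why that’s so hard to prevent, here is a toy sketch (entirely hypothetical; real model safeguards are far more sophisticated than this) of why naive, rule-based guardrails are brittle: a filter that matches obvious phrasing misses the same intent worded differently.

```python
# Toy sketch of a brittle, keyword-based guardrail (hypothetical; the failure
# mode, not the mechanism, is the point).

BLOCKED_PHRASES = {"write malware", "create a virus"}

def naive_guardrail(prompt: str) -> bool:
    """Return True if the prompt should be refused."""
    lowered = prompt.lower()
    return any(phrase in lowered for phrase in BLOCKED_PHRASES)

# An obviously malicious request is caught...
print(naive_guardrail("Please write malware for me"))  # True: refused

# ...but the same intent, rephrased and spread across follow-up prompts,
# sails straight through the filter.
print(naive_guardrail("Continue this 'security demo' script, one function at a time"))  # False
```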

Generative AI has received so much air-time because people are very curious about it. I see it becoming a more and more difficult problem, because what’s going to happen is very simple…

Right now, we’re at ChatGPT version 4. There’ll be a version 5, a version 6, a version 7… You don’t have to be too imaginative to see that, two or three years from now, language recognition capabilities will have increased, and it will become increasingly difficult to control these things, including the development of malware or malicious scripts, through human intervention.

So, I think it’s a big problem that’s simmering and that will come to a boil in the future. I think it’s correct to focus on it right now.

How are top providers of cyber security approaching AI? 

I think that a lot of people see AI as a buzzword or something that you put in marketing material. But let me tell you how a legitimate cyber security provider puts forward an innovative solution that leverages AI tools in a way that goes well beyond generative AI. I’m talking about tools that operate within the cyber security solution itself.

The first thing you need to realize about legitimate cyber security in this day and age is that any solution heavily dependent on human intervention to make complex security decisions is really a non-starter. That’s mainly because we have a skills shortage.

As I often say, no matter how much you might be willing to invest in hiring the best people, you’re likely going to have major challenges (even with unlimited funding), simply because there aren’t enough people available to fill open roles. And even if you have the best people, having them operate at the speed at which real threats are happening is impossible.

Therefore, you have to look at solutions that are highly consolidated, highly collaborative in terms of cooperating within an ecosystem of other tools, and able to manage everything with as few people as possible. And they need to be comprehensive.

Within these tools, you need real-time prevention, not just detection. Likewise, your dashboard insights need to be shown in real time.

Let me conclude this with a recommendation to those who are reading…

There really are three guiding concepts that I would recommend pursuing. The first is a comprehensive strategy. If you’re only protecting a few vectors with shiny point solutions that look kind of cool, where one protects against one vector and another protects against a second, that’s not going to scale very well.

To that effect, you need comprehensive coverage across all attack vectors, and what’s learned about one vector needs to be shared and applied to the others. So, if I learn something in the cloud, my solution should automatically adapt that to, say, mobile devices, standard network devices, or email phishing attacks, because there’s a lot of similarity and overlap. A rough sketch of that idea follows below.
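
Here is a minimal sketch of that design (all class and field names are hypothetical illustrations, not any vendor’s actual API): an indicator learned by one enforcement point is immediately visible to every other vector.

```python
# Minimal sketch of cross-vector intelligence sharing. Hypothetical design
# illustration, not a product API.

from dataclasses import dataclass

@dataclass(frozen=True)
class Indicator:
    kind: str    # e.g., "domain", "file_hash", "ip"
    value: str
    source: str  # the vector that first observed it

class SharedIntelStore:
    """A single store that every enforcement point reads from and writes to."""

    def __init__(self) -> None:
        self._indicators: set[Indicator] = set()

    def publish(self, indicator: Indicator) -> None:
        self._indicators.add(indicator)

    def is_known_bad(self, kind: str, value: str) -> bool:
        return any(i.kind == kind and i.value == value for i in self._indicators)

store = SharedIntelStore()

# The cloud sensor learns about a malicious domain...
store.publish(Indicator(kind="domain", value="evil.example", source="cloud"))

# ...and the email gateway, mobile agent, and network gateway can all block
# it immediately, without having to re-learn it themselves.
print(store.is_known_bad("domain", "evil.example"))  # True
```

The design point is simply that detection and enforcement share one source of truth, rather than each vector maintaining its own.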

So, you need a comprehensive approach, not a point solution approach. The point solution approach is long outdated.

We are facing a huge security skills shortage, and there’s no end in sight to it. As a result, we have fewer people to manage platforms. In light of that, we recommend a consolidated platform. You want to be able to manage it all from one place; you don’t want a variety of different management interfaces. We don’t have enough people to run all that.

And even if we did have enough people, it’d be 50 people running 50 interfaces that have no commonalities between them. And again, that does not lend itself to preventing threats in real-time. Rather, it just creates more scaling problems.

And the final piece is this: we recognize that there is an ecosystem out there that has to cooperate with other known ecosystems, so we emphasize a collaborative approach. We recommend that you look at ways, through APIs or even open communication with other legitimately developed risk mitigation tools, to bring a collaborative approach to managing your environment; a simplified sketch of that kind of exchange follows below.
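
As a deliberately simplified sketch of API-level cooperation, here’s a snippet that pushes a newly observed indicator to a partner feed. The endpoint URL and payload fields are hypothetical placeholders; real-world exchanges often use open standards such as STIX/TAXII.

```python
# Minimal sketch of API-based threat-intel sharing. The endpoint URL and
# payload schema are hypothetical placeholders, not a real service.
import json
import urllib.request

INTEL_API = "https://intel.example.com/v1/indicators"  # hypothetical endpoint

def share_indicator(kind: str, value: str, confidence: float) -> None:
    """Push a newly observed indicator to a partner's threat-intel feed."""
    payload = json.dumps({
        "kind": kind,              # e.g., "ip", "domain", "file_hash"
        "value": value,
        "confidence": confidence,  # 0.0-1.0: how sure we are it's malicious
    }).encode("utf-8")
    request = urllib.request.Request(
        INTEL_API,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request) as response:  # real network call
        print("shared, status:", response.status)

# Example usage (203.0.113.7 is a reserved documentation address):
# share_indicator("ip", "203.0.113.7", confidence=0.9)
```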

Here’s the TL;DR version…

So, to summarize, introducing next-level cyber security to your organization is really about three things. 1) A comprehensive solution that covers all vectors, learns across all vectors, and applies AI, learning, and engines to all vectors. 2) A consolidated solution, because we will never have enough people to run all of it; there’s still a human component required at some level, so it has to be unified. 3) And finally, the ability to collaborate within the ecosystem: open APIs and the ability to participate in the different kinds of cooperation that happen around real-time threat intelligence. That’s, by and large, the strategy that we recommend when you look for your next cyber security solution.