EXECUTIVE SUMMARY:

In this highly informative and engaging interview, Check Point expert Sergey Shykevich spills the tea on the trends that he and his threat intelligence team are currently seeing. You’ll get insights into what’s happening with AI and malware, find out how nation-state hackers could manipulate generative AI algorithms, and get a broader sense of what to keep an eye on as we move into 2024.

Plus, Sergey tackles the intellectual brain-teaser of whether or not AI can express creativity (and the implications for humans). Let’s dive right in:

To help our audience get to know you, would you like to share a bit about your background in threat intelligence?

I’ve been in threat intelligence for 15 years. I spent 10 years in military intelligence (various positions, mostly related to cyberspace intelligence) and have been in the private sector for around 6 years.

These last two years have been at Check Point, where I serve as the Threat Intelligence Group Manager for Check Point Research.

Would you like to share a bit about the cyber trends that you’ve seen across this year, especially as they relate to AI?

Yes. We have seen several trends; I would say that there are three or four main ones.

  • One trend we see, which is still somewhat in flux, relates to the development of the ransomware ecosystem. The ecosystem and the threat actors are increasingly operating like nation-state actors, as they’re becoming very sophisticated.

    To illustrate my point, they now use multi-operating-system malware. What does that mean? It means that they not only focus on Windows, but that they’re increasingly focused on Linux as well.

    This matters because, for many organizations, critical servers are Linux servers. In many cases, the impact of disrupting these servers is much bigger than, say, disrupting the activity of 100 Windows laptops, for instance.

    So, that’s a huge part of what’s happening in terms of ransomware. In addition, we’ve also seen mega ransomware events this year, like the MOVEit hack and use of it for a large-scale supply chain attack.

  • Another trend that we’re seeing is the resurgence of USB infections. When it comes to USBs, many consider them an old technology, and a lot of people are no longer using them. USB-based infections go back to 2010, or even earlier – with Stuxnet in Iran or the well-known Conficker malware. But what we’re seeing now is a new wave of USB infections, propagated by nation-state actors, like China and Russia, and by everyday cyber criminals.

    Why do we think that we’re seeing a resurgence of USB-based threats? We think that the barriers for hackers in other areas – such as network security and email security – have become much higher. So hackers are trying different methods, like USB infections.
  • We’re also seeing a resurgence of DDoS attacks, mostly from hacktivist groups, which are trying to disrupt the functionality of websites.
  • And of course, our team sees all of the threats related to AI. The AI-related threats that we observe are mostly related to phishing, impersonation and deepfakes.

    We do see AI used in malware development, but in terms of AI and malware, we aren’t seeing extremely sophisticated threats, or threats that are “better” or more sophisticated than what a good code developer could create.

    In contrast, in relation to phishing and deepfakes, AI allows for a level of sophistication that’s unprecedented. For example, AI allows cyber criminals who don’t know a particular spoken language to craft perfect phishing emails in that language, making the emails sound like they were written by native speakers.

    I would say that AI will be able to take malware to a new level in the near future, but we’re not there yet.

How can AI be leveraged to counter some of the threats that we’re seeing and that we’ll see into the future?

On the phishing and impersonation side, I think AI is being used, and will mostly be used, to identify specific patterns or anomalies within email content, which is no easy job for these tools. Most of the phishing content that’s created by AI is pretty good, especially now that the data is pulled directly from the internet (e.g., in the latest version of ChatGPT). AI-based solutions can much better identify suspicious attachments and links, and can prevent attacks in their initial stages.

But of course, the best way to counter AI-based phishing threats, as they exist right now, is still to avoid clicking on links and attachments.

Most cyber criminals aim to get people to take further action – to fill out a form, or to engage in some other activity that helps them. I think that a big thing AI can do is identify where a specific phishing email leads, or what is attached to it.

Of course, there’s also the possibility of using AI and ML to look at the emails a person receives and judge whether or not they look like phishing emails, based on the typical emails that the person receives day-to-day. That’s another possible use case for AI, but I think that AI is more often used for what I mentioned before: phishing attack assessment.
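To make the idea concrete, below is a minimal sketch, in Python, of the kind of link, attachment and “typical traffic” checks described above. The Email structure, thresholds and keyword lists here are hypothetical, and production AI-based solutions rely on models trained on large volumes of mail rather than hand-written rules:

```python
# Hypothetical illustration only: a rule-based stand-in for the pattern and
# anomaly checks an AI-based email security product would learn from data.
from dataclasses import dataclass, field
from urllib.parse import urlparse

SUSPICIOUS_EXTENSIONS = (".exe", ".js", ".vbs", ".iso", ".lnk")  # illustrative list
URGENT_PHRASES = ("verify your account", "password expires", "act now")

@dataclass
class Email:
    sender: str
    subject: str
    body: str
    links: list[str] = field(default_factory=list)
    attachments: list[str] = field(default_factory=list)

def phishing_score(mail: Email, known_senders: set[str]) -> float:
    """Return a rough 0..1 risk score for a single email."""
    score = 0.0
    # 1) Suspicious attachments (executables, scripts, disk images).
    if any(name.lower().endswith(SUSPICIOUS_EXTENSIONS) for name in mail.attachments):
        score += 0.4
    # 2) Links whose domain does not match the sender's domain.
    sender_domain = mail.sender.split("@")[-1].lower()
    for link in mail.links:
        host = (urlparse(link).hostname or "").lower()
        if host and not host.endswith(sender_domain):
            score += 0.3
            break
    # 3) Urgency wording typical of credential-phishing lures.
    text = (mail.subject + " " + mail.body).lower()
    if any(phrase in text for phrase in URGENT_PHRASES):
        score += 0.2
    # 4) Anomaly signal: sender never seen in this mailbox's history.
    if mail.sender.lower() not in known_senders:
        score += 0.1
    return min(score, 1.0)

# Usage: an email that deviates from the recipient's normal traffic scores high.
history = {"colleague@example.com", "newsletter@vendor.example"}
mail = Email(
    sender="it-support@corp.example.com",
    subject="Password expires today",
    body="Act now and verify your account at the link below.",
    links=["http://examp1e-login.com/reset"],
    attachments=["invoice.exe"],
)
print(phishing_score(mail, history))  # 1.0 -> would be quarantined for review
```

A real system would learn the per-mailbox baseline and the weights of each signal from data instead of hard-coding them, which is exactly where the AI comes in.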

Could our cyber crime-fighting AI be turned against us?

In theory, yes. I think that this is more of an issue for the big, well-known AI models like ChatGPT — there are a lot of theoretical concerns about how these companies protect their models (or fail to).

There are really two main concerns here. The first is whether unauthorized people will have access to our search queries and what we submit. The second, which worries people even more, is manipulation: someone could manipulate a model to provide very biased coverage of a political issue, making its answers one-sided. There are very significant concerns in this regard.

And I think everyone who develops AI or generative AI models that will be widely used needs to protect them from hacking and the like.

We haven’t seen such examples and I don’t have proof that this is happening, but I would assume that big nation-state actors, like Russia and China, are exploring methods for manipulating AI algorithms.

If I were on their side, I would investigate how to do this, because by hacking and altering models, you could influence hundreds of millions of people.

We should definitely think more about how we protect generative AI, from data integrity to user privacy and the rest.

Do you think that AI brings us closer to understanding human intelligence? Can AI be creative?

It’s an interesting set of questions. ChatGPT and Bing now offer a variety of models that can be used. Some of these are defined as ‘strict’ models, while others are defined as ‘creative’ models.

I am not sure that it really helps us understand human intelligence. I think it may put more questions before us than answers, because, as I mentioned previously, 99.999% of people who use AI engines don’t really understand how they work.

In short, AI raises more questions and concerns than it provides understanding of human intelligence and human beings.

For more AI insights from Sergey Shykevich, click here. Lastly, to receive timely cyber security insights and cutting-edge analyses, please sign up for the cybertalk.org newsletter.