Jonathan Fischbein is the Chief Information Security Officer for cyber security vendor Check Point Software. He has more than 25 years’ experience in high-tech security markets, shaping security strategies, and in developing ad-hoc solutions to help large corporations mitigate security threats.

In this interview, Check Point’s Global CISO Jonathan Fischbein shares a wealth of insights into how your organization can thoughtfully embrace ChatGPT. Discover how to realize untapped potential while maintaining vigilance when it comes to risk management.

Drawing on a deep mastery of his craft, Fischbein illuminates the intersection of ChatGPT, AI, risk and cyber security. Transform your approach in order to achieve better outcomes. Find out how.

1. Broadly speaking, what should CISOs know about ChatGPT and business security? 

The topic of AI used to be an experimental, niche area that only pertained to very specific sectors, which used AI agents or methodologies to improve the products they worked on. But now, Pandora’s box is open and accessible to everyone.

It means that we’ve released a toy — that can be used for good and not-so-good stuff — to the masses for free. Everyone wants to play with it and even try its limits. The good guys are aiming to leverage it for good use, while bad guys are already using it to develop a new generation of super sophisticated cyber campaigns.

                    …

Let’s look at the mandate for CISOs or BISOs, who need to allow the business to flourish, innovate and move forward. If security leaders do not adopt the available tools (like ChatGPT), businesses will languish and fall behind.

Your competitors want to use any available technique or tool in order to improve their business, and that’s completely kosher. The problem is that with every new “toy” (in this case ChatGPT, with AI behind the scenes), adoption is far more disruptive than any of us anticipated.

CISOs and those who are in charge of risk management all see the risk, but we don’t have enough tools to control it. For example, an employee could take an Excel file and input it into ChatGPT, then ask, “What do you think about this Excel file? Can you run a data analysis on it?” It’s the same with a database.

We’ve all seen add-on tools for ChatGPT that enable API access. Let’s say that I connect to ChatGPT through an API and say, “ChatGPT, allow me to input this data and give me your opinion…” And we know that the input data can contain Personally Identifiable Information (PII), customer data, and very critical assets that shouldn’t be outside of certain locations.
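To make that exposure concrete, here is a minimal sketch of what such an integration often looks like, in Python. The endpoint and model are OpenAI’s public chat completions API; the file name and prompt are hypothetical. The point is that everything in the file leaves the organization’s perimeter the moment the request is sent.

```python
# Hypothetical illustration: an employee script that ships a local file to an
# external LLM API for "analysis". Nothing here is malicious, but every byte
# of the file leaves the corporate perimeter.
import os
import requests

API_URL = "https://api.openai.com/v1/chat/completions"
API_KEY = os.environ["OPENAI_API_KEY"]

# Hypothetical spreadsheet export; it may well contain customer PII.
with open("q3_customer_accounts.csv") as f:
    file_contents = f.read()

response = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "gpt-3.5-turbo",
        "messages": [
            {
                "role": "user",
                "content": f"Can you run a data analysis on this file?\n{file_contents}",
            }
        ],
    },
    timeout=60,
)
print(response.json()["choices"][0]["message"]["content"])
```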

I think that we need to use the same conservative approach that we apply to other aspects of security: be careful, and teach our people what they can and cannot do when using ChatGPT or other AI solutions that are out there.

2. How can ChatGPT actively support security professionals in their day-to-day tasks? 

Improved policies. We can ask ChatGPT to help us improve our security policies, and it’s amazing. It’s just staggering. For example, I could ask, “Please give me an updated policy for data governance” and boom, less than five seconds later, I have a complete, new, updated data governance policy.

Instead of having contractors invest hours in writing and polishing policies, this AI engine gives you the solution. We understand that ChatGPT didn’t reinvent the wheel; the tool bases its answers on a massive amount of existing information. In other words, it condenses what others are using as best practices and reproduces the information in a very nice, organized way.

Communications. I can ask ChatGPT, “Please help me write a communication to our users, explaining X, Y and Z.” I can even ask ChatGPT to explain it in a way that a 12-year-old would understand, and it will do it. Or if I want to explain the latest phishing campaign to my mother, ChatGPT will put the right language together. You get the idea.

This kind of stuff — these areas where ChatGPT can provide us with some help — doesn’t present any risk whatsoever.

3. Where would you suggest that cyber security professionals think twice before using ChatGPT?

At the moment, I think that there aren’t enough safeguards within ChatGPT to control the risks.

We’re already seeing development teams building APIs that let these AI solutions assist us as third-party aids. For instance, my rule base, for the access list, has more than 5,000 rules, and we all agree that not all of those rules are needed. So I could connect my rule base, which is extremely confidential, to ChatGPT and ask it to analyze it. But should I?

I don’t have enough trust in these solutions to share such confidential information. In security, it always comes back to trust. Trust is the #1 challenge. If we don’t have complete trust, it’s difficult for us to move to the second or third stages with this technology.

4. What are some specific risks associated with using language models like ChatGPT, and key things to keep in mind?

Data leaks: Sensitive information such as confidential business information, code, customer data, and personal information can be leaked.

Privacy breaches: Personal information can be disclosed if language models like ChatGPT are not properly configured to protect privacy. This can result in violations of privacy laws and regulations, and can damage the reputation of a company.

Bias and fairness: Language models like ChatGPT can be trained on biased data, which can result in biased outcomes. This can impact decision-making and lead to discrimination and other unfair outcomes.

Misinformation: Language models like ChatGPT can produce incorrect or misleading information, which can have negative consequences, such as spreading false information, making incorrect decisions, and damaging a company’s reputation.

When using ChatGPT or other similar tools, make sure to use data anonymization: remove or mask sensitive information such as confidential business information, confidential code, customer data, employee details, addresses, and other personal information. This will reduce the risk of data leaks and privacy breaches.
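As a minimal sketch of what that masking step could look like in practice, here is a small Python routine that redacts common PII patterns before any text is handed to an external AI service. The patterns below are illustrative assumptions, not an exhaustive rule set; a production setup would rely on a dedicated DLP or anonymization tool.

```python
# Minimal sketch: redact common PII patterns before sending text to an
# external AI service. The patterns are illustrative, not exhaustive.
import re

PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def anonymize(text: str) -> str:
    """Replace each recognized PII match with a typed placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

# Usage: only the anonymized text ever leaves the organization.
raw = "Contact Jane at jane.doe@example.com or +1 555 010 0199."
print(anonymize(raw))  # Contact Jane at [EMAIL REDACTED] or [PHONE REDACTED].
```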

5. What about the policy and regulatory aspects of ChatGPT and AI? 

I think that these AI tools should be regulated and monitored. When it comes to these AI tools, nobody knows what is behind them, who is behind them, or where the data is going. This reminds me a little bit of the Cambridge Analytica case.

In that situation, the problem was not that data collectors were taking the data and redistributing it; the problem was that the data collectors knew they were using it to influence politics.

This was a big surprise for some, and it was illegal. I think that it would not be a surprise if some of these AI solutions were used to gain an advantage at a later point in time.

6. How are cyber attackers using ChatGPT? 

We’re seeing cyber attackers improve their phishing techniques using ChatGPT. The composition is much more thorough and much more believable than before. We are seeing malware coming from ChatGPT; people can ask it to write Python scripts, and so on. It’s even possible to develop a zero-day via ChatGPT. None of this is good. There are some safeguards and controls within ChatGPT, but at the moment, they can be easily bypassed.

7. Will OpenAI, the company behind ChatGPT, do anything about that?

Yeah. Definitely. Because they understand the risks. Even if ChatGPT is a net good 90% of the time, they still need to make every effort to mitigate known vectors of misuse of the platform.

There have been some good improvements and I think it’s an exciting topic.

8. Moving from ChatGPT to AI: How is AI being infused into cyber security today?

Within Check Point, we have been using AI for a few years to improve our threat prevention mechanisms and to achieve more accurate verdicts. We are using several AI engines, and AI in general is being evaluated for use in various areas within R&D.

Also, AI is being used within other departments and roles. Think of analysts who are taking care of campaigns, or assistance centers using third-party solutions that already rely on AI.

 …

In general, while organizations have found that AI is sexy, that doesn’t mean we need to use it everywhere. We need to be careful, and use it when it’s relevant, not when it’s irrelevant.

Here’s an example of AI overuse. Over the last five years, we have seen 60-70% of high-tech firms adopt AI-powered chatbots for customer support. Usually, when you need customer support, you need something that is not on the chatbot’s menu, and when you need real help, it’s more than challenging to get an actual person on the phone to assist. In my opinion, this is the bad side of the technology.

9. A few weeks ago, global leaders signed a letter saying that there should be a six-month pause on AI development due to ‘profound risks to our society.’ Your thoughts?

I think that not a single developer of AI stopped working. I think the opposite. Developers are taking their work more seriously.

But that letter, in my opinion, is a wake-up call saying, ‘Guys, we need to be careful with this technology.’ I mean, we all love it, it’s cool, but it’s also a Pandora’s box.

It’s started and now we need to see where it’s going. Hopefully, it will not endanger lives and no one will connect anything to AI without thinking through all of the implications.

10. Further insights for business leaders on the topics of ChatGPT and/or AI?

For business leaders: adopt a baby-steps approach. Don’t rush in without first dictating policy and safeguards through a risk assessment process.

  • Adopt: Incorporate it into existing policies.
  • Communicate: Share do’s and don’ts with all employees via Security Awareness programs.
  • Monitor: Use the SOC to generate alerts when internal users expose classified data to AI platforms.
  • Enforce: Ensure that PII and critical data are not exposed to AI platforms that have not earned the required level of trust for the data’s sensitivity. Leverage enforcement mechanisms such as DLP and WAAP (Web Application and API Protection) security capabilities; a rough sketch of this kind of check follows this list.
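As a rough illustration of the “Monitor” and “Enforce” items above, here is a hypothetical Python sketch of the kind of rule a SOC or DLP engine might apply to outbound traffic: flag requests bound for known AI platforms whose payload matches sensitive-data patterns. The domain list and patterns are assumptions for illustration; real deployments implement this in a DLP or WAAP product, not in application code.

```python
# Hypothetical sketch of a DLP-style check: flag outbound requests to known
# AI platforms whose payload looks like it contains sensitive data.
# In practice this logic lives in a DLP or WAAP product, not in app code.
import re
from urllib.parse import urlparse

AI_PLATFORM_DOMAINS = {"api.openai.com", "chat.openai.com"}  # illustrative list

SENSITIVE_PATTERNS = [
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),              # email addresses
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),                 # US SSN-like numbers
    re.compile(r"(?i)\b(confidential|internal only)\b"),  # classification markers
]

def should_alert(url: str, payload: str) -> bool:
    """True if the request targets an AI platform and carries sensitive data."""
    host = urlparse(url).hostname or ""
    if host not in AI_PLATFORM_DOMAINS:
        return False
    return any(p.search(payload) for p in SENSITIVE_PATTERNS)

# Usage: the SOC alerts (or the gateway blocks) when this returns True.
print(should_alert("https://api.openai.com/v1/chat/completions",
                   "Customer SSN 123-45-6789, marked CONFIDENTIAL"))  # True
```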

We are sure to hear more about ChatGPT in the coming weeks and months. Stay tuned for further developments on the topic.