By Hendrik De Bruin, Security Engineer, Check Point Software Technologies.

As you know, 2023 was the year in which AI took off. Organizations quickly adopted AI-based products to stay competitive, increase productivity and improve profitability.

However, much of this rapid adoption, which often occurred unofficially, has left organizations to contend with serious cyber security vulnerabilities – and CISOs are exposed.

Secret and confidential information leakage

There have been instances in the past where engineers and developers uploaded proprietary source code to ChatGPT in order to evaluate and improve the code.

This seemingly small oversight could prove extremely costly if a competitor, or anyone with malicious intent, were to illicitly gain access to the data retained by ChatGPT’s underlying systems.

How can CISOs protect organizations from AI-related risks?

The CISO role is ever evolving. It appears that artificial intelligence will become another area of responsibility for CISOs globally.

Whether you are the CISO for a Fortune 500 company or a small business, chances are that the organization you represent has already integrated AI into a number of its day-to-day activities.

If not adopted in a controlled and responsible manner, AI does pose a significant potential risk to organizations. The following recommendations may enable CISOs to better manage AI-based risks:

Evaluation of the current situation. Before any risk can be mitigated, it is critical to first have a thorough understanding of the risk. You need to understand how likely a given risk is to manifest, so that appropriate controls can be put in place.

To better understand the risk that artificial intelligence poses to your information security, the following questions should be answered:

  • What AI systems are currently in use? There may be some official use cases and some unofficial (shadow IT) cases where the organization is making use of artificial intelligence.
  • How are these AI systems being used? For what purposes are these systems being used, and does the mere usage of these systems pose a risk to the organization or its reputation?
  • What information is used in conjunction with AI systems? Considering the risks involved in managing and processing personally identifiable information (PII), it is critical to know how that information is being used and the implications thereof. The same pertains to confidential and classified information.

Asking the above questions should allow you to identify the most obvious risks posed to the organization: regulatory and compliance risks, data privacy risks, data leakage risks, and adversarial machine learning risks.
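The answers to these questions can be captured in a simple risk register. The sketch below is a minimal, hypothetical illustration (all names are assumptions, not part of any product) of how one record per AI use case could carry the three questions above and derive the obvious risk flags:

```python
from dataclasses import dataclass, field

# Hypothetical risk-register entry capturing the three questions above:
# which AI system is in use, how it is used, and what data it touches.
@dataclass
class AIUsageRecord:
    system: str                  # e.g. "ChatGPT"
    sanctioned: bool             # official use case, or shadow IT?
    purpose: str                 # how / why the system is used
    data_categories: list = field(default_factory=list)  # e.g. ["PII", "source code"]

    def risk_flags(self) -> list:
        """Derive the obvious risk categories from a single record."""
        flags = []
        if not self.sanctioned:
            flags.append("shadow IT")
        if "PII" in self.data_categories:
            flags.append("data privacy / regulatory")
        if "source code" in self.data_categories or "confidential" in self.data_categories:
            flags.append("data leakage")
        return flags

# Example: an unsanctioned use of a public chatbot for reviewing proprietary code.
record = AIUsageRecord(
    system="ChatGPT",
    sanctioned=False,
    purpose="code review",
    data_categories=["source code"],
)
print(record.risk_flags())  # ['shadow IT', 'data leakage']
```

Even a spreadsheet serves the same purpose; the point is that each AI use case is inventoried once and its risks derived consistently.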

Define and implement administrative controls

Once a thorough understanding of the organization’s current artificial intelligence landscape has been obtained and the risks have been identified, the next step is to produce policies and procedures that adequately protect the organization against the risks identified during the evaluation stage.

These policies should deal with all aspects of AI usage within the organization. They should also go hand-in-hand with awareness training, ensuring that employees internalize the policies.

Once implemented, adherence to these policies should also be monitored.

Define and implement technical controls

After policies and procedures have been developed and applied, technical controls must be deployed as a means of policy and procedure enforcement.
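As a toy illustration of such enforcement, the sketch below shows an outbound-prompt filter that blocks obvious policy violations before text reaches an external AI service. The patterns and names are hypothetical examples, not a real product’s rules; production data loss prevention tools are far more sophisticated:

```python
import re

# Hypothetical patterns for data that policy forbids sending to external AI tools.
BLOCKED_PATTERNS = {
    "API key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "internal marker": re.compile(r"\bCONFIDENTIAL\b", re.IGNORECASE),
}

def check_prompt(prompt: str) -> list:
    """Return the list of policy violations found in an outbound prompt."""
    return [label for label, pattern in BLOCKED_PATTERNS.items() if pattern.search(prompt)]

# A prompt containing an internal marker and a credential would be blocked.
violations = check_prompt("Please review this CONFIDENTIAL function, key-aaaabbbbccccdddd")
print(violations)  # ['API key', 'internal marker']
```

A gateway applying checks like this turns a written policy ("no confidential data in external AI tools") into something that is actually enforced rather than merely trusted.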

Arguably, “Defense-in-Depth,” enforced by solutions that themselves leverage artificial intelligence and machine learning, is your best bet against unknown and increasingly sophisticated threats, such as those facing organizations today.

The human element

In the age of artificial intelligence, the human element may be the most critical “ingredient” in mitigating risks and keeping the organization safe.

Critical thinking is a human superpower that should be employed to differentiate fact from fiction.

A human-in-the-loop (HITL) approach should be considered. This approach allows AI to make tactical decisions, perhaps even some strategic ones, while humans retain managerial decision-making power over the processes and activities related to these systems. This ensures that humans are always in the loop and available to apply critical thinking, good judgement and oversight.
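The core of the HITL idea can be sketched in a few lines. In this hypothetical example (the threshold and function names are assumptions for illustration), the AI acts autonomously on low-impact decisions, while anything above a risk threshold is escalated to a human reviewer:

```python
# Minimal human-in-the-loop sketch: the AI may act autonomously on low-impact
# decisions, but anything above a risk threshold requires human approval.

RISK_THRESHOLD = 0.5  # hypothetical cut-off between tactical and managerial decisions

def decide(action: str, risk_score: float, human_approve) -> str:
    """Let AI act on tactical decisions; escalate risky ones to a human reviewer."""
    if risk_score < RISK_THRESHOLD:
        return f"auto-approved: {action}"
    if human_approve(action):
        return f"human-approved: {action}"
    return f"rejected: {action}"

# A low-risk action proceeds without intervention; a high-risk one is escalated.
print(decide("rotate log files", 0.1, human_approve=lambda a: True))
print(decide("delete production database", 0.9, human_approve=lambda a: False))
```

The design choice here is that the human gate sits on the decision path itself, not in an after-the-fact audit log, so oversight cannot be bypassed by the automated system.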

What does the future hold for AI and cyber security?

During 2024 and over the next few years, I’m certain that adoption of AI will continue to grow on the part of threat actors and defenders alike.

These are “…engines that learn and improve themselves against the kind of attacks we don’t yet know will happen,” says Check Point’s CTO, Dr. Dorit Dor.

It is clear that artificial intelligence is here to stay. Adoption is growing at a phenomenal rate on the part of attackers and defenders alike; however, it is end-users and their adoption of AI and generative AI that may pose the biggest risk to organizations and their secret and confidential information.