EXECUTIVE SUMMARY:

Although generative AI chatbots and large language models (LLMs) can be a double-edged sword when it comes to corporate risk, they can also measurably advance cyber security initiatives in unexpectedly practical ways. Read on to learn more.

ChatGPT and other large language models have been heralded for the capabilities, efficiencies and productivity gains they offer across a variety of industries, sectors and job functions.

While introducing ChatGPT or LLMs into an enterprise ecosystem can present risks, the tools can also boost efficiency, productivity and job satisfaction among cyber security staff.

As a cyber security professional, the better you understand the new technology, the better you can put it to work. In this article, we break down how you can leverage ChatGPT and LLMs to advance your cyber security. Let’s get started.

5 ways ChatGPT and LLMs can advance cyber security

1. Vulnerability scanning and filtering. Experts and expert groups, from global CISOs to the Cloud Security Alliance, contend that generative AI models can significantly enhance scanning and filtering for cyber security vulnerabilities.

In a recent report, the Cloud Security Alliance (CSA) demonstrated that OpenAI’s Codex API can effectively scan for vulnerabilities in programming languages like C, C#, Java and JavaScript.

“We can anticipate that LLMs, like those in the Codex family, will become a standard component of future vulnerability scanners,” the researchers wrote in the report.

For instance, a scanner could be built to identify and flag insecure code patterns in assorted languages, enabling developers to address security issues before they escalate into critical risks.
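
As a minimal sketch, assuming the official openai Python SDK and an OPENAI_API_KEY environment variable, such a scanner might look like the following; the model name, prompt wording and sample snippet are illustrative placeholders, not a production design.

# Minimal sketch of an LLM-assisted code scanner (illustrative, not production-ready).
# Assumes the official openai Python SDK (v1+) and an OPENAI_API_KEY in the environment;
# the model name and prompt wording are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a code security reviewer. List any insecure patterns in the "
    "following code (e.g., SQL injection, hard-coded secrets, unsafe calls), "
    "citing the relevant line and a suggested fix. If none, say 'No issues found.'"
)

def scan_snippet(source_code: str, language: str = "python") -> str:
    """Ask the model to flag insecure patterns in a single code snippet."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": f"Language: {language}\n\n{source_code}"},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    # Deliberately vulnerable one-liner (string-built SQL) to exercise the scanner.
    snippet = "query = \"SELECT * FROM users WHERE name = '\" + user_input + \"'\""
    print(scan_snippet(snippet))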

When it comes to filtering, AI models can surface threat identifiers that might otherwise go unnoticed by human security staff.

2. Reversing add-ons, analyzing APIs of PE files. Artificial intelligence and large language models can be employed to develop rules and to reverse engineer popular add-ons for reverse engineering frameworks, like IDA and Ghidra. “If you’re specific in the ask of what you need and compare it against MITRE ATT&CK tactics, you can then take the result offline and make it better to use as a defense,” says Matt Fulmer, a cyber intelligence engineering manager.

LLMs can also analyze the APIs imported by portable executable (PE) files and tell cyber security staff what those files may be used for. In turn, this can limit the amount of time that security researchers spend hunting through PE files and analyzing the API calls within them; the sketch below shows what the extraction step might look like.
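
As a rough illustration, the following sketch uses the open-source pefile library to pull the imported API list out of a PE file and then builds a summarization prompt from it; the file path is illustrative, and the final step assumes a chat-completion call like the one sketched in section 1.

# Minimal sketch: extract imported APIs from a PE file with the open-source
# pefile library, then build a prompt asking an LLM what the file may do.
import pefile

def list_imported_apis(path: str) -> list[str]:
    """Return 'dll!function' strings for every named import in the PE file."""
    pe = pefile.PE(path)
    apis = []
    # DIRECTORY_ENTRY_IMPORT only exists when the file has an import table.
    for entry in getattr(pe, "DIRECTORY_ENTRY_IMPORT", []):
        dll = entry.dll.decode(errors="replace")
        for imp in entry.imports:
            if imp.name:  # imports by ordinal have no name
                apis.append(f"{dll}!{imp.name.decode(errors='replace')}")
    return apis

apis = list_imported_apis("sample.exe")  # illustrative path
prompt = (
    "Given these Windows API imports, summarize what this executable is "
    "likely capable of (networking, persistence, injection, etc.):\n"
    + "\n".join(apis)
)
# Hand `prompt` to a chat model, e.g., via the client from the earlier sketch.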

3. Threat hunting queries. Cyber security staff can enhance operational efficiency and expedite response times by using ChatGPT and other large language models to develop threat hunting queries, says the Cloud Security Alliance.

By generating rules and queries for malware research and detection tools, such as YARA, ChatGPT enables quick identification and mitigation of potential threats. As a result, staff can spend more time on higher-priority or higher-payoff cyber security tasks. A prompting sketch follows below.
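
The sketch below shows one way to ask a model for a draft YARA rule; the indicator strings and rule name are hypothetical, and any generated rule should be compile-checked and reviewed by an analyst before deployment.

# Minimal sketch: ask an LLM to draft a YARA rule from observed indicators.
# The indicator strings and family name below are hypothetical examples.
indicators = ["cmd.exe /c powershell -enc", "schtasks /create"]
family = "SuspiciousDownloader"  # hypothetical rule name

prompt = (
    f"Write a YARA rule named {family} that matches files containing all of "
    "the following strings, with sensible metadata and condition sections:\n"
    + "\n".join(indicators)
)
print(prompt)  # send to a chat model, as in the earlier scanner sketch

# Always validate the draft before use, e.g., with the yara-python module:
# import yara
# yara.compile(source=draft_rule)  # raises an error on a malformed rule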

This capability helps organizations maintain a strong cyber security posture in a constantly evolving threat landscape, and the resulting rules can be tailored to specific organizational needs and common industry threats.

4. Detecting generative AI text in attacks. Everyone knows that large language models can generate text, but did you know that they may soon be able to detect and watermark AI-generated text? This capability is likely to be built into email protection software in the future.

The ability to identify AI-generated text means that teams will be able to more easily spot phishing emails, polymorphic code and other red flags.

5. Security code generation and transfer. In some cases, large language models like ChatGPT can be used both to generate and to translate cyber security code. Consider the following example: a phishing campaign successfully targets several employees within a company, potentially exposing their credentials. Cyber security staff may know which employees opened the phishing email, but it may remain unclear whether malicious, credential-stealing code was actually executed.

To investigate, a Microsoft 365 Defender Advanced Hunting query can zero in on the 10 most recent login events by email recipients after the malicious emails were opened, helping tag suspicious login activity tied to the potentially compromised credentials.

In this instance, ChatGPT can supply a Microsoft 365 Defender hunting query that checks for login attempts against the compromised email accounts, helping block attackers from the system and revealing whether users need to change their passwords. Effectively, ChatGPT can help reduce the time-to-action during cyber incident response; one way to phrase the request appears below.
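
As a minimal sketch, the prompt below asks the model for that hunting query; the account names are hypothetical, and the commented example output uses table and column names that should be verified against the current Advanced Hunting schema before anything runs in production.

# Minimal sketch: prompt an LLM for a Microsoft 365 Defender Advanced Hunting
# query. The account names are hypothetical; always have an analyst review the
# generated query before running it against production data.
compromised = ["alice@example.com", "bob@example.com"]

prompt = (
    "Write a Microsoft 365 Defender Advanced Hunting (KQL) query that returns "
    "the 10 most recent logon events for these accounts, newest first: "
    + ", ".join(compromised)
)
print(prompt)
# A returned query might resemble the following (illustrative; verify the
# table and column names against the current Advanced Hunting schema):
#   IdentityLogonEvents
#   | where AccountUpn in ("alice@example.com", "bob@example.com")
#   | top 10 by Timestamp desc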

Alternatively, in the same scenario, security staff may find a suitable Microsoft 365 Defender hunting query, only to realize that their in-house tooling does not support the Kusto Query Language (KQL) that Defender uses. Rather than searching for an equivalent example in the needed language, ChatGPT can assist with a programming language style transfer, converting the query into the team’s query language of choice, as in the sketch below.
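
Here is a short sketch of such a style-transfer prompt; the source query is the illustrative one from above, and Splunk SPL is chosen purely as an example target language.

# Minimal sketch: ask an LLM to translate a KQL hunting query into another
# query language (Splunk SPL is used here purely as an example target).
kql_query = (
    'IdentityLogonEvents '
    '| where AccountUpn in ("alice@example.com", "bob@example.com") '
    '| top 10 by Timestamp desc'
)

prompt = (
    "Translate this Microsoft 365 Defender KQL query into an equivalent "
    "Splunk SPL search, preserving the filtering and sort order:\n" + kql_query
)
print(prompt)  # send to a chat model as before, then review the translation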

For more insights into how ChatGPT, large language models and AI can advance your cyber security, please see CyberTalk.org’s past coverage. Don’t miss out on the latest trends and insights — please subscribe to the CyberTalk.org newsletter.