By Rachel Teitz, Technical Communications.

ChatGPT, the chatbot launched by OpenAI in November 2022, is the hot new thing that everyone is talking about. As ChatGPT itself will tell you, it is “an AI language model capable of generating human-like text based on the input it is given.” Ask it a question, and it will answer. The many uses ChatGPT has been put to so far include writing letters, poetry, term papers, technical documentation…and malware.

Many Russian-speaking hackers have a limited command of written English, but with ChatGPT they can craft polished, well-written phishing emails. There is also evidence that hackers are using ChatGPT to write malicious code. This means that even attackers who lack technical sophistication can produce harmful tools and conduct attack campaigns.

But before we hit the panic button over this new cyber security threat, it is worth remembering that hackers already have a way to launch campaigns without the necessary technical knowledge: Malware-as-a-Service. On multiple underground forums, you can purchase everything from ready-made tools to basic instructions and advice on how to carry out an attack. So why should we worry about ChatGPT? How is malicious code created by ChatGPT worse than what is for sale on the dark web?

First, ChatGPT is free and accessible to all. Prices on the underground forums vary according to what is for sale, but the cyber criminals who make their tools and knowledge available are looking to turn a profit.

Another point in ChatGPT’s favor is that it is easy to use and is responsive – you can have an actual dialogue with ChatGPT. Tell it exactly what you need, and it will tailor its answers accordingly.
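To make that dialogue concrete, here is a minimal sketch of a multi-turn exchange, assuming the openai Python package (pre-1.0 interface); the model name and prompts are illustrative placeholders, not taken from any real campaign. Each request resends the accumulated message history, which is what lets the model tailor its follow-up answers.

```python
# Minimal sketch of a multi-turn dialogue, assuming the openai Python
# package (pre-1.0 interface). Model name and prompts are illustrative.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; a real key comes from the OpenAI dashboard

history = [{"role": "user", "content": "Explain what a phishing email is."}]
reply = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=history)
history.append(reply["choices"][0]["message"])  # keep the assistant's answer in context

# The follow-up refines the request; the model sees the full history.
history.append({"role": "user", "content": "Now summarize that in two sentences."})
reply = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=history)
print(reply["choices"][0]["message"]["content"])
```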

ChatGPT is “smart.” Of course, ChatGPT is only as smart as you make it; the quality of its answers depends heavily on the prompts it is given.

ChatGPT can elevate a mundane attack into something more serious. A recent report found that hackers were able to use ChatGPT to improve a simple infostealer from 2019.

As part of the content policy, the creators of ChatGPT put various restrictions in place to prevent it from being used to create malicious content. But cyber criminals are finding ways around these restrictions, primarily by abusing the API.
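How little it takes to reach the model programmatically helps explain why the API is the weak point. The sketch below is an illustration under stated assumptions (openai Python package, pre-1.0 interface, illustrative model name, deliberately benign prompt): a raw API request is just text sent from code, so the guardrails built into the ChatGPT web interface do not automatically constrain every request made this way.

```python
# Minimal sketch of a direct API request, assuming the openai Python
# package (pre-1.0 interface). The prompt here is deliberately benign;
# the point is that requests from code bypass the web UI entirely.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Draft a short, formal reminder email."}],
)
print(response["choices"][0]["message"]["content"])
```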

This is a concern, but experience suggests that these loopholes will eventually be closed, just as other vulnerabilities are patched once they are discovered.

Some would say that the danger is not ChatGPT itself, but the class of tools it represents. AI is good at working through many iterations of training material to hone its capabilities, and it could be used to probe a system’s interfaces for weaknesses or even to discover new zero-day attacks. Because it is designed to take in information and refine its algorithms through feedback, it can do more and grow smarter over time. McKinsey noted that as early as 2020, the group behind the Emotet malware leveraged advanced AI and machine-learning techniques to increase its effectiveness. They used an automated process to send out contextualized phishing emails that hijacked existing email threads, some of them linked to COVID-19 communications.

In the future, AI and ChatGPT may present a bigger worry. While the issue is on the radar of many researchers, they agree that for now, AI-generated attacks remain primarily a potential threat rather than an active one.

Further insights on ChatGPT can be found in this Check Point Research article. Lastly, to receive cutting-edge cyber security news, best practices and resources in your inbox each week, please sign up for the CyberTalk.org newsletter.