Jul 17 — Cyber criminals are developing generative AI tools that function much like ChatGPT but are built for nefarious purposes, and they are actively advertising these tools to criminal colleagues and peers.
One such tool is known as WormGPT, which styles itself as a black hat alternative to mainstream GPT models. It has purportedly been trained on a diverse array of sources, with a concentration on malware-related data.
A developer of WormGPT described the tool as an enemy of ChatGPT, saying that it “lets you do all sorts of illegal stuff.”
Cyber security experts who tested WormGPT called the results unsettling: the tool produced a persuasive phishing email along with strategies for executing the corresponding campaign.
According to SlashNext, generative AI democratizes the execution of sophisticated Business Email Compromise (BEC) attacks. In other words, cyber criminals with limited skills can use the technology to launch campaigns more easily than ever before.
If you’ve been following the ChatGPT news, you’ll recall that earlier this year, cyber security researchers determined how to “jailbreak” ChatGPT, coaxing it into producing malware code in spite of its safeguards.
Cyber criminals are now selling jailbreaks for interfaces like ChatGPT and Bard on underground forums. These jailbreaks, which are specialized prompts, enable buyers to disable the safeguards placed on mainstream generative AI tools.
Recently, Check Point researchers showed that Bard’s anti-abuse restrictions are weaker than those of ChatGPT, rendering it simpler to create malicious content through Bard than through OpenAI’s technology.
Further, experts have demonstrated that ChatGPT and other large language models (LLMs) can generate polymorphic (mutating) code designed to evade endpoint detection and response (EDR) systems.