Jul 28 — A new cybercriminal tool, known as FraudGPT, has appeared on various dark web marketplaces and Telegram channels. As the name implies, the tool is intended to facilitate malicious activity. It has been in circulation since at least July 22nd of this year.
“This is an AI bot, exclusively targeted for offensive purposes, such as crafting spear phishing emails, creating cracking tools, carding, etc.,” says security researcher Rakesh Krishnan.
The tool can reportedly be used to create undetectable malware, identify leaks, search for vulnerabilities and build phishing campaigns. The exact large language model underlying the system remains unknown at this time.
The threat actor behind FraudGPT claims to have more than 3,000 confirmed sales and reviews. FraudGPT subscriptions cost $200 per month (or $1,000 for six months and $1,700 annually).
Tools like FraudGPT and WormGPT could take cyber threats to the next level by lowering the barrier to entry, enabling even unskilled attackers to launch convincing phishing and malware campaigns at scale.
Although ChatGPT can be exploited as a cybercriminal tool, ethical safeguards limit that capacity. Yet the surge in AI-driven tools like FraudGPT shows that it isn’t difficult to re-create the same technologies without the safeguards.
To combat adversarial AI, organizations may wish to adopt AI-based cyber security technologies of their own. Get more information about the best AI-powered security solutions here. Lastly, to receive more timely cyber security news, insights and cutting-edge analyses, please sign up for the cybertalk.org newsletter.