
The latest AI-powered chatbot hacks

March 13 – Hackers are quick to capitalize on anything that’s trending, and ChatGPT is no exception. In November 2022, the AI-powered chatbot launched with little fanfare, but it rapidly gained worldwide attention and popularity.

ChatGPT can write presentable student essays, summarize research papers, answer questions well enough to pass medical exams, provide code for software development, translate content and more. While the world has attempted to make sense of this technology, hackers have found their own uses for it…

ChatGPT Facebook compromise

Cyber security analysts believe that a hacker may have compromised thousands of Facebook accounts, including business accounts, via a fake Chrome-based ChatGPT-themed browser extension.

The browser extension’s description read “Quick access to ChatGPT” and promised users an ultra-efficient means of interacting with the overwhelmingly popular artificial intelligence chatbot.

However, the extension also surreptitiously harvested people’s information via cookie trackers. The hackers additionally installed a backdoor on users’ devices that granted the malware’s operator admin access to their Facebook accounts.

ChatGPT hacks

The aforementioned ChatGPT-related fraud is merely one example of the many ways in which threat actors have attempted to capitalize on public interest in ChatGPT to distribute malware and to turn a profit.

In another example of recent ChatGPT-related cyber criminal opportunism, a hacker set up a fake ChatGPT landing page that tricked people into “signing up” for the ChatGPT service. In reality, the landing page prompted visitors to download malware.

ChatGPT phishing emails

In addition to an uptick in ChatGPT-themed malware operations, researchers are also seeing ChatGPT-focused phishing emails. In at least one phishing scam, individuals across five different countries received unsolicited, fake links to ChatGPT.

If a user clicked on the link, the user was directed to a fake version of the chatbot, and was then asked to invest a certain amount of money. Prompts asked the user to input bank card details, an email address, ID credentials and a phone number.

Afterwards, a copycat version of ChatGPT was delivered to the user. This fake version of the chatbot offered a few pre-determined answers to users’ queries.

ChatGPT fraud insights

  • Only access ChatGPT through its official website.
  • Beware of unsolicited emails or messages that claim to be from ChatGPT and that ask for personal information, such as passwords or bank account numbers. ChatGPT will never request this information.
  • If you believe that you have received a phishing email or message, do not reply to it or click on any links. Instead, report it to the appropriate authorities, such as your organization’s IT department or the Anti-Phishing Working Group.
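The first tip above can be sketched in code: before following a ChatGPT link, check that its hostname exactly matches a known official host. This is a minimal illustration, not a complete anti-phishing defense; the allowlist below is an assumption for the example (`chat.openai.com` was the chatbot’s official address at the time of writing).

```python
from urllib.parse import urlparse

# Assumed allowlist of official hosts, for illustration only.
OFFICIAL_HOSTS = {"chat.openai.com", "openai.com"}

def looks_official(url: str) -> bool:
    """Return True only if the link's hostname exactly matches an
    allowlisted host. Exact matching rejects lookalike domains and
    typo-squats such as 'chat-openai.example.com'."""
    host = (urlparse(url).hostname or "").lower()
    return host in OFFICIAL_HOSTS

print(looks_official("https://chat.openai.com/"))          # True
print(looks_official("https://chat-openai.example.com/"))  # False
```

Note that exact hostname comparison is deliberate: substring checks (e.g. `"openai.com" in url`) are easily fooled by phishing domains that embed the real name.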

For more insights into ChatGPT, please see CyberTalk.org’s past coverage. Lastly, to receive more cutting-edge cyber security news, best practices and analyses, please sign up for the CyberTalk.org newsletter.