Artificial intelligence (AI) continually presents an alluring array of new use cases. Organizations leverage AI-powered tools to analyze resumes, better understand customers, identify market trends, and more. For hackers, AI also represents a new tool for improved margins and success rates.

Right now, the use of artificial intelligence in phishing emails is drawing attention. Technology researchers have discovered that the deep learning language model GPT-3, combined with other AI-as-a-service platforms, can make crafting spear phishing campaigns at scale easier than ever.

Previously, whether AI could craft effective phishing emails remained an open question. Phishing emails generally see a low click rate. However, all hackers need in order to deliver malware or disrupt a network is a single victim.

Moreover, mass-produced, mass-delivered AI-driven spear phishing campaigns seemed nearly impossible. Targeting a unique individual was thought to require custom research and hours of human labor. Here’s how things are changing…

Applications of artificial intelligence

Natural language processing (NLP) developments may lead to revised thinking around AI-driven phishing and spear phishing campaigns. If you attended this year’s Black Hat and Defcon security conferences, you’ll know that Singapore’s Government Technology Agency presented an experiment involving NLP and phishing. The group demonstrated how an AI-as-a-service platform delivered phishing messages to 200 colleagues. Their findings? A larger percentage of recipients clicked on the AI-generated messages than on the human-written ones.

The development of AI itself requires specialized knowledge and skills. It also requires huge amounts of funding: millions of dollars are needed to train an AI model well.

However, once the technology has been fully developed, it’s easy to use. Operators don’t even need to run code. A simple prompt enables researchers to achieve intended outcomes, from self-generated phishing text to more complex text-based projects. Artificial intelligence-based techniques like this also enable mass-personalization of emails. In turn, mass spear phishing may be on the horizon.

AI misuse and cyber attacks

Artificial intelligence development groups are concerned about the potential misuse of their products. Some vendors attempt to audit platforms for suspicious activity and collect information about product users. Similarly, technical measures, such as rate limits, also curb malicious use of products. AI developers are continuing to work on tools’ capabilities and their safety.
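To make the rate-limiting idea concrete, here is a minimal token-bucket sketch in Python. The `TokenBucket` class and its parameters are hypothetical illustrations, not any vendor’s actual implementation; real AI-as-a-service platforms enforce such limits server-side, per API key.

```python
import time

class TokenBucket:
    """Toy token-bucket rate limiter: allows roughly `rate` requests
    per second, with bursts of up to `capacity` requests."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate              # tokens added per second
        self.capacity = capacity      # maximum burst size
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens based on elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=1.0, capacity=5)
results = [bucket.allow() for _ in range(10)]
# The first 5 burst requests pass; subsequent ones are throttled
# until tokens refill over time.
```

A scheme like this slows down anyone trying to mass-generate messages through an API, though a determined abuser can work around it with multiple accounts, which is why vendors pair rate limits with auditing.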

Researchers point out that monitoring AI tools for safe use is tricky. It could mean surveilling legitimate platform users. Further, some AI-as-a-Service providers may not be concerned with who accesses platforms and how AI tools are used. As a result, it’s possible that some platforms may prove uniquely appealing to online con artists.

Email AI and APIs

It’s easy to access AI APIs, say researchers. While some providers have registration requirements, others offer free trials that ask for neither an email address nor credit card information. In theory, users could use such tools ad infinitum.

Will governments step in?

To address malicious use of AI APIs, governments could step in. After all, governments do not want businesses to contend with phishing or malware that could damage production or supply chains.

Researchers are now working on tools that can identify and police synthetic or AI-generated phishing emails. This is in line with current work around deepfake detection. If humans can develop mechanisms that can spot synthetic media, hackers will no longer hold the higher ground.
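As a toy illustration of what such detection tooling might look like, here is a minimal bag-of-words naive Bayes classifier in Python. The `NaiveBayesDetector` class and its training snippets are invented for this sketch; production detectors for synthetic or phishing text are far more sophisticated, but the core idea of scoring text against labeled examples is the same.

```python
import math
from collections import Counter

def tokenize(text: str) -> list:
    return text.lower().split()

class NaiveBayesDetector:
    """Toy bag-of-words naive Bayes classifier for flagging
    suspicious email text. Illustrative only."""

    def __init__(self):
        self.word_counts = {"phish": Counter(), "ham": Counter()}
        self.doc_counts = {"phish": 0, "ham": 0}

    def train(self, text: str, label: str) -> None:
        self.doc_counts[label] += 1
        self.word_counts[label].update(tokenize(text))

    def score(self, text: str, label: str) -> float:
        total = sum(self.word_counts[label].values())
        vocab = len(set(self.word_counts["phish"])
                    | set(self.word_counts["ham"]))
        logp = math.log(self.doc_counts[label] / sum(self.doc_counts.values()))
        for word in tokenize(text):
            # Laplace smoothing so unseen words do not zero out the score.
            logp += math.log((self.word_counts[label][word] + 1)
                             / (total + vocab))
        return logp

    def classify(self, text: str) -> str:
        return max(("phish", "ham"), key=lambda lbl: self.score(text, lbl))

detector = NaiveBayesDetector()
detector.train("urgent verify your account password now", "phish")
detector.train("click this link to claim your prize", "phish")
detector.train("meeting notes attached for tomorrow", "ham")
detector.train("lunch plans for the team this friday", "ham")
detector.classify("verify your password now")  # → "phish"
```

Word frequencies alone cannot distinguish AI-written text from human-written text, which is why research in this space leans on deeper statistical signatures of language models rather than vocabulary.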

For more information concerning artificial intelligence developments, email AI scams and emerging phishing schemes, sign up for the Cyber Talk newsletter, which provides robust cyber security perspectives on trending topics.