Home NCSC warns of ChatGPT prompt injection attacks

NCSC warns of ChatGPT prompt injection attacks

Aug 30 – The U.K’s National Cyber Security Centre (NCSC) has stated that there is a growing risk of chatbot “prompt injection” attacks.

As implied in the name, in these attacks cyber criminals manipulate prompts, forcing language models like ChatGPT and Bard to behave in unexpected ways.

Because chatbots may share data with third-party applications and services, the NCSC has stated that risks from malicious prompt injection attacks are only expected to increase with time.

Prompt injection attacks

Prompt injection may be an inherent risk of all large language model (LLM) technology. These types of attacks are tricky to identify and difficult to mitigate (and there is no surefire way to mitigate them, experts explain).

According to NCSC, prompt injection attacks could result in real-world consequences if systems lack built-in security. The vulnerabilities inherent in chatbots, along with the ease with which prompts can be manipulated, could not only result in attacks, but also in scams and data theft.

Experts say that increased awareness of prompt injection risk will allow for better system design; preventing exploitation of vulnerabilities. For instance, applying a rules-based system on top of the ML model can prevent it from behaving abnormally, even when prompted to do so.

More information

Due to prompt injection threats and other security risks, enterprise should also remain cautious when integrating large language models into their services.

Another attack related to the manipulation of LLMs that organizations should be aware of is known as ‘data poisoning’. Get more insights here. Lastly, to receive more timely cyber security insights and cutting-edge analyses, please sign up for the cybertalk.org newsletter.