April 12 — ChatGPT promises to make employees more productive than ever, and many enterprise leaders have embraced the technology as a transformative business enabler.
“The smarter and faster-growing companies are leveraging AI tools to improve their competitive advantages; from using ChatGPT to quickly create good Python code, to writers improving their documents by using ChatGPT to consider other related issues and upgraded language,” says CISO Pete Nicoletti.
“…this technology is not just for the super-geeks, fanbois, and those who wish to ace their school exams…it will be rapidly embedded across business sectors and services, including healthcare, legal, architecture, entertainment, retail, technology and cyber security,” explains CISO Deryck Mitchelson.
ChatGPT cyber risks
However, the chatbot can introduce certain cyber security risks, requiring active risk management on the part of technical business leaders, IT teams and cyber security teams.
An estimated 4% of employees have placed sensitive corporate data into the language model while using ChatGPT, raising concerns about major corporate data leaks. In one known case, an executive pasted the firm’s 2023 strategy document into the chatbot and asked it to create a corresponding PowerPoint deck.
The financial firm JPMorgan has restricted workers’ use of ChatGPT, and other well-known brands have warned employees about inputting data into chatbots.
Further, chatbots can easily be used by internal personnel or external actors to create phishing emails and other workplace threats.
ChatGPT risk management
In response, “many CISOs are including AI-based issues in their weekly staff meetings and giving initial guidance, as well as updates, to executive staff,” says Nicoletti.
“ChatGPT and other AI-based tools are lowering the bar for hackers to create very targeted phishing emails based on previous hack info combined with social media information to fool many end-users.”
“CISOs need to review and embrace the significant amount of good guidance that has recently been released.”
For example, the NIST AI Risk Management Framework (AI RMF) is intended for voluntary use, to improve organizations’ ability to incorporate trustworthiness considerations into the design, development, use and evaluation of AI products, services and systems.
A consensus resource, the AI RMF was developed in an open, transparent, multi-disciplinary and multi-stakeholder manner over an 18-month period, in collaboration with more than 240 contributing organizations from private industry, academia, civil society and government.
“It has all of the appropriate steps for CISOs to use,” Nicoletti points out. The framework is available on the NIST website.
ChatGPT risk communication
CISOs need to lead the discussion around workplace policies pertaining to ChatGPT. “The risks also need to be tracked and shared so that everyone is up-to-speed on the challenges and any negative outcomes,” says Nicoletti.
“Countermeasures and policies need to be effective and tracked and discussed with executives!”
Conversations concerning chatbot development, use, results, outcomes and risks should remain ongoing.
Further ChatGPT insights
As you’ve likely heard by now, Italy has temporarily banned ChatGPT and launched an investigation into a suspected breach of privacy regulations. Germany may follow in Italy’s footsteps, and privacy regulators in both France and Ireland have contacted the Italian data regulator to learn more about its findings.
Against the backdrop of international debate, should your organization continue to trust ChatGPT? “We need appropriate governance, accountability and transparency, which hopefully regulations and legislation will deliver,” says Mitchelson.
“I do however have a major concern around the ownership of ChatGPT and similar AI technologies. We are seeing a pincer maneuver by Cloud Service Providers to control more and more of our services, data, security, automation, collaboration and now AI. I’m not convinced that this amount of power in such a small number of mega-corporations is a good thing, particularly if we ask ChatGPT to advise on our strategy and vendor decisions.”
In the absence of appropriate governance, accountability and transparency, could companies with formidable capital buy out corporate chatbot owners and influence chatbot responses in ways that favor select business interests?
For more C-level insights into ChatGPT, please see ChatGPT Security Risks: A Guide for Cyber Security Professionals.