As the AI revolution continues to sweep the world, OpenAI’s ChatGPT tool has emerged as a groundbreaking force in the realm of natural language processing. With applications spanning across industries and organizations, the possibilities sometimes seem limitless.
But with the widespread adoption of AI language models, ChatGPT, and adjacent technologies, concerns around data privacy and security have reached a fever pitch.
That’s why we’ve brought together the practical insights and strategic recommendations of four top-tier cyber security professionals who can help you protect your organization from the potential threats posed by the new tech.
Jonathan Fischbein is the Global Chief Information Security Officer for Check Point Software Technologies. Field CISOs Deryck Mitchelson, Cindi Carter and Pete Nicoletti cover the EMEA and Americas regions for the company. Read on to learn more about their expert perspectives on this critical topic.
Global CISO Jonathan Fischbein
“Companies that embrace the disruptive power of generative AI will find themselves at a considerable advantage. However, those that adopt it rapidly need to mitigate the massive security implications that are bundled with the benefits.”
“After an executive management team decision about whether to allow ChatGPT (I personally support and encourage the adoption of AI), start with a simple and comprehensive approach:
- Adopt: Fold AI usage into existing policies.
- Communicate: Share do’s and don’ts with all employees via security awareness programs.
- Monitor: Use the SOC to generate alerts when internal users expose classified data to AI platforms.
- Enforce: Ensure that PII or critical data is not exposed to AI platforms that have not earned the required level of trust for the data’s sensitivity level. Leverage enforcement mechanisms such as DLP and WAAP (web application and API protection) security capabilities.”
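The monitor-and-enforce steps above can be sketched as a simple pre-submission gate on outbound prompts. This is a minimal illustrative example only; the pattern names and regexes are assumptions for demonstration, and a real DLP or WAAP product performs far richer detection.

```python
import re

# Hypothetical, minimal patterns for a few common PII types.
# A production DLP engine uses far more robust detection than these regexes.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def find_pii(prompt: str) -> list[str]:
    """Return the PII categories detected in a prompt bound for an AI platform."""
    return [name for name, pattern in PII_PATTERNS.items() if pattern.search(prompt)]

def gate_prompt(prompt: str) -> tuple[bool, list[str]]:
    """Allow the prompt (True) only if no PII pattern matches; report any hits."""
    hits = find_pii(prompt)
    return (len(hits) == 0, hits)
```

A check like this would sit in a proxy or browser plugin between users and the AI platform, blocking the request and alerting the SOC when `gate_prompt` returns a hit.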
“The fact that ChatGPT is blocked in several countries is proof of mistrust regarding who is behind AI platforms and how they will use the data. Such platforms are new, and often considered the new ‘coolest toy’…The good guys aim to leverage it for good, while the bad guys are already using it to develop a new generation of sophisticated cyber campaigns. We have already witnessed a number of cases where security was weak or easily bypassed.
Privacy is another challenge. There was a case where ChatGPT chat histories were visible to other users, which of course speaks to the immaturity of this new toy! I am afraid this will not be the last time we hear about poor privacy practices.”
“Regulation is lacking on every possible level. Italy’s recent decision to ban it is a very interesting one, which I understand but don’t support. We understand that policy requirements may take some time to get used to, but they are essential to protecting our sensitive information and maintaining the privacy and security of our customers and employees.”
Field CISO Cindi Carter
“Artificial Intelligence is a technology that is transforming every walk of life. It is a wide-ranging tool that enables people and businesses to rethink how we integrate information, analyze data, and use the resulting insights to improve decision making.”
“As one of the fastest-moving technological advances, there are growing concerns about the privacy implications of using OpenAI’s ChatGPT. Although ChatGPT’s data is amassed from what are considered publicly available sources, it’s the addition of end users’ input that puts sensitive business and personal data at risk. Even when data is publicly available, its use can compromise what is known as contextual integrity. This is a fundamental principle in legal discussions of privacy. It requires that individuals’ information not be revealed outside of the context in which it was originally produced. Moreover, the data ChatGPT was trained on can be proprietary or copyrighted.
OpenAI, the company behind ChatGPT, offers no procedures for individuals to check whether the company stores their personal information, or to request that it be deleted. This is a guaranteed right under the European General Data Protection Regulation (GDPR), although it’s still under debate whether ChatGPT is compliant with GDPR requirements. This “right to be forgotten” is particularly important in cases where the information is inaccurate or misleading, as is sometimes the case.
Lastly, it’s important to recognize that the unprecedented growth of ChatGPT may make the platform uniquely vulnerable, as its creators rush to keep up with demand…”
Field CISO Deryck Mitchelson
“We have been building up to the next digital evolution for many years, and it has finally arrived. The hype around Siri, Cortana, Lyra and chatbots, which underwhelmed at best, has been replaced by an advancement that will shape digital interaction for the next generation and beyond. The use cases for ChatGPT are boundless. This technology is not just for the super-geeks, fanbois and those who wish to ace their school exams; it will be rapidly embedded across business sectors and services, including healthcare, legal, architecture, entertainment, retail, technology and cyber security.”
“As with all technologies/innovations, organisations should perform detailed risk assessments and data privacy impact assessments to get an understanding of what data would be processed, where, by whom, for what purpose, retention periods and any data sharing agreements…”
“I’m not surprised that countries are considering its appropriate use, however the speed of Italy’s response does surprise me. It is debatable whether ChatGPT fully complies with GDPR legislation, in particular the right to be forgotten. However, I’m not sure that banning innovation is necessarily the correct thing – we need appropriate governance, accountability and transparency, which hopefully regulation and legislation will deliver.”
“I do however have a major concern around the ownership of ChatGPT and other similar AI technologies. We are seeing a pincer manoeuvre by cloud service providers to control more and more of our services, data, security, automation, collaboration and now AI. I’m not convinced this amount of power in such a small number of mega-corporations is a good thing, particularly if we ask ChatGPT to advise on our strategy and vendor decisions.”
Field CISO Pete Nicoletti
“First off, CISOs need to be very aware of all the issues and risks related to their business and the entire corporate and personal ecosystems. With this awareness, they need to provide ongoing guidance and leadership in addressing these new opportunities and their risks. Many CISOs are including AI-based issues in their weekly staff meetings and giving initial guidance, as well as updates, to executive staff. For example: ChatGPT and other AI-based tools are lowering the bar for hackers, who create highly targeted phishing emails based on information from previous hacks combined with social media information. These attacks are fooling many end users. Staff members in Security, IT, Network and Development, as well as Marketing and Sales, are all using ChatGPT, and that has to be shared and discussed during CISO-led meetings…”
“CISOs need to share both the benefits and the risks of using ChatGPT and AI-based tools. As noted above, there are plenty of upsides to leveraging AI-based tools. That information needs to be tracked, documented, and shared.
The risks also need to be tracked and shared so that everyone is up to speed on the challenges and any negative outcomes…
CISOs should be leading the discussion, sharing new and continuously updated policies, and validating policy effectiveness on an ongoing basis. This is not an issue that can be ignored.”
For more insights into artificial intelligence and cyber security, please see CyberTalk.org’s past coverage. Lastly, check out the CyberTalk.org newsletter! Sign up today to receive top-notch news articles, best practices and expert analyses each week.