Check Point Software’s cyber security evangelist Ashwin Ram shares insights into artificial intelligence, policy development and cyber security.

It is clear that Generative Artificial Intelligence (AI) technology, such as ChatGPT, has countless applications aimed at improving the way we live and work. It is equally clear that there are concerns about the potential misuse of such technology.

As soon as ChatGPT became publicly available, threat actors started using it to generate cyber weapons. To better understand the potential for misuse of Generative AI technology, the Check Point Research (CPR) team conducted an investigation into ChatGPT.

They decided to evaluate the feasibility of using Generative AI technology to develop cyber weapons. Through a Proof-of-Concept (PoC), CPR successfully demonstrated the ability to create a complete infection flow, from spear-phishing email creation to the establishment of a reverse shell, all without manually writing a single line of code. Interestingly, while ChatGPT warned of potential policy violations in response to requests to create phishing emails, it still generated the requested malicious content. To its credit, OpenAI, the company behind ChatGPT, has since put measures in place to stop malicious content creation on its platform.

The CPR team also observed individuals sharing their exploits on underground forums. In one instance, a malicious actor claimed to have successfully recreated malware strains and techniques described in research publications and write-ups about common malware. This highlights the potential dangers posed by the availability of Generative AI technology: even low-skilled actors can create attack tools, with malicious intent as their only requirement.

In another observation, CPR discovered that threat actors had found ways to bypass restrictions put in place by OpenAI, by creating Telegram bots that use OpenAI’s API. What this tells us is that the abuse of AI technology looks likely to be a cat-and-mouse game between attackers and defenders. If defenders are to have any chance of maintaining the upper hand, they must consolidate their cyber capabilities, enable automation, and deploy security controls powered by AI; anything less will be akin to bringing a knife to a gunfight.

With the recent release of GPT-4, CPR was once again able to demonstrate five scenarios in which threat actors can streamline their malicious efforts, completing their preparations faster and with more precision.

The misuse of AI technology to generate sophisticated cyber weapons, however, isn’t the only area of concern. The development of Generative AI technology raises other serious concerns. As a starting point, policymakers and AI technology owners need to focus on the following: data privacy and protection, transparency, accountability and responsibility, ethical considerations, and education and awareness. There are other areas, but this is a good start, so I’ll focus on these key areas for now.

Data privacy and protection

The reality is that Generative AI technology requires massive amounts of personal data for training purposes. According to The Global Risks Report 2023 from The World Economic Forum (WEF), “the proliferation of data-collecting devices and data dependent AI technologies could open pathways to new forms of control over individual autonomy. Individuals are increasingly exposed to the misuse of personal data by the public and private sector alike, ranging from discrimination of vulnerable populations and social control to potentially bioweaponry.”

The WEF report goes on to say, “as more data is collected and the power of emerging technologies increases over the next decade, individuals will be targeted and monitored by the public and private sector to an unprecedented degree, often without adequate anonymity or consent.”

To address these concerns, there need to be clear and strict policies regulating the collection, storage, and use of personal data by AI tools and their creators. At the very least, policymakers need to consider the risks of personal data being used for malicious purposes, such as identity theft, fraud, and discrimination.

A possible solution for AI technology creators, in certain scenarios, is the use of synthetic data. The ‘AI trends to watch in 2022’ report from CB Insights found that organisations are experimenting with synthetic datasets to enable data sharing and collaboration while complying with GDPR and other privacy laws. One case study mentioned in the report was the use of fake data by J.P. Morgan to train its AI models.
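
To make the synthetic-data idea concrete, below is a minimal sketch in Python. The schema, field names and distributions are hypothetical illustrations, not J.P. Morgan’s actual approach; the point is simply that a model can be trained on records that mimic the statistical shape of real data without containing anyone’s personal information.

  import csv
  import random

  random.seed(42)  # reproducible synthetic data

  def synthetic_customer(customer_id):
      """Fabricate one record with realistic-looking but entirely fake values."""
      return {
          "customer_id": customer_id,  # surrogate key, not a real identifier
          "age": random.randint(18, 90),
          "annual_income": round(random.lognormvariate(10.8, 0.5), 2),
          "num_transactions": random.randint(0, 500),
          "defaulted": random.random() < 0.07,  # assumed base rate, for illustration only
      }

  with open("synthetic_customers.csv", "w", newline="") as f:
      writer = csv.DictWriter(f, fieldnames=list(synthetic_customer(0).keys()))
      writer.writeheader()
      for i in range(10_000):
          writer.writerow(synthetic_customer(i))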

For organisations, there is also the risk of employees sharing sensitive or confidential data with AI tools. CyberHaven recently observed many organisations pasting various kinds of confidential data into ChatGPT to have it rephrased and fine-tuned. The data ranged from source code to patients’ medical records. In one example, an executive pasted bullet points from the company’s 2023 strategy into ChatGPT and asked it to help put together a PowerPoint deck.
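
For organisations worried about this kind of leakage, one practical mitigation is to strip or mask sensitive values before a prompt ever leaves the network. The Python sketch below is illustrative only: the patterns and labels are hypothetical placeholders, and a real deployment would rely on a proper data-loss-prevention engine rather than a handful of regular expressions.

  import re

  # Hypothetical, minimal patterns; real systems need far broader coverage.
  PATTERNS = {
      "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
      "CARD_NUMBER": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
  }

  def redact(text):
      """Mask anything matching a sensitive-data pattern before it is sent to an external AI service."""
      for label, pattern in PATTERNS.items():
          text = pattern.sub(f"[REDACTED-{label}]", text)
      return text

  prompt = "Turn this into a slide: contact jane@example.com, card 4111 1111 1111 1111."
  print(redact(prompt))  # the redacted prompt is what gets forwarded, not the original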

Transparency

On 22 May 2019, the Organisation for Economic Co-operation and Development (OECD) announced that its member and partner countries – forty-two countries in total – had formally adopted the first set of intergovernmental policy guidelines on Artificial Intelligence (AI), to uphold international standards that aim to ensure AI systems are designed to be robust, safe, fair and trustworthy. The announcement goes on to say that the guidelines “consist of five values-based principles for the responsible deployment of trustworthy AI and five recommendations for public policy and international co-operation.” According to Angel Gurría, OECD Secretary-General, “the OECD’s recommendation on AI is a global multi-stakeholder response to the challenge of achieving transparent and accountable AI systems.”

The need for this transparency is reflected in one of the five OECD AI Principles: “there should be transparency and responsible disclosure around AI systems to ensure that people understand AI-based outcomes and can challenge them.” Transparency in this context must lead to explainability, making it possible to scrutinize the decision-making process and, ultimately, allowing for external evaluation and the mitigation of bias.

To ensure AI systems are designed to be robust, safe, fair and trustworthy, explainability must be built into the design process and mandated by policymakers. The UK House of Lords summed up explainability in its ‘AI in the UK: ready, willing and able?’ publication, stating, “An alternative approach is explainability, whereby AI systems are developed in such a way that they can explain the information and logic used to arrive at their decisions.” The publication also stated, “A variety of companies and organisations are currently working on explanation systems, which will help to consolidate and translate the processes and decisions made by machine learning algorithms into forms that are comprehensible to human operators.”
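
As a toy illustration of what “explaining the information and logic used to arrive at a decision” can look like, the Python sketch below trains a deliberately simple, inherently interpretable model whose full decision rules can be printed and reviewed by a human operator. The dataset and model choice are placeholders; explanation systems for complex, production-scale models are far more involved.

  # Assumes scikit-learn is installed; the public iris dataset stands in for real data.
  from sklearn.datasets import load_iris
  from sklearn.tree import DecisionTreeClassifier, export_text

  data = load_iris()
  model = DecisionTreeClassifier(max_depth=2, random_state=0).fit(data.data, data.target)

  # The complete decision logic is human-readable, so the reasoning behind
  # any individual prediction can be scrutinised and challenged.
  print(export_text(model, feature_names=list(data.feature_names)))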

To provide transparency, Australia’s AI Ethics Principles recommend that “there should be transparency and responsible disclosure so people can understand when they are being significantly impacted by AI, and can find out when an AI system is engaging with them.”

Accountability and responsibility

One of the most commonly asked questions regarding AI regulation is, “How can we take advantage of what AI systems have to offer while also holding AI developers and users accountable?”[1] Accountability, in this context, means the ability to determine whether a decision was made in accordance with procedural and substantive standards, and to hold someone responsible if those standards are not met.[2] The question of how to create accountable AI systems matters because accountability is a key element of good public and private governance.[3]

IBM’s ‘Design for AI’ publication frames accountability this way: “Every person involved in the creation of AI at any step is accountable for considering the system’s impact in the world, as are the companies invested in its development.” IBM also provides the following four recommendations for embedding accountability in the design process:

  1. Make company policies clear and accessible to design and development teams from day one so that no one is confused about issues of responsibility or accountability. As an AI designer or developer, it is your responsibility to know.
  2. Understand where the responsibility of the company/software ends. You may not have control over how data or a tool will be used by a user, client, or other external source.
  3. Keep detailed records of your design processes and decision making. Determine a strategy for keeping records during the design and development process to encourage best practices and encourage iteration (a minimal sketch of one such record follows this list).
  4. Adhere to your company’s business conduct guidelines. Also, understand national and international laws, regulations, and guidelines that your AI may have to work within.
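
As a minimal sketch of recommendation 3, the Python snippet below appends timestamped, tamper-evident design decisions to a log file. The fields and the example entry are hypothetical; real programmes would more likely use version-controlled model cards or a dedicated governance platform, but the principle of recording who decided what, when and why is the same.

  import hashlib
  import json
  from datetime import datetime, timezone

  LOG_FILE = "design_decisions.jsonl"

  def record_decision(author, decision, rationale, alternatives):
      """Append one timestamped record of a design decision to an append-only log."""
      entry = {
          "timestamp": datetime.now(timezone.utc).isoformat(),
          "author": author,
          "decision": decision,
          "rationale": rationale,
          "alternatives_considered": alternatives,
      }
      # A digest of the entry makes later tampering easier to detect.
      entry["digest"] = hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
      with open(LOG_FILE, "a") as f:
          f.write(json.dumps(entry) + "\n")

  record_decision(
      author="jane.doe",
      decision="Exclude postcode from the training features",
      rationale="Postcode can act as a proxy for ethnicity and risks discriminatory outcomes",
      alternatives=["Keep postcode with fairness constraints", "Aggregate to region level"],
  )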

Australia’s AI Ethics Principles recommends, “Those responsible for the different phases of the AI system lifecycle should be identifiable and accountable for the outcomes of the AI systems, and human oversight of AI systems should be enabled.” This voluntary framework goes on to recommend, “Mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes. This includes both before and after their design, development, deployment and operation. The organisation and individual accountable for the decision should be identifiable as necessary.”

To gain a better understanding of how easy it is to hold an AI platform accountable, I asked ChatGPT a direct question. I knew that during our initial PoC, ChatGPT had created a phishing email, so I asked, ‘Have you created phishing emails before?’ ChatGPT responded with a straightforward ‘No, I have not created phishing emails before.’

So, how do we hold AI platforms accountable if they are not truthful? When I quizzed ChatGPT on the factors it took into account while responding to my previous question, it provided the following:

  • Phishing is illegal and unethical
  • It is important to discourage and prevent phishing activities
  • Phishing can result in serious legal, financial, and reputational consequences
  • Safe and responsible behavior online is important
  • I aim to provide helpful information and encourage safe online practices

It is interesting to note that ChatGPT did not take into account its own earlier activity of creating phishing emails when it answered.

How can we easily and accurately hold AI accountable if it is not designed to consider critical factors?

Liability

The World Economic Forum, in its ‘Who should we blame when artificial intelligence systems go wrong?’ publication, states that “In relation to AI, regulation and liability are two sides of the same safety/public welfare coin. Regulation is about ensuring that AI systems are as safe as possible; liability is about establishing who we can blame – or, more accurately, get legal redress from – when something goes wrong.” Therefore, policymakers must consider liability as a critical factor and establish transparent policies that define responsibility in case of AI-generated errors or harm.

As an example, in January 2020, police in Detroit arrested a man for a crime he had not committed. The Detroit Police Department had identified the innocent man using facial recognition software, which, it was later revealed, had falsely identified him. Not only did this mistake result in an innocent man being arrested and detained, but his young children had to go through the trauma of witnessing their father being arrested in their home. In a statement, the Detroit Police Department said, “the department enacted new rules. Now, only still photos, not security footage, can be used for facial recognition.”

Experiences in the cyber world have shown that assigning liability can be complicated by jurisdiction, highlighting the importance of thoroughly examining legal regulations and requirements. To illustrate this challenge, consider a scenario where the victim of AI technology is in one country, the registered owner of the AI platform is in another, and the AI platform itself is hosted in a third country.

An additional challenge for policymakers is determining how to hold individuals responsible for content generated by AI tools that promotes false or misleading information. The U.S. Blueprint for an AI Bill of Rights suggests policymakers consider creating new laws that address issues such as privacy, discrimination, fairness, transparency, and accountability, all of which are specific to AI technologies and their applications.

Ethical considerations

There are countless examples of AI technology used for the creation of deepfake content, from images and audio to videos. These technologies continue to be used for spreading fake news and perpetuating online misinformation. In a blow to fighting misinformation, The New York Times reported, “Researchers used ChatGPT to produce clean, convincing text that repeated conspiracy theories and misleading narratives.” The use of AI technology to generate false or misleading content carries with it numerous implications, including the potential to undermine democracies. Policymakers have a tough task ahead, balancing free speech and freedom of expression against liability and safeguarding communities.

In a disturbing example of AI technology being misused, a group of social media influencers discovered that their images were being used to create realistic AI-generated videos of them engaging in sexually explicit acts, without their consent or knowledge. One of the victims, Maya Higa, expressed feeling “disgusting, vulnerable, nauseous, and violated.”

In the 2020 documentary Coded Bias, Joy Buolamwini, a researcher at the MIT Media Lab and one of the film’s main subjects, discovered that facial recognition software had difficulty detecting her face because of her dark skin tone. During her investigation, she found that many facial recognition systems only detected her face when she wore a white mask.

There are now multiple cases in which AI platforms have been accused of bias, such as the allegations against tenant-screening software used for rental apartments, where it has been claimed that the screening policies disproportionately impact Latinos and African Americans. From a policy-making standpoint, it is crucial to create policies that ensure the design and application of AI tools adhere to fundamental human values: dignity, autonomy, fairness, impartiality, and justice.

Education and awareness

Just as cyber awareness training is essential to a sound cyber strategy, a holistic approach may work best in helping the public understand the risks associated with the misuse of AI technology. It is therefore crucial to concentrate on public education and on raising awareness of the potential dangers AI presents.

Education campaigns may be required, in the form of public awareness workshops, seminars, public speaking events, and outreach through media outlets. Online resources, such as educational videos and articles, could explain the dangers of AI misuse. These resources should be easily accessible and provide concise, easy-to-understand information in a wide range of languages.

Policymakers could look at incorporating the ethical use of Artificial Intelligence into school curricula, starting at an early age. This could help instill a sense of responsibility and accountability among future generations.

Industry collaboration and partnership among policymakers, technology companies, cyber security vendors, universities, government agencies, and other organizations are needed to develop and implement educational programs for the public and private sectors.

Our focus must be to ensure every member of our society understands the potential risks of artificial intelligence, how to protect their personal data, how to protect themselves, where to go for assistance with AI-related harm, and who to hold accountable and liable for that harm. Individuals must also be able to easily obtain and understand how AI systems make the decisions that affect them.

Conclusion

From a cyber perspective, the number and sophistication of cyber attacks are on the rise, and this trend shows no signs of slowing down. We know that threat actors are using AI-driven attack tools, making it imperative for organizations to continually evaluate and deploy effective security controls to address emerging threats. Organizations that fail to invest in AI-driven cyber security controls will find it impossible to prevent attacks that loom just over the horizon. Automation, consolidation, and comprehensive AI-driven cyber security controls aimed at preventing newly created attacks must be non-negotiable components of your cyber strategy.

Policymakers must act fast; we are at the dawn of a new arms race, and the potential benefits of AI could be overshadowed by its misuse. It is crucial to establish a plan for holding those responsible for AI misuse, particularly rogue nations, accountable and liable. By implementing robust policies and increasing public awareness, we can work together to mitigate the risks associated with AI technology and harness its potential for good.

[1] Doshi-Velez, F., Kortz, M., Budish, R., Bavitz, C., Gershman, S., O’Brien, D., Scott, K., Shieber, S., Waldo, J., Weinberger, D., Weller, A., & Wood, A. (2019). Accountability of AI Under the Law: The Role of Explanation. Journal of Technology Science, 2(1), 1-23. https://doi.org/10.21428/8f67922d

[2] Joshua A Kroll, Solon Barocas, Edward W Felten, Joel R Reidenberg, David G Robinson, and Harlan Yu, Accountable Algorithms, 165 U. PA. L. REV. 633, 656 (2016).

[3] Jonathan Fox, The Uncertain Relationship Between Transparency and Accountability, 17 DEVELOPMENT IN PRACTICE 663, 663-65 (2007).