EXECUTIVE SUMMARY:

In the wake of ChatGPT’s debut on the world stage, interest in generative artificial intelligence (AI) exploded. Generative AI is being deployed in ecosystems everywhere, and some major tech players believe that it’s high time for a common set of standards around the development and deployment of AI-based technologies.

On Friday, Google introduced its new Secure AI Framework (SAIF), a conceptual framework that offers industry standards for responsibly creating and implementing AI systems.

SAIF was inspired by security best practices – like reviewing, testing and controlling the supply chain – that Google has applied to software development. It also incorporates Google’s understanding of security mega-trends and of risks specific to AI systems.

SAIF arguably represents a step towards ensuring that AI technology is safe-by-design and secure-by-default when implemented. Here’s what to know…

SAIF key details

SAIF is designed to help mitigate risks specific to AI systems. These include theft of the model, poisoning of the training data, injection of malicious inputs through prompts (prompt injection) and extraction of confidential information from the training data.

As AI capabilities grow and become increasingly integrated into products and services, adhering to a responsible set of AI standards will be a critical move for developers and users of AI alike.

SAIF’s 6 security principles

The SAIF framework is built around the following 6 principles:

1. Expanding strong security foundations for the AI ecosystem. This includes using secure-by-default infrastructure protections and relying on expertise that can protect AI systems, applications and users. Enterprises are encouraged to develop knowledge that keeps pace with advances in AI, and to begin scaling and adapting infrastructure protections in the context of AI and evolving threat models.

For example, injection techniques, such as SQL injection, have existed for a long time. Organizations can adopt mitigations, like input sanitization and limiting, to help reduce the impact of prompt injection-style threats.
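
To make that concrete, here is a minimal, hypothetical sketch of input sanitization and length limiting applied before user text reaches a model prompt. The patterns, character limit and function names are illustrative assumptions, not part of SAIF or any Google tooling:

    import re

    # Illustrative, non-exhaustive patterns that often show up in prompt-injection attempts.
    SUSPICIOUS_PATTERNS = [
        re.compile(r"ignore (all|any|previous) instructions", re.IGNORECASE),
        re.compile(r"disregard (the )?system prompt", re.IGNORECASE),
    ]

    MAX_INPUT_CHARS = 2000  # arbitrary limit chosen for this example

    def sanitize_user_input(text: str) -> str:
        """Trim, length-limit and screen user input before it is placed into a prompt."""
        cleaned = text.strip()[:MAX_INPUT_CHARS]
        for pattern in SUSPICIOUS_PATTERNS:
            if pattern.search(cleaned):
                # A real system might log, block or route this for human review instead.
                raise ValueError("Input rejected: possible prompt-injection attempt")
        return cleaned

Pattern matching alone cannot catch every injection attempt, so checks like these would typically sit alongside model-side and output-side defenses.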

2. Extending detection and response to bring AI into an organization’s threat universe. When responding to AI-related cyber incidents, timeliness is critical. To shorten time-to-detect and time-to-remediate, organizations need threat intelligence capabilities, and they need to monitor the inputs and outputs of generative AI systems for anomalies. Such an endeavor typically requires collaboration with trust and safety, threat intelligence and counter-abuse teams, according to Google.
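
As a simplified sketch of what such input/output monitoring could look like, the example below flags generative AI responses that contain known threat indicators or are unusually long. The indicator list and threshold are assumptions made for illustration, not part of SAIF:

    from dataclasses import dataclass

    # Hypothetical indicators that a threat intelligence feed might supply.
    THREAT_INDICATORS = {"malicious-domain.example", "BEGIN RSA PRIVATE KEY"}
    MAX_EXPECTED_OUTPUT_CHARS = 4000  # illustrative baseline for a "normal" response

    @dataclass
    class Finding:
        prompt: str
        response: str
        reason: str

    def inspect_interaction(prompt: str, response: str) -> list[Finding]:
        """Return anomaly findings for a single prompt/response pair."""
        findings = []
        for indicator in THREAT_INDICATORS:
            if indicator in response:
                findings.append(Finding(prompt, response, f"matched indicator: {indicator}"))
        if len(response) > MAX_EXPECTED_OUTPUT_CHARS:
            findings.append(Finding(prompt, response, "response unusually long"))
        return findings

In practice, findings like these would be forwarded to existing detection-and-response tooling rather than handled in isolation.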

3. Automating defenses to keep pace with both new and existing threats. The most recent developments in AI can improve the speed and scale of security response efforts. However, cyber adversaries will likely use AI to scale their attacks as well. Thus, it’s important to use AI’s current and emerging capabilities to remain agile and cost-effective in defending against them.

4. Harmonizing platform-level controls to ensure consistent security across an organization. Consistency across control frameworks can assist with AI risk mitigation and can help scale protections across platforms and tools for all AI applications.

5. Adapting controls to adjust mitigations and create faster feedback loops for AI deployment. Continuous testing of implementations through continuous learning can ensure that detection and protection capabilities keep up with the evolving threat environment. Relevant techniques include reinforcement learning based on incidents and user feedback, and involve steps such as updating training data sets, fine-tuning models to respond strategically to attacks, and allowing the software used to build models to embed further security in context.
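
One small, hypothetical sketch of that feedback loop: flagged interactions are appended to a dataset that can later be reviewed and folded into training data or used for fine-tuning. The file name and record fields are illustrative assumptions:

    import json
    from datetime import datetime, timezone
    from pathlib import Path

    FEEDBACK_FILE = Path("incident_feedback.jsonl")  # illustrative location

    def record_incident(prompt: str, response: str, label: str) -> None:
        """Append a flagged interaction for later review, retraining or fine-tuning."""
        record = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "prompt": prompt,
            "response": response,
            "label": label,  # e.g. "prompt_injection", "data_leak", "benign"
        }
        with FEEDBACK_FILE.open("a", encoding="utf-8") as fh:
            fh.write(json.dumps(record) + "\n")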

6. Contextualizing AI system risks in surrounding business processes. Conducting risk assessments around how organizations will deploy AI can inform decisions. This includes assessing the end-to-end business risk: think data lineage, validation and operational behavior monitoring for select application types. Further, organizations are encouraged to construct automated checks to validate AI performance.
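
To illustrate what an automated performance check might look like, the sketch below gates deployment on a small question/answer benchmark and an accuracy threshold. The benchmark, threshold and query_model interface are all assumptions made for this example:

    from typing import Callable

    # Tiny illustrative benchmark; a real assessment would use a larger, curated set.
    BENCHMARK = [
        ("What is the capital of France?", "Paris"),
        ("How many days are in a week?", "7"),
    ]
    ACCURACY_THRESHOLD = 0.9  # illustrative deployment bar

    def passes_validation(query_model: Callable[[str], str]) -> bool:
        """Return True if the model clears the accuracy bar on the benchmark."""
        correct = sum(
            1 for question, expected in BENCHMARK
            if expected.lower() in query_model(question).lower()
        )
        return correct / len(BENCHMARK) >= ACCURACY_THRESHOLD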

The SAIF framework

Google’s SAIF is based on the company’s 10 years of experience developing and using AI in its own products. The company hopes that, by sharing its experience and expertise for the public good, it will lay the groundwork for a more secure future. As the industry advances, Google says that it remains committed to contributing research and insights to the AI conversation.

By adhering to frameworks like SAIF, industry groups can responsibly develop AI systems, ultimately unlocking the full potential of this transformative technology.

If you’d like more information about Google’s framework, click here. For additional AI insights from CyberTalk.org, please see our past coverage.