OpenAI is setting up a new safety oversight committee after facing criticism that safety measures were being deprioritized in favor of new and “shiny” product capabilities.

CEO Sam Altman and Chairman Bret Taylor will co-lead the safety committee, alongside four additional OpenAI technical and policy experts. Committee members also include Adam D’Angelo, the CEO of Quora, and Nicole Seligman, who previously served as general counsel for the Sony Corporation.

The committee will initially evaluate OpenAI’s existing processes and safeguards. Within 90 days, the committee is due to submit formal recommendations to OpenAI’s board, outlining proposed improvements and new security measures.

OpenAI has committed to publicly releasing the recommendations as a means of increasing accountability and public trust.

Addressing user safety

In addition to scrutinizing current practices, the committee will contend with complex challenges around aligning AI system operations with human values, mitigating potential negative societal impacts, implementing scalable oversight mechanisms and developing robust tools for AI governance.

AI ethics researchers and several of the company’s own employees have questioned the prioritization of commercial interests over rigorous safety evaluations. The release of GPT-4o has amplified these concerns, as the model is significantly more capable than past iterations of the technology.

Other major AI research labs, such as Anthropic and DeepMind, along with tech giants pursuing AI development, may follow OpenAI’s lead by forming independent safety and ethics review boards.

AI and cyber security

The rapid development of versatile AI capabilities has led to concerns about the potential misuse of AI tools by those with malicious intent. Cyber criminals can leverage AI to execute cyber attacks, spread disinformation and compromise business or personal privacy.

The cyber security risks introduced by AI are unprecedented, making solutions — like AI-powered security gateways that can dynamically inspect data streams and detect advanced threats — critically important.

Check Point Software has developed an AI-driven, cloud-delivered security gateway that leverages machine learning models to identify AI-enabled threats, including deepfakes, data poisoning attacks and AI-generated malware. This multi-layered protection extends across networks, cloud environments, mobile devices and IoT deployments.

Protect what matters most. Learn more about Check Point’s technologies here. Lastly, to receive practical cyber insights, groundbreaking research and emerging threat analyses each week, subscribe to the CyberTalk.org newsletter.