AI Safety in Focus: OpenAI Rolls Out Risk Mitigation Framework



by FARUK IMAMOVIC

© Getty Images/Justin Sullivan

OpenAI, the organization behind the widely popular chatbot ChatGPT, has unveiled a comprehensive "Preparedness Framework" aimed at mitigating potential catastrophic risks associated with its advanced artificial intelligence technology.

This strategic move underscores the increasing awareness and responsibility in managing the potential dangers of cutting-edge AI.

Comprehensive Safety Measures and Oversight

The 27-page document outlines OpenAI's approach to identifying, assessing, and safeguarding against a range of risks that its AI models might pose.

These risks include the potential for AI models to cause large-scale cybersecurity disruptions or contribute to the creation of biological, chemical, or nuclear weapons. OpenAI has established several layers of checks and balances to ensure the safety of its AI models.

The company leadership holds decision-making authority on the release of new AI models, while the board of directors retains the power to reverse decisions made by the OpenAI leadership team. Before reaching the point where the board would need to veto the deployment of a potentially hazardous AI model, the company has instituted numerous safety checks.

A dedicated "preparedness" team, led by Massachusetts Institute of Technology professor Aleksander Madry, is responsible for monitoring and mitigating risks associated with advanced AI models.
This team will evaluate potential dangers and synthesize them into scorecards, categorizing risks as "low," "medium," "high," or "critical."
According to the framework, "only models with a post-mitigation score of ‘medium’ or below can be deployed," and only models with a post-mitigation score of "high" or below can be developed further.

Governance and Public Perception

The framework is currently in "beta" and will be updated regularly based on feedback. This initiative highlights the unique governance structure at OpenAI, which underwent significant changes following a corporate upheaval that saw CEO Sam Altman temporarily ousted and then reinstated.

The incident raised questions about Altman's influence over the company he co-founded and the board's limitations in overseeing his leadership team. The current board, described as "initial" and in the process of expansion, comprises three wealthy, White men tasked with ensuring that OpenAI's technology benefits all of humanity.