Ex-OpenAI chief scientist Ilya Sutskever launches company for AI safety

Safe Superintelligence Inc. to prioritize AI safety and innovation

by Faruk Imamovic

Ilya Sutskever, the former chief scientist of OpenAI, has launched a new venture, Safe Superintelligence Inc. (SSI), alongside former OpenAI engineer Daniel Levy and investor Daniel Gross. The company's mission is clear from its name: it aims to develop artificial intelligence (AI) with a strong emphasis on both safety and advanced capabilities.

Based in Palo Alto and Tel Aviv, SSI plans to advance AI technology by integrating safety measures directly into its development process. The founders underlined this commitment in an online announcement on June 19, stating, “Our singular focus means no distraction by management overhead or product cycles, and our business model means safety, security, and progress are all insulated from short-term commercial pressures.”

Sutskever and Gross's concerns about AI safety

Sutskever's departure from OpenAI on May 14 marked a significant change in his career trajectory. Having been involved in the controversial firing of OpenAI CEO Sam Altman, Sutskever saw his role at the company become increasingly unclear after he stepped down from the board upon Altman's return.

Levy, another key figure in AI research, left OpenAI shortly after Sutskever. Together with Jan Leike, Sutskever had led OpenAI's Superalignment team, established in July 2023 to address the challenge of controlling AI systems far surpassing human intelligence, known as artificial general intelligence (AGI).

At its inception, OpenAI dedicated 20% of its computing resources to this team. However, following the departure of its key members in May, OpenAI dissolved the Superalignment team, although it continued to defend its safety measures in a lengthy post by company president Greg Brockman.

Leike has since joined the Amazon-backed AI startup Anthropic, further underscoring the movement of top AI talent as the industry's approach to safety evolves.

Widespread concern among tech leaders

The concerns of Sutskever and his colleagues are shared by many in the tech industry.

Vitalik Buterin, co-founder of Ethereum, recently described AGI as “risky,” while emphasizing that such models pose less of a threat than corporate or military misuse of AI. High-profile figures such as Tesla CEO Elon Musk and Apple co-founder Steve Wozniak have also voiced concerns, joining more than 2,600 tech leaders and researchers in calling for a six-month pause on the training of advanced AI systems so that the risks they pose can be evaluated.

SSI’s launch announcement also mentioned ongoing recruitment efforts for engineers and researchers, indicating the company’s commitment to building a robust team focused on AI safety.
