OpenAI’s Former Chief Scientist Unveils New Safety-Focused Company

Ilya Sutskever, co-founder and former chief scientist of OpenAI, announced a new AI company on Wednesday: Safe Superintelligence Inc. (SSI), a startup dedicated to a single goal of building a safe, powerful AI system.

In his announcement, Sutskever described SSI as a company that approaches safety and capabilities in tandem, allowing it to advance its AI system rapidly while keeping safety the priority. He pointed to the external pressures faced by AI teams at companies like OpenAI, Google, and Microsoft, emphasizing that SSI’s “singular focus” allows it to avoid being “distracted by management overhead or product cycles.”

“Our business model means safety, security, and progress are all insulated from short-term commercial pressures,” the announcement states. “This way, we can scale in peace.” Joining Sutskever in this venture are co-founders Daniel Gross, a former AI lead at Apple, and Daniel Levy, who previously worked as a member of the technical staff at OpenAI.

Last year, Sutskever played a key role in the effort to remove OpenAI CEO Sam Altman. He left OpenAI in May, hinting at a new project. Following Sutskever’s departure, AI researcher Jan Leike also resigned from OpenAI, saying that safety processes had “taken a backseat to shiny products.” Similarly, Gretchen Krueger, a policy researcher at OpenAI, expressed safety concerns upon her departure.

While OpenAI continues to forge partnerships with Apple and Microsoft, SSI seems to be charting a different path. In an interview with Bloomberg, Sutskever stated that SSI’s first product will be safe superintelligence and that the company “will not do anything else” until this goal is achieved.