Ilya Sutskever, a co-founder of OpenAI, has launched a new company, Safe Superintelligence Inc. (SSI), just one month after formally departing from OpenAI.
Sutskever, who served as OpenAI’s chief scientist for many years, established SSI with former Y Combinator partner Daniel Gross and former OpenAI engineer Daniel Levy.
At OpenAI, Sutskever played a central role in the company’s efforts to improve AI safety in anticipation of “superintelligent” AI systems, working closely with Jan Leike, who co-led OpenAI’s Superalignment team. However, both Sutskever and Leike left OpenAI in May after a significant disagreement with the company’s leadership over its approach to AI safety. Leike now leads a team at the rival AI firm Anthropic.
Sutskever has long focused on the complex challenges of AI safety. In a 2023 blog post co-authored with Leike, he predicted that AI with intelligence surpassing human capabilities could emerge within the next decade. He emphasized that such AI might not necessarily be benevolent, highlighting the urgent need for research into methods to control and restrain it.