OpenAI co-founder Ilya Sutskever announces new safe superintelligence company

2024-06-20

Ilya Sutskever, co-founder and former chief scientist of OpenAI, firmly believes that truly advanced AI systems are on the horizon. A little over a month after leaving OpenAI, he has announced his new company, Safe Superintelligence Inc. (SSI). The company was registered in Delaware on June 6th and plans to open offices in Palo Alto and Tel Aviv.

SSI describes itself as the world's first "straight-shot superintelligence lab," focused on building a safe and powerful AI system with no near-term plans to sell AI products or services. In an interview, Sutskever explained what makes SSI unique: "What makes this company special is that its first product will be safe superintelligence, and it will not do anything else until then. It will be fully insulated from outside pressures: from having to deal with a large and complicated product, and from being stuck in a competitive rat race."

Sutskever co-founded SSI with two others: Daniel Gross, a prominent investor who previously led AI efforts at Apple, and Daniel Levy, an AI researcher who worked alongside Sutskever at OpenAI.

From Maginative's perspective, SSI's decision to prioritize safety over short-term commercial interests is noteworthy, and it sets the company apart from AI labs that juggle multiple projects and products at once. But the economic realities of the AI industry, including ever-growing computational demands and the need for substantial funding, make SSI a gamble for investors. While Sutskever and his team are confident they can raise the funds needed to launch, it remains to be seen whether that enthusiasm can be sustained.

The core of the endeavor, creating superintelligence, will also face intense scrutiny and skepticism. Today's AI is far from artificial general intelligence (AGI), let alone superintelligence, and the concept of "safety" in AI remains vague and contested. Nevertheless, the founding team's background and expertise should not be underestimated. Sutskever, Gross, and Levy's decision to embark on this ambitious project reflects their firm belief in the potential of superintelligence and in the importance of developing it safely.