Safe Superintelligence (SSI), a new artificial intelligence startup founded by Ilya Sutskever, the former chief scientist of OpenAI, has raised $1 billion in funding. The company plans to use this money to develop AI systems that are not only powerful but also safe and aligned with human values.
SSI, which currently has just 10 employees, will use the funds to acquire computing power and recruit top talent, building out its team of researchers and engineers across offices in Palo Alto, California, and Tel Aviv, Israel.
The size of the round signals that investors remain confident in AI's potential and willing to back startups built around exceptional talent. SSI's backers include leading venture capital firms Andreessen Horowitz, Sequoia Capital, DST Global, and SV Angel.
AI safety has become a critical topic, driven by fears that advanced AI systems could act against human interests or even pose existential risks. SSI's stated mission of building superintelligence safely places it squarely within that industry-wide concern.
The company's CEO, Daniel Gross, emphasized the importance of having investors who understand and support SSI's mission. "Our goal is to focus on research and development for a few years before bringing our product to market," he said.
Unlike OpenAI, which adopted a hybrid corporate structure partly with AI safety in mind, SSI is organized as a traditional for-profit company. In hiring, it says it prioritizes character and a genuine interest in the work over credentials.
SSI's funding round is part of a broader wave of investment in AI, particularly in areas requiring substantial computational infrastructure. As the technology evolves, investors are increasingly backing companies that aim to build systems that are safe and aligned with human values, not merely powerful.