He tried to oust OpenAI’s CEO. Now, he’s starting a ‘safe’ rival
New York CNN —
The OpenAI co-founder who left the high-flying artificial intelligence startup last month has announced his next venture: a company dedicated to building safe, powerful artificial intelligence that could become a rival to his old employer.
Ilya Sutskever announced plans for the new company, aptly named Safe Superintelligence Inc., in a post on X Wednesday.
“SSI is our mission, our name, and our entire product roadmap, because it is our sole focus. Our team, investors, and business model are all aligned to achieve SSI,” a statement posted to the company’s website reads. “We plan to advance capabilities as fast as possible while making sure our safety always remains ahead. This way, we can scale in peace.”
The announcement comes amid growing concerns in the tech world and beyond that AI may be advancing more quickly than research into using the technology safely and responsibly, and amid a dearth of regulation that has left tech companies largely free to set safety guidelines for themselves.
Sutskever is considered one of the early pioneers of the AI revolution. As a student, he worked in a machine learning lab under Geoffrey Hinton, known as the “Godfather of AI,” and the two helped create an AI startup that was later acquired by Google. Sutskever then worked on Google’s AI research team before helping to found what would become the maker of ChatGPT.
But things got complicated for Sutskever at OpenAI when he was involved in an effort to oust CEO Sam Altman last year, which resulted in the dramatic leadership shuffle that saw Altman fired, then rehired and the company’s board overhauled, all within about a week.
CNN contributor Kara Swisher previously reported that Sutskever had been concerned that Altman was pushing AI technology “too far, too fast.” But days after Altman’s ouster, Sutskever had a change of heart: He signed an employee letter calling for the entire board to resign and for Altman to return, and later said he “deeply” regretted his role in the dustup.
Last month, Sutskever said he was leaving his role as chief scientist at OpenAI — one of a string of departures from the company around the same time — to work on “a project that is very personally meaningful to me.”
It is not clear how Safe Superintelligence plans to turn a “safer” AI model into revenue, or what form its technology will take as products. It is also not clear exactly what the new company means by “safety” in the context of highly powerful artificial intelligence.
But the company’s launch reflects a belief among some in the tech world that artificial intelligence systems as smart as humans, if not smarter, are not a far-off science fiction dream but an impending reality.
“By safe, we mean safe like nuclear safety as opposed to safe as in ‘trust and safety,’” Sutskever told Bloomberg in an interview published Tuesday.
Some employees who recently departed OpenAI had criticized it for prioritizing commercial growth over investing in long-term safety. One of those former employees, Jan Leike, last month raised alarms about OpenAI’s decision to dissolve its “superalignment” team, which was dedicated to training AI systems to operate within human needs and priorities. (OpenAI, for its part, said it was spreading superalignment team members throughout the company to better achieve safety objectives.)
The launch announcement from Safe Superintelligence suggests that the company wants to take a different approach: “Our singular focus means no distraction by management overhead or product cycles, and our business model means safety, security, and progress are all insulated from short-term commercial pressures.”
Joining Sutskever in launching the new company are Daniel Levy, who had worked at OpenAI for the past two years, and Daniel Gross, an investor who previously worked as a partner at the startup accelerator Y Combinator and on machine learning efforts at Apple. The company says it will have offices in Palo Alto, California, and Tel Aviv, Israel.