More OpenAI drama: Exec quits over concerns about focus on profit over safety
New York (CNN) —
A departing OpenAI executive focused on safety is raising concerns about the company on his way out the door.
Jan Leike, who resigned from his role leading the company’s “superalignment” team this week, said in a thread on X Friday that he disagreed with OpenAI leadership’s “core priorities” and had “reached a breaking point.”
“Alignment” or “superalignment” are terms used in the artificial intelligence space to refer to work on training AI systems to operate within human needs and priorities. Leike joined OpenAI in 2021, and last summer the company announced that he would co-lead the Superalignment team focused on “scientific and technical breakthroughs to steer and control AI systems much smarter than us.”
However, Leike said Friday that in recent months, the team had been under-resourced and “sailing against the wind.”
“Sometimes we were struggling for compute and it was getting harder and harder to get this crucial research done,” he said on X, adding that Thursday was his last day at the startup. “Building smarter-than-human machines is an inherently dangerous endeavor … But over the past years, safety culture and processes have taken a backseat to shiny products.”
[Photo: Ilya Sutskever, OpenAI co-founder and chief scientist, speaks at Tel Aviv University in Tel Aviv on June 5, 2023. Jack Guez/AFP/Getty Images]
Leike’s exit, which he announced Wednesday, comes amid a broader leadership shuffle at OpenAI. His resignation followed Tuesday’s announcement by OpenAI co-founder and chief scientist Ilya Sutskever, who also helped lead the superalignment team, that he would leave the company.
Sutskever said he was leaving to work on a “project that is very personally meaningful to me.” But his exit was notable given the central role he played in the dramatic firing — and return — of OpenAI CEO Sam Altman last year, when he voted to remove Altman as chief executive and chairman of the board.
CNN contributor Kara Swisher previously reported that Sutskever had been concerned that Altman was pushing AI technology “too far, too fast.” But days after Altman’s ouster, Sutskever had a change of heart: He signed an employee letter calling for the entire board to resign and for Altman to return.
Still, questions about how — and how quickly — to develop and publicly release AI technology may have continued to cause tension within the company in the months after Altman regained control of the firm. The executive exits come after OpenAI announced this week that it would make its most powerful AI model yet, GPT-4o, available for free to the public through ChatGPT. The technology will make ChatGPT more like a digital personal assistant, capable of real-time spoken conversations.
“I believe much more of our bandwidth should be spent getting ready for the next generations of models, on security, monitoring, preparedness, safety, adversarial robustness, (super)alignment, confidentiality, societal impact, and related topics,” Leike wrote in his X thread on Friday. “These problems are quite hard to get right, and I am concerned we aren’t on a trajectory to get there.”
Asked for comment on Leike’s claims, OpenAI directed CNN to an X post from Altman saying the company is committed to safety.
“i’m super appreciative of @janleike’s contributions to openai’s alignment research and safety culture, and very sad to see him leave,” Altman said. “he’s right we have a lot more to do; we are committed to doing it. i’ll have a longer post in the next couple of days.”