
‘What did he see???’: Sudden departure of top OpenAI employees sparks speculation over ‘superintelligence’ dangers

Some people think the departing researchers might have seen something they didn’t like.


Marlon Ettinger


Ilya Sutskever, an OpenAI co-founder and one of the world’s leading researchers in deep learning and neural networks, announced on Wednesday that he’d be leaving the company to pursue a “personally meaningful” project. Sutskever’s announcement was followed hours later by the departure of Jan Leike, a machine learning researcher at the company who had been working with Sutskever on safety guardrails for superintelligence.


The twin departures, which came just one day after OpenAI staged a demonstration of its soon-to-be-released GPT-4o model and its lifelike, responsive voice capabilities, prompted speculation online about what path OpenAI might be taking that could have prompted the researchers to leave.

“The company’s trajectory has been nothing short of miraculous, and I’m confident that OpenAI will build AGI that is both safe and beneficial under the leadership of [Sam Altman, Greg Brockman, Mira Murati] and now, under the excellent research leadership of [Jakub Pachocki],” Sutskever announced on X.


Since 2023, Sutskever has co-led the company’s “Superalignment” project with Leike. The project’s aim is to program so-called superintelligent AI systems with a baked-in ethical framework, and to make sure the systems don’t circumvent these ethical guardrails to accomplish their ultimate goals.

“I resigned,” Leike posted laconically Wednesday night.

“Keep fighting the good fight 🫡,” replied Logan Kilpatrick, a senior product manager at Google who led developer relations at OpenAI until March of this year.

https://twitter.com/OfficialLoganK/status/1790604996641472987

Leike’s tweet and Kilpatrick’s response set off a string of speculation about exactly what the two were referring to. Neither Leike nor Kilpatrick responded to questions about why they left the company or what they meant by the posts.

“Okay, so you guys definitely know something, at least give us a hint?” asked @sameed_ahmad12.

“Yes there is something happing people don’t just leave a company on track to build the most important tech in history. Yet a lot of big names left,” replied @Shawnryan96.


After OpenAI CEO Sam Altman was pushed out by the board in still-murky circumstances last year, online posters speculated that Sutskever may have been behind the move, having seen something that made him uncomfortable given his focus on alignment.

Sutskever has never spoken publicly about the reasons for the ouster, and Altman was brought back on board after a public employee revolt. But Sutskever’s latest move set off the speculation again.

“would you be open to publishing in more detail why?” @SteveMoraco asked Leike. “I don’t think the secrecy is helping the world at this point[.] is it the difficulty of analyzing end to end multimodal model’s safety? The risk of making them widely available? Would love the official take from someone who actually knows.”


“what did he see???” asked @grepmeded more concisely in a tweet quoting a post from Kilpatrick in March.

On Reddit’s r/Singularity subreddit, which discusses what the world might look like if and when artificial intelligence outpaces human intelligence, posters wondered whether the resignations had anything to do with moves by OpenAI that seemed at odds with its mission of developing safe artificial intelligence, citing policy decisions like the company’s softened ban on military uses of its technology and its lobbying push against open source.

“I personally think it was all the above and had nothing to do with ‘AGI achieved internally,’” posted u/Mirrorslash. “I believe they used this narrative to hype up AI/ themselves and create fear to create momentum for regulatory capture. What Microsoft and OpenAI are trying will actively harm everyone to benefit from AGI in the future in my opinion.”


“It seems the top people involved in AI safety is quitting or perhaps being squeezed out. This is not good news,” added u/Analog_AI.

But beyond debates about ethics or the possibility of malevolent superintelligence, other posters back on X speculated that the motivations for the move were more pedestrian.

“They have equity, so a smoother exit at a minimally alarming time is financially in their interest,” posted @JonathanLigmas.


