OpenAI's entire Superalignment team, which was focused on the existential dangers of AI, has either resigned or been absorbed into other research groups
Company insiders explain why safety-conscious employees are leaving. https://www.vox.com/... #ai #openai

X:

Sam Altman / @sama: i'm super appreciative of @janleike's contributions to openai's alignment research and safety culture, and very sad to see him leave. he's right we have a lot more to do; we are committed to doing it. i'll have a longer post in the next couple of days. 🧡

@josephjacks_: IMHO, Anthropic and OpenAI are playing a losing game trying to continue scaling compute by hemorrhaging equity capital losses (massively underwater CAC) as their core competitive strategy. Let me unpack this view: Hyperscalers (specifically MSFT, META and GOOG) have orders of

Sam Altman / @samaltsman: Well, what a shock. Jan and Ilya left OpenAI because they think I'm not prioritizing safety enough. How original. Now I have to write some long, bs post about how much I care. But honestly, who needs safety when you can accelerate AI development at breakneck speeds and hope for [video]

Rohit / @krishnanrohit: I am truly surprised at the number of people who think Sam's reaction to being fired without an actual cause should have been to turn the other cheek and say sorry

Rohit / @krishnanrohit: OpenAI should just add a disparagement clause to the leaver documentation. You can't get your money unless you say something bad about them.

Liron Shapira / @liron: NOW can we stop the ad hominem attacks on the “type of people” warning us about AI extinction risk? Is it enough that 4 of TOP LEADERS hand-selected by Sam Altman to build OpenAI have pulled the alarm?
* Dario Amodei
* Paul Christiano
* Ilya Sutskever
* Jan Leike

Jan Leike / @janleike: But over the past years, safety culture and processes have taken a backseat to shiny products.

@iamgingertrash: The Mandate of Heaven has shifted OpenAI are likely to never solve ASI It's gonna be Deepmind, or Opensource (meta + community)

Jan Leike / @janleike: Stepping away from this job has been one of the hardest things I have ever done, because we urgently need to figure out how to steer and control AI systems much smarter than us.

Sonia Joseph / @soniajoseph_: To the journalists contacting me about the AGI consensual non-consensual (cnc) sex parties— During my twenties in Silicon Valley, I ran among elite tech/AI circles through the community house scene. I have seen some troubling things around social circles of early OpenAI

Gary Marcus / @garymarcus: The jig is almost up.
• Safety was a lie
• Openness was a lie
• Returns are diminishing
• Nothing new and mind blowing is being released anymore
• Competitors are catching up
• Wars on price are becoming the norm; margins are shrinking
• Employees are leaving
• Employees

Flo Crivello / @altimor: It feels like OpenAI bait-and-switched some of the world's best AI researchers into joining them, by first professing deep alignment with their safety concerns, and then shedding these promises the moment they met commercial success.

Jan Leike / @janleike: I love my team. I'm so grateful for the many amazing people I got to work with, both inside and outside of the superalignment team. OpenAI has so much exceptionally smart, kind, and effective talent.

Jonathan Mannhart / @jmannhart: OpenAI leadership right now [image]

Noah Shachtman / @noahshachtman: Combine this with Google's announcement this week and hooooo boy

Noah Shachtman / @noahshachtman: What a mask-off moment for the AI community. https://www.wired.com/...
Austen Allred / @austen: I, for one, am shocked that the head of “let's slow it down” butted heads with leadership on prioritization

Jan Leike / @janleike: Over the past few months my team has been sailing against the wind. Sometimes we were struggling for compute and it was getting harder and harder to get this crucial research done.

Jan Leike / @janleike: I joined because I thought OpenAI would be the best place in the world to do this research. However, I have been disagreeing with OpenAI leadership about the company's core priorities for quite some time, until we finally reached a breaking point.

Sarah / @littieramblings: OpenAI quietly shuts down the effort they established less than a year ago to ensure that their own technology doesn't literally kill everyone on earth, and prioritises developing said technology faster [image]

Vik / @vikhyatk: openai is nothing without its drama 💙

Steven Sinofsky / @stevesi: Doubt this will slow down the rhetoric from the company regarding the need for government intervention in the market to keep all the largest and most well-funded players from developing what they are intentionally developing. [image]

Bindu Reddy / @bindureddy: The Ilya-Sam Drama Is Fundamentally About OpenAI's Mission. As I suspected, OAI didn't have the compute to spare for “super alignment.” Jan Leike, who resigned from Open AI, just confirmed it! The whole Sam-Ilya fight is one of focus - focus on AGI and superalignment OR focus on scaling the ChatGPT service.

Timothy B. Lee / @binarybits: I'm not worried about existential risk from AI and didn't understand what the superalignment team was doing so I wouldn't say I'm upset about this. But given that @sama purports to be concerned about X-risk, it would be nice to hear from him about it. https://www.wired.com/...

Timothy B. Lee / @binarybits: Like has he decided that AI isn't dangerous? Does he still think it was dangerous but the superalignment team had the wrong approach? Did he think it was being badly managed? If he is still worried is he going to take the resources from the old team into some new effort?

@gfodor: I saw this disaster coming. The AI doomers have, ironically, greatly increased ex-risk due to their histrionics and general behavior, which led to a reactionary response and ultimately an attempted coup at OpenAI, with predictable results.

@gfodor: The worst case scenario has happened: the AI doomer contingent has captured the minds of the regulators and none of the practitioners. We needed the opposite to happen. https://www.wired.com/...

@gfodor: Specifically we needed practitioners like those of the superalignment team to work the problem. Now we will have know-nothings trying to use state backed violence to solve an engineering problem. They will fail.

Jan Leike / @janleike: It's been such a wild journey over the past ~3 years. My team launched the first ever RLHF LLM with InstructGPT, published the first scalable oversight on LLMs, pioneered automated interpretability and weak-to-strong generalization. More exciting stuff is coming out soon.
Keshav / @keshavchan: @janleike @OpenAI everyday i wake up and think what did ilya see [image]

Jan Leike / @janleike: [Thread] Superalignment team co-lead explains why he has left, says OpenAI's safety culture and processes took a backseat to shiny products over the past years

Forums:

r/neoliberal: OpenAI just dissolved its team dedicated to managing AI risks, like the possibility of it ‘going rogue’

r/singularity: [Altman] i'm super appreciative of Jan Leike's contributions to openai's alignment research and safety culture, and very sad to see him leave. he's right …