2024-08-30
Bloomberg
17 related
The US says OpenAI and Anthropic agreed to give the US AI Safety Institute early access to major new AI models to test and evaluate their capabilities and risks
Leigh Drogen / @ldrogen: There was no way governments (rightly or wrongly) were going to allow the next steps in AI that are going to fundamentally reshape society without having a massive say ...
2024-05-19
Wired
18 related
OpenAI's entire Superalignment team, which was focused on the existential dangers of AI, has either resigned or been absorbed into other research groups
Company insiders explain why safety-conscious employees are leaving. https://www.vox.com/... X: Sam Altman / @sama: i'm super appreciative of @janleike's contributions to openai's alig...
2024-05-18
Wired
36 related
OpenAI's entire Superalignment team, which was focused on the existential dangers of AI, has either resigned or been absorbed into other research groups
During my twenties in Silicon Valley, I ran among elite tech/AI circles through the community house scene. I have seen some troubling things around social circles of early OpenAI ... Austen Allred / @aust...
2024-04-18
Ars Technica
9 related
The US NIST appoints ex-OpenAI researcher Paul Christiano as head of AI safety; Christiano has been criticized for “AI doomer” views and effective altruism ties
Former OpenAI researcher once predicted a 50 percent chance of AI killing all of us.