Sources: NIST didn't publish an AI safety report and several other AI documents near the end of Biden's term for fear of clashing with the Trump administration
The National Institute of Standards and Technology conducted a groundbreaking study on frontier models just before Donald Trump's second term …
A look at US-backed NVD, as its parent org NIST scrambles to hire contractors to help clear a backlog of 25K+ vulnerabilities, ~10x the previous high in 2017
www.technologyreview.com/2025/07/11/1...
NIST's new directive to AI Safety Institute partners scrubs mentions of “AI safety” and “AI fairness” and prioritizes “reducing ideological bias” in models
A directive from the National Institute of Standards and Technology eliminates mention of “AI safety” and “AI fairness.”
The US says OpenAI and Anthropic agreed to give the US AI Safety Institute early access to major new AI models to test and evaluate their capabilities and risks
Leigh Drogen / @ldrogen: There was no way governments (rightly or wrongly) were going to allow the next steps in AI that are going to fundamentally reshape society without having a massive say ...
The US NIST re-releases Dioptra, a modular, open-source web-based tool first released in July 2022 for benchmarking, researching, and testing risks in AI models
Dioptra is a software test platform for assessing the trustworthy characteristics …
NIST: Department of Commerce Announces New Guidance, Tools 270 Days Following President Biden's Executive Order on ...
The US NIST re-releases Dioptra, an open-source web-based tool first unveiled in 2022 for benchmarking, testing, and assessing risks in AI models
The National Institute of Standards and Technology (NIST), the U.S. Commerce Department agency that develops and tests tech for the U.S. government …
NIST launches a new program to assess generative AI technologies, with plans to release benchmarks, help create “content authenticity” detection tech, and more
Kyle Wiggers / TechCrunch:
The US NIST appoints ex-OpenAI researcher Paul Christiano as head of AI safety; Christiano has been criticized for “AI doomer” views and effective altruism ties
Former OpenAI researcher once predicted a 50 percent chance of AI killing all of us.
Sources: some NIST staff have threatened to resign over the expected appointment of an “effective altruist” AI researcher to the US AI Safety Institute
Sharon Goldman / VentureBeat:
NIST staff, US officials, congressional aides, and tech executives detail a massive resources gap between NIST, tasked with keeping AI safe, and tech companies
Funding challenges at the National Institute of Standards and Technology could jeopardize the Biden administration's AI work
X: @fbajak, @vivekchil, @tonyromm, @correadan, @willoremus, @kylebrussel...