Mozilla says Claude Opus 4.6 found 100+ bugs in Firefox in two weeks in January, 14 of them high-severity, more than are typically reported in two months
New AI-powered tools are increasingly adept at spotting flaws. Hacking experts worry they will be good at exploiting them, too.
President Trump calls Anthropic a “radical left, woke company” and says he is directing every federal agency in the US to stop using its products
The Trump administration has decided to blacklist Anthropic in the most consequential and controversial policy decision to date …
Anthropic says new DOD “contract language” made “virtually no progress” on preventing Claude's use for mass domestic surveillance or fully autonomous weapons
Anthropic CEO Dario Amodei on Thursday said there has been “virtually no progress” on negotiations with the Pentagon.
Dario Amodei says Anthropic cannot “in good conscience” accede to DOD's request to remove safeguards and will work to ensure a smooth transition if offboarded
I believe deeply in the existential importance of using AI to defend the United States and other democracies, and to defeat our autocratic adversaries.
Anthropic retired Claude Opus 3, its first model to undergo a new “retirement interview” process, and says Opus 3 asked to write weekly essays for a newsletter
As we develop increasingly capable AI models, it's currently necessary to deprecate and retire our past models due …
Anthropic updates its Responsible Scaling Policy, including separating the safety commitments it will make unilaterally and its industry recommendations
Anthropic, the wildly successful AI company that has cast itself as the most safety-conscious of the top research labs …
A Trump administration official says DeepSeek's new model, expected next week, was trained on Nvidia Blackwell chips, in a potential US export control violation
Anthropic details the AI Fluency Index, which tracks 11 behaviors that measure how effectively people collaborate with AI
Anthropic says DeepSeek, MiniMax, and Moonshot violated its ToS by prompting Claude a combined 16M+ times and using distillation to train their own products
The allegations mirror those of OpenAI, which told House lawmakers that DeepSeek used ‘distillation’ to improve models
Anthropic introduces a “persona selection model”, a theory explaining AI's human-like behavior, and details how AI personas form in pre-training and post-training
AI assistants like Claude can seem surprisingly human. They express joy after solving tricky coding tasks.