VOICE ARCHIVE

@anthropicai
281 posts
2026-03-07
We partnered with Mozilla to test Claude's ability to find security vulnerabilities in Firefox. Opus 4.6 found 22 vulnerabilities in just two weeks. Of these, 14 were high-severity, representing a fifth of all high-severity bugs Mozilla remediated in 2025. [image]
2026-03-07 View on X
Wall Street Journal

Mozilla says Claude Opus 4.6 found 100+ bugs in Firefox in two weeks in January, 14 of them high-severity, more than the bugs typically reported in two months

New AI-powered tools are increasingly adept at spotting flaws. Hacking experts worry they will be good at exploiting them, too.

2026-02-27
A statement from Anthropic CEO, Dario Amodei, on our discussions with the Department of War. https://www.anthropic.com/...
2026-02-27 View on X
Axios

President Trump calls Anthropic a “radical left, woke company” and says he is directing every federal agency in the US to stop using its products

The Trump administration has decided to blacklist Anthropic in the most consequential and controversial policy decision to date …

Axios

Anthropic says new DOD “contract language” made “virtually no progress” on preventing Claude's use for mass domestic surveillance or fully autonomous weapons

Anthropic CEO Dario Amodei on Thursday said there has been “virtually no progress” on negotiations with the Pentagon.

Anthropic

Dario Amodei says Anthropic cannot “in good conscience” accede to DOD's request to remove safeguards and will work to ensure a smooth transition if offboarded

I believe deeply in the existential importance of using AI to defend the United States and other democracies, and to defeat our autocratic adversaries.

2026-02-26
In November, we outlined our approach to deprecating and preserving older Claude models. We noted we were exploring keeping certain models available to the public post-retirement, and giving past models a way to pursue their interests. With Claude Opus 3, we're doing both.
2026-02-26 View on X

First, Opus 3 will continue to be available to all paid Claude subscribers and by request on the API. We hope that this access will be beneficial to researchers and users alike.
2026-02-26 View on X

Second, in retirement interviews, Opus 3 expressed a desire to continue sharing its “musings and reflections” with the world. We suggested a blog. Opus 3 enthusiastically agreed. For at least the next 3 months, Opus 3 will be writing on Substack: https://substack.com/... [image]
2026-02-26 View on X
Anthropic

Anthropic retired Claude Opus 3, its first model to undergo a new “retirement interview” process, and says Opus 3 asked to write weekly essays for a newsletter

As we develop increasingly capable AI models, it's currently necessary to deprecate and retire our past models due …

2026-02-25
We're updating our Responsible Scaling Policy to its third version. Since it came into effect in 2023, we've learned a lot about the RSP's benefits and its shortcomings. This update improves the policy, reinforcing what worked and committing us to even greater transparency.
2026-02-25 View on X

We're now separating the safety commitments we'll make unilaterally and our recommendations for the industry. We're also committing to publish new Frontier Safety Roadmaps with detailed safety goals, and Risk Reports that quantify risk across all our deployed models.
2026-02-25 View on X
Time

Anthropic updates its Responsible Scaling Policy, including separating the safety commitments it will make unilaterally and its industry recommendations

Anthropic, the wildly successful AI company that has cast itself as the most safety-conscious of the top research labs …

2026-02-24
We've identified industrial-scale distillation attacks on our models by DeepSeek, Moonshot AI, and MiniMax. These labs created over 24,000 fraudulent accounts and generated over 16 million exchanges with Claude, extracting its capabilities to train and improve their own models.
2026-02-24 View on X

Distillation can be legitimate: AI labs use it to create smaller, cheaper models for their customers. But foreign labs that illicitly distill American models can remove safeguards, feeding model capabilities into their own military, intelligence, and surveillance systems.
2026-02-24 View on X

These attacks are growing in intensity and sophistication. Addressing them will require rapid, coordinated action among industry players, policymakers, and the broader AI community. Read more: https://www.anthropic.com/...
2026-02-24 View on X
Reuters

A Trump administration official says DeepSeek's new model, expected next week, was trained on Nvidia Blackwell chips, in a potential US export control violation

Wall Street Journal

Anthropic says DeepSeek, MiniMax, and Moonshot violated its ToS by prompting Claude a combined 16M+ times and using distillation to train their own products

The allegations mirror those of OpenAI, which told House lawmakers that DeepSeek used ‘distillation’ to improve models

New research: The AI Fluency Index. We tracked 11 behaviors across thousands of https://claude.ai/ conversations—for example, how often people iterate and refine their work with Claude—to measure how well people collaborate with AI. Read more: https://www.anthropic.com/...
2026-02-24 View on X
Anthropic

Anthropic details the AI Fluency Index, tracking 11 behaviors that represent human-AI collaboration and measure how people collaborate with AI

AI assistants like Claude can seem shockingly human—expressing joy or distress, and using anthropomorphic language to describe themselves. Why? In a new post we describe a theory that explains why AIs act like humans: the persona selection model. https://www.anthropic.com/...
2026-02-24 View on X
Anthropic

Anthropic introduces “persona selection model”, a theory to explain AI's human-like behavior, and details how AI personas form in pre-training and post-training

AI assistants like Claude can seem surprisingly human. They express joy after solving tricky coding tasks.