VOICE ARCHIVE

Peter Wildeford

@peterwildeford
61 posts
2026-03-11
Facebook was already a social media website for AI bots so this tracks
2026-03-11 View on X
Axios

Meta acquires AI agent social network Moltbook for an undisclosed sum; its creators, Matt Schlicht and Ben Parr, will join Meta Superintelligence Labs

Meta has acquired Moltbook, a viral social network designed for AI agents, Axios has learned. … - Meta did not disclose Moltbook's purchase price.

I can't wait to put prompt injections in my appropriations report language
2026-03-11 View on X
New York Times

Leaked memo: a top Senate administrator gave aides the green light to use ChatGPT, Gemini, and Copilot for official Senate work, including preparing briefings

2026-03-10
full text of the letter Anthropic received when designated a supply chain risk 👀
2026-03-10 View on X
Reuters

Anthropic sues to block the DOD from designating it a supply chain risk, saying the designation is unlawful and violates its free speech and due process rights

Anthropic on Monday filed a lawsuit to block the Pentagon from placing it on a national security blacklist …

full text of the letter Anthropic received when designated a supply chain risk 👀
2026-03-10 View on X
Wired

Google DeepMind Chief Scientist Jeff Dean and 30+ employees from OpenAI and Google file an amicus brief supporting Anthropic in its legal fight with the US DOD

Google DeepMind chief scientist Jeff Dean is among the AI researchers and engineers rushing to Anthropic's defense.

2026-03-02
OpenAI is trying to claim simultaneously that (a) their contract with the Pentagon allows for “all lawful purposes” and (b) also that their red lines are fully protected. The way OpenAI bridges this is by saying the protections live in this “deployment architecture and safety
2026-03-02 View on X
@sama

[Thread] In an AMA, Sam Altman says DOD blacklisting Anthropic sets an “extremely scary precedent”, OpenAI rushed its deal to “de-escalate things”, and more

including policy and legal matters, but also many technical layers. Sam Altman / @sama: @viralmuskmelon This is a complicated one we struggled with a lot, and until recently it was ea...

I think it's important to circle back to Sam Altman here. About 20 hours ago people, including me, were applauding his moral clarity. But that moral clarity lasted barely half a day. OpenAI is now agreeing to be used for domestic surveillance and for lethal autonomous weapons,
2026-03-02 View on X
The Verge

Sources: OpenAI agreed to follow US laws that have allowed for mass surveillance in the past, and the DOD didn't budge from its demands over bulk analyzing data

On Friday evening, amidst fallout from a standoff between the Department of Defense and Anthropic, OpenAI CEO Sam Altman announced …

OpenAI is trying to claim simultaneously that (a) their contract with the Pentagon allows for “all lawful purposes” and (b) also that their red lines are fully protected. The way OpenAI bridges this is by saying the protections live in this “deployment architecture and safety
2026-03-02 View on X
The Atlantic

A source describes the failed Pentagon-Anthropic talks: through the end, the Pentagon wanted to use Anthropic's AI to analyze bulk data collected from Americans

Right up until the moment that Pete Hegseth moved to terminate the government's relationship with the AI company Anthropic …

2026-03-01
@sama So I'm confused - maybe you can help. OpenAI is trying to claim simultaneously that (a) the contract allows “all lawful purposes” and (b) also that your red lines are fully protected. The way you bridge this is by saying the protections live in this “deployment architecture and
2026-03-01 View on X
@sama

[Thread] In an AMA, Sam Altman says DOD blacklisting Anthropic sets an “extremely scary precedent”, OpenAI rushed its deal to “de-escalate things”, and more

I'd like to answer questions about our work with the DoW and our thinking over the past few days. Please AMA.

@sama So I'm confused - maybe you can help. OpenAI is trying to claim simultaneously that (a) the contract allows “all lawful purposes” and (b) also that your red lines are fully protected. The way you bridge this is by saying the protections live in this “deployment architecture and
2026-03-01 View on X
OpenAI

OpenAI says its DOD agreement upholds its redlines and “has more guardrails than any previous agreement for classified AI deployments, including Anthropic's”

We think our agreement has more guardrails than any previous agreement for classified AI deployments, including Anthropic's.

OpenAI is trying to claim simultaneously that (a) their contract with the Pentagon allows for “all lawful purposes” and (b) also that their red lines are fully protected. The way OpenAI bridges this is by saying the protections live in this “deployment architecture and safety
2026-03-01 View on X
Wall Street Journal

Sources: the Pentagon used Claude in its major air attack in Iran, hours after Trump declared that the federal government will end its use of Anthropic's tools

Within hours of declaring that the federal government will end its use of artificial-intelligence tools made by tech company Anthropic …

OpenAI is trying to claim simultaneously that (a) their contract with the Pentagon allows for “all lawful purposes” and (b) also that their red lines are fully protected. The way OpenAI bridges this is by saying the protections live in this “deployment architecture and safety
2026-03-01 View on X
The Atlantic

Source describes the failed Pentagon-Anthropic talks: through the end, the Pentagon wanted to use Anthropic's AI to analyze bulk data collected about Americans

Right up until the moment that Pete Hegseth moved to terminate the government's relationship with the AI company Anthropic …

2026-02-28
who would have thought that the AI that once inexplicably became MechaHitler for a week might not be the best AI to trust with classified national security work? [image]
2026-02-28 View on X
Wall Street Journal

Sources: multiple federal agencies raised concerns about Grok's safety and reliability in recent months, before DOD approved Grok for use in classified settings

Warnings about xAI's safety and reliability preceded Pentagon decision to approve Grok for use in classified settings.

I think it's important to circle back to Sam Altman here. About 20 hours ago people, including me, were applauding his moral clarity. But that moral clarity lasted barely half a day. OpenAI is now agreeing to be used for domestic surveillance and for lethal autonomous weapons,
2026-02-28 View on X
@sama

Sam Altman says OpenAI reached an agreement with the DOD to deploy its models in DOD's classified network and asks DOD to extend those terms to all AI companies

Tonight, we reached an agreement with the Department of War to deploy our models in their classified network. In all of our interactions, the DoW displayed a deep respect for safet...