VOICE ARCHIVE

James Rosen-Birch

@provisionalidea
26 posts
2026-03-04
imagine swooping in to steal a competitor's contract when they're taking a moral stand and being threatened by the gov't, completely fumbling the PR and having it blow up in your face, and then having the gall to try to position it as a moral, principled choice
2026-03-04 View on X
CNBC

All-hands: Sam Altman says OpenAI does not “get to make operational decisions” regarding how the US DOD uses its tech, and the Pentagon respects its expertise

Ashley Capoot /CNBC:

Reuters

An OpenAI spokesperson says Sam Altman misspoke in saying OpenAI was looking to deploy on all NATO classified networks, adding he meant “unclassified networks”

Hyunsu Yim /Reuters:

Wall Street Journal

All-hands: Sam Altman defends OpenAI's US DOD deal, calls the backlash “painful”, and says OpenAI is looking at a deal to deploy on all NATO classified networks

Startup's deal to do classified work with Defense Department drew backlash from staff and other AI researchers

2026-03-03
New scoop out of Bloomberg underscores how flexible Anthropic was willing to be about autonomous weapons — as long as there was a human somewhere in the loop, they were fine. Which may indicate just how extreme DoW is in their demands on this portfolio. [image]
2026-03-03 View on X
Lawfare

US Defense Secretary Pete Hegseth's and Trump's actions against Anthropic have serious legal issues, and its designation exceeds what the statute authorizes

This is designation as political theater: a show of force that will not stick.  —  alanrozenshtein.com

one of the things that's always struck me about Sam is the way he uses legal documents as forms of marketing rather than the foundational bones of governance and conflict resolution. we saw it with OAI's corporate structure (which blew up, briefly sacked him, and pissed off
2026-03-03 View on X
Financial Times

Sam Altman says OpenAI amended its DOD contract to ensure “the AI system shall not be intentionally used for domestic surveillance of US persons and nationals”

Sam Altman says company is working with defence department on provisions covering mass surveillance

Bloomberg

Sources: amid negotiations with the DOD, Anthropic submitted a bid to compete in a $100M DOD contest to develop voice-controlled, autonomous drone swarming tech

Anthropic PBC was among the artificial intelligence companies that submitted a proposal earlier this year to compete …

2026-03-02
In the farce of a thread, Sam finally and most clearly admits the only bounds on DoW are whatever they deem legal (which anyone who read the contract text already knew).
2026-03-02 View on X
@sama

[Thread] In an AMA, Sam Altman says DOD blacklisting Anthropic sets an “extremely scary precedent”, OpenAI rushed its deal to “de-escalate things”, and more

including policy and legal matters, but also many technical layers. Sam Altman /@sama: @viralmuskmelon This is a complicated one we struggled with a lot, and until recently it was ea...

The Atlantic

A source describes the failed Pentagon-Anthropic talks: through the end, the Pentagon wanted to use Anthropic's AI to analyze bulk data collected from Americans

Right up until the moment that Pete Hegseth moved to terminate the government's relationship with the AI company Anthropic …

2026-03-01
OpenAI

OpenAI says its DOD agreement upholds its redlines and “has more guardrails than any previous agreement for classified AI deployments, including Anthropic's”

We think our agreement has more guardrails than any previous agreement for classified AI deployments, including Anthropic's.

@sama

[Thread] In an AMA, Sam Altman says DOD blacklisting Anthropic sets an “extremely scary precedent”, OpenAI rushed its deal to “de-escalate things”, and more

I'd like to answer questions about our work with the DoW and our thinking over the past few days. Please AMA.

Wall Street Journal

Sources: the Pentagon used Claude in its major air attack in Iran, hours after Trump declared that the federal government will end its use of Anthropic's tools

Within hours of declaring that the federal government will end its use of artificial-intelligence tools made by tech company Anthropic …


2026-02-28
man, the past week made openai's brand go from “the biggest AI lab” to the “mass surveillance and school shootings company”
2026-02-28 View on X
@sama

Sam Altman says OpenAI reached an agreement with the DOD to deploy its models in DOD's classified network and asks DOD to extend those terms to all AI companies

Tonight, we reached an agreement with the Department of War to deploy our models in their classified network. In all of our interactions, the DoW displayed a deep respect for safet...

Two possibilities: Either (1) OpenAI is *not* actually taking the same line as Anthropic and Sam is misleading the company, or (2) the USG was deliberately interfering in and sabotaging Anthropic's IPO in OpenAI's interest (multiple Trump backers have major OpenAI positions)
2026-02-28 View on X

Fortune

Source: Sam Altman told employees the DOD is willing to let OpenAI build its own “safety stack” and won't force OpenAI to comply if its model refuses a task

Sam Altman told OpenAI employees at an all-hands meeting on Friday afternoon that a potential agreement is emerging …

2026-01-12
I think people are reading this the wrong way. It's not a foregone conclusion so much as a plea for the state to intervene and address the issue.
2026-01-12 View on X
Bloomberg

Chinese AI executives say China is unlikely to eclipse the US in the AI race anytime soon, citing limited resources and US chip export curbs as key constraints

Some of China's most prominent figures in generative artificial intelligence warned that the Asian nation is unlikely to eclipse the US in the global AI race anytime soon.

2025-10-16
if you ever wondered where the blanket “member of technical staff” moniker came from, it was to prevent things like this
2025-10-16 View on X
Bloomberg

Sources: Apple executive Ke Yang, who was appointed just weeks ago as the head of the AKI team developing AI-driven web search for Siri, is leaving for Meta

The Apple Inc. executive leading an effort to develop AI-driven web search is stepping down, marking the latest in a string …

2024-05-19
“AGI was an effective marketing ploy, but now that we're a real company with an actual product, we have to shift focus to what safety means for everyone else in the world: quality control. We hope those of you who believed the hype won't turn on us as we make this change.”
2024-05-19 View on X
@sama

Sam Altman says he is embarrassed that there was a provision about potential equity cancellation in exit docs, and OpenAI never took back anyone's vested equity

in regards to recent stuff about how openai handles equity: we have never clawed back anyone's vested equity, nor will we do that if people do not sign a separation agreement (or d...

@gdb

Sam Altman and Greg Brockman respond to Jan Leike, say they've raised awareness of the risks and opportunities of AGI, will keep doing safety research, and more

We're really grateful to Jan for everything he's done for OpenAI, and we know he'll continue to contribute to the mission from outside. In light of the questions his departure has ...

2024-04-04
Reminder that even pieces published by +972 must be approved by the Israeli military censor, and must be read with the same critical eye. (quote from 2016) [image]
2024-04-04 View on X
The Guardian

Sources: Israel's bombing campaign in Gaza used Lavender, an AI system that identified 37,000 potential human targets based on their apparent links to Hamas

Israeli intelligence sources reveal use of ‘Lavender’ system in Gaza war and claim permission given to kill civilians in pursuit of low-ranking militants