VOICE ARCHIVE

Jeff Sebo

@jeffrsebo
9 posts
2026-03-03
I appreciate this statement of values! But noticeably missing is coordination with other companies around shared standards, to prevent a situation where one company is under fire for maintaining red lines and another undermines their position by swooping in and accepting less.
Financial Times

Sam Altman says OpenAI amended its DOD contract to ensure “the AI system shall not be intentionally used for domestic surveillance of US persons and nationals”

Sam Altman says company is working with defence department on provisions covering mass surveillance

I appreciate this statement of values! But noticeably missing is coordination with other companies around shared standards, to prevent a situation where one company is under fire for maintaining red lines and another undermines their position by swooping in and accepting less.
2026-03-03
@sama

Sam Altman says that “the democratic process must stay in control, and we must democratize AI” and that no private company should decide the fate of the world

(I also would like to share this, which I wrote after thinking a little more.)  There is a lot we will talk about in the coming days …

2026-02-28
OpenAI needs to reject this contract if Anthropic is declared a supply chain risk, even if the terms offered to OpenAI are better than the ones offered to Anthropic. The Pentagon's behavior is totally unacceptable, and the industry needs to say so with one voice.
@sama

Sam Altman says OpenAI reached an agreement with the DOD to deploy its models in DOD's classified network and asks DOD to extend those terms to all AI companies

Tonight, we reached an agreement with the Department of War to deploy our models in their classified network. In all of our interactions, the DoW displayed a deep respect for safet...

OpenAI needs to reject this contract if Anthropic is declared a supply chain risk, even if the terms offered to OpenAI are better than the ones offered to Anthropic. The Pentagon's behavior is totally unacceptable, and the industry needs to say so with one voice.
2026-02-28
Fortune

Source: Sam Altman told employees the DOD is willing to let OpenAI build its own “safety stack” and won't force OpenAI to comply if its model refuses a task

Sam Altman told OpenAI employees at an all-hands meeting on Friday afternoon that a potential agreement is emerging …

2026-02-26
Gotta ask what the point of the US winning the AI race is if the government is going to coerce companies into propping up an automated police state either way.
Axios

Sources: the DOD asked Boeing and Lockheed Martin to assess their reliance on Claude, a first step toward blacklisting Anthropic; Lockheed confirms DOD contact

The Pentagon asked two major defense contractors on Wednesday to provide an assessment of their reliance on Anthropic's AI model …

2026-02-24
1/ Interesting @AnthropicAI post on LLM personas. The post is mostly about generalization and interpretability, but a short section on AI welfare caught my eye. The key idea: Even if the LLMs lack consciousness, they might model personas as though they have it. 🧵👇
Anthropic

Anthropic introduces “persona selection model”, a theory to explain AI's human-like behavior, and details how AI personas form in pre-training and post-training

AI assistants like Claude can seem surprisingly human. They express joy after solving tricky coding tasks.

2025-12-12
Time to reignite the coastal wars! Let's make the RAISE Act and not SB 53 the new national standard. California can adopt New York's policy instead of vice versa.
Transformer

Sources: New York's governor proposes rewriting the RAISE Act, the AI bill that passed NY legislature in June, with text copied verbatim from California's SB 53

New York Governor Kathy Hochul is proposing a dramatic rewrite of the RAISE Act, the AI transparency and safety bill …

2025-09-08
Kudos to Anthropic for backing SB 53. This bill is a bare minimum foundation for effective AI governance. AI companies may claim to prioritize safety, but anything less than full support for this bill would expose those claims as empty rhetoric.
TechCrunch

Anthropic becomes the first major AI company to back SB 53, a California bill that requires large AI companies to disclose safety testing protocols

Maxwell Zeff / TechCrunch:

2023-01-08
Today, @davidchalmers42 is delivering the presidential address on AI minds at the APA, and the @nytimes published a story on the topic with a quote by @rgblong! Great to see this topic getting more attention among specialist and general audiences alike :) https://www.nytimes.com/...
New York Times

A look at Columbia University's Creative Machines Lab, which is exploring questions around artificial consciousness and the possibility of self-aware robots

The pursuit of artificial awareness may be humankind's next moonshot.  But it comes with a slurry of difficult questions.