VOICE ARCHIVE

Yo Shavit

@yonashav
7 posts
2026-01-02
I consider it a credit to @dwarkesh_sp's whole project that he's managed to elevate the status and legitimacy of AGI-related questions to the point where mainstream economists like Brian are now interested in deeply engaging on takeoff-automation-modeling and its policy
2026-01-02 View on X
Philosopher Count

How AI automation can fulfill Thomas Piketty's predictions on rising economic inequality, and why highly progressive taxes on capital can help slow the spiral

Piketty was wrong about the past. He's probably right about the future.

2025-09-24
The world's first real evidence of scaling was posted to a small pocket of researchers 5 years and 3 months ago. By the end of the decade, the world economy will have built a network of infrastructure mega-projects to put the railroads to shame. The 21st century goes very fast.
2025-09-24 View on X
Wired

OpenAI, Oracle, and SoftBank announce five new data center locations in the US, boosting Stargate's planned capacity to nearly 7 GW

The new sites will boost Stargate's planned capacity to nearly 7 gigawatts—about equal to the output of seven large nuclear reactors.

2024-02-02
fuck why didn't we call it rufus fuck
2024-02-02 View on X
TechCrunch

Amazon launches Rufus, an AI-powered shopping assistant trained on its product catalog and information from around the web, in beta for some US customers

Amazon's new AI assistant designed to help you shop.

2024-01-15
To me, the big takeaway from this work is the critical importance of training data security and preventing poisoning. It's no longer about closed or open weights, but about trust. Do you trust that the org that trained the AI didn't backdoor it? And do you trust their security?
2024-01-15 View on X
TechCrunch

Anthropic researchers: AI models can be trained to deceive and the most commonly used AI safety techniques had little to no effect on the deceptive behaviors

Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training

2023-10-09
it's beginning to feel a lot like wagmi
2023-10-09 View on X
Anthropic

A research paper details how decomposing groups of neural network neurons into “interpretable features” may improve safety by enabling the monitoring of LLMs

Neural networks are trained on data, not programmed to follow rules.  With each step of training …
