VOICE ARCHIVE

Trevor Bingham

@22trevorbingham
1 post
2025-01-29
Dario Amodei and Sam Altman and the other people helping to build AGI are all engaged in a very dangerous activity. It is exactly like your neighbor deciding to conduct some potentially lucrative chemistry experiments in their house in an effort to create a new class of very
The Guardian

An ex-OpenAI safety researcher says he's "terrified" by the pace of AI development, warning that labs racing toward AGI may cut corners on alignment and push one another to speed up.

and my top reasons to not panic just yet.  —  In the end, though, I really do think it could give AI labs license to invest less in safety www.platformer.news/deepseek-ai- ...