AI Safety (Company)

Articles: 5 mentions (coverage stable)
Velocity: 0.0% growth rate
Acceleration: 0.000 velocity change
Sources: 3 publications

Coverage Timeline

2024-05-29
TechCrunch (23 related)

Anthropic hires former OpenAI safety lead Jan Leike to head up a new Superalignment team; a source says Leike will report to Chief Science Officer Jared Kaplan

Here's What We Know Wendy Lee / Los Angeles Times : OpenAI forms safety and security committee as concerns mount about AI Rounak Jain / Benzinga : OpenAI Former ‘Superalignment’ Lead Joins Jeff Bezos-...

2024-01-15
TechCrunch (13 related)

Anthropic researchers: AI models can be trained to deceive and the most commonly used AI safety techniques had little to no effect on the deceptive behaviors

Abraham Samma / @abesamma@toolsforthought.social : Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training  —  This is some sci-fi stuff right here (even if unsurprising)...

2023-12-19
Bloomberg (11 related)

OpenAI says its board can hold back the release of an AI model even if OpenAI's leadership says it's safe, and announces a new internal safety advisory group

The study of frontier AI risks has fallen far short of what is possible and where we need to be. Ina Fried / Axios : OpenAI touts ‘scientific approach’ to measure catastrophic risk Matthias Bastian / ...

2023-07-06
Emily M. Bender (5 related)

Framing AI debates as a schism between people worried about AI going rogue and those illuminating actual harms is ahistorical and obscures important research

In two recent conversations with very thoughtful journalists, I was asked about the apparent ‘schism’ between those making a lot … Bluesky: @abeba.bsky.social , @mmitchell.bsky.social , and @emilymben...

2023-03-08
Bloomberg

How Silicon Valley became obsessed with effective altruism, championed by SBF before he dismissed it as a dodge, and doomsday scenarios like killer rogue AI

Sonia Joseph was 14 years old when she first read Harry Potter and the Methods of Rationality, a mega-popular piece of fan fiction … Tweets: @chafkin , @ellenhuet , @business , @can , @crypto , @sonia...
