VOICE ARCHIVE

Jonathan Mannhart

@jmannhart
7 posts
2024-05-23
New article by @KelseyTuoc on the OpenAI situation! This... really does not look good for OpenAI & Sam Altman in particular. https://www.vox.com/...
2024-05-23 View on X
Vox

Leaked OpenAI documents show aggressive tactics toward ex-staff, and contradict leadership's comments on being unaware of a provision about equity cancellation

On Friday, Vox reported that employees at tech giant OpenAI who wanted to leave the company were confronted with expansive and highly restrictive exit documents.

2024-05-19
Greg Brockman, in response to heavy criticism and safety concerns (without actually addressing any specifics): “There's no proven playbook for how to navigate the path to AGI.” No. But we damn well know with very high probability what is *not* part of that playbook.
2024-05-19 View on X
@gdb

Sam Altman and Greg Brockman respond to Jan Leike, say they've raised awareness of the risks and opportunities of AGI, will keep doing safety research, and more

We're really grateful to Jan for everything he's done for OpenAI, and we know he'll continue to contribute to the mission from outside. In light of the questions his departure has ...

2024-05-19 View on X
@sama

Sam Altman says he is embarrassed that there was a provision about potential equity cancellation in exit docs, and OpenAI never took back anyone's vested equity

in regards to recent stuff about how openai handles equity: we have never clawed back anyone's vested equity, nor will we do that if people do not sign a separation agreement (or d...

OpenAI leadership right now [image]
2024-05-19 View on X
Wired

OpenAI's entire Superalignment team, which was focused on the existential dangers of AI, has either resigned or been absorbed into other research groups

Company insiders explain why safety-conscious employees are leaving. https://www.vox.com/...

2024-05-18
OpenAI leadership right now [image]
2024-05-18 View on X
@janleike

[Thread] Superalignment team co-lead explains why he has left, says OpenAI's safety culture and processes took a backseat to shiny products over the past years

Yesterday was my last day as head of alignment, superalignment lead, and executive @OpenAI.

2024-05-18 View on X
Wired

OpenAI's entire Superalignment team, which was focused on the existential dangers of AI, has either resigned or been absorbed into other research groups

During my twenties in Silicon Valley, I ran among elite tech/AI circles through the community house scene. I have seen some troubling things around social circles of early OpenAI A...

This is quite insane. OpenAI's NDA that current whistleblowers have to grapple with. From @KelseyTuoc: [image]
2024-05-18 View on X
Vox

OpenAI has an unusual, extremely restrictive off-boarding agreement with a lifelong nondisparagement commitment; those who don't sign it lose all vested equity

Why is OpenAI's superalignment team imploding?  —  Editor's note, May 17, 2024, 11:20 pm ET: This story has been updated …