VOICE ARCHIVE

Ilya Sutskever

@ilyasut
20 posts
2026-02-28
It's extremely good that Anthropic has not backed down, and it's significant that OpenAI has taken a similar stance. In the future, there will be much more challenging situations of this nature, and it will be critical for the relevant leaders to rise to the occasion, for fierce competitors to put their differences aside. Good to see that happen today.
2026-02-28 View on X
Anthropic

Anthropic says it'll challenge “any supply chain risk designation in court” and that the designation would only affect contractors' use of Claude on DOD work

Earlier today, Secretary of War Pete Hegseth shared on X that he is directing the Department of War to designate Anthropic a supply chain risk.

@sama

Sam Altman says OpenAI reached an agreement with the DOD to deploy its models in DOD's classified network and asks DOD to extend those terms to all AI companies

Tonight, we reached an agreement with the Department of War to deploy our models in their classified network. In all of our interactions, the DoW displayed a deep respect for safet...

@secwar

Defense Secretary Pete Hegseth directs the DOD to designate Anthropic as a supply chain risk, barring military contractors from doing business with the company

This week, Anthropic delivered a master class in arrogance and betrayal as well as a textbook case of how not to do business with the United States Government or the Pentagon. Our ...

Axios

Sam Altman says OpenAI shares Anthropic's red lines with respect to AI use by the military, which are “an issue for the whole industry”

OpenAI CEO Sam Altman wrote in a memo to staff that he will draw the same red lines that sparked a high-stakes fight between rival Anthropic …

2026-02-27
Anthropic

Dario Amodei says Anthropic cannot “in good conscience” accede to DOD's request to remove safeguards and will work to ensure a smooth transition if offboarded

I believe deeply in the existential importance of using AI to defend the United States and other democracies, and to defeat our autocratic adversaries.

Axios

President Trump calls Anthropic a “radical left, woke company” and says he is directing every federal agency in the US to stop using its products

The Trump administration has decided to blacklist Anthropic in the most consequential and controversial policy decision to date …

2025-07-04
I sent the following message to our team and investors: — As you know, Daniel Gross's time with us has been winding down, and as of June 29 he is officially no longer a part of SSI.  We are grateful for his early contributions to the company and wish him well in his next endeavor.  I am now formally CEO of SSI, and Daniel Levy is President.  The technical team continues to report to me.  You might have heard rumors of companies looking to acquire us.  We are flattered by their attention but are focused on seeing our work through.
2025-07-04 View on X
CNBC

SSI co-founder Ilya Sutskever becomes the CEO and co-founder Daniel Levy becomes the president; SSI's technical team will continue to report to Sutskever

OpenAI co-founder Ilya Sutskever said he will assume the CEO role at Safe Superintelligence, the artificial intelligence startup he launched last year.

Bloomberg

Daniel Gross leaves Safe Superintelligence, the AI startup he co-founded with Ilya Sutskever, to join Meta's new superintelligence lab and work on AI products

Daniel Gross, the former chief executive officer and co-founder of artificial intelligence startup Safe Superintelligence Inc. …

2024-10-09
Congratulations to @geoffreyhinton for winning the Nobel Prize in physics!!
2024-10-09 View on X
Bloomberg

The Nobel in Chemistry goes to David Baker “for computational protein design” and DeepMind's Demis Hassabis and John Jumper “for protein structure prediction”

- Demis Hassabis, John Jumper share half the $1.1 million award  — Remainder goes to David Baker for building new proteins

Bloomberg

The Royal Swedish Academy of Sciences awards the Nobel Prize in Physics to John Hopfield and Geoffrey Hinton for “foundational discoveries” in machine learning

had not seen them in any of the sweepstakes ahead of the announcement. But if everything is either physics or stamp collecting we just promoted machine learning to “physics”. (Love...

2024-09-04
Mountain: identified. Time to climb
2024-09-04 View on X
Reuters

Safe Superintelligence, co-founded by ex-OpenAI chief scientist Ilya Sutskever, raised $1B from a16z, Sequoia, DST, and others, sources say at a $5B valuation

Safe Superintelligence (SSI), newly co-founded by OpenAI's former chief scientist Ilya Sutskever, has raised $1 billion in cash …

2024-06-20
We will pursue safe superintelligence in a straight shot, with one focus, one goal, and one product. We will do it through revolutionary breakthroughs produced by a small cracked team. Join us: https://ssi.inc/
2024-06-20 View on X
Safe Superintelligence Inc.

Ilya Sutskever, Daniel Gross, and Daniel Levy announce Safe Superintelligence, a US startup “with one goal and one product: a safe superintelligence”

Superintelligence is within reach.  Building safe superintelligence (SSI) is the most important technical problem of our time.

I am starting a new company:
2024-06-20 View on X

2024-05-15
After almost a decade, I have made the decision to leave OpenAI.  The company's trajectory has been nothing short of miraculous, and I'm confident that OpenAI will build AGI that is both safe and beneficial under the leadership of @sama , @gdb , @miramurati and now, under the excellent research leadership of @merettm .  It was an honor and a privilege to have worked together, and I will miss everyone dearly.  So long, and thanks for everything.  I am excited for what comes next — a project that is very personally meaningful to me about which I will share details in due time.
2024-05-15 View on X
CNBC

Ilya Sutskever says he will leave OpenAI to work on a “personally meaningful” project; Director of Research Jakub Pachocki will become OpenAI's chief scientist

OpenAI co-founder Ilya Sutskever said Tuesday that he's leaving the Microsoft-backed startup.

2023-11-23
There exists no sentence in any language that conveys how happy I am:
2023-11-23 View on X
The Information

Source: a breakthrough spearheaded by OpenAI chief scientist Ilya Sutskever enabled a model that could solve basic math problems, stoking excitement and concern

One day before he was fired by OpenAI's board last week, Sam Altman alluded to a recent technical advance the company …

Axios

OpenAI reaches a deal in principle for Sam Altman to return as CEO, with an initial board of Bret Taylor as chair, alongside Larry Summers and Adam D'Angelo


2023-11-21
OpenAI's Ilya Sutskever says “I deeply regret my participation in the board's actions”, “I never intended to harm OpenAI”, and will try “to reunite the company”
2023-11-21 View on X
Bloomberg

Ilya Sutskever and over 700 out of ~770 OpenAI staffers sign a letter saying they may quit and join Sam Altman unless the board resigns and reinstates Altman

- Majority of OpenAI employees sign letter seeking new board  — Board member Ilya Sutskever is among the signatories

CNBC

Satya Nadella says “it's very clear that something has to change around the governance” of OpenAI no matter where Sam Altman ends up

These are fundamentally incompatible and it was bound to lead to hard tradeoffs eventually. …

2023-05-19
I love this app's speech recognition. As someone with an accent that confuses my phone's speech recognition, it is a real joy to speak to the app and to be fully understood, every time. https://twitter.com/...
2023-05-19 View on X
Ars Technica

OpenAI launches a free ChatGPT app for iOS in the US, offering the web version's features plus history sync across devices and speech input via OpenAI's Whisper

App brings popular AI assistant to an official mobile client app for the first time.  —  On Thursday, OpenAI released …

2021-01-06
An NN takes a list of category names, and outputs (in a zero-shot manner) a visual classifier. It beats RN50 on ImageNet zero-shot, while being far more robust to unusual images: https://openai.com/... https://twitter.com/...
2021-01-06 View on X
MIT Technology Review

OpenAI introduces two new GPT-3 models: CLIP, which classifies images into categories from arbitrary text, and DALL·E, which can generate images from text

With GPT-3, OpenAI showed that a single deep-learning model could be trained to use language in a variety of ways simply by throwing it vast amounts of text.
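The mechanism the tweet describes — turning a plain list of category names into a zero-shot visual classifier — can be sketched in a few lines. This is an illustrative sketch of the similarity step only, not OpenAI's implementation: the random embeddings below stand in for the outputs of CLIP's real text and image encoders, and the prompt strings and temperature value are assumptions for the demo.

```python
import numpy as np

def l2_normalize(x, axis=-1):
    # Project embeddings onto the unit sphere so dot products become cosine similarities.
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def zero_shot_probs(image_emb, text_embs, temperature=0.01):
    # Cosine similarity between one image embedding and each class-name embedding,
    # scaled by a temperature and converted to a probability distribution via softmax.
    image_emb = l2_normalize(image_emb)
    text_embs = l2_normalize(text_embs)
    logits = text_embs @ image_emb / temperature
    logits -= logits.max()  # subtract max for numerical stability
    probs = np.exp(logits)
    return probs / probs.sum()

# Toy stand-ins for CLIP's encoders: fixed random vectors, not learned features.
rng = np.random.default_rng(0)
class_names = ["a photo of a dog", "a photo of a cat", "a photo of a car"]
text_embs = rng.normal(size=(len(class_names), 512))
# Fake an "image" embedding that lies near the cat prompt.
image_emb = text_embs[1] + 0.1 * rng.normal(size=512)

probs = zero_shot_probs(image_emb, text_embs)
print(class_names[int(np.argmax(probs))])  # the nearest class name wins
```

Because the classifier is just "nearest class-name embedding," swapping in a new list of category names re-targets the model with no retraining — which is what makes the approach zero-shot.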