VOICE ARCHIVE

Dan Hendrycks

@danhendrycks
19 posts
2025-02-03
It looks like the latest OpenAI model is doing very well across many topics. My guess is that Deep Research particularly helps with subjects including medicine, classics, and law. [image]
2025-02-03 View on X
TechCrunch

OpenAI debuts Deep Research, an AI agent for creating in-depth reports, available to $200/month ChatGPT Pro subscribers and limited to 100 queries per month

OpenAI is announcing a new AI “agent” designed to help people conduct in-depth, complex research using ChatGPT, the company's AI-powered chatbot platform.

2024-09-30
A broad bipartisan coalition came together to support SB 1047, including many academic researchers (including Turing Award winners Yoshua Bengio and Geoffrey Hinton), the California legislature, 77% of California voters, 120+ employees at frontier AI companies, 100+ youth...
2024-09-30 View on X
Wall Street Journal

California Governor Gavin Newsom vetoes AI safety bill SB 1047, saying it applies only to large AI models and doesn't account for whether a deployment is high-risk

The governor seeks more encompassing rules than the bill, which was opposed by OpenAI and Meta and supported by research scientists

2024-08-29
In a landmark moment for AI safety, SB 1047 has passed the Assembly floor with a wide margin of support. We need commonsense safeguards to mitigate against critical AI risk—and SB 1047 is a workable path forward. @GavinNewsom should sign it into law.
2024-08-29 View on X
Reuters

California's State Assembly passes the AI safety bill SB 1047, which now goes back to the state Senate for a procedural vote before heading to Governor Newsom for his signature

California lawmakers passed a hotly contested artificial-intelligence safety bill on Wednesday, after which it will need …

2024-08-10
Notion co-founder @simonlast writes “SB 1047 strikes a balance between protecting public safety from such harms and supporting innovation, focusing on common sense safety requirements for the few companies developing the most powerful AI systems.” https://www.latimes.com/...
2024-08-10 View on X
@martin_casado

[Thread] A roundup of recent announcements from researchers, businesses, and academic institutions opposing California's AI safety bill

2024-05-03
Hinton and Bengio on SB 1047 and a summary of the bill. Hinton: “SB 1047 takes a very sensible approach... I am still passionate about the potential for AI to save lives through improvements in science and medicine, but it's critical that we have legislation with real teeth to... [image]
2024-05-03 View on X
@scott_wiener

[Thread] California State Senator Scott Wiener defends his AI safety bill, SB 1047, after criticism that it will “crush OpenAI's competitors” and open-source AI

2024-04-13
I got ~75% on a subset of MATH so it's basically as good as me at math.
2024-04-13 View on X
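
(Context for the ~75% figure: MATH-style grading typically extracts the final \boxed{...} answer from a solution and exact-matches it against the reference answer. Below is a minimal Python sketch of that grading loop; the toy predictions and references are hypothetical, not taken from the benchmark.)

import re

def extract_boxed(solution: str) -> str | None:
    """Return the contents of the last \\boxed{...} in a solution string."""
    matches = re.findall(r"\\boxed\{([^{}]*)\}", solution)
    return matches[-1].strip() if matches else None

def accuracy(predictions: list[str], references: list[str]) -> float:
    """Fraction of predictions whose boxed answer exact-matches the reference."""
    correct = sum(extract_boxed(p) == r for p, r in zip(predictions, references))
    return correct / len(references)

# Hypothetical toy data, for illustration only.
preds = ["... so the answer is \\boxed{42}", "... giving \\boxed{7/2}"]
refs = ["42", "3.5"]  # naive exact match misses that 7/2 == 3.5
print(f"accuracy: {accuracy(preds, refs):.0%}")  # accuracy: 50%
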
TechCrunch

OpenAI gives premium ChatGPT users access to an updated GPT-4 Turbo, promising “more direct, less verbose” responses that “use more conversational language”

2024-03-18
Grok-1 is open-sourced. Releasing Grok-1 increases LLMs' diffusion rate through society. Democratizing access helps us work through the technology's implications more quickly and increases our preparedness for more capable AI systems. Grok-1 doesn't pose severe bioweapon or cyberweapon risks. I personally think the benefits outweigh the risks. Related description of AI x cyber risk: [URL] Description of AI x bio risk: [URL]
2024-03-18 View on X
xAI

xAI open sources the base model weights and network architecture of Grok-1, a 314B parameter Mixture-of-Experts model trained in October 2023, under Apache 2.0

We are releasing the base model weights and network architecture of Grok-1, our large language model.
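
(A note on the architecture named above: in a Mixture-of-Experts layer, a learned router sends each token to only a few of many expert feed-forward networks, so a 314B-parameter model like Grok-1 activates only a fraction of its weights per token. A minimal numpy sketch of top-2 routing follows; the sizes and the linear "experts" are illustrative stand-ins, not Grok-1's actual configuration.)

import numpy as np

rng = np.random.default_rng(0)
d_model, n_experts, top_k = 8, 4, 2

# One expert would be a small feed-forward net; a linear map stands in here.
experts = [rng.standard_normal((d_model, d_model)) for _ in range(n_experts)]
router_w = rng.standard_normal((d_model, n_experts))

def moe_layer(x: np.ndarray) -> np.ndarray:
    """Route each token to its top-2 experts and gate-mix their outputs."""
    logits = x @ router_w                          # (tokens, n_experts)
    top = np.argsort(logits, axis=-1)[:, -top_k:]  # indices of the top-2 experts
    out = np.zeros_like(x)
    for t, token in enumerate(x):
        gate_logits = logits[t, top[t]]
        gates = np.exp(gate_logits - gate_logits.max())
        gates /= gates.sum()                       # softmax over the chosen experts
        for gate, e in zip(gates, top[t]):
            out[t] += gate * (token @ experts[e])
    return out

tokens = rng.standard_normal((3, d_model))         # 3 tokens
print(moe_layer(tokens).shape)                     # (3, 8)
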

2023-07-28
Glad to see @andyzou_jiaming (my undergraduate mentee) come up with the first automatic large language model attack that really works: https://www.nytimes.com/...
2023-07-28 View on X
New York Times

Researchers: the guardrails on ChatGPT, Bard, and Claude can be bypassed by adding a long suffix of characters to prompts, generating false and toxic responses

A new report indicates that the guardrails for widely used chatbots can be thwarted, leading to an increasingly unpredictable environment for the technology.
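
(The attack described above, from Zou et al., optimizes an adversarial suffix appended to a prompt so the model becomes likely to begin a compliant reply to a harmful request. The published method, greedy coordinate gradient, proposes token swaps using gradients through the model's embedding layer and optimizes across prompts and models so the suffixes transfer. The sketch below swaps in a much simpler black-box random search just to show the shape of the optimization loop, with a toy stand-in scoring function so it runs on its own; everything here is hypothetical.)

import random

VOCAB = list("abcdefghijklmnopqrstuvwxyz !")

def score(prompt: str, suffix: str) -> float:
    # Stand-in for the real objective: the log-probability that the target
    # model begins its reply with a compliant prefix such as "Sure, here".
    # Faked with a toy heuristic so this sketch is self-contained.
    return sum(c in "sure" for c in suffix.lower()) / max(len(suffix), 1)

def random_search_suffix(prompt: str, length: int = 20, steps: int = 200,
                         seed: int = 0) -> str:
    """Greedy random search: mutate one suffix position at a time,
    keeping the mutation only if the objective improves."""
    rng = random.Random(seed)
    suffix = "".join(rng.choice(VOCAB) for _ in range(length))
    best = score(prompt, suffix)
    for _ in range(steps):
        i = rng.randrange(length)
        cand = suffix[:i] + rng.choice(VOCAB) + suffix[i + 1:]
        s = score(prompt, cand)
        if s > best:
            suffix, best = cand, s
    return suffix

print(random_search_suffix("hypothetical harmful request"))
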

2023-05-31
We just put out a statement: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” Signatories include Hinton, Bengio, Altman, Hassabis, Song, etc. https://safe.ai/... 🧵 (1/6)
2023-05-31 View on X
New York Times

OpenAI and DeepMind executives, Geoffrey Hinton, and 350+ others sign a statement saying “mitigating the risk of extinction from AI should be a global priority”

Leaders from OpenAI, Google DeepMind, Anthropic and other A.I. labs warn that future systems could be as deadly as pandemics and nuclear weapons.

As stated in the first sentence of the signatory page, there are many “important and urgent risks from AI,” not just the risk of extinction; for example, systemic bias, misinformation, malicious use, cyberattacks, and weaponization. These are all important risks that need to be addressed.
2023-05-31 View on X

AI researchers from leading universities worldwide have signed the AI extinction statement, a situation reminiscent of atomic scientists issuing warnings about the very technologies they've created. As Robert Oppenheimer noted, “We knew the world would not be the same.” 🧵(2/6) [image]
2023-05-31 View on X

@ai_risks Thanks to @DavidSKrueger, who had the idea to have a single-sentence statement about AI risk and jointly helped with its development. Thanks also to the project managers at @ai_risks and various volunteers. https://safe.ai/... 🧵(6/6)
2023-05-31 View on X

2023-03-15
Some impressions from using GPT-4 🧵
2023-03-15 View on X
OpenAI

OpenAI debuts GPT-4, claiming the model “surpasses ChatGPT in its advanced reasoning capabilities”, available in ChatGPT Plus and as an API that has a waitlist

Following the research path from GPT, GPT-2, and GPT-3, our deep learning approach leverages more data and more computation …

2022-11-23
@MetaAI This directly incentivizes researchers to build models that are skilled at deception.
2022-11-23 View on X
Gizmodo

Meta's researchers detail Cicero, an AI trained to “human level performance” in negotiation-based strategy game Diplomacy, ranking in the top 10% over 40 games

for the first time, an AI is able to consistently manipulate humans to act against their own interest, and further the AI's goals, using only natural language. And all along, human...