Chronicles

The story behind the story


OpenAI says GPT-4 poses “at most” a slight risk of helping people create biological threats, per the company's early tests to evaluate “catastrophic” LLM risks

Michael Nuñez / VentureBeat: OpenAI study reveals surprising role of AI in future biological threat creation
Tom Carter / Business Insider: ChatGPT probably won't help create biological weapons, OpenAI says
Vish Gain / Silicon Republic: GPT-4 “mildly useful” in creating bioweapons, says OpenAI

X:
Tolga Bilge / @tolgabilge_: It's good to see that this is something that is being worked on. I am unsure to what extent I agree with design principle 3: “The risk from AI should be measured in terms of improvement over existing resources.” In the specific case of open-source, where models can be run on an...
Trevor Blackwell / @tlbtlbtlb: I'm glad this experiment was done. Seems like a good test for new models before releasing them.
Steven Adler / @sjgadler: I'm proud of the investments we've made here: Developing a careful, rigorous protocol that will continue to be useful into the future.
Nathan Benaich / @nathanbenaich: or rather, the result is not statistically significant and can be due to noise. doesn't say what these error bars represent either
Aleksander Madry / @aleks_madry: People worry about AI boosting biological threat creation, but how would we know how real this risk is? Here is what we have done in this context so far:
Jack / @jack24dd30: tyler cowen said that he's far more worried about LLMs simply helping terrorist groups run more efficiently and have better organization lol
Tejal Patwardhan / @tejalpatwardhan: latest from preparedness @ openai: gpt4 at most mildly helps with biothreat creation. method: get bio PhDs in a secure monitored facility. half try biothreat creation w/ (experimental) unsafe gpt4. other half can only use the internet. so far, gpt4 ≈ internet... but we'll...
Greg Brockman / @gdb: Evaluations for LLM-assisted biological threat creation. Current models not very capable at this task, but we want to be ahead of the curve for assessing this and other potential future risk areas:
@openai: We are building an early warning system for LLMs being capable of assisting in biological threat creation. Current models turn out to be, at most, mildly useful for this kind of misuse, and we will continue evolving our evaluation blueprint for the future. https://openai.com/...

LinkedIn:
Aleksander Madry: As part of our Preparedness effort, we are sharing some of our early work assessing LLMs and biological threat creation risk. …

Forums:
Hacker News: Building an early warning system for LLM-aided biological threat creation
