VOICE ARCHIVE

@flixrisk
4 posts
2024-01-06
📢📊 Important new survey results from @AIImpacts, surveying 2,700+ AI researchers. Some key points: ⚠️ The median respondent believes there's a 5% or greater chance of AI causing human extinction; one-third to half of respondents believe there's over a 10% chance. 😨 A majority...
The Decoder

A survey of 2,778 AI researchers: 38.4% support faster development and 34.7% support slower development, AI development's pace will keep accelerating, and more

The “2023 Expert Survey on Progress in AI” shows that the scientific community has no consensus on the risks and opportunities of AI …

2023-10-19
“Just as we've had public discussions about the danger of nuclear weapons and climate change, the public needs to come to grips that there is yet another danger that has a similar magnitude of potential risks.” -Bengio on AI risks, in @BulletinAtomic ⬇️ https://thebulletin.org/...
Bulletin of the Atomic Scientists

Q&A with Yoshua Bengio on nuance in headlines about AI, taboos among AI researchers, and why top researchers may disagree about AI's potential risks to humanity

Susan D'Agostino / Bulletin of the Atomic Scientists. X: @bulletinatomic, @susan_dagostino, @aisafetyfirst, and @flixrisk. LinkedIn: Prakash Hebalkar, Michael Robbins, and St...

2023-03-29
As @GaryMarcus and many others have correctly pointed out, you don't have to worry about superintelligence to be concerned about the many other harms that large models pose, including impersonation and disinformation (4/8). https://garymarcus.substack.com/ ...
The Road to AI We Can Trust

Discounting AI's short-term risks, from phishing to fraud to propaganda, because artificial general intelligence is not here yet leaves society ill-prepared

2023-03-29
As @GaryMarcus and many others have correctly pointed out, you don't have to worry about superintelligence to be concerned about the many other harms that large models pose, including impersonation and disinformation (4/8). https://garymarcus.substack.com/ ...
Future of Life Institute

Over 1,000 people, including Elon Musk, sign an open letter urging AI labs to pause training of systems more powerful than GPT-4 for at least six months

We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.