
Chronicles

The story behind the story


A defense of AI hallucinations, which can spur creativity and, by requiring fact-checks, act as a firewall in the transition to coexist with superintelligent AI

It's a big problem when chatbots spew untruths. But we should also celebrate these hallucinations as prompts for human creativity and a barrier to machines taking over.

Mastodon: Dare Obasanjo / @carnage4life@mas.to: We should be happy LLMs hallucinate, otherwise they'd already have eliminated a bunch of jobs. Not a take I'd have come up with, but he's right. 🙃 — https://www.wired.com/...

Bluesky: Emil Protalinski / @epro.social: Because AI tools hallucinate so frequently and sporadically, I'm incredibly weary of depending on them. @stevenlevy.bsky.social makes an excellent argument that this is ultimately a good thing. [embedded post]

Bluesky: @vortexegg.regretfully.online: Love to deploy a completely busted technology on purpose as the *checks notes* Cyberpunk Blackwall to defend against the coming of the make-believe rogue AIs. [embedded post]

Bluesky: Richard Lawler / @rjcc.bsky.social: #1, you don't have to defend the merits of being an unreliable bullshit artist; I can just tell you how great it is. — #2, "we can't make this machine work as advertised" is not a defensible position. [embedded post]

Bluesky: @manpageman.bsky.social: It is good that this technology is soaking up valuable resources while delivering unreliable output, because it makes us spend more time checking things we didn't have to check before, in anticipation of giving control over to a somewhat better version of this tech. — No thanks, @stevenlevy.bsky.social. …

LinkedIn: Steven Levy: Yes, it's a big problem that AI chatbots make things up. But there's a bright side to those so-called hallucinations. So I spoke up for them. https://lnkd.in/...

Wired