VOICE ARCHIVE

Lexin Zhou

@lexin_zhou
2 posts
10/ These unreliability issues are consistently found across multiple LLM families: GPT, LLaMA and BLOOM, comprising 32 models that exhibit different levels of scaling up and diverse methods of shaping up with human feedback: [image]
2024-09-26

Nicola Jones / Nature: Study: newer, bigger versions of LLMs like OpenAI's GPT, Meta's Llama, and BigScience's BLOOM are more inclined to give wrong answers than to admit ignorance

1/ New paper @Nature! Discrepancy between human expectations of task difficulty and LLM errors harms reliability. In 2022, Ilya Sutskever @ilyasut predicted: “perhaps over time that discrepancy will diminish” ( https://www.youtube.com/..., min 61-64). We show this is *not* the case! [image]
2024-09-26