VOICE ARCHIVE

Dr. Donut

@bebischof
2 posts
2024-05-30
Proud to bring you: A Year Building With LLMs, a three-part essay published with O'Reilly https://www.oreilly.com/... We get into the weeds on what it takes to develop incredible LLM-powered applications. Advice from: @eugeneyan, @charles_irl, @HamelHusain, @jxnlco, @sh_reya, and me.
2024-05-30
O'Reilly Media

Six AI and ML experts detail what they learned from building real-world applications on top of LLMs over the past year, including common prompting pitfalls

‘What We Learned from a Year of Building with LLMs (Part I)’  —  https://www.oreilly.com/...  Hallucinations are still a big problem; RAGs are better than fine-tuning; long context...

I'm partial to this section, h/t to @willkurt with whom I chatted about this framing a lot: [image]
2024-05-30