
Chronicles

The story behind the story


Yann LeCun admits that Llama 4's “results were fudged a little bit”, and that the team used different models for different benchmarks to give better results

The interview took place in a great restaurant in Paris: Yannick Alléno's Pavyllon. …

Bluesky:

Rob Delaney / @robdelaney : 💩💩💩

SE Gyges / @segyges : this puts a bunch of us in the awkward position of agreeing with yann's overall point but disagreeing with his entire argument

Tom Wallach / @mdwallach : It is comical how apparent this is to anyone who knows anything about the technology, which also makes one wonder why the CEOs of these companies do not seem to know anything about it.

Ian Robberson / @microberust : Really can't emphasize enough just how much LeCun is at the heart of modern machine learning. He pioneered the foundational convolutional neural network research back in the 80s/90s, and has stayed in that space ever since. Which is probably also why he's not too shy about his opinion. …

Derek B. Johnson / @derekbjohnson : What's really interesting is that I think LLMs will end up being a sort of experimental dry run for how our society will react to technologies that can actually do what LLMs pretend to. America did not do well this time around, but perhaps we can learn from this period and get another chance. …

Elliot / @1t2ls : I like this guy's vibe. The interview doesn't say much, but I appreciate the realism that the next high-impact areas for AI aren't chatbot/LLM related but in industrial applications. This seems right to me.

Gareth Watkins / @garethwatkins : There is nobody who can accurately describe how an LLM works who can explain how that model will lead to AGI.

Max Kennerly / @maxkennerly : “Superintelligence” 🙄 claims aside, I think he's right. LLM boosters always start dissembling when you point out that the tech has two critical problems (it scales poorly and routinely generates errors) and nobody has a clue how to fix either. Both problems appear inherent to the tech itself.

Jacob Weindling / @jakeweindling : All the smartest people in tech say that AGI is a long ways away, while all the marks and scammers in tech think the hallucination bot is God. That tech is run entirely by the latter and not the former says a lot about that industry.

Sean Carroll / @seanmcarroll : Opinions on “superintelligence” can reasonably differ. (Personally I think it's a terrible framing that obscures more than it clarifies.) But I still struggle to comprehend why anyone would think LLMs are the route to it.

@abstracttesseract : “My integrity as a scientist cannot allow me to do this” == “I can excuse manipulation, misinformation, and contributing to a genocide, but I draw the line at promoting the wrong flavor of magic beans”

@zeroisanumber : Philosophically speaking, I'm convinced that humans can't program an intelligence smarter than we are, but it's nice to see someone expert in the field agree with me that LLMs are a dead end.

Janine Gibson / @janinegibson.ft.com : hearing from my legal team that i don't *know* he didn't sign an NDA, I have *surmised* it based on his frankness

@freelunch23 : he is frank, but what he tells is really no secret

@hoon : I think LLMs are good for search, organization, and synthesis (many inputs into a few outputs). But superintelligence isn't one of those things.

@theangelofhistory : Yeah, I have no idea if superintelligence or even artificial human intelligence is possible or not, but LLM text generators are not either of those. Whatever the negative impacts of these things, it's not going to be “take over the world and destroy humanity”.

Janine Gibson / @janinegibson.ft.com : Ex-Meta chief AI scientist Yann LeCun has Lunch with the FT and, in one of those instances so rare that you know he didn't sign an NDA, says exactly why. as.ft.com/r/e503690d-8...

Jesse Felder / @jessefelder : “I'm sure there's a lot of people... who would like me to not tell the world that LLMs basically are a dead end when it comes to superintelligence. But I'm not gonna change my mind because some dude thinks I'm wrong... My integrity as a scientist cannot allow me to do this.” www.ft.com/content/e3c4...

Steve Kovach / @stevekovach : If LeCun is right about this, 100s of billions have been spent on a fantasy

Justin Hendrix / @justinhendrix : Interesting “Lunch with the FT” column on Yann LeCun and his AI “superintelligence” ambitions. Not sure how much to read into this (may have been what you say after a big French meal and glasses of wine), but is this really what “we” suffer from? giftarticle.ft.com/giftarticle/...

Forums:

Msmash / Slashdot : ‘Results Were Fudged’: Departing Meta AI Chief Confirms Llama 4 Benchmark Manipulation

Financial Times