VOICE ARCHIVE

Carlos E. Perez

@intuitmachine
14 posts
2026-01-11
The man who predicted the 2008 housing crash just looked at the AI boom. His verdict? We might be watching a historic misallocation of capital. But the co-founder of Anthropic disagrees. I just read the debate between Michael Burry, Jack Clark, and Dwarkesh Patel. Here is the
2026-01-11 View on X
The Substack Post

Michael Burry, Anthropic co-founder Jack Clark, and Dwarkesh Patel on the future of AI, whether AI tools improve productivity, job losses due to AI, and more

The man who predicted the 2008 crash, Anthropic's co-founder, and a leading AI podcaster jump into a Google doc to debate the future of AI—and, possibly, our lives

2026-01-10
The man who predicted the 2008 housing crash just looked at the AI boom. His verdict? We might be watching a historic misallocation of capital. But the co-founder of Anthropic disagrees. I just read the debate between Michael Burry, Jack Clark, and Dwarkesh Patel. Here is the
2026-01-10 View on X
The Substack Post

Michael Burry, Anthropic co-founder Jack Clark, and Dwarkesh Patel on the future of AI, whether AI tools improve productivity, job losses due to AI, and more

The man who predicted the 2008 crash, Anthropic's co-founder, and a leading AI podcaster jump into a Google doc to debate the future of AI—and, possibly, our lives

2025-12-01
3/12 Here's the part people still don't get: Google didn't beat NVIDIA at raw performance.  TPUv7 is “only” 20-30% faster on paper.  They beat them on price-per-token by ~50% at system level.  That's not incremental.  That's the kind of gap that ends empires.
2025-12-01 View on X
SemiAnalysis

An in-depth look at TPUv7 Ironwood, and how the latest Google TPU generation positions Google as the most threatening challenger to Nvidia's AI chip dominance

Fascinating article.  They argue that the reason for NVIDIA's circular investment deals is to intertwine their own fate with that of the big labs, to keep themselves on top  —  Ope...
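The price-per-token claim in the thread is easy to sanity-check with back-of-envelope arithmetic. The sketch below uses made-up system costs and throughputs (not figures from the SemiAnalysis article) purely to show how a modest throughput edge plus a lower system cost compounds into a ~50% price-per-token gap.

```python
# Illustrative back-of-envelope price-per-token comparison.
# All input numbers are hypothetical placeholders, not published figures.

def price_per_mtok(system_cost_per_hour, tokens_per_second):
    """Dollars per million tokens for a system at a given throughput."""
    tokens_per_hour = tokens_per_second * 3600
    return system_cost_per_hour / (tokens_per_hour / 1e6)

# Hypothetical baseline GPU system.
gpu = price_per_mtok(system_cost_per_hour=100.0, tokens_per_second=50_000)

# Hypothetical TPU system: ~25% faster, but much cheaper to run per hour.
tpu = price_per_mtok(system_cost_per_hour=62.5, tokens_per_second=62_500)

savings = 1 - tpu / gpu
print(f"GPU: ${gpu:.3f}/Mtok, TPU: ${tpu:.3f}/Mtok, savings: {savings:.0%}")
```

With these toy inputs the 25% speed advantage and 37.5% lower hourly cost combine to exactly a 50% lower price per token: the metric multiplies both effects together.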

2025-07-13
Holy s**t! Open source Kimi-K2 is better than Grok 4! [image]
2025-07-13 View on X
VentureBeat

Moonshot's Kimi K2 uses a 1T-parameter MoE architecture with 32B active parameters and outperforms models like GPT-4.1 and DeepSeek-V3 on key benchmarks

Moonshot AI, the Chinese artificial intelligence startup behind the popular Kimi chatbot, released an open-source language model on Friday …
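The headline numbers (1T total parameters, 32B active) follow from Mixture-of-Experts routing: a gate picks a few experts per token, so only that slice of the weights runs. A toy sketch of top-k routing, with made-up expert counts rather than Kimi K2's real configuration:

```python
# Minimal sketch of Mixture-of-Experts top-k routing, illustrating why a
# trillion-parameter MoE touches only a small fraction of its weights per
# token. Counts and shapes here are toy values, not Kimi K2's config.

import math
import random

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def route(gate_logits, k):
    """Pick the top-k experts for one token from the gate's logits."""
    probs = softmax(gate_logits)
    return sorted(range(len(probs)), key=lambda i: -probs[i])[:k]

random.seed(0)
n_experts, top_k = 64, 2  # toy values

gate_logits = [random.gauss(0, 1) for _ in range(n_experts)]
active = route(gate_logits, top_k)

# Only the chosen experts' parameters run for this token.
active_fraction = top_k / n_experts  # 2/64 = 3.1%
print(active, f"active fraction: {active_fraction:.1%}")
```

For comparison, 32B active out of 1T total is about 3.2% of the weights per token, the same order as the toy 2-of-64 routing above.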

2024-05-07
1/n Beyond Next-Word Prediction: Multi-Token Prediction Imagine you're trying to learn a new language. You could start by memorizing individual words, but that would only get you so far. To truly understand the language, you need to grasp how words connect and form meaningful... [image]
2024-05-07 View on X
VentureBeat

A study by Meta researchers suggests that training LLMs to predict multiple tokens at once, instead of just the next token, results in better and faster models

LLM approach to predict multiple tokens
KAN: Kolmogorov-Arnold Networks — "promising alternatives to Multi-Layer Perceptrons" [image]
Ethan / @ethan_smith_20: it was only briefly t...
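The multi-token idea in the Meta study changes what each training position is asked to predict: instead of one next token, n parallel heads predict the next n tokens from a shared trunk. A minimal sketch of the target construction only (model and heads omitted; the sequence is a toy example):

```python
# Compare the training targets built by ordinary next-token prediction
# with those built by multi-token prediction (n tokens ahead).

def next_token_pairs(tokens):
    """Standard objective: each context predicts exactly one next token."""
    return [(tokens[:i], tokens[i]) for i in range(1, len(tokens))]

def multi_token_pairs(tokens, n):
    """Multi-token objective: head j predicts the token at offset i + j."""
    pairs = []
    for i in range(1, len(tokens) - n + 1):
        pairs.append((tokens[:i], tokens[i:i + n]))
    return pairs

seq = ["the", "cat", "sat", "on", "the", "mat"]
print(next_token_pairs(seq)[0])      # (['the'], 'cat')
print(multi_token_pairs(seq, 3)[0])  # (['the'], ['cat', 'sat', 'on'])
```

Each position now supervises n predictions at once, which is the mechanism the study credits for better and faster models.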

2024-02-21
Groq is a Radically Different kind of AI architecture Among the new crop of AI chip startups, Groq stands out with a radically different approach centered around its compiler technology for optimizing a minimalist yet high-performance architecture. Groq's secret sauce is this... [image]
2024-02-21 View on X
Gizmodo

Demos from AI chipmaker Groq go viral after the startup's inference engine shows lightning-fast speeds when running LLMs, including for real-time conversations

Two AI companies are claiming the science fiction term, “Grok,” as their own, but only one is turbocharging the AI industry.

2023-06-08
It's finally about time that Google is using Dual Process Theory (System 1 & System 2) to describe Deep Learning architectures. https://blog.google/... [image]
2023-06-08 View on X
TechCrunch

Google updates how Bard handles math, coding questions, and string manipulation via “implicit code execution”, which lets the chatbot run code in the background

Bard, Google's beleaguered AI-powered chatbot, is slowly improving at tasks involving logic and reasoning.
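The "implicit code execution" trick TechCrunch describes (detect prompts that benefit from computation, run code in the background, use the result in the reply) can be caricatured in a few lines. The prefix heuristic and arithmetic-only evaluator below are stand-ins for the model-driven machinery, not Bard's actual mechanism:

```python
# Toy sketch of implicit code execution: try to treat the prompt as a
# computation, answer with the computed result, and fall back to plain
# generation otherwise. The evaluator only accepts arithmetic.

import ast
import operator

OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv,
       ast.Pow: operator.pow, ast.USub: operator.neg}

def safe_eval(expr):
    """Evaluate an arithmetic expression without exec/eval."""
    def ev(node):
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](ev(node.left), ev(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in OPS:
            return OPS[type(node.op)](ev(node.operand))
        raise ValueError("unsupported expression")
    return ev(ast.parse(expr, mode="eval").body)

def answer(prompt):
    # Crude stand-in for the model's decision to route to code execution.
    expr = prompt.lower().removeprefix("what is ").rstrip("?")
    try:
        return f"{expr} = {safe_eval(expr)}"   # computed, not generated
    except (ValueError, SyntaxError):
        return "(fall back to plain text generation)"

print(answer("What is 3 * (17 + 4)?"))  # 3 * (17 + 4) = 63
```

The point of the real feature is the same separation: language models are unreliable at arithmetic and string manipulation, so the chatbot delegates those to executed code.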

2023-03-02
Wait a second here! Is it attempting to perform visual analogies here?! https://twitter.com/...
2023-03-02 View on X
Ars Technica

Microsoft researchers unveil Kosmos-1, a multimodal LLM they claim can understand image content, pass visual IQ tests, and accepts a variety of input formats

Microsoft believes a multimodal approach paves the way for human-level AI.  —  On Monday, researchers from Microsoft introduced Kosmos-1 …

2023-01-12
With exponential technology growth, software engineers increasingly must make ethical decisions! No longer can we wash our hands of accountability and say "the boss told me so." https://twitter.com/...
2023-01-12 View on X
Forbes

JPMorgan Chase is suing the founder of Frank, a student loan software startup acquired for $175M, over allegedly lying about scale by creating 4M+ fake users

The financial giant is suing the founder of a Mark Rowan-backed startup it acquired, claiming the fintech, Frank, had sold the financial giant on a “lie.”

2022-11-19
We've got to manage our expectations about generative models. Just as we can generate images of fictional worlds that appear real, we can do the same with text. A style that looks real does not imply that the content represents something real. Fluency is not understanding. https://twitter.com/...
2022-11-19 View on X
MIT Technology Review

Meta AI and Papers with Code pull Galactica three days after launch, amid criticism that the large language model for generating scientific text asserts falsehoods

and its hubris—show once again that Big Tech has a blind spot about the severe limitations of large language models. https://www.technologyreview.com/ ...

2022-09-26
LeCun throws in the towel with current approaches. https://twitter.com/...
2022-09-26 View on X
ZDNet

An interview with Meta Chief AI Scientist Yann LeCun on his critics and why today's most popular approaches to AI won't lead to human-level machine intelligence

In summary, it's a good interview where you can get a glimpse of @ylecun's thinking. I'm in agreement with many of his ideas, except for the criticism of the language models. I suspect he hasn't read about "code-duality" yet.
2022-09-26 View on X
ZDNet

An interview with Meta Chief AI Scientist Yann LeCun on his critics and why today's most popular approaches to AI won't lead to human-level machine intelligence

I find it curious that LeCun leans toward “energy-based” architectures, yet he cites Lagrangians as the basis of Deep Learning. Aren't energy-based formulations a Hamiltonian perspective and not Lagrangian? @ylecun https://www.zdnet.com/...
2022-09-26 View on X
ZDNet

An interview with Meta Chief AI Scientist Yann LeCun on his critics and why today's most popular approaches to AI won't lead to human-level machine intelligence
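On the Lagrangian-vs-Hamiltonian question the post raises: the two formulations are linked by a Legendre transform in the velocity variable, so citing Lagrangians does not exclude an energy (Hamiltonian) reading. The standard relation:

```latex
% Legendre transform from Lagrangian L(q, \dot{q}) to Hamiltonian H(q, p):
p = \frac{\partial L}{\partial \dot{q}}, \qquad
H(q, p) = p\,\dot{q} - L(q, \dot{q})
```

For typical mechanical Lagrangians (kinetic minus potential energy), H is the total energy, which is presumably the sense in which "energy-based" is meant.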

Oh, it's unfair for him to dismiss @GaryMarcus as not being an AI person. That's just like Descartes dismissing Fermat or Newton dismissing Leibniz because they were not formally schooled in mathematics. Why should DL people have a monopoly on the conversation?
2022-09-26 View on X
ZDNet

An interview with Meta Chief AI Scientist Yann LeCun on his critics and why today's most popular approaches to AI won't lead to human-level machine intelligence