
Chronicles

The story behind the story


An analysis of 100T+ tokens from the past year shows reasoning models now represent over half of all usage, open-weight model use has grown steadily, and more.

This is not a model I hear much about.

@openrouterai: We collaborated with @a16z to publish the **State of AI**, an empirical report on how LLMs have been used on OpenRouter. After analyzing more than 100 trillion tokens across hundreds of models and 3+ million users (excluding 3rd party) from the last year, we have a lot of

@a16z: >100 trillion token analysis of reasoning model usage over time. Full piece from @MaikaThoughts, @AnjneyMidha, @xanderatallah, and @cclark: https://openrouter.ai/...

@scaling01: The moment: open-source models were close to 30% of OpenRouter traffic, and almost all of them came from China, the notable models being DeepSeek V3/R1, the Qwen3 family, Kimi-K2, and GLM-4.5 + Air. Minimax M2 is now also a major player, but open-weights models' token usage

Nathan Lambert / @natolambert: On a prompt-count basis this means reasoning models are not close to a majority on OpenRouter, as reasoning models can use 10-1000x the tokens of non-thinking models per prompt. Lots of need for fast, efficient open models. Reasoning model usage likely favors closed labs more.

Bluesky: Tim Duffy / @timfduffy.com: Lots of interesting details in this new report on usage trends from OpenRouter. openrouter.ai/state-of-ai I've been wondering about mean coding input token length; in their data it's around 20k tokens. Other large categories (roleplay, technology, science) average around 5k.
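Lambert's point about token share versus prompt share can be illustrated with a small back-of-the-envelope calculation. The helper function and the specific numbers below are hypothetical, not from the OpenRouter report; the 20x multiplier is simply an illustrative value within the 10-1000x range he cites:

```python
def prompt_share(token_share: float, token_multiplier: float) -> float:
    """Fraction of prompts going to reasoning models, given their share of
    total tokens and how many times more tokens they use per prompt.

    Derivation: if r is the reasoning prompt fraction and m the per-prompt
    token multiplier, then token_share t = r*m / (r*m + (1 - r)).
    Solving for r gives r = t / (m - t*(m - 1)).
    """
    t, m = token_share, token_multiplier
    return t / (m - t * (m - 1))

# Hypothetical: reasoning models at 50% of tokens, using 20x tokens/prompt,
# would account for under 5% of prompts (0.5 / 10.5 ≈ 0.048).
share = prompt_share(0.5, 20)
```

With a multiplier of 1 the two shares coincide, which is why a token-weighted majority says little about how many requests actually involve reasoning models.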

OpenRouter