TEXXR

Chronicles

The story behind the story


Chinese AI startup Z.ai releases GLM-4.7, an open-weight model that Z.ai says delivers significant improvements in coding performance compared to GLM-4.6

Hugging Face: Z.ai / GLM-4.7 model card (Text Generation · Transformers · Safetensors · English, Chinese · glm4_moe · conversational)

eWeek: Chinese AI Startup Z.ai Takes On OpenAI Via Cheaper Prices
Vincent Chow / South China Morning Post: Chinese start-ups Zhipu and MiniMax release latest AI models ahead of Hong Kong listing
Markus Kasanmascheff / WinBuzzer: Z.ai Releases GLM-4.7, Claiming GPT-5.1 Parity with 'Preserved Thinking' for Agents
Pandaily: GLM-4.7 Goes Live and Open Source, Delivering a Major Leap in Coding Performance
Maria Garcia / Implicator.ai: GLM-4.7 and the Economics of AI Arbitrage
Erin / TestingCatalog: Z.AI launches GLM-4.7, new SOTA open-source model for coding

X:
Deedy / @deedydas: We have a new best open source model to close out 2025: GLM 4.7! It's been ~6 months since the first closed-source model, Opus 4, broke 73% on SWE-Bench, and GLM does 73.8%! It's fantastic at math and coding and beats DeepSeek / Kimi. Very cheap at $0.6/M in, $2.2/M out, 200k [image]
Awni Hannun / @awnihannun: GLM-4.7 runs quite well on an M3 Ultra with mlx-lm, even at near-lossless precision (6-bit here). It generated the best Space Invaders game I've seen yet from a local model (it even included sound effects!). Generated 6,600 tokens and ran at 16 tok/s. [video]
@scaling01: Honestly incredible considering its size. Much smaller than DeepSeek, Kimi, and the closed models; basically Opus 4.1-level performance just 4 months later, but far cheaper, in a much smaller package, with faster inference.
Elie / @eliebakouch: The gap in design taste and vibe-coding ability between GLM 4.6 and GLM 4.7 is impressive (see the blog for more examples); it seems to be the main focus of this release. Expecting MiniMax M2.1 to focus on the same thing, so it's going to be interesting! [image]
@kimmonismus: GLM-4.7: insane evals. These evals are nuts. Of course, we need to test whether this is actually benchmark-maxxing. But holy moly, the numbers look very good at first glance, especially for open source: HLE (w/ Tools) 42.8%, GPQA-Diamond 85.7%, τ²-Bench 87.4% [image]
Max Weinbach / @mweinbach: GLM 4.7 is out, and it's a good upgrade to the best open coding model plans. I'll try it out later, but the GLM 4.5 and GLM 4.6 models were extremely impressive.
Lou / @louszbd: We've been working hard on this release for a long time... and releasing GLM-4.7 before Christmas is our gift from https://z.ai/ to you 🎄 Can't wait to see the excitement! If you have any demos or cool use cases, we'd love for you to share them. https://z.ai/
@vercel_dev: GLM-4.7 is now available on AI Gateway. • Z.ai's best model yet for coding, multi-step reasoning, and tool usage • Set model to glm-4.7
@arena: 🚨 Code Arena Update: GLM-4.7 by @Zai_org is now #6 on the WebDev leaderboard and takes the #1 open-model spot, surpassing both Claude-Sonnet-4.5 and GPT-5. This is a +83-point increase over its previous version, GLM-4.6. Congrats to the @Zai_org team for this leap forward 👏 [image]
@zai_org: GLM-4.7 further refines Interleaved Thinking and introduces Preserved Thinking and Turn-level Thinking. By enabling thought between actions and maintaining consistency across turns, it makes complex tasks more stable and controllable. https://docs.z.ai/... [image]
@zai_org: GLM-4.7 is here! GLM-4.7 surpasses GLM-4.6 with substantial improvements in coding, complex reasoning, and tool usage, setting new open-source SOTA standards. It also boosts performance in chat, creative writing, and role-play scenarios. Default Model for Coding Plan: [image]
@zai_org: Compared with GLM-4.6, GLM-4.7 delivers significant improvements in coding performance. In real-world development scenarios, GLM-4.7 shows a clear advantage over GLM-4.6 and has become the default model in the GLM Coding Plan. [image]

Bluesky:
Stephen Judkins / @stephenjudkins: Another open-weights model creeping ever closer to the frontier models from the AI majors. While it's very tough to get hardware that can run these, Z.ai is selling it as a service for maybe 10% of what Anthropic charges. Hard to see where the trillions in industry profit are going to come from!

Forums:
Hacker News: GLM-4.7: Advancing the Coding Capability
r/LocalLLaMA: GLM 4.7 released!
r/LocalLLaMA: GLM 4.7 is out on HF!
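To put the pricing quoted in the @deedydas post above ($0.6/M input tokens, $2.2/M output tokens) in perspective, here is a minimal back-of-envelope cost sketch. The function name and the example token counts are illustrative, not from any official API client:

```python
# Rough cost estimate at the rates quoted above for GLM-4.7:
# $0.60 per million input tokens, $2.20 per million output tokens.
PRICE_IN_PER_M = 0.60   # USD per 1M input tokens
PRICE_OUT_PER_M = 2.20  # USD per 1M output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of one request at the quoted rates."""
    return (input_tokens * PRICE_IN_PER_M +
            output_tokens * PRICE_OUT_PER_M) / 1_000_000

# A hypothetical coding-agent turn: 50k-token context, 5k-token reply.
print(f"${request_cost(50_000, 5_000):.4f}")  # $0.0410
```

At these rates, even a long agentic session stays in the cents range, which is the gap behind the "10% of what Anthropic charges" comparison made in the Bluesky post above.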

Z.ai