VOICE ARCHIVE

Omar Sanseviero

@osanseviero
25 posts
2026-03-05
Qwen friends: if any of you want a new home to build great models and contribute to the open models ecosystem, please reach out! Lots of exciting things in the roadmap and so much to build ahead of us
2026-03-05 View on X
Reuters

Staff memo: Alibaba says it is setting up a new task force to accelerate foundation model development, after the resignation of its Qwen AI head Lin Junyang

Alibaba Group Holding Ltd (9988.HK) said on Thursday it would set up a new task force to accelerate foundation model development …

2026-01-15
I'm excited to introduce the new addition to the Gemma family: TranslateGemma 🎉 - Trained on translation tasks across 55 language pairs - 4B, 12B, and 27B parameters - With multimodal input We're also publishing a technical report with an overview of training and evaluation [image]
2026-01-15 View on X
The Keyword

Google releases TranslateGemma, a suite of Gemma 3-based open translation models available in 4B-, 12B-, and 27B-parameter sizes, with support for 55 languages

Today, we're introducing TranslateGemma, a new collection of open translation models built on Gemma 3, helping people communicate across 55 languages …

2026-01-14
Introducing MedGemma 1.5, an open-access model for multimodal medical use cases It expands the tasks and data formats it can understand (high-dimensional medical imaging, EHRs, anatomical localization with bounding boxes, etc) https://research.google/... [image]
2026-01-14 View on X
Google Research

Google releases MedGemma 1.5, offering improved medical imaging support, and MedASR, fine-tuned for medical dictation, both on Hugging Face and Vertex AI

Daniel Golden, Engineering Manager, and Fereshteh Mahvar, Software Engineer, Google Research  —  We are updating our open MedGemma model with improved medical imaging support.

2025-12-17
Introducing Gemini 3 Flash ⚡️Performance close to Gemini 3 Pro, with great multimodal and tool use quality ⚡️3x faster than Gemini 2.5 Pro, while cheaper and better at most benchmarks ⚡️LMArena score of 1477 (top 3 model) The time to build is now (and yes, there's a free tier)
2025-12-17 View on X
9to5Google

Google unveils Gemini 3 Flash, which it says has Pro-grade reasoning with lower latency, outperforming 2.5 Pro “while being 3x faster at a fraction of the cost”

Following last month's launch of Gemini 3 Pro, Google today announced Gemini 3 Flash for consumers and developers.

2025-11-17
WeatherNext 2 is here ⚡️ ☀️Can predict hundreds of weather outcomes from a starting point, in under a minute on a single TPU ⚡️Generate forecasts 8x faster 🌨️Available in Earth Engine, BigQuery, and an early access program [image]
2025-11-17 View on X
Bloomberg

Google DeepMind releases WeatherNext 2, a weather model that it says offers faster, more accurate two-week forecasts and includes more tools for energy traders

Google DeepMind has released a new artificial intelligence weather model that it says is faster and more accurate than anything it's built …

2025-10-16
Introducing... Cell2Sentence Scale 27B🤏 Based on Gemma, it's an open model that generated hypotheses about cancer cellular behavior. In collaboration with Yale, we confirmed the predictions with experimental validation in living cells Super excited about this one 🤯
2025-10-16 View on X
The Keyword

Google releases Cell2Sentence-Scale 27B (C2S-Scale), a 27B-parameter foundation model for single-cell analysis built on its Gemma family of open models

We're launching a new 27 billion parameter foundation model for single-cell analysis built on the Gemma family of open models.

2025-10-08
Introducing Gemini 2.5 Computer Use 🖥️🤖 - Control UIs with vision understanding and reasoning - Use for web and Android control - Try it now with Browserbase or locally I'm super excited about the high-impact use cases this model unlocks. Share what you build with us! [image]
2025-10-08 View on X
The Keyword

Google releases the Gemini 2.5 Computer Use model, built on Gemini 2.5 Pro's capabilities to power agents that can interact with UIs, in preview via the API

Google released a new Gemini 2.5 Computer Use model today, specially designed …

2025-10-03
Nano Banana GA is out with new features! - More supported aspect ratios (10 in total) - Can specify image-only output - Ready for production https://developers.googleblog.com/ ...
2025-10-03 View on X
Google Developers Blog

Google says Gemini 2.5 Flash Image, aka Nano Banana, is now generally available and supports more aspect ratios, priced at $0.039/image and $30/1M output tokens

2025-08-26
We just shipped Gemini 2.5 Flash Image, aka nano-banana, a SOTA image generation and editing model 🔥 Among its impressive capabilities, you get native world understanding, multi-image merging, character consistency, and more! https://aistudio.google.com/ ... [video]
2025-08-26 View on X
TechCrunch

Google says it is behind the viral “nano-banana” image model and launches it as Gemini 2.5 Flash Image with finer edit controls in the Gemini app, API, and more

Google is upgrading its Gemini chatbot with a new AI image model that gives users finer control over editing photos …

2025-08-15
Some fun things people may have missed from Gemma 3 270M: 1. Out of 270M params, 170M are embedding params and 100M are transformer blocks. BERT from 2018 was larger 🤯 2. The vocabulary is quite large (262144 tokens). This makes Gemma 3 270M a very good model to be hyper
2025-08-15 View on X
Google Developers Blog

Google announces Gemma 3 270M, a compact model designed for task-specific fine-tuning with strong capabilities in instruction following and text structuring

2025-08-15
Introducing Gemma 3 270M 🔥 🤏A tiny model! Just 270 million parameters 🧠 Very strong instruction following 🤖 Fine-tune in just a few minutes, with a large vocabulary to serve as a high-quality foundation https://developers.googleblog.com/ ... [image]
2025-08-15 View on X

2025-06-27
We've taken community feedback very seriously, and that's why for Gemma 3n launch we're so proud to partner with so many in this amazing ecosystem Thanks to @huggingface, @ollama, @Prince_Canuma for MLX, @UnslothAI, @ggerganov llama.cpp/GGUFs, @NVIDIAAIDev, @kaggle,
2025-06-27 View on X
Neowin

Google fully releases Gemma 3n, an open weights, multimodal AI model that can run on as little as 2GB of memory; the model was previously available as a preview

Google has announced Gemma 3n, the next generation of its open AI models, and it is a significant step up from what we saw before.

2025-05-13
Gemma just passed 150 million downloads and over 70k variants on Hugging Face🚀🚀🚀 What would you like to see in the next Gemma versions?
2025-05-13 View on X
TechCrunch

Google's Gemma models passed 150M downloads and 70K+ variants on Hugging Face, after its February 2024 launch; Meta's Llama passed 1.2B downloads in April 2025

2025-04-18
Introducing Gemini 2.5 Flash 🔥 ⚡️High quality for best cost 🤔Dynamic thinking, with fine-grained thinking budget 🚅Available on AI Studio Blog: https://developers.googleblog.com/ ... AI Studio: https://aistudio.google.com/ ... [image]
2025-04-18 View on X
Ars Technica

Google rolls out Gemini 2.5 Flash in preview, with support for Canvas for working on text or code; developers can disable thinking or set a token limit for it

Google's Gemini AI may have had a slow start, but it has been anything but in 2025.  Barely a week goes by that another model …

2025-04-14
Introducing... DolphinGemma! 🐬 🐬Next dolphin token predictor model (audio-to-audio) 🤏Just 400M parameters - can run directly in phones! 🏝️Fin-tuning possible for different species 🌊To be released for the next dolphin season https://blog.google/...
2025-04-14 View on X
The Keyword

Google details DolphinGemma, a new 400M-parameter LLM to decode dolphin communication by analyzing the vocalizations of wild Atlantic spotted dolphins

DolphinGemma, a large language model developed by Google, is helping scientists study how dolphins communicate — and hopefully find out what they're saying, too.

2025-03-12
I'm so happy to announce Gemma 3 is out! 🚀 🌏Understands over 140 languages 👀Multimodal with image and video input 🤯LMArena score of 1338! 📏Context window of 128k Available in AI Studio, Hugging Face, Ollama, Vertex, and your favorite OS tools 🚀Download it today! [image]
2025-03-12 View on X
9to5Google

Google unveils Gemma 3, the “world's best single-accelerator model”, running on a single GPU, in 1B, 4B, 12B, and 27B sizes, and says it outperforms Llama-405B

Following version 1 in February 2024 and 2 in May, Google today announced Gemma 3 as its latest open model for developers.

2024-12-20
Gemini 2.0 Flash Thinking is out! 🚀(experimental) 🤯Solve complex reasoning 🤔Transparent thinking process 👀Text and image input Try it out for free: https://aistudio.google.com/ ... Docs: https://ai.google.dev/...
2024-12-20 View on X
TechCrunch

Google releases Gemini 2.0 Flash Thinking, an experimental “reasoning” model that “explicitly shows its thoughts” and can use them to strengthen its reasoning

2024-09-25
Molmo by @allen_ai - a SOTA multimodal model 🤗Open models and partially open data 🤏7B and 72B model sizes (+7B MoE with 1B active params) 🤯Benchmarks above GPT-4V, Flash, etc 🗣️Human Preference of 72B on par with top API models 🧠PixMo, a high-quality dataset for captioning [image]
2024-09-25 View on X
Wired

The Allen Institute for AI debuts Multimodal Open Language Model in 1B- to 72B-parameter sizes, the most capable open-source AI model with visual abilities yet

A compact and fully open source visual AI model will make it easier for AI to take control of your computer—hopefully in a good way.

2024-03-18
Grok weights are out. Download them quickly at https://huggingface.co/... huggingface-cli download xai-org/grok-1 --repo-type model --include ckpt/tensor* --local-dir checkpoints/ckpt-0 --local-dir-use-symlinks False Learn about mixture of experts at https://hf.co/...
2024-03-18 View on X
xAI

xAI open sources the base model weights and network architecture of Grok-1, a 314B parameter Mixture-of-Experts model trained in October 2023, under Apache 2.0

We are releasing the base model weights and network architecture of Grok-1, our large language model.