VOICE ARCHIVE

Prince Canuma

@prince_canuma
3 posts
2025-03-25
@Alibaba_Qwen's Qwen2.5-VL-32B-Instruct now on MLX 🔥🚀 You can now fine-tune and run inference locally on your Mac. Get started: > pip install -U mlx-vlm Model collection 👇 [video]
2025-03-25 View on X
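The post above is a local-inference how-to. A minimal sketch of what "fine-tune and run inference locally on your Mac" looks like with mlx-vlm's command-line entry point; the quantized checkpoint name (`mlx-community/Qwen2.5-VL-32B-Instruct-4bit`) and exact flags are assumptions and may differ by mlx-vlm version:

```shell
# Install/upgrade mlx-vlm (from the post)
pip install -U mlx-vlm

# Run one-shot vision-language inference on an image.
# Model repo and flags are illustrative, not confirmed by the post.
python -m mlx_vlm.generate \
  --model mlx-community/Qwen2.5-VL-32B-Instruct-4bit \
  --max-tokens 100 \
  --prompt "Describe this image." \
  --image ./example.jpg
```

The 4-bit quantization is what makes a 32B vision model practical on a single Mac's unified memory; the bf16 weights would need roughly 64 GB for parameters alone.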
Simon Willison's Weblog

Alibaba releases Qwen2.5-VL-32B, a 32B open model under Apache 2.0, claiming better math reasoning and alignment with human preferences than earlier 2.5 models

Qwen2.5-VL-32B: Smarter and Lighter.  The second big open weight LLM release from China today - the first being DeepSeek v3-0324.

2025-01-28
Qwen2.5-VL port to MLX update # 01 Model is loading fine, just need to implement the new vision logic. 3B runs are +30 tok/s in bf16, imagine the quants 🔥 [image]
2025-01-28 View on X
TechCrunch

Alibaba's Qwen team releases Qwen2.5-VL, a new series of AI models that can control PCs and phones, as well as perform a number of text and image analysis tasks

Anusuya Lahiri / Benzinga : Not Just DeepSeek - Alibaba Unveils AI Model To Rival OpenAI's Operator Markus Kasanmascheff / WinBuzze...

2024-07-19
Mistral NeMo Instruct (Q4) is blazing fast 🔥 Running locally on M3 Max at 37 tokens/s using MLX 🚀 > pip install fastmlx And install mlx-lm from source (PR #895) [video]
2024-07-19 View on X
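The post above describes running Mistral NeMo locally at Q4 via MLX. A minimal sketch of the equivalent mlx-lm invocation; the quantized repo name (`mlx-community/Mistral-Nemo-Instruct-2407-4bit`) is an assumption, and at the time of the post NeMo support required installing mlx-lm from source rather than PyPI:

```shell
# fastmlx (from the post) serves models over a local API;
# mlx-lm provides the underlying generate CLI.
pip install fastmlx

# One-shot text generation with a Q4 checkpoint.
# Repo name is illustrative, not confirmed by the post.
python -m mlx_lm.generate \
  --model mlx-community/Mistral-Nemo-Instruct-2407-4bit \
  --prompt "Summarize the Apache 2.0 license in one sentence." \
  --max-tokens 128
```

At Q4 the 12B parameters fit in roughly 7 GB, which is consistent with the ~37 tokens/s the post reports on an M3 Max.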
VentureBeat

Nvidia and Mistral release Mistral NeMo, a 12B-parameter language model with a 128K-token context window, available under the Apache 2.0 open-source license

Mistral NeMo: our new best small model.  A state-of-the-art 12B model … Jonathan Kemper / The Decoder : Mistral releases three new LLMs for math, code and general tasks X: Prince C...