VOICE ARCHIVE

Archie Sengupta

@archiexzzz
8 posts
2025-10-30
Holy shit!!! “10,000x more efficient” > They built probabilistic circuits called Thermodynamic Sampling Units (TSUs) using “p-bits” Mathematically, a p-bit samples from programmable Bernoulli distributions that produces stochastic binary states (0/1) with a tunable …
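The post's description of a p-bit — a unit that samples a stochastic binary state from a programmable Bernoulli distribution — can be sketched in a few lines. This is a toy software illustration only, not Extropic's hardware; the function name and the choice of `p = 0.3` are illustrative assumptions.

```python
import random

def p_bit(p: float) -> int:
    """Sample a stochastic binary state (0 or 1) with tunable probability p of being 1,
    i.e. a draw from a Bernoulli(p) distribution."""
    return 1 if random.random() < p else 0

# A Thermodynamic Sampling Unit would run many such p-bits physically in parallel;
# here we just check that the empirical mean of many samples approaches p.
samples = [p_bit(0.3) for _ in range(10_000)]
print(sum(samples) / len(samples))  # close to 0.3
```

The claimed efficiency gain comes from doing this sampling with analog physics rather than with a pseudorandom number generator on a digital chip.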
Wired

Extropic, which says its chips using probabilistic bits can be 10,000x more energy efficient than current AI chips, shares its first chip with some AI labs

A startup hopes to challenge Nvidia, AMD, and Intel with a chip that wrangles probabilities rather than 1s and 0s.

2025-05-22
Got access to Google diffusion. HOLY SH!T 909 tokens/s ?????? I made a calendar in 3s? 3 fcuking seconds? [image]
Fortune

Google DeepMind says Gemini Diffusion, an experimental text diffusion model demoed at Google I/O and available by waitlist, generates 1,000-2,000 tokens/second

Our state-of-the-art, experimental text diffusion model …

The Verge

Google is tapping its users' data to give its AI models an advantage over OpenAI and Anthropic, starting with its opt-in “Gemini with personalization” feature

Google is slowly giving Gemini more and more access to user data to ‘personalize’ your responses.

2025-02-26
YC funding kids with rich backgrounds who did not do a single day of work and who are still in college, on their slave-driving tool like this was not on my 2025 list. [video]
TechCrunch

After backlash, YC deletes a demo video from X and LinkedIn of Optifye.AI, a startup it backs that is building AI performance monitoring for factory workers

A demo from Optifye.AI, a member of Y Combinator's current cohort, sparked a social media backlash that ended up with YC deleting it off its socials.

2025-01-26
DeepSeek Engineers [image]
Financial Times

Industry insiders say DeepSeek's focus on research makes it a dangerous competitor as it's willing to share breakthroughs rather than protect them for profits

China is pulling the same trick. — www.ft.com/content/747a...

MIT Technology Review

Rather than weakening China's AI capabilities, US sanctions appear to be driving startups like DeepSeek to innovate by prioritizing efficiency and collaboration

The AI community is abuzz over DeepSeek R1, a new open-source reasoning model.  —  The model was developed by the Chinese AI startup DeepSeek …

2025-01-25
DeepSeek Engineers [image]
VentureBeat

Yann LeCun says DeepSeek “profited from open research and open source” like Meta's Llama and is proof that open source models are surpassing proprietary ones

If you hadn't heard, there's a new AI star in town: DeepSeek, the subsidiary of Hong Kong-based quantitative analysis …

2024-05-07
good read: better & faster llms via multi-token prediction by meta. training language models to predict multiple future tokens at once. https://arxiv.org/... [image]
VentureBeat

A study by Meta researchers suggests that training LLMs to predict multiple tokens at once, instead of just the next token, results in better and faster models

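The idea in the Meta paper — training a language model to predict several future tokens at each position rather than only the next one — can be illustrated by how the training targets are laid out. This is a toy sketch; the function name, the `None` padding convention, and `n_future=2` are my own illustrative choices, not the paper's implementation.

```python
def mtp_targets(tokens, n_future=4):
    """For each position t, return the next n_future tokens (t+1 .. t+n_future),
    padded with None where the sequence ends. In multi-token prediction, each
    of these columns would be supervised by a separate output head sharing one trunk."""
    out = []
    for t in range(len(tokens)):
        future = tokens[t + 1 : t + 1 + n_future]
        out.append(future + [None] * (n_future - len(future)))
    return out

seq = ["the", "cat", "sat", "on", "the", "mat"]
print(mtp_targets(seq, n_future=2))
# [['cat', 'sat'], ['sat', 'on'], ['on', 'the'], ['the', 'mat'], ['mat', None], [None, None]]
```

At inference time the extra heads can be dropped (standard next-token decoding) or used for speculative multi-token generation, which is where the speedup reported in the paper comes from.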