VOICE ARCHIVE

@stanfordnlp

9 posts
2025-05-22
It's an interesting phenomenon of the current age how development of large deep learning text models (LLMs) is sucking in the research brainpower of so many!
2025-05-22 View on X
Fortune

Google DeepMind says Gemini Diffusion, an experimental text diffusion model demoed at Google I/O and available by waitlist, generates 1,000-2,000 tokens/second

Our state-of-the-art, experimental text diffusion model Jose Antonio Lanz / Decrypt : Google Doubles Down on AI: Veo 3, Imagen 4 and Gemini Diffusion Push Creative Boundaries Matth...

It's an interesting phenomenon of the current age how development of large deep learning text models (LLMs) is sucking in the research brainpower of so many!
2025-05-22 View on X
The Verge

Google is tapping its users' data to give its AI models an advantage over OpenAI and Anthropic, starting with its opt-in “Gemini with personalization” feature

Google is slowly giving Gemini more and more access to user data to ‘personalize’ your responses.

2025-02-13
The final admission that the 2023 strategy of OpenAI, Anthropic, etc. ("simply scaling up model size, data, compute, and dollars spent will get us to AGI/ASI") is no longer working!
2025-02-13 View on X
TechCrunch

Sam Altman says GPT-5 will include o3, which is no longer set to ship as a standalone model, GPT-4.5 will be OpenAI's last non-chain-of-thought model, and more

OpenAI has effectively canceled the release of o3, which was slated to be the company's next major AI model …

2024-04-16
The AI Index calls out Direct Preference Optimization (DPO) for the now widespread use of RLHF across Large Language Models: https://arxiv.org/... https://aiindex.stanford.edu/report/ #NLProc #BiasedTakes [image]
2024-04-16 View on X
Artificial Intelligence Index

Stanford's AI Index report: training top AI models is way more expensive, AI still trails humans on complex tasks, people are more nervous about AI, and more

customer support, customer acquisition, and personalization all between 22-26%. [image] @stanfordhai : The #AIIndex2024 tracks the rise of multimodal models, major cash investments...

The 2024 AI Index tacitly shows Natural Language Processing rising to be the central technology of AI
2004: NLP way off in the AI margins
2014: A little excitement over chatbots
2024: AI Index leads with impressive progress of LLMs
https://aiindex.stanford.edu/report/ #NLProc #BiasedTakes
2024-04-16 View on X
Artificial Intelligence Index

Stanford's AI Index report: training top AI models is way more expensive, AI still trails humans on complex tasks, people are more nervous about AI, and more

customer support, customer acquisition, and personalization all between 22-26%. [image] @stanfordhai : The #AIIndex2024 tracks the rise of multimodal models, major cash investments...

The AI Index editors chose “the most notable model releases of 2023”. 9 of the 15 were Large Language Models. 3 more involved language: text to image and speech models. 2 image models and a watermarking model brought up the rear. https://aiindex.stanford.edu/report/ #NLProc #BiasedTakes [image]
2024-04-16 View on X
Artificial Intelligence Index

Stanford's AI Index report: training top AI models is way more expensive, AI still trails humans on complex tasks, people are more nervous about AI, and more

customer support, customer acquisition, and personalization all between 22-26%. [image] @stanfordhai : The #AIIndex2024 tracks the rise of multimodal models, major cash investments...

2023-04-24
Katherine Forrest @PaulWeissLLP: “I would like to move away from ‘large language model’ because that causes people to get stuck into a language-only space. These models are really foundation models that are not just language-based but also photographic-, video-, audio-based....” https://twitter.com/...
2023-04-24 View on X
The Markup

Q&A with Katherine Forrest, a former federal judge for the SDNY, on copyright and generative AI, the Copyright Office's guidance on AI-generated work, and more

A conversation with Katherine Forrest Before they gobbled up headlines everywhere, large language models ingested truly staggering amounts of data to train their models. Mastodon: ...

2020-08-26
The use of speech is gradually becoming more mainstream—why shouldn't you just speak the first draft of your paper rather than typing it all in? [Microsoft 365 Word on the web transcription] https://www.microsoft.com/...
2020-08-26 View on X
CNET

Microsoft adds an audio transcription feature to Word's web version for Office 365 that will work with prerecorded or live audio

why shouldn't you just speak the first draft of your paper rather than typing it all in? [Microsoft 365 Word on the web transcription] https://www.microsoft.com/... Tom Warren / @t...

2019-08-14
“Nvidia was able to train BERT-Large using optimized PyTorch software and a DGX-SuperPOD of more than 1,000 GPUs that is able to train BERT in 53 minutes.” - @kharijohnson, @VentureBeat https://venturebeat.com/...
2019-08-14 View on X
TechCrunch

Nvidia says it has broken records for real-time conversational AI, training the industry-standard BERT model in 53 minutes and then inferring responses in ~2ms

Darrell Etherington / TechCrunch :