VOICE ARCHIVE

Arvind Narayanan

@random_walker
101 posts
2025-05-01
Devastating takedown of Chatbot Arena. It's one thing for leaderboards to suck because they try to quantify the unquantifiable but quite another thing to actively choose flagrantly unscientific and nontransparent practices that benefit the big dogs. https://arxiv.org/... [image]
2025-05-01 View on X
TechCrunch

A study from Cohere, Stanford, MIT, and Ai2 accuses LMArena of helping Meta, OpenAI, Google, and Amazon game its popular crowdsourced AI benchmark Chatbot Arena

A new paper from AI lab Cohere, Stanford, MIT, and Ai2 accuses LM Arena, the organization behind the popular crowdsourced AI …

2025-04-21
I was glad to discuss the “AI as Normal Technology” paper with Kevin Roose and Casey Newton. We had an in-depth and fun conversation where we tried to identify our points of agreement and disagreement. I very much appreciate it when journalists take the time to read researchers' …
2025-04-21 View on X
Knight First Amendment Institute

A deep dive into AI as a normal technology vs. a humanlike intelligence and how major public policy based on controlling superintelligence may make things worse

An alternative to the vision of AI as a potential superintelligence  —  We articulate a vision of artificial intelligence (AI) as normal technology. Bluesky: @taumuyi , @knightcolu...

📢📢New paper: AI as Normal Technology. I've worked on this w/ @sayashk for the last 2 years. The full version will be our next book! Utopian & dystopian AGI visions are quite similar. A far more important axis of disagreement—will AI be superintelligence or normal technology? [image]
2025-04-21 View on X

2024-10-22
I've often been building single-use apps with Claude Artifacts when I'm helping my children learn. For example here's one on visualizing fractions https://claude.site/... Here are some things I've learned. Speed of creation My kids are young, so any single activity might last …
2024-10-22 View on X
Simon Willison's Weblog

A look at some use cases of Anthropic's Claude Artifacts, which lets users create interactive single-page apps via prompts

I'm a huge fan of Claude's Artifacts feature, which lets you prompt Claude to create an interactive Single Page App (using HTML, CSS and JavaScript) … X: @random_walker , @emollick...

2024-09-25
📢 AI Snake Oil is out today! I have too many feelings because this book has been five years in the making. In 2019 I gave a talk on AI Snake Oil and tweeted out the slides. I had no idea my career was about to change. Within a couple of days I had 30 or 40 invites to write a … [image]
2024-09-25 View on X
Wired

An interview with Arvind Narayanan and Sayash Kapoor on their new book AI Snake Oil, which is based on their popular newsletter about AI's shortcomings

A New Book by 2 Princeton University Computer Scientists X: Eric Topol / @erictopol : Is #AI snake oil? Some of it is, as asserted by @random_walker and @sayashk in a new book publ...

2024-09-07
I want to see how well these results translate from benchmarks to real world tasks, but if they hold up, it's an excellent example of how much low hanging fruit there is in AI development. The idea of doing reasoning using tokens hidden from the user is well known and has been …
2024-09-07 View on X
VentureBeat

HyperWrite CEO unveils Reflection 70B, based on Llama 3.1 70B Instruct and trained using reflection-tuning, and says it beats GPT-4o in all benchmarks tested

There's a new king in town: Matt Shumer, co-founder and CEO of AI writing startup HyperWrite, today unveiled Reflection 70B …

2024-05-27
Some of these screenshots are likely fake but go viral because people are no longer skeptical that Google could be this broken. Given the non-deterministic and transient nature of AI responses there's no way to know for sure. 25 years of trust flushed down the toilet.
2024-05-27 View on X
Business Insider

Google says the vast majority of AI Overviews provide high-quality information and many of the viral examples have been uncommon queries or have been doctored

Step 1: Google rolls out a new AI-powered product.  Step 2: Users quickly find the product's flaws and point them out with social-media posts, which become news stories.

2024-03-23
A striking statistic from @matthewstoller's post about the DoJ lawsuit against Apple. Just one of a long list of anticompetitive practices. Apple becoming the Boeing of computing sounds unthinkable, but without intervention maybe that's what will happen. https://www.thebignewsletter.com/ ... [image]
2024-03-23 View on X
The Verge

Some experts say the DOJ's Apple lawsuit makes a strong case for harm to consumers and developers, but proving Apple's market power could be challenging

They now meet a second group of legally trained minds that took the time to read the document: … Dan Moren / @dmoren@zeppelin.flights : Despite my critiques of the DoJ suit, I thin...

2024-02-16
1) This is super impressive. https://openai.com/sora 2) If you look closely at the Tokyo video there are hundreds of little physics violations. So spotting deepfake videos will hopefully remain easy in most cases—for now. 3) Automated detection is a fascinating research problem. [video]
2024-02-16 View on X
Wired

OpenAI unveils Sora, its first text-to-video model, which can create up to a minute of 1080p video, as a research product for some creators and security experts

OpenAI's entry into generative AI video is an impressive first step.  —  We already know that OpenAI's chatbots can pass the bar exam without going to law school.

2024-01-02
A thread on some misconceptions about the NYT lawsuit against OpenAI. Morality aside, the legal issues are far from clear cut. Gen AI makes an end run around copyright and IMO this can't be fully resolved by the courts alone. (HT @sayashk @CitpMihir for helpful discussions.)
2024-01-02 View on X
New York Times

The lawsuits against tech companies could shape what copyright means for AI, or simply serve as leverage for plaintiffs to secure more favorable licensing deals

The bar for fair use is typically that the new work doesn't compete with the original. … X: Sar Haribhakti / @sarthakgh : “If the NY Times successfully argues that reading a third ...

The use of content from news and information providers to train artificial intelligence systems may force a reassessment of where to draw legal lines.

2023-11-20
I'm guessing that people at OpenAI who aren't part of this religion will want to jump ship now. If that happens, there will be two Anthropics. https://www.theatlantic.com/ ...
2023-11-20 View on X
TechCrunch

Satya Nadella says Sam Altman, Greg Brockman, and OpenAI staff will join Microsoft's new “advanced AI research team” and Microsoft remains committed to OpenAI

Microsoft has hired OpenAI co-founders Sam Altman and Greg Brockman to head up a “new advanced AI research team,” …

2023-11-11
@emollick Also, making RAG accessible without coding feels like a BFD, but it's too early to know for sure. I suspect we'll see a wave of GPTs for Q&A about specific topics / knowledge domains.
2023-11-11 View on X
New York Times

A look at custom chatbots, or GPTs, which are tailored for specific tasks and represent an important step in OpenAI's strategy of “gradual iterative deployment”

The age of autonomous A.I. assistants could have huge implications.  —  You could think of the recent history of A.I. chatbots as having two distinct phases.

2023-11-01
New on the AI Snake Oil blog: How will the Executive Order impact openness in AI? We did a deep dive. On balance, for now, the EO seems to be good news for those who favor openness in AI. https://www.aisnakeoil.com/... with @sayashk and @RishiBommasani. [image]
2023-11-01 View on X
AI Snake Oil

What Biden's EO means for AI openness, and why a compute threshold is unlikely to effectively anticipate individual models' riskiness, but may work in aggregate

Good news on paper, but the devil is in the details  —  The Biden-Harris administration has issued an executive order on artificial intelligence.

2023-10-19
This is a really impressive and thorough effort from Stanford, MIT, and Princeton researchers to document the (lack of) transparency of 10 major foundation models on 100 transparency indicators. 👏 https://crfm.stanford.edu/fmti/ Blog https://www.aisnakeoil.com/... Paper https://crfm.stanford.edu/... [image]
2023-10-19 View on X
New York Times

Stanford unveils the Foundation Model Transparency Index, featuring 100 indicators; Llama 2 led at 54%, GPT-4 placed third at 48%, and PaLM 2 took fifth at 40%

https://www.nytimes.com/...  [image] Mark Coggins / @coggins@mastodon.social : This is the kind of needed AI regulation—requiring model makers to reveal how they trained their lang...

2023-07-30
In India, a shockingly different approach to data work: -nonprofit -pays 20-30x minimum wage -workers retain ownership of data they create (!) -helps build AI for their mother tongue, benefiting locals The challenge, of course, is signing up AI companies. https://time.com/...
2023-07-30 View on X
TIME

A look at Indian nonprofit Karya, which sells AI training data and redirects all the profit to its workers, who retain ownership of the data they create

I hope Karya's team keep the balance as they grow in the business world!  —  https://time.com/...  #aiethics #aifairness  —  [images] Bluesky: Margaret Mitchell / @mmitchell.bsky.s...

@Sahasrangsu_G There is a detailed explanation in the article. Indian law prevents them from being a nonprofit and doing what they do; so they've set up two companies, one for-profit and one nonprofit, which (AFAICT) together function essentially as a nonprofit.
2023-07-30 View on X

The big question is how to put pressure on AI companies to source their data work ethically. What levers do we have?
2023-07-30 View on X
