Six stories about OpenAI appeared on Techmeme on February 21. The Wall Street Journal reported that OpenAI staff had raised concerns about a Canadian mass shooting suspect months before the attack — and that the company decided the activity "didn't meet the bar for reporting to police." The other five stories, read together, explain who is setting that bar and what else is on their desk.

The Numbers

On the same day as the reporting failure story:

TechCrunch reported that users aged 18 to 24 account for nearly 50% of ChatGPT's user base, with under-30s representing the majority. The Information reported that OpenAI has more than 200 people working on AI devices, including a smartphone. CNBC reported that OpenAI is telling investors it's targeting roughly $600 billion in total compensation. Bloomberg reported that OpenAI projects its revenue will exceed $280 billion by 2030 — up from $12.7 billion currently.


And Benedict Evans published an analysis of the fundamental questions facing OpenAI, concluding that its models have "a very large market but a very thin moat."

The Platform Question

Every social platform eventually reaches the scale where it has to answer this question: when do you report a user to law enforcement?

Meta sends 27 million reports per year to the National Center for Missing and Exploited Children — 84% of all tips the organization receives. The infrastructure for reporting exists. The legal frameworks exist. The FTC ordered Google, OpenAI, Meta, Snap, and xAI to report on child safety practices in September 2025. OpenAI itself launched parental controls for users aged 13 to 17 that same month.

Five months later, the company's internal staff flagged a mass shooting suspect and the company decided the bar hadn't been met.

The Moat

Evans' analysis provides the structural context the other articles don't. OpenAI has "a very large market but a very thin moat." The technology is replicable. The data advantages are temporary. What OpenAI has is users: hundreds of millions of them, nearly half aged 18 to 24, growing fast enough to justify a $600 billion compensation target.

When the moat is thin, users are the moat. When users are the moat, anything that reduces the user count — including reporting users to law enforcement — is a threat to the business. This is not an accusation of intent. It is a description of incentive structure. Every platform that has reached this scale has faced the same pressure, and every platform has initially set the bar too high.

(Source: "OpenAI staff raised concerns about a Canadian mass shooting suspect months ago; OpenAI says her activity didn't meet the bar for reporting to police," Wall Street Journal, February 2026.)

Facebook waited years before building its reporting infrastructure. YouTube waited years before addressing radicalization content. Twitter waited years before addressing coordinated harassment. In each case, the platform grew first and built safety systems second, because growth was the metric that mattered to investors and safety was the metric that mattered to everyone else.

The Bar

OpenAI is building a smartphone. Its users are the youngest of any major AI platform. Its revenue projections require 22x growth in four years. Its moat is thin. Its staff flagged a mass shooting suspect and were told the bar hadn't been met.

The company that sets the reporting threshold is the same company whose $600 billion target depends on not triggering it.

The question is not whether OpenAI acted with bad intent. The question is whether a company simultaneously targeting $600 billion, building consumer hardware for an audience nearly half aged 18 to 24, and navigating a thin moat is structurally capable of setting the bar in the right place. Every platform before it wasn't.