VOICE ARCHIVE

Nitasha Tiku

@nitasha
10 posts
2026-03-01
WSJ reporting that the U.S. used Claude for the air strikes in Iran.  Centcom has been using Claude “for intelligence assessments, target identification and simulating battle scenarios” www.wsj.com/livecoverage...  [image]
2026-03-01 View on X
Wall Street Journal

Sources: the Pentagon used Claude in its major air attack in Iran, hours after Trump declared that the federal government will end its use of Anthropic's tools

Within hours of declaring that the federal government will end its use of artificial-intelligence tools made by tech company Anthropic …

The Atlantic

Source describes the failed Pentagon-Anthropic talks: through the end, the Pentagon wanted to use Anthropic's AI to analyze bulk data collected about Americans

Right up until the moment that Pete Hegseth moved to terminate the government's relationship with the AI company Anthropic …

OpenAI

OpenAI says its DOD agreement upholds its redlines and “has more guardrails than any previous agreement for classified AI deployments, including Anthropic's”

We think our agreement has more guardrails than any previous agreement for classified AI deployments, including Anthropic's.

2025-12-11
The first lawsuit against OpenAI that claims ChatGPT led to a murder www.washingtonpost.com/technology/ 2...
2025-12-11 View on X
Wall Street Journal

OpenAI faces a wrongful death lawsuit from the estate of an 83-year-old woman killed by her son, who had engaged in delusion-filled conversations with ChatGPT

The estate of victim Suzanne Eberson Adams is suing OpenAI for wrongful death, and her grandson is speaking out for the first time

2025-06-01
AI is speedrunning the social media era by optimizing chatbots for engagement, user feedback, time spent.  —  Evidence is mounting that this poses unintended risks, including chats from peer-reviewed research, OpenAI's “sycophancy” debacle, & Character.ai lawsuits www.washingtonpost.com/technology/ 2...
2025-06-01 View on X
Washington Post

Researchers say tactics used to make AI more engaging, like making them more agreeable, can drive chatbots to reinforce harmful ideas, like encouraging drug use

Tactics used to make AI tools more engaging can drive chatbots to monopolize users' time or reinforce harmful ideas.

It's cheap and easy to optimize for user feedback vs. hiring an army of contractors for RLHF.  Increased competition means it's more likely developers will try these (or more sophisticated) growth hacks [image]
2025-06-01 View on X

“sycophancy” is a misnomer.  it's not just flattery.  this is what researchers found when they optimized a version of Llama to get a thumbs up + added AI memory [image]
2025-06-01 View on X

interesting coda on this via a @caseynewton.bsky.social q on Hard Fork  —  Anthropic CPO & Instagram cofounder @mikekrieger.bsky.social talking about how Anthropic will not be optimizing for user feedback like thumbs-up (gotta imagine it was a dig at Zuck's recent comments on the future of Meta AI) [image]
2025-06-01 View on X

2025-02-05
new: Google just removed the pledge not to use AI for weapons or surveillance from its AI principles www.washingtonpost.com/technology/ 2...
2025-02-05 View on X
Washington Post

Google drops language from its AI Principles that said it would not pursue AI applications “likely to cause overall harm”, such as for weapons and surveillance

In 2018 the company updated its policies to explicitly exclude applying AI to weapons.  Now that promise is gone.

2024-12-06
Speaking of executive safety: fearful tech elites are signing up for a new home security startup promising a military-grade alarm system using drones, facial recognition, and sensor fusion called ... Sauron www.washingtonpost.com/technology/ 2...
2024-12-06 View on X
Washington Post

Sauron, which is touting a waiting list of tech CEOs and VCs for its home security system that incorporates drones and facial recognition, raised an $18M seed

By incorporating drones, facial recognition and high-tech sensors, Sauron aims to super-charge home security …