Chronicles

The story behind the story


On top of nonconsensual porn images, X users seem to be using Grok to alter images to depict real women being sexually abused, humiliated, hurt, and even killed, and critics say it's harassment by AI

More:
Kevin Raposo / KnowTechie: Grok chatbot's safety fails, releases explicit images of minors
Selin Hacialioglu / Türkiye Today: Pressure grows in Türkiye as Grok faces backlash over nonconsensual image manipulation

Bluesky:
Michael / @therightarticle: Abhorrent. Do not use X. Do not use Grok. Simple.
Snigdha / @snig: Despite X's AI Grok admitting and apologising for creating sexualised images of children, xAI, X/Twitter, and Elon Musk have said nothing. Do they hope if they ignore it, everyone will forget? — arstechnica.com/tech-policy/...
Casey Newton / @caseynewton: Elon should go back to the MechaHitler version of Grok. It was safer!
Steve Peers / @stevepeers: It gets *worse* than the nude deepfakes. — The UK and EU have said nothing.
Charlotte Nichols MP / @charlotte2153: There is, to my mind, no justification for the continued use by the UK Government of X as a platform for official comms. There hasn't been for some time, in fact, but if the latest developments around AI-generated image abuse and CSAM don't change the policy I really don't know what will.
Maggie Harrison Dupré / @mharrisondupre: A lot of this was directed at online models and sex workers, which is deeply troubling as sex workers already face a disproportionately high risk of violence and homicide. — Several users expressly asked Grok to make women "look scared" in the sexualized images.
Kincso Biro / @kincsobiro: "... people started asking Elon Musk's chatbot Grok to unclothe images of real people. This resulted in a wave of nonconsensual pornographic images flooding the largely unmoderated social media site, with some of the sexualized images even depicting minors." — futurism.com/future-socie...

Mastodon:
@Khrys@mamot.fr: xAI silent after Grok sexualized images of kids; dril mocks Grok's "apology" — https://arstechnica.com/...
"For days, xAI has remained silent after its chatbot Grok admitted to generating sexualized AI images of minors, which could be categorized as violative child sexual abuse materials (CSAM) in the US. …"

Forums:
r/EnoughMuskSpam: Elon Musk's Pornography Machine

Futurism