VOICE ARCHIVE

Abeba Birhane

@abebab
28 posts
2024-06-22
this is too much for a company that has been built on the backbone of stolen work, largely from the creative community
2024-06-22 View on X
Wall Street Journal

Research: the number of freelance jobs on platforms like Upwork, in areas where generative AI excels, has dropped by as much as 21% since ChatGPT's debut

There's now data to back up what freelancers have been saying for months  —  Jennifer Kelly, a freelance copywriter in the picturesque …

2024-04-13
so they lied
2024-04-13 View on X
Bloomberg

Adobe used images created by tools like Midjourney and uploaded to its stock marketplace by users to train Firefly; Adobe says ~5% of images were AI-generated

2024-04-12
AI researchers/big corp make unsubstantiated claims and are celebrated for “groundbreaking” advancement of the field. Scholars (often under-resourced) meticulously examine the claims and find they're inflated/misleading, but these corrections barely get traction. Rinse & repeat
2024-04-12 View on X
404 Media

Researchers say they haven't found “strikingly novel compounds” after analyzing a subset of the 2.2M new crystals DeepMind claimed its AI tool GNoME discovered

In November, Google's AI outfit DeepMind published a press release titled “Millions of new materials discovered with deep learning.”

2023-12-21
the LAION dataset gave us a glimpse into corp datasets locked in corp labs like those in OpenAI, Meta, & Google. you can be sure, those closed datasets — rarely examined by independent auditors — are much worse than the open LAION dataset
2023-12-21 View on X
Bloomberg

Stanford researchers: LAION-5B, a dataset of 5B+ images used by Stability AI and others, contains 1,008+ instances of CSAM, possibly helping AI to generate CSAM

most prominently, Stable Diffusion 1.5—to see to what degree CSAM itself might be present in the training data. https://purl.stanford.edu/...

2023-12-20
not surprising, tbh. we found numerous instances of disturbing and illegal content in the LAION dataset that didn't make it into our papers. this is a win for individuals, especially children in the dataset subject to sexual abuse, but an overall loss for dataset curation/audits/accountability
2023-12-20 View on X
Bloomberg

Stanford researchers: LAION-5B, a dataset of 5B images used by Stability AI and others, contains 1,008 instances of CSAM, possibly helping to create AI CSAM

The dataset has been used to build popular AI image generators, including Stable Diffusion.  —  A massive public dataset used …

2023-11-23
“This unwavering loyalty [to Sam & Greg] stems from a combination of fear of retribution and the allure of potential financial gains through OpenAI's profit participation units.” anonymous post from former OpenAI employees [image]
2023-11-23 View on X
Wall Street Journal

Sam Altman's return to OpenAI, a company partly formed on effective altruism principles, revealed hard limits and caps a bruising year for the divisive social movement

Sam Altman's firing showed the influence of effective altruism and its view that AI development must slow down; his return marked its limits

“This unwavering loyalty [to Sam & Greg] stems from a combination of fear of retribution and the allure of potential financial gains through OpenAI's profit participation units.” anonymous post from former OpenAI employees [image]
2023-11-23 View on X
The Information

Source: a breakthrough spearheaded by OpenAI chief scientist Ilya Sutskever enabled a model that could solve basic math problems, stoking excitement and concern

One day before he was fired by OpenAI's board last week, Sam Altman alluded to a recent technical advance the company …

2023-08-14
🔥🔥🔥 https://www.rollingstone.com/ ...
2023-08-14 View on X
Rolling Stone

A look at the years of warnings about AI from researchers, including several women of color, who say we need to take the problems and risks seriously today

Today the risks of artificial intelligence are clear — but the warning signs have been there all along — TIMNIT GEBRU DIDN'T set out to work in AI.

2023-07-06
so many grandiose sounding yet vacuous words in one blog post https://twitter.com/...
2023-07-06 View on X
TechCrunch

OpenAI forms Superalignment, a team for developing ways to steer and control “superintelligent” AI systems, with access to 20% of its compute secured to date

OpenAI is forming a new team led by Ilya Sutskever, its chief scientist and one of the company's co-founders …

“'AI Safety' might be attracting a lot of money and capturing the attention of policymakers and billionaires alike, but it brings nothing of value.” @emilymbender nails it! https://medium.com/...
2023-07-06 View on X
Emily M. Bender

Framing AI debates as a schism between people worried about AI going rogue and those illuminating actual harms is ahistorical and obscures important research

In two recent conversations with very thoughtful journalists, I was asked about the apparent ‘schism’ between those making a lot …

2023-06-20
people like @sama ask to be regulated in public. behind closed doors, they lobby for the opposite. watch what these folks do, not what they say https://twitter.com/...
2023-06-20 View on X
TIME

Documents show OpenAI lobbied for parts of the EU's AI Act to be watered down, including successfully avoiding its general purpose AI being deemed “high risk”

The CEO of OpenAI, Sam Altman, has spent the last month touring world capitals where, at talks to sold-out crowds …

2022-11-19
@ylecun I can also guarantee you the distress to your “small team of people” is insignificant compared to marginalised communities that end up paying the highest price for failures/inaccuracies from these models 3/
2022-11-19 View on X
MIT Technology Review

Meta AI and Papers with Code pull Galactica three days after launch, amid criticism that the large language model for generating scientific text asserts falsehoods

and its hubris—show once again that Big Tech has a blind spot about the severe limitations of large language models. https://www.technologyreview.com/ ...

@ylecun with great power (and you surely portrayed the model as extraordinary) comes great responsibility. it is up to you to do the work and make sure your model stands up to scrutiny. it didn't and now you're walking back your claims
2022-11-19 View on X

@ylecun i find the way you continually try to displace responsibility away from meta (a powerful, wealthy and irresponsible corp) and onto someone else, kinda unhinged... while at the same time using our time and input towards “progress” for your model, which you will benefit from
2022-11-19 View on X

@ylecun Yann, you're very close to getting it. Let's try again. Galactica was bad because it was spitting out incorrect and dangerous output. Meta, responsible for Galactica, holds so much power, wealth and influence, yet it avoids responsibility for the damage it continues to cause. 1/
2022-11-19 View on X

@ylecun asymmetry: building models & assembling datasets is much less taxing compared to auditing, assessing & testing. I can guarantee you, your “small team of people” are not as distraught as the people (much less resourced & privileged) testing your model 2/
2022-11-19 View on X

2022-11-01
this is what a clear conflict of interest looks like https://www.protocol.com/... Eric Schmidt controlling and influencing AI legislation is like the tobacco industry deciding the direction and agenda of lung cancer research https://twitter.com/...
2022-11-01 View on X
Protocol

A look at Eric Schmidt's push to profit from an AI cold war between the US and China; CB Insights: Schmidt took part in investing $2B+ in AI-focused companies

both to democracy and to his own interests. https://www.protocol.com/... Kate Kaye / @katekayereports : Schmidt's story is an exploration of how a private sector tech mogul has pla...

2022-06-13
we have arrived at peak AI hype accompanied by minimal critical thinking
2022-06-13 View on X
Washington Post

A look at advanced large language models, as Google places an engineer on paid leave after he became convinced that its LaMDA chatbot generator was sentient

AI ethicists warned Google not to impersonate humans.  Now one of Google's own thinks there's a ghost in the machine.

2021-09-17
Insert “apologise” in there and that's the recursive loop https://twitter.com/...
2021-09-17 View on X
The Verge

Senator Markey and Reps. Castor and Trahan wrote to Facebook calling on it to abandon its plans to launch an Instagram app for kids in light of WSJ's report