VOICE ARCHIVE

Melanie Mitchell

@melmitchell1
14 posts
2024-10-19
Yesterday, a journalist asked me if I thought we were on a “path to AGI”. I replied that I thought AGI would be “redefined into existence” by big companies. I didn't realize that would happen so soon. https://www.nytimes.com/... [image]
2024-10-19 View on X
TechCrunch

Report: OpenAI sees a clause in its Microsoft contract, which cuts off Microsoft's access to OpenAI tech if OpenAI develops AGI, as a path to a better contract

The New York Times on Thursday published a look at the “fraying” relationship between OpenAI and its investor, partner, and …

2024-05-09
Apple: “let's literally crush all that is wonderful about human artistic creation.”
2024-05-09 View on X
AppleInsider

Apple's iPad Pro ad “Crush”, which shows paints, toys, guitars, sculptures, and more being crushed to reveal the thin device, draws criticism on social media

2023-12-10
Very interesting essay from @erikphoel throwing a bit of cold water on the economics of generative AI. https://www.theintrinsicperspective.com / ...
2023-12-10 View on X
The Intrinsic Perspective

A bear case for the AI industry, as the sectors LLMs seem capable of disrupting so far, like writing, digital art, and programming help, are not very lucrative

Gemini and the supply paradox of AI  —  Another day, another huge new AI model revealed.  This time it's Google's Gemini.

2023-10-12
@GaryMarcus ... Despite the title, I honestly did not see any real argument in the article that “the most important parts of [AGI] have already been achieved by the current generation of advanced AI large language models”. Moreover, I did not see any definition in there of “AGI”, did you?
2023-10-12 View on X
Noema

Current advanced LLMs from OpenAI and others have many flaws but they will, decades from now, be recognized as the first true examples of AGI, similar to ENIAC

Today's most advanced AI models have many flaws, but decades from now, they will be recognized as the first true examples of artificial general intelligence.

2023-05-31
Agreed! The message from Altman et al. seems to be “AI is so dangerous, powerful, and mysterious that only people at the top AI companies know enough to regulate it.” Regulatory capture is the point. https://twitter.com/...
2023-05-31 View on X
New York Times

OpenAI and DeepMind executives, Geoffrey Hinton, and 350+ others sign a statement saying “mitigating the risk of extinction from AI should be a global priority”

2023-03-22
“GPT-4 and professional benchmarks: The wrong answer to the wrong question”. 💯 https://aisnakeoil.substack.com/ ...
2023-03-22 View on X
AI Snake Oil

OpenAI may have tested GPT-4 on its training data, violating the cardinal rule of ML, and GPT-4's exam performance says little about its real-world usefulness

OpenAI may have tested on the training data.  Besides, human benchmarks are meaningless for bots.

2023-01-24
LLMs cannot “plagiarize”, since that implies intent. But at least some of the time they are indeed stochastic parrots, which results in generating text from their training data (or rephrasings of it). We'll see a lot more of this, I'm sure. https://futurism.com/...
2023-01-24 View on X
Futurism

An investigation finds extensive evidence that AI used by CNET appears to have plagiarized the work of competitors and human writers at Bankrate and even CNET

CNET's AI-written articles aren't just riddled with errors.  They also appear to be substantially plagiarized.

2022-07-11
Dear fellow Monetizable Daily Active Users (yes, that's you!), This is a very fun and informative article about the Musk / Twitter situation. Recommended! https://www.bloomberg.com/...
2022-07-11 View on X
Bloomberg

Elon Musk's offer to buy Twitter likely was a joke, since he previously pretended he would take Tesla private, but he may not be able to get out of the deal

2022-06-13
Such a strange article. It's been known for *forever* that humans are predisposed to anthropomorphize even with only the shallowest of signals (cf. ELIZA). Google engineers are human too, and not immune. https://twitter.com/...
2022-06-13 View on X
Washington Post

A look at advanced large language models, as Google places an engineer on paid leave after he became convinced that its LaMDA chatbot generator was sentient

AI ethicists warned Google not to impersonate humans.  Now one of Google's own thinks there's a ghost in the machine.

2021-03-14
Are car companies allowed to call any level of driver-assist technology “Full Self-Driving”? https://www.cnbc.com/... https://twitter.com/...
2021-03-14 View on X
CNBC

NTSB sent a letter to the NHTSA asking for stricter standards on automated vehicle tech, citing Tesla's Level 2 Autopilot system tests as needing more oversight

2020-04-28
Good reality check. Accuracy on a benchmark dataset doesn't necessarily reflect real-world complexities. https://twitter.com/...
2020-04-28 View on X
TechCrunch

Google's AI screening tool for diabetic retinopathy, trialed in Thailand, proved impractical in real-life testing, despite high theoretical accuracy

AI is frequently cited as a miracle worker in medicine, especially in screening processes, where machine learning models boast expert-level skill at detecting problems.