

Sources: OpenAI's GPT-5, code-named Orion, is behind schedule and faces technical hurdles, including high computing costs and limited high-quality training data

OpenAI has run into problem after problem on its new artificial-intelligence project, code-named Orion.

Bluesky:

Ally Tibbitt (@allytibbitt.me): “There may not be enough data in the world to make it smart enough.”

Tomas Hirst (@tomashirstecon): Going to sound glib, but the barrier to AI progress being “human beings aren't capable yet of producing enough high quality inputs to train the model” is sort of the tell here, no?

@columnist: Describing this as absolute scenes doesn't come vaguely close. The fundamental problem seems to be that there is simply not enough data in existence to sufficiently train the LLMs...

Madame Hardy (@madamehardy): “OpenAI also started developing what is called synthetic data, or data created by AI, to help train Orion.” I can see no way in which this could end badly. Story illustrates why Altman is so desperate to steal copyrighted data.

Dawn Nafus (@dawnnafus): Just thinking about the emissions of all those failed training runs.

@seed-corn-thoughts: Well well well, looks like we're probably at the top of the exponential growth curve for AI models, which in turn means that they'll be, at best, what I said they'd be: labor-saving tools, but nothing that can legitimately replace creatives.

@the-pizza-dude: I run a business (when I can be bothered).. number of times I call my accountant? Every couple of weeks. Number of times I've used fucking copilot? Still trying to delete/disable it.

Audrey Truschke (@audreytruschke): Almost like AI, in addition to being a massive copyright violation, misleading, and vague, is making us dumber. Leading to “limited high-quality training data.” I'm telling y'all: the best thing we can all do in the humanities, and as human beings, is to stay far away from this dumpster fire. …

Jeffrey P. Bigham (@jeffreybigham.com): the “easy to get data” has been gotten, this is why things like world models and such are actually interesting; if you've run out of nicely prepackaged human data, you gotta start generating new data by interacting with stuff in sufficiently complex simulations or the real world

Gramsci ZA (@gramsci): So, it appears they've nailed the artificial part, now a few more billion to sort the intelligence bit.

Stephen Rowley (@sterow): The LLMs are plateauing, and are going to be stuck at “incredibly expensive and largely useless.” I cannot wait for this bubble to burst.

@doormat9: love waiting around to see if this is all going to go the way of 3DTV, or end civilization 🙃

@vietdongsoldier: Been a whole series of these stories now, and it looks like the investor class is finally getting nervous about the chances of seeing meaningful returns on their AI investments

András Forgács W (@andraswf): Well, maybe it's time to get out of the AI thingy before you lose all your investment

Nicolai B. Hansen (@nbhansen.dk): who could have thought that BIGGER COMPUTER AND MORE DATA wasnt the solution, nobody ever said tha.. [taps Searle, Dreyfus etc sign]

Benjamin Riley (@benjaminjriley): There's a lot to chew on in this story but it's inescapably amusing to me that the desire for “more data” to feed into LLMs is fueling efforts to hire humans to produce knowledge. The thousand monkey-thousand typewriter theory of software development, only we are the monkeys.

Tom Hearden (@followtheh): Meanwhile somehow the “valuation” magically rose from $87b to $157b over a 9-month span this year.

@crankgrimes: yet another exponential curve turns out to be sigmoid

@bazlyons: They should ask AI how to fix it if it's so fucking smart!

Bennett Tomlin (@bft.wtf): I think one $10b fundraising round will fix this

X:

Conor Sen (@conorsen): Interesting timing for this to publish the same day as the release of o3: https://www.wsj.com/...

Amir Efrati (@amir): If you're wondering about OpenAI's Orion model, we wrote 5 and a half weeks ago that it's coming early next year. If I had to guess, it would be the base for o4. Remember: even marginal improvements in pretrained models end up helping reasoning models a lot...

Forums:

r/singularity: The Next Great Leap in AI Is Behind Schedule and Crazy Expensive | OpenAI has run into problem after problem on its new artificial-intelligence project, code-named Orion [WSJ Gift Link]

Wall Street Journal